Math 344 Lecture #19
3.5 Normed Linear Spaces

Definition 3.5.1. A seminorm on a vector space V over F is a map \|\cdot\| : V \to \mathbb{R} that for all x, y \in V and for all \alpha \in F satisfies

(i) \|x\| \geq 0 (positivity),
(ii) \|\alpha x\| = |\alpha| \|x\| (scale preservation),
(iii) \|x + y\| \leq \|x\| + \|y\| (triangle inequality).

A norm on V is a seminorm that additionally satisfies \|x\| = 0 if and only if x = 0. A vector space V with a norm \|\cdot\| is called a normed linear space (NLS) and is denoted by (V, \|\cdot\|).

Theorem 3.5.2. Every inner product space (V, \langle\cdot,\cdot\rangle) is a normed linear space with the norm \|x\| = \sqrt{\langle x, x\rangle}. See the Appendix for a proof.

3.5.1 Examples

Examples 3.5.4 and 3.5.5. Let x = [x_1\ x_2\ \cdots\ x_n]^T \in F^n. For p \in [1, \infty) the p-norm on F^n is

\|x\|_p = \Big( \sum_{j=1}^n |x_j|^p \Big)^{1/p}.

[We will show that the triangle inequality holds for each p-norm in Chapter 3, Section 6.] The 1-norm is

\|x\|_1 = |x_1| + |x_2| + \cdots + |x_n|.

The 2-norm,

\|x\|_2 = \sqrt{|x_1|^2 + |x_2|^2 + \cdots + |x_n|^2},

is the norm obtained from the standard inner product on F^n. The \infty-norm (i.e., p = \infty),

\|x\|_\infty = \max\{|x_1|, |x_2|, \ldots, |x_n|\},

is the limit of \|x\|_p as p \to \infty.

Example 3.5.6. The Frobenius norm on M_{m\times n}(F) is given by

\|A\|_F = \sqrt{\mathrm{tr}(A^H A)}.

This norm is invariant under left multiplication by m \times m matrices Q with orthonormal columns because

(QA)^H (QA) = A^H Q^H Q A = A^H I A = A^H A.
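As a quick numerical sanity check, the vector norms and the Frobenius norm above can be computed directly from their definitions. This is a minimal Python sketch using only the standard library; the vector and matrix values are made up for illustration.

```python
import math

def p_norm(x, p):
    """p-norm of a vector x for p in [1, infinity)."""
    return sum(abs(xj) ** p for xj in x) ** (1.0 / p)

def inf_norm(x):
    """Infinity-norm: largest absolute entry."""
    return max(abs(xj) for xj in x)

def frobenius(A):
    """Frobenius norm: sqrt of the sum of squared absolute entries,
    which equals sqrt(tr(A^H A))."""
    return math.sqrt(sum(abs(a) ** 2 for row in A for a in row))

x = [3, -4]
print(p_norm(x, 1))   # 7.0
print(p_norm(x, 2))   # 5.0
print(inf_norm(x))    # 4
# As p grows, the p-norm approaches the infinity-norm:
print(p_norm(x, 50))  # very close to 4

A = [[1, 2], [2, 1]]
print(frobenius(A))   # sqrt(10), since tr(A^T A) = 5 + 5 = 10
```

Note how the p = 50 value illustrates the limit statement at the end of Examples 3.5.4–3.5.5.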

Example 3.5.7. For p \in [1, \infty), the p-norm on L^p([a, b], F) is

\|f\|_p = \Big( \int_a^b |f(x)|^p \, dx \Big)^{1/p}.

The \infty-norm on L^\infty([a, b], F) is

\|f\|_\infty = \sup_{x \in [a,b]} |f(x)|.

Definition 3.5.8. For a normed linear space Y with norm \|\cdot\|_Y and a nonempty set X, define the L^\infty-norm of f : X \to Y by

\|f\|_{L^\infty} = \sup_{x \in X} \|f(x)\|_Y.

Let L^\infty(X; Y) be the collection of all f : X \to Y for which \|f\|_{L^\infty} < \infty.

Proposition 3.5.9. For a normed linear space Y and any nonempty set X, the pair (L^\infty(X; Y), \|\cdot\|_{L^\infty}) is a normed linear space. The proof of this is HW (Exercise 3.25).

3.5.2 Induced Norms on Linear Transformations

Definition 3.5.10. Let (V, \|\cdot\|_V) and (W, \|\cdot\|_W) be two normed linear spaces. The norm of T \in L(V, W) induced by the norms on V and W is defined to be the quantity

\|T\|_{V,W} = \sup_{x \neq 0} \frac{\|Tx\|_W}{\|x\|_V}.

A map T \in L(V, W) is called bounded if \|T\|_{V,W} < \infty. Let B(V, W) denote the collection of all bounded T \in L(V, W). If W = V, we write B(V) instead of B(V, V) and write (by abuse of notation) \|T\|_V instead of \|T\|_{V,V}. The set B(V) is the collection of all bounded T \in L(V), and \|\cdot\|_V is the operator norm.

Equivalent Definitions of \|T\|_{V,W}. A simple proof (that the book does not give) of

\sup_{x \neq 0} \frac{\|Tx\|_W}{\|x\|_V} = \sup_{\|x\|_V = 1} \|Tx\|_W

is the following: for nonzero y \in V set \alpha = \|y\|_V and x = \alpha^{-1} y, so that \|x\|_V = 1 and y = \alpha x; then

\frac{\|Ty\|_W}{\|y\|_V} = \frac{\|T(\alpha x)\|_W}{\|\alpha x\|_V} = \frac{|\alpha| \|Tx\|_W}{|\alpha|} = \|Tx\|_W,

so the supremum over y \neq 0 is the same as the supremum over x with \|x\|_V = 1. It is also true that \sup_{\|x\|_V = 1} \|Tx\|_W = \sup_{\|x\|_V \leq 1} \|Tx\|_W.

Theorem 3.5.11. The collection B(V, W) is a subspace of L(V, W), and the pair (B(V, W), \|\cdot\|_{V,W}) is a normed linear space.
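The scaling argument above can be checked numerically. In this Python sketch (the matrix T and vector y are made up; T acts on R^2 with the 2-norm), the ratio \|Ty\|/\|y\| for an arbitrary nonzero y agrees with \|Tx\| for the rescaled unit vector x = y/\|y\|:

```python
import math

def norm2(v):
    """Euclidean 2-norm of a vector."""
    return math.sqrt(sum(c * c for c in v))

def apply(T, v):
    """Matrix-vector product T v."""
    return [sum(T[i][j] * v[j] for j in range(len(v))) for i in range(len(T))]

T = [[2.0, 1.0], [0.0, 3.0]]
y = [4.0, -2.0]                 # any nonzero vector
alpha = norm2(y)
x = [c / alpha for c in y]      # unit vector in the direction of y

ratio_y = norm2(apply(T, y)) / norm2(y)   # ||Ty|| / ||y||
value_x = norm2(apply(T, x))              # ||Tx|| with ||x|| = 1
print(ratio_y, value_x)                   # the two agree
```

This is exactly why the supremum over all nonzero y equals the supremum over the unit sphere.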

See the Appendix for a proof.

Remark 3.5.12. For each T \in B(V, W), the norm \|\cdot\|_{V,W} satisfies

(*) \|Tx\|_W \leq \|T\|_{V,W} \|x\|_V for all x \in V,

because for nonzero x \in V, we have

\|Tx\|_W = \Big\| T\Big( \frac{x}{\|x\|_V} \Big) \Big\|_W \|x\|_V \leq \|T\|_{V,W} \|x\|_V.

In fact, the quantity \|T\|_{V,W} is the smallest constant C for which \|Tx\|_W \leq C \|x\|_V holds for all x \in V.

Remark 3.5.13. When V and W are finite dimensional normed linear spaces, B(V, W) is precisely L(V, W). This is generally not true when V and W are infinite dimensional.

Theorem 3.5.14. Let (V, \|\cdot\|_V), (W, \|\cdot\|_W), and (X, \|\cdot\|_X) be normed linear spaces. If T \in B(V, W) and S \in B(W, X), then ST \in B(V, X) and

\|ST\|_{V,X} \leq \|S\|_{W,X} \|T\|_{V,W}.

In particular, the operator norm \|\cdot\|_V on B(V) satisfies the submultiplicative property

\|ST\|_V \leq \|S\|_V \|T\|_V for all S, T \in B(V).

Proof. For v \in V we have

\|STv\|_X = \|S(Tv)\|_X \leq \|S\|_{W,X} \|Tv\|_W \leq \|S\|_{W,X} \|T\|_{V,W} \|v\|_V,

giving the result.

Definition 3.5.15. A norm \|\cdot\| on M_n(F) is called a matrix norm if \|AB\| \leq \|A\| \|B\| for all A, B \in M_n(F) (i.e., it satisfies the submultiplicative property).

Example 3.5.17. For 1 \leq p \leq \infty, the p-norms on F^m and F^n induce a norm \|\cdot\|_p on M_{m\times n}(F) defined by

\|A\|_p = \sup_{x \neq 0} \frac{\|Ax\|_p}{\|x\|_p}.

When m = n, the norm \|\cdot\|_p is the induced operator norm on M_n(F). Theorem 3.5.14 shows that this induced operator norm \|\cdot\|_p is submultiplicative, and so \|\cdot\|_p is a matrix norm.

Unexample 3.5.18. Although not an induced norm, the Frobenius norm \|\cdot\|_F on M_n(F) is a matrix norm, as is to be shown in HW (Exercise 4.28).

3.5.3 Explicit Formulas for \|A\|_1 and \|A\|_\infty on M_{m\times n}(F)
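The submultiplicative property of the Frobenius norm (the claim of Unexample 3.5.18, proved in Exercise 4.28) can be spot-checked numerically. A Python sketch with made-up 2x2 real matrices:

```python
import math

def matmul(A, B):
    """Product of matrices A (n x m) and B (m x p)."""
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def frobenius(A):
    """Frobenius norm: sqrt of the sum of squared entries."""
    return math.sqrt(sum(a * a for row in A for a in row))

A = [[1.0, 2.0], [3.0, 4.0]]
B = [[0.0, -1.0], [5.0, 2.0]]
lhs = frobenius(matmul(A, B))        # ||AB||_F = sqrt(534)
rhs = frobenius(A) * frobenius(B)    # ||A||_F ||B||_F = sqrt(30)*sqrt(30) = 30
print(lhs <= rhs)                    # the submultiplicative bound holds
```

A single example of course proves nothing, but it is a useful check against sign or indexing mistakes when implementing these norms.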

Theorem 3.5.20. For A = [a_{ij}] \in M_{m\times n}(F) we have

\|A\|_1 = \max_{1 \leq j \leq n} \sum_{i=1}^m |a_{ij}|, \qquad \|A\|_\infty = \max_{1 \leq i \leq m} \sum_{j=1}^n |a_{ij}|.

See the Appendix for a proof.
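In code these formulas are simply a maximum absolute column sum and a maximum absolute row sum. A short Python sketch (the matrix is made up for illustration):

```python
def matrix_one_norm(A):
    """||A||_1: maximum absolute column sum."""
    m, n = len(A), len(A[0])
    return max(sum(abs(A[i][j]) for i in range(m)) for j in range(n))

def matrix_inf_norm(A):
    """||A||_inf: maximum absolute row sum."""
    return max(sum(abs(a) for a in row) for row in A)

A = [[1, -2], [3, 4]]
print(matrix_one_norm(A))  # column sums are 4 and 6, so ||A||_1 = 6
print(matrix_inf_norm(A))  # row sums are 3 and 7, so ||A||_inf = 7
```

A convenient mnemonic: the 1-norm sums down columns (the index 1 is "tall"), while the \infty-norm sums across rows.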

Appendix

Proof of Theorem 3.5.2. We have already shown in Remark 3.1.12 that \|x\| = \sqrt{\langle x, x\rangle} satisfies properties (i) and (ii) and that \|x\| = 0 if and only if x = 0. To show property (iii) holds, we compute

\|x + y\|^2 = \langle x + y, x + y\rangle = \langle x, x\rangle + \langle x, y\rangle + \langle y, x\rangle + \langle y, y\rangle
\leq \|x\|^2 + 2|\langle x, y\rangle| + \|y\|^2
\leq \|x\|^2 + 2\|x\|\|y\| + \|y\|^2 = (\|x\| + \|y\|)^2,

where for the first inequality, \langle x, y\rangle + \langle y, x\rangle = \langle x, y\rangle + \overline{\langle x, y\rangle} = (a + ib) + (a - ib) = 2a is a real number bounded above by 2|\langle x, y\rangle| = 2\sqrt{a^2 + b^2}, and for the second inequality we used the Cauchy-Schwarz inequality.

Proof of Theorem 3.5.11. First we show that the induced norm \|\cdot\|_{V,W} is indeed a norm on B(V, W).

(i) Positivity: \|T\|_{V,W} \geq 0, and \|T\|_{V,W} = 0 if and only if T = 0. That \|T\|_{V,W} \geq 0 follows directly from the definition of the induced norm. If T = 0 (the zero transformation, Tx = 0 for all x \in V), then \|Tx\|_W = 0 for all x \in V, so that \|T\|_{V,W} = 0. We use the contrapositive to show that \|T\|_{V,W} = 0 implies T = 0. Suppose there is a nonzero y \in V such that Ty \neq 0. Then

\sup_{x \neq 0} \frac{\|Tx\|_W}{\|x\|_V} \geq \frac{\|Ty\|_W}{\|y\|_V} > 0,

so that \|T\|_{V,W} > 0.

(ii) Scale Preservation: For T \in B(V, W) and \alpha \in F we have

\|\alpha T\|_{V,W} = \sup_{\|x\|_V = 1} \|\alpha T(x)\|_W = \sup_{\|x\|_V = 1} |\alpha| \|Tx\|_W = |\alpha| \sup_{\|x\|_V = 1} \|Tx\|_W = |\alpha| \|T\|_{V,W}.

(iii) Triangle Inequality: For S, T \in B(V, W), we have

\|S + T\|_{V,W} = \sup_{\|x\|_V = 1} \|(S + T)x\|_W
= \sup_{\|x\|_V = 1} \|Sx + Tx\|_W
\leq \sup_{\|x\|_V = 1} \big( \|Sx\|_W + \|Tx\|_W \big)
\leq \sup_{\|x\|_V = 1} \|Sx\|_W + \sup_{\|x\|_V = 1} \|Tx\|_W
= \|S\|_{V,W} + \|T\|_{V,W},

where for the first inequality we have used the triangle inequality for \|\cdot\|_W, and for the second inequality we have used the following property of the supremum.
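The two inequalities used in the proof of Theorem 3.5.2 can be spot-checked for the standard inner product on C^2. A Python sketch with made-up complex vectors (the convention of conjugating the second argument is an assumption; either convention gives the same norms):

```python
import math

def inner(x, y):
    """Standard inner product on C^n, conjugating the second argument."""
    return sum(a * b.conjugate() for a, b in zip(x, y))

def norm(x):
    """Norm induced by the inner product; <x, x> is real and nonnegative."""
    return math.sqrt(inner(x, x).real)

x = [1 + 2j, 3 - 1j]
y = [2 - 1j, 1j]

# Cauchy-Schwarz: |<x, y>| <= ||x|| ||y||
print(abs(inner(x, y)) <= norm(x) * norm(y))
# Triangle inequality: ||x + y|| <= ||x|| + ||y||
s = [a + b for a, b in zip(x, y)]
print(norm(s) <= norm(x) + norm(y))
```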

For \alpha = \sup_{\|x\|_V = 1} (\|Sx\|_W + \|Tx\|_W), \beta = \sup_{\|x\|_V = 1} \|Sx\|_W, and \gamma = \sup_{\|x\|_V = 1} \|Tx\|_W, there is for each \epsilon > 0 a y \in V satisfying \|y\|_V = 1 such that

\alpha - \epsilon < \|Sy\|_W + \|Ty\|_W \leq \beta + \gamma.

This holds for every \epsilon > 0, which implies that \alpha \leq \beta + \gamma.

We have shown that \|\cdot\|_{V,W} is a norm on B(V, W). Scale preservation shows that B(V, W) is closed under scalar multiplication, and the triangle inequality shows that B(V, W) is closed under addition. Thus the subset B(V, W) of L(V, W) is a subspace of L(V, W), and hence B(V, W) is a normed linear space with the norm \|\cdot\|_{V,W}.

Proof of Theorem 3.5.20. The proof of the formula for \|A\|_1 is HW (Exercise 3.27). Here is a proof of the formula for \|A\|_\infty. Writing x = [x_1\ x_2\ \cdots\ x_n]^T \in F^n, the i-th entry of Ax \in F^m is \sum_{j=1}^n a_{ij} x_j. We thus obtain

\|Ax\|_\infty = \max_{1 \leq i \leq m} \Big| \sum_{j=1}^n a_{ij} x_j \Big| \leq \max_{1 \leq i \leq m} \sum_{j=1}^n |a_{ij}| |x_j| \leq \Big( \max_{1 \leq i \leq m} \sum_{j=1}^n |a_{ij}| \Big) \|x\|_\infty.

When x \neq 0, we can divide both sides by the positive \|x\|_\infty to get

\frac{\|Ax\|_\infty}{\|x\|_\infty} \leq \max_{1 \leq i \leq m} \sum_{j=1}^n |a_{ij}|.

We now show the opposite inequality holds too. Let k be a row index satisfying

\sum_{j=1}^n |a_{kj}| = \max_{1 \leq i \leq m} \sum_{j=1}^n |a_{ij}|.

Let x \in F^n be the vector whose j-th entry is 0 if a_{kj} = 0, and is \overline{a_{kj}} / |a_{kj}| if a_{kj} \neq 0. If every entry of x were zero, then a_{kj} = 0 for all j = 1, \ldots, n, and since \sum_{j=1}^n |a_{kj}| = \max_{1 \leq i \leq m} \sum_{j=1}^n |a_{ij}|, every entry of A would be zero, in which case the formula holds trivially. So we may assume that x \neq 0, which implies that \|x\|_\infty = 1. Since \|Ax\|_\infty is at least the absolute value of the k-th entry of Ax, we have

\|A\|_\infty \geq \frac{\|Ax\|_\infty}{\|x\|_\infty} = \|Ax\|_\infty \geq \Big| \sum_{j=1}^n a_{kj} \frac{\overline{a_{kj}}}{|a_{kj}|} \Big| = \sum_{j=1}^n \frac{|a_{kj}|^2}{|a_{kj}|} = \sum_{j=1}^n |a_{kj}| = \max_{1 \leq i \leq m} \sum_{j=1}^n |a_{ij}|,

because of the choice of k. (Here the sums defining x and Ax are understood to run over those j with a_{kj} \neq 0.)
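The extremal vector in the proof above can be built explicitly. This Python sketch (the complex matrix is made up) constructs x with entries \overline{a_{kj}}/|a_{kj}| for the row k of largest absolute sum and verifies that \|Ax\|_\infty attains that row sum:

```python
def inf_norm_vec(v):
    """Infinity-norm of a (possibly complex) vector."""
    return max(abs(c) for c in v)

def row_sums(A):
    """Absolute row sums of A."""
    return [sum(abs(a) for a in row) for row in A]

A = [[1 + 1j, -2.0], [0.5, 1j]]
sums = row_sums(A)
k = max(range(len(A)), key=lambda i: sums[i])   # row with largest abs sum

# x_j = conj(a_kj)/|a_kj| (or 0 when a_kj = 0), so ||x||_inf = 1
x = [a.conjugate() / abs(a) if a != 0 else 0 for a in A[k]]

Ax = [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]
print(inf_norm_vec(Ax), sums[k])   # equal: the max row sum is attained
```

The k-th entry of Ax telescopes to \sum_j |a_{kj}|, exactly as in the last display of the proof.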