Math 344 Lecture #19

3.5 Normed Linear Spaces

Definition 3.5.1. A seminorm on a vector space $V$ over $\mathbb{F}$ is a map $\|\cdot\| : V \to \mathbb{R}$ that for all $x, y \in V$ and for all $\alpha \in \mathbb{F}$ satisfies
(i) $\|x\| \ge 0$ (positivity),
(ii) $\|\alpha x\| = |\alpha| \, \|x\|$ (scale preservation),
(iii) $\|x + y\| \le \|x\| + \|y\|$ (triangle inequality).
A norm on $V$ is a seminorm that satisfies the additional property that $\|x\| = 0$ if and only if $x = 0$. A vector space $V$ with a norm $\|\cdot\|$ is called a normed linear space (NLS) and is denoted by $(V, \|\cdot\|)$.

Theorem 3.5.2. Every inner product space $(V, \langle \cdot, \cdot \rangle)$ is a normed linear space with the norm $\|x\| = \sqrt{\langle x, x \rangle}$.

See the Appendix for a proof.

3.5.1 Examples

Examples 3.5.4 and 3.5.5. Let $x = [x_1 \ x_2 \ \cdots \ x_n]^T \in \mathbb{F}^n$. For $p \in [1, \infty)$ the $p$-norm on $\mathbb{F}^n$ is
\[ \|x\|_p = \Big( \sum_{j=1}^n |x_j|^p \Big)^{1/p}. \]
[We will show that the triangle inequality holds for each $p$-norm in Chapter 3 Section 6.]
The $1$-norm is
\[ \|x\|_1 = |x_1| + |x_2| + \cdots + |x_n|. \]
The $2$-norm,
\[ \|x\|_2 = \sqrt{|x_1|^2 + |x_2|^2 + \cdots + |x_n|^2}, \]
is that obtained from the standard inner product on $\mathbb{F}^n$. The $\infty$-norm (i.e., $p = \infty$),
\[ \|x\|_\infty = \sup\{ |x_1|, |x_2|, \dots, |x_n| \}, \]
is the limit of $\|x\|_p$ as $p \to \infty$.

Example 3.5.6. The Frobenius norm on $M_{m \times n}(\mathbb{F})$ is given by
\[ \|A\|_F = \sqrt{\operatorname{tr}(A^H A)}. \]
This norm is invariant under left multiplication by orthonormal $m \times m$ matrices $Q$ because $(QA)^H (QA) = A^H Q^H Q A = A^H I A = A^H A$.
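The vector $p$-norms and the Frobenius norm translate directly into code. The following is a minimal Python sketch (the function names `p_norm` and `frobenius_norm` are ours, not from the text); `math.inf` selects the $\infty$-norm:

```python
import math

def p_norm(x, p):
    """p-norm of a vector x for p in [1, inf); p = math.inf gives the sup-norm."""
    if p == math.inf:
        return max(abs(t) for t in x)
    return sum(abs(t) ** p for t in x) ** (1.0 / p)

def frobenius_norm(A):
    """Frobenius norm of a matrix A (a list of rows):
    sqrt(tr(A^H A)) = sqrt(sum of |a_ij|^2 over all entries)."""
    return math.sqrt(sum(abs(a) ** 2 for row in A for a in row))

x = [3, -4]
print(p_norm(x, 1))         # 7.0
print(p_norm(x, 2))         # 5.0
print(p_norm(x, math.inf))  # 4
```

Using `abs` keeps the same code correct for real and complex entries, matching the $|x_j|$ in the definitions.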
Example 3.5.7. For $p \in [1, \infty)$, the $p$-norm on $L^p([a,b], \mathbb{F})$ is
\[ \|f\|_p = \Big( \int_a^b |f(x)|^p \, dx \Big)^{1/p}. \]
The $\infty$-norm on $L^\infty([a,b], \mathbb{F})$ is
\[ \|f\|_\infty = \sup_{x \in [a,b]} |f(x)|. \]

Definition 3.5.8. For a normed linear space $Y$ with norm $\|\cdot\|_Y$ and a nonempty set $X$, define the $L^\infty$-norm of $f : X \to Y$ by
\[ \|f\|_{L^\infty} = \sup_{x \in X} \|f(x)\|_Y. \]
Let $L^\infty(X; Y)$ be the collection of all $f : X \to Y$ for which $\|f\|_{L^\infty} < \infty$.

Proposition 3.5.9. For a normed linear space $Y$ and any nonempty set $X$, the pair $(L^\infty(X; Y), \|\cdot\|_{L^\infty})$ is a normed linear space.

The proof of this is HW (Exercise 3.25).

3.5.2 Induced Norms on Linear Transformations

Definition 3.5.10. Let $(V, \|\cdot\|_V)$ and $(W, \|\cdot\|_W)$ be two normed linear spaces. The norm of $T \in L(V, W)$ induced by the norms on $V$ and $W$ is defined to be the quantity
\[ \|T\|_{V,W} = \sup_{x \ne 0} \frac{\|Tx\|_W}{\|x\|_V}. \]
A map $T \in L(V, W)$ is called bounded if $\|T\|_{V,W} < \infty$. Let $B(V, W)$ denote the collection of all bounded $T \in L(V, W)$. If $W = V$, we write $B(V)$ instead of $B(V, V)$ and write (by abuse of notation) $\|T\|_V$ instead of $\|T\|_{V,V}$. The set $B(V)$ is the collection of all bounded $T \in L(V)$, and $\|\cdot\|_V$ is the operator norm.

Equivalent Definitions of $\|T\|_{V,W}$. A simple proof (that the book does not give) for
\[ \sup_{x \ne 0} \frac{\|Tx\|_W}{\|x\|_V} = \sup_{\|x\|_V = 1} \|Tx\|_W \]
is: for nonzero $y \in V$ set $\alpha = \|y\|_V$ and $x = \alpha^{-1} y$, so that $\|x\|_V = 1$ and $y = \alpha x$; then
\[ \frac{\|Ty\|_W}{\|y\|_V} = \frac{\|T(\alpha x)\|_W}{\|\alpha x\|_V} = \frac{|\alpha| \, \|Tx\|_W}{|\alpha|} = \|Tx\|_W, \]
so the supremum over $y \ne 0$ is the same as the supremum over $x$ with $\|x\|_V = 1$. It is also true that $\sup_{\|x\|_V = 1} \|Tx\|_W = \sup_{\|x\|_V \le 1} \|Tx\|_W$.

Theorem 3.5.11. The collection $B(V, W)$ is a subspace of $L(V, W)$, and the pair $(B(V, W), \|\cdot\|_{V,W})$ is a normed linear space.
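Because the induced norm is a supremum of ratios $\|Tx\|_W / \|x\|_V$, it can be approached from below by sampling. The following Python sketch (our own illustration, not from the text) estimates the induced $2$-norm of a matrix; random sampling yields only a lower bound on the supremum, never an overestimate:

```python
import math
import random

def matvec(A, x):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(a * t for a, t in zip(row, x)) for row in A]

def two_norm(x):
    return math.sqrt(sum(t * t for t in x))

def estimate_induced_2norm(A, trials=20000, seed=0):
    """Monte Carlo lower bound for sup_{x != 0} ||Ax||_2 / ||x||_2.
    The ratio is unchanged by scaling x, so sampling any nonzero x suffices."""
    rng = random.Random(seed)
    n = len(A[0])
    best = 0.0
    for _ in range(trials):
        x = [rng.gauss(0, 1) for _ in range(n)]
        nx = two_norm(x)
        if nx > 0:
            best = max(best, two_norm(matvec(A, x)) / nx)
    return best

A = [[2, 0], [0, 1]]  # the induced 2-norm of this matrix is exactly 2
print(estimate_induced_2norm(A))  # slightly below 2, never above it
```

This also illustrates the equivalent definition: normalizing each sample to $\|x\|_2 = 1$ would give the same estimate.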
See the Appendix for a proof.

Remark 3.5.12. For each $T \in B(V, W)$, the norm $\|\cdot\|_{V,W}$ satisfies
\[ \|Tx\|_W \le \|T\|_{V,W} \, \|x\|_V \text{ for all } x \in V, \]
because for nonzero $x \in V$, we have
\[ \|Tx\|_W = \|x\|_V \, \Big\| T\Big( \frac{x}{\|x\|_V} \Big) \Big\|_W \le \|x\|_V \, \|T\|_{V,W}, \]
and for $x = 0$ both sides are zero. In fact, the quantity $\|T\|_{V,W}$ is the smallest constant $C$ for which $\|Tx\|_W \le C \|x\|_V$ for all $x \in V$.

Remark 3.5.13. When $V$ and $W$ are finite-dimensional normed linear spaces, $B(V, W)$ is precisely $L(V, W)$. This is generally not true when $V$ and $W$ are infinite dimensional.

Theorem 3.5.14. Let $(V, \|\cdot\|_V)$, $(W, \|\cdot\|_W)$, and $(X, \|\cdot\|_X)$ be normed linear spaces. If $T \in B(V, W)$ and $S \in B(W, X)$, then $ST \in B(V, X)$ and
\[ \|ST\|_{V,X} \le \|S\|_{W,X} \, \|T\|_{V,W}. \]
In particular, the operator norm $\|\cdot\|_V$ on $B(V)$ satisfies the submultiplicative property
\[ \|ST\|_V \le \|S\|_V \, \|T\|_V \text{ for all } S, T \in B(V). \]

Proof. For $v \in V$ we have
\[ \|STv\|_X = \|S(Tv)\|_X \le \|S\|_{W,X} \, \|Tv\|_W \le \|S\|_{W,X} \, \|T\|_{V,W} \, \|v\|_V, \]
giving the result.

Definition 3.5.15. A norm $\|\cdot\|$ on $M_n(\mathbb{F})$ is called a matrix norm if $\|AB\| \le \|A\| \, \|B\|$ for all $A, B \in M_n(\mathbb{F})$ (i.e., it satisfies the submultiplicative property).

Example 3.5.17. For $1 \le p \le \infty$, the $p$-norms on $\mathbb{F}^m$ and $\mathbb{F}^n$ induce a norm $\|\cdot\|_p$ on $M_{m \times n}(\mathbb{F})$ defined by
\[ \|A\|_p = \sup_{x \ne 0} \frac{\|Ax\|_p}{\|x\|_p}. \]
When $m = n$, the norm $\|\cdot\|_p$ is the induced operator norm on $M_n(\mathbb{F})$. Theorem 3.5.14 shows that this induced operator norm $\|\cdot\|_p$ is submultiplicative, and so $\|\cdot\|_p$ is a matrix norm.

Unexample 3.5.18. Although not an induced norm, the Frobenius norm $\|\cdot\|_F$ on $M_n(\mathbb{F})$ is a matrix norm, as is to be shown in HW (Exercise 4.28).

3.5.3 Explicit Formulas for $\|A\|_1$ and $\|A\|_\infty$
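The submultiplicative property claimed for the Frobenius norm in Unexample 3.5.18 can be spot-checked numerically. The sketch below (ours; a sanity check on random matrices, not a proof) verifies $\|AB\|_F \le \|A\|_F \, \|B\|_F$ on many samples:

```python
import math
import random

def frob(A):
    """Frobenius norm: sqrt of the sum of squared absolute entries."""
    return math.sqrt(sum(abs(a) ** 2 for row in A for a in row))

def matmul(A, B):
    rows, inner_dim, cols = len(A), len(B), len(B[0])
    return [[sum(A[i][t] * B[t][j] for t in range(inner_dim)) for j in range(cols)]
            for i in range(rows)]

rng = random.Random(1)
for _ in range(1000):
    A = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(3)]
    B = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(3)]
    # submultiplicativity: ||AB||_F <= ||A||_F * ||B||_F
    assert frob(matmul(A, B)) <= frob(A) * frob(B) + 1e-12
print("||AB||_F <= ||A||_F * ||B||_F held on 1000 random samples")
```

The small tolerance `1e-12` only guards against floating-point rounding; the inequality itself is exact.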
Theorem 3.5.20. For $A = [a_{ij}] \in M_{m \times n}(\mathbb{F})$ we have
\[ \|A\|_1 = \sup_{1 \le j \le n} \sum_{i=1}^m |a_{ij}|, \qquad \|A\|_\infty = \sup_{1 \le i \le m} \sum_{j=1}^n |a_{ij}|. \]
See the Appendix for a proof.
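These formulas translate directly into code: $\|A\|_1$ is the maximum absolute column sum and $\|A\|_\infty$ is the maximum absolute row sum. A minimal Python sketch (function names are ours):

```python
def norm_1(A):
    """Induced 1-norm: maximum absolute column sum."""
    m, n = len(A), len(A[0])
    return max(sum(abs(A[i][j]) for i in range(m)) for j in range(n))

def norm_inf(A):
    """Induced infinity-norm: maximum absolute row sum."""
    return max(sum(abs(a) for a in row) for row in A)

A = [[1, -2], [3, 4]]
print(norm_1(A))    # max(1 + 3, 2 + 4) = 6
print(norm_inf(A))  # max(1 + 2, 3 + 4) = 7
```

Note that no supremum over all of $\mathbb{F}^n$ is needed: the theorem reduces each norm to finitely many sums.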
Appendix

Proof of Theorem 3.5.2. We have already shown in Remark 3.1.12 that $\|x\| = \sqrt{\langle x, x \rangle}$ satisfies properties (i) and (ii) and that $\|x\| = 0$ if and only if $x = 0$. To show property (iii) holds, we have
\begin{align*} \|x + y\|^2 &= \langle x + y, x + y \rangle \\ &= \langle x, x \rangle + \langle x, y \rangle + \langle y, x \rangle + \langle y, y \rangle \\ &\le \|x\|^2 + 2 |\langle x, y \rangle| + \|y\|^2 \\ &\le \|x\|^2 + 2 \|x\| \, \|y\| + \|y\|^2 \\ &= ( \|x\| + \|y\| )^2, \end{align*}
where for the first inequality, writing $\langle x, y \rangle = a + ib$, the quantity $\langle x, y \rangle + \langle y, x \rangle = \langle x, y \rangle + \overline{\langle x, y \rangle} = a + ib + a - ib = 2a$ is a real number bounded above by $2 |\langle x, y \rangle| = 2 \sqrt{a^2 + b^2}$, and for the second inequality, we used the Cauchy-Schwarz inequality.

Proof of Theorem 3.5.11. First we show that the induced norm $\|\cdot\|_{V,W}$ is indeed a norm on $B(V, W)$.

(i) Positivity: $\|T\|_{V,W} \ge 0$, and $\|T\|_{V,W} = 0$ if and only if $T = 0$. That $\|T\|_{V,W} \ge 0$ follows directly from the definition of the induced norm. Now if $T = 0$ (the zero transformation $Tx = 0$ for all $x \in V$), then $\|Tx\|_W = 0$ for all $x \in V$, so that $\|T\|_{V,W} = 0$. We use the contrapositive to show that $\|T\|_{V,W} = 0$ implies $T = 0$. Suppose there is a nonzero $y \in V$ such that $Ty \ne 0$. Then
\[ \sup_{x \ne 0} \frac{\|Tx\|_W}{\|x\|_V} \ge \frac{\|Ty\|_W}{\|y\|_V} > 0, \]
so that $\|T\|_{V,W} > 0$.

(ii) Scale Preservation. For $T \in B(V, W)$ and $\alpha \in \mathbb{F}$ we have
\[ \|\alpha T\|_{V,W} = \sup_{\|x\|_V = 1} \|\alpha T(x)\|_W = \sup_{\|x\|_V = 1} |\alpha| \, \|Tx\|_W = |\alpha| \sup_{\|x\|_V = 1} \|Tx\|_W = |\alpha| \, \|T\|_{V,W}. \]

(iii) Triangle Inequality. For $S, T \in B(V, W)$, we have
\begin{align*} \|S + T\|_{V,W} &= \sup_{\|x\|_V = 1} \|(S + T)x\|_W \\ &= \sup_{\|x\|_V = 1} \|Sx + Tx\|_W \\ &\le \sup_{\|x\|_V = 1} \big( \|Sx\|_W + \|Tx\|_W \big) \\ &\le \sup_{\|x\|_V = 1} \|Sx\|_W + \sup_{\|x\|_V = 1} \|Tx\|_W \\ &= \|S\|_{V,W} + \|T\|_{V,W}, \end{align*}
where for the first inequality we have used the triangle inequality for $\|\cdot\|_W$, and for the second inequality we have used the following property of the supremum.
For $\alpha = \sup_{\|x\|_V = 1} ( \|Sx\|_W + \|Tx\|_W )$, $\beta = \sup_{\|x\|_V = 1} \|Sx\|_W$, and $\gamma = \sup_{\|x\|_V = 1} \|Tx\|_W$, there is for each $\epsilon > 0$ a $y \in V$ satisfying $\|y\|_V = 1$ such that
\[ \alpha - \epsilon < \|Sy\|_W + \|Ty\|_W \le \beta + \gamma. \]
This holds for every $\epsilon > 0$, which implies that $\alpha \le \beta + \gamma$.

We have shown that $\|\cdot\|_{V,W}$ is a norm on $B(V, W)$. Scale preservation shows that $B(V, W)$ is closed under scalar multiplication, and the triangle inequality shows that $B(V, W)$ is closed under addition. Thus the subset $B(V, W)$ of $L(V, W)$ is a subspace of $L(V, W)$, and hence $B(V, W)$ is a normed linear space with the norm $\|\cdot\|_{V,W}$.

Proof of Theorem 3.5.20. The proof of the formula for $\|A\|_1$ is HW (Exercise 3.27). Here is a proof of the formula for $\|A\|_\infty$. Writing $x = [x_1 \ x_2 \ \cdots \ x_n]^T \in \mathbb{F}^n$, the $i^{\rm th}$ entry of $Ax \in \mathbb{F}^m$ is $\sum_{j=1}^n a_{ij} x_j$. We thus obtain
\[ \|Ax\|_\infty = \sup_{1 \le i \le m} \Big| \sum_{j=1}^n a_{ij} x_j \Big| \le \sup_{1 \le i \le m} \sum_{j=1}^n |a_{ij}| \, |x_j| \le \Big( \sup_{1 \le i \le m} \sum_{j=1}^n |a_{ij}| \Big) \|x\|_\infty. \]
When $x \ne 0$, we can divide both sides by the positive $\|x\|_\infty$ to get
\[ \frac{\|Ax\|_\infty}{\|x\|_\infty} \le \sup_{1 \le i \le m} \sum_{j=1}^n |a_{ij}|. \]
We now show the opposite inequality holds too. Let $k$ be the row index satisfying
\[ \sum_{j=1}^n |a_{kj}| = \sup_{1 \le i \le m} \sum_{j=1}^n |a_{ij}|. \]
Let $x \in \mathbb{F}^n$ be the vector whose $i^{\rm th}$ entry is $0$ if $a_{ki} = 0$, and is $\overline{a_{ki}} / |a_{ki}|$ if $a_{ki} \ne 0$. If every entry of $x$ were zero, then $a_{ki} = 0$ for all $i = 1, \dots, n$, and since $\sum_{j=1}^n |a_{kj}| = \sup_{1 \le i \le m} \sum_{j=1}^n |a_{ij}|$, every entry of $A$ would be zero, in which case the formula holds (both sides are zero). So we may assume that $x \ne 0$, which implies that $\|x\|_\infty = 1$. From $\|Ax\|_\infty \le \|A\|_\infty \, \|x\|_\infty$ we have
\[ \|A\|_\infty \ge \frac{\|Ax\|_\infty}{\|x\|_\infty} \ge \Big| \sum_{j=1}^n a_{kj} x_j \Big| = \sum_{\substack{1 \le j \le n \\ a_{kj} \ne 0}} \frac{|a_{kj}|^2}{|a_{kj}|} = \sum_{j=1}^n |a_{kj}| = \sup_{1 \le i \le m} \sum_{j=1}^n |a_{ij}| \]
because of the meaning of $k$. This gives the opposite inequality, and the formula for $\|A\|_\infty$ follows.
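The two inequalities used in the proof of Theorem 3.5.2, namely $\langle x, y \rangle + \langle y, x \rangle = 2 \operatorname{Re} \langle x, y \rangle \le 2 |\langle x, y \rangle|$ and the Cauchy-Schwarz inequality, can be spot-checked numerically. A small Python sketch (ours; a check on one sample, not a proof), using the convention $\langle x, y \rangle = \sum_j x_j \overline{y_j}$:

```python
import math

def inner(x, y):
    """Standard inner product on C^n: <x, y> = sum x_j * conj(y_j)."""
    return sum(a * b.conjugate() for a, b in zip(x, y))

def norm(x):
    """Norm induced by the inner product, as in Theorem 3.5.2."""
    return math.sqrt(inner(x, x).real)

x = [1 + 2j, 3 - 1j]
y = [2 - 1j, 1j]
z = inner(x, y)
# <x,y> + <y,x> = 2 Re <x,y>, a real number bounded above by 2 |<x,y>|
assert abs((inner(x, y) + inner(y, x)) - 2 * z.real) < 1e-12
assert 2 * z.real <= 2 * abs(z)
# Cauchy-Schwarz, then the triangle inequality that follows from it
assert abs(z) <= norm(x) * norm(y) + 1e-12
assert norm([a + b for a, b in zip(x, y)]) <= norm(x) + norm(y) + 1e-12
print("all inequalities verified on this sample")
```

Either convention for which slot is conjugate-linear gives the same bounds, since it only conjugates $\langle x, y \rangle$.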
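The extremal vector constructed in the proof of Theorem 3.5.20 can be exhibited concretely: choose the row $k$ of largest absolute sum and set $x_i = \overline{a_{ki}} / |a_{ki}|$ (or $0$ when $a_{ki} = 0$); then $\|x\|_\infty = 1$ and $\|Ax\|_\infty$ equals the maximum row sum. A Python sketch of this construction (names are ours):

```python
def norm_inf(A):
    """Maximum absolute row sum (the formula of Theorem 3.5.20)."""
    return max(sum(abs(a) for a in row) for row in A)

def matvec(A, x):
    return [sum(a * t for a, t in zip(row, x)) for row in A]

A = [[1 + 1j, -2], [0.5, 0.5]]
# k = index of the row with the largest absolute row sum
k = max(range(len(A)), key=lambda i: sum(abs(a) for a in A[i]))
# the vector from the proof: x_i = conj(a_ki)/|a_ki| when a_ki != 0, else 0
x = [a.conjugate() / abs(a) if a != 0 else 0 for a in A[k]]
lhs = max(abs(t) for t in matvec(A, x))  # ||Ax||_inf, with ||x||_inf = 1
assert abs(lhs - norm_inf(A)) < 1e-12
print(lhs)
```

Each term $a_{kj} x_j = |a_{kj}|^2 / |a_{kj}| = |a_{kj}|$, so row $k$ of $Ax$ sums the absolute values of row $k$ of $A$ exactly as in the proof.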