Section 3.9. Matrix Norm


Note. We define several matrix norms, some similar to vector norms and some reflecting how multiplication by a matrix affects the norm of a vector. We use matrix norms to discuss the convergence of sequences and series of matrices.

Definition. Consider the vector space $\mathbb{R}^{n\times m}$ of all $n\times m$ matrices with real entries. A matrix norm on $\mathbb{R}^{n\times m}$ is a mapping $\|\cdot\| : \mathbb{R}^{n\times m} \to \mathbb{R}$ such that for all $A, B \in \mathbb{R}^{n\times m}$ and $a \in \mathbb{R}$:

(1) Nonnegativity and Mapping of the Identity: If $A \neq 0$ then $\|A\| > 0$, and $\|0\| = 0$.

(2) Relation of Real Scalar Multiplication to Real Multiplication: $\|aA\| = |a|\,\|A\|$.

(3) Triangle Inequality: $\|A + B\| \leq \|A\| + \|B\|$.

(4) Consistency Property: $\|AB\| \leq \|A\|\,\|B\|$.

A matrix norm is orthogonally invariant if for $A$ and $B$ orthogonally similar we have $\|A\| = \|B\|$. The consistency property is commonly called the submultiplicative property.

Note. We briefly denote the norm of a vector $v$ as $\|v\|_v$ and the norm of a matrix $M$ as $\|M\|_M$.

Definition. The matrix norm on $\mathbb{R}^{n\times m}$ is
$$\|A\|_M = \sup_{x \neq 0} \frac{\|Ax\|_v}{\|x\|_v}, \text{ where } A \in \mathbb{R}^{n\times m} \text{ and } x \in \mathbb{R}^m.$$
This is also called the matrix norm induced by the vector norm $\|\cdot\|_v$, or the operator norm.
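
To make the operator-norm definition concrete, here is a minimal NumPy sketch (the helper name induced_norm_estimate is mine, not the text's): it lower-bounds the supremum by sampling random nonzero vectors, and for the $\ell_2$ case compares against NumPy's exact spectral norm.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))   # A in R^{3x4}

def induced_norm_estimate(A, p=2, trials=100_000):
    """Monte-Carlo lower bound for sup_{x != 0} ||Ax||_p / ||x||_p (illustrative helper)."""
    X = rng.standard_normal((A.shape[1], trials))   # random nonzero x's as columns
    ratios = np.linalg.norm(A @ X, ord=p, axis=0) / np.linalg.norm(X, ord=p, axis=0)
    return ratios.max()

print(induced_norm_estimate(A, p=2))  # approaches, from below, ...
print(np.linalg.norm(A, 2))           # ... the exact l_2 operator norm
```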

Note. Gentle uses max in place of sup, but a priori this cannot be done, since $\mathbb{R}^m$ contains an infinite number of nonzero vectors and a supremum over an infinite set need not be attained. In Exercise 3.22 (modified) you are to show that the matrix norm actually is a norm.

Theorem 3.9.1. The vector norm and its induced matrix norm satisfy:

(1) $\|Ax\| \leq \|A\|\,\|x\|$.

(2) $\|A\| = \sup_{\|x\|=1} \|Ax\|$.

Note. The proof of Theorem 3.9.1 is to be given in Exercise 3.23. In $\mathbb{R}^m$, $\{x \mid \|x\| = 1\}$ is a compact set and so it is correct to define $\|A\| = \max_{\|x\|=1} \|Ax\|$. However, in infinite dimensional vector spaces the set $\{x \mid \|x\| = 1\}$ is not compact, and in that case sup is necessary. (Implicit in this observation is the fact that $\|\cdot\| : \mathbb{R}^m \to \mathbb{R}$ is continuous.)

Definition. If we use the $\ell_p$ norm on $\mathbb{R}^m$ (see Section 2.1), then the induced matrix norm is the $\ell_p$ matrix norm $\|A\|_p = \max_{\|x\|_p = 1} \|Ax\|_p$.

Theorem 3.9.2. For an $n\times m$ matrix $A = [a_{ij}]$, the $\ell_1$ norm satisfies
$$\|A\|_1 = \max_{1\le j\le m} \sum_{i=1}^n |a_{ij}|,$$
and so it is also called the column-sum norm. The $\ell_\infty$ norm satisfies
$$\|A\|_\infty = \max_{1\le i\le n} \sum_{j=1}^m |a_{ij}|,$$
and so it is also called the row-sum norm.
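
A small NumPy check of Theorem 3.9.2 (a sketch of my own, not one of the text's exercises): the matrix 1-norm and $\infty$-norm reduce to the largest absolute column sum and row sum.

```python
import numpy as np

A = np.array([[ 1., -2.,  3.],
              [-4.,  5., -6.]])    # n = 2 rows, m = 3 columns

col_sums = np.abs(A).sum(axis=0)   # absolute column sums: [5, 7, 9]
row_sums = np.abs(A).sum(axis=1)   # absolute row sums: [6, 15]

assert np.isclose(col_sums.max(), np.linalg.norm(A, 1))       # ||A||_1 = 9  (column-sum norm)
assert np.isclose(row_sums.max(), np.linalg.norm(A, np.inf))  # ||A||_inf = 15  (row-sum norm)
```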

Note. Theorem 3.9.2 immediately implies that for any $A \in \mathbb{R}^{n\times m}$ we have $\|A^T\|_\infty = \|A\|_1$. If $A$ is symmetric then $\|A\|_1 = \|A\|_\infty$.

Theorem 3.9.3. The $\ell_2$ matrix norm and spectral radius are related as: $\|A\|_2 = \sqrt{\rho(A^T A)}$.

Note. The proof of Theorem 3.9.3 is to be given in Exercise 3.24.

Note. In Exercise 3.25(a), it is shown for orthogonal $Q$ that $\|Qx\|_2 = \|x\|_2$; that is, the matrix transformation $Q : \mathbb{R}^m \to \mathbb{R}^n$ is an isometry, and orthogonal matrices are examples of isometric matrices. Notice that this gives $\|Q\|_2 = 1$. More generally, if $A$ and $B$ are orthogonally similar then (by the Consistency Property) $\|A\|_2 = \|B\|_2$.

Definition. For $A \in \mathbb{R}^{n\times m}$, the Frobenius norm (also called the Euclidean matrix norm) is
$$\|A\|_F = \sqrt{\sum_{i=1}^n \sum_{j=1}^m (a_{ij})^2}.$$

Note. For the fact that the Frobenius norm satisfies properties (1), (2), (3) of the definition of matrix norm, we need only observe that for any $A \in \mathbb{R}^{n\times m}$ there is a vector $v \in \mathbb{R}^{nm}$ (and conversely) with $\|A\|_F = \|v\|_2$, and that $\|\cdot\|_2$ is a vector norm. Proof of the Consistency Property is to be given in Exercise 3.27.
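
A quick NumPy sanity check of Theorem 3.9.3 and the notes above (illustrative only, not part of the text's exercises):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 4))

# Theorem 3.9.3: ||A||_2 = sqrt(rho(A^T A)).
rho = np.linalg.eigvalsh(A.T @ A).max()     # spectral radius of the symmetric matrix A^T A
assert np.isclose(np.sqrt(rho), np.linalg.norm(A, 2))

# The Frobenius norm is the 2-norm of A flattened into a vector in R^{nm}.
assert np.isclose(np.linalg.norm(A, 'fro'), np.linalg.norm(A.ravel(), 2))

# ||Q||_2 = 1 for orthogonal Q (Exercise 3.25(a)); here Q comes from a QR factorization.
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))
assert np.isclose(np.linalg.norm(Q, 2), 1.0)
```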

Note. Recall that for $n\times m$ matrices $A$ and $B$ with the columns of $A$ as $a_1, a_2, \ldots, a_m$ and the columns of $B$ as $b_1, b_2, \ldots, b_m$, we have the inner product
$$\langle A, B\rangle = \sum_{j=1}^m a_j^T b_j = \sum_{j=1}^m \langle a_j, b_j\rangle.$$
So with $A = [a_{ij}]$ and $B = [b_{ij}]$, $\langle A, B\rangle = \sum_{j=1}^m \sum_{i=1}^n a_{ij} b_{ij}$, and so $\langle A, A\rangle = \sum_{j=1}^m \sum_{i=1}^n (a_{ij})^2 = \|A\|_F^2$. Also, by Theorem 3.2.8(5), $\langle A, B\rangle = \mathrm{tr}(A^T B)$, so we also have $\|A\|_F = \sqrt{\langle A, A\rangle}$ and hence $\|A\|_F = \sqrt{\mathrm{tr}(A^T A)}$. So the Frobenius norm is the norm induced by the matrix inner product (see page 74 of the text). Clearly from the definition of the Frobenius norm we have $\|A^T\|_F = \|A\|_F$ (since the entries of $A$ and $A^T$ are collectively the same). If $Q \in \mathbb{R}^{n\times m}$ is orthogonal, then $\mathrm{rank}(Q) = \mathrm{tr}(Q^T Q) = \langle Q, Q\rangle = \|Q\|_F^2$.

Theorem 3.9.4. If square matrices $A$ and $B$ are orthogonally similar then $\|A\|_F = \|B\|_F$.

Note. Since the Frobenius norm is induced by the matrix inner product and the singular value decomposition of $A$ is $A = \sum_{i=1}^r d_i u_i v_i^T$, where
$$\langle u_i v_i^T, u_j v_j^T\rangle = \begin{cases} 1 & \text{if } i = j \\ 0 & \text{if } i \neq j \end{cases}$$
and $d_i = \langle A, u_i v_i^T\rangle$, we have $\|A\|_F^2 = \sum_{i=1}^r d_i^2$, where the $d_i$ are the singular values of $A$. This is Parseval's identity for the Frobenius norm (see Section 2.2).
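
The identities above are easy to verify numerically; the following NumPy sketch (my own illustration, not from the text) checks $\langle A, B\rangle = \mathrm{tr}(A^T B)$, $\|A\|_F = \sqrt{\mathrm{tr}(A^T A)}$, and Parseval's identity via the singular values.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 3))
B = rng.standard_normal((4, 3))

# <A, B> = sum_j <a_j, b_j> = sum of entrywise products = tr(A^T B).
assert np.isclose(np.sum(A * B), np.trace(A.T @ B))

# ||A||_F = sqrt(tr(A^T A)), and ||A||_F^2 = sum of squared singular values (Parseval).
d = np.linalg.svd(A, compute_uv=False)
assert np.isclose(np.linalg.norm(A, 'fro'), np.sqrt(np.trace(A.T @ A)))
assert np.isclose(np.linalg.norm(A, 'fro')**2, np.sum(d**2))
```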

Note 3.9.A. In Theorem 2.1.10 we showed that all vector norms on a given finite dimensional vector space are equivalent. In Exercise 3.28 it is to be shown that any two matrix norms induced by vector norms are equivalent. In fact, Gentle states that all matrix norms are equivalent (see page 133).

Theorem 3.9.5. Let $A$ be an $n\times m$ real matrix. Then

$\|A\|_\infty \leq \sqrt{m}\,\|A\|_F$  and  $\|A\|_F \leq \sqrt{\min\{n,m\}}\,\|A\|_2$,

$\|A\|_2 \leq \sqrt{m}\,\|A\|_1$  and  $\|A\|_1 \leq \sqrt{n}\,\|A\|_2$,

$\|A\|_2 \leq \|A\|_F$  and  $\|A\|_F \leq \sqrt{n}\,\|A\|_\infty$,

and each inequality is sharp (that is, there is a matrix $A$ for which the inequality reduces to equality).

Note. The proof of Theorem 3.9.5 is to be given in Exercise 3.30.

Theorem 3.9.6. For any matrix norm $\|\cdot\|$ and any square matrix $A$, $\rho(A) \leq \|A\|$.

Note. For square $A$, $\|A\|_1 = \max_{1\le j\le n} \sum_{i=1}^n |a_{ij}|$ and $\|A\|_\infty = \max_{1\le i\le n} \sum_{j=1}^n |a_{ij}|$. So by Theorem 3.9.6 we see that the spectral radius $\rho(A)$ is less than or equal to both the largest absolute column sum and the largest absolute row sum of $A$.
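
All six inequalities of Theorem 3.9.5, and the spectral-radius bound of Theorem 3.9.6, can be spot-checked on a random matrix; a NumPy sketch (one random instance, an illustration rather than a proof):

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 5, 7
A = rng.standard_normal((n, m))

one, two = np.linalg.norm(A, 1), np.linalg.norm(A, 2)
inf_, fro = np.linalg.norm(A, np.inf), np.linalg.norm(A, 'fro')

assert inf_ <= np.sqrt(m) * fro
assert fro  <= np.sqrt(min(n, m)) * two
assert two  <= np.sqrt(m) * one
assert one  <= np.sqrt(n) * two
assert two  <= fro
assert fro  <= np.sqrt(n) * inf_

# Theorem 3.9.6: rho(B) <= ||B|| for square B; check the 1- and infinity-norms.
B = rng.standard_normal((n, n))
rho = np.abs(np.linalg.eigvals(B)).max()
assert rho <= np.linalg.norm(B, 1) and rho <= np.linalg.norm(B, np.inf)
```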

Note. Since we have a matrix norm, we can use it to define limits of sequences of matrices (of the same size). Since all matrix norms are equivalent by Note 3.9.A, we can express the definition in terms of a generic matrix norm.

Definition. A sequence of matrices of the same size, $A_1, A_2, \ldots$, converges to matrix $A$ if for all $\varepsilon > 0$ there is $N \in \mathbb{N}$ such that for all $n \geq N$ we have $\|A - A_n\| < \varepsilon$.

Theorem 3.9.7. Let $A$ be a square matrix. Then $\lim_{k\to\infty} A^k = 0$ if and only if $\rho(A) < 1$.

Note. In fact, we can express the spectral radius in terms of a limit of matrix norms as follows.

Theorem 3.9.8. For square matrix $A$, $\lim_{k\to\infty} \|A^k\|^{1/k} = \rho(A)$.

Theorem 3.9.9. Let $A$ be an $n\times n$ matrix with $\|A\| < 1$. Then
$$I + \lim_{k\to\infty} \left( \sum_{n=1}^k A^n \right) = (I - A)^{-1}.$$

Definition. A square matrix $A$ such that $A^k = 0$ and $A^{k-1} \neq 0$ for some $k \in \mathbb{N}$ is nilpotent of index $k$.
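
Theorems 3.9.7 through 3.9.9 (and, as a preview, the nilpotent matrices just defined and treated in Theorem 3.9.10 below) can be illustrated numerically. A NumPy sketch, assuming $\|A\|_2 < 1$ and truncating the series of Theorem 3.9.9 at 500 terms in place of the limit:

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((4, 4))
A *= 0.9 / np.linalg.norm(A, 2)       # rescale so ||A||_2 = 0.9 < 1 (hence rho(A) < 1)

# Theorem 3.9.7: A^k -> 0 because rho(A) < 1.
print(np.linalg.norm(np.linalg.matrix_power(A, 200), 'fro'))   # essentially 0

# Theorem 3.9.8: ||A^k||^(1/k) -> rho(A) as k grows.
rho = np.abs(np.linalg.eigvals(A)).max()
for k in (10, 50, 200):
    print(k, np.linalg.norm(np.linalg.matrix_power(A, k), 2) ** (1 / k), rho)

# Theorem 3.9.9: I + A + A^2 + ... = (I - A)^{-1}; partial sums converge.
S, term = np.eye(4), np.eye(4)
for _ in range(500):
    term = term @ A
    S += term
assert np.allclose(S, np.linalg.inv(np.eye(4) - A))

# Nilpotent preview (Theorem 3.9.10): strictly upper triangular N has N^4 = 0,
# eigenvalues all 0 (so rho(N) = 0 up to roundoff), tr(N) = 0, rank(N) = 3 <= n - 1.
N = np.triu(np.ones((4, 4)), k=1)
assert np.allclose(np.linalg.matrix_power(N, 4), 0)
print(np.abs(np.linalg.eigvals(N)).max(), np.trace(N), np.linalg.matrix_rank(N))
```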

Theorem 3.9.10. Suppose $A$ is an $n\times n$ matrix which is nilpotent of index $k \in \mathbb{N}$. Then:

(1) $\rho(A) = 0$ (so all eigenvalues of $A$ are 0),

(2) $\mathrm{tr}(A) = 0$, and

(3) $\mathrm{rank}(A) \leq n - 1$.

Note. The proof of Theorem 3.9.10 is to be given in Exercise 3.33.

Revised: 5/4/2018