
Singular Value Decomposition

Rectangular matrices. For rectangular matrices $M$ the notions of eigenvalue and eigenvector cannot be defined. However, the products $MM^*$ and/or $M^*M$ (which are square, even self-adjoint, and even positive semi-definite matrices) carry a lot of information about $M$:

Proposition. Let $M$ be an $m\times n$ matrix. Then
$$\mathcal{N}(M^*M)=\mathcal{N}(M) \tag{32}$$
$$\mathcal{R}(MM^*)=\mathcal{R}(M) \tag{33}$$

Proof. To show (32), let $x\in\mathcal{N}(M^*M)$; then $M^*Mx=0$, so that $0=\langle M^*Mx,x\rangle=\langle Mx,Mx\rangle$, which implies $Mx=0$, showing that $\mathcal{N}(M^*M)\subseteq\mathcal{N}(M)$. The converse inclusion is immediate.

To show (33), note that (32), used with $M$ and $M^*$ interchanged, implies that $\mathcal{N}(MM^*)=\mathcal{N}(M^*)$, hence $\mathcal{N}(MM^*)^\perp=\mathcal{N}(M^*)^\perp$, which is exactly (33) (recall that for any linear transformation $L$ we have $\mathcal{N}(L^*)^\perp=\mathcal{R}(L)$). $\square$

Moreover, $MM^*$ and $M^*M$ have the same nonzero eigenvalues:

Proposition. Let $M$ be an $m\times n$ matrix. The matrices $MM^*$ and $M^*M$ are positive semi-definite. Moreover, they have the same nonzero eigenvalues (with the same multiplicities). More precisely, let $\lambda_1,\dots,\lambda_r$ be the positive eigenvalues. If $M^*Mv_j=\lambda_j v_j$ with $v_1,\dots,v_r$ an orthonormal set, then $MM^*u_j=\lambda_j u_j$ for $u_j=\frac{1}{\sqrt{\lambda_j}}Mv_j$, and $u_1,\dots,u_r$ is an orthonormal set.

Proof. $MM^*$ and $M^*M$ are obviously self-adjoint; they are positive semi-definite since $\langle x,M^*Mx\rangle=\langle Mx,Mx\rangle\ge 0$ and $\langle x,MM^*x\rangle=\langle M^*x,M^*x\rangle\ge 0$.

Let $v_1,\dots,v_n$ be an orthonormal set of eigenvectors of $M^*M$, the first $r$ corresponding to nonzero eigenvalues: $M^*Mv_j=\lambda_j v_j$ with $\lambda_j>0$ for $j=1,\dots,r$, and $M^*Mv_j=0$ for $j>r$. Applying $M$ we discover that $MM^*(Mv_j)=\lambda_j Mv_j$ with $\lambda_j>0$ for $j=1,\dots,r$, and $MM^*(Mv_j)=0$ for $j>r$, which would mean that the $Mv_j$ are eigenvectors of $MM^*$ corresponding to the eigenvalue $\lambda_j$, provided we ensure that $Mv_j\neq 0$. This is true for $j\le r$ by (32). Also, $Mv_1,\dots,Mv_r$ are mutually orthogonal, since $\langle Mv_j,Mv_i\rangle=\langle v_j,M^*Mv_i\rangle=\lambda_i\,\delta_{ij}$, so $Mv_j\perp Mv_i$ for all $i\neq j\le r$, and $\|Mv_j\|^2=\lambda_j\neq 0$. Therefore, all the nonzero eigenvalues of $M^*M$ are also eigenvalues of $MM^*$, with corresponding orthonormal eigenvectors $u_j=\frac{1}{\sqrt{\lambda_j}}Mv_j$, $j=1,\dots,r$.
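These propositions are easy to check numerically. Below is a minimal NumPy sketch (the matrix $M$ is an arbitrary random example, not from the text): the nonzero eigenvalues of $M^*M$ and $MM^*$ coincide, and $M^*M$ has the same rank, hence the same null space, as $M$.

```python
import numpy as np

rng = np.random.default_rng(0)
# Arbitrary 4x3 complex matrix (illustrative data only).
M = rng.standard_normal((4, 3)) + 1j * rng.standard_normal((4, 3))

A = M.conj().T @ M   # M*M  (n x n, positive semi-definite)
B = M @ M.conj().T   # MM*  (m x m, positive semi-definite)

# Hermitian matrices: eigvalsh returns real eigenvalues in ascending order.
eig_A = np.linalg.eigvalsh(A)
eig_B = np.linalg.eigvalsh(B)

# Same nonzero eigenvalues: the top 3 agree, the extra one of MM* is ~0.
print(np.allclose(eig_A, eig_B[-3:]))          # True
print(np.isclose(eig_B[0], 0.0, atol=1e-10))   # True

# N(M*M) = N(M): equal ranks, and one null space contains the other.
print(np.linalg.matrix_rank(A) == np.linalg.matrix_rank(M))  # True
```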

The same argument can be applied with $M$ replaced by $M^*$, showing that indeed $MM^*$ and $M^*M$ have the same nonzero eigenvalues, with the same multiplicities. $\square$

The SVD theorem. We are going to bring any $m\times n$ matrix $M$ to a (rectangular) diagonal form by writing $M=U\Sigma V^*$, where $\Sigma$ is a diagonal $m\times n$ matrix and $U$ and $V$ are unitary (of the obvious dimensions). The diagonal elements of $\Sigma$ are called the singular values of $M$. The SVD has a myriad of applications in filtering, image reconstruction, image compression, and statistics, to name just a few.

Theorem (Singular Value Decomposition). Let $M$ be an $m\times n$ matrix. Then $M=U\Sigma V^*$, where:
- $U$ is a unitary matrix whose columns are eigenvectors of $MM^*$,
- $V$ is a unitary matrix whose columns are eigenvectors of $M^*M$,
- $\Sigma$ is an $m\times n$ diagonal matrix.

More precisely: if $U=[u_1,\dots,u_r,u_{r+1},\dots,u_m]$ and $V=[v_1,\dots,v_r,v_{r+1},\dots,v_n]$, then for $j=1,\dots,r$ the vectors $u_j$ and $v_j$ correspond to the eigenvalue $\lambda_j\neq 0$, while all the others correspond to the eigenvalue $0$. The diagonal matrix $\Sigma$ has $\Sigma_{jj}=\sigma_j=\sqrt{\lambda_j}$ for $j=1,\dots,r$, and all other elements are $0$. Also, $u_j=\frac{1}{\sigma_j}Mv_j$ for $j=1,\dots,r$.

Remarks. 1. $M^*M=V\Lambda_n V^*$ and $MM^*=U\Lambda_m U^*$, where $\Lambda_m,\Lambda_n$ are diagonal matrices with entries $\lambda_1,\dots,\lambda_r$ and $0$ everywhere else.

2. The singular values are preferably listed in decreasing order, $\sigma_1\ge\dots\ge\sigma_r>0$, for reasons coming from applications; see the section on low-rank approximations below.

Proof of the Theorem. Let $v_1,\dots,v_r$ and $u_1,\dots,u_r$ be as in the proposition above; $u_{r+1},\dots,u_m$ and $v_{r+1},\dots,v_n$ correspond to the eigenvalue $0$. Calculating
$$U^*MV=\begin{bmatrix}u_1^*\\ \vdots\\ u_m^*\end{bmatrix}M\,[v_1,\dots,v_n]=\begin{bmatrix}u_1^*\\ \vdots\\ u_m^*\end{bmatrix}[Mv_1,\dots,Mv_n]=S$$
where $S$ is a matrix with elements $s_{ij}=u_i^*Mv_j$. For $j>r$ we have $M^*Mv_j=0$, hence by (32) also $Mv_j=0$, hence $s_{ij}=0$; while for $j\le r$ we have $u_i^*Mv_j=u_i^*(\sqrt{\lambda_j}\,u_j)=\sqrt{\lambda_j}\,\delta_{ij}$, showing that $S$ is the diagonal matrix $\Sigma$ stated. $\square$
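Numerically, this decomposition is computed by `numpy.linalg.svd`. The sketch below (an arbitrary real test matrix, full column rank so $r=n$) verifies the statements of the theorem: $M=U\Sigma V^*$, the columns of $U$ and $V$ are eigenvectors of $MM^*$ and $M^*M$ with eigenvalues $\lambda_j=\sigma_j^2$, and $u_j=\frac{1}{\sigma_j}Mv_j$.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 5, 3
M = rng.standard_normal((m, n))          # arbitrary test matrix

U, s, Vh = np.linalg.svd(M)              # U: m x m, s: singular values, Vh = V*
Sigma = np.zeros((m, n))
Sigma[:n, :n] = np.diag(s)               # rectangular diagonal matrix

print(np.allclose(M, U @ Sigma @ Vh))    # M = U Sigma V*

for j in range(n):                       # here r = n (full column rank)
    u, v, sj = U[:, j], Vh[j, :], s[j]   # rows of Vh are the v_j (real case)
    print(np.allclose(M @ M.T @ u, sj**2 * u),   # MM* u_j = lambda_j u_j
          np.allclose(M.T @ M @ v, sj**2 * v),   # M*M v_j = lambda_j v_j
          np.allclose(u, M @ v / sj))            # u_j = (1/sigma_j) M v_j
```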

Examples and applications of SVD.

Example 1. What does the SVD look like for a square, diagonal matrix? Say
$$M=\begin{bmatrix}a_1&0\\0&a_2\end{bmatrix}.$$
In this case
$$MM^*=\begin{bmatrix}|a_1|^2&0\\0&|a_2|^2\end{bmatrix}=M^*M$$
therefore $\sigma_j=|a_j|$, $V=I$, and $u_j=\frac{1}{\sigma_j}Me_j=\frac{a_j}{|a_j|}e_j$. By the polar decomposition of complex numbers, write $a_j=|a_j|e^{i\theta_j}$; then $u_j=e^{i\theta_j}e_j$ and the SVD is
$$\begin{bmatrix}a_1&0\\0&a_2\end{bmatrix}=\begin{bmatrix}e^{i\theta_1}&0\\0&e^{i\theta_2}\end{bmatrix}\begin{bmatrix}|a_1|&0\\0&|a_2|\end{bmatrix}$$
which is called the polar decomposition of the matrix $M$. In general:

Proposition (Polar decomposition of square matrices). Every square matrix $M$ can be decomposed as $M=US$ with $U$ unitary and $S$ positive semi-definite.

Proof. Writing the SVD of the matrix, $M=U\Sigma V^*=(UV^*)(V\Sigma V^*)$, which is the polar decomposition, since $UV^*$ is a unitary matrix and $V\Sigma V^*$ is a self-adjoint matrix with non-negative eigenvalues. $\square$

Example 2. A rectangular diagonal matrix, say
$$\begin{bmatrix}a_1&0\\0&a_2\\0&0\end{bmatrix}=\begin{bmatrix}e^{i\theta_1}&0&0\\0&e^{i\theta_2}&0\\0&0&1\end{bmatrix}\begin{bmatrix}|a_1|&0\\0&|a_2|\\0&0\end{bmatrix}\begin{bmatrix}1&0\\0&1\end{bmatrix}.$$

Example 3. A column matrix:
$$\begin{bmatrix}1\\1\end{bmatrix}=\begin{bmatrix}\frac{1}{\sqrt2}&-\frac{1}{\sqrt2}\\ \frac{1}{\sqrt2}&\frac{1}{\sqrt2}\end{bmatrix}\begin{bmatrix}\sqrt2\\0\end{bmatrix}\,[1].$$

Example 4. An orthogonal matrix $Q$ is its own SVD, since $QQ^*=Q^*Q=I$, hence $V=I$, $\Sigma=I$ and $U=Q$.
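The proof of the proposition is a recipe one can run directly: from $M=U\Sigma V^*$, take the unitary factor $UV^*$ and the positive semi-definite factor $V\Sigma V^*$. A minimal NumPy sketch with an arbitrary square complex test matrix (SciPy's `scipy.linalg.polar` computes the same factorization):

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

U, s, Vh = np.linalg.svd(M)
W = U @ Vh                           # U V*  -- the unitary factor
S = Vh.conj().T @ np.diag(s) @ Vh    # V Sigma V*  -- positive semi-definite

print(np.allclose(M, W @ S))                       # M = W S
print(np.allclose(W.conj().T @ W, np.eye(4)))      # W is unitary
print(np.all(np.linalg.eigvalsh(S) >= -1e-12))     # S has eigenvalues >= 0
```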

The matrix of an oblique projection. Recall that a square matrix $P$ is a projection if $P^2=P$; then $P$ projects onto $R=\mathcal{R}(P)$, parallel to $N=\mathcal{N}(P)$. For given complementary subspaces $R$ and $N$, a simple formula for the matrix of $P$ can be obtained from the singular value decomposition. (The eigenvalues of $P$ can only be $0$ or $1$; the singular values $\sigma_j$, however, need not equal $1$ when the projection is oblique.)

The vectors $v_1,\dots,v_r$ in the Theorem form an orthonormal basis for $\mathcal{R}(P^*P)=\mathcal{N}(P^*P)^\perp=N^\perp$, and $u_1=\frac{1}{\sigma_1}Pv_1,\dots,u_r=\frac{1}{\sigma_r}Pv_r$ form an orthonormal basis for $\mathcal{R}(PP^*)=\mathcal{R}(P)=R$. Split the matrices $U,V$ into blocks, the first one containing the first $r$ columns: $U=[U_A\;U_0]$, $V=[V_B\;V_0]$. Writing $\Sigma_r$ for the upper-left $r\times r$ diagonal block of $\Sigma$, the SVD of $P$ becomes
$$P=U\Sigma V^*=[U_A\;U_0]\begin{bmatrix}\Sigma_r&0\\0&0\end{bmatrix}\begin{bmatrix}V_B^*\\V_0^*\end{bmatrix}=U_A\Sigma_r V_B^*.$$

Note that $V_B^*U_A=\Sigma_r^{-1}$, which is invertible. Indeed, the elements of this matrix are
$$(V_B^*U_A)_{ij}=\langle v_i,u_j\rangle=\tfrac{1}{\sigma_j}\langle v_i,Pv_j\rangle=\tfrac{1}{\sigma_j}\langle v_i,v_j\rangle=\tfrac{1}{\sigma_j}\delta_{ij},$$
since $Pv_j-v_j=(P-I)v_j\in\mathcal{N}(P)=N$ is orthogonal to $v_i\in N^\perp$.

Let $x_1,\dots,x_r$ be any basis of $R$; then $A=[x_1,\dots,x_r]=U_A T$ for some invertible $r\times r$ matrix $T$. Similarly, if $y_1,\dots,y_r$ is any basis of $N^\perp$, then $B=[y_1,\dots,y_r]=V_B S$ for some invertible $r\times r$ matrix $S$. Then $A(B^*A)^{-1}B^*$ is the matrix of $P$, since
$$A(B^*A)^{-1}B^*=U_AT\,(S^*V_B^*U_AT)^{-1}S^*V_B^*=U_A(V_B^*U_A)^{-1}V_B^*=U_A\Sigma_r V_B^*=P.$$

Low-rank approximations, image compression. Suppose an $m\times n$ matrix $M$ is to be approximated by a matrix $X$ of the same dimensions, but of lower rank $k$. If $M=U\Sigma V^*$ with singular values $\sigma_1\ge\dots\ge\sigma_k\ge\dots\ge\sigma_r$, let $X=U\Sigma_k V^*$, where $\Sigma_k$ keeps the singular values $\sigma_1,\dots,\sigma_k$ and has $0$ everywhere else. Then the sum of the squares of the singular values of $M-X$ is minimal among all $m\times n$ matrices of rank $k$ (in the sense that the Frobenius norm of $M-X$ is minimal). Such low-rank approximations are used in image compression, noise filtering, and many other applications.
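In code, the best rank-$k$ approximation is just a truncated SVD. A sketch in NumPy (the $60\times 40$ random matrix is a stand-in for, say, an image): the Frobenius error of the truncation equals $\sqrt{\sigma_{k+1}^2+\dots+\sigma_r^2}$, the optimal value.

```python
import numpy as np

def low_rank(M, k):
    """Best rank-k approximation of M in the Frobenius norm (truncated SVD)."""
    U, s, Vh = np.linalg.svd(M, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vh[:k, :]   # scale columns of U by sigma_j

rng = np.random.default_rng(3)
M = rng.standard_normal((60, 40))           # stand-in for an image
s = np.linalg.svd(M, compute_uv=False)      # singular values, decreasing

k = 10
X = low_rank(M, k)
print(np.linalg.matrix_rank(X) == k)        # True
# Optimal error = norm of the discarded tail of singular values:
print(np.isclose(np.linalg.norm(M - X, 'fro'),
                 np.sqrt(np.sum(s[k:] ** 2))))   # True
```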

Pseudoinverse. There are many ways to define a matrix which behaves, in some sense, like the inverse of a matrix which is not invertible. This section describes the Moore-Penrose pseudoinverse.

Finding the best-fit solution (in the least-squares sense) of a possibly overdetermined linear system $Mx=b$ yields a vector $x^+$ which depends linearly on $b$; hence there is a matrix $M^+$ so that $x^+=M^+b$. This is the Moore-Penrose pseudoinverse of $M$. Recall the construction of this solution.

Step I. If $Mx=b$ is overdetermined (i.e. has no solutions), this is because $b\notin\mathcal{R}(M)$. Then find $x$ so that $\|Mx-b\|$ is minimal. This happens if $Mx=Pb$, where $Pb$ is the orthogonal projection of $b$ onto $\mathcal{R}(M)$.

Step II. Now $Mx=Pb$ is solvable. The solution is not unique if $\mathcal{N}(M)\neq\{0\}$, in which case, if $x_p$ is a solution, then all vectors in $x_p+\mathcal{N}(M)$ are solutions. Choose among them the solution of minimal length: find $w\in\mathcal{N}(M)$ with $\|x_p+w\|$ minimal. Since $\|x_p+w\|$ is the distance between $x_p$ and $-w\in\mathcal{N}(M)$, it is minimal when $-w$ is the orthogonal projection of $x_p$ onto $\mathcal{N}(M)$. Define $x^+=x_p+w$. Then $M^+$ is defined by $M^+b=x^+$ for all $b$.

Example. Solve $\Sigma x=b$ for
$$\Sigma=\begin{bmatrix}\sigma_1&0&0\\0&\sigma_2&0\\0&0&0\end{bmatrix}.$$
Clearly $\mathcal{R}(\Sigma)=\{y\in\mathbb{R}^3:y_3=0\}$, hence $Pb=P(b_1,b_2,b_3)^T=(b_1,b_2,0)^T$. Then $\Sigma x=Pb$ has the solutions $x$ with $x_j=b_j/\sigma_j$ for $j=1,2$ and $x_3$ arbitrary, which has minimal norm for $x_3=0$. We obtained
$$x^+=\begin{bmatrix}b_1/\sigma_1\\b_2/\sigma_2\\0\end{bmatrix}=\begin{bmatrix}1/\sigma_1&0&0\\0&1/\sigma_2&0\\0&0&0\end{bmatrix}\begin{bmatrix}b_1\\b_2\\b_3\end{bmatrix}.$$

For a general $m\times n$ diagonal matrix $\Sigma$ with singular values $\sigma_j$, similar arguments show that its pseudoinverse $\Sigma^+$ is an $n\times m$ diagonal matrix with diagonal entries $\sigma_j^+=1/\sigma_j$ (and $0$ where $\sigma_j=0$).

For a general $m\times n$ matrix $M$ with singular value decomposition $M=U\Sigma V^*$, solving $Mx=b$ is equivalent to solving $\Sigma y=U^*b$, where $y=V^*x$. This shows that the optimal solution is $y^+=\Sigma^+U^*b$, therefore (since the unitary $V$ preserves lengths) $x^+=V\Sigma^+U^*b$. We proved:

Theorem. The pseudoinverse of a matrix $M$ with singular value decomposition $M=U\Sigma V^*$ is $M^+=V\Sigma^+U^*$.

The pseudoinverse has many properties similar to those of an inverse. The following statements are left as exercises.

1. If $M$ is invertible, then $M^{-1}=M^+$.
2. $MM^+M=M$ and $M^+MM^+=M^+$ (though $MM^+$ and $M^+M$ are not necessarily the identity).
3. $MM^+$ and $M^+M$ are orthogonal projections.
4. The operation ${}^+$ commutes with complex conjugation and with transposition.
5. $(\lambda M)^+=\frac{1}{\lambda}M^+$ for $\lambda\neq 0$. If $\lambda$ is a scalar (think $M=[\lambda]$), then $\lambda^+$ equals $0$ if $\lambda=0$ and $1/\lambda$ if $\lambda\neq 0$.
6. The pseudoinverse of a vector $x$ is $x^+=\frac{x^*}{\|x\|^2}$ if $x\neq 0$, and $x^+=0^T$ if $x=0$.
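This is exactly how `numpy.linalg.pinv` computes the pseudoinverse. The sketch below builds $M^+=V\Sigma^+U^*$ by hand for an arbitrary rank-deficient real test matrix, compares it with `pinv`, and checks property 2 above as well as the minimal-norm least-squares characterization of $x^+$.

```python
import numpy as np

rng = np.random.default_rng(4)
M = rng.standard_normal((5, 3))
M[:, 2] = M[:, 0] + M[:, 1]              # force rank(M) = 2

U, s, Vh = np.linalg.svd(M)
s_plus = np.zeros_like(s)
s_plus[s > 1e-10] = 1.0 / s[s > 1e-10]   # sigma_j^+ = 1/sigma_j, zeros stay 0
Sigma_plus = np.zeros((3, 5))            # n x m
Sigma_plus[:3, :3] = np.diag(s_plus)
M_plus = Vh.T @ Sigma_plus @ U.T         # M+ = V Sigma+ U*  (real case)

print(np.allclose(M_plus, np.linalg.pinv(M)))        # matches NumPy's pinv

# Property 2: M M+ M = M and M+ M M+ = M+.
print(np.allclose(M @ M_plus @ M, M),
      np.allclose(M_plus @ M @ M_plus, M_plus))

# x+ = M+ b is the minimal-norm least-squares solution of Mx = b.
b = rng.standard_normal(5)
x_lstsq = np.linalg.lstsq(M, b, rcond=None)[0]
print(np.allclose(M_plus @ b, x_lstsq))              # True
```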