Moore-Penrose Conditions & SVD


Lecture 8, ECE 275A: Moore-Penrose Conditions & SVD
K. Kreutz-Delgado, UC San Diego, Fall 2008 (V1.0)

Four Moore-Penrose Pseudoinverse Conditions

MOORE-PENROSE THEOREM. Consider a linear operator A : X → Y. A linear operator M : Y → X is the unique pseudoinverse of A, M = A^+, if and only if it satisfies the Four Moore-Penrose Conditions:

I. (AM)^* = AM
II. (MA)^* = MA
III. AMA = A
IV. MAM = M

More simply, we usually say that A^+ is the unique p-inv of A iff

I. (AA^+)^* = AA^+
II. (A^+A)^* = A^+A
III. AA^+A = A
IV. A^+AA^+ = A^+

The theorem statement provides greater clarity because there we distinguish between a candidate p-inv M and the true p-inv A^+. We can claim that indeed A^+ = M if and only if the candidate p-inv M satisfies all four M-P conditions.
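As a quick numerical illustration (a sketch added here, not part of the original slides), the four M-P conditions can be checked for numpy's np.linalg.pinv on a random rank-deficient complex matrix; in the Cartesian setting the adjoint is simply the conjugate transpose:

    import numpy as np

    rng = np.random.default_rng(0)

    # A rank-deficient 5x3 complex matrix (rank 2), so no ordinary inverse exists.
    A = (rng.standard_normal((5, 2)) + 1j * rng.standard_normal((5, 2))) \
        @ (rng.standard_normal((2, 3)) + 1j * rng.standard_normal((2, 3)))

    M = np.linalg.pinv(A)        # candidate pseudoinverse
    adj = lambda X: X.conj().T   # adjoint = Hermitian transpose (Cartesian spaces)

    print(np.allclose(adj(A @ M), A @ M))  # I.   (AM)* = AM
    print(np.allclose(adj(M @ A), M @ A))  # II.  (MA)* = MA
    print(np.allclose(A @ M @ A, A))       # III. AMA = A
    print(np.allclose(M @ A @ M, M))       # IV.  MAM = M

All four checks print True, consistent with the theorem.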

Proof of the M-P Theorem

First we reprise some basic facts that are consequences of the definitional properties of the pseudoinverse:

FACT 1: N(A^+) = N(A^*)
FACT 2: R(A^+) = R(A^*)
FACT 3: P_R(A) = AA^+
FACT 4: P_R(A^*) = A^+A

We now proceed to prove two auxiliary theorems (Theorems A and B).
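Facts 3 and 4 can likewise be verified numerically; here is a minimal numpy sketch (added for illustration), using the fact that in a Cartesian space an orthogonal projector is exactly a Hermitian, idempotent matrix:

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.standard_normal((6, 2)) @ rng.standard_normal((2, 4))  # rank 2
    Ap = np.linalg.pinv(A)

    P_RA  = A @ Ap    # Fact 3: orthogonal projector onto R(A)
    P_RAs = Ap @ A    # Fact 4: orthogonal projector onto R(A^*)

    # Hermitian and idempotent:
    print(np.allclose(P_RA, P_RA.conj().T), np.allclose(P_RA @ P_RA, P_RA))
    # P_R(A) acts as the identity on R(A):
    print(np.allclose(P_RA @ A, A))
    # P_R(A^*) acts as the identity on R(A^*) = R(A^H):
    print(np.allclose(P_RAs @ A.conj().T, A.conj().T))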

Proof of the M-P Theorem Cont.

THEOREM A. Let C : X → Y and B : Y → Z be linear mappings. It is readily shown that the composite mapping BC : X → Z is a linear mapping, where BC is defined by (BC)x ≜ B(Cx) for all x ∈ X. Then, provided N(B) ∩ R(C) = {0},

BC = 0 iff C = 0.

Proof:

BC = 0
⟺ (BC)x = 0 for all x (definition of the zero operator)
⟺ B(Cx) = 0 for all x (definition of composition)
⟺ Cx = 0 for all x (because Cx ∈ R(C) ∩ N(B) = {0} for all x)
⟺ C = 0 (definition of the zero operator)

QED
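To see that the hypothesis N(B) ∩ R(C) = {0} cannot be dropped, consider the following small counterexample (an illustration added here, not from the slides):

    import numpy as np

    B = np.array([[1.0, 0.0],
                  [0.0, 0.0]])   # N(B) = span{e2}
    C = np.array([[0.0, 0.0],
                  [0.0, 1.0]])   # R(C) = span{e2}, so N(B) and R(C) intersect

    print(np.allclose(B @ C, 0))  # True:  BC = 0 ...
    print(np.allclose(C, 0))      # False: ... yet C is not 0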

Proof of the M-P Theorem Cont.

Theorem B covers the uniqueness part of the M-P Theorem.

THEOREM B. The pseudoinverse of A is unique.

Proof. Suppose that A^+ and M are both pseudoinverses of A. Then Fact 3 gives

P_R(A) = AA^+ = AM, or A(A^+ − M) = 0.

From Fact 2, R(A^*) = R(A^+) = R(M), and as a consequence

R(A^+ − M) ⊆ R(A^*).

But R(A^*) ⊥ N(A), and therefore

R(A^+ − M) ⊆ R(A^*) = N(A)^⊥,

so that

N(A) ∩ R(A^+ − M) = {0}.

Therefore, from Theorem A, A^+ − M = 0, i.e., A^+ = M. QED

Proof of the M-P Theorem Cont.

Necessity ("only if" part) of the M-P Conditions. Assume that M = A^+.

Necessity of M-P Conditions I & II: easy consequences of Facts 3 and 4.

Necessity of M-P Condition III: Fact 4 and the idempotency of a projection operator imply (A^+A)(A^+A) = A^+A, or

A^+ (AA^+A − A) = A^+ C = 0, where C ≜ AA^+A − A = A(A^+A − I).

We have N(A^+) = N(A^*) (Fact 1) and R(C) ⊆ R(A) = N(A^*)^⊥. Therefore

N(A^+) ∩ R(C) = N(A^*) ∩ R(C) = {0},

so that by Theorem A, C = 0, i.e., AA^+A = A.

Necessity of M-P Condition IV: Fact 3 and the idempotency of a projection operator imply (AA^+)(AA^+) = AA^+, or

A (A^+AA^+ − A^+) = A C = 0, where C ≜ A^+AA^+ − A^+ = A^+(AA^+ − I).

With R(A^+) = R(A^*) (Fact 2) we have R(C) ⊆ R(A^+) = R(A^*) = N(A)^⊥. Therefore N(A) ∩ R(C) = {0}, so that by Theorem A, C = 0, i.e., A^+AA^+ = A^+.

Proof of the M-P Theorem Cont.

Sufficiency ("if" part) of the M-P Conditions. Here we assume that M satisfies all four of the M-P conditions and then show as a consequence that M = A^+. We do this using the following steps:

(1) First prove that P_R(A) = AM (proving that AM = AA^+ via uniqueness of projection operators).

(2) Prove that P_R(A^*) = MA (proving that MA = A^+A).

(3) Finally, prove that, as a consequence of (1) and (2), M = A^+.

Proof of the M-P Theorem Cont.

Sufficiency Cont.

Step (1): From M-P conditions I & III, (AM)^* = AM and AM = (AMA)M = (AM)(AM), showing that AM is an orthogonal projection operator. But onto what? Obviously onto a subspace of R(A), as R(AM) ⊆ R(A). However,

R(A) = A(X) = AMA(X) = AM(A(X)) = AM(R(A)) ⊆ AM(Y) = R(AM) ⊆ R(A)

yields the stronger statement that R(AM) = R(A). Thus AM is the orthogonal projector onto the range of A,

P_R(A) = AM = AA^+.

Step (2): From M-P conditions II & III, (MA)^* = MA and MA = M(AMA) = (MA)(MA), showing that MA is an orthogonal projection operator. Note that M-P conditions III and II imply A^* = (AMA)^* = A^* M^* A^* and MA = (MA)^* = A^* M^*. We have

R(A^*) = A^*(Y) = (A^* M^* A^*)(Y) = (A^* M^*)(R(A^*)) ⊆ (A^* M^*)(X) = R(A^* M^*) = R(MA) ⊆ R(A^*),

showing that R(MA) = R(A^*). Thus

P_R(A^*) = MA = A^+A.

Proof of the M-P Theorem Cont.

Sufficiency Cont.

Step (3): Note that we have yet to use M-P condition IV, MAM = M. From M-P condition IV and the result of Step (2) we have

MAM = P_R(A^*) M = M.

Obviously, then, R(M) ⊆ R(A^*), as can be rigorously shown via the subspace chain

R(M) = M(Y) = P_R(A^*)M(Y) = P_R(A^*)(R(M)) ⊆ P_R(A^*)(X) = R(A^*).

Recalling that R(A^+) = R(A^*) (Fact 2), it therefore must be the case that

R(M − A^+) ⊆ R(A^*) = N(A)^⊥.

Using the result of Step (1), P_R(A) = AM = AA^+, we have A(M − A^+) = 0 with N(A) ∩ R(M − A^+) = {0}. Therefore Theorem A yields M − A^+ = 0, i.e., M = A^+. QED

Proof of the M-P Theorem Cont.

Note the similarity of the latter developments in Step (3) to the proof of Theorem B. In fact, some thought should convince you that the latter part of Step (3) provides the justification for the claim that the pseudoinverse is unique, so that Theorem B can be viewed as redundant to the proof of the M-P Theorem. Theorem B was stated in order to introduce the student to the use of Theorem A (which played a key role in the proof of the M-P Theorem) and to present the uniqueness of the pseudoinverse as a key result in its own right.

Singular Value Decomposition (SVD)

Henceforth, let us consider only Cartesian Hilbert spaces (i.e., spaces with identity metric matrices) and consider all finite-dimensional operators to be represented as complex m × n matrices,

A ∈ C^{m×n} : X = C^n → Y = C^m.

Note that in general A is non-square and therefore does not have a spectral representation (because eigenvalues and eigenvectors are not defined). Even if A is square, it will in general have complex-valued eigenvalues and non-orthogonal eigenvectors. Even worse, a general n × n matrix can be defective, lacking a full set of n eigenvectors, in which case A is not diagonalizable. In the latter case, one must use generalized eigenvectors to understand the spectral properties of the matrix (which is equivalent to placing the matrix in Jordan Canonical Form).

It is well known that if a square n × n complex matrix is self-adjoint (Hermitian), A = A^H, then its eigenvalues are all real and it has a full complement of n eigenvectors that can all be chosen to be orthonormal. In this case, for eigenpairs (λ_i, x_i), i = 1, ..., n, A has a simple spectral representation given by an orthogonal transformation,

A = λ_1 x_1 x_1^H + ... + λ_n x_n x_n^H = X Λ X^H,

with Λ = diag(λ_1, ..., λ_n) and X unitary, X^H X = X X^H = I, where the columns of X are the orthonormal eigenvectors x_i.

If, in addition, a Hermitian matrix A is positive-semidefinite, denoted A ≥ 0, then the eigenvalues are all non-negative, and all strictly positive if the matrix A is invertible (positive-definite, A > 0).
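A small numpy illustration of the Hermitian case (a sketch added here, not from the slides): np.linalg.eigh returns the real eigenvalues and an orthonormal eigenvector matrix, reproducing the spectral representation A = X Λ X^H:

    import numpy as np

    rng = np.random.default_rng(2)
    B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
    A = B + B.conj().T                # Hermitian by construction: A = A^H

    lam, X = np.linalg.eigh(A)        # lam is real; columns of X are orthonormal

    print(np.allclose(X.conj().T @ X, np.eye(4)))         # X is unitary
    print(np.allclose(X @ np.diag(lam) @ X.conj().T, A))  # A = X Lambda X^H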

Singular Value Decomposition (SVD) Cont.

Given an arbitrary (possibly non-square) complex matrix operator A ∈ C^{m×n}, we can regularize its structural properties by "squaring" it to produce a Hermitian, positive-semidefinite matrix, and thereby exploit the very nice properties of Hermitian, positive-semidefinite matrices mentioned above. Because matrix multiplication is noncommutative, there are two ways to square A to form a Hermitian, positive-semidefinite matrix, viz.

AA^H and A^H A.

It is an easy exercise to prove that both of these forms are Hermitian and positive-semidefinite, recalling that a matrix M is defined to be positive-semidefinite, M ≥ 0, if and only if the associated quadratic form ⟨x, Mx⟩ = x^H M x is real and non-negative,

⟨x, Mx⟩ = x^H M x ≥ 0 for all x.

Note that a sufficient condition for the quadratic form to be real is that M be Hermitian, M = M^H. For the future, recall that a positive-semidefinite matrix M is positive-definite, M > 0, if in addition to the non-negativity of the associated quadratic form we also have

⟨x, Mx⟩ = x^H M x = 0 if and only if x = 0.
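Both "squares" are easy to check numerically; a minimal sketch (added for illustration, spot-checking the quadratic form at a random x rather than proving it for all x):

    import numpy as np

    rng = np.random.default_rng(3)
    A = rng.standard_normal((5, 3)) + 1j * rng.standard_normal((5, 3))

    for M in (A @ A.conj().T, A.conj().T @ A):          # the two "squares" of A
        print(np.allclose(M, M.conj().T))               # Hermitian
        print(np.all(np.linalg.eigvalsh(M) >= -1e-12))  # PSD: eigenvalues >= 0

        # the quadratic form x^H M x is real and non-negative
        x = rng.standard_normal(M.shape[0]) + 1j * rng.standard_normal(M.shape[0])
        q = x.conj() @ M @ x
        print(abs(q.imag) < 1e-10 and q.real >= 0)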

Singular Value Decomposition (SVD) Cont.

In Lecture 9 we will show that the eigenstructures of the well-behaved Hermitian, positive-semidefinite squares A^H A and AA^H are captured in the Singular Value Decomposition (SVD) introduced in Example 5 of Lecture 7. As noted in that example, knowledge of the SVD enables us to compute the pseudoinverse of A in the rank-deficient case. The SVD will also allow us to compute a variety of important quantities, including the rank of A, orthonormal bases for all four fundamental subspaces of A, orthogonal projection operators onto all four fundamental subspaces of the matrix operator A, the spectral norm of A, the Frobenius norm of A, and the condition number of A. The SVD will also provide a simple, geometrically intuitive understanding of the nature of A as an operator, based on the action of A in mapping hyperspheres in R(A^H) to hyperellipsoids in R(A), in addition to the fact that A maps N(A) to 0.
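Anticipating the Lecture 9 development, the following numpy sketch shows how the SVD delivers these quantities for a rank-deficient matrix; the rank tolerance used here is a common numerical heuristic, not something prescribed by the slides:

    import numpy as np

    rng = np.random.default_rng(4)
    A = rng.standard_normal((6, 2)) @ rng.standard_normal((2, 4))  # 6x4, rank 2

    U, s, Vh = np.linalg.svd(A)          # full SVD: A = U diag(s) V^H
    tol = max(A.shape) * np.finfo(float).eps * s[0]
    r = int(np.sum(s > tol))             # numerical rank

    # Orthonormal bases for the four fundamental subspaces:
    range_A  = U[:, :r]                  # R(A)
    null_Ah  = U[:, r:]                  # N(A^H), the orthogonal complement of R(A)
    range_Ah = Vh[:r, :].conj().T        # R(A^H)
    null_A   = Vh[r:, :].conj().T        # N(A)

    # Orthogonal projectors onto R(A) and R(A^H):
    P_RA  = range_A @ range_A.conj().T
    P_RAh = range_Ah @ range_Ah.conj().T

    # Pseudoinverse via the SVD (valid in the rank-deficient case):
    A_pinv = range_Ah @ np.diag(1.0 / s[:r]) @ range_A.conj().T
    print(np.allclose(A_pinv, np.linalg.pinv(A)))

    # Norms and condition number (over the nonzero singular values):
    spectral_norm = s[0]
    frobenius     = np.sqrt(np.sum(s[:r] ** 2))
    cond          = s[0] / s[r - 1]
    print(r, spectral_norm, frobenius, cond)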