
THE SINGULAR VALUE DECOMPOSITION

- The SVD: existence, properties
- Pseudo-inverses and the SVD
- Use of the SVD for least-squares problems
- Applications of the SVD

The Singular Value Decomposition (SVD)

Theorem. For any matrix $A \in \mathbb{R}^{m \times n}$ there exist unitary matrices $U \in \mathbb{R}^{m \times m}$ and $V \in \mathbb{R}^{n \times n}$ such that

$$ A = U \Sigma V^T $$

where $\Sigma$ is a diagonal matrix with entries $\sigma_{ii} \geq 0$ and

$$ \sigma_{11} \geq \sigma_{22} \geq \cdots \geq \sigma_{pp} \geq 0, \qquad p = \min(m, n). $$

The $\sigma_{ii}$'s are the singular values. Notation change: $\sigma_{ii} \to \sigma_i$.

Proof. Let $\sigma_1 = \|A\|_2 = \max_{x,\ \|x\|_2 = 1} \|Ax\|_2$. There exists a pair of unit vectors $v_1, u_1$ such that $A v_1 = \sigma_1 u_1$.

Complete $v_1$ into an orthonormal basis of $\mathbb{R}^n$: $V \equiv [v_1, V_2]$, an $n \times n$ unitary matrix. Complete $u_1$ into an orthonormal basis of $\mathbb{R}^m$: $U \equiv [u_1, U_2]$, an $m \times m$ unitary matrix. (One can take $U, V$ to be single Householder reflectors.) Then it is easy to show that

$$ A V = U \begin{pmatrix} \sigma_1 & w^T \\ 0 & B \end{pmatrix} \quad \rightarrow \quad U^T A V = \begin{pmatrix} \sigma_1 & w^T \\ 0 & B \end{pmatrix} \equiv A_1. $$

Observe that

$$ \left\| A_1 \begin{pmatrix} \sigma_1 \\ w \end{pmatrix} \right\|_2 \;\geq\; \sigma_1^2 + w^T w \;=\; \sqrt{\sigma_1^2 + w^T w}\ \left\| \begin{pmatrix} \sigma_1 \\ w \end{pmatrix} \right\|_2, $$

so $\|A_1\|_2 \geq \sqrt{\sigma_1^2 + w^T w}$; since $\|A_1\|_2 = \|A\|_2 = \sigma_1$, this shows that $w$ must be zero [why?]. Complete the proof by an induction argument.
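As a quick numerical illustration of the theorem, here is a minimal NumPy sketch (the matrix is random and purely illustrative):

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((5, 3))            # any m x n matrix

    U, s, Vt = np.linalg.svd(A)                # full SVD: U is 5x5, Vt is 3x3
    Sigma = np.zeros((5, 3))
    Sigma[:3, :3] = np.diag(s)                 # singular values on the diagonal

    assert np.allclose(U @ Sigma @ Vt, A)      # A = U Sigma V^T
    assert np.all(s[:-1] >= s[1:]) and np.all(s >= 0)   # decreasing, nonnegative
    assert np.isclose(s[0], np.linalg.norm(A, 2))       # sigma_1 = ||A||_2, as in the proof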

The thin SVD

Consider Case 1 ($m \geq n$). The SVD $A = U \Sigma V^T$ can be rewritten as

$$ A = [U_1\ U_2] \begin{pmatrix} \Sigma_1 \\ 0 \end{pmatrix} V^T, $$

which gives

$$ A = U_1 \Sigma_1 V^T, $$

where $U_1$ is $m \times n$ (the same shape as $A$), and $\Sigma_1$ and $V$ are $n \times n$. This is referred to as the thin SVD. Important in practice.

How can you obtain the thin SVD from the QR factorization of $A$ and the SVD of an $n \times n$ matrix? (One construction is sketched after the properties below.)

A few properties. Assume that

$$ \sigma_1 \geq \sigma_2 \geq \cdots \geq \sigma_r > 0 \quad \text{and} \quad \sigma_{r+1} = \cdots = \sigma_p = 0. $$

Then:

- rank$(A) = r$ = number of nonzero singular values.
- Ran$(A)$ = span$\{u_1, u_2, \ldots, u_r\}$
- Null$(A^T)$ = span$\{u_{r+1}, u_{r+2}, \ldots, u_m\}$
- Ran$(A^T)$ = span$\{v_1, v_2, \ldots, v_r\}$
- Null$(A)$ = span$\{v_{r+1}, v_{r+2}, \ldots, v_n\}$

Properties of the SVD (continued)

- The matrix $A$ admits the SVD expansion $A = \sum_{i=1}^r \sigma_i u_i v_i^T$.
- $\|A\|_2 = \sigma_1$ = largest singular value.
- $\|A\|_F = \left( \sum_{i=1}^r \sigma_i^2 \right)^{1/2}$.
- When $A$ is an $n \times n$ nonsingular matrix, $\|A^{-1}\|_2 = 1/\sigma_n$.
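One construction answering the QR question above, sketched in NumPy under the assumption $m \geq n$: take the thin QR factorization $A = QR$, compute the SVD of the small $n \times n$ factor $R = U_R \Sigma_1 V^T$, and set $U_1 = Q U_R$:

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.standard_normal((8, 4))

    Q, R = np.linalg.qr(A)                  # thin QR: Q is 8x4, R is 4x4
    Ur, s, Vt = np.linalg.svd(R)            # SVD of the small n x n matrix R
    U1 = Q @ Ur                             # orthonormal columns: (Q Ur)^T (Q Ur) = I

    assert np.allclose(U1.T @ U1, np.eye(4))
    assert np.allclose(U1 @ np.diag(s) @ Vt, A)   # thin SVD: A = U1 Sigma1 V^T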

Theorem. Let $k < r$ and

$$ A_k = \sum_{i=1}^k \sigma_i u_i v_i^T. $$

Then

$$ \min_{\mathrm{rank}(B) = k} \|A - B\|_2 = \|A - A_k\|_2 = \sigma_{k+1}. $$

Proof. First: $\|A - B\|_2 \geq \sigma_{k+1}$ for any rank-$k$ matrix $B$. Consider $X = \mathrm{span}\{v_1, v_2, \ldots, v_{k+1}\}$. Note: $\dim(\mathrm{Null}(B)) = n - k$, so $\mathrm{Null}(B) \cap X \neq \{0\}$ [Why?]. Let $x_0 \in \mathrm{Null}(B) \cap X$, $x_0 \neq 0$. Write $x_0 = V y$. Then

$$ (A - B) x_0 = A x_0 = U \Sigma V^T V y = U \Sigma y. $$

But $\|U \Sigma y\|_2 \geq \sigma_{k+1} \|x_0\|_2$ (show this). Hence $\|A - B\|_2 \geq \sigma_{k+1}$.

Second: take $B = A_k$. It achieves the min.

Right and left singular vectors:

$$ A v_i = \sigma_i u_i, \qquad A^T u_j = \sigma_j v_j. $$

Consequence:

$$ A^T A v_i = \sigma_i^2 v_i \qquad \text{and} \qquad A A^T u_i = \sigma_i^2 u_i. $$

- Right singular vectors ($v_i$'s) are eigenvectors of $A^T A$.
- Left singular vectors ($u_i$'s) are eigenvectors of $A A^T$.
- It is possible to get the SVD from the eigenvectors of $A A^T$ and $A^T A$, but there are difficulties due to the non-uniqueness of the SVD.

Define the $r \times r$ matrix $\Sigma_1 = \mathrm{diag}(\sigma_1, \ldots, \sigma_r)$. Let $A \in \mathbb{R}^{m \times n}$ and consider $A^T A$ ($\in \mathbb{R}^{n \times n}$):

$$ A^T A = V \Sigma^T \Sigma V^T = V \underbrace{\begin{pmatrix} \Sigma_1^2 & 0 \\ 0 & 0 \end{pmatrix}}_{n \times n} V^T. $$

This gives the spectral decomposition of $A^T A$.
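Both results are easy to check numerically; a minimal sketch on a random illustrative matrix:

    import numpy as np

    rng = np.random.default_rng(2)
    A = rng.standard_normal((6, 5))
    U, s, Vt = np.linalg.svd(A, full_matrices=False)

    k = 2
    Ak = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]     # best rank-k approximation A_k

    # || A - A_k ||_2 = sigma_{k+1}  (s[k] with 0-based indexing)
    assert np.isclose(np.linalg.norm(A - Ak, 2), s[k])

    # eigenvalues of A^T A are the sigma_i^2 (eigvalsh returns ascending order)
    w = np.linalg.eigvalsh(A.T @ A)
    assert np.allclose(np.sqrt(w[::-1]), s)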

Similarly, $U$ gives the eigenvectors of $A A^T$:

$$ A A^T = U \underbrace{\begin{pmatrix} \Sigma_1^2 & 0 \\ 0 & 0 \end{pmatrix}}_{m \times m} U^T. $$

Important: $A^T A = V D_1 V^T$ and $A A^T = U D_2 U^T$ give the SVD factors $U, V$ up to signs!

Pseudo-inverse of an arbitrary matrix

Let $A = U \Sigma V^T$, which we rewrite as

$$ A = [U_1\ U_2] \begin{pmatrix} \Sigma_1 & 0 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} V_1^T \\ V_2^T \end{pmatrix} = U_1 \Sigma_1 V_1^T. $$

Then the pseudo-inverse of $A$ is

$$ A^\dagger = V_1 \Sigma_1^{-1} U_1^T = \sum_{j=1}^r \frac{1}{\sigma_j} v_j u_j^T. $$

The pseudo-inverse of $A$ is the mapping from a vector $b$ to the solution of $\min_x \|Ax - b\|_2$ that has minimal norm (to be shown). In the full-rank overdetermined case, the normal equations yield

$$ x = \underbrace{(A^T A)^{-1} A^T}_{A^\dagger} b. $$

Least-squares problem via the SVD

Problem: $\min \|b - Ax\|_2$ in the general case. Consider the SVD of $A$:

$$ A = [U_1\ U_2] \begin{pmatrix} \Sigma_1 & 0 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} V_1^T \\ V_2^T \end{pmatrix} = \sum_{i=1}^r \sigma_i u_i v_i^T. $$

Left-multiplying by $U^T$ gives

$$ \|Ax - b\|_2^2 = \left\| \begin{pmatrix} \Sigma_1 y_1 \\ 0 \end{pmatrix} - \begin{pmatrix} U_1^T b \\ U_2^T b \end{pmatrix} \right\|_2^2 \quad \text{with} \quad y = \begin{pmatrix} y_1 \\ y_2 \end{pmatrix} = V^T x. $$

What are all the least-squares solutions to the system? Among these, which one has minimum norm?

Answer: From the above, we must have $y_1 = \Sigma_1^{-1} U_1^T b$, while $y_2$ is anything (free). Recall that $x = V y$ and write

$$ x = [V_1, V_2] \begin{pmatrix} y_1 \\ y_2 \end{pmatrix} = V_1 y_1 + V_2 y_2 = V_1 \Sigma_1^{-1} U_1^T b + V_2 y_2 = A^\dagger b + V_2 y_2. $$

Note: $A^\dagger b \in \mathrm{Ran}(A^T)$ and $V_2 y_2 \in \mathrm{Null}(A)$. Therefore the least-squares solutions are of the form $A^\dagger b + w$ where $w \in \mathrm{Null}(A)$. The norm is smallest when $y_2 = 0$.
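A sketch of these formulas in NumPy: build $A^\dagger$ from the thin SVD and check it against numpy.linalg.pinv and against the minimum-norm least-squares solution returned by numpy.linalg.lstsq (the rank cutoff 1e-10 is an illustrative choice):

    import numpy as np

    rng = np.random.default_rng(3)
    A = rng.standard_normal((6, 4)) @ rng.standard_normal((4, 5))   # 6x5, rank 4
    b = rng.standard_normal(6)

    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    r = int(np.sum(s > 1e-10 * s[0]))                # numerical rank (here r = 4)
    Apinv = Vt[:r].T @ np.diag(1.0 / s[:r]) @ U[:, :r].T   # A+ = V1 Sigma1^{-1} U1^T

    assert np.allclose(Apinv, np.linalg.pinv(A, rcond=1e-10))
    x_ls = Apinv @ b                                 # minimum-norm LS solution A+ b
    assert np.allclose(x_ls, np.linalg.lstsq(A, b, rcond=None)[0])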

The minimum-norm solution to $\min_x \|Ax - b\|_2$ satisfies $y_1 = \Sigma_1^{-1} U_1^T b$, $y_2 = 0$. It is:

$$ x_{LS} = V_1 \Sigma_1^{-1} U_1^T b = A^\dagger b. $$

- If $A \in \mathbb{R}^{m \times n}$, what are the dimensions of $A^\dagger$? Of $A^\dagger A$? Of $A A^\dagger$?
- Show that $A^\dagger A$ is an orthogonal projector. What are its range and null space?
- Same questions for $A A^\dagger$.

Moore-Penrose inverse

The pseudo-inverse of $A$ is given by

$$ A^\dagger = V \begin{pmatrix} \Sigma_1^{-1} & 0 \\ 0 & 0 \end{pmatrix} U^T = \sum_{i=1}^r \frac{v_i u_i^T}{\sigma_i}. $$

Moore-Penrose conditions: the pseudo-inverse of a matrix is uniquely determined by the following four conditions:

(1) $A X A = A$
(2) $X A X = X$
(3) $(A X)^H = A X$
(4) $(X A)^H = X A$

In the full-rank overdetermined case, $A^\dagger = (A^T A)^{-1} A^T$.

Least-squares problems and the SVD

The SVD can give much information about solving overdetermined and underdetermined linear systems. Let $A$ be an $m \times n$ matrix and $A = U \Sigma V^T$ its SVD, with $r = \mathrm{rank}(A)$, $V = [v_1, \ldots, v_n]$, $U = [u_1, \ldots, u_m]$. Then

$$ x_{LS} = \sum_{i=1}^r \frac{u_i^T b}{\sigma_i} v_i $$

minimizes $\|b - Ax\|_2$ and has the smallest 2-norm among all possible minimizers. In addition,

$$ \rho_{LS} \equiv \|b - A x_{LS}\|_2 = \|z\|_2 \quad \text{with} \quad z = [u_{r+1}, \ldots, u_m]^T b. $$

Least-squares problems and pseudo-inverses

A restatement of the first part of the previous result: consider the general linear least-squares problem

$$ \min_{x \in S} \|x\|_2, \qquad S = \{ x \in \mathbb{R}^n \mid \|b - Ax\|_2 \ \text{min} \}. $$

This problem always has a unique solution, given by $x = A^\dagger b$.
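For a real matrix the four Moore-Penrose conditions are easy to verify numerically ($\cdot^H$ reduces to the transpose in the real case); a minimal sketch:

    import numpy as np

    rng = np.random.default_rng(4)
    A = rng.standard_normal((5, 3))
    X = np.linalg.pinv(A)

    assert np.allclose(A @ X @ A, A)          # (1) AXA = A
    assert np.allclose(X @ A @ X, X)          # (2) XAX = X
    assert np.allclose((A @ X).T, A @ X)      # (3) (AX)^H = AX
    assert np.allclose((X @ A).T, X @ A)      # (4) (XA)^H = XA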

Example. Consider the matrix $A = (\ldots)$.

- Compute the singular value decomposition of $A$.
- Find the matrix $B$ of rank $(\ldots)$ which is the closest to the above matrix in the 2-norm sense.
- What is the pseudo-inverse of $A$? What is the pseudo-inverse of $B$?
- Find the vector $x$ of smallest norm which minimizes $\|b - Ax\|_2$ with $b = (\ldots, \ldots)^T$.
- Find the vector $x$ of smallest norm which minimizes $\|b - Bx\|_2$ with $b = (\ldots, \ldots)^T$.

Ill-conditioned systems and the SVD

Let $A$ be $m \times m$ and $A = U \Sigma V^T$ its SVD. The solution of $Ax = b$ is

$$ x = A^{-1} b = \sum_{i=1}^m \frac{u_i^T b}{\sigma_i} v_i. $$

When $A$ is very ill-conditioned, it has many small singular values, and the division by these small $\sigma_i$'s will amplify any noise in the data. If $\tilde b = b + \epsilon$, then

$$ A^{-1} \tilde b = \sum_{i=1}^m \frac{u_i^T b}{\sigma_i} v_i + \underbrace{\sum_{i=1}^m \frac{u_i^T \epsilon}{\sigma_i} v_i}_{\text{error}}. $$

Result: the solution could be completely meaningless.

Remedy: SVD regularization

Truncate the SVD by keeping only the $\sigma_i$'s that are $\geq \tau$, where $\tau$ is a threshold. This gives the truncated SVD (TSVD) solution:

$$ x_{TSVD} = \sum_{\sigma_i \geq \tau} \frac{u_i^T b}{\sigma_i} v_i. $$

Many applications [e.g., image and signal processing, ...]. (A small numerical illustration follows at the end of this section.)

Numerical rank and the SVD

Assume the original matrix $A$ is exactly of rank $k$. The computed SVD of $A$ will then be the SVD of a nearby matrix $A + E$. One can show that $|\hat\sigma_i - \sigma_i| \leq \alpha\, \sigma_1 \mathbf{u}$ (with $\mathbf{u}$ the unit roundoff). Result: zero singular values will yield small computed singular values, while the nonzero ones yield larger computed singular values.

Reverse problem: numerical rank. The $\epsilon$-rank of $A$ is

$$ r_\epsilon = \min\{ \mathrm{rank}(B) : B \in \mathbb{R}^{m \times n},\ \|A - B\|_2 \leq \epsilon \}. $$

- Show that $r_\epsilon$ equals the number of singular values that are $> \epsilon$.
- Show that $r_\epsilon$ equals the number of columns of $A$ that remain linearly independent for any perturbation of $A$ with norm $\leq \epsilon$.
- Practical problem: how to set $\epsilon$?
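A small illustration of TSVD regularization, referenced above (a sketch; the sizes, the singular-value decay, the noise level, and the threshold $\tau$ are all illustrative choices):

    import numpy as np

    rng = np.random.default_rng(5)
    # Build an ill-conditioned 8x8 matrix with rapidly decaying singular values.
    U, _ = np.linalg.qr(rng.standard_normal((8, 8)))
    V, _ = np.linalg.qr(rng.standard_normal((8, 8)))
    s = 10.0 ** -np.arange(8)                 # sigma_i = 1, 1e-1, ..., 1e-7
    A = U @ np.diag(s) @ V.T

    x_true = rng.standard_normal(8)
    b = A @ x_true + 1e-3 * rng.standard_normal(8)    # noisy right-hand side

    tau = 1e-2                                # threshold on the sigma_i's
    keep = s >= tau
    x_tsvd = V[:, keep] @ ((U[:, keep].T @ b) / s[keep])   # truncated-SVD solution

    # The naive solve amplifies the noise by up to 1/sigma_min = 1e7;
    # the TSVD solution is typically far closer to x_true.
    print(np.linalg.norm(np.linalg.solve(A, b) - x_true))
    print(np.linalg.norm(x_tsvd - x_true))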

Pseudo-inverses of full-rank matrices

Case 1: $m < n$. Then $A^\dagger = A^T (A A^T)^{-1}$. The thin SVD is $A = U \Sigma_1 V_1^T$; here $U$ and $\Sigma_1$ are $m \times m$, and:

$$ A^T (A A^T)^{-1} = V_1 \Sigma_1 U^T \left[ U \Sigma_1^2 U^T \right]^{-1} = V_1 \Sigma_1 U^T U \Sigma_1^{-2} U^T = V_1 \Sigma_1^{-1} U^T = A^\dagger. $$

What is the pseudo-inverse of $(\ldots)$?

Case 2: $m > n$. Then $A^\dagger = (A^T A)^{-1} A^T$. The thin SVD is $A = U_1 \Sigma_1 V^T$, where $\Sigma_1$ and $V$ are $n \times n$. Then:

$$ (A^T A)^{-1} A^T = (V \Sigma_1^2 V^T)^{-1} V \Sigma_1 U_1^T = V \Sigma_1^{-2} V^T V \Sigma_1 U_1^T = V \Sigma_1^{-1} U_1^T = A^\dagger. $$

What is the pseudo-inverse of $(\ldots)$?

Mnemonic: the pseudo-inverse of $A$ is $A^T$ completed by the inverse of the smaller of $(A^T A)$ or $(A A^T)$, placed where it fits (i.e., on the left or on the right).
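Both full-rank formulas can be checked directly; a minimal sketch with random full-rank matrices:

    import numpy as np

    rng = np.random.default_rng(6)

    A_tall = rng.standard_normal((6, 3))      # m > n, full column rank
    assert np.allclose(np.linalg.pinv(A_tall),
                       np.linalg.inv(A_tall.T @ A_tall) @ A_tall.T)   # (A^T A)^{-1} A^T

    A_wide = rng.standard_normal((3, 6))      # m < n, full row rank
    assert np.allclose(np.linalg.pinv(A_wide),
                       A_wide.T @ np.linalg.inv(A_wide @ A_wide.T))   # A^T (A A^T)^{-1}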