
3.1 Eigenvalues and Eigenvectors, Spectral Representation

3.1.1 Eigenvalues and Eigenvectors

A vector $\varphi$ is an eigenvector of a matrix $K$ if $K\varphi$ is parallel to $\varphi$ and $\varphi \neq 0$, i.e.,
$$K\varphi = k\varphi,$$
where $k$ is the eigenvalue. If $\psi$ is an eigenvector of $K^t$, then its components satisfy
$$\sum_i \psi_i K_{ij} = k\,\psi_j \quad \text{or} \quad \psi K = k\,\psi,$$
and $\psi$ is called a left eigenvector of $K$; $\varphi$ is called a right eigenvector of $K$. The equation for $\varphi$ can be written as
$$(K - k\,\mathbb{1})\,\varphi = 0.$$
This has a non-zero solution $\varphi$ if and only if
$$\det(K - k\,\mathbb{1}) = 0.$$
For an $n \times n$ matrix, the determinant is a polynomial of degree $n$ in $k$ with at most $n$ distinct roots. For every root there is at least one eigenvector. For a repeated root, there may be as many

linearly independent eigenvectors as the multiplicity of the root, but in general there may be no more than one. For the further discussion, assume matrices such that $\det(K - k\,\mathbb{1}) = 0$ has $n$ distinct simple roots $k_i$ ($i = 1, \ldots, n$) with eigenvectors $\varphi_i$. The $\varphi_i$ are then linearly independent and span the space on which $K$ acts, so they can serve as basis vectors. Further, $\det(K - k\,\mathbb{1}) = \det(K^t - k\,\mathbb{1})$, so for each $k_i$ there is a left eigenvector $\psi_i$ and a right eigenvector $\varphi_i$ such that
$$K\varphi_i = k_i\,\varphi_i \quad \text{and} \quad \psi_i K = k_i\,\psi_i.$$
For $k_i \neq k_j$, $\psi_i \perp \varphi_j$ and $\psi_j \perp \varphi_i$:
$$k_j\,(\psi_i, \varphi_j) = (\psi_i, K\varphi_j) = (K^t\psi_i, \varphi_j) = k_i\,(\psi_i, \varphi_j).$$
Since $\psi_i$ is perpendicular to all vectors $\varphi_j$ with $j \neq i$, $\psi_i$ cannot also be perpendicular to $\varphi_i$; otherwise it would be zero. Therefore $(\psi_i, \varphi_i) \neq 0$, and the $\psi_i$ can be normalized such that $(\psi_i, \varphi_i) = 1$:
$$(\psi_i, \varphi_j) = \delta_{ij}.$$
The sets $\{\psi_i\}$ and $\{\varphi_i\}$ are called reciprocal.
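As a concrete illustration, here is a minimal numpy sketch (the example matrix and all names are my own, not from the notes): for a matrix with distinct eigenvalues, the rows of the inverse of the right-eigenvector matrix are left eigenvectors, already normalized so that the two sets are reciprocal.

```python
import numpy as np

K = np.array([[4.0, 1.0],
              [2.0, 3.0]])          # distinct eigenvalues 5 and 2

k, phi = np.linalg.eig(K)           # columns of phi: right eigenvectors, K phi_i = k_i phi_i

# Rows of phi^{-1} are left eigenvectors, normalized so that (psi_i, phi_j) = delta_ij
psi = np.linalg.inv(phi)
for i in range(len(k)):
    assert np.allclose(psi[i] @ K, k[i] * psi[i])   # psi_i K = k_i psi_i

print(psi @ phi)                    # approximately the identity: the sets are reciprocal
```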

3.1.2 Spectral Representation

Use the set $\{\varphi_i\}$ as basis. Then each vector $f$ can be written as
$$f = \sum_i f_i\,\varphi_i,$$
where the $f_i$ are the components of $f$ with respect to the basis $\{\varphi_i\}$. The components are the inner products of $f$ with the left eigenvectors:
$$(\psi_j, f) = \sum_i f_i\,(\psi_j, \varphi_i) = \sum_i f_i\,\delta_{ij} = f_j,$$
so
$$f = \sum_i \varphi_i\,(\psi_i, f).$$
One can also use the left eigenvectors as basis (the reciprocal basis):
$$f = \sum_i (f, \varphi_i)\,\psi_i.$$
Notation: $(ab)_{ij} \,\hat{=}\, a_i b_j$. Then the identity can be represented as
$$\mathbb{1} = \sum_i \varphi_i\,\psi_i.$$
Apply $K$ to a vector $f$:

$$Kf = \sum_i K\varphi_i\,(\psi_i, f) = \sum_i k_i\,\varphi_i\,(\psi_i, f).$$
With respect to the basis $\{\varphi_i\}$, $K$ acts like a diagonal matrix. Since the above equation is valid for arbitrary $f$,
$$K = \sum_i k_i\,\varphi_i\,\psi_i.$$
This is the spectral representation of the matrix $K$. The set of eigenvalues $\{k_i\}$ is called the spectrum of $K$. For any positive $n$,
$$K^n = \sum_i k_i^n\,\varphi_i\,\psi_i,$$
and we interpret $K^0 \,\hat{=}\, \mathbb{1}$. For the inverse, $K^{-1} = \sum_i k_i^{-1}\,\varphi_i\,\psi_i$. If any eigenvalue $k_i = 0$, then $K$ does not have an inverse.
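A short numpy check of the spectral representation (the example matrix is mine, reusing the sketch above): $K$ and $K^{-1}$ are rebuilt from the eigenvalues and the outer products $\varphi_i\psi_i$.

```python
import numpy as np

K = np.array([[4.0, 1.0],
              [2.0, 3.0]])
k, phi = np.linalg.eig(K)
psi = np.linalg.inv(phi)            # reciprocal (left) eigenvectors as rows

# K = sum_i k_i phi_i psi_i and K^{-1} = sum_i (1/k_i) phi_i psi_i
K_rebuilt = sum(k[i] * np.outer(phi[:, i], psi[i]) for i in range(len(k)))
K_inverse = sum(np.outer(phi[:, i], psi[i]) / k[i] for i in range(len(k)))

assert np.allclose(K_rebuilt, K)
assert np.allclose(K_inverse, np.linalg.inv(K))
```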

3.2 Singular Value Decomposition (SVD)

Theorem: Let $A$ be an arbitrary (complex) $m \times n$ matrix, $A \in M(m \times n; \mathbb{C})$.

1. There exists a unitary matrix $U \in M(m \times m; \mathbb{C})$ ($U^+U = \mathbb{1}$) and a unitary matrix $V \in M(n \times n; \mathbb{C})$ ($V^+V = \mathbb{1}$) such that $U^+AV = \Sigma$ is an $m \times n$ "diagonal" matrix of the following form:
$$\Sigma = \begin{pmatrix} D & 0 \\ 0 & 0 \end{pmatrix}, \qquad D := \mathrm{diag}(\sigma_1, \ldots, \sigma_r), \quad \sigma_1 \geq \cdots \geq \sigma_r > 0,$$
where $\sigma_1, \ldots, \sigma_r$ are the nonvanishing singular values of $A$ and $r = \mathrm{rank}\,A$.

2. The nonvanishing singular values of $A^+$ are also given by $\sigma_1, \ldots, \sigma_r$.

The decomposition $A = U\Sigma V^+$ is called singular value decomposition.

Preliminaries:

1. We know that for every Hermitian matrix $A \in M(n \times n; \mathbb{C})$ there is a unitary matrix $U \in M(n \times n; \mathbb{C})$ with
$$U^+AU = \begin{pmatrix} \lambda_1 & & \\ & \ddots & \\ & & \lambda_n \end{pmatrix}, \qquad \lambda_i \in \mathbb{R},$$
and a Hermitian matrix $A$ is positive (positive semidefinite) if and only if the eigenvalues of $A$ are positive (non-negative).

2. An arbitrary matrix $A \in M(n \times n; \mathbb{C})$ is called normal if there exists a unitary matrix $U \in M(n \times n; \mathbb{C})$ such that
$$U^+AU = \begin{pmatrix} \lambda_1 & & \\ & \ddots & \\ & & \lambda_n \end{pmatrix}.$$
So, Hermitian matrices are normal.

What can be done if $A \in M(m \times n; \mathbb{C})$? Then $AA^+ \in M(m \times m; \mathbb{C})$ is Hermitian (proof: $(AA^+)^+ = AA^+$), and $A^+A \in M(n \times n; \mathbb{C})$ is Hermitian: $(A^+A)^+ = A^+A$. Moreover, $A^+A$ is positive semidefinite by construction, so its eigenvalues $\lambda_i \geq 0$ can be written as $\lambda_i = \sigma_i^2$. The numbers $\sigma_i$ are called the singular values of $A$.

Proof of the theorem by induction on $m$ and $n$:

1. $m = 1$, $n = 1$: nothing to prove.

2. Assume that the theorem holds for matrices $\tilde{A} \in M((m-1) \times (n-1); \mathbb{C})$, i.e., there exist a unitary matrix $\tilde{U} \in M((m-1) \times (m-1); \mathbb{C})$ and a unitary matrix $\tilde{V} \in M((n-1) \times (n-1); \mathbb{C})$ such that
$$\tilde{U}^+\tilde{A}\tilde{V} = \tilde{\Sigma} = \begin{pmatrix} \tilde{D} & 0 \\ 0 & 0 \end{pmatrix}$$
with $\tilde{D} = \mathrm{diag}(\sigma_2, \ldots, \sigma_r)$ and $\sigma_2 \geq \sigma_3 \geq \cdots \geq \sigma_r$.

3. Show under assumption (2) that the theorem holds for $A \in M(m \times n; \mathbb{C})$. Let the largest singular value of $A$ be $\sigma_1 > 0$ (for $A \neq 0$; $\sigma_1$ should be the largest). Let $x_1 \neq 0$ be an eigenvector of $A^+A$ with eigenvalue $\sigma_1^2$, i.e.,
$$A^+Ax_1 = \sigma_1^2\,x_1, \qquad \|x_1\| = 1.$$
Now find $n-1$ additional vectors $x_2, x_3, \ldots, x_n \in \mathbb{C}^n$ such that the matrix
$$X := (x_1, x_2, \ldots, x_n) \in M(n \times n; \mathbb{C})$$
is unitary, i.e., $X^+X = \mathbb{1}$ ($x_i$ is the $i$-th column of $X$). Then
$$\|Ax_1\|^2 = \langle Ax_1 \,|\, Ax_1 \rangle = x_1^+A^+Ax_1 = x_1^+\sigma_1^2\,x_1 = \sigma_1^2\,\|x_1\|^2 = \sigma_1^2 > 0.$$
Define $y_1 := \frac{1}{\sigma_1}Ax_1 \in \mathbb{C}^m$, with $\|y_1\|^2 = \frac{1}{\sigma_1^2}\|Ax_1\|^2 = 1$, so $y_1$ is well defined. Find $(m-1)$ additional (orthogonal) vectors $y_2, \ldots, y_m$ such that
$$Y := (y_1, y_2, \ldots, y_m) \in M(m \times m; \mathbb{C})$$
is unitary, i.e., $Y^+Y = \mathbb{1}$. In components this matrix equation reads
$$(Y^+Y)_{ij} = \sum_\ell (y^+)_{i\ell}\,y_{\ell j} = \sum_\ell \bar{y}_{\ell i}\,y_{\ell j} = \delta_{ij}.$$
Take the unit vectors $e_1^n \in \mathbb{C}^n$ and $e_1^m \in \mathbb{C}^m$ and take 'matrix elements' of $Y^+AX$:
$$Y^+AXe_1^n = Y^+Ax_1 = \sigma_1\,Y^+y_1 = \sigma_1\,e_1^m \in \mathbb{C}^m \tag{3.93}$$
$$(Y^+AX)^+e_1^m = X^+A^+Ye_1^m = X^+A^+y_1 = \frac{1}{\sigma_1}X^+A^+Ax_1 = \sigma_1\,X^+x_1 = \sigma_1\,e_1^n \in \mathbb{C}^n,$$
so $\langle e_1^m \,|\, Y^+AX \,|\, e_1^n \rangle = \sigma_1$,

or
$$Y^+AX = \begin{pmatrix} \sigma_1 & 0 \\ 0 & \tilde{A} \end{pmatrix}$$
with $\tilde{A} \in M((m-1) \times (n-1); \mathbb{C})$. According to the induction hypothesis, there are unitary matrices $\tilde{U}, \tilde{V}$ such that $\tilde{A}$ can be written as $\tilde{U}^+\tilde{A}\tilde{V} = \tilde{\Sigma}$ with $\tilde{D} = \mathrm{diag}(\sigma_2, \ldots, \sigma_r)$ and $\sigma_2 \geq \sigma_3 \geq \cdots \geq \sigma_r$. Now define
$$U := Y\begin{pmatrix} 1 & 0 \\ 0 & \tilde{U} \end{pmatrix} \in M(m \times m; \mathbb{C}), \qquad V := X\begin{pmatrix} 1 & 0 \\ 0 & \tilde{V} \end{pmatrix} \in M(n \times n; \mathbb{C}). \tag{3.94}$$
Then
$$U^+AV = \begin{pmatrix} 1 & 0 \\ 0 & \tilde{U}^+ \end{pmatrix}Y^+AX\begin{pmatrix} 1 & 0 \\ 0 & \tilde{V} \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & \tilde{U}^+ \end{pmatrix}\begin{pmatrix} \sigma_1 & 0 \\ 0 & \tilde{A}\tilde{V} \end{pmatrix} = \begin{pmatrix} \sigma_1 & 0 \\ 0 & \tilde{U}^+\tilde{A}\tilde{V} \end{pmatrix} = \begin{pmatrix} \sigma_1 & 0 \\ 0 & \tilde{\Sigma} \end{pmatrix} = \Sigma \tag{3.95}$$
with $D = \mathrm{diag}(\sigma_1, \sigma_2, \ldots, \sigma_r)$, $\Sigma \in M(m \times n; \mathbb{C})$, and $\sigma_1 \geq \sigma_2 \geq \sigma_3 \geq \cdots \geq \sigma_r$.

Here $\sigma_1^2 = \lambda_{\max}(A^+A)$ according to the choice at the beginning, and
$$\mathrm{rank}\,A = \mathrm{rank}(U^+AV) = \mathrm{rank}\,\Sigma = r.$$
It remains to prove that $\sigma_1 \geq \sigma_2 \geq \cdots$ and that the $\sigma_i$ are the singular values of $A$. Consider
$$\Sigma^+\Sigma = V^+A^+UU^+AV = V^+A^+AV = \mathrm{diag}(\sigma_1^2, \sigma_2^2, \ldots, \sigma_r^2, 0, \ldots, 0).$$
So $V$ is the unitary matrix diagonalizing $A^+A$; since $A^+A$ is Hermitian, this unitary matrix $V$ exists. $\sigma_1^2, \sigma_2^2, \ldots, \sigma_r^2$ are the eigenvalues of $A^+A$, hence $\sigma_1, \sigma_2, \ldots, \sigma_r$ are the nonvanishing singular values of $A$. Because $\sigma_1^2 = \lambda_{\max}(A^+A)$, indeed $\sigma_1 \geq \sigma_2 \geq \cdots$.

We have shown that there exists a decomposition $U^+AV = \Sigma$, or $A = U\Sigma V^+$, with $U^{-1} = U^+$, $V^{-1} = V^+$, where $U \in M(m \times m; \mathbb{C})$, $V \in M(n \times n; \mathbb{C})$, and $A, \Sigma \in M(m \times n; \mathbb{C})$. The columns of $U$ represent $m$ orthogonal eigenvectors of $AA^+ \in M(m \times m; \mathbb{C})$; the columns of $V$ represent $n$ orthogonal eigenvectors of $A^+A \in M(n \times n; \mathbb{C})$.
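A numpy sketch of the theorem as stated (the rectangular example matrix is mine): np.linalg.svd returns $U$, the singular values, and $V^+$; we check that $U^+AV$ is the "diagonal" $\Sigma$ and that $V$ diagonalizes $A^+A$ with eigenvalues $\sigma_i^2$.

```python
import numpy as np

A = np.array([[1.0, 0.0, 2.0],
              [0.0, 3.0, 0.0]])                    # m = 2, n = 3, rank r = 2

U, s, Vh = np.linalg.svd(A)                        # A = U Sigma V^+, s sorted descending
V = Vh.conj().T

Sigma = np.zeros(A.shape)
Sigma[:len(s), :len(s)] = np.diag(s)               # embed D = diag(sigma_1, ..., sigma_r)

assert np.allclose(U.conj().T @ A @ V, Sigma)      # U^+ A V = Sigma
# V diagonalizes A^+ A with eigenvalues sigma_i^2 (padded with zeros up to n)
eigs = np.append(s**2, np.zeros(A.shape[1] - len(s)))
assert np.allclose(V.conj().T @ A.conj().T @ A @ V, np.diag(eigs))
```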

It is easy to see that
$$U^+AA^+U = U^+AVV^+A^+U = \Sigma\Sigma^+,$$
i.e., $U$ diagonalizes $AA^+$ with eigenvalues $\sigma_1^2, \ldots, \sigma_r^2, 0, \ldots, 0$, and
$$V^+A^+AV = V^+A^+UU^+AV = \Sigma^+\Sigma,$$
i.e., $V$ diagonalizes $A^+A$ with eigenvalues $\sigma_1^2, \ldots, \sigma_r^2, 0, \ldots, 0$.

Side remark: Definition: The pseudoinverse (or Moore-Penrose inverse) of an arbitrary matrix $A \in M(m \times n; \mathbb{C})$ is a matrix $A^\# \in M(n \times m; \mathbb{C})$ (written $A^\#$ here, since $A^+$ already denotes the Hermitian conjugate) with

1. $A^\#A = P_1$, where $P_1$ is the orthogonal projector $P_1: \mathbb{C}^n \to N(A)^\perp$, $N(A) := \{x \in \mathbb{C}^n;\ Ax = 0\}$; and $AA^\# = P_2$, where $P_2$ is the projector $P_2: \mathbb{C}^m \to R(A)$, $R(A) := \{Ax \in \mathbb{C}^m;\ x \in \mathbb{C}^n\}$.

2. (a) $A^\#A = (A^\#A)^+$
   (b) $AA^\# = (AA^\#)^+$
   (c) $AA^\#A = A$
   (d) $A^\#AA^\# = A^\#$

To build the pseudoinverse $A^\#$ of a matrix $A \in M(m \times n; \mathbb{C})$: we have
$$U^+AV = \Sigma = \begin{pmatrix} D & 0 \\ 0 & 0 \end{pmatrix}, \qquad D = \mathrm{diag}(\sigma_1, \ldots, \sigma_r),$$
and $\Sigma^\# \in M(n \times m; \mathbb{C})$ is easy to obtain:
$$\Sigma^\# = \begin{pmatrix} D^{-1} & 0 \\ 0 & 0 \end{pmatrix}.$$
For $A = U\Sigma V^+$, define $A^\# := V\Sigma^\#U^+$; in principle, the properties of a pseudoinverse then have to be checked.
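A hedged numpy check (the rank-deficient example matrix is mine) that the SVD-built pseudoinverse, obtained here via np.linalg.pinv (which numpy computes from the SVD), satisfies the four Moore-Penrose conditions above.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [3.0, 6.0]])                    # rank 1: A has no ordinary inverse

Ap = np.linalg.pinv(A)                        # pseudoinverse A# in M(n x m)

H = lambda M: M.conj().T                      # Hermitian conjugate "+"
assert np.allclose(H(Ap @ A), Ap @ A)         # (a) A# A is Hermitian
assert np.allclose(H(A @ Ap), A @ Ap)         # (b) A A# is Hermitian
assert np.allclose(A @ Ap @ A, A)             # (c) A A# A = A
assert np.allclose(Ap @ A @ Ap, Ap)           # (d) A# A A# = A#
```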

For $m = n$ (square matrices), $A = U\Sigma V^+$ can be inverted to give
$$A^{-1} = VD^{-1}U^+ \qquad \text{with} \quad D^{-1} = \mathrm{diag}\!\left(\frac{1}{\sigma_1}, \ldots, \frac{1}{\sigma_r}\right).$$

Application of SVD: Solve a system of homogeneous or inhomogeneous linear equations,
$$Ax = b \quad \Rightarrow \quad x = A^{-1}b.$$
Since one can calculate $A^\#$ even if $A$ is singular or ill-determined, the SVD is a good way to solve $x = A^{-1}b$. Considerations about the solution space are easy: look at $Ax = (U\Sigma V^+)x = 0$. Solutions are in $\mathrm{Ker}\,A$: any column of $V$ which corresponds to $\sigma_i = 0$ lies in $\mathrm{Ker}\,A$. The equation $Ax = b$ has a solution only if $b \in \mathrm{Im}(A)$. If $b \notin \mathrm{Im}(A)$, then one can still construct a "solution" vector, which will not solve $Ax = b$ but will be the closest possible solution in a least-squares sense, i.e., one finds the $x$ which minimizes
$$r := \|Ax - b\|, \tag{3.96}$$
where $r$ is the residual of the solution. (See Numerical Recipes, p. 54.)
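A small sketch of this least-squares use (the data values are mine): for $b \notin \mathrm{Im}(A)$, $x = A^\#b$ does not solve $Ax = b$ but minimizes the residual $r$, agreeing with numpy's direct least-squares solver.

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])                    # overdetermined: 3 equations, 2 unknowns
b = np.array([1.0, 2.0, 2.5])                 # generically not in Im(A)

x = np.linalg.pinv(A) @ b                     # pseudoinverse "solution" x = A# b
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)  # direct least-squares solution

assert np.allclose(x, x_ls)
print(np.linalg.norm(A @ x - b))              # the minimized residual r
```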

Construct an orthonormal basis: Let $A \in M(m \times n; \mathbb{C})$ or $M(m \times n; \mathbb{R})$. The columns of the matrix $U \in M(m \times m; \mathbb{C}\,(\mathbb{R}))$ represent the orthogonal eigenvectors of $AA^+$ and are the desired orthonormal system of eigenvectors. If some of the $\sigma_i^2 = 0$, then the space spanned by the column vectors of $A$ has dimension $< m$.

Approximation of matrices: $A = U\Sigma V^+$ with elements
$$A_{ij} = \sum_{k,m'} U_{ik}\,\Sigma_{km'}\,\bar{V}_{jm'} \tag{3.97}$$
$$= \sum_k \sigma_k\,U_{ik}\,\bar{V}_{jk}. \tag{3.98}$$
If only $r$ of the $\sigma_k$, with $r < m$, are $\neq 0$, then the matrix $A$ can be approximated by a 'smaller' matrix. Consider
$$(Ax)_i = \sum_j A_{ij}\,x_j = \sum_j \sum_k \sigma_k\,u_{ik}\,\bar{v}_{jk}\,x_j \tag{3.99}$$
$$= \sum_k \sigma_k\,u_{ik} \sum_j \bar{v}_{jk}\,x_j. \tag{3.100}$$
If the sum is truncated after $k$ terms with $k \ll m$, then the evaluation of $Ax$ requires $k(m+n)$ multiplications, which is in that sense smaller than the $m \cdot n$ multiplications needed to evaluate $\sum_j A_{ij}x_j$ for all $i$.
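A closing numpy sketch of this truncated evaluation (the random example data are mine): keeping the $k$ largest singular values gives a rank-$k$ approximation of $A$, and $Ax$ can then be evaluated with $k(m+n)$ multiplications as in (3.100).

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 5))                # m = 6, n = 5
x = rng.standard_normal(5)

U, s, Vh = np.linalg.svd(A)
k = 3                                          # keep the k largest singular values

A_k = U[:, :k] @ np.diag(s[:k]) @ Vh[:k]       # rank-k approximation of A
Ax = U[:, :k] @ (s[:k] * (Vh[:k] @ x))         # k(m+n) multiplications, as in (3.100)

assert np.allclose(A_k @ x, Ax)
print(np.linalg.norm(A - A_k))                 # error from the discarded sigma's
```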