The Spectral Theorem for normal linear maps


MAT067 University of California, Davis, Winter 2007

Isaiah Lankham, Bruno Nachtergaele, Anne Schilling (March 14, 2007)

In this section we come back to the question of when an operator on an inner product space $V$ is diagonalizable. We first introduce the notion of the adjoint (or hermitian conjugate) of an operator and use it to define normal operators, which are those that commute with their adjoint. The main result of this section is the Spectral Theorem, which states that normal operators are diagonal with respect to an orthonormal basis. We use this to show that normal operators are unitarily diagonalizable, and we generalize this notion to obtain the singular-value decomposition of an arbitrary operator.

1 Self-adjoint or hermitian operators

Let $V$ be a finite-dimensional inner product space over $\mathbb{C}$ with inner product $\langle\cdot,\cdot\rangle$. A linear operator $T \in \mathcal{L}(V)$ is uniquely determined by the values of $\langle Tv, w\rangle$ for all $v, w \in V$. In particular, if $T, S \in \mathcal{L}(V)$ and
$$\langle Tv, w\rangle = \langle Sv, w\rangle \quad \text{for all } v, w \in V,$$
then $T = S$. To see this, take $w$ to be the elements of an orthonormal basis of $V$.

Definition 1. Given $T \in \mathcal{L}(V)$, the adjoint (sometimes hermitian conjugate) of $T$ is the operator $T^* \in \mathcal{L}(V)$ such that
$$\langle Tv, w\rangle = \langle v, T^* w\rangle \quad \text{for all } v, w \in V.$$
Moreover, we call $T$ self-adjoint or hermitian if $T = T^*$.

The uniqueness of $T^*$ is clear by the previous observation.

Copyright © 2007 by the authors. These lecture notes may be reproduced in their entirety for noncommercial purposes.

Example 1. Let $V = \mathbb{C}^3$ and let $T \in \mathcal{L}(\mathbb{C}^3)$ be defined by $T(z_1, z_2, z_3) = (2z_2 + iz_3,\; iz_1,\; z_2)$. Then
$$\langle (y_1,y_2,y_3), T^*(z_1,z_2,z_3)\rangle = \langle T(y_1,y_2,y_3), (z_1,z_2,z_3)\rangle = \langle (2y_2+iy_3,\; iy_1,\; y_2), (z_1,z_2,z_3)\rangle$$
$$= 2y_2\bar z_1 + iy_3\bar z_1 + iy_1\bar z_2 + y_2\bar z_3 = \langle (y_1,y_2,y_3), (-iz_2,\; 2z_1+z_3,\; -iz_1)\rangle,$$
so that $T^*(z_1,z_2,z_3) = (-iz_2,\; 2z_1+z_3,\; -iz_1)$. Writing the matrices of $T$ and $T^*$ in terms of the canonical basis, we see that
$$M(T) = \begin{pmatrix} 0 & 2 & i\\ i & 0 & 0\\ 0 & 1 & 0\end{pmatrix} \quad\text{and}\quad M(T^*) = \begin{pmatrix} 0 & -i & 0\\ 2 & 0 & 1\\ -i & 0 & 0\end{pmatrix}.$$
Note that $M(T^*)$ can be obtained from $M(T)$ by taking the complex conjugate of each entry and then transposing.

Elementary properties that you should prove as exercises are, for all $S, T \in \mathcal{L}(V)$ and $a \in \mathbb{F}$:

$(S+T)^* = S^* + T^*$
$(aT)^* = \bar a\, T^*$
$(T^*)^* = T$
$I^* = I$
$(ST)^* = T^* S^*$
$M(T^*) = M(T)^*$

where $A^* = (\bar a_{ji})_{i,j=1}^n$ if $A = (a_{ij})_{i,j=1}^n$. The matrix $A^*$ is the conjugate transpose of $A$.

For $n = 1$, the conjugate transpose of the $1\times 1$ matrix $A$ is just the complex conjugate of its entry. Hence requiring $A$ to be self-adjoint ($A = A^*$) amounts to saying that this entry is real. Because of the transpose, for $n > 1$ reality of the entries is not the same as self-adjointness, but the analogy does carry over to the eigenvalues of self-adjoint operators, as the next proposition shows.

Proposition 1. Every eigenvalue of a self-adjoint operator is real.

Proof. Suppose $\lambda \in \mathbb{C}$ is an eigenvalue of $T$ and $0 \neq v \in V$ a corresponding eigenvector, so that $Tv = \lambda v$. Then
$$\lambda\|v\|^2 = \langle \lambda v, v\rangle = \langle Tv, v\rangle = \langle v, T^* v\rangle = \langle v, Tv\rangle = \langle v, \lambda v\rangle = \bar\lambda\langle v, v\rangle = \bar\lambda\|v\|^2.$$
This implies that $\lambda = \bar\lambda$, that is, $\lambda \in \mathbb{R}$.
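The defining identity of the adjoint can be checked numerically. The following sketch (my own illustration, not part of the notes) verifies, in pure Python, that the formula $T^*(z_1,z_2,z_3) = (-iz_2,\, 2z_1+z_3,\, -iz_1)$ from Example 1 satisfies $\langle Tv, w\rangle = \langle v, T^*w\rangle$ for random vectors; here `inner` implements the inner product used throughout, $\langle x, y\rangle = \sum_j x_j \bar y_j$ (conjugate-linear in the second slot).

```python
# Sanity check of Example 1 (illustration only): the claimed adjoint
# satisfies <Tv, w> = <v, T*w> for the inner product <x,y> = sum x_j conj(y_j).
import random

def inner(x, y):
    return sum(a * b.conjugate() for a, b in zip(x, y))

def T(z):
    z1, z2, z3 = z
    return (2*z2 + 1j*z3, 1j*z1, z2)

def T_star(z):
    z1, z2, z3 = z
    return (-1j*z2, 2*z1 + z3, -1j*z1)

random.seed(0)
for _ in range(100):
    v = tuple(complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(3))
    w = tuple(complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(3))
    assert abs(inner(T(v), w) - inner(v, T_star(w))) < 1e-9
```

Since both sides of the identity are sesquilinear, agreement on a basis would already suffice; the random test is just a convenient spot check.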

Example 2. The operator $T \in \mathcal{L}(\mathbb{C}^2)$ defined by
$$T(v) = \begin{pmatrix} 2 & 1+i\\ 1-i & 3\end{pmatrix} v$$
is self-adjoint (or hermitian), and it can be checked that its eigenvalues are $\lambda = 1, 4$ by determining the zeroes of the polynomial $p(\lambda) = (2-\lambda)(3-\lambda) - (1+i)(1-i) = \lambda^2 - 5\lambda + 4$.

2 Normal operators

Normal operators are those that commute with their adjoint.

Definition 2. We call $T \in \mathcal{L}(V)$ normal if $TT^* = T^*T$.

In general $TT^* \neq T^*T$. Note that $TT^*$ and $T^*T$ are both self-adjoint for all $T \in \mathcal{L}(V)$, and that any self-adjoint operator $T$ is normal. We now give a different characterization of normal operators in terms of norms. In order to prove this result, we first discuss the next proposition.

Proposition 2. Let $V$ be a complex inner product space and $T \in \mathcal{L}(V)$ such that
$$\langle Tv, v\rangle = 0 \quad \text{for all } v \in V.$$
Then $T = 0$.

Proof. Verify that
$$\langle Tu, w\rangle = \tfrac{1}{4}\bigl\{ \langle T(u+w), u+w\rangle - \langle T(u-w), u-w\rangle + i\langle T(u+iw), u+iw\rangle - i\langle T(u-iw), u-iw\rangle \bigr\}.$$
Since each term on the right is of the form $\langle Tv, v\rangle$, we obtain $\langle Tu, w\rangle = 0$ for all $u, w \in V$. Hence $T = 0$.

Proposition 3. Let $T \in \mathcal{L}(V)$. Then $T$ is normal if and only if
$$\|Tv\| = \|T^* v\| \quad \text{for all } v \in V.$$

Proof. Note that
$$T \text{ is normal} \iff T^*T - TT^* = 0 \iff \langle (T^*T - TT^*)v, v\rangle = 0 \text{ for all } v \in V$$
$$\iff \langle TT^*v, v\rangle = \langle T^*Tv, v\rangle \text{ for all } v \in V \iff \|Tv\|^2 = \|T^*v\|^2 \text{ for all } v \in V,$$
where the second equivalence uses Proposition 2 applied to the self-adjoint operator $T^*T - TT^*$.
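Proposition 3 is easy to probe numerically. The following sketch (my own illustration, with hand-rolled matrix helpers; not part of the notes) checks that the operator of Example 1 is not normal, while the self-adjoint matrix of Example 2 is, and exhibits a vector $v$ with $\|Tv\| \neq \|T^*v\|$ for the non-normal one.

```python
# Illustration of Definition 2 and Proposition 3 (not from the notes).
def mat_mul(A, B):
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k]*B[k][j] for k in range(m)) for j in range(p)] for i in range(n)]

def conj_transpose(A):
    return [[A[j][i].conjugate() for j in range(len(A))] for i in range(len(A[0]))]

def is_normal(A, tol=1e-12):
    As = conj_transpose(A)
    P, Q = mat_mul(A, As), mat_mul(As, A)
    return all(abs(P[i][j] - Q[i][j]) < tol for i in range(len(A)) for j in range(len(A)))

T1 = [[0, 2, 1j], [1j, 0, 0], [0, 1, 0]]   # Example 1: not normal
T2 = [[2, 1+1j], [1-1j, 3]]                # Example 2: hermitian, hence normal
assert not is_normal(T1)
assert is_normal(T2)

# Proposition 3 in action: for the non-normal T1 there is a v with ||T1 v|| != ||T1* v||.
def norm(x):
    return sum(abs(c)**2 for c in x) ** 0.5

v = [1, 0, 0]
T1v  = [sum(T1[i][k]*v[k] for k in range(3)) for i in range(3)]                 # (0, i, 0)
T1sv = [sum(conj_transpose(T1)[i][k]*v[k] for k in range(3)) for i in range(3)] # (0, 2, -i)
assert abs(norm(T1v) - norm(T1sv)) > 0.1
```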

Corollary 4. Let $T \in \mathcal{L}(V)$ be a normal operator. Then:

1. $\operatorname{null} T = \operatorname{null} T^*$.
2. If $\lambda \in \mathbb{C}$ is an eigenvalue of $T$, then $\bar\lambda$ is an eigenvalue of $T^*$ with the same eigenvector.
3. If $\lambda, \mu \in \mathbb{C}$ are distinct eigenvalues of $T$ with associated eigenvectors $v, w \in V$ respectively, then $\langle v, w\rangle = 0$.

Proof. Part 1 follows from Proposition 3 and the positive definiteness of the norm. To prove part 2, first verify that if $T$ is normal, then $T - \lambda I$ is also normal, with $(T - \lambda I)^* = T^* - \bar\lambda I$. Therefore, by Proposition 3,
$$0 = \|(T - \lambda I)v\| = \|(T - \lambda I)^* v\| = \|(T^* - \bar\lambda I)v\|,$$
so that $v$ is an eigenvector of $T^*$ with eigenvalue $\bar\lambda$. Using part 2, note that
$$(\lambda - \mu)\langle v, w\rangle = \langle \lambda v, w\rangle - \langle v, \bar\mu w\rangle = \langle Tv, w\rangle - \langle v, T^* w\rangle = 0.$$
Since $\lambda - \mu \neq 0$, it follows that $\langle v, w\rangle = 0$, proving part 3.

3 Normal operators and the spectral decomposition

Recall that an operator is diagonalizable if there exists a basis of $V$ consisting of eigenvectors of $T$. The nicest operators on $V$ are those that are diagonalizable with respect to an orthonormal basis of $V$; that is, those for which there is an orthonormal basis of $V$ consisting of eigenvectors of $T$. The spectral theorem for complex inner product spaces shows that these are precisely the normal operators.

Theorem 5 (Spectral Theorem). Let $V$ be a finite-dimensional inner product space over $\mathbb{C}$ and $T \in \mathcal{L}(V)$. Then $T$ is normal if and only if there exists an orthonormal basis of $V$ consisting of eigenvectors of $T$.

Proof. ($\Longrightarrow$) Suppose that $T$ is normal. We proved before that for any operator $T$ on a complex inner product space $V$ of dimension $n$, there exists an orthonormal basis $e = (e_1, \dots, e_n)$ for which the matrix $M(T)$ is upper triangular:
$$M(T) = \begin{pmatrix} a_{11} & \cdots & a_{1n}\\ & \ddots & \vdots\\ 0 & & a_{nn}\end{pmatrix}.$$

We will show that $M(T)$ is in fact diagonal, which implies that $e_1, \dots, e_n$ are eigenvectors of $T$. Since $M(T) = (a_{ij})_{i,j=1}^n$ with $a_{ij} = 0$ for $i > j$, we have $Te_1 = a_{11}e_1$ and $T^* e_1 = \sum_{k=1}^n \bar a_{1k} e_k$. Thus, by the Pythagorean theorem and Proposition 3,
$$|a_{11}|^2 = \|a_{11}e_1\|^2 = \|Te_1\|^2 = \|T^* e_1\|^2 = \Bigl\|\sum_{k=1}^n \bar a_{1k} e_k\Bigr\|^2 = \sum_{k=1}^n |a_{1k}|^2,$$
from which it follows that $a_{12} = \cdots = a_{1n} = 0$. One can repeat this argument, calculating $\|Te_j\|^2 = |a_{jj}|^2$ and $\|T^* e_j\|^2 = \sum_{k=j}^n |a_{jk}|^2$, to find $a_{ij} = 0$ for all $2 \le i < j \le n$. Hence $T$ is diagonal with respect to the basis $e$, and $e_1, \dots, e_n$ are eigenvectors of $T$.

($\Longleftarrow$) Suppose there exists an orthonormal basis $(e_1, \dots, e_n)$ of $V$ consisting of eigenvectors of $T$. Then the matrix $M(T)$ with respect to this basis is diagonal. Moreover, $M(T^*) = M(T)^*$ with respect to this basis must also be a diagonal matrix. Any two diagonal matrices commute, so it follows that $TT^* = T^*T$, since their corresponding matrices commute:
$$M(TT^*) = M(T)M(T^*) = M(T^*)M(T) = M(T^*T).$$

The next corollary gives the best possible decomposition of the complex vector space $V$ into subspaces invariant under a normal operator $T$. On each subspace $\operatorname{null}(T - \lambda_i I)$, the operator $T$ acts simply by multiplication by $\lambda_i$.

Corollary 6. Let $T \in \mathcal{L}(V)$ be a normal operator. Then:

1. Denoting by $\lambda_1, \dots, \lambda_m$ the distinct eigenvalues of $T$,
$$V = \operatorname{null}(T - \lambda_1 I) \oplus \cdots \oplus \operatorname{null}(T - \lambda_m I).$$
2. If $i \neq j$, then $\operatorname{null}(T - \lambda_i I) \perp \operatorname{null}(T - \lambda_j I)$.

As we will see next, the canonical matrix for $T$ admits a unitary diagonalization.

4 Applications of the spectral theorem: Diagonalization

We already discussed that if $e = (e_1, \dots, e_n)$ is a basis of a vector space $V$ of dimension $n$ and $T \in \mathcal{L}(V)$, then we can associate to $T$ a matrix $M(T)$. To record the dependency on the basis $e$, let us now denote this matrix by $[T]_e$. That is,
$$[Tv]_e = [T]_e [v]_e \quad \text{for all } v \in V,$$
where $[v]_e = (v_1, \dots, v_n)^t$ is the coordinate vector of $v = v_1 e_1 + \cdots + v_n e_n$ with $v_i \in \mathbb{F}$.
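Corollaries 4 and 6 can be illustrated on the self-adjoint matrix of Example 2, whose eigenvalues are $1$ and $4$. The sketch below (my own check, not part of the notes) verifies that $(-1-i, 1)$ and $(1+i, 2)$ are eigenvectors for these two distinct eigenvalues and that, as the corollaries demand, they are orthogonal, so $\mathbb{C}^2$ splits as an orthogonal direct sum of the two eigenspaces.

```python
# Corollary 4, part 3, in action (illustration only): eigenvectors of a
# normal (here hermitian) matrix for distinct eigenvalues are orthogonal.
A = [[2, 1+1j], [1-1j, 3]]
v, w = [-1-1j, 1], [1+1j, 2]

Av = [sum(A[i][k]*v[k] for k in range(2)) for i in range(2)]
Aw = [sum(A[i][k]*w[k] for k in range(2)) for i in range(2)]
assert all(abs(Av[i] - v[i]) < 1e-12 for i in range(2))      # A v = 1 * v
assert all(abs(Aw[i] - 4*w[i]) < 1e-12 for i in range(2))    # A w = 4 * w

inner = sum(a*b.conjugate() for a, b in zip(v, w))
assert abs(inner) < 1e-12                                    # <v, w> = 0
```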

The operator $T$ is diagonalizable if there exists a basis $e$ such that $[T]_e$ is diagonal; that is, if there exist $\lambda_1, \dots, \lambda_n \in \mathbb{F}$ such that
$$[T]_e = \begin{pmatrix} \lambda_1 & & 0\\ & \ddots & \\ 0 & & \lambda_n\end{pmatrix}.$$
The scalars $\lambda_1, \dots, \lambda_n$ are necessarily eigenvalues of $T$, and $e_1, \dots, e_n$ are corresponding eigenvectors. Therefore:

Proposition 7. $T \in \mathcal{L}(V)$ is diagonalizable if and only if there exists a basis $(e_1, \dots, e_n)$ consisting entirely of eigenvectors of $T$.

We can reformulate this proposition using the change of basis transformations. Suppose that $e$ and $f$ are bases of $V$ such that $[T]_e$ is diagonal, and let $S$ be the change of basis transformation with $[v]_e = S[v]_f$. Then $S[T]_f S^{-1} = [T]_e$ is diagonal.

Proposition 8. $T \in \mathcal{L}(V)$ is diagonalizable if and only if there exists an invertible matrix $S \in \mathbb{F}^{n\times n}$ such that
$$S[T]_f S^{-1} = \begin{pmatrix} \lambda_1 & & 0\\ & \ddots & \\ 0 & & \lambda_n\end{pmatrix},$$
where $[T]_f$ is the matrix of $T$ with respect to a given arbitrary basis $f = (f_1, \dots, f_n)$.

On the other hand, the spectral theorem tells us that $T$ is diagonalizable with respect to an orthonormal basis if and only if $T$ is normal. Recall that
$$[T^*]_f = [T]_f^*$$
for any orthonormal basis $f$ of $V$. Here
$$A^* = (\bar a_{ji})_{i,j=1}^n \quad \text{for } A = (a_{ij})_{i,j=1}^n$$
is the complex conjugate transpose of the matrix $A$. When $\mathbb{F} = \mathbb{R}$, then $A^* = A^t$ is just the transpose of the matrix, where $A^t = (a_{ji})_{i,j=1}^n$.

The change of basis transformation between two orthonormal bases is called unitary in the complex case and orthogonal in the real case.

Let $e = (e_1, \dots, e_n)$ and $f = (f_1, \dots, f_n)$ be two orthonormal bases of $V$, and let $U$ be the change of basis matrix such that $[v]_f = U[v]_e$ for all $v \in V$. Then
$$\langle e_i, e_j\rangle = \delta_{ij} = \langle f_i, f_j\rangle = \langle Ue_i, Ue_j\rangle.$$
Since this holds on the basis $e$, we in fact have that $U$ is unitary if and only if
$$\langle Uv, Uw\rangle = \langle v, w\rangle \quad \text{for all } v, w \in V. \tag{1}$$
This means that unitary matrices preserve the inner product. Operators that preserve the inner product are also called isometries. Similar conditions hold for orthogonal matrices.

Since, by the definition of the adjoint, $\langle Uv, Uw\rangle = \langle v, U^*Uw\rangle$, equation (1) also shows that unitary and orthogonal matrices are characterized by the property
$$U^*U = I \quad \text{(unitary case)}, \qquad O^t O = I \quad \text{(orthogonal case)}.$$
The equation $U^*U = I$ implies that $U^{-1} = U^*$. For finite-dimensional inner product spaces $V$, a left inverse of an operator is also a right inverse, so that
$$UU^* = I \iff U^*U = I, \qquad OO^t = I \iff O^t O = I. \tag{2}$$

It is easy to see that the columns of a unitary matrix are the coefficients of the elements of an orthonormal basis with respect to another orthonormal basis. Therefore the columns are orthonormal vectors in $\mathbb{C}^n$ (or in $\mathbb{R}^n$ in the real case). By (2), the same is true of the rows of the matrix.

The spectral theorem shows that $T$ is normal if and only if $[T]_e$ is diagonal with respect to some orthonormal basis $e$; that is, if and only if there exists a unitary matrix $U$ such that
$$UTU^* = \begin{pmatrix} \lambda_1 & & 0\\ & \ddots & \\ 0 & & \lambda_n\end{pmatrix}.$$
Conversely, if a unitary matrix $U$ exists such that $UTU^* = D$ is diagonal, then
$$TT^* - T^*T = U^*(DD^* - D^*D)U = 0,$$
since diagonal matrices commute, and hence $T$ is normal.

Let us summarize all the definitions so far.

Definition 3.
$A$ is hermitian if $A^* = A$.
$A$ is symmetric if $A^t = A$.
$U$ is unitary if $UU^* = I$.
$O$ is orthogonal if $OO^t = I$.

Note that all cases of Definition 3 are examples of normal operators. An example of a normal operator $N$ that is none of the above is
$$N = i\begin{pmatrix} 1 & 1\\ -1 & 1\end{pmatrix}.$$
You can easily verify that $NN^* = N^*N$. Note that $-iN$ is a real matrix, and $\tfrac{1}{\sqrt 2}(-iN)$ is orthogonal.

Example 3. Take the matrix
$$A = \begin{pmatrix} 2 & 1+i\\ 1-i & 3\end{pmatrix}$$
of Example 2. To unitarily diagonalize $A$, we need to find a unitary matrix $U$ and a diagonal matrix $D$ such that $A = UDU^{-1}$. To do this, we want to change basis to one composed of orthonormal eigenvectors of the operator $T \in \mathcal{L}(\mathbb{C}^2)$ defined by $Tv = Av$ for all $v \in \mathbb{C}^2$.

To find such an orthonormal basis, we start by finding the eigenspaces of $T$. We already determined that the eigenvalues of $T$ are $\lambda_1 = 1$ and $\lambda_2 = 4$, so that $D = \begin{pmatrix} 1 & 0\\ 0 & 4\end{pmatrix}$. Hence
$$\mathbb{C}^2 = \operatorname{null}(T - I) \oplus \operatorname{null}(T - 4I) = \operatorname{span}((-1-i, 1)) \oplus \operatorname{span}((1+i, 2)).$$
Now apply the Gram-Schmidt procedure to each eigenspace (here this just amounts to normalizing) to obtain the columns of $U$. Here
$$A = UDU^{-1} = \begin{pmatrix} \frac{-1-i}{\sqrt 3} & \frac{1+i}{\sqrt 6}\\[2pt] \frac{1}{\sqrt 3} & \frac{2}{\sqrt 6}\end{pmatrix} \begin{pmatrix} 1 & 0\\ 0 & 4\end{pmatrix} \begin{pmatrix} \frac{-1+i}{\sqrt 3} & \frac{1}{\sqrt 3}\\[2pt] \frac{1-i}{\sqrt 6} & \frac{2}{\sqrt 6}\end{pmatrix}.$$
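The decomposition in Example 3 can be verified numerically. The sketch below (my own check with hand-rolled helpers, not part of the notes) confirms that $U$ built from the normalized eigenvectors is unitary and that $UDU^*$ reproduces $A$.

```python
# Numerical check of Example 3 (illustration only): A = U D U* with
# orthonormal eigenvector columns e1 = (-1-i,1)/sqrt(3), e2 = (1+i,2)/sqrt(6).
from math import sqrt

A = [[2, 1+1j], [1-1j, 3]]
D = [[1, 0], [0, 4]]
U = [[(-1-1j)/sqrt(3), (1+1j)/sqrt(6)],
     [1/sqrt(3),       2/sqrt(6)]]

def mul(X, Y):
    return [[sum(X[i][k]*Y[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

Ustar = [[U[j][i].conjugate() for j in range(2)] for i in range(2)]

def close(X, Y):
    return all(abs(X[i][j] - Y[i][j]) < 1e-12 for i in range(2) for j in range(2))

assert close(mul(U, Ustar), [[1, 0], [0, 1]])   # U is unitary: U U* = I
assert close(mul(mul(U, D), Ustar), A)          # A = U D U*  (U^{-1} = U*)
```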

Note that the diagonal decomposition allows us to compute powers and the exponential of matrices. Namely, if $A = UDU^{-1}$ where $D$ is diagonal, we have
$$A^n = (UDU^{-1})^n = UD^nU^{-1}, \qquad \exp(A) = \sum_{k=0}^{\infty} \frac{1}{k!}A^k = U\Bigl(\sum_{k=0}^{\infty} \frac{1}{k!}D^k\Bigr)U^{-1} = U\exp(D)U^{-1}.$$

Example 4. Continuing the previous example,
$$A^2 = (UDU^{-1})^2 = UD^2U^{-1} = U\begin{pmatrix} 1 & 0\\ 0 & 16\end{pmatrix}U^{-1} = \begin{pmatrix} 6 & 5+5i\\ 5-5i & 11\end{pmatrix},$$
$$A^n = (UDU^{-1})^n = UD^nU^{-1} = U\begin{pmatrix} 1 & 0\\ 0 & 2^{2n}\end{pmatrix}U^{-1} = \frac{1}{3}\begin{pmatrix} 2+2^{2n} & (1+i)(2^{2n}-1)\\ (1-i)(2^{2n}-1) & 1+2^{2n+1}\end{pmatrix},$$
$$\exp(A) = U\exp(D)U^{-1} = U\begin{pmatrix} e & 0\\ 0 & e^4\end{pmatrix}U^{-1} = \frac{1}{3}\begin{pmatrix} 2e+e^4 & (e^4-e)+i(e^4-e)\\ (e^4-e)+i(e-e^4) & e+2e^4\end{pmatrix}.$$

5 Positive operators

Recall that self-adjoint operators are the operator analogue of real numbers. Let us now define the operator analogue of positive (or, more precisely, nonnegative) real numbers.

Definition 4. An operator $T \in \mathcal{L}(V)$ is called positive (in symbols $T \ge 0$) if $T = T^*$ and $\langle Tv, v\rangle \ge 0$ for all $v \in V$. (If $V$ is a complex vector space, the condition of self-adjointness follows from the condition $\langle Tv, v\rangle \ge 0$ and can hence be dropped.)

Example 5. Note that for all $T \in \mathcal{L}(V)$ we have $T^*T \ge 0$, since $T^*T$ is self-adjoint and $\langle T^*Tv, v\rangle = \langle Tv, Tv\rangle \ge 0$.

Example 6. Let $U \subset V$ be a subspace of $V$ and $P_U$ the orthogonal projection onto $U$. Then $P_U \ge 0$. To see this, write $V = U \oplus U^{\perp}$ and $v = u_v + u_v^{\perp}$ for all $v \in V$, where $u_v \in U$ and $u_v^{\perp} \in U^{\perp}$. Then
$$\langle P_U v, w\rangle = \langle u_v, u_w + u_w^{\perp}\rangle = \langle u_v, u_w\rangle = \langle u_v + u_v^{\perp}, u_w\rangle = \langle v, P_U w\rangle,$$
so that $P_U^* = P_U$. Also, setting $v = w$ in the above string of equations, we obtain $\langle P_U v, v\rangle = \langle u_v, u_v\rangle \ge 0$ for all $v \in V$. Hence $P_U \ge 0$.

If $\lambda$ is an eigenvalue of a positive operator $T$ and $v \in V$ is an associated eigenvector, then $\langle Tv, v\rangle = \langle \lambda v, v\rangle = \lambda\langle v, v\rangle \ge 0$. Since $\langle v, v\rangle \ge 0$ for all vectors $v \in V$, it follows that $\lambda \ge 0$. This fact can be used to define $\sqrt{T}$ by setting
$$\sqrt{T}\, e_i = \sqrt{\lambda_i}\, e_i,$$
where the $\lambda_i$ are the eigenvalues of $T$ with respect to an orthonormal basis $e = (e_1, \dots, e_n)$ of eigenvectors, which exists by the spectral theorem.
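The square root of a positive operator can be computed explicitly via the spectral decomposition. For the matrix $A$ of Example 3 (eigenvalues $1$ and $4$, so $A$ is positive), replacing the eigenvalues by their square roots gives, as a computation I carried out myself (not in the notes), $\sqrt{A} = U\,\mathrm{diag}(1,2)\,U^* = \frac{1}{3}\begin{pmatrix} 4 & 1+i\\ 1-i & 5\end{pmatrix}$. The sketch below verifies that this candidate is hermitian and squares to $A$.

```python
# Square root of a positive operator via its eigenvalues (illustration only):
# sqrt(A) = U diag(sqrt(1), sqrt(4)) U* for the positive matrix A of Example 3.
A = [[2, 1+1j], [1-1j, 3]]
R = [[4/3, (1+1j)/3], [(1-1j)/3, 5/3]]   # candidate for sqrt(A)

R2 = [[sum(R[i][k]*R[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
assert all(abs(R2[i][j] - A[i][j]) < 1e-12 for i in range(2) for j in range(2))  # R^2 = A
# R is itself positive: hermitian, with nonnegative eigenvalues 1 and 2.
assert all(abs(R[i][j] - R[j][i].conjugate()) < 1e-12 for i in range(2) for j in range(2))
```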

6 Polar decomposition

Continuing the analogy between $\mathbb{C}$ and $\mathcal{L}(V)$, recall the polar form of a complex number $z = |z|e^{i\theta}$, where $|z|$ is the absolute value or length of $z$ and $e^{i\theta}$ is an element of the unit circle. In terms of an operator $T \in \mathcal{L}(V)$, where $V$ is a complex inner product space, a unitary operator $U$ takes the role of $e^{i\theta}$ and $|T|$ takes the role of the length. As we discussed above, $T^*T \ge 0$, so that $|T| := \sqrt{T^*T}$ exists, and $|T| \ge 0$ as well.

Theorem 9. For all $T \in \mathcal{L}(V)$ there exists a unitary $U$ such that
$$T = U|T|.$$
This is called the polar decomposition of $T$.

Sketch of proof. We start by noting that
$$\|Tv\|^2 = \||T|v\|^2,$$
since $\langle Tv, Tv\rangle = \langle v, T^*Tv\rangle = \langle \sqrt{T^*T}\,v, \sqrt{T^*T}\,v\rangle$. This implies that $\operatorname{null} T = \operatorname{null} |T|$. By the dimension formula $\dim\operatorname{null} T + \dim\operatorname{range} T = \dim V$, this also means that $\dim\operatorname{range} T = \dim\operatorname{range} |T|$. Moreover, we can hence define an isometry $S \colon \operatorname{range}|T| \to \operatorname{range} T$ by setting
$$S(|T|v) = Tv.$$
The trick is now to define a unitary operator $U$ on all of $V$ whose restriction to the range of $|T|$ is $S$:
$$U\big|_{\operatorname{range}|T|} = S.$$
Note that $\operatorname{null}|T| \perp \operatorname{range}|T|$, because for $v \in \operatorname{null}|T|$ and $w = |T|u \in \operatorname{range}|T|$,
$$\langle w, v\rangle = \langle |T|u, v\rangle = \langle u, |T|v\rangle = \langle u, 0\rangle = 0,$$
since $|T|$ is self-adjoint.

Pick an orthonormal basis $e = (e_1, \dots, e_m)$ of $\operatorname{null}|T|$ and an orthonormal basis $f = (f_1, \dots, f_m)$ of $(\operatorname{range} T)^{\perp}$. Set $Se_i = f_i$, and extend $S$ to $\operatorname{null}|T|$ by linearity. Any $v \in V$ can be uniquely written as $v = v_1 + v_2$, where $v_1 \in \operatorname{null}|T|$ and $v_2 \in \operatorname{range}|T|$, since $\operatorname{null}|T| \perp \operatorname{range}|T|$. Then define $U \colon V \to V$ by $Uv = Sv_1 + Sv_2$. Now $U$ is an isometry, and hence unitary, as shown by the following calculation using the Pythagorean theorem (note that $Sv_1 \in (\operatorname{range} T)^{\perp}$ and $Sv_2 \in \operatorname{range} T$ are orthogonal):
$$\|Uv\|^2 = \|Sv_1 + Sv_2\|^2 = \|Sv_1\|^2 + \|Sv_2\|^2 = \|v_1\|^2 + \|v_2\|^2 = \|v\|^2.$$
Also, by construction, $U|T| = T$, since the definition of $U$ on $\operatorname{null}|T|$ does not matter.
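A small worked polar decomposition (my own toy example, not from the notes): for $T = \begin{pmatrix}0 & -2\\ 1 & 0\end{pmatrix}$ we get $T^*T = \begin{pmatrix}1 & 0\\ 0 & 4\end{pmatrix}$, which is already diagonal, so $|T| = \begin{pmatrix}1 & 0\\ 0 & 2\end{pmatrix}$; since this $T$ is invertible, the unitary factor is simply $U = T|T|^{-1} = \begin{pmatrix}0 & -1\\ 1 & 0\end{pmatrix}$, a rotation. The sketch verifies Theorem 9 for this case.

```python
# Polar decomposition T = U |T| for a concrete invertible T (illustration only).
T    = [[0, -2], [1, 0]]
absT = [[1, 0], [0, 2]]   # sqrt(T*T) = sqrt(diag(1, 4))
U    = [[0, -1], [1, 0]]  # T with the singular values divided out: T |T|^{-1}

def mul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

# U is unitary: U U* = I (U is real here, so U* is just the transpose).
Ut = [[U[j][i] for j in range(2)] for i in range(2)]
assert mul(U, Ut) == [[1, 0], [0, 1]]

# T = U |T|, as Theorem 9 asserts.
assert mul(U, absT) == T
```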

7 Singular-value decomposition

The singular-value decomposition generalizes the notion of diagonalization. To unitarily diagonalize $T \in \mathcal{L}(V)$ means to find an orthonormal basis $e$ such that $T$ is diagonal with respect to this basis:
$$M(T; e, e) = [T]_e = \begin{pmatrix} \lambda_1 & & 0\\ & \ddots & \\ 0 & & \lambda_n\end{pmatrix},$$
where the notation $M(T; e, e)$ indicates that the basis $e$ is used both for the domain and the codomain of $T$. The spectral theorem tells us that unitary diagonalization can only be done for normal operators. In general, however, we can find two orthonormal bases $e$ and $f$ such that
$$M(T; e, f) = \begin{pmatrix} s_1 & & 0\\ & \ddots & \\ 0 & & s_n\end{pmatrix},$$
which means that $Te_i = s_i f_i$. The scalars $s_i$ are called the singular values of $T$. If $T$ is normal, they are the absolute values of its eigenvalues.

Theorem 10. Every $T \in \mathcal{L}(V)$ has a singular-value decomposition. That is, there exist orthonormal bases $e = (e_1, \dots, e_n)$ and $f = (f_1, \dots, f_n)$ such that
$$Tv = s_1\langle v, e_1\rangle f_1 + \cdots + s_n\langle v, e_n\rangle f_n,$$
where the $s_i$ are the singular values of $T$.

Proof. Since $|T| \ge 0$, and hence also self-adjoint, by the spectral theorem there is an orthonormal basis $e = (e_1, \dots, e_n)$ such that $|T|e_i = s_i e_i$. Let $U$ be the unitary operator in the polar decomposition, so that $T = U|T|$. Since $e$ is orthonormal, we can write any vector $v \in V$ as
$$v = \langle v, e_1\rangle e_1 + \cdots + \langle v, e_n\rangle e_n,$$
and hence
$$Tv = U|T|v = s_1\langle v, e_1\rangle Ue_1 + \cdots + s_n\langle v, e_n\rangle Ue_n.$$

Now set $f_i = Ue_i$ for all $1 \le i \le n$. Since $U$ is unitary, $(f_1, \dots, f_n)$ is also an orthonormal basis, proving the theorem.
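Theorem 10 can be made concrete with the toy matrix from the polar decomposition sketch above (again my own example, not from the notes): for $T = \begin{pmatrix}0 & -2\\ 1 & 0\end{pmatrix}$ we have $|T| = \mathrm{diag}(1,2)$, so $e$ is the standard basis with singular values $s = (1, 2)$, and $f_i = Ue_i$ with $U = \begin{pmatrix}0 & -1\\ 1 & 0\end{pmatrix}$, i.e. $f_1 = (0,1)$ and $f_2 = (-1,0)$. The sketch checks $Tv = s_1\langle v, e_1\rangle f_1 + s_2\langle v, e_2\rangle f_2$ on a sample vector.

```python
# Singular-value decomposition of Theorem 10 for T = [[0,-2],[1,0]] (illustration only).
s = (1, 2)
e = ((1, 0), (0, 1))    # right singular vectors (standard basis here)
f = ((0, 1), (-1, 0))   # left singular vectors f_i = U e_i

def T(v):
    return (-2*v[1], v[0])

def svd_apply(v):
    out = [0, 0]
    for si, ei, fi in zip(s, e, f):
        coeff = si * (v[0]*ei[0] + v[1]*ei[1])   # s_i <v, e_i> (real vectors)
        out[0] += coeff * fi[0]
        out[1] += coeff * fi[1]
    return tuple(out)

v = (3, -5)
assert svd_apply(v) == T(v)
```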