CHAPTER 5
Eigenpairs and Similarity Transformations

Exercise 56: Characteristic polynomial of transpose
Using that a matrix and its transpose have the same determinant, we have that
\[ \pi_{A^T}(\lambda) = \det(A^T - \lambda I) = \det\big((A - \lambda I)^T\big) = \det(A - \lambda I) = \pi_A(\lambda). \]

Exercise 57: Characteristic polynomial of inverse
If $Ax = \lambda x$ we have that $A^{-1}(\lambda x) = x$, so that $A^{-1}x = \lambda^{-1}x$, so that $(\lambda^{-1}, x)$ is an eigenpair for $A^{-1}$.

Exercise 58: The power of the eigenvector expansion
If $Ax = \lambda x$, then
\[ A^k x = A^{k-1}Ax = \lambda A^{k-1}x = \cdots = \lambda^k x. \]
If the eigenvectors form a basis we can write $x = \sum_{j=1}^{n} c_j x_j$ for some scalars $c_1, \ldots, c_n$. But then
\[ Ax = \sum_{j=1}^{n} c_j Ax_j = \sum_{j=1}^{n} c_j \lambda_j x_j. \]
Iterating this we obtain
\[ A^k x = \sum_{j=1}^{n} c_j A^k x_j = \sum_{j=1}^{n} c_j \lambda_j^k x_j. \]

Exercise 59: Idempotent matrix
Suppose that $(\lambda, x)$ is an eigenpair of a matrix $A$ satisfying $A^2 = A$. Then
\[ \lambda x = Ax = A^2 x = \lambda Ax = \lambda^2 x. \]
Since any eigenvector is nonzero, one has $\lambda = \lambda^2$, from which it follows that either $\lambda = 0$ or $\lambda = 1$. We conclude that the eigenvalues of any idempotent matrix can only be zero or one.
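These identities are easy to sanity-check numerically. The following sketch (not part of the original solutions; it uses an arbitrary random matrix) verifies the transpose, inverse, and power statements with numpy:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))

# pi_{A^T} = pi_A: A and A^T have the same eigenvalues.
eig_A = np.sort_complex(np.linalg.eigvals(A))
eig_AT = np.sort_complex(np.linalg.eigvals(A.T))
assert np.allclose(eig_A, eig_AT)

# (1/lambda, x) is an eigenpair of A^{-1} when (lambda, x) is one of A.
eig_inv = np.sort_complex(np.linalg.eigvals(np.linalg.inv(A)))
assert np.allclose(np.sort_complex(1 / eig_A), eig_inv)

# A^k x = lambda^k x for an eigenpair (lambda, x).
lam, X = np.linalg.eig(A)
x, l0 = X[:, 0], lam[0]
assert np.allclose(np.linalg.matrix_power(A, 5) @ x, l0**5 * x)
```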
Exercise 50: Nilpotent matrix
Suppose that $(\lambda, x)$ is an eigenpair of a matrix $A$ satisfying $A^k = 0$ for some natural number $k$. Then
\[ 0 = A^k x = \lambda A^{k-1}x = \lambda^2 A^{k-2}x = \cdots = \lambda^k x. \]
Since any eigenvector is nonzero, one has $\lambda^k = 0$, from which it follows that $\lambda = 0$. We conclude that any eigenvalue of a nilpotent matrix is zero.

Exercise 5: Eigenvalues of a unitary matrix
Let $x$ be an eigenvector corresponding to $\lambda$. Then $Ax = \lambda x$ and, as a consequence, $x^*A^* = \bar{\lambda}x^*$. To use that $A^*A = I$, it is tempting to multiply the left hand sides of these equations, yielding
\[ \|x\|^2 = x^*x = x^*A^*Ax = \bar{\lambda}\lambda\, x^*x = |\lambda|^2 \|x\|^2. \]
Since $x$ is an eigenvector, it must be nonzero. Nonzero vectors have nonzero norms, and we can therefore divide the above equation by $\|x\|^2$, which results in $|\lambda|^2 = 1$. Taking square roots we find that $|\lambda| = 1$, which is what needed to be shown. Apparently the eigenvalues of any unitary matrix reside on the unit circle in the complex plane.

Exercise 5: Nonsingular approximation of a singular matrix
Let $\lambda_1, \ldots, \lambda_n$ be the eigenvalues of the matrix $A$. As the matrix $A$ is singular, its determinant $\det(A) = \lambda_1 \cdots \lambda_n$ is zero, implying that one of its eigenvalues is zero. If all the eigenvalues of $A$ are zero, let $\varepsilon_0$ be any positive number, say $\varepsilon_0 := 1$. Otherwise, let $\varepsilon_0 := \min_{\lambda_i \neq 0} |\lambda_i|$ be the absolute value of the eigenvalue closest to zero. By definition of the eigenvalues, $\det(A - \lambda I)$ is zero for $\lambda = \lambda_1, \ldots, \lambda_n$, and nonzero otherwise. In particular $\det(A - \varepsilon I)$ is nonzero for any $\varepsilon \in (0, \varepsilon_0)$, and $A - \varepsilon I$ will be nonsingular in this interval. This is what we needed to prove.

Exercise 5: Companion matrix
(a) To show that $(-1)^n f$, with $f(\lambda) = \lambda^n + q_{n-1}\lambda^{n-1} + \cdots + q_1\lambda + q_0$, is the characteristic polynomial $\pi_A$ of the matrix $A$, we need to compute
\[ \pi_A(\lambda) = \det(A - \lambda I) = \det \begin{bmatrix} -q_{n-1} - \lambda & -q_{n-2} & \cdots & -q_1 & -q_0 \\ 1 & -\lambda & \cdots & 0 & 0 \\ 0 & 1 & \cdots & 0 & 0 \\ \vdots & & \ddots & & \vdots \\ 0 & 0 & \cdots & 1 & -\lambda \end{bmatrix}. \]
By the rules of determinant evaluation, we can subtract from any column a linear combination of the other columns without changing the value of the determinant. Multiplying columns $1, 2, \ldots, n-1$ by $\lambda^{n-1}, \lambda^{n-2}, \ldots, \lambda$ and adding the corresponding linear combination to the final column, we find
\[ \pi_A(\lambda) = \det \begin{bmatrix} -q_{n-1} - \lambda & -q_{n-2} & \cdots & -q_1 & -f(\lambda) \\ 1 & -\lambda & \cdots & 0 & 0 \\ 0 & 1 & \cdots & 0 & 0 \\ \vdots & & \ddots & & \vdots \\ 0 & 0 & \cdots & 1 & 0 \end{bmatrix} = (-1)^n f(\lambda), \]
where the second equality follows from cofactor expansion along the final column. Multiplying this equation by $(-1)^n$ yields the statement of the Exercise.
(b) Similar to (a), by multiplying rows $2, \ldots, n$ by suitable powers $\lambda, \lambda^2, \ldots, \lambda^{n-1}$ of $\lambda$ and adding the corresponding linear combination to the first row.

Exercise 57: Find eigenpair example
As $A$ is a triangular matrix, its eigenvalues correspond to the diagonal entries. One finds two distinct eigenvalues $\lambda_1$ and $\lambda_2$, the latter with algebraic multiplicity two. Solving $Ax = \lambda_1 x$ and $Ax = \lambda_2 x$, one finds (valid choices of) eigenpairs $(\lambda_1, x_1)$ and $(\lambda_2, x_2)$. It follows that the eigenvectors of $A$ span a space of dimension two, so that $\lambda_2$ has geometric multiplicity one, and this means that $A$ is defective.

Exercise 5: Jordan example
This exercise shows that it matters in which order we solve for the columns of $S$. One would here need to find the second column first before solving for the other two. With $A$ the given matrix and
\[ J = \begin{bmatrix} 1 & 1 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}, \]
we are asked to find $S = [s_1, s_2, s_3]$ satisfying
\[ [As_1, As_2, As_3] = AS = SJ = [s_1, s_2, s_3]J = [s_1, s_1 + s_2, s_3]. \]
The equations for the first and third columns say that $s_1$ and $s_3$ are eigenvectors for $\lambda = 1$, so that they can be found by row reducing $A - I$; the two free variables yield two linearly independent vectors spanning the set of eigenvectors for $\lambda = 1$. $s_2$ can be found by solving $As_2 = s_1 + s_2$, so that $(A - I)s_2 = s_1$. This means that $(A - I)^2 s_2 = (A - I)s_1 = 0$, so that $s_2 \in \ker(A - I)^2$. A simple computation shows that $(A - I)^2 = 0$, so that any $s_2$ will do, but we must also choose $s_2$ so that $(A - I)s_2 = s_1$ is an eigenvector of $A$. Since $A - I$ has rank one, we may choose any $s_2$ so that $(A - I)s_2$ is nonzero. In particular we can choose $s_2 = e_1$, and then $s_1 = (A - I)s_2$ is the first column of $A - I$. We can also choose $s_3 = e_2$, since it is an eigenvector not spanned by the $s_1$ and $s_2$ which we just defined. All this means that we can set $S = [s_1, e_1, e_2]$.
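The order-of-columns issue in the Jordan example can be illustrated in code. The sketch below uses a hypothetical matrix $A = SJS^{-1}$ built from an assumed $S$ (the entries of the exercise's own matrix are not reproduced here) and recovers a valid transformation by finding $s_2$ first:

```python
import numpy as np

# Hypothetical example: build A = S J S^{-1} from an assumed S_true and the
# Jordan form J = diag(J_2(1), J_1(1)), then redo the solution's steps.
J = np.array([[1.0, 1, 0],
              [0,   1, 0],
              [0,   0, 1]])
S_true = np.array([[1.0, 2, 0],
                   [0,   1, 1],
                   [1,   0, 1]])
A = S_true @ J @ np.linalg.inv(S_true)

N = A - np.eye(3)
assert np.allclose(N @ N, np.zeros((3, 3)))   # (A - I)^2 = 0
assert np.linalg.matrix_rank(N) == 1          # A - I has rank one

# Find the second column first: any s2 with (A - I) s2 != 0 will do.
s2 = np.array([1.0, 0, 0])                    # s2 = e_1 works here
s1 = N @ s2                                   # s1 = (A - I) s2, an eigenvector
assert np.linalg.norm(s1) > 1e-12

# s3: an eigenvector (kernel vector of A - I) that keeps S nonsingular.
_, _, Vt = np.linalg.svd(N)
for s3 in Vt[1:]:                             # last rows of Vt span ker(A - I)
    S = np.column_stack([s1, s2, s3])
    if abs(np.linalg.det(S)) > 1e-8:
        break
assert np.allclose(np.linalg.inv(S) @ A @ S, J)
```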
Exercise 5: A nilpotent matrix
We show this by induction. For $r = 1$ the statement is obvious. Define
\[ E_r = \begin{bmatrix} 0 & I_{m-r} \\ 0 & 0 \end{bmatrix}. \]
We have that
\[ (E_1 E_r)_{i,j} = \sum_k (E_1)_{i,k}(E_r)_{k,j}. \]
In the sum on the right hand side only one term can contribute (since any row/column in $E_1$ and $E_r$ contains at most one nonzero entry, being a one). This occurs when there is a $k$ so that $k = i + 1$ and $k + r = j$, i.e., when $j = i + r + 1$. $E_{r+1}$ has its nonzero entries exactly when $j = i + r + 1$, and this proves that $E_{r+1} = E_1 E_r$. It now follows that
\[ (J_m(\lambda) - \lambda I)^{r+1} = (J_m(\lambda) - \lambda I)(J_m(\lambda) - \lambda I)^r = E_1 E_r = E_{r+1}, \]
and the result follows.

Exercise 54: Properties of the Jordan form
Let $J = S^{-1}AS$ be the Jordan form of the matrix $A$ as in Theorem 59. Items 1–3 are easily shown by induction, making use of the rules of block multiplication. For Item 4, write $E_m := J_m(\lambda) - \lambda I_m$, with $J_m(\lambda)$ the Jordan block of order $m$. By the binomial theorem,
\[ J_m(\lambda)^r = (E_m + \lambda I_m)^r = \sum_{k=0}^{r} \binom{r}{k} E_m^k (\lambda I_m)^{r-k} = \sum_{k=0}^{r} \binom{r}{k} \lambda^{r-k} E_m^k. \]
Since $E_m^k = 0$ for any $k \geq m$, we obtain
\[ J_m(\lambda)^r = \sum_{k=0}^{\min\{r,\, m-1\}} \binom{r}{k} \lambda^{r-k} E_m^k. \]

Exercise 55: Powers of a Jordan block
Let $A$, $J$ and $S$ be as in the Jordan example above. As $J$ is block diagonal we can write
\[ J^n = \begin{bmatrix} J_2(1)^n & 0 \\ 0 & 1^n \end{bmatrix} = \begin{bmatrix} 1 & n & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}, \]
where we used property 4 in Exercise 54 on the upper left block. It follows that
\[ A^{100} = (SJS^{-1})^{100} = SJ^{100}S^{-1} = S \begin{bmatrix} 1 & 100 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} S^{-1}, \]
which can be multiplied out once $S$ and $S^{-1}$ are known.
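Property 4 and the power computation above can be checked numerically. In this sketch the block size, eigenvalue, and exponent are arbitrary choices, not values from the text:

```python
import numpy as np
from math import comb

# Check: J_m(lambda)^r equals the binomial expansion
# sum_{k=0}^{min(r, m-1)} C(r, k) lambda^(r-k) E_m^k,
# where E_m = J_m(lambda) - lambda*I is the nilpotent shift matrix.
m, lam, r = 4, 2.0, 7
J = lam * np.eye(m) + np.diag(np.ones(m - 1), 1)   # Jordan block J_m(lambda)
E = J - lam * np.eye(m)

lhs = np.linalg.matrix_power(J, r)
rhs = sum(comb(r, k) * lam ** (r - k) * np.linalg.matrix_power(E, k)
          for k in range(min(r, m - 1) + 1))
assert np.allclose(lhs, rhs)

# In particular the (1,2) entry of J^r is r*lambda^(r-1), as in the
# power-of-a-Jordan-block computation above.
assert np.isclose(lhs[0, 1], r * lam ** (r - 1))
```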
Exercise 56: The minimal polynomial
1. For each $i$, $(\lambda_i - \lambda)^{a_i} = (\lambda_i - \lambda)^{\sum_{j=1}^{g_i} m_{i,j}}$ divides $\pi_A(\lambda)$. Since $\sum_{j=1}^{g_i} m_{i,j} \geq \max_{1 \leq j \leq g_i} m_{i,j} = m_i$, also $(\lambda_i - \lambda)^{m_i}$ divides $\pi_A(\lambda)$. From this it follows that also $\mu_A(\lambda)$ divides $\pi_A(\lambda)$.

2. We have that
\[ \mu_A(A) = \prod_{i=1}^{k} (\lambda_i I - A)^{m_i} = \prod_{i=1}^{k} (\lambda_i I - SJS^{-1})^{m_i} = S\Big(\prod_{i=1}^{k} (\lambda_i I - J)^{m_i}\Big)S^{-1} = S\mu_A(J)S^{-1}. \]
It follows that $\mu_A(A) = 0$ if and only if $\mu_A(J) = 0$. With $J = \mathrm{diag}(U_1, \ldots, U_k)$, where $U_i = \mathrm{diag}(J_{m_{i,1}}(\lambda_i), \ldots, J_{m_{i,g_i}}(\lambda_i))$, we have that
\[ \mu_A(J) = \prod_{i=1}^{k} (\lambda_i I - J)^{m_i} = \prod_{i=1}^{k} \mathrm{diag}\big((\lambda_i I - U_1)^{m_i}, \ldots, (\lambda_i I - U_k)^{m_i}\big) = \mathrm{diag}\Big(\prod_{i=1}^{k} (\lambda_i I - U_1)^{m_i}, \ldots, \prod_{i=1}^{k} (\lambda_i I - U_k)^{m_i}\Big). \]
Now we also have
\[ (\lambda_i I - U_i)^{m_i} = \big(\lambda_i I - \mathrm{diag}(J_{m_{i,1}}(\lambda_i), \ldots, J_{m_{i,g_i}}(\lambda_i))\big)^{m_i} = \mathrm{diag}\big((\lambda_i I - J_{m_{i,1}}(\lambda_i))^{m_i}, \ldots, (\lambda_i I - J_{m_{i,g_i}}(\lambda_i))^{m_i}\big) = 0, \]
since
\[ (\lambda_i I - J_{m_{i,j}}(\lambda_i))^{m_i} = (\lambda_i I - J_{m_{i,j}}(\lambda_i))^{m_{i,j}} (\lambda_i I - J_{m_{i,j}}(\lambda_i))^{m_i - m_{i,j}} = 0, \]
because $(\lambda_i I - J_{m_{i,j}}(\lambda_i))^{m_{i,j}} = 0$. As the $i$th factor in the $i$th diagonal block of $\mu_A(J)$ thus vanishes, we get
\[ \mu_A(J) = \mathrm{diag}\Big(\prod_{i=1}^{k} (\lambda_i I - U_1)^{m_i}, \ldots, \prod_{i=1}^{k} (\lambda_i I - U_k)^{m_i}\Big) = 0. \]
It follows that $\mu_A(A) = 0$.

3. Suppose now that $p(A) = 0$. We can write $p(A) = C\prod_{i=1}^{r} (k_i I - A)^{s_i}$, where $k_1, \ldots, k_r$ are the zeros of $p$, with multiplicities $s_1, \ldots, s_r$. As above it follows that $p(A) = 0$ if and only if $p(J) = 0$. Factor $p(J)$ as above to obtain
\[ p(J) = C\prod_{i=1}^{r} \mathrm{diag}\big((k_i I - U_1)^{s_i}, \ldots, (k_i I - U_k)^{s_i}\big). \]
Note that $k_i I - U_j = \mathrm{diag}(k_i I - J_{m_{j,1}}(\lambda_j), \ldots, k_i I - J_{m_{j,g_j}}(\lambda_j))$ is upper triangular with $k_i - \lambda_j$ on the diagonal. If $k_i \neq \lambda_j$, $k_i I - U_j$ must then be invertible, but then also $(k_i I - U_j)^{s_i}$ is invertible. In order for $p(J) = 0$ we must then have that, for each $j$, there exists a $t$ so that $k_t = \lambda_j$. The $q$th diagonal block entry in $\prod_{i=1}^{r}(k_i I - U_j)^{s_i}$ is
\[ \prod_{i=1}^{r} (k_i I - J_{m_{j,q}}(\lambda_j))^{s_i} = (k_t I - J_{m_{j,q}}(\lambda_j))^{s_t} \prod_{i \neq t} (k_i I - J_{m_{j,q}}(\lambda_j))^{s_i} = (\lambda_j I - J_{m_{j,q}}(\lambda_j))^{s_t} \prod_{i \neq t} (k_i I - J_{m_{j,q}}(\lambda_j))^{s_i}. \]
The last product here is invertible (all $k_i \neq \lambda_j$ when $i \neq t$), so that we must have $(\lambda_j I - J_{m_{j,q}}(\lambda_j))^{s_t} = 0$ in order for $p(J) = 0$. We know from the exercises above that this happens only when $s_t \geq m_{j,q}$. Since $q$ was arbitrary we obtain that $s_t \geq m_j$, i.e., that $\lambda_j$ is a zero of $p$ of multiplicity at least $m_j$. Since this applies for any $j$, it follows that the minimal polynomial divides $p$, and the result follows.

4. Since $\mu_A$ divides $\pi_A$ by 1., we can write $\pi_A = \mu_A q$ for some polynomial $q$, so that $\pi_A(A) = \mu_A(A)q(A) = 0$.

Exercise 57: Big Jordan example
The matrix $A$ has a Jordan form $A = SJS^{-1}$, with $J$ a block diagonal Jordan matrix and $S$ a nonsingular matrix whose columns are the corresponding eigenvectors and generalized eigenvectors.

Exercise 50: Schur decomposition example
The matrix $U$ is unitary, as $U^*U = U^TU = I$. One directly verifies that $R := U^TAU$ is upper triangular. Since this matrix is upper triangular, $A = URU^T$ is a Schur decomposition of $A$.
Exercise 54: Skew-Hermitian matrix
By definition, a matrix $C$ is skew-Hermitian if $C^* = -C$.
$\Rightarrow$: Suppose that $C = A + iB$, with $A, B \in \mathbb{R}^{m,m}$, is skew-Hermitian. Then
\[ -A - iB = -C = C^* = (A + iB)^* = A^T - iB^T, \]
which implies that $A^T = -A$ and $B = B^T$ (use that two complex numbers coincide if and only if their real parts coincide and their imaginary parts coincide). In other words, $A$ is real skew-symmetric and $B$ is real symmetric.
$\Leftarrow$: Suppose that we are given matrices $A, B \in \mathbb{R}^{m,m}$ such that $A$ is skew-symmetric and $B$ is real symmetric. Let $C = A + iB$. Then
\[ C^* = (A + iB)^* = A^T - iB^T = -A - iB = -(A + iB) = -C, \]
meaning that $C$ is skew-Hermitian.

Exercise 55: Eigenvalues of a skew-Hermitian matrix
Let $A$ be a skew-Hermitian matrix and consider a Schur triangularization $A = URU^*$ of $A$. Then
\[ R^* = (U^*AU)^* = U^*A^*U = U^*(-A)U = -(U^*AU) = -R. \]
Since $R$ differs from $A$ by a similarity transform, their eigenvalues coincide (use the multiplicative property of the determinant to show that $\det(R - \lambda I) = \det(U^*)\det(A - \lambda I)\det(U) = \det(A - \lambda I)$). As $R$ is a triangular matrix, its eigenvalues $\lambda_i$ appear on its diagonal. From the equation $R^* = -R$ it then follows that $\bar{\lambda}_i = -\lambda_i$, implying that each $\lambda_i$ is purely imaginary.

Exercise 56: Eigenvector expansion using orthogonal eigenvectors
If $x = \sum_{j=1}^{n} c_j u_j$ and the eigenvectors are orthogonal we get that
\[ u_i^* x = \sum_{j=1}^{n} c_j u_i^* u_j = c_i u_i^* u_i, \]
so that $c_i = u_i^* x / (u_i^* u_i)$.

Exercise ??: Left Eigenpairs
1. If $y^*A = \lambda y^*$ we get that $(y^*A)^* = A^*y = \bar{\lambda}y$, so that $(\bar{\lambda}, y)$ is an eigenpair for $A^*$, so that $\bar{\lambda}$ is an eigenvalue for $A^*$. There is no reason to believe, however, that $A$ and $A^*$ have the same eigenvectors. If the matrix is Hermitian this is clearly the case.
2. Assume that $(y, \lambda)$ is a left eigenpair of $A$, and $(x, \mu)$ is a right eigenpair of $A$, and that $\lambda \neq \mu$. We have that
\[ \lambda y^*x = y^*Ax = \mu y^*x, \]
so that $(\lambda - \mu)y^*x = 0$, so that $y^*x = 0$, so that $x$ and $y$ are orthogonal.
3. If $A$ has a basis of right eigenvectors then it is diagonalizable, so that $A = PDP^{-1}$, where the columns in $P$ are the $x_i$. We also have that $A^* = (P^{-1})^*D^*P^*$. The columns of $(P^{-1})^*$ are thus right eigenvectors for $A^*$, and thus left eigenvectors $y_i$ for $A$. We now
get that $y_i^* x_j$ is the $(i,j)$-entry in the matrix product $((P^{-1})^*)^*P = P^{-1}P = I$, and the result follows.
The coordinates of $v$ in the basis $\{x_j\}$ are given by $P^{-1}v$. Since the rows in $P^{-1}$ are the conjugates of the columns in $(P^{-1})^*$, these coordinates are $y_j^*v$. This proves that
\[ v = \sum_{j=1}^{n} (y_j^*v)x_j. \]
The coordinates of $v$ in the basis $\{y_j\}$ are given by $P^*v$. Since the rows in $P^*$ are the conjugates of the columns in $P$, these coordinates are $x_k^*v$. This proves that
\[ v = \sum_{k=1}^{n} (x_k^*v)y_k, \]
and this completes the proof.
4. We have that $\pi_A(\lambda) = (\lambda - \lambda_1)(\lambda - \lambda_2) = \lambda^2 - 5\lambda + 4$, so that the eigenvalues are $\lambda_1 = 4$ and $\lambda_2 = 1$. Right eigenvectors $x_1, x_2$ can be found by row reducing $A - 4I$ and $A - I$, and left eigenvectors $y_1, y_2$ by row reducing $A^* - 4I$ and $A^* - I$. Normalizing so that $y_1^*x_1 = y_2^*x_2 = 1$, the two expansions are
\[ v = \sum_{j=1}^{2} (y_j^*v)x_j = \sum_{k=1}^{2} (x_k^*v)y_k. \]

Exercise 546: Eigenvalue perturbation for Hermitian matrices
Since a positive semidefinite matrix has no negative eigenvalues, one has $\varepsilon_n \geq 0$ for the smallest eigenvalue $\varepsilon_n$ of the perturbation. It immediately follows from $\lambda_i + \varepsilon_n \leq \mu_i$ that in this case $\mu_i \geq \lambda_i$.

Exercise 548: Hoffman-Wielandt
The matrix $A$ has eigenvalues 0 and 4, and the matrix $B$ has eigenvalue 0 with algebraic multiplicity two. Independently of the choice of the permutation $i_1, \ldots, i_n$, the Hoffman-Wielandt Theorem would yield
\[ 16 = \sum_{j=1}^{2} |\mu_{i_j} - \lambda_j|^2 \leq \sum_{i,j} |a_{ij} - b_{ij}|^2, \]
which clearly cannot be valid. The Hoffman-Wielandt Theorem cannot be applied to these matrices, because $B$ is not normal: $B^HB \neq BB^H$.

Exercise 55: Biorthogonal expansion
The matrix $A$ has characteristic polynomial $\det(A - \lambda I) = (\lambda - 4)(\lambda - 2)$ and right eigenpairs $(\lambda_1, x_1) = (4, [1, 1]^T)$ and $(\lambda_2, x_2) = (2, [1, -1]^T)$. Since the right eigenvectors $x_1, x_2$ are linearly independent, there exist vectors $y_1, y_2$ satisfying $\langle y_i, x_j \rangle = \delta_{ij}$. A vector orthogonal to $x_2$ must be of the form $y_1 = \alpha[1, 1]^T$, and a vector orthogonal to $x_1 = [1, 1]^T$ must be of the form $y_2 = \beta[1, -1]^T$. These choices secure that $\langle y_i, x_j \rangle = 0$ when $i \neq j$. We also must have that
\[ 1 = \langle y_1, x_1 \rangle = \alpha + \alpha = 2\alpha, \qquad 1 = \langle y_2, x_2 \rangle = \beta + \beta = 2\beta, \]
so that $\alpha = \beta = 1/2$, and we can choose the dual basis as $y_1 = \frac{1}{2}[1, 1]^T$ and $y_2 = \frac{1}{2}[1, -1]^T$. Equation (5) then gives us the biorthogonal expansions
\[ v = \langle v, y_1 \rangle x_1 + \langle v, y_2 \rangle x_2 = \tfrac{1}{2}(v_1 + v_2)x_1 + \tfrac{1}{2}(v_1 - v_2)x_2 = \langle v, x_1 \rangle y_1 + \langle v, x_2 \rangle y_2 = (v_1 + v_2)y_1 + (v_1 - v_2)y_2. \]

Exercise 554: Generalized Rayleigh quotient
Suppose $(\lambda, x)$ is a right eigenpair for $A$, so that $Ax = \lambda x$. Then the generalized Rayleigh quotient for $A$ is
\[ R(y, x) := \frac{y^*Ax}{y^*x} = \frac{\lambda y^*x}{y^*x} = \lambda, \]
which is well defined whenever $y^*x \neq 0$. On the other hand, if $(\lambda, y)$ is a left eigenpair for $A$, then $y^*A = \lambda y^*$ and it follows that
\[ R(y, x) := \frac{y^*Ax}{y^*x} = \frac{\lambda y^*x}{y^*x} = \lambda. \]
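As a closing sanity check, the biorthogonality of left and right eigenvectors and the generalized Rayleigh quotient can be verified numerically. The 2-by-2 matrix below is a hypothetical example, not one from the text:

```python
import numpy as np

# Hypothetical 2-by-2 example with eigenvalues 4 and 1.
A = np.array([[3.0, 1.0],
              [2.0, 2.0]])

lam, P = np.linalg.eig(A)        # columns of P are right eigenvectors x_j
Y = np.linalg.inv(P).conj().T    # columns of Y are left eigenvectors y_j

# Biorthogonality: y_i^* x_j = delta_ij, since Y^* P = P^{-1} P = I.
assert np.allclose(Y.conj().T @ P, np.eye(2))

# The generalized Rayleigh quotient y^* A x / (y^* x) returns the eigenvalue
# whenever x is a right eigenvector and y^* x != 0.
i = int(np.argmax(lam.real))     # pick the eigenvalue 4
x = P[:, i]
y = np.array([1.0, 0.0])         # an arbitrary y with y^* x != 0
R = (y.conj() @ A @ x) / (y.conj() @ x)
assert np.isclose(R, lam[i])
```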