Eigenpairs and Similarity Transformations

Numerical Linear Algebra A Solution Manual. Georg Muntingh and Christian Schulz


CHAPTER 5 Eigenpairs and Similarity Transformations

Exercise 5.6: Characteristic polynomial of transpose
Since a matrix and its transpose have the same determinant, we have that

  π_{A^T}(λ) = det(A^T − λI) = det((A − λI)^T) = det(A − λI) = π_A(λ).

Exercise 5.7: Characteristic polynomial of inverse
If Ax = λx we have that A^{-1}(λx) = x, so that A^{-1}x = λ^{-1}x, so that (λ^{-1}, x) is an eigenpair for A^{-1}. Similarly, for powers,

  A^k x = A^{k-1}(Ax) = λA^{k-1}x = λA^{k-2}(Ax) = λ^2 A^{k-2}x = ⋯ = λ^k x.

Exercise 5.8: The power of the eigenvector expansion
If the eigenvectors form a basis we can write x = Σ_j c_j x_j for some scalars c_1, …, c_n. But then

  Ax = Σ_j c_j Ax_j = Σ_j c_j λ_j x_j.

Iterating this we obtain

  A^k x = Σ_j c_j A^k x_j = Σ_j c_j λ_j^k x_j.

Exercise 5.9: Idempotent matrix
Suppose that (λ, x) is an eigenpair of a matrix A satisfying A^2 = A. Then

  λx = Ax = A^2 x = λAx = λ^2 x.

Since any eigenvector is nonzero, one has λ = λ^2, from which it follows that either λ = 0 or λ = 1. We conclude that the eigenvalues of any idempotent matrix can only be zero or one.
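The idempotent case is easy to check numerically. A minimal NumPy sketch (the projection matrix below is an illustrative choice, not taken from the manual):

```python
import numpy as np

# Orthogonal projection onto span{v}: P = v v^T / (v^T v) satisfies P^2 = P.
v = np.array([3.0, 4.0])
P = np.outer(v, v) / (v @ v)
assert np.allclose(P @ P, P)          # idempotent

# As shown above, every eigenvalue of an idempotent matrix is 0 or 1.
assert np.allclose(np.sort(np.linalg.eigvals(P).real), [0.0, 1.0])
```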

Exercise 5.10: Nilpotent matrix
Suppose that (λ, x) is an eigenpair of a matrix A satisfying A^k = 0 for some natural number k. Then

  0 = A^k x = λA^{k-1}x = λ^2 A^{k-2}x = ⋯ = λ^k x.

Since any eigenvector is nonzero, one has λ^k = 0, from which it follows that λ = 0. We conclude that any eigenvalue of a nilpotent matrix is zero.

Exercise 5.11: Eigenvalues of a unitary matrix
Let x be an eigenvector corresponding to λ. Then Ax = λx and, as a consequence, x^*A^* = λ̄x^*. To use that A^*A = I, it is tempting to multiply the left hand sides of these equations, yielding

  |λ|^2 ‖x‖^2 = λ̄λ x^*x = x^*A^*Ax = x^*Ix = ‖x‖^2.

Since x is an eigenvector, it must be nonzero. Nonzero vectors have nonzero norms, and we can therefore divide the above equation by ‖x‖^2, which results in |λ|^2 = 1. Taking square roots we find that |λ| = 1, which is what needed to be shown. Apparently the eigenvalues of any unitary matrix reside on the unit circle in the complex plane.

Exercise 5.12: Nonsingular approximation of a singular matrix
Let λ_1, …, λ_n be the eigenvalues of the matrix A. As the matrix A is singular, its determinant det(A) = λ_1 ⋯ λ_n is zero, implying that one of its eigenvalues is zero. If all the eigenvalues of A are zero, let ε_0 := 1. Otherwise, let ε_0 := min_{λ_i ≠ 0} |λ_i| be the absolute value of the eigenvalue closest to zero. By definition of the eigenvalues, det(A − λI) is zero for λ = λ_1, …, λ_n, and nonzero otherwise. In particular det(A − εI) is nonzero for any ε ∈ (0, ε_0), and A − εI will be nonsingular in this interval. This is what we needed to prove.

Exercise 5.13: Companion matrix
(a) To show that (−1)^n f is the characteristic polynomial π_A of the matrix A, we need to compute

  π_A(λ) = det(A − λI) = det [ −q_{n-1}−λ  −q_{n-2}  ⋯  −q_1  −q_0 ]
                             [      1        −λ     ⋯    0     0  ]
                             [      0         1     ⋯    0     0  ]
                             [      ⋮         ⋮          ⋮     ⋮  ]
                             [      0         0     ⋯    1    −λ  ].

By the rules of determinant evaluation, we can subtract from any column a linear combination of the other columns without changing the value of the determinant. Multiplying columns 1, 2, …, n−1 by λ^{n-1}, λ^{n-2}, …, λ and adding the corresponding linear combination to the final column, we find

  π_A(λ) = det [ −q_{n-1}−λ  −q_{n-2}  ⋯  −q_1  −f(λ) ]
               [      1        −λ     ⋯    0     0   ]
               [      ⋮         ⋮          ⋮     ⋮   ]
               [      0         0     ⋯    1     0   ]  =  (−1)^n f(λ),

where the second equality follows from cofactor expansion along the final column. This yields the statement of the exercise.

(b) Similar to (a), by multiplying rows 2, 3, …, n by λ, λ^2, …, λ^{n-1} and adding the corresponding linear combination to the first row.

Exercise 5.17: Find eigenpair example
As A is a triangular matrix, its eigenvalues correspond to the diagonal entries. One finds two eigenvalues λ_1 and λ_2, the latter with algebraic multiplicity two. Solving Ax_1 = λ_1 x_1 and Ax_2 = λ_2 x_2, one finds (valid choices of) eigenpairs (λ_1, x_1) and (λ_2, x_2). It follows that the eigenvectors span a space of dimension two, and this means that A is defective.

Exercise 5.??: Jordan example
This exercise shows that it matters in which order we solve for the columns of S. One would here need to find the second column first before solving for the other two. With the given 3×3 matrix A and its Jordan form

  J = [ 1 1 0 ]
      [ 0 1 0 ]
      [ 0 0 1 ],

we are asked to find S = [s_1, s_2, s_3] satisfying

  [As_1, As_2, As_3] = AS = SJ = [s_1, s_2, s_3]J = [s_1, s_1 + s_2, s_3].

The equations for the first and third columns say that s_1 and s_3 are eigenvectors for λ = 1, so that they can be found by row reducing A − I. s_2 can be found by solving As_2 = s_1 + s_2, so that (A − I)s_2 = s_1. This means that (A − I)^2 s_2 = (A − I)s_1 = 0, so that s_2 ∈ ker(A − I)^2. A simple computation shows that (A − I)^2 = 0, so that any s_2 will do, but we must also choose s_2 so that (A − I)s_2 = s_1 is an eigenvector of A. Since A − I has rank one, we may choose any s_2 so that (A − I)s_2 is nonzero. In particular we can choose s_2 = e_1, and then s_1 = (A − I)s_2 = (A − I)e_1. For s_3 we can choose any eigenvector not spanned by the s_1 and s_2 which we just defined. All this yields a valid S = [s_1, s_2, s_3].
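The companion-matrix claim of part (a) can be verified numerically. A sketch with NumPy, for an illustrative cubic not taken from the exercise:

```python
import numpy as np

# Companion matrix of f(z) = z^3 + q2 z^2 + q1 z + q0, with the -q's in the
# first row and ones on the subdiagonal, as in the exercise.  Here
# f(z) = (z-1)(z-2)(z-3) = z^3 - 6z^2 + 11z - 6.
q2, q1, q0 = -6.0, 11.0, -6.0
A = np.array([[-q2, -q1, -q0],
              [1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])

# np.poly(A) returns the coefficients of the monic polynomial det(zI - A),
# which by part (a) equals f (the sign (-1)^n is absorbed by monicity).
assert np.allclose(np.poly(A), [1.0, q2, q1, q0])

# The eigenvalues of A are exactly the roots 1, 2, 3 of f.
assert np.allclose(np.sort(np.linalg.eigvals(A).real), [1.0, 2.0, 3.0])
```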

Exercise 5.??: A nilpotent matrix
We show this by induction. For r = 1 the statement is obvious. Define

  E_r = [ 0  I_{m-r} ]
        [ 0    0    ].

We have that

  (E_1 E_r)_{i,j} = Σ_k (E_1)_{i,k} (E_r)_{k,j}.

In the sum on the right hand side only one term can contribute (since any row/column in E_1 and E_r contains only one nonzero entry, being a one). This occurs when there is a k so that k = i + 1 and k + r = j, i.e. when j = i + r + 1. E_{r+1} has its nonzero entries exactly when j = i + r + 1, and this proves that E_{r+1} = E_1 E_r. It now follows that

  (J_m(λ) − λI)^{r+1} = (J_m(λ) − λI)(J_m(λ) − λI)^r = E_1 E_r = E_{r+1},

and the result follows.

Exercise 5.24: Properties of the Jordan form
Let J = S^{-1}AS be the Jordan form of the matrix A. Items 1–3 are easily shown by induction, making use of the rules of block multiplication. For Item 4, write E_m := J_m(λ) − λI_m, with J_m(λ) the Jordan block of order m. By the binomial theorem,

  J_m(λ)^r = (E_m + λI_m)^r = Σ_{k=0}^{r} (r choose k) E_m^k (λI_m)^{r-k} = Σ_{k=0}^{r} (r choose k) λ^{r-k} E_m^k.

Since E_m^k = 0 for any k ≥ m, we obtain

  J_m(λ)^r = Σ_{k=0}^{min{r, m-1}} (r choose k) λ^{r-k} E_m^k.

Exercise 5.25: Powers of a Jordan block
Let S be as in the Jordan example above. J is block-diagonal, so that we can write

  J^n = [ J_2(1)^n  0 ]   [ 1 n 0 ]
        [    0      1 ] = [ 0 1 0 ]
                          [ 0 0 1 ],

where we used Property 4 of the previous exercise on the upper left block. It follows that

  A^100 = (SJS^{-1})^100 = S J^100 S^{-1},

with J^100 as above for n = 100; carrying out the two matrix products yields the entries of A^100 explicitly.
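Property 4, the binomial formula for powers of a Jordan block, can be checked against direct matrix powering. A NumPy sketch (block size and eigenvalue are illustrative choices):

```python
import numpy as np
from math import comb

def jordan_block(lam, m):
    # J_m(lam): lam on the diagonal, ones on the superdiagonal.
    return lam * np.eye(m) + np.eye(m, k=1)

def jordan_block_power(lam, m, r):
    # Binomial formula: J_m(lam)^r = sum_{k=0}^{min(r, m-1)} C(r,k) lam^(r-k) E^k,
    # where E = J_m(lam) - lam*I is nilpotent with E^m = 0.
    E = np.eye(m, k=1)
    return sum(comb(r, k) * lam ** (r - k) * np.linalg.matrix_power(E, k)
               for k in range(min(r, m - 1) + 1))

lam, m, r = 2.0, 4, 7
assert np.allclose(np.linalg.matrix_power(jordan_block(lam, m), r),
                   jordan_block_power(lam, m, r))
```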

Exercise 5.26: The minimal polynomial
Write the Jordan form of A as J = S^{-1}AS = diag(U_1, …, U_k), where U_i = diag(J_{m_{i,1}}(λ_i), …, J_{m_{i,g_i}}(λ_i)) collects the Jordan blocks of the eigenvalue λ_i, and set m_i := max_{1≤j≤g_i} m_{i,j}, so that μ_A(λ) = Π_i (λ − λ_i)^{m_i}.

1. For each i, (λ − λ_i)^{a_i} = (λ − λ_i)^{Σ_j m_{i,j}} divides π_A(λ). Since Σ_{j=1}^{g_i} m_{i,j} ≥ max_{1≤j≤g_i} m_{i,j} = m_i, also (λ − λ_i)^{m_i} divides π_A(λ). From this it follows that also μ_A(λ) divides π_A(λ).

2. We have that

  μ_A(A) = Π_i (λ_i I − A)^{m_i} = Π_i (λ_i I − SJS^{-1})^{m_i} = S (Π_i (λ_i I − J)^{m_i}) S^{-1} = S μ_A(J) S^{-1}.

It follows that μ_A(A) = 0 if and only if μ_A(J) = 0. We have that

  μ_A(J) = Π_i (λ_i I − J)^{m_i}
         = Π_i (diag(λ_i I − U_1, …, λ_i I − U_k))^{m_i}
         = Π_i diag((λ_i I − U_1)^{m_i}, …, (λ_i I − U_k)^{m_i})
         = diag(Π_i (λ_i I − U_1)^{m_i}, …, Π_i (λ_i I − U_k)^{m_i}).

Now we also have

  (λ_i I − U_i)^{m_i} = (λ_i I − diag(J_{m_{i,1}}(λ_i), …, J_{m_{i,g_i}}(λ_i)))^{m_i}
                      = diag((λ_i I − J_{m_{i,1}}(λ_i))^{m_i}, …, (λ_i I − J_{m_{i,g_i}}(λ_i))^{m_i}) = 0,

since

  (λ_i I − J_{m_{i,j}}(λ_i))^{m_i} = (λ_i I − J_{m_{i,j}}(λ_i))^{m_{i,j}} (λ_i I − J_{m_{i,j}}(λ_i))^{m_i − m_{i,j}} = 0.

The ith factor in the ith diagonal block of μ_A(J) thus vanishes, so that every diagonal block of μ_A(J) is zero, and μ_A(J) = 0. It follows that μ_A(A) = 0.

3. Suppose now that p(A) = 0. We can write p(A) = C Π_{i=1}^{r} (k_i I − A)^{s_i}, where the k_i are the zeros of p, with multiplicities s_i. As above it follows that p(A) = 0 if and only if p(J) = 0. Factor p(J) as above to obtain

  p(J) = Π_{i=1}^{r} diag((k_i I − U_1)^{s_i}, …, (k_i I − U_k)^{s_i}).

Note that k_i I − U_j = diag(k_i I − J_{m_{j,1}}(λ_j), …, k_i I − J_{m_{j,g_j}}(λ_j)) is upper triangular with k_i − λ_j on the diagonal. If k_i ≠ λ_j, then k_i I − U_j must be invertible, but then also (k_i I − U_j)^{s_i} is invertible. In order for p(J) = 0 we must then have that, for each j, there exists a t so that k_t = λ_j. The qth diagonal block entry in Π_i (k_i I − U_j)^{s_i} is

  Π_{i=1}^{r} (k_i I − J_{m_{j,q}}(λ_j))^{s_i} = (k_t I − J_{m_{j,q}}(λ_j))^{s_t} Π_{i≠t} (k_i I − J_{m_{j,q}}(λ_j))^{s_i}
                                              = (λ_j I − J_{m_{j,q}}(λ_j))^{s_t} Π_{i≠t} (k_i I − J_{m_{j,q}}(λ_j))^{s_i}.

The last product here is invertible (all k_i ≠ λ_j when i ≠ t), so that we must have (λ_j I − J_{m_{j,q}}(λ_j))^{s_t} = 0 in order for p(J) = 0. We know from the exercises above that this happens only when s_t ≥ m_{j,q}. Since q was arbitrary we obtain that s_t ≥ m_j, i.e. that λ_j is a zero of p of multiplicity at least m_j. Since this applies for any j, it follows that the minimal polynomial divides p, and the result follows.

4. Since μ_A divides π_A by Item 1, we can write π_A = μ_A q for some polynomial q, so that π_A(A) = μ_A(A) q(A) = 0.

Exercise 5.27: Big Jordan example
The matrix A has Jordan form A = SJS^{-1}, with the Jordan matrix J and the nonsingular matrix S determined as in the previous exercises.

Exercise 5.30: Schur decomposition example
The matrix U is unitary, as U^*U = U^T U = I. One directly verifies that R := U^T AU is upper triangular, and therefore A = URU^T is a Schur decomposition of A.

Exercise 5.34: Skew-Hermitian matrix
By definition, a matrix C is skew-Hermitian if C^* = −C.

(⇒): Suppose that C = A + iB, with A, B ∈ ℝ^{m×m}, is skew-Hermitian. Then

  −A − iB = −C = C^* = (A + iB)^* = A^T − iB^T,

which implies that A^T = −A and B^T = B (use that two complex matrices coincide if and only if their real parts coincide and their imaginary parts coincide). In other words, A is skew-Hermitian and B is real symmetric.

(⇐): Suppose that we are given matrices A, B ∈ ℝ^{m×m} such that A is skew-Hermitian and B is real symmetric. Let C = A + iB. Then

  C^* = (A + iB)^* = A^T − iB^T = −A − iB = −(A + iB) = −C,

meaning that C is skew-Hermitian.

Exercise 5.35: Eigenvalues of a skew-Hermitian matrix
Let A be a skew-Hermitian matrix and consider a Schur triangularization A = URU^* of A. Then

  R = U^*AU = U^*(−A^*)U = −U^*A^*U = −(U^*AU)^* = −R^*.

Since R differs from A by a similarity transformation, their eigenvalues coincide (use the multiplicative property of the determinant to show that det(A − λI) = det(U^*) det(URU^* − λI) det(U) = det(R − λI)). As R is a triangular matrix, its eigenvalues λ_i appear on its diagonal. From the equation R = −R^* it then follows that λ_i = −λ̄_i, implying that each λ_i is purely imaginary.

Exercise 5.36: Eigenvector expansion using orthogonal eigenvectors
If x = Σ_{j=1}^{n} c_j u_j and the eigenvectors are orthogonal, we get that

  u_i^* x = Σ_j c_j u_i^* u_j = c_i u_i^* u_i,

so that c_i = u_i^* x / (u_i^* u_i).

Exercise ??: Left eigenpairs
1. If y^*A = λy^*, we get that (y^*A)^* = A^*y = λ̄y, so that (λ̄, y) is an eigenpair for A^*, so that λ̄ is an eigenvalue of A^*. There is no reason to believe, however, that A and A^* have the same eigenvectors. If the matrix is Hermitian this is clearly the case.

2. Assume that (y, λ) is a left eigenpair of A, that (x, μ) is a right eigenpair of A, and that λ ≠ μ. We have that

  λy^*x = y^*Ax = μy^*x,

so that (λ − μ)y^*x = 0, so that y^*x = 0, so that x and y are orthogonal.

3. If A has a basis of right eigenvectors then it is diagonalizable, so that A = PDP^{-1}, where the columns of P are the x_i. We also have that A^* = (P^{-1})^* D^* P^*. The columns of (P^{-1})^* are thus right eigenvectors for A^*, and thus left eigenvectors y_i for A. We now get that y_i^* x_j is the ij-entry in the matrix product ((P^{-1})^*)^* P = P^{-1}P = I, and the result follows.

The coordinates of v in the basis {x_j} are given by P^{-1}v. Since the rows of P^{-1} are the conjugate transposes of the columns of (P^{-1})^*, these coordinates are the y_j^* v. This proves that

  v = Σ_j (y_j^* v) x_j.

The coordinates of v in the basis {y_j} are given by P^* v. Since the rows of P^* are the conjugate transposes of the columns of P, these coordinates are the x_k^* v. This proves that

  v = Σ_{k=1}^{n} (x_k^* v) y_k,

and this completes the proof.

4. We have that π_A(λ) = λ^2 − 5λ + 4 = (λ − 1)(λ − 4), so that the eigenvalues are λ_1 = 4 and λ_2 = 1. A right eigenvector for λ_1 = 4 can be found by row reducing A − 4I, and a right eigenvector for λ_2 = 1 by row reducing A − I; left eigenvectors for λ_1 and λ_2 are found in the same way from A^* − 4I and A^* − I. With these choices one checks that y_1^* x_1 = 1 = y_2^* x_2, so that the two expansions are

  v = Σ_j (y_j^* v) x_j = Σ_k (x_k^* v) y_k.
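The construction of left eigenvectors from the inverse of the right-eigenvector matrix, and the biorthogonality y_i^* x_j = δ_ij, can be sketched with NumPy (the matrix below is an illustrative choice, not the one from the exercise):

```python
import numpy as np

# For diagonalizable A = P D P^{-1}, the columns of (P^{-1})^* are left
# eigenvectors of A, and y_i^* x_j = delta_ij.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])      # illustrative matrix with eigenvalues 2 and 3

D, P = np.linalg.eig(A)         # right eigenvectors are the columns of P
Y = np.linalg.inv(P).conj().T   # left eigenvectors are the columns of Y

for i in range(2):              # each y_i satisfies y_i^* A = lambda_i y_i^*
    assert np.allclose(Y[:, i].conj() @ A, D[i] * Y[:, i].conj())

assert np.allclose(Y.conj().T @ P, np.eye(2))   # biorthogonality Y^* P = I
```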

Exercise 5.46: Eigenvalue perturbation for Hermitian matrices
Here α_1 ≥ ⋯ ≥ α_n, β_1 ≥ ⋯ ≥ β_n, and ε_1 ≥ ⋯ ≥ ε_n denote the ordered eigenvalues of A, B = A + E, and E, respectively. Since a positive semidefinite matrix has no negative eigenvalues, one has ε_n ≥ 0. It immediately follows from α_i + ε_n ≤ β_i that in this case β_i ≥ α_i.

Exercise 5.48: Hoffman-Wielandt
The matrix A has eigenvalues 0 and 4, and the matrix B has the eigenvalue 0 with algebraic multiplicity two. Independently of the choice of the permutation i_1, …, i_n, the Hoffman-Wielandt theorem would yield

  16 = Σ_j |μ_{i_j} − λ_j|^2 ≤ Σ_{i,j} |a_{ij} − b_{ij}|^2,

which clearly cannot be valid. The Hoffman-Wielandt theorem cannot be applied to these matrices, because B is not normal: B^H B ≠ B B^H.

Exercise 5.??: Biorthogonal expansion
The matrix A has characteristic polynomial det(A − λI) = (λ − 4)(λ − 2) and right eigenpairs (λ_1, x_1) = (4, [1, 1]^T) and (λ_2, x_2) = (2, [1, −1]^T). Since the right eigenvectors x_1, x_2 are linearly independent, there exist vectors y_1, y_2 satisfying ⟨y_i, x_j⟩ = δ_ij. A vector orthogonal to x_2 must be of the form y_1 = α[1, 1]^T, and a vector orthogonal to x_1 = [1, 1]^T must be of the form y_2 = β[1, −1]^T. These choices secure that ⟨y_i, x_j⟩ = 0 when i ≠ j. We also must have that

  1 = ⟨y_1, x_1⟩ = α(1 + 1) = 2α,   1 = ⟨y_2, x_2⟩ = β(1 + 1) = 2β,

so that α = β = 1/2, and we can choose the dual basis as y_1 = (1/2)[1, 1]^T and y_2 = (1/2)[1, −1]^T. The biorthogonal expansion formula then gives us the biorthogonal expansions

  v = ⟨v, y_1⟩x_1 + ⟨v, y_2⟩x_2 = (1/2)(v_1 + v_2)x_1 + (1/2)(v_1 − v_2)x_2
    = ⟨v, x_1⟩y_1 + ⟨v, x_2⟩y_2 = (v_1 + v_2)y_1 + (v_1 − v_2)y_2.

Exercise 5.54: Generalized Rayleigh quotient
Suppose (λ, x) is a right eigenpair for A, so that Ax = λx. Then the generalized Rayleigh quotient for A is

  R(y, x) := y^*Ax / (y^*x) = λ y^*x / (y^*x) = λ,

which is well defined whenever y^*x ≠ 0. On the other hand, if (λ, y) is a left eigenpair for A, then y^*A = λy^* and it follows that

  R(y, x) := y^*Ax / (y^*x) = λ y^*x / (y^*x) = λ.
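The generalized Rayleigh quotient can be checked numerically: for a right eigenvector x it returns the eigenvalue regardless of the choice of y, as long as y^*x ≠ 0. A NumPy sketch with an illustrative matrix (not the one from the exercise):

```python
import numpy as np

# R(y, x) = y^* A x / (y^* x) equals the eigenvalue lambda whenever x is a
# right eigenvector and y^* x != 0.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])      # illustrative symmetric matrix
lam = 3.0
x = np.array([1.0, 1.0])        # A x = 3 x
assert np.allclose(A @ x, lam * x)

y = np.array([2.0, 5.0])        # arbitrary y with y^* x = 7 != 0
R = (y @ A @ x) / (y @ x)
assert np.isclose(R, lam)
```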