4 Diagonalization

4.1 Change of basis

Let $B = (b_1, b_2)$ be an ordered basis for $\mathbf{R}^2$ and let $B = [b_1\; b_2]$ (the matrix with $b_1$ and $b_2$ as columns). If $x$ is a vector in $\mathbf{R}^2$, then its coordinate vector $[x]_B$ relative to $B$ satisfies the formula

$$B[x]_B = x.$$

We can see this by writing $[x]_B = [\alpha_1, \alpha_2]^T$ to get

$$B[x]_B = [b_1\; b_2]\begin{bmatrix}\alpha_1\\ \alpha_2\end{bmatrix} = \alpha_1 b_1 + \alpha_2 b_2 = x.$$

The columns of $B$ are linearly independent (since they form a basis), so $B$ has full rank and is therefore invertible. This allows us to solve the equation above for the coordinate vector:

$$[x]_B = B^{-1}x.$$

4.1.1 Example. Let $B = (b_1, b_2)$ be the ordered basis of $\mathbf{R}^2$ with $b_1 = [1, 2]^T$ and $b_2 = [2, 3]^T$. Use the formula above to find the coordinate vector $[x]_B$, where $x = [1, -1]^T$.

Solution. We have $B = [b_1\; b_2] = \begin{bmatrix}1&2\\2&3\end{bmatrix}$, so

$$[x]_B = B^{-1}x = \begin{bmatrix}-3&2\\2&-1\end{bmatrix}\begin{bmatrix}1\\-1\end{bmatrix} = \begin{bmatrix}-5\\3\end{bmatrix}.$$

(Check: $(-5)b_1 + (3)b_2 = -5\begin{bmatrix}1\\2\end{bmatrix} + 3\begin{bmatrix}2\\3\end{bmatrix} = \begin{bmatrix}1\\-1\end{bmatrix} = x$.)

Let $C = (c_1, c_2)$ be another ordered basis for $\mathbf{R}^2$ and put $C = [c_1\; c_2]$. For any vector $x$ in $\mathbf{R}^2$, the coordinate vectors of $x$ relative to $B$ and $C$ satisfy the equation

$$B[x]_B = C[x]_C.$$
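The formula $[x]_B = B^{-1}x$ is easy to sanity-check numerically. Here is a minimal Python sketch with a hand-coded $2 \times 2$ inverse; the basis vectors and $x$ are the illustrative values used in the example above.

```python
# Sanity check of [x]_B = B^(-1) x for a 2x2 basis matrix.
# The basis vectors b1 = (1, 2), b2 = (2, 3) and x = (1, -1) are
# illustrative values matching the worked example above.

def inv2(M):
    """Inverse of a 2x2 matrix [[a, b], [c, d]] via the adjugate formula."""
    (a, b), (c, d) = M
    det = a * d - b * c
    assert det != 0, "columns must be linearly independent"
    return [[d / det, -b / det], [-c / det, a / det]]

def matvec(M, v):
    """Product of a 2x2 matrix and a length-2 vector."""
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

B = [[1, 2],
     [2, 3]]          # columns are b1 and b2
x = [1, -1]

xB = matvec(inv2(B), x)   # coordinate vector of x relative to B
print(xB)                 # -> [-5.0, 3.0]

# Check: (-5) b1 + 3 b2 reconstructs x.
recon = [xB[0] * 1 + xB[1] * 2, xB[0] * 2 + xB[1] * 3]
print(recon)              # -> [1.0, -1.0]
```

Because the basis vectors are the matrix's columns, the reconstruction step is exactly the identity $B[x]_B = x$ from the text.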
This equation holds since both sides equal $x$, by the first equation of the section. We can solve for $[x]_B$ to get

$$[x]_B = \underbrace{B^{-1}C}_{P}\,[x]_C.$$

The matrix $P = B^{-1}C$ is called the transformation matrix from $C$ to $B$.

4.1.2 Example. Let $B = (b_1, b_2)$ and $C = (c_1, c_2)$ be the ordered bases of $\mathbf{R}^2$ with

$$b_1 = \begin{bmatrix}1\\0\end{bmatrix},\quad b_2 = \begin{bmatrix}2\\1\end{bmatrix},\quad c_1 = \begin{bmatrix}3\\1\end{bmatrix},\quad c_2 = \begin{bmatrix}5\\2\end{bmatrix},$$

and let $x$ and $y$ be vectors in $\mathbf{R}^2$.

(a) Find the transformation matrix $P$ from $C$ to $B$.
(b) Given that $[x]_C = [3, 6]^T$, find $[x]_B$.
(c) Given that $[y]_B = [7, 2]^T$, find $[y]_C$.

Solution. (a) We have

$$P = B^{-1}C = \begin{bmatrix}1&2\\0&1\end{bmatrix}^{-1}\begin{bmatrix}3&5\\1&2\end{bmatrix} = \begin{bmatrix}1&-2\\0&1\end{bmatrix}\begin{bmatrix}3&5\\1&2\end{bmatrix} = \begin{bmatrix}1&1\\1&2\end{bmatrix}.$$

(b) We have

$$[x]_B = P[x]_C = \begin{bmatrix}1&1\\1&2\end{bmatrix}\begin{bmatrix}3\\6\end{bmatrix} = \begin{bmatrix}9\\15\end{bmatrix}.$$

(c) Multiplying both sides of the change of basis formula by $P^{-1}$, we get

$$[y]_C = P^{-1}[y]_B = \begin{bmatrix}2&-1\\-1&1\end{bmatrix}\begin{bmatrix}7\\2\end{bmatrix} = \begin{bmatrix}12\\-5\end{bmatrix}.$$

Here is the general formulation for a change of basis:
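The transformation-matrix bookkeeping can be sketched the same way. The bases below are the illustrative values assumed in the example above; the identity being tested is that $B[x]_B$ and $C[x]_C$ name the same vector $x$.

```python
# Transformation matrix P = B^(-1) C, and coordinate conversion from
# C-coordinates to B-coordinates. Bases are illustrative example values.

def inv2(M):
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def matmul2(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matvec(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

B = [[1, 2], [0, 1]]   # columns b1 = (1, 0), b2 = (2, 1)
C = [[3, 5], [1, 2]]   # columns c1 = (3, 1), c2 = (5, 2)

P = matmul2(inv2(B), C)
print(P)               # -> [[1.0, 1.0], [1.0, 2.0]]

xC = [3, 6]
xB = matvec(P, xC)     # -> [9.0, 15.0]

# Both coordinate vectors describe the same vector x = (39, 15):
print(matvec(B, xB), matvec(C, xC))
```

Solving $[y]_C = P^{-1}[y]_B$ for the reverse direction works with the same helpers, since `inv2(P)` is the transformation matrix from $B$ to $C$.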
Change of basis. Let $B = (b_1, b_2, \ldots, b_n)$ and $C = (c_1, c_2, \ldots, c_n)$ be two ordered bases for $\mathbf{R}^n$, and put $B = [b_1\; b_2\; \cdots\; b_n]$ and $C = [c_1\; c_2\; \cdots\; c_n]$. For any vector $x$ in $\mathbf{R}^n$, we have

$$B[x]_B = C[x]_C,$$

so that

$$[x]_B = \underbrace{B^{-1}C}_{P}\,[x]_C.$$

The matrix $P = B^{-1}C$ is called the transformation matrix from $C$ to $B$.

4.2 Linear function and basis change

Theorem. Let $L : \mathbf{R}^n \to \mathbf{R}^n$ be a linear function, let $B$ and $C$ be ordered bases for $\mathbf{R}^n$, let $P$ be the transformation matrix from $C$ to $B$, and let $A$ be the matrix of $L$ relative to $B$. The matrix of $L$ relative to $C$ is $P^{-1}AP$; that is, for all $x \in \mathbf{R}^n$,

$$[L(x)]_C = P^{-1}AP[x]_C.$$

Proof. For all $x \in \mathbf{R}^n$ we have

$$P^{-1}AP[x]_C = P^{-1}A[x]_B \qquad \text{(definition of transformation matrix)}$$
$$= P^{-1}[L(x)]_B \qquad \text{($A$ is matrix of $L$ relative to $B$)}$$
$$= [L(x)]_C \qquad \text{($P^{-1}$ is transformation matrix from $B$ to $C$).}$$

4.2.1 Example. Let $L : \mathbf{R}^2 \to \mathbf{R}^2$ be the linear function given by

$$L(x) = \begin{bmatrix}3x_1 + x_2\\ x_1 + 3x_2\end{bmatrix}.$$

(a) Find the matrix $A$ of $L$ (relative to the standard ordered basis $E = (e_1, e_2)$).
(b) Find the transformation matrix $P$ from $C = ([1, 1]^T, [1, -1]^T)$ to $E$.
(c) Use the theorem to find the matrix of $L$ relative to $C$.

Solution. (a) We have

$$A = [L(e_1)\; L(e_2)] = \begin{bmatrix}3&1\\1&3\end{bmatrix}.$$

(b) Writing $E = [e_1\; e_2]$, we see that $E = I$, so that

$$P = E^{-1}C = I^{-1}C = C = \begin{bmatrix}1&1\\1&-1\end{bmatrix}.$$

(c) According to the theorem, the matrix of $L$ relative to $C$ is

$$P^{-1}AP = C^{-1}AC = \begin{bmatrix}\tfrac12&\tfrac12\\[2pt]\tfrac12&-\tfrac12\end{bmatrix}\begin{bmatrix}3&1\\1&3\end{bmatrix}\begin{bmatrix}1&1\\1&-1\end{bmatrix} = \begin{bmatrix}4&0\\0&2\end{bmatrix}.$$

In this example, the matrix of $L$ relative to the basis $C$ turned out to be much simpler than the matrix relative to the standard basis. The reason is that $C$ is a basis for $\mathbf{R}^2$ consisting of eigenvectors of $L$. We investigate such bases in the next section.

4.3 Method for diagonalization

The matrix

$$\begin{bmatrix}2&0&0\\0&1&0\\0&0&3\end{bmatrix}$$

is an example of a diagonal matrix. In general, a diagonal matrix is a square matrix having the property that every entry not on the main diagonal is 0.

An $n \times n$ matrix $A$ is diagonalizable if there exists an invertible $n \times n$ matrix $P$ such that $P^{-1}AP = D$, where $D$ is a diagonal matrix.
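The theorem, and the example's diagonal result, can be verified numerically. A minimal sketch, assuming the example's reconstructed values $A$ and $C$ (the columns of $C$ are eigenvectors of $A$, which is why the conjugate comes out diagonal):

```python
# Verify that the matrix of L relative to C, namely C^(-1) A C, is
# diagonal. A and C are the values assumed in the example above.

def inv2(M):
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def matmul2(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[3, 1], [1, 3]]
C = [[1, 1], [1, -1]]   # columns are eigenvectors of A

D = matmul2(matmul2(inv2(C), A), C)
print(D)   # -> [[4.0, 0.0], [0.0, 2.0]]
```

The diagonal entries are the eigenvalues of $A$, one per eigenvector column of $C$.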
Theorem. Let $A$ be an $n \times n$ matrix. The matrix $A$ is diagonalizable if and only if there exists a basis for $\mathbf{R}^n$ consisting of eigenvectors of $A$. In this case, if $P$ is the matrix with the eigenvectors as columns, then $P^{-1}AP = D$ with $D$ diagonal.

Proof. We will prove only the case $n = 2$. Assume that there exists a basis $\{b_1, b_2\}$ for $\mathbf{R}^2$ consisting of eigenvectors of $A$. Let $\lambda_1$ and $\lambda_2$ be the corresponding eigenvalues, put $P = [b_1\; b_2]$, and write

$$D = \begin{bmatrix}\lambda_1&0\\0&\lambda_2\end{bmatrix}.$$

We have

$$AP = A[b_1\; b_2] = [Ab_1\; Ab_2] = [\lambda_1 b_1\; \lambda_2 b_2] = [b_1\; b_2]\begin{bmatrix}\lambda_1&0\\0&\lambda_2\end{bmatrix} = PD.$$

Therefore, $P^{-1}AP = D$. Conversely, if $P$ diagonalizes $A$, then the equation above shows that the columns of $P$ must be eigenvectors of $A$, and these columns are linearly independent since $P$ is invertible.

4.3.1 Example. Let $A = \begin{bmatrix}1&1\\-2&4\end{bmatrix}$. If possible, find an invertible matrix $P$ such that $P^{-1}AP = D$, where $D$ is a diagonal matrix.

Solution. According to the theorem, such a matrix $P$ exists if and only if there exists a basis for $\mathbf{R}^2$ consisting of eigenvectors of $A$. The characteristic polynomial of $A$ is

$$\det(A - \lambda I) = \begin{vmatrix}1-\lambda&1\\-2&4-\lambda\end{vmatrix} = (1-\lambda)(4-\lambda) + 2 = \lambda^2 - 5\lambda + 6 = (\lambda - 2)(\lambda - 3).$$

The eigenvalues of $A$ are the zeros of this polynomial, namely, $\lambda = 2, 3$. Next, the $\lambda$-eigenspace of $A$ is the solution set of the equation $(A - \lambda I)x = 0$:

$(\lambda = 2)$

$$A - 2I = \begin{bmatrix}-1&1\\-2&2\end{bmatrix} \sim \begin{bmatrix}1&-1\\0&0\end{bmatrix},$$

so the 2-eigenspace of $A$ is $\{[t, t]^T \mid t \in \mathbf{R}\}$. Letting $t = 1$, we get a 2-eigenvector $[1, 1]^T$;
$(\lambda = 3)$

$$A - 3I = \begin{bmatrix}-2&1\\-2&1\end{bmatrix} \sim \begin{bmatrix}1&-\tfrac12\\0&0\end{bmatrix},$$

so the 3-eigenspace of $A$ is $\{[\tfrac12 t, t]^T \mid t \in \mathbf{R}\}$. Letting $t = 2$ (to avoid fractions), we get a 3-eigenvector $[1, 2]^T$.

The eigenvectors $[1, 1]^T$ and $[1, 2]^T$ of $A$ form a basis for $\mathbf{R}^2$ (neither is a multiple of the other, so they are linearly independent; since $\dim \mathbf{R}^2 = 2$, they form a basis). According to the theorem, the matrix $P$ with these vectors as columns should have the indicated property:

$$P = \begin{bmatrix}1&1\\1&2\end{bmatrix}.$$

Computing, we get

$$P^{-1}AP = \begin{bmatrix}2&-1\\-1&1\end{bmatrix}\begin{bmatrix}1&1\\-2&4\end{bmatrix}\begin{bmatrix}1&1\\1&2\end{bmatrix} = \begin{bmatrix}2&0\\0&3\end{bmatrix} = D.$$

(Note that the eigenvalues of $A$ appear along the main diagonal of $D$.)

4.3.2 Example. Is the matrix $A = \begin{bmatrix}0&1\\-1&0\end{bmatrix}$ diagonalizable? Explain.

Solution. The characteristic polynomial of $A$ is

$$\det(A - \lambda I) = \begin{vmatrix}-\lambda&1\\-1&-\lambda\end{vmatrix} = \lambda^2 + 1.$$

Since this polynomial has no zeros (in $\mathbf{R}$), the matrix $A$ has no eigenvalues. Therefore, there is no basis for $\mathbf{R}^2$ consisting of eigenvectors of $A$, and $A$ is not diagonalizable, according to the theorem.

(Here is another way to see that $A$ has no eigenvalues: $A$ is the matrix of the linear function "rotation by 90° clockwise." Since this function sends no nonzero vector to a multiple of itself, it has no eigenvalues, and therefore $A$ has no eigenvalues either.)
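For $2 \times 2$ matrices the whole method can be automated: read the eigenvalues off the characteristic polynomial $\lambda^2 - (\operatorname{tr} A)\,\lambda + \det A$, form eigenvector columns, and check $P^{-1}AP$. A sketch using the example's matrix as reconstructed here; the eigenvector formula $(b, \lambda - a)^T$ is a standard shortcut, valid when the upper-right entry $b \neq 0$ (it is not part of the text's method).

```python
import math

# Diagonalize a 2x2 matrix A = [[a, b], [c, d]] with distinct real
# eigenvalues. For eigenvalue lam, (b, lam - a) is an eigenvector if b != 0.

def inv2(M):
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def matmul2(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 1], [-2, 4]]        # the example's matrix (reconstructed values)
a, b = A[0]
c, d = A[1]
tr, det = a + d, a * d - b * c
disc = tr * tr - 4 * det     # discriminant of the characteristic polynomial
lam1 = (tr - math.sqrt(disc)) / 2
lam2 = (tr + math.sqrt(disc)) / 2

P = [[b, b], [lam1 - a, lam2 - a]]   # eigenvector columns
D = matmul2(matmul2(inv2(P), A), P)
print(lam1, lam2)   # -> 2.0 3.0
print(D)            # -> [[2.0, 0.0], [0.0, 3.0]]
```

When the discriminant is negative, as for the rotation matrix in the second example, `math.sqrt` raises an error: there are no real eigenvalues and no such $P$ exists.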
4.4 Power of matrix

It is often necessary to compute high powers of a square matrix $A$, such as $A^{10}$. Just multiplying the matrix $A$ by itself over and over again can be quite tedious. However, if $A$ is diagonalizable, then there is an observation that greatly reduces the number of computations.

Let $A$ be a diagonalizable matrix, so that $P^{-1}AP = D$ with $D$ diagonal. Solving this equation for $A$, we get $A = PDP^{-1}$. Note that

$$A^2 = AA = (PDP^{-1})(PDP^{-1}) = PD(P^{-1}P)DP^{-1} = PDIDP^{-1} = PDDP^{-1} = PD^2P^{-1},$$

and, in less detail,

$$A^3 = (PDP^{-1})(PDP^{-1})(PDP^{-1}) = PD^3P^{-1}.$$

In general,

$$A^n = PD^nP^{-1}.$$

Note that since $D$ is diagonal, the power $D^n$ is obtained by raising each diagonal entry to the $n$th power.

4.4.1 Example. Compute $A^{10}$, where $A = \begin{bmatrix}1&1\\-2&4\end{bmatrix}$.

Solution. In Example 4.3.1 we found a matrix $P$ such that $P^{-1}AP = D$ with $D$ diagonal. Using the results of that example, we get

$$A^{10} = PD^{10}P^{-1} = \begin{bmatrix}1&1\\1&2\end{bmatrix}\begin{bmatrix}2^{10}&0\\0&3^{10}\end{bmatrix}\begin{bmatrix}2&-1\\-1&1\end{bmatrix} = \begin{bmatrix}2\cdot 2^{10} - 3^{10} & -2^{10} + 3^{10}\\ 2\cdot 2^{10} - 2\cdot 3^{10} & -2^{10} + 2\cdot 3^{10}\end{bmatrix} = \begin{bmatrix}-57001 & 58025\\ -116050 & 117074\end{bmatrix}.$$
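The shortcut $A^n = PD^nP^{-1}$ can be checked against brute-force repeated multiplication. A sketch using the example's reconstructed matrices; since $\det P = 1$, the adjugate formula makes $P^{-1}$ integral and all arithmetic stays exact.

```python
# Compute A^10 two ways: via A^n = P D^n P^(-1), and by repeated
# multiplication. Matrices are the example's (reconstructed) values.

def matmul2(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A    = [[1, 1], [-2, 4]]
P    = [[1, 1], [1, 2]]          # eigenvector columns for lambda = 2, 3
Pinv = [[2, -1], [-1, 1]]        # adjugate / det, with det P = 1
D10  = [[2**10, 0], [0, 3**10]]  # D^n: raise each diagonal entry to the n-th power

fast = matmul2(matmul2(P, D10), Pinv)

slow = [[1, 0], [0, 1]]          # identity
for _ in range(10):
    slow = matmul2(slow, A)

print(fast)          # -> [[-57001, 58025], [-116050, 117074]]
print(fast == slow)  # -> True
```

The fast route costs two fixed-size matrix products plus two scalar powers, versus ten full products for the brute-force loop; the gap widens quickly as the exponent grows.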
4 – Exercises

4–1 Let $B = (b_1, b_2)$ and $C = (c_1, c_2)$ be the ordered bases of $\mathbf{R}^2$ with

$$b_1 = \begin{bmatrix}1\\0\end{bmatrix},\quad b_2 = \begin{bmatrix}1\\1\end{bmatrix},\quad c_1 = \begin{bmatrix}5\\2\end{bmatrix},\quad c_2 = \begin{bmatrix}4\\3\end{bmatrix},$$

and let $x$ be a vector in $\mathbf{R}^2$.

(a) Find the transformation matrix $P$ from $C$ to $B$.
(b) Given that $[x]_C = [1, 2]^T$, use part (a) to find $[x]_B$, and check your answer by showing that both coordinate vectors yield the same vector $x$.

4–2 Let $L : \mathbf{R}^2 \to \mathbf{R}^2$ be reflection across the line $x_2 = x_1/\sqrt{3}$ (the line through the origin making an angle of 30° with the $x_1$-axis).

(a) Find the matrix $A$ of $L$ (relative to the standard ordered basis $E = (e_1, e_2)$).
(b) Find the transformation matrix $P$ from $C = ([\sqrt{3}, 1]^T, [-1, \sqrt{3}]^T)$ to $E$.
(c) Use the theorem in Section 4.2 to find the matrix of $L$ relative to $C$.

Hint: Recall the 30-60-90 triangle with sides $1$, $\sqrt{3}$, and $2$.

4–3 Let $A = \begin{bmatrix}5&6\\-2&-2\end{bmatrix}$.

(a) If possible, find an invertible matrix $P$ such that $P^{-1}AP = D$, where $D$ is a diagonal matrix.
(b) Find $A^{10}$. (Hint: Use the formula in Section 4.4.)