Eigenvalues and Eigenvectors

Definition 0.1. Let $A \in \mathbb{R}^{n\times n}$ be an $n \times n$ real matrix. A number $\lambda \in \mathbb{R}$ is a real eigenvalue of $A$ if there exists a nonzero vector $\vec v \in \mathbb{R}^n$ such that $A\vec v = \lambda \vec v$. The vector $\vec v$ is called an eigenvector of $A$ for $\lambda$.

Example 0.2. Let
$$A = \begin{pmatrix} 3 & -2 \\ 1 & 0 \end{pmatrix} \quad\text{and}\quad \vec v = (10, 5)^T \in \mathbb{R}^2.$$
We compute
$$A\vec v = \begin{pmatrix} 20 \\ 10 \end{pmatrix} = 2\vec v.$$
Thus $2$ is an eigenvalue of $A$ and $\vec v = (10, 5)^T$ is an eigenvector of $A$ belonging to $2$.

Suppose that $A \in \mathbb{R}^{n\times n}$ and $\lambda \in \mathbb{R}$. Then
$$E_A(\lambda) = \{\vec v \in \mathbb{R}^n \mid A\vec v = \lambda \vec v\}$$
is a subspace of $\mathbb{R}^n$, as $E_A(\lambda) = N(\lambda I_n - A)$. $\lambda$ is an eigenvalue of $A$ if and only if $E_A(\lambda) \ne \{\vec 0\}$. If $\lambda$ is an eigenvalue of $A$, then $E_A(\lambda)$ is called an eigenspace of $A$.

Definition 0.3. Suppose that $A \in \mathbb{R}^{n\times n}$ and $t$ is an indeterminate. The characteristic polynomial of $A$ is $\chi_A(t) = \operatorname{Det}(tI_n - A)$.

Expanding the determinant, we see that
$$\chi_A(t) = t^n + \text{lower order terms in } t$$
is a polynomial in $t$ of degree $n$.

The following theorem gives a method of computing the eigenvalues of a matrix.

Theorem 0.4. Suppose that $A \in \mathbb{R}^{n\times n}$. Then $\lambda \in \mathbb{R}$ is a real eigenvalue of $A$ if and only if $\lambda$ is a real root of $\chi_A(t) = 0$.

Proof. $\lambda \in \mathbb{R}$ is a real eigenvalue of $A$ if and only if $\lambda \in \mathbb{R}$ and $E_A(\lambda) = N(\lambda I_n - A) \ne \{\vec 0\}$, which holds if and only if $\lambda \in \mathbb{R}$ and $\chi_A(\lambda) = \operatorname{Det}(\lambda I_n - A) = 0$.

Corollary 0.5. A matrix $A \in \mathbb{R}^{n\times n}$ has at most $n$ distinct eigenvalues.

This follows from the fact that a polynomial of degree $n$ has at most $n$ distinct roots. Since $\operatorname{Det}(A - tI_n) = (-1)^n \chi_A(t)$, eigenvalues may also be computed as the roots of $\operatorname{Det}(A - tI_n) = 0$, and eigenspaces as
$$E_A(\lambda) = N(\lambda I_n - A) = N(A - \lambda I_n).$$

Example 0.6. Let
$$A = \begin{pmatrix} 3 & -2 \\ 1 & 0 \end{pmatrix}.$$
Compute the real eigenvalues and a basis of each of the eigenspaces of $A$.
We compute
$$\operatorname{Det}(tI_2 - A) = \operatorname{Det}\begin{pmatrix} t-3 & 2 \\ -1 & t \end{pmatrix} = t^2 - 3t + 2 = (t-1)(t-2),$$
which has the real roots $t = 1$ and $t = 2$. Thus the real eigenvalues of $A$ are $\lambda_1 = 1$ and $\lambda_2 = 2$.

We now compute a basis of the eigenspace $E_A(1)$ of $A$. We have that $E_A(1) = N(I_2 - A)$.
$$\begin{pmatrix} 1 & -1 \\ 0 & 0 \end{pmatrix}$$
is the RRE form of $I_2 - A$. The standard form solution of the associated homogeneous system is $x_1 = t$, $x_2 = t$ with $t \in \mathbb{R}$. We write
$$\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = t \begin{pmatrix} 1 \\ 1 \end{pmatrix}$$
to see that $\{(1, 1)^T\}$ is a basis of $E_A(1)$.

We compute a basis of the eigenspace $E_A(2)$ of $A$. We have that $E_A(2) = N(2I_2 - A)$.
$$\begin{pmatrix} 1 & -2 \\ 0 & 0 \end{pmatrix}$$
is the RRE form of $2I_2 - A$. The standard form solution of the associated homogeneous system is $x_1 = 2t$, $x_2 = t$ with $t \in \mathbb{R}$. We write
$$\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = t \begin{pmatrix} 2 \\ 1 \end{pmatrix}$$
to see that $\{(2, 1)^T\}$ is a basis of $E_A(2)$.

Going back to Example 0.2, we found from direct computation that $\vec v = (10, 5)^T$ is an eigenvector of $A$ belonging to $2$. We can also see this from the fact that the eigenvectors of $A$ belonging to $2$ are the nonzero elements of the line
$$E_A(2) = \operatorname{Span}\{(2, 1)^T\} = \{c(2, 1)^T \mid c \in \mathbb{R}\},$$
and so $\vec v = 5(2, 1)^T \in E_A(2)$ is an eigenvector of $A$ belonging to $2$.

Definition 0.7. Suppose that $V$ is a real vector space and $L : V \to V$ is a linear map. A number $\lambda \in \mathbb{R}$ is a real eigenvalue of $L$ if there exists a nonzero vector $\vec v \in V$ such that $L(\vec v) = \lambda \vec v$. The vector $\vec v$ is called an eigenvector of $L$ for $\lambda$.
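Before passing to eigenvalues of general linear maps, the matrix computations above can be mirrored numerically. The following is a minimal sketch, assuming NumPy (a convenience not used in the notes) and taking a sample $2 \times 2$ matrix with eigenvalues $1$ and $2$: the eigenvalues are found as roots of the characteristic polynomial $\chi_A(t) = t^2 - \operatorname{trace}(A)t + \operatorname{Det}(A)$, and an eigenspace basis as a null-space basis of $\lambda I - A$ (computed from an SVD rather than the RRE form used above).

```python
import numpy as np

def null_space_basis(M, tol=1e-10):
    """Orthonormal basis of N(M), read off from the SVD of M."""
    _, s, vt = np.linalg.svd(M)
    rank = int(np.sum(s > tol))  # rows of vt past the rank span N(M)
    return vt[rank:].T

# Sample matrix (an assumption for illustration).  For a 2x2 matrix,
# chi_A(t) = t^2 - trace(A) t + det(A).
A = np.array([[3.0, -2.0],
              [1.0, 0.0]])
coeffs = [1.0, -np.trace(A), np.linalg.det(A)]
eigenvalues = np.roots(coeffs)  # roots of the characteristic polynomial
print(sorted(eigenvalues.real))

# Basis of the eigenspace E_A(2) = N(2I - A); any basis vector v satisfies Av = 2v.
basis = null_space_basis(2.0 * np.eye(2) - A)
v = basis[:, 0]
print(np.allclose(A @ v, 2.0 * v))
```

The SVD-based null space returns a normalized eigenvector (a scalar multiple of the basis vector found by the RRE computation), which is all an eigenspace basis is determined up to.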
The eigenspace $E_L(\lambda)$ of an eigenvalue $\lambda$ of $L$ is
$$E_L(\lambda) = \{\vec v \in V \mid L(\vec v) = \lambda \vec v\}.$$
$E_L(\lambda)$ is a subspace of $V$.

Example 0.8. Let $C^\infty(\mathbb{R})$ be the infinitely differentiable functions on $\mathbb{R}$. Let $L : C^\infty(\mathbb{R}) \to C^\infty(\mathbb{R})$ be differentiation: $L(f) = \frac{df}{dx}$ for $f \in C^\infty(\mathbb{R})$. Then every real number is an eigenvalue of $L$. For $\lambda \in \mathbb{R}$, $\{e^{\lambda x}\}$ is a basis of $E_L(\lambda)$.

If $A \in \mathbb{R}^{n\times n}$, then the eigenvalues and eigenspaces of the matrix $A$ and of the linear map $L_A : \mathbb{R}^n \to \mathbb{R}^n$ are the same.

Recall that a linear map $L : V \to W$ is determined by its values on a basis of $V$.

Example 0.9. Let $L : P_2 \to P_2$ be the linear map defined by $L(1) = 3 + x$ and $L(x) = -2$. Show that $f = 6 + 3x$ is an eigenvector for $L$. Determine the eigenvalues of $L$ and find bases of the eigenspaces of $L$.

We first show that $f$ is an eigenvector. We compute
$$L(f) = L(6 + 3x) = 6L(1) + 3L(x) = 6(3 + x) + 3(-2) = 12 + 6x = 2(6 + 3x) = 2f.$$
Thus $f$ is an eigenvector for $L$ belonging to the eigenvalue $\lambda = 2$.

We now determine the eigenvalues and eigenspaces of $L$. The standard basis of $P_2$ is $\beta = \{1, x\}$. We compute
$$M_\beta^\beta(L) = (L(1)_\beta \ \ L(x)_\beta) = \begin{pmatrix} 3 & -2 \\ 1 & 0 \end{pmatrix}.$$
This is the matrix $A$ that we studied in Example 0.6. We found that the eigenvalues of $A$ are $\lambda_1 = 1$ and $\lambda_2 = 2$, a basis for $E_A(1)$ is $\{(1, 1)^T\}$, and a basis of $E_A(2)$ is $\{(2, 1)^T\}$. We have that $(1 + x)_\beta = (1, 1)^T$ and $(2 + x)_\beta = (2, 1)^T$. Thus $\lambda_1 = 1$ and $\lambda_2 = 2$ are the eigenvalues of $L$, $\{1 + x\}$ is a basis of $E_L(1)$, and $\{2 + x\}$ is a basis of $E_L(2)$.

Theorem 0.10. Suppose that $L : V \to V$ is a linear map and $\lambda_1, \dots, \lambda_r$ are distinct eigenvalues of $L$. Suppose that $\{v_{i1}, v_{i2}, \dots, v_{id_i}\}$ are bases of the eigenspaces $E_L(\lambda_i)$ for $1 \le i \le r$, with $d_i = \dim E_L(\lambda_i)$. Then
$$\{v_{11}, \dots, v_{1d_1}, v_{21}, \dots, v_{2d_2}, \dots, v_{r1}, \dots, v_{rd_r}\}$$
are linearly independent.
We prove this in a special case; we suppose that $\lambda_1$ is an eigenvalue of $L$ with eigenvector $v_1$, $\lambda_2$ is an eigenvalue of $L$ with eigenvector $v_2$, and $\lambda_1 \ne \lambda_2$. We will show that $\{v_1, v_2\}$ are linearly independent. Suppose that $c_1, c_2 \in \mathbb{R}$ and
$$c_1 v_1 + c_2 v_2 = \vec 0. \tag{1}$$
Then
$$\vec 0 = L(\vec 0) = L(c_1 v_1 + c_2 v_2) = c_1 L(v_1) + c_2 L(v_2) = c_1 \lambda_1 v_1 + c_2 \lambda_2 v_2. \tag{2}$$
Subtracting $\lambda_1$ times equation (1) from equation (2), we obtain
$$\vec 0 = (c_1 \lambda_1 v_1 + c_2 \lambda_2 v_2) - \lambda_1(c_1 v_1 + c_2 v_2) = c_2(\lambda_2 - \lambda_1)v_2.$$
Since $\lambda_2 - \lambda_1 \ne 0$ and $v_2 \ne \vec 0$, we have that $c_2 = 0$. Now going back to (1), we see that $c_1 v_1 = \vec 0$. Since $v_1 \ne \vec 0$, we have that $c_1 = 0$. As $c_1 = c_2 = 0$ is the only solution to (1), we have that $\{v_1, v_2\}$ are linearly independent.

Definition 0.11. Suppose that $L : V \to V$ is a linear map of finite dimensional vector spaces. $L$ is diagonalizable if there exists a basis $\{v_1, \dots, v_n\}$ of $V$ consisting of eigenvectors of $L$.

The word "diagonalizable" in Definition 0.11 is explained by the following theorem.

Theorem 0.12. Suppose that $L : V \to V$ is a linear map and there exists a basis $\beta = \{v_1, \dots, v_n\}$ of $V$ consisting of eigenvectors of $L$. Then $M_\beta^\beta(L)$ is a diagonal matrix.

Proof. Let $c_1, \dots, c_n \in \mathbb{R}$ be the eigenvalues of the $v_i$, so that $L(v_i) = c_i v_i$ for $1 \le i \le n$. We then have that
$$M_\beta^\beta(L) = (L(v_1)_\beta \ \ L(v_2)_\beta \ \cdots \ L(v_n)_\beta) = \begin{pmatrix} c_1 & 0 & \cdots & 0 \\ 0 & c_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & c_n \end{pmatrix}$$
is a diagonal matrix.

Recall that matrices $A, B \in \mathbb{R}^{n\times n}$ are similar over the reals if there exists an invertible matrix $C \in \mathbb{R}^{n\times n}$ such that $B = C^{-1}AC$.

Definition 0.13. A matrix $A$ is diagonalizable over the reals if $A$ is similar over the reals to a diagonal matrix $D$.

Theorem 0.14. The matrix $A \in \mathbb{R}^{n\times n}$ is diagonalizable over the reals if and only if the linear map $L_A : \mathbb{R}^n \to \mathbb{R}^n$ is diagonalizable.

We will prove the most interesting direction, that $L_A$ diagonalizable implies $A$ is diagonalizable.
Let $\beta = \{v_1, \dots, v_n\}$ be a basis of $\mathbb{R}^n$ consisting of eigenvectors of $A$. Let $c_1, \dots, c_n$ be the corresponding eigenvalues, so that $L_A(v_i) = c_i v_i$ for $1 \le i \le n$. Then
$$D = M_\beta^\beta(L_A) = (L_A(v_1)_\beta \ \cdots \ L_A(v_n)_\beta) = \begin{pmatrix} c_1 & 0 & \cdots & 0 \\ 0 & c_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & c_n \end{pmatrix}$$
is a diagonal matrix. Let $\beta'$ be the standard basis of $\mathbb{R}^n$ and let
$$C = M_\beta^{\beta'}(\mathrm{id}) = (v_1 \ v_2 \ \cdots \ v_n).$$
We have
$$C^{-1}AC = M_{\beta'}^\beta(\mathrm{id}) \, M_{\beta'}^{\beta'}(L_A) \, M_\beta^{\beta'}(\mathrm{id}) = M_\beta^\beta(L_A) = D.$$
Thus $A$ is similar to a diagonal matrix.

Theorem 0.14 gives an algorithm to determine if a matrix is diagonalizable, and if it is, how to diagonalize it. Suppose that $A$ is a square matrix of size $n$, $A \in \mathbb{R}^{n\times n}$. Let $\lambda_1, \dots, \lambda_r$ be the distinct real eigenvalues of $A$. Then
$$\dim E_A(\lambda_1) + \dim E_A(\lambda_2) + \cdots + \dim E_A(\lambda_r) \le n = \operatorname{size}(A).$$
We have that $A$ is diagonalizable over the reals if and only if
$$\dim E_A(\lambda_1) + \dim E_A(\lambda_2) + \cdots + \dim E_A(\lambda_r) = n = \operatorname{size}(A).$$

Example 0.15. Let
$$A = \begin{pmatrix} 3 & -2 \\ 1 & 0 \end{pmatrix}.$$
Determine if $A$ is diagonalizable over the reals. If $A$ is diagonalizable, find an invertible matrix $C$ and a diagonal matrix $D$ such that $C^{-1}AC = D$.

We calculated in Example 0.6 that the eigenvalues of $A$ are $\lambda_1 = 1$ and $\lambda_2 = 2$, and that a basis of $E_A(1)$ is $\{(1, 1)^T\}$ and a basis of $E_A(2)$ is $\{(2, 1)^T\}$. $A$ is diagonalizable over the reals, since $1$ and $2$ are the real eigenvalues of $A$ and
$$\dim E_A(1) + \dim E_A(2) = 1 + 1 = 2 = \operatorname{size}(A).$$
Set
$$C = \begin{pmatrix} 1 & 2 \\ 1 & 1 \end{pmatrix} \quad\text{and}\quad D = \begin{pmatrix} 1 & 0 \\ 0 & 2 \end{pmatrix}.$$
Then $C^{-1}AC = D$ by the algorithm of Theorem 0.14.
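The diagonalization algorithm can be sketched in code. A hedged sketch assuming NumPy: `np.linalg.eig` supplies one candidate eigenvector per eigenvalue (with multiplicity), and the matrix is diagonalizable exactly when those $n$ columns are linearly independent. The sample matrices and the determinant-based independence test are illustration choices, not part of the notes.

```python
import numpy as np

def diagonalize(A, tol=1e-10):
    """Return (C, D) with C^{-1} A C = D, or None if no eigenvector basis exists.

    Sketch only: independence of the eigenvector columns is tested with a
    crude determinant threshold, which is not scale-invariant.
    """
    eigvals, eigvecs = np.linalg.eig(A)  # columns of eigvecs are eigenvectors
    C = eigvecs
    if abs(np.linalg.det(C)) < tol:      # columns dependent => A is defective
        return None
    D = np.diag(eigvals)
    return C, D

# Sample diagonalizable matrix (assumed for illustration; eigenvalues 1 and 2).
A = np.array([[3.0, -2.0],
              [1.0, 0.0]])
C, D = diagonalize(A)
print(np.allclose(np.linalg.inv(C) @ A @ C, D))  # True

# A defective matrix: only one independent eigenvector for its eigenvalue 0.
N = np.array([[0.0, 1.0],
              [0.0, 0.0]])
print(diagonalize(N) is None)  # True
```

In practice one would test independence with a rank or condition-number check rather than a raw determinant, but the structure of the algorithm — find eigenvectors, assemble $C$, read off $D$ — is the same as in the proof above.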
Example 0.16. Let
$$A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}.$$
Determine if $A$ is diagonalizable over the reals. If $A$ is diagonalizable, find an invertible matrix $C$ and a diagonal matrix $D$ such that $C^{-1}AC = D$.

The characteristic polynomial of $A$ is
$$\chi_A(t) = \operatorname{Det}(tI_2 - A) = \operatorname{Det}\begin{pmatrix} t & -1 \\ 0 & t \end{pmatrix} = t^2.$$
Thus the only eigenvalue of $A$ is $\lambda = 0$. We compute a basis of the eigenspace
$$E_A(0) = N(0 \cdot I_2 - A) = N(A).$$
$$\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$$
is the RRE form of $A$. The standard form solution of the associated homogeneous system is $x_1 = t$, $x_2 = 0$ with $t \in \mathbb{R}$. Writing $(x_1, x_2)^T = t(1, 0)^T$, we see that $\{(1, 0)^T\}$ is a basis of $E_A(0)$. $A$ is not diagonalizable over the reals, since $0$ is the only eigenvalue of $A$ and
$$\dim E_A(0) = 1 < 2 = \operatorname{size}(A).$$

The complex numbers $\mathbb{C}$ are defined by adjoining the imaginary number $i = \sqrt{-1}$ to $\mathbb{R}$; that is, $i^2 = -1$ and
$$\mathbb{C} = \{a + bi \mid a, b \in \mathbb{R}\}.$$
For $z_1 = a + bi$, $z_2 = c + di \in \mathbb{C}$ with $a, b, c, d \in \mathbb{R}$, we have the formulas
$$z_1 + z_2 = (a + bi) + (c + di) = (a + c) + (b + d)i,$$
$$z_1 z_2 = (a + bi)(c + di) = (ac - bd) + (ad + bc)i,$$
and if $z = a + bi \ne 0$, then
$$\frac{1}{z} = \frac{1}{a + bi} = \frac{a - bi}{(a + bi)(a - bi)} = \frac{a}{a^2 + b^2} - \frac{b}{a^2 + b^2}\,i.$$
We define $\mathbb{C}^n$ to be the $n \times 1$ column vectors with complex coefficients, $\mathbb{C}_n$ to be the $1 \times n$ row vectors with complex coefficients, and $\mathbb{C}^{m \times n}$ to be the $m \times n$ matrices with complex coefficients.

Almost everything that we have done in this class is valid if we replace $\mathbb{R}$ with $\mathbb{C}$; for instance, we have real vector spaces over the reals $\mathbb{R}$ and complex vector spaces over the complex numbers $\mathbb{C}$. The only exception is that an inner product must be defined differently on a complex vector space. Complex numbers are important for us because they have the important property that they are algebraically closed.
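The reciprocal formula above can be checked directly with Python's built-in complex type (a convenience assumption, not part of the notes; Python writes $i$ as `1j`):

```python
# Check 1/(a+bi) = a/(a^2+b^2) - b/(a^2+b^2) i for sample values a = 3, b = 4.
a, b = 3.0, 4.0
z = complex(a, b)
recip = complex(a / (a**2 + b**2), -b / (a**2 + b**2))
print(abs(1 / z - recip) < 1e-12)  # the two values agree
print((1j) ** 2 == -1)             # i^2 = -1
```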
Theorem 0.17 (Fundamental Theorem of Algebra). Suppose that $f(t) = t^n + a_1 t^{n-1} + \cdots + a_n$ is a polynomial with complex coefficients and $n \ge 1$. Then $f(t) = 0$ has a complex root $\alpha$.

As a corollary, every complex polynomial factors into a product of linear factors.

Definition 0.18. Let $A \in \mathbb{C}^{n\times n}$ be an $n \times n$ complex matrix. A number $\lambda \in \mathbb{C}$ is a complex eigenvalue of $A$ if there exists a nonzero vector $\vec v \in \mathbb{C}^n$ such that $A\vec v = \lambda \vec v$. The vector $\vec v$ is called an eigenvector of $A$ for $\lambda$.

Suppose that $A \in \mathbb{C}^{n\times n}$ and $\lambda \in \mathbb{C}$. Then the complex eigenspace of $\lambda$,
$$E_A^{\mathbb{C}}(\lambda) = \{\vec v \in \mathbb{C}^n \mid A\vec v = \lambda \vec v\},$$
is a subspace of the complex vector space $\mathbb{C}^n$. $E_A^{\mathbb{C}}(\lambda)$ is the complex null space
$$E_A^{\mathbb{C}}(\lambda) = N_{\mathbb{C}}(\lambda I_n - A) = \{\vec v \in \mathbb{C}^n \mid (\lambda I_n - A)\vec v = \vec 0\}.$$
Since the reals are contained in the complex numbers, any real matrix is also a complex matrix, any real eigenvalue is also a complex eigenvalue, and any real eigenvector is a complex eigenvector.

A complex matrix $A \in \mathbb{C}^{n\times n}$ is diagonalizable over $\mathbb{C}$ if $A$ is similar over $\mathbb{C}$ to a complex diagonal matrix $D$; that is, there exists an invertible matrix $C \in \mathbb{C}^{n\times n}$ such that $D = C^{-1}AC$.

Example 0.19. Consider the matrix
$$A = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}.$$
$A$ is not diagonalizable over the reals. $A$ is diagonalizable over the complex numbers.

We compute
$$\chi_A(t) = \operatorname{Det}(tI_2 - A) = \operatorname{Det}\begin{pmatrix} t & 1 \\ -1 & t \end{pmatrix} = t^2 + 1 = (t - i)(t + i).$$
$A$ has no real eigenvalues, so $A$ is not diagonalizable over the reals. However, $A$ has two complex eigenvalues, $\lambda_1 = i$ and $\lambda_2 = -i$.

We find a complex basis of $E_A^{\mathbb{C}}(i) = N_{\mathbb{C}}(iI_2 - A)$:
$$iI_2 - A = \begin{pmatrix} i & 1 \\ -1 & i \end{pmatrix} \to \begin{pmatrix} -1 & i \\ i & 1 \end{pmatrix} \to \begin{pmatrix} 1 & -i \\ i & 1 \end{pmatrix} \to \begin{pmatrix} 1 & -i \\ 0 & 0 \end{pmatrix},$$
so $\begin{pmatrix} 1 & -i \\ 0 & 0 \end{pmatrix}$ is the RRE form of $iI_2 - A$. The first operation is to interchange the two rows. The last operation is the elementary row operation of adding $-i$ times the first row to the second row, as $1 + (-i)(-i) = 1 + i^2 = 0$. The standard form solution of the associated homogeneous system is $x_1 = it$, $x_2 = t$ with $t \in \mathbb{C}$. Writing $(x_1, x_2)^T = t(i, 1)^T$, we see that $\{(i, 1)^T\}$ is a basis of $E_A^{\mathbb{C}}(i)$.
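The basis vector just found can be verified numerically; NumPy handles complex entries directly (an assumption for illustration, using the rotation-type matrix of this example):

```python
import numpy as np

# A has no real eigenvalues; over C its eigenvalues are i and -i.
A = np.array([[0.0, -1.0],
              [1.0, 0.0]])
v = np.array([1j, 1.0])            # candidate eigenvector for lambda = i
print(np.allclose(A @ v, 1j * v))  # A v = i v, so v spans E_A^C(i)

# numpy agrees that the complex eigenvalues are +/- i.
print(sorted(np.linalg.eigvals(A), key=lambda z: z.imag))
```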
Now we find a complex basis of $E_A^{\mathbb{C}}(-i) = N_{\mathbb{C}}(-iI_2 - A)$:
$$-iI_2 - A = \begin{pmatrix} -i & 1 \\ -1 & -i \end{pmatrix} \to \begin{pmatrix} -1 & -i \\ -i & 1 \end{pmatrix} \to \begin{pmatrix} 1 & i \\ -i & 1 \end{pmatrix} \to \begin{pmatrix} 1 & i \\ 0 & 0 \end{pmatrix},$$
so $\begin{pmatrix} 1 & i \\ 0 & 0 \end{pmatrix}$ is the RRE form of $-iI_2 - A$. The first operation is to interchange the two rows. The last operation is the elementary row operation of adding $i$ times the first row to the second row, as $1 + (i)(i) = 1 + i^2 = 0$. The standard form solution of the associated homogeneous system is $x_1 = -it$, $x_2 = t$ with $t \in \mathbb{C}$. Writing $(x_1, x_2)^T = t(-i, 1)^T$, we see that $\{(-i, 1)^T\}$ is a basis of $E_A^{\mathbb{C}}(-i)$.

$i$ and $-i$ are the eigenvalues of $A$, and
$$\dim E_A^{\mathbb{C}}(i) + \dim E_A^{\mathbb{C}}(-i) = 1 + 1 = 2 = \operatorname{size}(A),$$
so that $A$ is diagonalizable over the complex numbers. Set
$$C = \begin{pmatrix} i & -i \\ 1 & 1 \end{pmatrix} \quad\text{and}\quad D = \begin{pmatrix} i & 0 \\ 0 & -i \end{pmatrix}.$$
Then $C^{-1}AC = D$ by the algorithm of Theorem 0.14.

Going back to the matrix
$$A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix},$$
which we showed was not diagonalizable over $\mathbb{R}$, we see that the only complex eigenvalue of $A$ is $0$, since $\chi_A(t) = t^2$, and $\{(1, 0)^T\}$ is a basis of the complex eigenspace $E_A^{\mathbb{C}}(0)$. $0$ is the only complex eigenvalue of $A$ and
$$\dim E_A^{\mathbb{C}}(0) = 1 < 2 = \operatorname{size}(A),$$
so that $A$ is not diagonalizable over the complex numbers.

Diagonalization of Real Symmetric Matrices

Suppose that $A \in \mathbb{R}^{n\times n}$ is a symmetric matrix. Then the spectral theorem tells us that all eigenvalues of $A$ are real and that $\mathbb{R}^n$ has a basis of eigenvectors of $A$. Further, eigenvectors with distinct eigenvalues are orthogonal. Thus $\mathbb{R}^n$ has an orthonormal basis of eigenvectors. This means that we may refine our diagonalization algorithm above, adding an extra step: using Gram-Schmidt to obtain an ON basis $u_{i1}, \dots, u_{is_i}$ of $E(\lambda_i)$ from the basis $v_{i1}, \dots, v_{is_i}$ of $E(\lambda_i)$ which we compute in that algorithm. Since eigenvectors with distinct eigenvalues are perpendicular, we may put all of these ON sets of vectors together to obtain an ON basis
$$u_{11}, \dots, u_{1s_1}, \dots, u_{r1}, \dots, u_{rs_r}$$
of $\mathbb{R}^n$. Let
$$Q = (u_{11} \ \cdots \ u_{1s_1} \ \cdots \ u_{r1} \ \cdots \ u_{rs_r}).$$
$Q$ is an orthogonal matrix (Lecture Note 8), so that $Q^{-1} = Q^T$. We have orthogonally diagonalized $A$:
$$Q^T A Q = D = \begin{pmatrix} \lambda_1 & & \\ & \ddots & \\ & & \lambda_r \end{pmatrix},$$
where each $\lambda_i$ appears $s_i$ times on the diagonal, all nondiagonal entries of $D$ are zero, and $Q$ is an orthogonal matrix.

Example 0.20. Find an orthogonal matrix $Q$ which orthogonally diagonalizes
$$A = \begin{pmatrix} 4 & 2 & 2 \\ 2 & 4 & 2 \\ 2 & 2 & 4 \end{pmatrix}.$$
Solution: From the equation
$$\operatorname{Det}(tI_3 - A) = \operatorname{Det}\begin{pmatrix} t-4 & -2 & -2 \\ -2 & t-4 & -2 \\ -2 & -2 & t-4 \end{pmatrix} = t^3 - 12t^2 + 36t - 32 = (t-2)^2(t-8) = 0,$$
we see that the eigenvalues of $A$ are $\lambda_1 = 2$ and $\lambda_2 = 8$. To factor this polynomial, use the fact that the rational roots must be integers which divide the constant term $-32$. Testing divisors of $-32$, we find that $2$ and $8$ are roots. Then we divide $t^3 - 12t^2 + 36t - 32$ by $(t-2)(t-8) = t^2 - 10t + 16$ to get $t - 2$, with no remainder.

The eigenspace $E_A(2)$ is the nullspace $N(2I_3 - A)$. We have that
$$2I_3 - A = \begin{pmatrix} -2 & -2 & -2 \\ -2 & -2 & -2 \\ -2 & -2 & -2 \end{pmatrix},$$
and the RRE form of this matrix is
$$\begin{pmatrix} 1 & 1 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}.$$
From the standard form solution
$$\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} -t_1 - t_2 \\ t_1 \\ t_2 \end{pmatrix} = t_1\begin{pmatrix} -1 \\ 1 \\ 0 \end{pmatrix} + t_2\begin{pmatrix} -1 \\ 0 \\ 1 \end{pmatrix}$$
with $t_1, t_2 \in \mathbb{R}$, we deduce that $\{(-1, 1, 0)^T, (-1, 0, 1)^T\}$ is a basis of $E_A(2)$.

The eigenspace $E_A(8)$ is the nullspace $N(8I_3 - A)$. We have that
$$8I_3 - A = \begin{pmatrix} 4 & -2 & -2 \\ -2 & 4 & -2 \\ -2 & -2 & 4 \end{pmatrix}$$
and the RRE form of this matrix is
$$\begin{pmatrix} 1 & 0 & -1 \\ 0 & 1 & -1 \\ 0 & 0 & 0 \end{pmatrix}.$$
From the standard form solution
$$\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} t \\ t \\ t \end{pmatrix} = t\begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}$$
with $t \in \mathbb{R}$, we deduce that $\{(1, 1, 1)^T\}$ is a basis of $E_A(8)$.

We next find an orthogonal basis of $E_A(2)$ using Gram-Schmidt. We have
$$u_1 = \frac{(-1, 1, 0)^T}{\|(-1, 1, 0)^T\|} = \left(-\tfrac{\sqrt 2}{2}, \tfrac{\sqrt 2}{2}, 0\right)^T,$$
$$v_2 = (-1, 0, 1)^T - \langle (-1, 0, 1)^T, u_1\rangle\, u_1 = (-1, 0, 1)^T - \tfrac12(-1, 1, 0)^T = \left(-\tfrac12, -\tfrac12, 1\right)^T,$$
and, using the trick explained in Lecture Note 8 when we learned Gram-Schmidt, we compute
$$u_2 = \frac{v_2}{\|v_2\|} = \frac{(-1, -1, 2)^T}{\|(-1, -1, 2)^T\|} = \left(-\tfrac{\sqrt 6}{6}, -\tfrac{\sqrt 6}{6}, \tfrac{\sqrt 6}{3}\right)^T.$$
Thus
$$\left\{\left(-\tfrac{\sqrt 2}{2}, \tfrac{\sqrt 2}{2}, 0\right)^T,\ \left(-\tfrac{\sqrt 6}{6}, -\tfrac{\sqrt 6}{6}, \tfrac{\sqrt 6}{3}\right)^T\right\}$$
is an ON basis of $E_A(2)$. Applying Gram-Schmidt to our basis of $E_A(8)$, we compute
$$u_3 = \frac{(1, 1, 1)^T}{\|(1, 1, 1)^T\|} = \left(\tfrac{\sqrt 3}{3}, \tfrac{\sqrt 3}{3}, \tfrac{\sqrt 3}{3}\right)^T,$$
so that $\left\{\left(\tfrac{\sqrt 3}{3}, \tfrac{\sqrt 3}{3}, \tfrac{\sqrt 3}{3}\right)^T\right\}$ is an ON basis of $E_A(8)$. Finally, we have that
$$Q = \begin{pmatrix} -\tfrac{\sqrt 2}{2} & -\tfrac{\sqrt 6}{6} & \tfrac{\sqrt 3}{3} \\ \tfrac{\sqrt 2}{2} & -\tfrac{\sqrt 6}{6} & \tfrac{\sqrt 3}{3} \\ 0 & \tfrac{\sqrt 6}{3} & \tfrac{\sqrt 3}{3} \end{pmatrix}$$
is an orthogonal matrix such that
$$Q^T A Q = D = \begin{pmatrix} 2 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 8 \end{pmatrix}.$$
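For a real symmetric matrix, the whole orthogonal diagonalization is available in one library call. A sketch assuming NumPy, applied to the symmetric matrix of this example (eigenvalues $2, 2, 8$); note that `eigh` may return a different orthogonal $Q$ than the hand computation, since the ON basis of the two-dimensional eigenspace $E_A(2)$ is not unique:

```python
import numpy as np

# np.linalg.eigh is specific to symmetric (Hermitian) matrices: it returns
# real eigenvalues in ascending order and an orthonormal eigenvector basis,
# i.e. an orthogonal Q with Q^T A Q diagonal.
A = np.array([[4.0, 2.0, 2.0],
              [2.0, 4.0, 2.0],
              [2.0, 2.0, 4.0]])
w, Q = np.linalg.eigh(A)
print(np.allclose(Q.T @ Q, np.eye(3)))       # Q is orthogonal
print(np.allclose(Q.T @ A @ Q, np.diag(w)))  # Q^T A Q = D
print(np.allclose(w, [2.0, 2.0, 8.0]))       # eigenvalues 2, 2, 8
```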
Some applications of these methods: solutions to systems of linear ordinary differential equations (Section 6 of Leon and, most generally, Braun, Differential Equations and Their Applications), and the Page Rank Algorithm (pages 5-6 of Leon).