Linear Algebra II, Lecture 13
Xi Chen, University of Alberta
November 14, 2014
If $v$ is an eigenvector of $T: V \to V$ corresponding to $\lambda$, then $v$ is an eigenvector of $T^m$ corresponding to $\lambda^m$, since
$$T^m(v) = T^{m-1}(T(v)) = T^{m-1}(\lambda v) = \lambda T^{m-1}(v) = \lambda T^{m-2}(T(v)) = \dots = \lambda^m v.$$
More generally, if $f(x) = a_0 + a_1 x + \dots + a_n x^n$ is a polynomial in $x$ and $v$ is an eigenvector of $T: V \to V$ corresponding to $\lambda$, then
$$f(T)(v) = (a_0 I + a_1 T + \dots + a_n T^n)(v) = a_0 v + a_1 T(v) + \dots + a_n T^n(v) = a_0 v + a_1 \lambda v + \dots + a_n \lambda^n v = (a_0 + a_1 \lambda + \dots + a_n \lambda^n) v = f(\lambda) v.$$
So $v$ is an eigenvector of $f(T)$ corresponding to $f(\lambda)$.
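The identity $f(T)(v) = f(\lambda)v$ can be checked numerically on a concrete matrix; this is a small pure-Python sketch (the matrix, polynomial, and helper name are illustrative choices, not from the lecture):

```python
# Checking f(T)(v) = f(λ) v for f(x) = x^2 + 3x + 2 on a concrete 2x2 matrix:
# T = [[2, 1], [0, 5]] has eigenvector v = (1, 0) with eigenvalue λ = 2.
def mat_vec(A, v):
    return [A[0][0]*v[0] + A[0][1]*v[1],
            A[1][0]*v[0] + A[1][1]*v[1]]

T = [[2, 1], [0, 5]]
v = [1, 0]
Tv = mat_vec(T, v)           # T(v) = 2v
T2v = mat_vec(T, Tv)         # T^2(v) = 4v
fTv = [T2v[i] + 3*Tv[i] + 2*v[i] for i in range(2)]  # f(T)(v)
print(fTv)  # [12, 0], i.e. f(2) * v since f(2) = 4 + 6 + 2 = 12
```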
In short, $T(v) = \lambda v \Rightarrow f(T)(v) = f(\lambda) v$.

For example, find all the possible eigenvalues of $T: V \to V$ satisfying $T^3 = T$: Let $v$ be an eigenvector of $T$ corresponding to $\lambda$. Then $v \ne 0$, $T(v) = \lambda v$ and
$$0 = (T^3 - T)(v) = (\lambda^3 - \lambda) v \Rightarrow \lambda^3 - \lambda = 0 \Rightarrow \lambda = -1, 0, 1.$$

For example, find the eigenvalues of $T: \mathbb{R}[x] \to \mathbb{R}[x]$ given by $T(f(x)) = f(1-x)$. Since $T^2(f(x)) = T(f(1-x)) = f(x)$, we have $T^2 = I$. Let $v = f(x)$ be an eigenvector of $T$ corresponding to $\lambda$. Then
$$0 = (T^2 - I)(v) = (\lambda^2 - 1) v \Rightarrow \lambda^2 - 1 = 0 \Rightarrow \lambda = -1, 1.$$
Both $1$ and $-1$ are eigenvalues of $T$, since $T(1) = 1$ and $T(x - 1/2) = 1/2 - x = -(x - 1/2)$.
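The second example can be verified directly by substituting $1-x$ into a polynomial given by its coefficient list; this sketch (helper name hypothetical) confirms the two eigenvectors found above:

```python
# Hypothetical helper: apply T(f)(x) = f(1 - x) to a polynomial given by
# its coefficient list [a0, a1, ..., an] meaning a0 + a1*x + ... + an*x^n.
def substitute_1_minus_x(coeffs):
    n = len(coeffs)
    result = [0.0] * n
    for k, a in enumerate(coeffs):
        # expand a*(1 - x)^k via the binomial theorem
        c = 1  # running binomial coefficient C(k, j)
        for j in range(k + 1):
            result[j] += a * c * ((-1) ** j)
            c = c * (k - j) // (j + 1)
    return result

# T(1) = 1: eigenvector for eigenvalue 1
print(substitute_1_minus_x([1.0]))        # [1.0]
# T(x - 1/2) = 1/2 - x: eigenvector for eigenvalue -1
print(substitute_1_minus_x([-0.5, 1.0]))  # [0.5, -1.0]
```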
Let $v_1, v_2, \dots, v_n$ be eigenvectors of $T: V \to V$ corresponding to distinct eigenvalues $\lambda_1, \lambda_2, \dots, \lambda_n$. Then $v_1, v_2, \dots, v_n$ are linearly independent.

Otherwise, there are $x_1, x_2, \dots, x_n \in \mathbb{R}$, not all zero, such that
$$x_1 v_1 + x_2 v_2 + \dots + x_n v_n = 0.$$
Applying $(\lambda_2 I - T)(\lambda_3 I - T)\cdots(\lambda_n I - T)$ to both sides kills every term except the first:
$$(\lambda_2 - \lambda_1)(\lambda_3 - \lambda_1)\cdots(\lambda_n - \lambda_1)\, x_1 v_1 = 0 \Rightarrow x_1 = 0.$$
Similarly, applying $(\lambda_1 I - T)(\lambda_3 I - T)\cdots(\lambda_n I - T)$ yields
$$(\lambda_1 - \lambda_2)(\lambda_3 - \lambda_2)\cdots(\lambda_n - \lambda_2)\, x_2 v_2 = 0 \Rightarrow x_2 = 0.$$
In general, applying $\prod_{j \ne i} (\lambda_j I - T)$ yields $x_i = 0$ for $i = 1, 2, \dots, n$, a contradiction.
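The mechanism of the proof, applying $(\lambda_j I - T)$ to annihilate the $v_j$ component, can be watched on a concrete $2 \times 2$ example (the matrix and coefficients below are illustrative, not from the lecture):

```python
# T has eigenvector v1 = (1, 0) for λ1 = 2 and v2 = (1, 1) for λ2 = 3.
T = [[2.0, 1.0],
     [0.0, 3.0]]

def mat_vec(A, v):
    return [A[0][0]*v[0] + A[0][1]*v[1],
            A[1][0]*v[0] + A[1][1]*v[1]]

def apply_shifted(lam, v):
    # compute (lam*I - T)(v)
    Tv = mat_vec(T, v)
    return [lam*v[0] - Tv[0], lam*v[1] - Tv[1]]

v1, v2 = [1.0, 0.0], [1.0, 1.0]
x1, x2 = 5.0, 7.0
w = [x1*v1[0] + x2*v2[0], x1*v1[1] + x2*v2[1]]  # x1 v1 + x2 v2
# (λ2 I - T) kills the v2 component and scales v1 by (λ2 - λ1) = 1:
print(apply_shifted(3.0, w))  # [5.0, 0.0] = (3 - 2) * x1 * v1
```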
Since eigenvectors corresponding to different eigenvalues are linearly independent,
$$K(\lambda_1 I - T) \cap K(\lambda_2 I - T) = \{0\}$$
for $\lambda_1 \ne \lambda_2$. More generally,
$$K(\lambda_1 I - T) \cap \big( K(\lambda_2 I - T) + K(\lambda_3 I - T) + \dots + K(\lambda_n I - T) \big) = \{0\}$$
for $\lambda_1, \lambda_2, \dots, \lambda_n$ distinct. In other words, if $B_1$ is a basis of $K(\lambda_1 I - T)$, $B_2$ is a basis of $K(\lambda_2 I - T)$, ..., and $B_n$ is a basis of $K(\lambda_n I - T)$, then $B_1 \cup B_2 \cup \dots \cup B_n$ is a linearly independent set (but not necessarily a basis of $V$).
(Cayley-Hamilton) Let $V$ be a finite-dimensional vector space and let $T: V \to V$ be a linear transformation with characteristic polynomial $f(x)$. Then $f(T) = 0$.

For example, find $T^{2014}$ for $T: \mathbb{R}^2 \to \mathbb{R}^2$ the linear transformation given by $T(x, y) = (x + y, x + y)$. The characteristic polynomial of $T$ is $x^2 - 2x$. Then $T^2 = 2T$ by Cayley-Hamilton. So
$$T^n = T^{n-2}\, T^2 = T^{n-2}(2T) = 2\, T^{n-1} = \dots = 2^{n-1}\, T,$$
and hence $T^{2014}(x, y) = 2^{2013}\, T(x, y) = (2^{2013}(x + y),\ 2^{2013}(x + y))$.
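The closed form $T^n = 2^{n-1} T$ is easy to sanity-check in exact integer arithmetic for a small power; a minimal sketch with the matrix of $T$:

```python
# Check A^n = 2^(n-1) A for A = [[1, 1], [1, 1]], the matrix of
# T(x, y) = (x + y, x + y) in the standard basis.
def mat_mul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 1], [1, 1]]
P = A
for _ in range(9):   # after the loop, P = A^10
    P = mat_mul(P, A)
print(P == [[2**9, 2**9], [2**9, 2**9]])  # True: A^10 = 2^9 A
```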
Diagonalization of Matrices

A square matrix $A$ is diagonalizable if $A$ is similar to a diagonal matrix, i.e., there exists an invertible matrix $P$ such that $PAP^{-1}$ is diagonal.

A linear transformation $T: V \to V$ is diagonalizable if there exists a basis $B$ of $V$ such that $[T]_B$ is diagonal. If such a $B$ exists, we say $T$ is diagonalized by $B$. If $[T]_B$ is not diagonal, we look for another basis $B'$ such that
$$[T]_{B'} = P_{B \to B'}\, [T]_B\, P_{B \to B'}^{-1}$$
is diagonal, where $P_{B \to B'}$ is the change-of-basis matrix. So diagonalizing $T$ is equivalent to diagonalizing the matrix $[T]_B$.
Non-Diagonalizable Linear Endomorphisms (Matrices)

Not all linear endomorphisms/matrices are diagonalizable! For example, let $T(x, y) = (x + y, y)$ with the corresponding matrix
$$A = [T]_B = \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}.$$
If $T$, or equivalently $A$, is diagonalizable, then
$$PAP^{-1} = \begin{bmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{bmatrix}$$
for some $P$. Then $(x - \lambda_1)(x - \lambda_2) = \det(xI - A) = (x - 1)^2$. So $\lambda_1 = \lambda_2 = 1$ and $PAP^{-1} = I \Rightarrow A = P^{-1} I P = P^{-1} P = I$. Contradiction.
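A quick pure-Python check of the contradiction: $A$ is not the identity, yet its only eigenvalue is $1$, and $A - I$ is a nonzero nilpotent matrix:

```python
# A = [[1, 1], [0, 1]] from the example above: if it were diagonalizable
# it would equal I, but A != I while (A - I)^2 = 0.
def mat_mul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 1], [0, 1]]
I = [[1, 0], [0, 1]]
N = [[A[i][j] - I[i][j] for j in range(2)] for i in range(2)]  # N = A - I
print(A != I)          # True: A is not the identity
print(mat_mul(N, N))   # [[0, 0], [0, 0]]: (A - I)^2 = 0
```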
Theorem. Let $V$ be a vector space of dimension $n$. Then a linear transformation $T: V \to V$ is diagonalizable if and only if $T$ has $n$ linearly independent eigenvectors $v_1, v_2, \dots, v_n$. In addition,
$$[T]_B = \begin{bmatrix} \lambda_1 & & & \\ & \lambda_2 & & \\ & & \ddots & \\ & & & \lambda_n \end{bmatrix}$$
for $B = \{v_1, v_2, \dots, v_n\}$ if $v_1, v_2, \dots, v_n$ are linearly independent eigenvectors of $T$ corresponding to $\lambda_1, \lambda_2, \dots, \lambda_n$. Equivalently, $T$ is diagonalizable if and only if
$$V = K(\lambda_1 I - T) + K(\lambda_2 I - T) + \dots + K(\lambda_n I - T).$$
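As an illustration of the theorem, the matrix $A = \begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}$ from the Cayley-Hamilton example has two independent eigenvectors, $(1,1)$ for $\lambda = 2$ and $(1,-1)$ for $\lambda = 0$; putting them as the columns of $P$ diagonalizes $A$ (the inverse below is computed by hand):

```python
# Diagonalizing A = [[1, 1], [1, 1]] with eigenvector columns in P:
# P^-1 A P should be diag(2, 0).
def mat_mul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1.0, 1.0], [1.0, 1.0]]
P = [[1.0, 1.0], [1.0, -1.0]]        # columns: eigenvectors (1,1), (1,-1)
P_inv = [[0.5, 0.5], [0.5, -0.5]]    # inverse of P, computed by hand
D = mat_mul(mat_mul(P_inv, A), P)
print(D)  # [[2.0, 0.0], [0.0, 0.0]]
```

With eigenvector columns in $P$, the diagonal form appears as $P^{-1}AP$; this matches the slide's $PAP^{-1}$ convention after renaming $P \mapsto P^{-1}$.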
In the previous example, $T(x, y) = (x + y, y)$ has the single eigenvalue $1$, with eigenspace
$$K(I - T) = \{(x, y) : (-y, 0) = (0, 0)\} = \{(x, y) : y = 0\} = \mathrm{Span}\{(1, 0)\} \subsetneq \mathbb{R}^2,$$
so $T$ is not diagonalizable.

If $T$ has $n$ distinct eigenvalues, then $T$ is diagonalizable. The converse does not hold. If $T$ is diagonalizable, then $f(T)$ is diagonalizable for every polynomial $f(x)$. If $T$ is diagonalizable and bijective, then $T^{-1}$ is also diagonalizable. Every real symmetric matrix is diagonalizable; in particular, if $[T]_B$ is symmetric, then $T$ is diagonalizable.
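The eigenspace computation above can be spelled out directly, since $(I - T)(x, y) = (x, y) - (x + y, y) = (-y, 0)$:

```python
# For T(x, y) = (x + y, y), the map (I - T) sends (x, y) to (-y, 0),
# so its kernel K(I - T) is exactly the line y = 0.
def I_minus_T(x, y):
    return (x - (x + y), y - y)   # = (-y, 0)

print(I_minus_T(3.0, 0.0))  # (0.0, 0.0): (3, 0) lies in the eigenspace
print(I_minus_T(0.0, 2.0))  # (-2.0, 0.0): (0, 2) does not
```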
Upper Triangularization

Theorem. If $A$ is an $n \times n$ matrix with $n$ eigenvalues $\lambda_1, \lambda_2, \dots, \lambda_n$, then $A$ is similar to an upper triangular matrix, i.e., there exists an invertible matrix $P$ such that
$$PAP^{-1} = \begin{bmatrix} \lambda_1 & * & \cdots & * \\ & \lambda_2 & \cdots & * \\ & & \ddots & \vdots \\ & & & \lambda_n \end{bmatrix}.$$

Proof. Let $T: \mathbb{R}^n \to \mathbb{R}^n$ be the linear transformation given by $T(v) = Av$. Then $A = [T]_B$ for the standard basis $B$.
Upper Triangularization, Cont.

Let $v_1$ be an eigenvector of $A$ corresponding to $\lambda_1$. Then $T(v_1) = \lambda_1 v_1$. We complete $\{v_1\}$ to a basis of $\mathbb{R}^n$: $B' = \{v_1, v_2, \dots, v_n\}$. Then
$$[T]_{B'} = \begin{bmatrix} [T(v_1)]_{B'} & [T(v_2)]_{B'} & \cdots & [T(v_n)]_{B'} \end{bmatrix} = \begin{bmatrix} \lambda_1 & * & \cdots & * \\ 0 & & & \\ \vdots & & * & \\ 0 & & & \end{bmatrix}.$$
Upper Triangularization, Cont.

Then
$$[T]_{B'} = \underbrace{P_{B \to B'}}_{P}\, \underbrace{[T]_B}_{A}\, \underbrace{P_{B \to B'}^{-1}}_{P^{-1}} = PAP^{-1} = \begin{bmatrix} \lambda_1 & * \\ 0 & D \end{bmatrix},$$
where $D$ is an $(n-1) \times (n-1)$ matrix. By induction on $n$, we can find $Q$ such that $QDQ^{-1}$ is upper triangular. Then
$$\begin{bmatrix} 1 & 0 \\ 0 & Q \end{bmatrix} \begin{bmatrix} \lambda_1 & * \\ 0 & D \end{bmatrix} \begin{bmatrix} 1 & 0 \\ 0 & Q^{-1} \end{bmatrix} = \begin{bmatrix} \lambda_1 & * \\ 0 & QDQ^{-1} \end{bmatrix}$$
is upper triangular.
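The base step of this construction can be run on a small example: the matrix below has the repeated eigenvalue $1$ and is not diagonalizable, but completing its single eigenvector to a basis already triangularizes it (matrix and basis are illustrative choices; the inverse is computed by hand):

```python
# A = [[0, -1], [1, 2]] has char. poly (x - 1)^2, so its only eigenvalue
# is 1 with eigenvector v1 = (1, -1). Completing v1 to the basis
# {(1, -1), (0, 1)} and conjugating gives an upper triangular matrix.
def mat_mul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[0, -1], [1, 2]]
P = [[1, 0], [-1, 1]]       # columns: eigenvector v1 and a completion
P_inv = [[1, 0], [1, 1]]    # inverse of P, computed by hand
U = mat_mul(mat_mul(P_inv, A), P)
print(U)  # [[1, -1], [0, 1]]: upper triangular, eigenvalues on the diagonal
```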