The Jordan Normal Form and its Applications
Jeremy
IMPACT, Brigham Young University
A square matrix $A$ is a linear operator on $\mathbb{R}^n$ or $\mathbb{C}^n$. $A$ is diagonalizable if and only if it has $n$ linearly independent eigenvectors. What happens if $A$ does not have $n$ linearly independent eigenvectors? When does this happen? What general form can we obtain in this case?
The Jordan Normal Form
The Jordan Normal Form is one decomposition of a matrix, $A = P^{-1}JP$, where $J$ is the normal form. It has the advantage of corresponding to the eigenspaces and of being as close to diagonal as possible. More specifically, if a matrix is diagonalizable, then its Jordan Normal Form is its diagonalization.
Complementary Subspaces
Definition. Two subspaces $U, W$ of a vector space $V$ are complementary if $U \cap W = \{0\}$ and for all $v \in V$ there exist $u \in U$, $w \in W$ such that $v = u + w$. In fact, $u$ and $w$ are the unique vectors that satisfy this property. We denote this $V = U \oplus W$.
Complementary Subspaces
Remark. This idea extends to finite collections: $V = W_1 \oplus W_2 \oplus \cdots \oplus W_m$.
Remark. If $U, W$ are subspaces of $V$ with $\dim U + \dim W = \dim V$ and $U \cap W = \{0\}$, then it can be shown that $U \oplus W = V$.
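To make the second remark concrete, here is a minimal numerical check in Python (assuming NumPy; the subspaces $U$ and $W$ of $\mathbb{R}^3$ are illustrative choices, not taken from the slides):

import numpy as np

# Illustrative subspaces of R^3: U = span{e1, e2}, W = span{(1,1,1)}.
U = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])   # columns span U
W = np.array([[1.0], [1.0], [1.0]])                   # column spans W

# dim U + dim W = 3 = dim V, and U ∩ W = {0} iff [U | W] has full rank,
# so by the remark V = U ⊕ W.
B = np.hstack([U, W])
print(np.linalg.matrix_rank(B))          # 3: the combined basis spans R^3

v = np.array([2.0, 3.0, 5.0])
coeffs = np.linalg.solve(B, v)           # unique coordinates in the combined basis
u_part = U @ coeffs[:2]                  # component of v in U
w_part = W @ coeffs[2:]                  # component of v in W
print(np.allclose(u_part + w_part, v))   # True: v = u + w, uniquely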
The Index of a Matrix
Recall that $N(A) \subseteq N(A^2) \subseteq N(A^3) \subseteq \cdots$ and $R(A) \supseteq R(A^2) \supseteq R(A^3) \supseteq \cdots$.
The Index of a Matrix
Definition. The index of a matrix is the smallest nonnegative integer $k = \operatorname{Ind}(A)$ such that $N(A^k) = N(A^{k+1}) = \cdots$ and $R(A^k) = R(A^{k+1}) = \cdots$, where $A^0 = I$. Note that $\operatorname{Ind}(A) = 0$ if and only if $A$ is invertible.
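The definition suggests a direct computation. The following is a small Python sketch (assuming NumPy); by rank-nullity, the null spaces $N(A^k)$ stabilize exactly when the ranks of successive powers do, so comparing ranks suffices:

import numpy as np

def index(A):
    """Smallest k with rank(A^k) == rank(A^(k+1)), i.e. Ind(A).
    By rank-nullity, rank(A^k) stabilizes exactly when N(A^k) does."""
    n = A.shape[0]
    k, rank_prev = 0, n              # rank(A^0) = rank(I) = n
    while True:
        rank_next = np.linalg.matrix_rank(np.linalg.matrix_power(A, k + 1))
        if rank_next == rank_prev:
            return k
        k, rank_prev = k + 1, rank_next

# An invertible matrix has index 0; a 2x2 Jordan block with eigenvalue 0 has index 2.
print(index(np.array([[1.0, 0.0], [0.0, 2.0]])))   # 0
print(index(np.array([[0.0, 1.0], [0.0, 0.0]])))   # 2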
The Index of a Matrix
Theorem. Let $A$ be a square matrix and let $k = \operatorname{Ind}(A)$. Then $V = N(A^k) \oplus R(A^k)$.
The Index of a Matrix
Proof. Suppose $x \in N(A^k) \cap R(A^k)$. Then $A^k x = 0$ and there exists $y$ such that $x = A^k y$. Therefore $A^k A^k y = A^{2k} y = 0$, so that $y \in N(A^{2k})$. But $N(A^{2k}) = N(A^k)$, so that $x = A^k y = 0$. The rank-nullity theorem implies that $\dim N(A^k) + \dim R(A^k) = n = \dim(V)$, so $V = N(A^k) \oplus R(A^k)$.
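A quick numerical illustration of the theorem, assuming NumPy and SciPy; the matrix $A$ below is an assumed example with $\operatorname{Ind}(A) = 2$:

import numpy as np
from scipy.linalg import null_space, orth

# Assumed example: a nilpotent 2x2 Jordan block coupled with an
# invertible part, so Ind(A) = 2.
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0],
              [0.0, 0.0, 3.0]])

k = 2                                 # Ind(A) for this A
Ak = np.linalg.matrix_power(A, k)
Nbasis = null_space(Ak)               # orthonormal basis of N(A^k)
Rbasis = orth(Ak)                     # orthonormal basis of R(A^k)

# The theorem says these bases together span all of V = R^3.
B = np.hstack([Nbasis, Rbasis])
print(B.shape[1], np.linalg.matrix_rank(B))   # 3 3: a basis of V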
Invariant Subspaces
Definition. A subspace $W \subseteq V$ is said to be invariant (with respect to a matrix $A$) if $AW \subseteq W$.
Invariant Subspaces
Example. Notice that for any matrix $A$, the range $R(A)$ is invariant since for $x \in R(A)$, $Ax \in R(A)$ by definition. It follows that $R(A^k)$ is invariant for any $k$. Also, $N(A)$ is invariant since $Ax = 0 \in N(A)$, and so is $N(A^k)$. Another example is an eigenspace $N(A - \lambda I)$, because any vector $x$ in it satisfies $Ax = \lambda x \in N(A - \lambda I)$.
Decomposing a Matrix
If $V = U \oplus W$ and $U$ and $W$ are $A$-invariant subspaces, then there exists an invertible matrix $P$ such that
$$P^{-1} A P = \begin{pmatrix} A_U & 0 \\ 0 & A_W \end{pmatrix}.$$
In fact, $P = [p_1, \ldots, p_r, p_{r+1}, \ldots, p_n]$ where $\{p_1, \ldots, p_r\}$ is a basis for $U$ and $\{p_{r+1}, \ldots, p_n\}$ is a basis for $W$. Furthermore, $A_U = A|_U$ is the restriction of $A$ to the subspace $U$.
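A small SymPy sketch of this decomposition, using the complementary invariant subspaces $N(A^k)$ and $R(A^k)$ from the earlier theorem; the matrix is an assumed example with $\operatorname{Ind}(A) = 2$:

from sympy import Matrix

# Assumed example with Ind(A) = 2, so that U = N(A^2) and W = R(A^2)
# are complementary A-invariant subspaces.
A = Matrix([[0, 1, 0],
            [0, 0, 0],
            [1, 1, 3]])

A2 = A**2
U = A2.nullspace()            # basis for N(A^2)
W = A2.columnspace()          # basis for R(A^2)
P = Matrix.hstack(*(U + W))   # columns: basis of U, then basis of W

# Off-diagonal blocks vanish: A acts separately on U and on W.
print(P.inv() * A * P)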
Matrix Diagonalization
When we diagonalize $A$, we are simply using complementary invariant subspaces. These are the eigenspaces: $V = N(A - \lambda_1 I) \oplus N(A - \lambda_2 I) \oplus \cdots \oplus N(A - \lambda_r I)$. The matrix that diagonalizes $A$ is the $P$ containing bases for the eigenspaces (the columns are eigenvectors), and the blocks $A_{\lambda_i}$ are diagonal because on the space $N(A - \lambda_i I)$, the action of $A$ is simply that of $\lambda_i I$.
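For instance, a minimal SymPy check (the matrix is an assumed example with distinct eigenvalues):

from sympy import Matrix

# Assumed example: upper triangular with distinct eigenvalues 2 and 3.
A = Matrix([[2, 1],
            [0, 3]])

P, D = A.diagonalize()        # columns of P are eigenvectors; D is diagonal
print(P, D)
print(P.inv() * A * P == D)   # True: P's eigenvector columns diagonalize A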
Matrix Diagonalization
How do we know that $V = \bigoplus_{i=1}^{r} N(A - \lambda_i I)$?
Matrix Diagonalization
Example. Consider the matrix
$$A = \begin{pmatrix} 2 & 1 \\ 0 & 2 \end{pmatrix}.$$
Since $A$ is upper triangular, its only eigenvalue is $2$. What are the eigenvectors?
Matrix Diagonalization
Clearly $V \neq N(A - 2I)$ because the dimensions do not match. This matrix cannot be diagonalized because it doesn't have a full set of linearly independent eigenvectors.
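This is easy to confirm with SymPy:

from sympy import Matrix, eye

A = Matrix([[2, 1],
            [0, 2]])

print((A - 2*eye(2)).nullspace())   # one eigenvector: [1, 0]^T, so dim N(A - 2I) = 1 < 2
print(A.is_diagonalizable())        # False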
Generalized Eigenspaces
Notice that we had repeated eigenvalues. Remember that if we have $n$ distinct eigenvalues, we know there are $n$ linearly independent eigenvectors. This problem only occurs when we have repeated eigenvalues.
Generalized Eigenspaces
What if we could make $N(A - 2I)$ bigger so that it covered all of $V$? Let's try $N((A - 2I)^2)$, for example. It is easy to show that $V = N((A - 2I)^2)$. Notice that
$$\left( \begin{pmatrix} 2 & 1 \\ 0 & 2 \end{pmatrix} - \begin{pmatrix} 2 & 0 \\ 0 & 2 \end{pmatrix} \right) \begin{pmatrix} 0 \\ 1 \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \end{pmatrix}.$$
So we found a vector $x \notin N(A - 2I)$ such that $(A - 2I)x \in N(A - 2I)$. This is called a generalized eigenvector of second order.
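A minimal SymPy check of this computation:

from sympy import Matrix, eye, zeros

A = Matrix([[2, 1],
            [0, 2]])
N = A - 2*eye(2)

print(N**2 == zeros(2, 2))   # True: (A - 2I)^2 = 0, so N((A - 2I)^2) = V
x = Matrix([0, 1])           # generalized eigenvector of second order
print(N * x)                 # [1, 0]^T: an ordinary eigenvector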
Generalized Eigenspaces
This can be repeated. In fact, we can show that if $\lambda_1, \ldots, \lambda_r$ are the distinct eigenvalues of $A$ and $k_i = \operatorname{Ind}(A - \lambda_i I)$, then
$$V = N((A - \lambda_1 I)^{k_1}) \oplus \cdots \oplus N((A - \lambda_r I)^{k_r}).$$
$N((A - \lambda_i I)^{k_i})$ is called the generalized eigenspace of $A$ corresponding to $\lambda_i$.
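A small SymPy illustration of the decomposition (the matrix is an assumed example with eigenvalues 1 and 3):

from sympy import Matrix, eye

# Assumed example: eigenvalue 1 with algebraic multiplicity 2, eigenvalue 3.
A = Matrix([[1, 1, 0],
            [0, 1, 0],
            [0, 0, 3]])

E1 = ((A - 1*eye(3))**2).nullspace()   # generalized eigenspace for λ = 1 (k = 2)
E3 = ((A - 3*eye(3))**1).nullspace()   # ordinary eigenspace for λ = 3 (k = 1)
print(len(E1), len(E3))                # 2 1: the dimensions sum to dim V = 3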
Diagonalization Revisited
If we can't diagonalize a matrix, how do we choose a basis that gets us close? Remember that if $x$ is a generalized eigenvector of order $k$, then $(A - \lambda I)x$ is a generalized eigenvector of order $k - 1$. Repeating, we may obtain a sequence $x_1, x_2, \ldots, x_k$ such that
$$0 = (A - \lambda I)x_1, \qquad x_1 = (A - \lambda I)x_2, \qquad \ldots, \qquad x_{k-1} = (A - \lambda I)x_k.$$
Diagonalization Revisited
What is the action of $A$ on the space spanned by $\{x_1, \ldots, x_k\}$? Well, we know what $A - \lambda I$ looks like relative to this basis:
$$A - \lambda I = \begin{pmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & & & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \\ 0 & 0 & 0 & \cdots & 0 \end{pmatrix}.$$
Diagonalization Revisited
So then $A$ must be
$$A = \begin{pmatrix} \lambda & 1 & 0 & \cdots & 0 \\ 0 & \lambda & 1 & \cdots & 0 \\ \vdots & & & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \\ 0 & 0 & 0 & \cdots & \lambda \end{pmatrix}.$$
Diagonalization Revisited
Of course, $\{x_1, \ldots, x_k\}$ may not span all of $N((A - \lambda I)^k)$. So, to get a basis for $N((A - \lambda I)^k)$ we follow this same idea. Take a basis $\{x_1, \ldots, x_{d_1}\}$ for $N(A - \lambda I)$. Extend this to a basis $\{x_1, \ldots, x_{d_1}, x_{d_1+1}, \ldots, x_{d_2}\}$ for $N((A - \lambda I)^2)$, chosen so that $(A - \lambda I)\{x_{d_1+1}, \ldots, x_{d_2}\} = \{x_1, \ldots, x_{d_2 - d_1}\}$, and continue upward. Then the portion of $P$ corresponding to $N((A - \lambda I)^k)$ is $[x_1, x_{d_1+1}, x_{d_2+1}, \ldots, x_2, x_{d_1+2}, \ldots, x_{d_1}]$; that is, the basis vectors are reordered into Jordan chains, each chain beginning with an eigenvector. A sketch of this chain construction appears below.
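A SymPy sketch of the chain construction, on an assumed $3 \times 3$ example with a single eigenvalue and one chain of length 3:

from sympy import Matrix, eye

# Assumed example: a single eigenvalue 2 with one Jordan chain of length 3.
A = Matrix([[2, 1, 0],
            [0, 2, 1],
            [0, 0, 2]])
N = A - 2*eye(3)

x3 = Matrix([0, 1, 1])   # in N(N^3) but not N(N^2): order-3 generalized eigenvector
x2 = N * x3              # order-2 generalized eigenvector
x1 = N * x2              # ordinary eigenvector: N * x1 = 0

P = Matrix.hstack(x1, x2, x3)   # the Jordan chain as columns of P
print(P.inv() * A * P)          # a single 3x3 Jordan block for λ = 2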
If we choose our basis this way, we can decompose $A$ into the following form:
$$A = \begin{pmatrix} J(\lambda_1) & 0 & \cdots & 0 \\ 0 & J(\lambda_2) & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & J(\lambda_r) \end{pmatrix}.$$
The block $J(\lambda_i)$ is called a Jordan segment for $\lambda_i$.
A Jordan segment is a matrix of the form
$$J(\lambda_i) = \begin{pmatrix} J_1(\lambda_i) & 0 & \cdots & 0 \\ 0 & J_2(\lambda_i) & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & J_{r_i}(\lambda_i) \end{pmatrix}.$$
Each $J_l(\lambda_i)$ is called a Jordan block for $\lambda_i$.
A Jordan block for $\lambda_i$ is a matrix
$$J_l(\lambda_i) = \begin{pmatrix} \lambda_i & 1 & 0 & \cdots & 0 \\ 0 & \lambda_i & 1 & \cdots & 0 \\ \vdots & & & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \\ 0 & 0 & 0 & \cdots & \lambda_i \end{pmatrix}.$$
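Putting it all together, SymPy can compute the Jordan Normal Form directly; the matrix below is an assumed example whose segment for $\lambda = 2$ contains a $2 \times 2$ block and a $1 \times 1$ block, plus a $1 \times 1$ block for $\lambda = 5$:

from sympy import Matrix

# Assumed example: eigenvalue 2 (blocks of size 2 and 1) and eigenvalue 5.
A = Matrix([[2, 1, 0, 0],
            [0, 2, 0, 0],
            [0, 0, 2, 0],
            [0, 0, 0, 5]])

P, J = A.jordan_form()       # P, J with A = P * J * P^{-1}
print(J)                     # Jordan segments for λ = 2 and λ = 5 on the diagonal
print(P * J * P.inv() == A)  # True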