7.1, 8.1, 8.2 Diagonalization
P. Danziger

1 Change of Basis

Given a basis B = {v_1, ..., v_n} of R^n, we have seen that the matrix whose columns consist of these vectors can be thought of as a change of basis matrix:

    A_B = [v_1 v_2 ... v_n]

Given a vector u ∈ R^n, we write u_B to represent the components of u with respect to the basis B. We write S for the standard basis, so u_S are the components of u with respect to the standard basis.

Given a vector u ∈ R^n, we find the components of u with respect to the basis B by solving

    A_B u_B = u_S.

Since B is a basis, A_B is invertible, and we can find u_B by

    u_B = A_B^{-1} u_S.

[Note: in the book's rather confusing notation, A_B would be called A_{B→S}, and A_B^{-1} is A_{S→B}.]

A_B can be thought of as a passive transformation: instead of thinking of the transformation T_{A_B} associated with A_B as actively moving the space, we think of it as passively moving the coordinates.

Now, note that if an n × n matrix P is invertible, then the columns of P form a basis for R^n. Thus every invertible matrix is a change of basis matrix. P can be thought of as the matrix which transforms to the coordinate system consisting of the columns of P; P^{-1} then transforms such a coordinate system back to the original coordinate system.

Let B be the basis formed by the column vectors of P. Given a vector u ∈ R^n, we can find u_B by solving the system of equations P u_B = u_S. We can solve this equation using the inverse, u_B = P^{-1} u_S. Thus the components of u with respect to the basis given by the column vectors of P are

    u_B = P^{-1} u_S.

Example 1

1. Find the components of u = (2, 1) with respect to the basis B = {(1, 1), (1, 2)}.

   Consider P = [1 1; 1 2] (rows separated by semicolons), so

       P^{-1} = [2 -1; -1 1]  and  u_B = P^{-1} u_S = [2 -1; -1 1] (2, 1) = (3, -1).

   So (3, -1)_B are the components of the vector u with respect to the basis {(1, 1), (1, 2)}.
2. Given that a vector has components (3, -1)_B with respect to the basis B = {(1, 1), (1, 2)}, find the components of u with respect to the standard basis.

   u_S = P u_B = [1 1; 1 2] (3, -1) = (2, 1).

   So (3, -1)_B is the vector (2, 1).

2 Similarity

Definition 2 Two n × n matrices, A and B, are similar if there exists an invertible matrix P such that B = P^{-1} A P.

Notes

- If A is similar to B, so B = P^{-1} A P, then A = P B P^{-1}, and so B is similar to A.
- If A is similar to B, so B = P^{-1} A P, then P B = A P.
- If A and B are similar matrices, so B = P^{-1} A P, then B can be thought of as performing the same transformation as A, but on vectors given with respect to the basis given by the columns of P.

Let C be the basis formed by the columns of P and, for any given u ∈ R^n, let v = T_A(u). Now

    T_B(u_C) = B u_C = P^{-1} A P u_C = P^{-1} A u = P^{-1} T_A(u) = P^{-1} v = v_C.

Example 3 Find a matrix similar to A = [1 2; 2 1], taking P = [2 1; 1 1]:

    B = P^{-1} A P = [1 -1; -1 2] [1 2; 2 1] [2 1; 1 1] = [1 -1; -1 2] [4 3; 5 3] = [-1 0; 6 3]

So with respect to the basis {(2, 1), (1, 1)}, A looks like [-1 0; 6 3].
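Both the coordinate change u_B = P^{-1} u_S and the similarity transform B = P^{-1} A P are easy to check numerically. Below is a minimal pure-Python sketch for the 2 × 2 case; the helper names (inv2, matmul2, matvec2) are mine, not from the notes, and it assumes Example 1's basis {(1, 1), (1, 2)} with u = (2, 1), and Example 3's matrices to be A = [1 2; 2 1] and P = [2 1; 1 1].

```python
# 2x2 helpers; the names are illustrative, not from the notes.

def inv2(M):
    """Inverse of M = [[a, b], [c, d]] (requires det(M) != 0)."""
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def matmul2(X, Y):
    """Product of two 2x2 matrices."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matvec2(M, v):
    """2x2 matrix times a vector (v given as a list of components)."""
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

# Example 1: components of u = (2, 1) in the basis {(1, 1), (1, 2)}.
P1 = [[1.0, 1.0], [1.0, 2.0]]           # basis vectors as columns
u_B = matvec2(inv2(P1), [2.0, 1.0])
print(u_B)                               # [3.0, -1.0]

# Example 3 (matrices as assumed above): B = P^{-1} A P.
A = [[1.0, 2.0], [2.0, 1.0]]
P = [[2.0, 1.0], [1.0, 1.0]]             # columns give the basis {(2,1),(1,1)}
B = matmul2(inv2(P), matmul2(A, P))
print(B)                                 # [[-1.0, 0.0], [6.0, 3.0]]

# Passive check: B acts on P-coordinates exactly as A acts on standard ones.
u = [3.0, -2.0]
assert matvec2(B, matvec2(inv2(P), u)) == matvec2(inv2(P), matvec2(A, u))
```

Note that P1 and P carry the basis vectors as columns, matching the convention A_B = [v_1 v_2 ... v_n] above; all the arithmetic here is exact in floating point, so the direct equality checks are safe.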
Theorem 4 If A and B are similar matrices, then they have the same characteristic polynomial, and hence the same eigenvalues.

Proof: Let A and B be similar matrices, so B = P^{-1} A P for some invertible matrix P. Then

    λI - B = λI - P^{-1} A P
           = P^{-1} λI P - P^{-1} A P     (λI = P^{-1} λI P, since λI commutes with P)
           = P^{-1} (λI - A) P            (distribution)

Taking determinants,

    |λI - B| = |P^{-1} (λI - A) P|
             = |P^{-1}| |λI - A| |P|      (|XY| = |X| |Y|)
             = |λI - A| |P^{-1}| |P|
             = |λI - A|,

since |P^{-1}| |P| = |P^{-1} P| = |I| = 1.

3 Diagonalization

Definition 5 A matrix is said to be diagonalizable if it is similar to a diagonal matrix.

Consider an n × n matrix A with n linearly independent eigenvectors v_1, v_2, ..., v_n, which thus form a basis B. Recall that the matrix associated with a transformation has columns given by the images of the basis vectors. With respect to this basis,

    T_A(v_i)_B = (λ_i v_i)_B = λ_i e_i,

so if D is the matrix of A with respect to this basis, D is diagonal. Let P be the matrix whose columns are the linearly independent eigenvectors of A; then A in this coordinate system will be D, where A = P D P^{-1}, or more usefully D = P^{-1} A P.

Theorem 6 An n × n matrix A is diagonalizable if and only if it has n linearly independent eigenvectors v_1, v_2, ..., v_n, corresponding to eigenvalues λ_1, λ_2, ..., λ_n respectively, with possible repetition. In this case A is similar to the matrix D = diag(λ_1, λ_2, ..., λ_n), with D = P^{-1} A P, where P is the matrix whose columns are the eigenvectors of A, i.e.

    P = [v_1 v_2 ... v_n]

Note that not all matrices have n linearly independent eigenvectors.

Theorem 7 Given an n × n matrix A, the following statements are equivalent:

1. A is diagonalizable.
2. A has n linearly independent eigenvectors.
3. The algebraic multiplicity equals the geometric multiplicity for each eigenvalue of A.

Example 8

1. Diagonalize A = [1 1; 1 1].

   Eigenvalues: λ = 0, 2, with corresponding eigenvectors (1, -1) and (1, 1).

       P = [1 1; -1 1],  P^{-1} = (1/2) [1 -1; 1 1]
   D = diag(0, 2):

       P^{-1} A P = (1/2) [1 -1; 1 1] [1 1; 1 1] [1 1; -1 1] = [0 0; 0 2]

2. Diagonalize A = [1 2; 0 1].

   Eigenvalues: λ_1 = 1 and λ_2 = 1. But the algebraic multiplicity of λ = 1 is 2, whereas the geometric multiplicity is 1. So A is not diagonalizable.

4 Applications

4.1 Powers of Matrices

Theorem 9 If D = P^{-1} A P, then A^k = P D^k P^{-1}.

Proof: D = P^{-1} A P, so

    D^k = (P^{-1} A P)(P^{-1} A P) ... (P^{-1} A P)    (k factors)
        = P^{-1} A (P P^{-1}) A (P P^{-1}) ... (P P^{-1}) A P
        = P^{-1} A A ... A P                           (k copies of A)
        = P^{-1} A^k P,

and hence A^k = P D^k P^{-1}.

Example 10

1. Find A^{10}, where A = [1 1; 1 1].

       P = [1 1; -1 1],  P^{-1} = (1/2) [1 -1; 1 1],  D = diag(0, 2)
   Since D^{10} = [0^{10} 0; 0 2^{10}] = [0 0; 0 1024],

       A^{10} = P D^{10} P^{-1} = [1 1; -1 1] [0 0; 0 1024] (1/2) [1 -1; 1 1] = [512 512; 512 512].

4.2 Application to Differential Equations

Suppose we have n functions of some variable t: x_1(t), x_2(t), ..., x_n(t). We may represent them with the vector

    x(t) = (x_1(t), x_2(t), ..., x_n(t)).

x(t) is called a vector function. We wish to solve a system of differential equations of the form

    x_1'(t) = a_{11} x_1(t) + a_{12} x_2(t) + ... + a_{1n} x_n(t)
    x_2'(t) = a_{21} x_1(t) + a_{22} x_2(t) + ... + a_{2n} x_n(t)
    ...
    x_n'(t) = a_{n1} x_1(t) + a_{n2} x_2(t) + ... + a_{nn} x_n(t)

where x_i' indicates the derivative of the function x_i(t). We can represent this in matrix notation as x'(t) = A x(t). Given such a system and a set of initial conditions, we would like to find the functions x_i(t).

Recall that for an arbitrary function x(t), if

    x'(t) = λ x(t),  then  x(t) = C e^{λt},

where C is a constant of integration.

Example 11 Solve the system

    (x'(t), y'(t), z'(t)) = [3 0 0; 0 2 0; 0 0 9] (x(t), y(t), z(t)),

where x(0) = 1, y(0) = 3, z(1) = 2e^9.
Thus

    x'(t) = 3 x(t),  y'(t) = 2 y(t),  z'(t) = 9 z(t),

so

    x(t) = C_1 e^{3t},  y(t) = C_2 e^{2t},  z(t) = C_3 e^{9t}.

Substituting in the initial conditions gives C_1 = 1, C_2 = 3, C_3 = 2.

Now suppose that A is diagonalizable, so there exists an invertible P and a diagonal D such that D = P^{-1} A P. Suppose x(t) satisfies the linear system of differential equations

    x'(t) = A x(t).    (*)

Define y(t) = P^{-1} x(t). Premultiplying (*) by P^{-1} gives

    P^{-1} x'(t) = P^{-1} A (P P^{-1}) x(t),

and since P P^{-1} = I, this is

    y'(t) = D y(t).

Thus to solve (*), we first solve y'(t) = D y(t) to find y(t); then

    x(t) = P y(t).

Example 12 Solve

    x_1'(t) = x_1(t) + x_2(t)
    x_2'(t) = x_1(t) + x_2(t)

where x_1(0) = 0, x_2(0) = 1.

We know A = [1 1; 1 1] = P D P^{-1}, where P = [1 1; -1 1] and D = diag(0, 2). Consider

    (y_1'(t), y_2'(t)) = [0 0; 0 2] (y_1(t), y_2(t)).

Now,

    y_1'(t) = 0,        so  y_1(t) = C_1,
    y_2'(t) = 2 y_2(t), so  y_2(t) = C_2 e^{2t}.

So

    x(t) = P y(t) = [1 1; -1 1] (C_1, C_2 e^{2t}) = (C_1 + C_2 e^{2t}, -C_1 + C_2 e^{2t}).
Now apply the initial conditions: x_1(0) = 0 gives C_1 + C_2 = 0, and x_2(0) = 1 gives -C_1 + C_2 = 1. Solving gives C_1 = -1/2 and C_2 = 1/2, so

    (x_1(t), x_2(t)) = (-1/2 + (1/2) e^{2t}, 1/2 + (1/2) e^{2t}).
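The closed-form solution of Example 12 can be sanity-checked numerically. Below is a minimal sketch (the function name x and the finite-difference check are my additions, not part of the notes): it rebuilds x(t) = P y(t) with the constants found above and verifies both the initial conditions and the equation x'(t) = A x(t).

```python
# Numerical check of Example 12: x(t) = P y(t), with y1 = C1 and
# y2 = C2 e^{2t}, should solve x'(t) = A x(t) for A = [[1, 1], [1, 1]].
import math

def x(t):
    c1, c2 = -0.5, 0.5                    # constants from the worked example
    y = [c1, c2 * math.exp(2.0 * t)]      # y solves y' = D y, D = diag(0, 2)
    return [y[0] + y[1], -y[0] + y[1]]    # x = P y, P = [[1, 1], [-1, 1]]

# Initial conditions x1(0) = 0 and x2(0) = 1:
print(x(0.0))    # [0.0, 1.0]

# Centered finite difference: x'(t) should equal A x(t) at any t.
h, t = 1e-6, 0.3
deriv = [(x(t + h)[i] - x(t - h)[i]) / (2 * h) for i in range(2)]
Ax = [sum(x(t)), sum(x(t))]              # both rows of A x are x1 + x2
print(all(abs(deriv[i] - Ax[i]) < 1e-5 for i in range(2)))   # True
```

The tolerance 1e-5 is an arbitrary choice for the O(h^2) finite-difference error; the same template checks any solution produced by the diagonalization method, with P, D, and the constants swapped in.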