Lecture 15, 16: Diagonalization

Motivation: eigenvalues and eigenvectors are easy to read off a diagonal matrix. Hence, we would like (when possible) to convert a matrix A into a diagonal one.

Suppose A is an n×n matrix with n linearly independent eigenvectors p_1, ..., p_n. Put them into the columns of the eigenvector matrix P, and put the eigenvalues into a diagonal matrix D, where D_ii is the eigenvalue corresponding to the eigenvector in the i-th column of P. Then

AP = A [p_1 ... p_n] = [λ_1 p_1 ... λ_n p_n] = PD.

From the above equation it follows that P^{-1}AP = D, or equivalently A = PDP^{-1}.

A square n×n matrix A is diagonalizable if and only if its n eigenvectors are linearly independent; having n distinct eigenvalues is a sufficient (but not necessary) condition for this.
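As a quick numerical check of the identities above, NumPy's `eig` builds P and D directly (the 2×2 matrix below is an illustrative choice, not one from the lecture):

```python
import numpy as np

# Illustrative 2x2 symmetric matrix (not from the lecture); its
# eigenvalues are 3 and 1.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# np.linalg.eig returns the eigenvalues and an eigenvector matrix P
# whose i-th column is the eigenvector for the i-th eigenvalue.
eigvals, P = np.linalg.eig(A)
D = np.diag(eigvals)

# AP = PD, hence P^{-1} A P = D and A = P D P^{-1}.
assert np.allclose(A @ P, P @ D)
assert np.allclose(np.linalg.inv(P) @ A @ P, D)
assert np.allclose(P @ D @ np.linalg.inv(P), A)
```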
Powers of A

Suppose A is factored as PDP^{-1}. To compute A^2 we multiply the factored form by itself:

A^2 = (PDP^{-1})(PDP^{-1}) = PD(P^{-1}P)DP^{-1} = PD^2 P^{-1}.

By the same argument, A^k = PD^k P^{-1}, where D^k is the diagonal matrix whose entries are the entries of D raised to the power k, and is very easy to calculate.
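A minimal sketch of this shortcut, using NumPy and an illustrative 2×2 matrix: D^k costs only an element-wise power, and the result matches repeated multiplication.

```python
import numpy as np

# Illustrative matrix (not from the lecture), eigenvalues 3 and 1.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigvals, P = np.linalg.eig(A)
P_inv = np.linalg.inv(P)

k = 10
# D^k is just the k-th power of each diagonal entry -- cheap to compute.
Dk = np.diag(eigvals ** k)
Ak = P @ Dk @ P_inv

# Compare against repeated multiplication of A.
assert np.allclose(Ak, np.linalg.matrix_power(A, k))
```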
Steps for Matrix Diagonalization

Diagonalize the following matrix:

A = [ 1 3 3 ]
    [ 3 5 3 ]
    [ 3 3 1 ]

1. Find the eigenvalues λ_1, λ_2, λ_3.
2. Find three linearly independent eigenvectors of A.
3. Construct P = [v_1, v_2, v_3].
4. Construct D.
5. Check AP = PD and A = PDP^{-1}.
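The steps above can be sketched in NumPy for the matrix as printed (it is symmetric, hence guaranteed diagonalizable):

```python
import numpy as np

# The matrix from the slide, as printed.
A = np.array([[1.0, 3.0, 3.0],
              [3.0, 5.0, 3.0],
              [3.0, 3.0, 1.0]])

# Steps 1-2: eigenvalues and three linearly independent eigenvectors.
eigvals, P = np.linalg.eig(A)

# Steps 3-4: P already holds the eigenvectors as columns; build D.
D = np.diag(eigvals)

# Step 5: check AP = PD and A = P D P^{-1}.
assert np.allclose(A @ P, P @ D)
assert np.allclose(P @ D @ np.linalg.inv(P), A)
```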
Not all Matrices are Diagonalizable

A matrix with fewer than n linearly independent eigenvectors (a defective matrix) cannot be diagonalized: an eigenvalue's geometric multiplicity can be smaller than its algebraic multiplicity, leaving too few eigenvectors to fill the columns of P.
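As an assumed illustration (the body of this slide was not preserved), the classic defective matrix [[1, 1], [0, 1]] shows how diagonalization can fail:

```python
import numpy as np

# A has the single eigenvalue 1 with algebraic multiplicity 2, but
# Null(A - I) is only one-dimensional, so there is no basis of
# eigenvectors and A is not diagonalizable.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])

# Geometric multiplicity = dim Null(A - 1*I) = n - rank(A - I).
rank = np.linalg.matrix_rank(A - np.eye(2))
geometric_multiplicity = 2 - rank
assert geometric_multiplicity == 1  # < algebraic multiplicity 2
```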
Application: Steady State

Suppose a system evolves as u_{k+1} = A u_k, where u is the state vector and the subscript denotes its position in a time series. If u_0 is the initial state of the system, the state at time k is u_k = A^k u_0. We know that A^k = XD^k X^{-1}, so as k → ∞ the behavior is governed by the eigenvalues:

- If |λ_i| < 1 then λ_i^k → 0.
- If λ_i = 1 then λ_i^k = 1.
- If |λ_i| > 1 then λ_i^k → ∞.

So a stable system A should have no eigenvalues with |λ_i| > 1.
Application: Steady State

Example: suppose 10% of the residents of city A move to city B in a year, while during the same time 5% of city B's residents move to city A. If cities A and B initially have 10,000 and 100,000 residents respectively, what will happen in the long run?

We formulate the state vector as u = (residents of city A, residents of city B)^T. If nothing else affects the populations of the two cities, the transition matrix is

A = [ 0.90 0.05 ]
    [ 0.10 0.95 ],   so that u_1 = A u_0.

The eigenvalues of A are λ_1 = 0.85 and λ_2 = 1, with eigenvectors x_1 = (1, −1)^T and x_2 = (1, 2)^T.
Application: Steady State

Example (cont'd): in the long run, u_k = A^k u_0 with A^k = X D^k X^{-1}, where

X = [ 1  1 ],   D = [ 0.85 0 ],   X^{-1} = (1/3) [ 2 −1 ]
    [ −1 2 ]        [ 0    1 ]                   [ 1  1 ].

If k → ∞ then λ_1^k = 0.85^k → 0, so only λ_2 = 1 remains effective in A^k:

lim_{k→∞} A^k = X [ 0 0 ] X^{-1} = [ 1/3 1/3 ]
                  [ 0 1 ]          [ 2/3 2/3 ].

Therefore

u_∞ = (lim_{k→∞} A^k) u_0 = [ 1/3 1/3 ] [ 10,000  ] ≈ [ 36,667 ]
                            [ 2/3 2/3 ] [ 100,000 ]   [ 73,333 ].
Application: Steady State

Example (cont'd), another approach: recall that if y = α_1 x_1 + α_2 x_2 + ... + α_n x_n, then Ay = α_1 λ_1 x_1 + α_2 λ_2 x_2 + ... + α_n λ_n x_n, and therefore

A^k y = α_1 λ_1^k x_1 + α_2 λ_2^k x_2 + ... + α_n λ_n^k x_n.

Expanding u_0 = (10,000, 100,000)^T in the eigenvector basis gives α_1 ≈ −26,667 and α_2 ≈ 36,667, so

A^k u_0 = −26,667 (0.85)^k x_1 + 36,667 (1)^k x_2 → 36,667 (1, 2)^T ≈ (36,667, 73,333)^T.
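The whole example can be checked numerically; this sketch assumes NumPy and simply raises A to a high power instead of taking the limit symbolically:

```python
import numpy as np

# Yearly migration: 10% of city A -> city B, 5% of city B -> city A.
A = np.array([[0.90, 0.05],
              [0.10, 0.95]])
u0 = np.array([10_000.0, 100_000.0])

# Eigenvalues are 0.85 and 1, as derived in the example.
eigvals, X = np.linalg.eig(A)
assert np.allclose(sorted(eigvals), [0.85, 1.0])

# Long-run state: iterate far enough that 0.85^k is negligible.
u = np.linalg.matrix_power(A, 200) @ u0

# The population settles at the ratio 1:2, i.e. ~36,667 vs ~73,333.
assert np.allclose(u, [110_000 / 3, 220_000 / 3])
```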
Application: Discrete Dynamic Systems
Application: Fibonacci Series

The Fibonacci series: 1, 1, 2, 3, 5, 8, 13, ... How can we find the 100th element, x_100, quickly?
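The bodies of these slides were not preserved; a standard approach, sketched here, treats the recurrence as the discrete dynamic system u_{k+1} = A u_k with A = [[1, 1], [1, 0]] (whose eigenvalues are (1 ± √5)/2). Then A^k contains the Fibonacci numbers and can be computed in O(log k) matrix multiplications by repeated squaring:

```python
def mat_mult(M, N):
    """2x2 matrix product with exact Python integers."""
    return [[M[0][0]*N[0][0] + M[0][1]*N[1][0], M[0][0]*N[0][1] + M[0][1]*N[1][1]],
            [M[1][0]*N[0][0] + M[1][1]*N[1][0], M[1][0]*N[0][1] + M[1][1]*N[1][1]]]

def mat_pow(M, k):
    """M^k by repeated squaring: O(log k) multiplications."""
    result = [[1, 0], [0, 1]]  # identity
    while k:
        if k & 1:
            result = mat_mult(result, M)
        M = mat_mult(M, M)
        k >>= 1
    return result

A = [[1, 1], [1, 0]]
# By induction, A^k = [[F_{k+1}, F_k], [F_k, F_{k-1}]],
# so F_100 is the off-diagonal entry of A^100.
F100 = mat_pow(A, 100)[0][1]
assert F100 == 354224848179261915075
```

Integer matrix power is used instead of the float eigendecomposition because F_100 ≈ 3.5 × 10^20 exceeds double precision; the eigenvalue view still explains the growth rate, since F_k grows like ((1 + √5)/2)^k.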
Diagonalization of Symmetric Matrices

Symmetric matrices always have real eigenvalues and are always diagonalizable. What is special about Ax = λx when A is symmetric (A = A^T)?

A = A^T  ⇒  PDP^{-1} = (PDP^{-1})^T = (P^{-1})^T D P^T   (since D^T = D).

The two sides match when P^{-1} = P^T, which means P is an orthogonal matrix (its columns are orthonormal). Therefore every symmetric matrix factors as A = QDQ^T, where Q is the normalized eigenvector matrix and D is the diagonal matrix of eigenvalues; Q is orthogonal, so Q^{-1} = Q^T.
Diagonalization of Symmetric Matrices

Example: rewrite the matrix

A = [ 1  2 ]
    [ 2 −2 ]

as the factorization A = QDQ^T.

det(A − λI) = (1 − λ)(−2 − λ) − 4 = λ^2 + λ − 6 = (λ + 3)(λ − 2) = 0, so λ_1 = −3 and λ_2 = 2.

(A − λ_1 I) x_1 = 0:  [ 4 2 ] x_1 = 0  ⇒  x_1 = ( 1 )
                      [ 2 1 ]                    ( −2 )

(A − λ_2 I) x_2 = 0:  [ −1  2 ] x_2 = 0  ⇒  x_2 = ( 2 )
                      [  2 −4 ]                    ( 1 )

x_1 and x_2 are orthogonal, but not orthonormal; normalizing them makes them orthonormal:

Q = (1/√5) [ 1  2 ],   D = [ −3 0 ]
           [ −2 1 ]        [ 0  2 ].
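The factorization can be confirmed with NumPy's `eigh`, which is specialized for symmetric matrices and returns an orthonormal Q directly (eigenvalues in ascending order):

```python
import numpy as np

# The matrix from the example; its eigenvalues are -3 and 2.
A = np.array([[1.0, 2.0],
              [2.0, -2.0]])

eigvals, Q = np.linalg.eigh(A)

assert np.allclose(eigvals, [-3.0, 2.0])
assert np.allclose(Q.T @ Q, np.eye(2))             # Q orthogonal: Q^{-1} = Q^T
assert np.allclose(Q @ np.diag(eigvals) @ Q.T, A)  # A = Q D Q^T
```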
Spectral Theorem

If A is an n×n symmetric matrix:

- All eigenvalues of A are real; A has exactly n real eigenvalues counting multiplicity (but they need not be distinct).
- For each eigenvalue λ, the geometric multiplicity (the dimension of Null(A − λI)) equals the algebraic multiplicity.
- The eigenspaces are mutually orthogonal: if λ_1 ≠ λ_2 are distinct eigenvalues, their corresponding eigenvectors v_1 and v_2 are orthogonal.
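A small numerical illustration of the multiplicity and orthogonality claims, using an assumed symmetric matrix with a repeated eigenvalue:

```python
import numpy as np

# Symmetric matrix with eigenvalues 2, 2, 5 (it equals 2I plus the
# all-ones matrix, which has eigenvalues 3, 0, 0).
A = np.array([[3.0, 1.0, 1.0],
              [1.0, 3.0, 1.0],
              [1.0, 1.0, 3.0]])

# Geometric multiplicity of lambda = 2:
# dim Null(A - 2I) = 3 - rank(A - 2I) = 3 - 1 = 2,
# matching the algebraic multiplicity, as the theorem guarantees.
rank = np.linalg.matrix_rank(A - 2 * np.eye(3))
assert 3 - rank == 2

# eigh returns orthonormal eigenvectors, even within the repeated
# eigenspace, so the eigenspaces are mutually orthogonal.
eigvals, Q = np.linalg.eigh(A)
assert np.allclose(eigvals, [2.0, 2.0, 5.0])
assert np.allclose(Q.T @ Q, np.eye(3))
```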
Proof (orthogonality of eigenvectors for distinct eigenvalues): suppose Av_1 = λ_1 v_1 and Av_2 = λ_2 v_2 with λ_1 ≠ λ_2. Using A = A^T,

λ_1 (v_1 · v_2) = (Av_1)^T v_2 = v_1^T A^T v_2 = v_1^T (Av_2) = λ_2 (v_1 · v_2).

Hence (λ_1 − λ_2)(v_1 · v_2) = 0, and since λ_1 ≠ λ_2, we conclude v_1 · v_2 = 0.