
Linear Algebra review: Powers of a diagonalizable matrix, Spectral decomposition
Prof. Tesler, Math 283, Fall 2018
Also see the separate version of this with Matlab and R commands.

Matrices

A matrix is a square or rectangular table of numbers. An m × n matrix has m rows and n columns; this is read "m by n". This matrix is 2 × 3:
A = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{bmatrix}
The entry in row i, column j is denoted A_{i,j} or A_{ij}:
A_{1,1} = 1, A_{1,2} = 2, A_{1,3} = 3, A_{2,1} = 4, A_{2,2} = 5, A_{2,3} = 6.
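The companion version of these slides gives the Matlab and R commands; as a rough Python/NumPy equivalent (my own illustration, not part of the slides), the matrix above can be built and indexed like this. Note that NumPy indexes from 0, so the math entry A_{1,1} is A[0, 0].

    import numpy as np

    # The 2x3 example matrix from the slide
    A = np.array([[1, 2, 3],
                  [4, 5, 6]])

    print(A.shape)   # (2, 3): 2 rows, 3 columns
    print(A[0, 0])   # 1, i.e. A_{1,1} (NumPy indexing starts at 0)
    print(A[1, 2])   # 6, i.e. A_{2,3}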

Matrix multiplication

Let A be p × q and B be q × r. The product AB = C is the p × r matrix of dot products
C_{i,j} = \sum_{k=1}^{q} A_{i,k} B_{k,j} = (i-th row of A) · (j-th column of B).
The number of columns in A must equal the number of rows in B (namely q) in order to be able to compute the dot products. Example with A of size 2 × 3 and B of size 3 × 4, giving C of size 2 × 4:
A B = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{bmatrix} \begin{bmatrix} 5 & -2 & 3 & 2 \\ 0 & 1 & 1 & -1 \\ -1 & 6 & 4 & 3 \end{bmatrix} = C

Matrix multiplication, entry by entry

Fill in C = AB one dot product at a time:
C_{1,1} = 1(5) + 2(0) + 3(-1) = 5 + 0 - 3 = 2
C_{1,2} = 1(-2) + 2(1) + 3(6) = -2 + 2 + 18 = 18
C_{1,3} = 1(3) + 2(1) + 3(4) = 3 + 2 + 12 = 17
C_{1,4} = 1(2) + 2(-1) + 3(3) = 2 - 2 + 9 = 9
C_{2,1} = 4(5) + 5(0) + 6(-1) = 20 + 0 - 6 = 14
C_{2,2} = 4(-2) + 5(1) + 6(6) = -8 + 5 + 36 = 33
C_{2,3} = 4(3) + 5(1) + 6(4) = 12 + 5 + 24 = 41
C_{2,4} = 4(2) + 5(-1) + 6(3) = 8 - 5 + 18 = 21
giving
C = \begin{bmatrix} 2 & 18 & 17 & 9 \\ 14 & 33 & 41 & 21 \end{bmatrix}
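A quick check of this worked example, sketched in NumPy (my own illustration; the slides' companion version uses Matlab and R). The @ operator computes exactly these row-by-column dot products.

    import numpy as np

    A = np.array([[1, 2, 3],
                  [4, 5, 6]])                  # 2x3
    B = np.array([[ 5, -2, 3,  2],
                  [ 0,  1, 1, -1],
                  [-1,  6, 4,  3]])            # 3x4

    C = A @ B                                  # 2x4 product
    print(C)
    # [[ 2 18 17  9]
    #  [14 33 41 21]]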

Transpose of a matrix

Given a matrix A of dimensions p × q, the transpose A′ is q × p, obtained by interchanging rows and columns: (A′)_{ij} = A_{ji}.
\begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{bmatrix}′ = \begin{bmatrix} 1 & 4 \\ 2 & 5 \\ 3 & 6 \end{bmatrix}
The transpose of a product reverses the order and transposes the factors: (AB)′ = B′ A′. With the example above,
(AB)′ = \begin{bmatrix} 2 & 18 & 17 & 9 \\ 14 & 33 & 41 & 21 \end{bmatrix}′ = \begin{bmatrix} 2 & 14 \\ 18 & 33 \\ 17 & 41 \\ 9 & 21 \end{bmatrix} = \begin{bmatrix} 5 & 0 & -1 \\ -2 & 1 & 6 \\ 3 & 1 & 4 \\ 2 & -1 & 3 \end{bmatrix} \begin{bmatrix} 1 & 4 \\ 2 & 5 \\ 3 & 6 \end{bmatrix} = B′ A′
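In NumPy (again my own sketch, not the slides' Matlab/R version), .T gives the transpose and the product rule can be checked numerically:

    import numpy as np

    A = np.array([[1, 2, 3], [4, 5, 6]])
    B = np.array([[5, -2, 3, 2], [0, 1, 1, -1], [-1, 6, 4, 3]])

    print(A.T)                                # 3x2 transpose of A
    print(np.allclose((A @ B).T, B.T @ A.T))  # True: (AB)' = B'A'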

Matrix multiplication is not commutative: usually AB ≠ BA

For both AB and BA to be defined, we need compatible dimensions: A is m × n and B is n × m, giving AB of size m × m and BA of size n × n.
The only chance for them to be equal would be if A and B are both square and of the same size, n × n. Even then, they are usually not equal:
\begin{bmatrix} 1 & 2 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} 3 & 0 \\ 0 & 0 \end{bmatrix} = \begin{bmatrix} 3 & 0 \\ 0 & 0 \end{bmatrix}   but   \begin{bmatrix} 3 & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} 1 & 2 \\ 0 & 0 \end{bmatrix} = \begin{bmatrix} 3 & 6 \\ 0 & 0 \end{bmatrix}

Multiplying several matrices

Multiplication is associative: (AB)C = A(BC).
Suppose A is p_1 × p_2, B is p_2 × p_3, C is p_3 × p_4, and D is p_4 × p_5. Then ABCD is p_1 × p_5. By associativity, it may be computed in many ways, such as A(B(CD)), (AB)(CD), ..., or directly by
(ABCD)_{i,j} = \sum_{k_2=1}^{p_2} \sum_{k_3=1}^{p_3} \sum_{k_4=1}^{p_4} A_{i,k_2} B_{k_2,k_3} C_{k_3,k_4} D_{k_4,j}
This generalizes to any number of matrices. Powers A^2 = AA, A^3 = AAA, ... are defined for square matrices.

Identity matrix

The n × n identity matrix I is
I = \begin{bmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{bmatrix},   I_{i,j} = \begin{cases} 1 & \text{if } i = j \text{ (main diagonal);} \\ 0 & \text{if } i \neq j \text{ (elsewhere).} \end{cases}
For any n × n matrix A, IA = AI = A. This plays the same role as 1 does in multiplication of numbers: 1 · x = x · 1 = x.

Inverse matrix

The inverse of an n × n matrix A is an n × n matrix A^{-1} such that A A^{-1} = I and A^{-1} A = I. It may or may not exist. This plays the role of reciprocals of ordinary numbers, x^{-1} = 1/x.
For 2 × 2 matrices,
A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}   has   A^{-1} = \frac{1}{ad - bc} \begin{bmatrix} d & -b \\ -c & a \end{bmatrix}
unless det(A) = ad − bc = 0, in which case A^{-1} is undefined.
For n × n matrices, use the row reduction algorithm (a.k.a. Gaussian elimination) from Linear Algebra.
If A and B are invertible and the same size: (AB)^{-1} = B^{-1} A^{-1}. The order is reversed and the factors are inverted.
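As a small numerical illustration (my own sketch; the matrix here is the P = [[8, -1], [6, 3]] used later in these slides), the 2 × 2 formula and NumPy's general-purpose inverse agree:

    import numpy as np

    A = np.array([[8., -1.],
                  [6.,  3.]])

    a, b, c, d = A.ravel()
    det = a * d - b * c                              # 30, nonzero, so A is invertible
    Ainv_formula = np.array([[d, -b], [-c, a]]) / det

    Ainv = np.linalg.inv(A)                          # general n x n inverse
    print(np.allclose(Ainv, Ainv_formula))           # True
    print(np.allclose(A @ Ainv, np.eye(2)))          # True: A A^{-1} = I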

Span, basis, and linear (in)dependence

The span of vectors v_1, ..., v_k is the set of all linear combinations
α_1 v_1 + ⋯ + α_k v_k,   where α_1, ..., α_k ∈ R.

Span, basis, and linear (in)dependence: Example 1

In 3D,
span\left\{ \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} \right\} = \left\{ \begin{bmatrix} x \\ 0 \\ z \end{bmatrix} : x, z ∈ R \right\} = xz-plane
Here, the span of these two vectors is a 2-dimensional space. Every vector in it is generated by a unique linear combination.

Span, basis, and linear (in)dependence: Example 2

In 3D,
span\left\{ \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}, \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 0 \\ 1/2 \end{bmatrix} \right\} = \left\{ \begin{bmatrix} x \\ y \\ z \end{bmatrix} : x, y, z ∈ R \right\} = R^3
Note that
\begin{bmatrix} x \\ y \\ z \end{bmatrix} = (x - y) \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} + y \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix} + 2z \begin{bmatrix} 0 \\ 0 \\ 1/2 \end{bmatrix}
Here, the span of these three vectors is a 3-dimensional space. Every vector in R^3 is generated by a unique linear combination.

Span, basis, and linear (in)dependence: Example 3

In 3D,
span\left\{ \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}, \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix}, \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} \right\} = \left\{ \begin{bmatrix} x \\ 0 \\ z \end{bmatrix} : x, z ∈ R \right\} = xz-plane
This is a plane (2D), even though it's a span of three vectors. Note that v_2 = v_1 + v_3, or v_1 − v_2 + v_3 = 0. There are multiple ways to generate each vector in the span: for all x, z, t,
\begin{bmatrix} x \\ 0 \\ z \end{bmatrix} = x v_1 + z v_3 + t (\underbrace{v_1 - v_2 + v_3}_{= 0}) = (x + t) v_1 - t v_2 + (z + t) v_3

Span, basis, and linear (in)dependence

Given vectors v_1, ..., v_k, if there is a linear combination α_1 v_1 + ⋯ + α_k v_k = 0 with at least one α_i ≠ 0, the vectors are linearly dependent (Ex. 3). Otherwise they are linearly independent (Ex. 1–2).
Linearly independent vectors form a basis of the space S they span. Any vector in S is a unique linear combination of basis vectors (vs. it's not unique if v_1, ..., v_k are linearly dependent).
One basis of R^n is a unit vector on each axis; for R^3 that is
\begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}
but there are other possibilities, e.g., Example 2:
\begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}, \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 0 \\ 1/2 \end{bmatrix}
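One practical way to test linear (in)dependence is to put the vectors into the columns of a matrix and compare its rank to the number of vectors; a small NumPy sketch (my own illustration) using Examples 2 and 3:

    import numpy as np

    # Columns are the vectors of Example 2: (1,0,0), (1,1,0), (0,0,1/2)
    V2 = np.array([[1, 1, 0  ],
                   [0, 1, 0  ],
                   [0, 0, 0.5]])
    # Columns are the vectors of Example 3: (1,0,0), (1,0,1), (0,0,1); note v2 = v1 + v3
    V3 = np.array([[1, 1, 0],
                   [0, 0, 0],
                   [0, 1, 1]])

    print(np.linalg.matrix_rank(V2))   # 3 = number of vectors: independent, a basis of R^3
    print(np.linalg.matrix_rank(V3))   # 2 < 3: dependent, spans only the xz-plane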

Eigenvalues and eigenvectors

Let A be a square matrix (k × k) and v ≠ 0 be a column vector (k × 1). If A v = λ v for a scalar λ, then v is an eigenvector of A with eigenvalue λ.
Example:
\begin{bmatrix} 8 & -1 \\ 6 & 3 \end{bmatrix} \begin{bmatrix} 1 \\ 3 \end{bmatrix} = \begin{bmatrix} (8)(1) + (-1)(3) \\ (6)(1) + (3)(3) \end{bmatrix} = \begin{bmatrix} 5 \\ 15 \end{bmatrix} = 5 \begin{bmatrix} 1 \\ 3 \end{bmatrix}
so \begin{bmatrix} 1 \\ 3 \end{bmatrix} is an eigenvector with eigenvalue 5.
But this is just a verification. How do we find eigenvalues and eigenvectors?
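The same verification in NumPy (an illustrative sketch of my own, not from the slides):

    import numpy as np

    A = np.array([[8, -1],
                  [6,  3]])
    v = np.array([1, 3])

    print(A @ v)   # [ 5 15 ], which is 5 * v, so v is an eigenvector with eigenvalue 5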

Finding eigenvalues and eigenvectors

We will work with the example
P = \begin{bmatrix} 8 & -1 \\ 6 & 3 \end{bmatrix}
Form the identity matrix of the same dimensions:
I = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}
The formula for the determinant depends on the dimensions of the matrix. For a 2 × 2 matrix,
det \begin{bmatrix} a & b \\ c & d \end{bmatrix} = ad - bc

Finding eigenvalues and eigenvectors

Compute the determinant of P − λI:
det(P - λI) = det \begin{bmatrix} 8 - λ & -1 \\ 6 & 3 - λ \end{bmatrix} = (8 - λ)(3 - λ) - (-1)(6) = 24 - 11λ + λ^2 + 6 = λ^2 - 11λ + 30
This is the characteristic polynomial of P. It has degree k in λ.
The characteristic equation is det(P − λI) = 0. Solve it for λ. For k = 2, use the quadratic formula:
λ = \frac{11 ± \sqrt{(-11)^2 - 4(1)(30)}}{2} = 5, 6
The eigenvalues are λ = 5 and λ = 6.
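A NumPy sketch of the same computation (my own illustration; np.poly returns the characteristic polynomial coefficients of a square matrix):

    import numpy as np

    P = np.array([[8., -1.],
                  [6.,  3.]])

    char_poly = np.poly(P)         # [1, -11, 30], i.e. lambda^2 - 11*lambda + 30
    print(np.roots(char_poly))     # roots 6 and 5 (order may vary)
    print(np.linalg.eigvals(P))    # eigenvalues computed directly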

Finding the (right) eigenvector for λ = 5

Let v = \begin{bmatrix} a \\ b \end{bmatrix}. We will solve for a, b. The equation P v = λ v is equivalent to (P − λI) v = 0.
\begin{bmatrix} 0 \\ 0 \end{bmatrix} = (P - 5I) v = \begin{bmatrix} 3 & -1 \\ 6 & -2 \end{bmatrix} \begin{bmatrix} a \\ b \end{bmatrix} = \begin{bmatrix} 3a - b \\ 6a - 2b \end{bmatrix}
so 3a − b = 0 and 6a − 2b = 0 (which are equivalent). Solving gives b = 3a. Thus,
v = \begin{bmatrix} a \\ b \end{bmatrix} = \begin{bmatrix} a \\ 3a \end{bmatrix} = a \begin{bmatrix} 1 \\ 3 \end{bmatrix}
Any nonzero scalar multiple of \begin{bmatrix} 1 \\ 3 \end{bmatrix} is an eigenvector of P with eigenvalue 5.

Finding the (right) eigenvector for λ = 6

Let v = \begin{bmatrix} a \\ b \end{bmatrix}. We will solve for a, b. The equation P v = λ v is equivalent to (P − λI) v = 0.
\begin{bmatrix} 0 \\ 0 \end{bmatrix} = (P - 6I) v = \begin{bmatrix} 2 & -1 \\ 6 & -3 \end{bmatrix} \begin{bmatrix} a \\ b \end{bmatrix} = \begin{bmatrix} 2a - b \\ 6a - 3b \end{bmatrix}
so 2a − b = 0 and 6a − 3b = 0 (which are equivalent). Solving gives b = 2a. Thus,
v = \begin{bmatrix} a \\ b \end{bmatrix} = \begin{bmatrix} a \\ 2a \end{bmatrix} = a \begin{bmatrix} 1 \\ 2 \end{bmatrix}
Any nonzero scalar multiple of \begin{bmatrix} 1 \\ 2 \end{bmatrix} is an eigenvector of P with eigenvalue 6.
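In NumPy, np.linalg.eig returns eigenvalues together with unit-length right eigenvectors; rescaling each so its first entry is 1 recovers the (1, 3) and (1, 2) directions found above (my own sketch, not from the slides):

    import numpy as np

    P = np.array([[8., -1.],
                  [6.,  3.]])

    evals, V = np.linalg.eig(P)              # columns of V are right eigenvectors
    for lam, v in zip(evals, V.T):
        print(lam, v / v[0])                 # (1, 3) for lambda = 5; (1, 2) for lambda = 6
        print(np.allclose(P @ v, lam * v))   # True: P v = lambda v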

Verify the eigenvectors

\begin{bmatrix} 8 & -1 \\ 6 & 3 \end{bmatrix} \begin{bmatrix} 1 \\ 3 \end{bmatrix} = \begin{bmatrix} 8(1) - 1(3) \\ 6(1) + 3(3) \end{bmatrix} = \begin{bmatrix} 5 \\ 15 \end{bmatrix} = 5 \begin{bmatrix} 1 \\ 3 \end{bmatrix}

\begin{bmatrix} 8 & -1 \\ 6 & 3 \end{bmatrix} \begin{bmatrix} 2 \\ 4 \end{bmatrix} = \begin{bmatrix} 8(2) - 1(4) \\ 6(2) + 3(4) \end{bmatrix} = \begin{bmatrix} 12 \\ 24 \end{bmatrix} = 6 \begin{bmatrix} 2 \\ 4 \end{bmatrix}

Normalization: Which scalar multiple should we use?

In some applications, any nonzero multiple is fine. In others, a particular scaling is required.
Markov chains / stochastic matrices: Entries are probabilities of different cases. Scale the vector so that the entries sum up to 1. For v = a \begin{bmatrix} 1 \\ 3 \end{bmatrix}, the sum is a(1 + 3) = 4a = 1, so a = 1/4:
v = \begin{bmatrix} 1/4 \\ 3/4 \end{bmatrix}
Principal component analysis: Scale it to be a unit vector, so that the sum of the squares of its entries equals 1: 1 = a^2(1^2 + 3^2) = 10a^2, so a = ±1/\sqrt{1^2 + 3^2} = ±1/\sqrt{10}, and
v = ± \begin{bmatrix} 1/\sqrt{10} \\ 3/\sqrt{10} \end{bmatrix}   (two possibilities)
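Both scalings in NumPy (an illustrative sketch of my own):

    import numpy as np

    v = np.array([1., 3.])                   # eigenvector of P for eigenvalue 5

    v_markov = v / v.sum()                   # entries sum to 1: [0.25, 0.75]
    v_unit   = v / np.linalg.norm(v)         # unit length: [1, 3] / sqrt(10); -v_unit is the other choice

    print(v_markov, v_markov.sum())
    print(v_unit, np.linalg.norm(v_unit))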

Finding the left eigenvector for λ = 5

Let v = \begin{bmatrix} a & b \end{bmatrix}. We will solve for a, b. The equation v P = λ v is equivalent to v (P − λI) = 0.
\begin{bmatrix} 0 & 0 \end{bmatrix} = v (P - 5I) = \begin{bmatrix} a & b \end{bmatrix} \begin{bmatrix} 3 & -1 \\ 6 & -2 \end{bmatrix} = \begin{bmatrix} 3a + 6b & -a - 2b \end{bmatrix}
so 3a + 6b = 0 and −a − 2b = 0 (which are equivalent). Solving gives b = −a/2. Thus,
v = \begin{bmatrix} a & b \end{bmatrix} = \begin{bmatrix} a & -a/2 \end{bmatrix} = a \begin{bmatrix} 1 & -1/2 \end{bmatrix}
Any nonzero scalar multiple of \begin{bmatrix} 1 & -1/2 \end{bmatrix} is a left eigenvector of P with eigenvalue 5.

Finding the left eigenvector for λ = 6

Let v = \begin{bmatrix} a & b \end{bmatrix}. We will solve for a, b. The equation v P = λ v is equivalent to v (P − λI) = 0.
\begin{bmatrix} 0 & 0 \end{bmatrix} = v (P - 6I) = \begin{bmatrix} a & b \end{bmatrix} \begin{bmatrix} 2 & -1 \\ 6 & -3 \end{bmatrix} = \begin{bmatrix} 2a + 6b & -a - 3b \end{bmatrix}
so 2a + 6b = 0 and −a − 3b = 0 (which are equivalent). Solving gives b = −a/3. Thus,
v = \begin{bmatrix} a & b \end{bmatrix} = \begin{bmatrix} a & -a/3 \end{bmatrix} = a \begin{bmatrix} 1 & -1/3 \end{bmatrix}
Any nonzero scalar multiple of \begin{bmatrix} 1 & -1/3 \end{bmatrix} is a left eigenvector of P with eigenvalue 6.
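Left eigenvectors of P are right eigenvectors of the transpose P′, so the same NumPy routine can be reused (my own sketch, not the slides' Matlab/R version):

    import numpy as np

    P = np.array([[8., -1.],
                  [6.,  3.]])

    evals, W = np.linalg.eig(P.T)            # right eigenvectors of P' = left eigenvectors of P
    for lam, w in zip(evals, W.T):
        print(lam, w / w[0])                 # (1, -1/2) for lambda = 5; (1, -1/3) for lambda = 6
        print(np.allclose(w @ P, lam * w))   # True: v P = lambda v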

Verify the left eigenvectors

\begin{bmatrix} 2 & -1 \end{bmatrix} \begin{bmatrix} 8 & -1 \\ 6 & 3 \end{bmatrix} = \begin{bmatrix} 2(8) - 1(6) & 2(-1) - 1(3) \end{bmatrix} = \begin{bmatrix} 10 & -5 \end{bmatrix} = 5 \begin{bmatrix} 2 & -1 \end{bmatrix}

\begin{bmatrix} 1.5 & -.5 \end{bmatrix} \begin{bmatrix} 8 & -1 \\ 6 & 3 \end{bmatrix} = \begin{bmatrix} 1.5(8) - .5(6) & 1.5(-1) - .5(3) \end{bmatrix} = \begin{bmatrix} 9 & -3 \end{bmatrix} = 6 \begin{bmatrix} 1.5 & -.5 \end{bmatrix}

Diagonalizing a matrix

This procedure assumes there are k linearly independent eigenvectors, where P is k × k.
If the characteristic polynomial has k distinct roots, then there are k such eigenvectors. But if roots are repeated, there may or may not be a full set of eigenvectors. We'll explore this complication later.

Diagonalizing a matrix

Put the right eigenvectors r_1, r_2, ... into the columns of a matrix V. Form the diagonal matrix D with the eigenvalues λ_1, λ_2, ... in the same order:
V = \begin{bmatrix} r_1 & r_2 \end{bmatrix} = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix},   D = \begin{bmatrix} λ_1 & 0 \\ 0 & λ_2 \end{bmatrix} = \begin{bmatrix} 5 & 0 \\ 0 & 6 \end{bmatrix}
Compute
V^{-1} = \begin{bmatrix} l_1 \\ l_2 \end{bmatrix} = \begin{bmatrix} -2 & 1 \\ 3/2 & -1/2 \end{bmatrix}
Its rows are the left eigenvectors l_1, l_2, ... of P, in the same order as the eigenvalues in D, scaled so that l_i r_i = 1.
This gives the diagonalization P = V D V^{-1}:
\begin{bmatrix} 8 & -1 \\ 6 & 3 \end{bmatrix} = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} \begin{bmatrix} 5 & 0 \\ 0 & 6 \end{bmatrix} \begin{bmatrix} -2 & 1 \\ 3/2 & -1/2 \end{bmatrix}
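The diagonalization can be checked numerically (my own NumPy sketch; np.linalg.eig scales the eigenvectors to unit length rather than to the (1,3), (2,4) columns used above, but any consistent scaling works because V^{-1} compensates):

    import numpy as np

    P = np.array([[8., -1.],
                  [6.,  3.]])

    evals, V = np.linalg.eig(P)              # columns of V: right eigenvectors
    D = np.diag(evals)
    Vinv = np.linalg.inv(V)                  # rows: left eigenvectors scaled so l_i r_i = 1

    print(np.allclose(V @ D @ Vinv, P))      # True: P = V D V^{-1}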

Matrix powers using the spectral decomposition

An expansion of P^n is P^n = (V D V^{-1})(V D V^{-1}) ⋯ (V D V^{-1}) = V D^n V^{-1}:
P^n = V D^n V^{-1} = V \begin{bmatrix} 5^n & 0 \\ 0 & 6^n \end{bmatrix} V^{-1} = V \begin{bmatrix} 5^n & 0 \\ 0 & 0 \end{bmatrix} V^{-1} + V \begin{bmatrix} 0 & 0 \\ 0 & 6^n \end{bmatrix} V^{-1}
The two terms are
V \begin{bmatrix} 5^n & 0 \\ 0 & 0 \end{bmatrix} V^{-1} = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} \begin{bmatrix} 5^n & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} -2 & 1 \\ 3/2 & -1/2 \end{bmatrix} = \begin{bmatrix} (1)(5^n)(-2) & (1)(5^n)(1) \\ (3)(5^n)(-2) & (3)(5^n)(1) \end{bmatrix} = λ_1^n r_1 l_1 = 5^n \begin{bmatrix} 1 \\ 3 \end{bmatrix} \begin{bmatrix} -2 & 1 \end{bmatrix} = 5^n \begin{bmatrix} -2 & 1 \\ -6 & 3 \end{bmatrix}
V \begin{bmatrix} 0 & 0 \\ 0 & 6^n \end{bmatrix} V^{-1} = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} \begin{bmatrix} 0 & 0 \\ 0 & 6^n \end{bmatrix} \begin{bmatrix} -2 & 1 \\ 3/2 & -1/2 \end{bmatrix} = \begin{bmatrix} (2)(6^n)(3/2) & (2)(6^n)(-1/2) \\ (4)(6^n)(3/2) & (4)(6^n)(-1/2) \end{bmatrix} = λ_2^n r_2 l_2 = 6^n \begin{bmatrix} 2 \\ 4 \end{bmatrix} \begin{bmatrix} 3/2 & -1/2 \end{bmatrix} = 6^n \begin{bmatrix} 3 & -1 \\ 6 & -2 \end{bmatrix}

Matrix powers using the spectral decomposition

Continue computing P^n:
P^n = V D^n V^{-1} = V \begin{bmatrix} 5^n & 0 \\ 0 & 0 \end{bmatrix} V^{-1} + V \begin{bmatrix} 0 & 0 \\ 0 & 6^n \end{bmatrix} V^{-1} = 5^n \begin{bmatrix} -2 & 1 \\ -6 & 3 \end{bmatrix} + 6^n \begin{bmatrix} 3 & -1 \\ 6 & -2 \end{bmatrix}
General formula (with k = 2 and two distinct eigenvalues):
P^n = V D^n V^{-1} = λ_1^n r_1 l_1 + λ_2^n r_2 l_2
More generally, if P is k × k and diagonalizable, this becomes
P^n = V D^n V^{-1} = λ_1^n r_1 l_1 + λ_2^n r_2 l_2 + ⋯ + λ_k^n r_k l_k
What if the matrix is not diagonalizable? We will see a generalization called the Jordan Canonical Form.
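A closing numerical check of the general formula (my own NumPy sketch), comparing the spectral-decomposition sum against repeated multiplication:

    import numpy as np

    P = np.array([[8., -1.],
                  [6.,  3.]])
    n = 10

    evals, V = np.linalg.eig(P)
    Vinv = np.linalg.inv(V)

    # P^n = sum_i lambda_i^n * r_i l_i, with r_i a column of V and l_i the matching row of V^{-1}
    Pn_spectral = sum(lam**n * np.outer(V[:, i], Vinv[i, :]) for i, lam in enumerate(evals))

    print(np.allclose(Pn_spectral, np.linalg.matrix_power(P, n)))   # True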