MAT1302F Mathematical Methods II Lecture 19


Aaron Christie
2 April 2015

Eigenvectors, Eigenvalues, and Diagonalization

Now that the basic theory of eigenvalues and eigenvectors is in place (most importantly, a procedure that allows us to find all the eigenvalues, eigenvectors, and eigenspaces of a given matrix $A$), it's time to discuss an important application of these ideas: the notion of diagonalizing a matrix.

First, recall what a diagonal matrix is: it is a matrix whose only nonzero entries lie on its main diagonal. A couple of examples are
$$\begin{bmatrix} 6 & 0 & 0 \\ 0 & 5 & 0 \\ 0 & 0 & 3/7 \end{bmatrix} \qquad\text{and}\qquad \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & 9 \end{bmatrix}.$$
(Note that entries on the main diagonal are allowed to be zero; it is only the off-diagonal entries that must vanish.)

Diagonal matrices have a special form that's nice for a number of reasons:

- Diagonal matrices are both upper and lower triangular, so: the eigenvalues of a diagonal matrix are just its diagonal entries, and the determinant of a diagonal matrix is just the product of its diagonal entries.
- They are particularly easy to multiply: just multiply the diagonal entries in the same-numbered positions.
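These facts are easy to confirm numerically. Here is a minimal sketch in Python with numpy, using the first matrix above plus a second diagonal matrix chosen arbitrarily for the product check (the tooling is our choice here, not something the lecture assumes):

```python
import numpy as np

# The first diagonal matrix from the example above.
D1 = np.diag([6.0, 5.0, 3.0 / 7.0])
# A second diagonal matrix, chosen arbitrarily for the product check.
D2 = np.diag([2.0, -1.0, 4.0])

# The eigenvalues of a diagonal matrix are its diagonal entries.
print(np.linalg.eigvals(D1))       # [6.         5.         0.42857143]

# Its determinant is the product of the diagonal entries.
print(np.linalg.det(D1), 6 * 5 * (3 / 7))

# Multiplying diagonal matrices multiplies matching diagonal entries.
print(D1 @ D2)                     # diag(12, -5, 12/7)
```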

The same structure makes it easy to determine invertibility and find the inverse of a diagonal matrix: as long as no diagonal entry is 0, the matrix is invertible, and its inverse is the diagonal matrix whose entries are the reciprocals of the diagonal entries of the original.

These properties are legitimately important in applications. For instance, computing powers of a matrix is generally quite difficult, especially by hand. But even using computing software, real applications often require computing very large powers of very large matrices, large enough to sometimes be a problem even for a machine built for that kind of work. Computing $A^7$ by hand for a dense $5 \times 5$ matrix $A$ of integers is going to be a pain, and, relatively speaking, that case isn't even bad: the matrix isn't that big, it only has integer entries, and the power is not especially large. On the other hand, if
$$A = \begin{bmatrix} 3 & 0 & 0 & 0 & 0 \\ 0 & 5 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & -3 & 0 \\ 0 & 0 & 0 & 0 & 2 \end{bmatrix},$$
computing its powers isn't bad at all; it just reduces to computing powers of its entries:
$$A^7 = \begin{bmatrix} 3^7 & 0 & 0 & 0 & 0 \\ 0 & 5^7 & 0 & 0 & 0 \\ 0 & 0 & 1^7 & 0 & 0 \\ 0 & 0 & 0 & (-3)^7 & 0 \\ 0 & 0 & 0 & 0 & 2^7 \end{bmatrix}.$$
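The computational gap is easy to see in code. A small numpy sketch, using the diagonal matrix above:

```python
import numpy as np

d = np.array([3.0, 5.0, 1.0, -3.0, 2.0])   # the diagonal of A above
A = np.diag(d)

# Powers of a diagonal matrix reduce to entrywise powers of its diagonal.
A7_fast = np.diag(d ** 7)

# The same result by repeated matrix multiplication.
A7_slow = np.linalg.matrix_power(A, 7)

print(np.allclose(A7_fast, A7_slow))       # True
```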

So it would certainly be computationally helpful if it were possible to take advantage of this kind of simplification for matrices that aren't themselves diagonal. Here's an example of the sort of thing it might be reasonable to expect in a somewhat general context.

Example 1.1. Take the following three matrices:
$$D = \begin{bmatrix} 3 & 0 \\ 0 & -4 \end{bmatrix}, \qquad P = \begin{bmatrix} 3 & 2 \\ 1 & 1 \end{bmatrix}, \qquad P^{-1} = \begin{bmatrix} 1 & -2 \\ -1 & 3 \end{bmatrix}.$$
Then let $A$ be the product of these:
$$A = PDP^{-1} = \begin{bmatrix} 17 & -42 \\ 7 & -18 \end{bmatrix}.$$
Now suppose we want to compute just the 4th power of $A$. By hand, this will be difficult and lengthy. But the way we formed $A$ to begin with actually makes this easier:
$$A^4 = (PDP^{-1})(PDP^{-1})(PDP^{-1})(PDP^{-1}) = PD(P^{-1}P)D(P^{-1}P)D(P^{-1}P)DP^{-1} = PDIDIDIDP^{-1} = PD^4P^{-1} = \begin{bmatrix} -269 & 1050 \\ -175 & 606 \end{bmatrix}.$$
Even though $A$ is not diagonal, it's still possible to use the fact that diagonal matrices can be easily exponentiated to take some of the work out of this chore.

The form appearing in this example is in some ways the nearest we can reasonably expect to get to a diagonal matrix if we're given an arbitrary matrix.

Definition 1.2. If $A = PBP^{-1}$ for some invertible matrix $P$, then we say that $A$ and $B$ are similar.

Remark 1.3. Notice that this definition applies beyond the situation we're planning on using it for: the matrix $B$ doesn't have to be diagonal; it can be any matrix at all that's related to $A$ through an invertible matrix.

Our goal now is to find a diagonal matrix similar to a given matrix, or to find out when it is possible to do this.

Definition 1.4. If $A$ is similar to a diagonal matrix, then $A$ is said to be diagonalizable. (Explicitly, $A$ is diagonalizable when there is a diagonal matrix $D$ and an invertible matrix $P$ such that $A = PDP^{-1}$.)
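Returning to Example 1.1 for a moment, here is a numpy sketch verifying the arithmetic:

```python
import numpy as np

D = np.diag([3.0, -4.0])
P = np.array([[3.0, 2.0],
              [1.0, 1.0]])
P_inv = np.linalg.inv(P)

A = P @ D @ P_inv
print(A)                              # [[ 17. -42.]
                                      #  [  7. -18.]]

# Only D needs to be raised to the 4th power.
A4 = P @ np.diag(np.diag(D) ** 4) @ P_inv
print(np.allclose(A4, np.linalg.matrix_power(A, 4)))   # True
print(A4)                             # [[-269. 1050.]
                                      #  [-175.  606.]]
```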

To diagonalize a matrix $A$ is a matter of finding a diagonal matrix $D$ and an invertible matrix $P$ such that $A = PDP^{-1}$. It turns out that not every matrix is diagonalizable, but that the circumstances under which a matrix is diagonalizable have everything to do with its eigenvalues and eigenvectors.

Example 1.5. To see how this works out, recall an example from last time. Let
$$A = \begin{bmatrix} 4 & -1 \\ 2 & 1 \end{bmatrix}.$$
The characteristic polynomial of this matrix is
$$\lambda^2 - 5\lambda + 6 = (\lambda - 2)(\lambda - 3),$$
so the eigenvalues are $\lambda_1 = 2$ and $\lambda_2 = 3$. Now let's find the corresponding eigenspaces.

$\lambda_1 = 2$:
$$2I - A = \begin{bmatrix} -2 & 1 \\ -2 & 1 \end{bmatrix} \sim \begin{bmatrix} 1 & -1/2 \\ 0 & 0 \end{bmatrix}.$$
The solutions in vector parametric form are
$$t \begin{bmatrix} 1/2 \\ 1 \end{bmatrix},$$
so we can choose any nonzero vector of this form as a basis vector for the eigenspace. Taking $t = 1$ gives $\begin{bmatrix} 1/2 \\ 1 \end{bmatrix}$, but we could also remove the fractional entry by choosing $t = 2$ instead:
$$\begin{bmatrix} 1 \\ 2 \end{bmatrix}.$$

$\lambda_2 = 3$:
$$3I - A = \begin{bmatrix} -1 & 1 \\ -2 & 2 \end{bmatrix} \sim \begin{bmatrix} 1 & -1 \\ 0 & 0 \end{bmatrix}.$$

This has solutions in vector parametric form
$$t \begin{bmatrix} 1 \\ 1 \end{bmatrix},$$
so $\begin{bmatrix} 1 \\ 1 \end{bmatrix}$ is an eigenvector with eigenvalue 3 and a basis vector for the 3-eigenspace (as would be any nonzero multiple of this vector).

Recall that two classes ago we saw a theorem that told us that eigenvectors for distinct eigenvalues of a matrix are linearly independent. This means that if we form a matrix with the basis vectors we chose for each eigenvalue, we get an invertible matrix (a square matrix with linearly independent columns is invertible by the Invertible Matrix Theorem):
$$P = \begin{bmatrix} 1 & 1 \\ 2 & 1 \end{bmatrix}$$
(you can check the determinant of this matrix to prove that it is invertible if you're doubtful). The inverse of this matrix can be quickly calculated to be
$$P^{-1} = \begin{bmatrix} -1 & 1 \\ 2 & -1 \end{bmatrix}.$$
Now, here's the miracle (it's not a miracle, but it seems to come out of nowhere for us). If we form a diagonal matrix $D$ by taking the diagonal entry in the $i$th column to be the eigenvalue of the eigenvector appearing in the $i$th column of $P$, then
$$PDP^{-1} = \begin{bmatrix} 1 & 1 \\ 2 & 1 \end{bmatrix} \begin{bmatrix} 2 & 0 \\ 0 & 3 \end{bmatrix} \begin{bmatrix} -1 & 1 \\ 2 & -1 \end{bmatrix} = \begin{bmatrix} 4 & -1 \\ 2 & 1 \end{bmatrix} = A.$$
So: $A$ is actually similar to the diagonal matrix whose entries are the eigenvalues of $A$! And now, with this similarity in hand, we can compute large powers of $A$:
$$A^{100} = (PDP^{-1})^{100} = PD^{100}P^{-1} = \begin{bmatrix} 1 & 1 \\ 2 & 1 \end{bmatrix} \begin{bmatrix} 2^{100} & 0 \\ 0 & 3^{100} \end{bmatrix} \begin{bmatrix} -1 & 1 \\ 2 & -1 \end{bmatrix} = \begin{bmatrix} 2(3^{100}) - 2^{100} & 2^{100} - 3^{100} \\ 2(3^{100}) - 2^{101} & 2^{101} - 3^{100} \end{bmatrix}.$$
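Here is a Python check of this example. The eigenvalues are recovered numerically, and the closed form for $A^{100}$ is then verified in exact integer arithmetic, since $3^{100}$ is far too large for floating point to track exactly (the small `mat_mult` helper is ours, written just for this check):

```python
import numpy as np

A = np.array([[4.0, -1.0],
              [2.0,  1.0]])

# numpy recovers the eigenvalues 2 and 3 (possibly in a different order).
eigenvalues, eigenvectors = np.linalg.eig(A)
print(np.sort(eigenvalues))                # [2. 3.]

# Exact integer check of the closed form for A^100,
# using Python's arbitrary-precision ints instead of floats.
def mat_mult(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A_int = [[4, -1], [2, 1]]
power = [[1, 0], [0, 1]]                   # identity matrix
for _ in range(100):
    power = mat_mult(power, A_int)

closed_form = [[2 * 3**100 - 2**100, 2**100 - 3**100],
               [2 * 3**100 - 2**101, 2**101 - 3**100]]
print(power == closed_form)                # True
```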

Example 1.6. Suppose
$$A = \begin{bmatrix} 4 & 2 & 2 \\ 2 & 4 & 2 \\ 2 & 2 & 4 \end{bmatrix}.$$
First, we find the eigenvalues of $A$ by calculating and factoring its characteristic polynomial. Along the way, we'll use a couple of row and column operations to simplify the cofactor expansion (first subtracting row 3 from row 2, then adding column 2 to column 3; neither operation changes the determinant):
$$\det(\lambda I - A) = \begin{vmatrix} \lambda - 4 & -2 & -2 \\ -2 & \lambda - 4 & -2 \\ -2 & -2 & \lambda - 4 \end{vmatrix} = \begin{vmatrix} \lambda - 4 & -2 & -2 \\ 0 & \lambda - 2 & -(\lambda - 2) \\ -2 & -2 & \lambda - 4 \end{vmatrix} = \begin{vmatrix} \lambda - 4 & -2 & -4 \\ 0 & \lambda - 2 & 0 \\ -2 & -2 & \lambda - 6 \end{vmatrix}.$$
Expanding along the second row:
$$\det(\lambda I - A) = (\lambda - 2)\left[(\lambda - 4)(\lambda - 6) - (-4)(-2)\right] = (\lambda - 2)\left[\lambda^2 - 10\lambda + 24 - 8\right] = (\lambda - 2)\left[\lambda^2 - 10\lambda + 16\right] = (\lambda - 2)(\lambda - 8)(\lambda - 2) = (\lambda - 2)^2(\lambda - 8).$$
Therefore, $A$ has two eigenvalues,
$$\lambda_1 = 2 \quad\text{and}\quad \lambda_2 = 8,$$
one with algebraic multiplicity 2 and the other with multiplicity 1. Next, find the corresponding eigenspaces.

$\lambda_1 = 2$:
$$2I - A = \begin{bmatrix} -2 & -2 & -2 \\ -2 & -2 & -2 \\ -2 & -2 & -2 \end{bmatrix} \sim \begin{bmatrix} 1 & 1 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}.$$
So the general solutions to $(2I - A)\vec{x} = \vec{0}$ are
$$x_1 = -x_2 - x_3, \quad x_2 = s, \quad x_3 = t.$$

Therefore, the eigenspace corresponding to the eigenvalue 2 is the set of vectors
$$s \begin{bmatrix} -1 \\ 1 \\ 0 \end{bmatrix} + t \begin{bmatrix} -1 \\ 0 \\ 1 \end{bmatrix}.$$
This eigenspace is 2-dimensional, and a basis for it is
$$\begin{bmatrix} -1 \\ 1 \\ 0 \end{bmatrix} \quad\text{and}\quad \begin{bmatrix} -1 \\ 0 \\ 1 \end{bmatrix}.$$
These will take up two columns in the matrix $P$ when we form it.

$\lambda_2 = 8$:
$$8I - A = \begin{bmatrix} 4 & -2 & -2 \\ -2 & 4 & -2 \\ -2 & -2 & 4 \end{bmatrix} \sim \begin{bmatrix} 1 & 0 & -1 \\ 0 & 1 & -1 \\ 0 & 0 & 0 \end{bmatrix}.$$
So the general solutions to $(8I - A)\vec{x} = \vec{0}$ are
$$x_1 = x_3, \quad x_2 = x_3, \quad x_3 = t.$$
Therefore, the eigenspace corresponding to the eigenvalue 8 is the set of vectors
$$t \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix},$$
which is 1-dimensional. A basis vector for it is $\begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}$.

Now, with all of these parts, we can assemble them into a diagonalization. First, the diagonal matrix
$$D = \begin{bmatrix} 2 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 8 \end{bmatrix}$$

consisting of the eigenvalues of $A$. Then, the invertible matrix
$$P = \begin{bmatrix} -1 & -1 & 1 \\ 1 & 0 & 1 \\ 0 & 1 & 1 \end{bmatrix}$$
consisting of the basis vectors for the two eigenspaces (NOTE: the eigenvectors are in the same columns of $P$ as their matching eigenvalues in $D$!). The inverse of $P$ (which you can calculate using the row reduction method) is
$$P^{-1} = \frac{1}{3}\begin{bmatrix} -1 & 2 & -1 \\ -1 & -1 & 2 \\ 1 & 1 & 1 \end{bmatrix}.$$
Then, $A$ is similar to $D$ because
$$\begin{bmatrix} 4 & 2 & 2 \\ 2 & 4 & 2 \\ 2 & 2 & 4 \end{bmatrix} = \begin{bmatrix} -1 & -1 & 1 \\ 1 & 0 & 1 \\ 0 & 1 & 1 \end{bmatrix} \begin{bmatrix} 2 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 8 \end{bmatrix} \cdot \frac{1}{3}\begin{bmatrix} -1 & 2 & -1 \\ -1 & -1 & 2 \\ 1 & 1 & 1 \end{bmatrix}.$$

Remark 1.7. In the above example, we could have used matrices $D$ and $P$ different from the ones above in a few different ways:

- The columns of $D$ and $P$ could be in a different order. For example, we could use
$$D = \begin{bmatrix} 2 & 0 & 0 \\ 0 & 8 & 0 \\ 0 & 0 & 2 \end{bmatrix} \quad\text{and}\quad P = \begin{bmatrix} -1 & 1 & -1 \\ 1 & 1 & 0 \\ 0 & 1 & 1 \end{bmatrix}$$
instead. The only requirement is that corresponding eigenvalues and eigenvectors must appear in the same columns of their respective matrices. So, for example, using
$$D = \begin{bmatrix} 2 & 0 & 0 \\ 0 & 8 & 0 \\ 0 & 0 & 2 \end{bmatrix}$$
without changing the order of the columns in $P$ would not successfully diagonalize $A$.
- For a given eigenvalue, you can use any basis for the corresponding eigenspace as columns in $P$.
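A quick numpy sketch confirms the diagonalization in Example 1.6. Note that the second check uses $AP = PD$, which avoids computing $P^{-1}$ entirely (this reappears as step 5 of the procedure given below):

```python
import numpy as np

A = np.array([[4.0, 2.0, 2.0],
              [2.0, 4.0, 2.0],
              [2.0, 2.0, 4.0]])
D = np.diag([2.0, 2.0, 8.0])
P = np.array([[-1.0, -1.0, 1.0],
              [ 1.0,  0.0, 1.0],
              [ 0.0,  1.0, 1.0]])

# Verify A = P D P^(-1) ...
print(np.allclose(A, P @ D @ np.linalg.inv(P)))   # True

# ... or equivalently AP = PD, which needs no inverse at all.
print(np.allclose(A @ P, P @ D))                  # True
```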

An important thing to notice about the last example is that we would not have been able to complete the diagonalization if the eigenspace for 2 had only been 1-dimensional (remember that the dimension of the eigenspace is also called the geometric multiplicity). If that had been the case, we would have found ourselves one column short when forming the matrix $P$, and there would have been no remedy for this. So:

A given matrix is diagonalizable if and only if the geometric multiplicity of each eigenvalue is equal to its algebraic multiplicity. (It will always be the case that the geometric multiplicity is less than or equal to the algebraic multiplicity.) Equivalently, an $n \times n$ matrix is diagonalizable if and only if the sum of the dimensions of its eigenspaces is $n$.

Remark 1.8 (Some Warnings).

- Diagonalizable is not equivalent to invertible! It is possible to find matrices that are diagonalizable but not invertible, and invertible matrices that are not diagonalizable.
- Similarity is not the same as row equivalence! Row equivalence means there is a sequence of row operations that will take you from one matrix $A$ to another matrix $B$. Similarity means there is an invertible matrix $P$ such that $A = PBP^{-1}$.

Example 1.9. If
$$A = \begin{bmatrix} 3 & 1 & 0 \\ 0 & 3 & 0 \\ 0 & 0 & 2 \end{bmatrix},$$
then we can see immediately that the eigenvalues of $A$ are 3 and 2, since $A$ is triangular. What might stop us from diagonalizing $A$ is the eigenspace for 3 being less than 2-dimensional, and this does turn out to be what happens:
$$3I - A = \begin{bmatrix} 0 & -1 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix}.$$

Therefore, the solutions to $(3I - A)\vec{x} = \vec{0}$ are
$$x_1 = t, \quad x_2 = 0, \quad x_3 = 0.$$
Therefore, the eigenspace for 3 is 1-dimensional, with basis vector
$$\begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}.$$
The eigenspace for 2 will be 1-dimensional (it can't be more than 1-dimensional, nor less than 1-dimensional), so there are insufficient vectors to build $P$; thus $A$ is not diagonalizable.

Remark 1.10. A given scalar $\lambda$ is an eigenvalue if and only if $(\lambda I - A)\vec{x} = \vec{0}$ has nontrivial solutions. If there are nontrivial solutions, the solution set involves at least one free variable, so the eigenspace for $\lambda$ is at least 1-dimensional. This is why the geometric multiplicity of every eigenvalue is at least 1.

There is at least one circumstance where we can be sure that a matrix will be diagonalizable.

Theorem 1.11. An $n \times n$ matrix with $n$ distinct eigenvalues is diagonalizable.

In this case, each eigenvalue has algebraic multiplicity 1, and each eigenvalue must have geometric multiplicity at least 1, so algebraic multiplicity = geometric multiplicity and the matrix must be diagonalizable. Therefore, the only instances we need to worry about are eigenvalues with algebraic multiplicity greater than 1.

Procedure for Diagonalizing a Matrix. Let $A$ be $n \times n$.

1. Find the eigenvalues of $A$ by solving the characteristic equation $\det(\lambda I - A) = 0$.

2. Find the eigenspace for each eigenvalue:

Row reduce [λi A 0]. Write the solution in vector parametric form: x = t v + + t k v k Then, the eigenspace is Span{ v,..., v k } and v,..., v k is a basis for it. If the dimension of the eigenspace is less than the algebraic multiplicity of the eigenvalue, then A is not diagonalizable. If the geometric multiplicity of every eigenvalue is equal to its algebraic multiplicity, then A is diagonalizable proceed to the next step. 3. Construct P using the eigenvectors found in the previous step as columns, P = [ v v 2 ]. 4. Construct the diagonal matrix D from the eigenvalues of A, λ i is the eigenvalue of v i. D = λ λ 2,... 5. The diagonalization can be verified by checking that A = P DP or AP = P D (note that doing the latter doesn t require you to compute P!). Definition.2. If A is an n n matrix, then a basis of R n consisting of eigenvectors of A is called an eigenvector basis for A. Theorem.3 (The Diagonalization Theorem).. An n n matrix A is diagonalizable if and only if A has n linearly independent eigenvectors; equivalently, if there is an eigenvector basis for A. 2. A = P DP where D is a diagonal matrix if and only if the columns of P form an eigenvector basis for A and the entries of D are the eigenvalues of A.

Remark 1.14. The importance of the theorem is that it says that the method for diagonalizing a matrix that we've seen this class is in fact the only method there is. There isn't some other, completely different method that could be used to cover cases where the eigenvector/eigenvalue method doesn't work.

Example 1.15. Is
$$A = \begin{bmatrix} 6 & 1 & 3 & 5 \\ 0 & 2 & 2 & 8 \\ 0 & 0 & 4 & 1 \\ 0 & 0 & 0 & 0 \end{bmatrix}$$
diagonalizable? Yes, which you can tell without going to the trouble of actually diagonalizing it. You can tell just because it is triangular, so it's possible to see immediately that it has four distinct eigenvalues (6, 2, 4, and 0), and Theorem 1.11 then applies. Note also that $A$ is an example of a matrix that is diagonalizable but not invertible (which you can see because 0 is one of its eigenvalues).

Next Class: One last look at some applications: difference equations, Markov chains, and maybe a few words about Google PageRank.