Eigenvalues and Eigenvectors


MAT 67L, Laboratory III

Instructions

(1) Read this document.
(2) The questions labeled Experiments are not graded and should not be turned in. They are designed to give you more practice with MATLAB before you start working on the programming problems, and they reinforce mathematical ideas.
(3) A subset of the questions labeled Problems are graded. You need to turn in MATLAB M-files for each problem via Smartsite. You must read the Getting Started guide to learn what file names you must use. Incorrect naming of your files will result in zero credit. Every problem should be placed in its own M-file.
(4) Don't forget to have fun!

Eigenvalues

One of the best ways to study a linear transformation f : V → V is to find its eigenvalues and eigenvectors, or in other words to solve the equation

    f(v) = λv,  v ≠ 0.

In this MATLAB exercise we will lead you through some of the neat things you can do with eigenvalues and eigenvectors. First, however, you need to teach MATLAB to compute eigenvectors and eigenvalues. Let's briefly recall the steps you would have to perform by hand. As an example, take the matrix of a linear transformation f : R^3 → R^3 to be (in the canonical basis)

    M := [ 1  2  3
           2  4  5
           3  5  6 ]

The steps to compute eigenvalues and eigenvectors are:

(1) Calculate the characteristic polynomial P(λ) = det(M - λI).
(2) Compute the roots λ_i of P(λ). These are the eigenvalues.
(3) For each eigenvalue λ_i, calculate ker(M - λ_i I). The vectors of any basis for ker(M - λ_i I) are the eigenvectors corresponding to λ_i.
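The three hand steps can also be sketched outside MATLAB. The following is a rough pure-Python illustration for the symmetric 2x2 case only (the function names and the example matrix are made up for illustration, not part of the lab):

```python
# A rough pure-Python illustration (not part of the lab's MATLAB code) of the
# three hand steps for a symmetric 2x2 matrix A = [[a, b], [b, c]].
import math

def eig2x2_sym(a, b, c):
    """Steps 1 and 2: characteristic polynomial and its roots.

    P(lam) = lam^2 - (a + c)*lam + (a*c - b*b); solve with the quadratic
    formula.  For a symmetric matrix the discriminant (a - c)^2 + 4*b^2
    is never negative, so both roots are real.
    """
    tr, det = a + c, a * c - b * b
    disc = math.sqrt(tr * tr - 4 * det)
    return (tr - disc) / 2, (tr + disc) / 2

def eigvec2x2(a, b, lam):
    """Step 3: a kernel vector of A - lam*I, i.e. an eigenvector for lam."""
    # The first row of A - lam*I is (a - lam, b); the vector (b, lam - a)
    # is orthogonal to it, hence spans the kernel (assuming b != 0).
    return (b, lam - a) if b != 0 else (1.0, 0.0)

lo, hi = eig2x2_sym(2.0, 1.0, 2.0)   # A = [[2, 1], [1, 2]]
print(lo, hi)                        # 1.0 3.0
print(eigvec2x2(2.0, 1.0, hi))       # (1.0, 1.0)
```

For the 3x3 matrix M of the lab, the cubic characteristic polynomial makes a hand computation like this impractical, which is exactly why the next section teaches MATLAB to do it.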

Computing Eigenvalues and Eigenvectors with MATLAB. As a warning, there are two very different ways to compute the characteristic polynomial of a matrix in MATLAB.

    >> M = [1 2 3 ; 2 4 5 ; 3 5 6]
    M =
         1     2     3
         2     4     5
         3     5     6
    >> % compute the characteristic polynomial
    >> poly(M)
        1.0000  -11.0000   -4.0000    1.0000
    >> % convert M into a symbolic matrix using the Symbolic Math Toolkit
    >> sym B;
    >> B = sym(M)
    B =
    [ 1, 2, 3 ]
    [ 2, 4, 5 ]
    [ 3, 5, 6 ]
    >> % notice how MATLAB printed out the symbolic matrix differently than M
    >> % compute the characteristic polynomial again
    >> poly(B)
    x^3 - 11*x^2 - 4*x + 1

Next, let us compute the eigenvalues by finding the roots of the characteristic polynomial, and then the eigenvectors of M.

    >> eigenvalues = roots(poly(M))
    eigenvalues =
       11.3448
       -0.5157
        0.1709
    >> for i = 1:length(eigenvalues)
    >>     shiftedM = M - eigenvalues(i)*eye(size(M));
    >>     rref(shiftedM)
    >> end

    ans =
        1.0000         0   -0.4450
             0    1.0000   -0.8019
             0         0         0
    ans =
        1.0000         0    1.2470
             0    1.0000    0.5550
             0         0         0
    ans =
        1.0000         0   -1.8019
             0    1.0000    2.2470
             0         0         0

It is now easy to see that the kernels of these three matrices are spanned by (0.4450, 0.8019, 1)^T, (-1.2470, -0.5550, 1)^T, and (1.8019, -2.2470, 1)^T. To double check this, run the following in MATLAB and you should get vectors very close to zero.

    v1 = [0.4450, 0.8019, 1]';
    v2 = [-1.2470, -0.5550, 1]';
    v3 = [1.8019, -2.2470, 1]';
    % these should be close to zero
    M*v1 - eigenvalues(1)*v1
    M*v2 - eigenvalues(2)*v2
    M*v3 - eigenvalues(3)*v3

Symmetric Matrices. The matrix M in the above example is symmetric, i.e. M = M^T (remember that the transpose is the mirror reflection about the diagonal). It turns out (we will learn why in Chapter 11 of the book) that symmetric matrices can always be diagonalized. Recall that to diagonalize a matrix M you need to find a basis of eigenvectors and arrange these (or better said, their components) as the columns of a change of basis matrix P. Then it follows that MP = PD, where D is a diagonal matrix of eigenvalues. Thus D = P^(-1) M P.

Diagonalization with MATLAB. Above, we computed the eigenvalues and eigenvectors the long and hard way, but MATLAB has a function that will make your life easy:

    >> [P, D] = eig(M)

    P =
        0.7370    0.5910    0.3280
        0.3280   -0.7370    0.5910
       -0.5910    0.3280    0.7370
    D =
       -0.5157         0         0
             0    0.1709         0
             0         0   11.3448

The i-th column of P is an eigenvector corresponding to the eigenvalue in the i-th column of D. Notice how MATLAB changed the order of the eigenvectors from the previous way I wrote them down. Also, MATLAB normalized each eigenvector and changed the sign of v2. This is OK because eigenvectors that differ by a non-zero scalar are considered equivalent.

    >> v1/norm(v1)
        0.3280
        0.5910
        0.7370
    >> v2/norm(v2)
       -0.7370
       -0.3280
        0.5910
    >> v3/norm(v3)
        0.5910
       -0.7370
        0.3280

For fun, let us check that MP = PD:

    >> M*P
       -0.3801    0.1010    3.7209
       -0.1692   -0.1260    6.7049
        0.3048    0.0561    8.3609

    >> P*D
       -0.3801    0.1010    3.7209
       -0.1692   -0.1260    6.7049
        0.3048    0.0561    8.3609

Orthogonal Matrices. Symmetric matrices have another very nice property: their eigenvectors (for different eigenvalues) are orthogonal. Hence we can rescale them to unit length to form an orthonormal basis (for any eigenspace of dimension higher than one, we can use the Gram-Schmidt procedure to produce an orthonormal basis). Then, when we form the change of basis matrix O out of these orthonormal basis vectors, we find O^T O = I, so that O is an orthogonal matrix. The diagonalization formula is now D = O^T M O.

Dot Products and Transposes with MATLAB. Taking the standard dot product of real or complex vectors in MATLAB is easy, since it can be done by multiplying two vectors together:

    >> % remove the variable i from memory so MATLAB will know i is the complex unit
    >> clear i
    >> a = [1; 3+i; 5+i]
    a =
       1.0000
       3.0000 + 1.0000i
       5.0000 + 1.0000i
    >> b = [1; 2; 3]
    b =
         1
         2
         3
    >> % a' will give the Hermitian transpose
    >> a'
       1.0000            3.0000 - 1.0000i   5.0000 - 1.0000i
    >> a'*b

    ans =
      22.0000 - 5.0000i

Because MATLAB already gave us normalized eigenvectors, in the notation of the last sections we have O = P. Let us check O^T O = I and D = O^T M O.

    >> P
    P =
        0.7370    0.5910    0.3280
        0.3280   -0.7370    0.5910
       -0.5910    0.3280    0.7370
    >> P*P'
        1.0000    0.0000         0
        0.0000    1.0000    0.0000
             0    0.0000    1.0000
    >> P'*M*P
       -0.5157    0.0000    0.0000
        0.0000    0.1709    0.0000
        0.0000    0.0000   11.3448
    >> D
    D =
       -0.5157         0         0
             0    0.1709         0
             0         0   11.3448

Functions of matrices. If you can diagonalize a matrix M, then it is possible to compute functions of the matrix f(M) by writing f(x) = 1 + x + (1/2!)x^2 + ... (a Taylor series) and using the fact that M^k = (P D P^(-1))^k = P D^k P^(-1), so that

    f(M) = P (I + D + (1/2!) D^2 + ...) P^(-1) = P f(D) P^(-1)

(so long as the series converges on the eigenvalues). Here f(D) is the diagonal matrix with f(λ_i) along the diagonal.

Functions of matrices with MATLAB. MATLAB can compute some functions of matrices, like e^M and sqrt(M), but care must be taken. For example, to compute e^M, the MATLAB function expm(M) must be used instead of exp(M), as the latter will simply return the matrix where each entry is replaced with e^(m_ij), which is not the same thing as taking the matrix exponential. Also, for some functions (like the square root of a matrix with a negative or complex eigenvalue) things like branch cuts must be considered.

    >> expm(M) - P*expm(D)*P'
    ans =
       1.0e-09 *
        0.0327    0.0546    0.0655
        0.0546    0.1055    0.1237
        0.0655    0.1164    0.1310
    >> sqrtm(M) - P*sqrtm(D)*P'

Exercises.
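As a warm-up before the graded problems, the identity f(M) = P f(D) P^(-1) from the previous section can be checked by hand in a few lines. This is a rough pure-Python sketch (not MATLAB, and not graded) using a small matrix whose eigenpairs are easy to write down:

```python
# Rough pure-Python warm-up (not graded, and outside the lab's MATLAB code):
# check M^5 = P * D^5 * P^(-1) for M = [[2, 1], [1, 2]], whose eigenpairs
# are lam = 1 with eigenvector (1, -1) and lam = 3 with eigenvector (1, 1).

def matmul(A, B):
    """Multiply two 2x2 matrices stored as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

M    = [[2.0, 1.0], [1.0, 2.0]]
P    = [[1.0, 1.0], [-1.0, 1.0]]           # eigenvectors as columns
Pinv = [[0.5, -0.5], [0.5, 0.5]]           # P^(-1), computed by hand
D5   = [[1.0 ** 5, 0.0], [0.0, 3.0 ** 5]]  # D^5: just power the diagonal

# Left side: M^5 by repeated multiplication.
M5 = M
for _ in range(4):
    M5 = matmul(M5, M)

# Right side: P * D^5 * P^(-1).
M5_diag = matmul(matmul(P, D5), Pinv)

print(M5)       # [[122.0, 121.0], [121.0, 122.0]]
print(M5_diag)  # [[122.0, 121.0], [121.0, 122.0]]
```

The point of the identity is that the right side needs only two matrix multiplications no matter how large the power, since powering D means powering its diagonal entries.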

Problem 1. Given a diagonalizable matrix A, let the list (λ_1, ..., λ_n) be its eigenvalues. Write a function that computes the sum Σ_i λ_i and the product Π_i λ_i. The function signature is

    %Input
    %  A: n x n diagonalizable matrix
    %Output
    %  thesum : sum of eigenvalues
    %  theprod: product of eigenvalues
    function [thesum, theprod] = ev_sumandproduct(A)
        %code here
    end

You may NOT use MATLAB's eig function. If you do, you will get zero points. Hint: you may want to research what properties the determinant and trace functions have.

Problem 2. Given a diagonalizable matrix A, write a function that returns true if A is positive definite and false otherwise. The function signature is

    %Input
    %  A: n x n diagonalizable matrix
    %Output
    %  isposdef: true if positive definite, false otherwise
    function [isposdef] = ev_ispositivedefinite(A)
        %code here
    end

A matrix A is called positive definite if for all x ≠ 0, x^T A x > 0. Hint: use the fact that A is diagonalizable and the eigenvector matrix is invertible. Another hint: the value true in MATLAB is typed like this: true.

Problem 3. Consider a Markov chain on the graph below. The nodes are called states, and the edge weights represent the probability of moving from one state to another.

[Figure: three states 1, 2, 3 joined by weighted directed edges; the edge weights are the entries of the matrix P below.]

We can encode the probability of moving from one state to another in a matrix

    P = [ 1/3  1/3  1/3
          1/4  1/2  1/4
          1/6  1/3  1/2 ]

where P_ij is the probability of moving from state i to state j. If you start at state 1, then you move to a new state randomly according to the probabilities on node 1. Using matrix notation, e_1 P is a probability vector giving which state you will be in after one iteration, where e_1 = (1, 0, 0) encodes that you are starting from state 1. It can be shown that e_1 P^2 is the probability vector of where you will be if you start at state 1 and then move randomly twice. This generalizes to e_1 P^n.

Now let's say you do not know which state you are going to start in; instead, you know you will start at state 1 with probability 1/2 or at state 2 with probability 1/2. Then after one step, your probability distribution of where you will be is (1/2, 1/2, 0)P = (7/24, 5/12, 7/24) (so there is a 7/24 chance you will be in state 1). After n steps, your probability distribution is (1/2, 1/2, 0)P^n. An interesting question is whether there exists an initial probability distribution, a row vector π, such that your probability distribution does not change as you walk: πP = π. Write a function to find such a starting probability distribution. The function signature is

    %Input
    %  P: n x n diagonalizable Markov matrix
    %Output
    %  pd: row vector such that pd*P = pd and the sum of the elements in pd is 1.
    %Example
    %  For the P defined above, pd = [6/25, 2/5, 9/25]
    function [pd] = ev_markovsteadystate(P)
        %code here
    end
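The distribution-evolution claim above (that (1/2, 1/2, 0)P^n is the distribution after n steps) can be simulated directly. This is a rough pure-Python sketch, separate from the MATLAB function you are asked to write; it only iterates the chain and does not solve πP = π with eigenvectors:

```python
# A rough pure-Python sketch (not the graded MATLAB solution) of the claim
# above: repeated right-multiplication of a distribution row vector by P
# gives the distribution after each step, and it settles toward pd.
from fractions import Fraction as F

P = [[F(1, 3), F(1, 3), F(1, 3)],
     [F(1, 4), F(1, 2), F(1, 4)],
     [F(1, 6), F(1, 3), F(1, 2)]]

def step(pi, P):
    """One step of the chain: the row vector pi * P."""
    return [sum(pi[i] * P[i][j] for i in range(3)) for j in range(3)]

pi = [F(1, 2), F(1, 2), F(0)]
print(step(pi, P))   # [Fraction(7, 24), Fraction(5, 12), Fraction(7, 24)]

# Walking many steps approaches the steady state pd = [6/25, 2/5, 9/25]:
for _ in range(50):
    pi = step(pi, P)
print([float(x) for x in pi])   # approximately [0.24, 0.4, 0.36]
```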

Hint: use eig on a special matrix.

Problem 4. Let us breed rabbits, starting with one pair of rabbits. Every month, each pair of adult rabbits produces one pair of offspring. After one month, the offspring are adults and will start reproducing. We neglect all kinds of effects like death, and we always count pairs of rabbits. To model this dynamical system, we use the space of rabbit vectors r = (j, a)^T ∈ R^2. The first component of the rabbit vector gives the number of juvenile pairs, the second component the number of adult pairs of rabbits. Translating the standard Fibonacci recurrence into matrix notation, we get

    r_{n+1} = ( j_{n+1} )  =  ( 0  1 ) ( j_n )  =:  A r_n
              ( a_{n+1} )     ( 1  1 ) ( a_n )

The standard Fibonacci sequence uses r_0 = (1, 0)^T, but we could start r_0 = (a, b)^T at any point in R^2.

Let M be a matrix with 3 rows and k columns. Write a function that takes M and returns a k-vector f, where f_i is the m_{3,i}-th number of the Fibonacci-type sequence started at r_0 = (m_{1,i}, m_{2,i})^T. Compute the f vector using the above matrix recursion (in a smart way). For example, if

    M = [ 1  1   3
          0  0   5
          1  2  10 ]

then the f vector is (1, 2, 987), because 1 = sum(A(1, 0)^T), 2 = sum(A^2 (1, 0)^T), and 987 = sum(A^10 (3, 5)^T).

    %Input
    %  M: description is above.
    %Output
    %  f: description is above.
    function [f] = ev_fastfibonacci(M)
        %code here
    end

Remark: M will contain on the order of 100000 columns. The integers in the 3rd row of M will contain many digits. The submatrix M(1:2, :) may not be integer.

Remark: Your function will be timed, and the amount of memory it uses will be recorded. If it takes too long or uses too much memory, I will assume your function is simply computing A^n naively and you will get zero credit. You must think of a way to find A^n quickly.