Eigenvalues. Matrices: Geometric Interpretation. Calculating Eigenvalues

Eigenvalues

Matrices: Geometric Interpretation

Start with a vector of length 2, for example, $x = (1, 2)$. This, among other things, gives the coordinates of a point in the plane. Take a $2 \times 2$ matrix, for example,

$$A = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}.$$

Then $Ax = (1, -2)$. That is, multiplication by the matrix has created a new set of coordinates. (More generally, if $A$ is an $n \times n$ square matrix and $x$ is a vector of length $n$, then $Ax$ is also a vector of length $n$.) A matrix can therefore be thought of as an operator moving points around the plane. The matrix $A$ above represents a reflection in the $x$-axis. Other matrices represent other geometric operations such as stretches, rotations and so on. Eigenvalues (aka characteristic values) and eigenvectors (aka characteristic vectors) enable these operations to be characterised relatively compactly.

An eigenvector of a matrix is a vector which is left unchanged in direction (but not necessarily in magnitude) under the transformation defined by that matrix. For example, the vector $(1, 0)$ is an eigenvector for $A$ above, because $A(1, 0) = (1, 0)$: $A$ represents a reflection in the $x$-axis, and so points on the $x$-axis like $(1, 0)$ do not move. If

$$B = \begin{pmatrix} 2 & 0 \\ 0 & -1 \end{pmatrix}$$

then $B(1, 0) = (2, 0)$. $B$ again represents a reflection in the $x$-axis, but with some stretching. So $(1, 0)$ is also an eigenvector for $B$: it gets stretched to $(2, 0)$, but all that has happened to it is to be multiplied by 2. This factor by which the eigenvector is multiplied is the eigenvalue associated with that eigenvector.

Calculating Eigenvalues

Formally, an eigenvector of a square matrix $A$ is a non-zero vector $x$ that satisfies the equation

$$Ax = \lambda x \qquad (1)$$

where $\lambda$ is a scalar, the associated eigenvalue. Every $n \times n$ matrix has $n$ eigenvalues (counting multiplicity) and $n$ (sets of) eigenvectors. This latter qualification is because eigenvectors are not unique: if $x$ is an eigenvector corresponding to an eigenvalue $\lambda$, then, for example, $3x$, $x/2$ and $-2x$ are all eigenvectors for that eigenvalue as well. For example, for $A$ above, all vectors of the form $(a, 0)$ are unchanged under the operation defined by $A$.
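As a quick numerical check of this geometric picture, here is a minimal sketch (assuming NumPy is available; the matrices and vectors are the ones from the text):

```python
import numpy as np

# A reflects points in the x-axis, as in the text.
A = np.array([[1.0, 0.0],
              [0.0, -1.0]])
x = np.array([1.0, 2.0])
print(A @ x)   # [ 1. -2.]  -- the point (1, 2) is mapped to (1, -2)

# (1, 0) lies on the x-axis, so the reflection leaves it fixed:
# an eigenvector with eigenvalue 1.
v = np.array([1.0, 0.0])
print(A @ v)   # [1. 0.]

# B reflects and stretches; (1, 0) is again an eigenvector, now with eigenvalue 2.
B = np.array([[2.0, 0.0],
              [0.0, -1.0]])
print(B @ v)   # [2. 0.]
```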

Starting with the eigenvalue equation (1), subtract $\lambda x$ from both sides to obtain

$$Ax - \lambda x = 0, \quad \text{i.e.} \quad (A - \lambda I)x = 0 \qquad (2)$$

where $I$ is the identity matrix. Matrix theory says that if $Bx = 0$ for some matrix $B$ and some non-zero vector $x$, then $B$ is a singular matrix; that is, its determinant is zero. Therefore the matrix

$$A - \lambda I = \begin{pmatrix} a_{11} - \lambda & a_{12} \\ a_{21} & a_{22} - \lambda \end{pmatrix} \qquad (3)$$

has a zero determinant. That is,

$$(a_{11} - \lambda)(a_{22} - \lambda) - a_{12} a_{21} = 0 \qquad (4)$$

which is a quadratic equation in $\lambda$. This equation has two solutions, which are the eigenvalues of the matrix. For example, if

$$A = \begin{pmatrix} 1 & 2 \\ 2 & -1 \end{pmatrix} \qquad (5)$$

then we have $(1 - \lambda)(-1 - \lambda) - 4 = 0$, or $\lambda^2 - 5 = 0$, which has solutions $\lambda = \pm\sqrt{5}$.

Useful Facts

Note that if the matrix is diagonal ($a_{12} = a_{21} = 0$) or triangular (either $a_{12}$ or $a_{21}$ is zero), then the above reduces to $(a_{11} - \lambda)(a_{22} - \lambda) = 0$. This equation has two clear solutions, $\lambda = a_{11}$ and $\lambda = a_{22}$. That is, the eigenvalues are the diagonal elements. The same principle applies to all $n \times n$ matrices:

1. If a matrix is DIAGONAL,
2. or if a matrix is TRIANGULAR,

then the eigenvalues are just the diagonal elements.

Two other important facts:

1. The sum of the eigenvalues of a matrix is equal to the sum of its diagonal elements, which is called the trace of the matrix.
2. The product of the eigenvalues of a matrix is equal to the determinant of the matrix.
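To illustrate, a short sketch (assuming SymPy is available) that builds the characteristic polynomial of the matrix in (5) exactly as in equation (4), solves it, and checks the trace and determinant facts:

```python
import sympy as sp

lam = sp.symbols('lambda')
A = sp.Matrix([[1, 2],
               [2, -1]])

# Characteristic polynomial det(A - lambda*I), as in equation (4).
p = (A - lam * sp.eye(2)).det()
print(sp.expand(p))          # lambda**2 - 5
print(sp.solve(p, lam))      # [-sqrt(5), sqrt(5)]

# SymPy's built-in routine agrees.
print(A.eigenvals())         # {sqrt(5): 1, -sqrt(5): 1}

# Trace = sum of eigenvalues, determinant = product of eigenvalues.
print(A.trace(), A.det())    # 0, -5
```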

In the example (5) above, the trace is $0 = \sqrt{5} + (-\sqrt{5})$ and the determinant is $-5 = \sqrt{5} \times (-\sqrt{5})$. The second point implies: IF A MATRIX HAS EVEN ONE ZERO EIGENVALUE, IT IS SINGULAR!

Diagonalisation and Normal Forms

Suppose $A$ and $P$ are $n \times n$ matrices; then note that $A$ and (if $P^{-1}$ exists) $P^{-1}AP$ have the same eigenvalues. One says that a matrix $A$ is diagonalisable if there exists a matrix $P$ such that $P^{-1}AP = D$, where $D$ is a diagonal matrix. Note that the eigenvalues of $D$ are just its diagonal elements. But as $A$ and $D$ have the same eigenvalues, if $A$ is diagonalisable, then $D$ has $A$'s eigenvalues along its diagonal.

Symmetric Matrices: We have the following result.

Theorem 1 An $n \times n$ symmetric matrix $A$

1. has $n$ real eigenvalues $\lambda_1, \ldots, \lambda_n$;
2. has $n$ orthogonal eigenvectors (i.e. $x_i \cdot x_j = 0$ for $j \neq i$);
3. admits a matrix $P$, whose columns are eigenvectors of $A$, such that $D = P^{-1}AP$.

Example: the matrix

$$A = \begin{pmatrix} 1 & 2 \\ 2 & 1 \end{pmatrix} \qquad (6)$$

has eigenvalues $(3, -1)$ and corresponding eigenvectors $(1, 1)$ and $(-1, 1)$ respectively. Thus,

$$P = \begin{pmatrix} 1 & -1 \\ 1 & 1 \end{pmatrix}, \quad P^{-1} = \begin{pmatrix} 1 & 1 \\ -1 & 1 \end{pmatrix} \Big/ 2, \quad P^{-1}AP = \begin{pmatrix} 3 & 0 \\ 0 & -1 \end{pmatrix}. \qquad (7)$$
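The diagonalisation in (7) can be reproduced numerically; a minimal sketch, assuming NumPy:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 1.0]])

# eigh is for symmetric matrices: real eigenvalues, orthonormal eigenvectors.
eigvals, P = np.linalg.eigh(A)
print(eigvals)                                  # [-1.  3.]

# Columns of P are eigenvectors, so P^{-1} A P is diagonal, as in (7)
# (up to the ordering and normalisation of the eigenvectors).
print(np.round(np.linalg.inv(P) @ A @ P, 10))   # diag(-1, 3)

# For symmetric A the eigenvectors are orthogonal: P' P = I.
print(np.round(P.T @ P, 10))                    # identity
```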

Non-Symmetric Matrices: We have the following result.

Theorem 2 (Jordan) For any $n \times n$ matrix $A$, there exists a matrix $P$ such that $J = P^{-1}AP$ ($J$ is the Jordan normal form), where $J = D + N$, $D$ is a diagonal matrix with the eigenvalues of $A$, and $N$ is nilpotent (i.e. $N^k = 0$ for some positive integer $k$).

Example: the matrix

$$A = \begin{pmatrix} 4 & 1 \\ -1 & 6 \end{pmatrix}$$

has repeated eigenvalues $(5, 5)$ and only one independent eigenvector, $(1, 1)$. Nonetheless, we have

$$P = \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix}, \quad P^{-1} = \begin{pmatrix} 1 & 0 \\ -1 & 1 \end{pmatrix}, \quad P^{-1}AP = \begin{pmatrix} 5 & 1 \\ 0 & 5 \end{pmatrix} = \begin{pmatrix} 5 & 0 \\ 0 & 5 \end{pmatrix} + \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}.$$

NB here our matrix $N$ is such that $N^2 = 0$.

Complex Eigenvalues

Asymmetric matrices may have complex eigenvalues. That is, they may have eigenvalues of the form $\alpha + \beta i$, where $\alpha, \beta$ are real numbers but $i$ is the imaginary number defined as $\sqrt{-1}$. For example, the matrix

$$\begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$$

represents geometrically a rotation of $90°$ anticlockwise. It has a zero trace but its determinant is 1. So $\lambda_1 + \lambda_2 = 0$ and $\lambda_1 \lambda_2 = 1$. Combining these two equations, you can obtain $\lambda_1^2 = -1$; that is, the two eigenvalues are equal to $\pm\sqrt{-1} = \pm i$, where $i$ represents the square root of $-1$. This illustrates several points about complex eigenvalues.

1. Complex eigenvalues are associated with circular and cyclical motion.
2. Complex eigenvalues come in pairs: if $\alpha + \beta i$ is an eigenvalue of a matrix, then $\alpha - \beta i$ is also.
3. Note that the sum of such a pair (i.e. $\alpha + \beta i + \alpha - \beta i = 2\alpha$) and the product of such a pair (i.e. $(\alpha + \beta i)(\alpha - \beta i) = \alpha^2 + \beta^2$) are both real numbers, not complex. It is still true that the sum of complex eigenvalues equals the trace of the matrix and that the product of the eigenvalues equals the determinant.
4. Eigenvectors associated with complex eigenvalues are themselves complex.
5. The Jordan Theorem (Theorem 2) still applies.
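Both the Jordan example and the rotation matrix are easy to check symbolically; a sketch assuming SymPy, whose jordan_form returns $P$ and $J$ with $A = PJP^{-1}$:

```python
import sympy as sp

A = sp.Matrix([[4, 1],
               [-1, 6]])

# Jordan decomposition: A = P * J * P**(-1), with J = D + N as in Theorem 2.
P, J = A.jordan_form()
print(J)                  # Matrix([[5, 1], [0, 5]])

N = J - sp.diag(5, 5)
print(N**2)               # Matrix([[0, 0], [0, 0]]) -- N is nilpotent

# The 90-degree rotation matrix has purely imaginary eigenvalues +/- i.
R = sp.Matrix([[0, -1],
               [1, 0]])
print(R.eigenvals())      # {-I: 1, I: 1}
```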

Example: the matrix

$$A = \begin{pmatrix} 2 & -1 \\ 1 & 3 \end{pmatrix}$$

has eigenvalues $(5 \pm \sqrt{3}i)/2$ and corresponding eigenvectors $(-3 + (5 + \sqrt{3}i)/2, 1)$ and $(-3 + (5 - \sqrt{3}i)/2, 1)$. These are not orthogonal, but

$$P = \begin{pmatrix} -3 + (5 + \sqrt{3}i)/2 & -3 + (5 - \sqrt{3}i)/2 \\ 1 & 1 \end{pmatrix}, \quad P^{-1} = \begin{pmatrix} -i/\sqrt{3} & 1/2 - i\sqrt{3}/6 \\ i/\sqrt{3} & 1/2 + i\sqrt{3}/6 \end{pmatrix}$$

and

$$P^{-1}AP = \begin{pmatrix} (5 + \sqrt{3}i)/2 & 0 \\ 0 & (5 - \sqrt{3}i)/2 \end{pmatrix}$$

so $A$ is diagonalisable.

Application: Eigenvalues and Quadratic Forms

Remember that a matrix $A$ is called positive definite if

$$x' A x > 0 \qquad (8)$$

for all non-zero vectors $x$. There is a link between this and the eigenvalues of the matrix. Take the eigenvalue equation (1), where $x$ is an eigenvector, and premultiply both sides by $x'$ to obtain

$$x' A x = \lambda x' x. \qquad (9)$$

Now if $A$ is positive definite, the left-hand side of this equation is positive. And $x'x = \sum x_i^2 > 0$, so $\lambda$ must be positive. We have just proved: if a matrix is positive (negative) definite, all its eigenvalues are positive (negative).

If a matrix is symmetric, we can add the following: if a symmetric matrix has all its eigenvalues positive (negative), it is positive (negative) definite.

Proof: If a matrix is symmetric it has $n$ orthogonal eigenvectors $z_1, \ldots, z_n$, and these eigenvectors form a basis for $\mathbb{R}^n$. That is, any vector $x \in \mathbb{R}^n$ can be written $x = \sum_{i=1}^n a_i z_i$ for suitable constants $a_1, \ldots, a_n$. We thus have

$$x' A x = \Big(\sum_i a_i z_i\Big)' A \Big(\sum_i a_i z_i\Big) = \sum_i a_i^2 \lambda_i z_i' z_i$$

which is strictly positive (negative) if all eigenvalues are positive (negative).

If a matrix is not symmetric, this does not apply. For example, the matrix

$$A = \begin{pmatrix} 1 & 101 \\ 0 & 2 \end{pmatrix}$$

is triangular, so its eigenvalues are 1 and 2, but if $x = (1, -1)$ we have $x' A x = -98 < 0$, so that $A$ is not positive definite.
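A small sketch (NumPy assumed) checking both sides of this on the examples above: the eigenvalue signs for the symmetric matrix in (6), and the failure of the eigenvalue test for the asymmetric triangular matrix:

```python
import numpy as np

# Symmetric matrix (6): eigenvalues 3 and -1, so it is neither positive
# nor negative definite; x' A x changes sign with x.
A = np.array([[1.0, 2.0],
              [2.0, 1.0]])
print(np.linalg.eigvalsh(A))   # [-1.  3.]

# Asymmetric counterexample: eigenvalues 1 and 2 are both positive, yet
# the quadratic form is negative at x = (1, -1).
B = np.array([[1.0, 101.0],
              [0.0, 2.0]])
x = np.array([1.0, -1.0])
print(np.linalg.eigvals(B))    # [1. 2.]
print(x @ B @ x)               # -98.0
```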

Application: Difference Equations

Let us look at linear difference equations in $n$ variables. Time is discrete, $t = 0, 1, 2, \ldots$, and we write the matrix difference equation as

$$x_{t+1} = A x_t, \quad \text{so that} \quad x_t = A^t x_0 \qquad (10)$$

for some $n \times n$ matrix $A$. First suppose that $A$ is diagonal with diagonal elements $a_1, \ldots, a_n$. Then

$$x_{it} = a_i^t x_{i0}. \qquad (11)$$

That is, if the matrix $A$ is diagonal we have an exact solution to the difference equation that is easily calculable.

Suppose the matrix $A$ is not diagonal but is diagonalisable, so that $P^{-1}AP = D$. Then define $y_t = P^{-1} x_t$ for every $t$, so that $x_t = P y_t$. Then (10) becomes

$$P y_{t+1} = A P y_t \qquad (12)$$

or

$$y_{t+1} = P^{-1} A P y_t = D y_t. \qquad (13)$$

That is, we have constructed a diagonal dynamic system. From (11), we know that this has the solution $y_{it} = \lambda_i^t y_{i0}$, where $\lambda_i$ is the $i$th eigenvalue of $A$ (remember $A$ and $D$ have the same eigenvalues). We then convert this back and get a solution for $x_t$:

$$x_t = P y_t = P \begin{pmatrix} \lambda_1^t y_{10} \\ \vdots \\ \lambda_n^t y_{n0} \end{pmatrix} \qquad (14)$$

or $x_{it} = \sum_{j=1}^n k_j \lambda_j^t$, where the $k_j$ are constants determined by $P$ and by the initial values of $x$.
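As a sketch (NumPy assumed), here is a brute-force iteration of (10) for the matrix in (6), compared against the eigenvalue solution (14):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 1.0]])     # the matrix from (6); eigenvalues 3 and -1
x0 = np.array([1.0, 0.0])

# Brute-force iteration of x_{t+1} = A x_t for 10 steps.
x = x0.copy()
for _ in range(10):
    x = A @ x

# Eigenvalue solution (14): x_t = P D^t P^{-1} x_0.
eigvals, P = np.linalg.eig(A)
xt = P @ np.diag(eigvals**10) @ np.linalg.inv(P) @ x0

print(x)    # [29525. 29524.]
print(xt)   # the same, up to rounding
```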

Suppose the matrix $A$ is neither diagonal nor diagonalisable, but by the Jordan Theorem we have $P^{-1}AP = D + N$. Then define $y_t = P^{-1} x_t$ for every $t$, so that $x_t = P y_t$. Then (10) becomes

$$y_{t+1} = P^{-1} A P y_t = (D + N) y_t. \qquad (15)$$

That is, we have constructed an almost diagonal dynamic system. How does this work? Well,

$$y_t = (D + N)^t y_0 = (D^t + t D^{t-1} N + \cdots + N^t) y_0$$

(the binomial expansion is valid here because $D$ and $N$ in the Jordan form commute). Now, remember $N$ is nilpotent, so that in the above equation the higher powers of $N$ are zero. For example (and this is typical), $N^t = 0$ for $t \geq 2$. Then it would be the case that

$$x_{it} = \sum_j k_j \lambda_j^t + t \sum_j m_j \lambda_j^{t-1} \qquad (16)$$

where the $k_j$ and $m_j$ are constants.

So, under the Jordan Theorem, any solution to the general difference equation (10) can be written something like (16), that is, in powers of the eigenvalues of $A$. If $|\lambda_i| < 1$, then $\lim_{t\to\infty} \lambda_i^t = 0$; if $|\lambda_i| > 1$, then $\lim_{t\to\infty} |\lambda_i^t| = \infty$. Thus, we have the following.

Theorem 3 Assume $A - I$ is non-singular (that is, no eigenvalue of $A$ equals 1). Then the linear difference equation (10) has a unique fixed point $\bar{x} = 0$. If the eigenvalues of $A$ satisfy $|\lambda_i| < 1$ for $i = 1, \ldots, n$, then all solutions of (10) from any initial conditions converge to 0.

Application: Differential Equations

Consider a linear differential equation of the form

$$\dot{x} = A x \qquad (17)$$

where $x \in \mathbb{R}^n$ and $A$ is an $n \times n$ matrix. First, suppose $A$ is diagonal. Then it is relatively easy to see that the differential equation has solutions of the form $x_i(t) = k_i e^{\lambda_i t}$ for $i = 1, \ldots, n$, where the $\lambda_i$ are the diagonal elements (equivalently, the eigenvalues) of $A$, and the $k_i$ are constants to be determined by initial conditions.

Suppose again that the matrix $A$ is not diagonal but is diagonalisable, so that $P^{-1}AP = D$. Then define $y(t) = P^{-1} x(t)$ for every $t$, so that $x(t) = P y(t)$. Since $\dot{y} = P^{-1} \dot{x}$, equation (17) becomes

$$\dot{y} = P^{-1} A x = P^{-1} A P y = D y. \qquad (18)$$

That is, we have constructed a diagonal dynamic system. We know that this has the solution $y_i(t) = e^{\lambda_i t} y_i(0)$, where $\lambda_i$ is the $i$th eigenvalue of $A$ (remember $A$ and $D$ have the same eigenvalues). We then convert this back and get a solution for $x(t)$, so that $x_i(t) = \sum_{j=1}^n k_j e^{\lambda_j t}$, where the $k_j$ are constants determined by $P$ and by the initial values of $x$.
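A sketch (assuming NumPy and SciPy) comparing the matrix-exponential solution $x(t) = e^{At}x(0)$ of (17) with the eigenvalue formula just derived:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[1.0, 2.0],
              [2.0, 1.0]])     # matrix (6); eigenvalues 3 and -1
x0 = np.array([1.0, 0.0])
t = 0.5

# Exact solution of x' = Ax: x(t) = expm(A t) x(0).
print(expm(A * t) @ x0)

# Eigenvalue solution: x(t) = P diag(e^{lambda_i t}) P^{-1} x(0).
eigvals, P = np.linalg.eig(A)
print(P @ np.diag(np.exp(eigvals * t)) @ np.linalg.inv(P) @ x0)
# The two printed vectors agree.
```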

Suppose $n = 2$ and the eigenvalues are complex. Then the solution becomes

$$x_i(t) = k_1 e^{(\alpha + \beta i)t} + k_2 e^{(\alpha - \beta i)t} = e^{\alpha t}\big(k_1 e^{\beta i t} + k_2 e^{-\beta i t}\big).$$

Euler's formula is

$$e^{(\alpha + \beta i)t} = e^{\alpha t}(\cos \beta t + i \sin \beta t),$$

a special case of which is

$$e^{i\pi} = -1 \qquad (19)$$

which gives you $e$, $\pi$ and $i$ all in the same equation! So the solution can be written as

$$x_i(t) = e^{\alpha t}(k_3 \cos \beta t + k_4 \sin \beta t)$$

where again $k_3$ and $k_4$ are constants to be determined by initial conditions.

Points worth knowing:

1. The functions $\sin$ and $\cos$ give rise to undulating waves.
2. Since the whole solution is multiplied by $e^{\alpha t}$, the solution explodes if $\alpha > 0$ and converges if $\alpha < 0$. If $\alpha = 0$, the solution has oscillations of constant size and neither converges nor diverges. That is, it is the real part $\alpha$ of the complex eigenvalue $\alpha + \beta i$ that determines stability.

So, whether eigenvalues are real or complex, it is whether the real part is positive or negative that determines whether the solution to the differential equation converges or diverges. This leads to the following general result.

Theorem 4 Assume $A$ is non-singular. Then the linear differential equation (17) has a unique fixed point $\bar{x} = 0$. If all the eigenvalues of $A$ have negative real part, then all solutions of (17) from any initial conditions converge to 0. More generally, if there are no eigenvalues with zero real part, the stable manifold of the fixed point 0 has the same dimension as the number of eigenvalues with negative real part.

Example: the matrix given in (6) was

$$A = \begin{pmatrix} 1 & 2 \\ 2 & 1 \end{pmatrix}$$

and had eigenvalues $(3, -1)$ and corresponding eigenvectors $(1, 1)$ and $(-1, 1)$ respectively. So solutions to the differential equation $\dot{x} = Ax$ will be of the form $x_i(t) = k_{i1} e^{3t} + k_{i2} e^{-t}$ and in general will diverge from $(0, 0)$. However, if the initial conditions correspond to the eigenvector of the negative eigenvalue, e.g. $(-x_1, x_1)$, solutions will be of the form $x_i(t) = k_{i2} e^{-t}$ and will converge to zero. NB if, for example, $x(0) = (-1, 1)$, then from (7) we have $y(0) = P^{-1}(-1, 1) = (0, 1)$; that is, no weight is placed on the term in $e^{3t}$, the unstable eigenvalue.
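To make the NB concrete, a last sketch (NumPy assumed) computing the eigen-coordinates $y(0) = P^{-1} x(0)$ for a start on the stable eigenvector versus a generic start:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 1.0]])
P = np.array([[1.0, -1.0],
              [1.0, 1.0]])      # eigenvectors (1,1) and (-1,1) as columns, as in (7)

# Starting on the stable eigenvector puts zero weight on the unstable e^{3t} term.
x0 = np.array([-1.0, 1.0])
print(np.linalg.solve(P, x0))                      # y(0) = [0. 1.]

# A generic start loads both terms, so the solution diverges along (1, 1).
print(np.linalg.solve(P, np.array([1.0, 0.0])))    # y(0) = [ 0.5 -0.5]
```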