Eigenvalues, Eigenvectors and the Jordan Form

EE/ME 701: Advanced Linear Systems
Part 4b: Eigenvalues and Eigenvectors (Revised: Oct 22, 2016)

Contents

1 Introduction
  1.1 Review of basic facts about eigenvectors and eigenvalues
    1.1.1 Looking at eigenvalues and eigenvectors in relation to the null space of (A - λ_k I)
    1.1.2 Eigenvector example
    1.1.3 Bay section 4.1, A-Invariant Subspaces
2 Properties of the Eigensystem
  2.1 Finding eigenvalues and eigenvectors
  2.2 Interpreting complex eigenvalues / eigenvectors
    2.2.1 Example: 3D Rotation
  2.3 The eigenvectors diagonalize A
  2.4 The eigensystem of symmetric (Hermitian) matrices
  2.5 Using the eigensystem to find the matrix exponential
    2.5.1 Defining the matrix exponential
    2.5.2 The matrix exponential provides the homogeneous solution to the matrix differential equation
    2.5.3 Using the eigensystem to solve for e^{At}
    2.5.4 Conclusion on e^{At}
3 Repeated eigenvalues
  3.1 Analysis of the structure of the eigensystem of a matrix

4 The Jordan form
  4.1 Nature of the Jordan form
    4.1.1 An example where the Jordan form arises
  4.2 Constructing the Jordan Form
    4.2.1 Regular and Generalized Eigenvectors of A
    4.2.2 First Jordan form example
    4.2.3 More on Jordan blocks
    4.2.4 The Jordan form, a second example
  4.3 One more twist, freedom to choose the regular eigenvector
    4.3.1 Example where regular E-vecs do not lie in the column space of (A - λ_k I)
  4.4 Summary of the Jordan Form
    4.4.1 Why Matlab does not have a numeric Jordan command
5 Conclusions
6 Review questions and skills

1 Introduction

We have seen the basic case of eigenvalues and eigenvectors: eigenvectors are special vectors that are not rotated by the action of matrix A; they satisfy

    A v_k = λ_k v_k                                        (1)

1.1 Review of basic facts about eigenvectors and eigenvalues

- Only square matrices have eigensystems.
- Every n x n matrix has n eigenvalues, λ_1 ... λ_n.
- The eigenvector satisfies the relationship A v_k = λ_k v_k, which leads to the eigenvector being a solution to

      (A - λ_k I) v_k = 0                                  (2)

  or, said another way, the eigenvector is a vector in the null space of the matrix (A - λ_k I).

Notes:

1. Any vector in the null space of (A - λ_k I) is an eigenvector. For example, if the null space is 2-dimensional, then any vector in this 2D subspace is an eigenvector.

2. Since the determinant of any matrix with a non-trivial null space is zero, we have

       det(A - λ_k I) = 0,   k = 1..n                      (3)

   which gives the characteristic equation of matrix A.

1.1.1 Looking at eigenvalues and eigenvectors in relation to the null space of (A - λ_k I)

Starting from

    (A - λ_k I) v_k = 0                                    (4)

- The eigenvalues are the values of λ_k such that (A - λ_k I) has a non-trivial null space.
- The eigenvectors are any vectors in that null space!

Since

    det(A - λ_k I) = 0                                     (5)

we know the null space is at least 1-dimensional.

Theorem: for each distinct eigenvalue, there is at least one independent eigenvector.

Proof: The proof follows directly from Eqns (4) and (5).

1.1.2 Eigenvector example

Consider matrix A:

    A = [ 1  2  3
          4  5  4
          7  8  9 ]

>> [V, U] = eig(A)
V =
   -0.2465   -0.8720    0.3105
   -0.4404    0.3799   -0.7979
   -0.8633    0.3086    0.5167
U =
   15.0799         0         0
         0   -0.9329         0
         0         0    0.8530

So the eigenvectors are:

    v_1 = [ -0.2465      v_2 = [ -0.8720      v_3 = [  0.3105
            -0.4404               0.3799              -0.7979
            -0.8633 ],            0.3086 ],            0.5167 ]

and the eigenvalues are λ_1 = 15.0799, λ_2 = -0.9329, λ_3 = 0.8530.

We can scale eigenvectors by any non-zero constant: if v_1 is an eigenvector, α v_1 is an eigenvector. For example, scaling each eigenvector so that the first element is 1.0:

>> V = [ V(:,1)/V(1,1), V(:,2)/V(1,2), V(:,3)/V(1,3) ]
V =
    1.0000    1.0000    1.0000
    1.7866   -0.4356   -2.5697
    3.5022   -0.3539    1.6642
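The same computation can be sketched in NumPy (an illustrative analogue of the Matlab transcript above, not part of the original notes): `numpy.linalg.eig` returns the eigenvalues and the eigenvectors as columns, and each pair can be checked against A v = λ v.

```python
import numpy as np

# NumPy analogue of the Matlab eig() example above: compute the
# eigensystem of A and verify that each pair satisfies A v = lambda v.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 4.0],
              [7.0, 8.0, 9.0]])

lam, V = np.linalg.eig(A)   # lam: eigenvalues, V: eigenvectors as columns

for k in range(3):
    residual = A @ V[:, k] - lam[k] * V[:, k]
    assert np.allclose(residual, 0.0, atol=1e-10)

# Scaling an eigenvector by any non-zero constant gives another eigenvector
v1 = V[:, 0] / V[0, 0]
assert np.allclose(A @ v1, lam[0] * v1)
```

Note that the ordering of the eigenvalues returned by a numerical solver is not guaranteed, so comparisons against the values printed above should sort first.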

1.1.3 Bay section 4.1, A-Invariant Subspaces

Consider a linear mapping from R^n to R^n:

    A : R^n -> R^n                                         (6)

A subspace W is an invariant subspace with respect to operator A if the action of A on any vector in W returns a vector in W. This can be stated as

    w in W  ==>  A w in W                                  (7)

The eigenvectors of a square matrix A are a suitable choice of basis vectors for a subspace. The subspaces so given are A-invariant subspaces.

The A-invariance property of subspaces spanned by eigenvectors is essential for our study of state-space systems. Since linear system dynamics can be written

    x(t_1) = A x(t_0)                                      (8)

the A-invariance property says that if the state of a system lies within a subspace spanned by a subset of eigenvectors at a moment in time t_0, the state of the system will remain in that subspace.

This property will permit us to break down the response of a system into modes, corresponding to eigenvalues and vector subspaces of state space given by the corresponding eigenvectors.

2 Properties of the Eigensystem

First we'll cover properties of the eigenvalues and regular eigenvectors.

Example (Bay Example 4.1): Electric Fields

In an isotropic dielectric medium, the electric field follows the relation

    D = ε E

where E is the electric field vector, D is the electric flux density (also called the displacement vector) and ε is the dielectric constant. Some materials, however, are anisotropic, governed by

    [ D_1 ]   [ ε_11  ε_12  ε_13 ] [ E_1 ]
    [ D_2 ] = [ ε_21  ε_22  ε_23 ] [ E_2 ]
    [ D_3 ]   [ ε_31  ε_32  ε_33 ] [ E_3 ]

Find the directions, if any, in which the E-field and flux density are collinear.

Solution: For the E-field and flux density to be collinear they must satisfy

    D = λ E

which is to say, the sought directions are the eigenvectors of the dielectric tensor.

2.1 Finding eigenvalues and eigenvectors

For (A - λ I) v = 0 to have a non-trivial solution, (A - λ I) must have a non-trivial null space. This is equivalent to saying

    det(A - λ I) = 0                                       (9)

We have seen that Eqn (9) gives an n-th order polynomial in λ.

- This is more important for understanding than as a solution method.
- Use: >> [V, U] = eig(A)

There is no closed-form solution for the eigenvectors and eigenvalues of a matrix larger than 4 x 4, and so eigen-solvers employ iterative numerical techniques.

Matlab's eig() command is actually a front-end for a family of numerical techniques. Routine eig(A) starts by analyzing properties of matrix A, then selects the preferred routine.

A good example is symmetric matrices. The eigensystem of a symmetric matrix has the properties that all eigenvalues are real and that the eigenvectors are real and orthogonal. These special properties lead to a special eigensystem solver that applies only to symmetric matrices.

Details of eigenvalue / eigenvector algorithms are beyond the scope of EE/ME 701.

2.2 Interpreting complex eigenvalues / eigenvectors

- Real eigenvalues correspond to scaling the eigenvector.
- Complex eigenvalues lead to complex eigenvectors, correspond to rotations, and come in complex conjugate pairs.

Recall that a complex number can be written

    λ_k = a + j b = M e^{j θ}                              (10)

Example: Consider the basic rotation matrix (writing C_θ = cos θ, S_θ = sin θ):

    R = [ C_θ  -S_θ
          S_θ   C_θ ]                                      (11)

Forming (λ I - R):

    (λ I - R) = [ λ - C_θ     S_θ
                   -S_θ     λ - C_θ ]                      (12)

which gives the characteristic equation

    det(λ I - R) = λ^2 - 2 C_θ λ + C_θ^2 + S_θ^2 = λ^2 - 2 C_θ λ + 1 = 0    (13)

which solves to give

    λ = ( 2 C_θ ± sqrt(4 C_θ^2 - 4) ) / 2
      = ( 2 C_θ ± sqrt(-4 S_θ^2) ) / 2
      = C_θ ± j S_θ                                        (14)

The eigenvalues of R are a complex conjugate pair given by the ± solutions of the quadratic equation:

    |λ_k| = 1.0,   angle(λ_k) = ±θ
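Eqn (14) is easy to confirm numerically (a sketch, not from the notes): the eigenvalues of a 2D rotation matrix lie on the unit circle at angles ±θ, i.e. they are e^{±jθ}.

```python
import numpy as np

# Numeric check of Eqn (14): the eigenvalues of a 2-D rotation matrix
# are the conjugate pair cos(theta) +/- j sin(theta) = e^{+/- j theta}.
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

lam = np.linalg.eigvals(R)

# Both eigenvalues lie on the unit circle, with angles +/- theta
assert np.allclose(np.abs(lam), 1.0)
assert np.allclose(sorted(np.angle(lam)), [-theta, theta])
```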

2.2.1 Example: 3D Rotation

We've seen the rotation matrix

    c_tr = [  C_κ C_ϕ    C_ω S_κ + S_ω S_ϕ C_κ    S_ω S_κ - C_ω S_ϕ C_κ
             -S_κ C_ϕ    C_ω C_κ - S_ω S_ϕ S_κ    S_ω C_κ + C_ω S_ϕ S_κ      (15)
              S_ϕ       -S_ω C_ϕ                   C_ω C_ϕ              ]

For example, the rotation matrix

    R = [  2/3  -1/3   2/3
           2/3   2/3  -1/3                                 (16)
          -1/3   2/3   2/3 ]

corresponds to ω = 135°, κ = 135°, ϕ = 19.47°.

Like the general 3D rotation matrix, this matrix has one real eigenvalue and a complex conjugate pair:

>> [V, U] = eig(R)
V =
  -0.5774            -0.5774             0.5774
   0.2887 + 0.5000i   0.2887 - 0.5000i   0.5774
   0.2887 - 0.5000i   0.2887 + 0.5000i   0.5774
U =
   0.5000 + 0.8660i        0                  0
        0             0.5000 - 0.8660i        0
        0                  0             1.0000

The only real R-invariant subspace is given by the real eigenvector:

    v_3 = [ 0.577
            0.577
            0.577 ]

v_3 is the axis of rotation!

Figure 1: Illustration of the action of rotation matrix R on a vector w to give vector Rw.

Every vector w not parallel to v_3 contains a projection onto the subspaces spanned by the complex eigenvectors, and will be rotated by R.

To solve differential equations such as x'(t) = R x(t), we will be interested in solutions of the form

    x(t) = V e^{U t} V^{-1} x(0)                           (17)

in which case a complex eigenvalue and eigenvector pair combines to form a single 2-D R-invariant subspace spanned by

    w_1 = (1/2) (v_1 + v_2)       (w_1 from real part)          (18)
    w_2 = j (1/2) (v_1 - v_2)     (w_2 from imaginary part)     (19)

with solutions to the differential equation written

    x(t) = a_1 e^{α t} cos(ω t + θ) w_1 + a_2 e^{α t} sin(ω t + θ) w_2    (20)

The cos and sin terms in Eqn (20) show rotation in the 2D R-invariant subspace spanned by w_1 and w_2.

(We treat matrix differential equations in up-coming sections.)

2.3 The eigenvectors diagonalize A

Given a matrix A with eigensystem V, U: when A has a complete set of eigenvectors, A is related to its eigenvectors and eigenvalues by

    A = V U V^{-1}                                         (21)

Example (from above):

    A = [ 1  2  3
          4  5  4
          7  8  9 ]

>> [V, U] = eig(A)
V =
   -0.2465   -0.8720    0.3105
   -0.4404    0.3799   -0.7979
   -0.8633    0.3086    0.5167
U =
   15.0799         0         0
         0   -0.9329         0
         0         0    0.8530

>> V*U*inv(V)
ans =
    1.0000    2.0000    3.0000
    4.0000    5.0000    4.0000
    7.0000    8.0000    9.0000

Writing

    A x = V U V^{-1} x                                     (22)

Eqn (22) can be understood as:

    V^{-1} x          representation of x on the V basis vectors
    U (V^{-1} x)      scaling of the elements of V^{-1} x by the eigenvalues
    V (U V^{-1} x)    transformation back onto the standard basis

(The difference with the SVD is that the columns of V are not orthonormal; we get no vector-by-vector decomposition.)
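The factorization (21) can be verified numerically (a NumPy sketch, not from the notes), rebuilding A from its eigenvalues and eigenvectors:

```python
import numpy as np

# Verify the eigen-decomposition A = V U V^{-1} of Eqn (21),
# where U = diag(eigenvalues).
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 4.0],
              [7.0, 8.0, 9.0]])

lam, V = np.linalg.eig(A)
U = np.diag(lam)

# Reconstruct A from its eigensystem
A_rebuilt = V @ U @ np.linalg.inv(V)
assert np.allclose(A_rebuilt, A, atol=1e-10)
```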

2.4 The eigensystem of symmetric (Hermitian) matrices

The eigensystem of a Hermitian matrix Q (a symmetric matrix, if real) has special, and very helpful, properties.

Notation: use A* for the complex-conjugate transpose (equivalent to A' in Matlab).

Property 1: If A = A*, then for all complex vectors x, x* A x is real.

Proof: Define y = (x* A x)*. Applying the conjugate transpose to the product,

    y = (x* A x)* = x* A* x                                (23)

Since A = A*, y = x* A x, so y equals its own conjugate. For a number to equal its complex conjugate, it must be real.

Property 2: The eigenvalues of a Hermitian matrix must be real.

Proof: Suppose λ is an eigenvalue of Q, with v a corresponding eigenvector; then

    Q v = λ v                                              (24)

Now multiply on the left by v*:

    v* Q v = v* λ v = λ v* v = λ ||v||^2                   (25)

By Property 1, v* Q v must be real, and ||v||^2 must be real; therefore

    λ = (v* Q v) / ||v||^2                                 (26)

must be real.

Property 3: The eigenvectors of a Hermitian matrix, if they correspond to distinct eigenvalues, must be orthogonal.

Proof: Starting with the given information, eigenvalues λ_1 ≠ λ_2, and corresponding eigenvectors v_1 and v_2:

    Q v_1 = λ_1 v_1                                        (27)
    Q v_2 = λ_2 v_2                                        (28)

Forming the complex-conjugate transpose of Eqn (27),

    v_1* Q* = λ_1* v_1* = λ_1 v_1*                         (29)

where we can drop the complex conjugate on λ_1, because we know λ_1 is real. Now multiplying on the right by v_2 gives

    (v_1* Q*) v_2 = λ_1 v_1* v_2                           (30)

But, since Q* = Q, the multiplication also gives:

    v_1* (Q v_2) = v_1* λ_2 v_2 = λ_2 v_1* v_2             (31)

So we find

    λ_1 v_1* v_2 = λ_2 v_1* v_2                            (32)

If λ_1 ≠ λ_2, Eqn (32) is only possible if v_1* v_2 = 0, which is to say that v_1 and v_2 are orthogonal.

Property 4: A Hermitian matrix has a complete set of orthogonal eigenvectors.

Proof: Any square matrix Q has a Schur decomposition

    T = V^{-1} Q V                                         (33)

where V is an orthonormal matrix and T is an upper-triangular matrix. Since T is upper-triangular, T* will be lower-triangular. However, V is orthonormal and Q is Hermitian, so V* = V^{-1}, Q* = Q, and (V^{-1})* = (V*)* = V. So

    T* = V* Q* (V^{-1})* = V^{-1} Q V = T                  (34)

Since T* = T, T must be both upper-triangular and lower-triangular. For T to be both upper-triangular and lower-triangular, it must be diagonal.

Let U = T be this diagonal matrix. Multiplying Eqn (33) on the left by V and on the right by V^{-1} gives

    Q = V T V^{-1} = V U V^{-1}                            (35)

which is precisely the form of the eigen-decomposition of Q, where

- the diagonal matrix U holds the eigenvalues of Q, and
- the orthonormal matrix V holds the eigenvectors of Q.

Remarks:

- Since T = T*, the diagonal elements (the eigenvalues) must be real (see Property 2).
- When Q is real, V will be real.

Significance: When a matrix Q is Hermitian, it has a complete set of orthogonal eigenvectors. This is the property that assures that the SVD exists for any matrix A.
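Properties 2-4 can be spot-checked numerically (an illustrative sketch, not from the notes). NumPy's `eigh` is exactly the kind of Hermitian-specialized solver mentioned in Section 2.1:

```python
import numpy as np

# For a real symmetric matrix Q: real eigenvalues (Property 2),
# orthonormal eigenvectors (Properties 3-4), and Q = V U V'.
rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
Q = B + B.T                      # symmetric by construction

lam, V = np.linalg.eigh(Q)       # solver specialized for Hermitian matrices

assert np.all(np.isreal(lam))                        # eigenvalues are real
assert np.allclose(V.T @ V, np.eye(4), atol=1e-10)   # orthonormal eigenvectors
assert np.allclose(V @ np.diag(lam) @ V.T, Q)        # eigen-decomposition of Q
```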

2.5 Using the eigensystem to find the matrix exponential

2.5.1 Defining the matrix exponential

Given a square matrix A, the matrix exponential is defined in a way analogous to the scalar exponential e^{at}:

    e^{At} = I + A t + (1/2!) A^2 t^2 + (1/3!) A^3 t^3 + (1/4!) A^4 t^4 + (1/5!) A^5 t^5 + ...    (36)

For example, with

    A = [  0.5  -0.3       t = 1.2
          -0.2   0.4 ],

>> T5 = eye(2) + A*t + (1/2)*A*A*t^2 + (1/6)*A*A*A*t^3 ...
        + (1/24)*A*A*A*A*t^4 + (1/120)*A*A*A*A*A*t^5
T5 =
    1.8980   -0.6267
   -0.4178    1.6891

>> B = expm(A*t)
B =
    1.8983   -0.6271
   -0.4180    1.6893

Note: Matlab function exp(A*t) computes the element-wise exponential. For

    A = [ a_11  a_12       >> B = exp(A*t) gives  B = [ e^{a_11 t}  e^{a_12 t}
          a_21  a_22 ],                                 e^{a_21 t}  e^{a_22 t} ]

Matlab function expm(A*t) evaluates the matrix exponential, Eqn (36).
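The truncated-series computation of T5 can be mirrored in NumPy (a sketch, not from the notes; the helper `exp_series` is a name introduced here for illustration):

```python
import numpy as np

# Evaluate the series of Eqn (36), first truncated at 5 terms
# (like the Matlab T5 above), then to near machine precision.
A = np.array([[ 0.5, -0.3],
              [-0.2,  0.4]])
t = 1.2

def exp_series(A, t, n_terms):
    """Partial sum I + At + ... + (1/n!) (At)^n of the matrix exponential."""
    total = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, n_terms + 1):
        term = term @ (A * t) / k   # (1/k!) (A t)^k, built recursively
        total = total + term
    return total

T5  = exp_series(A, t, 5)    # matches the Matlab T5 result above
eAt = exp_series(A, t, 30)   # effectively converged for this small A*t

# The 5-term series already agrees with the converged value to ~1e-3
assert np.max(np.abs(T5 - eAt)) < 1e-3
```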

2.5.2 The matrix exponential provides the homogeneous solution to the matrix differential equation

Consider the homogeneous matrix differential equation (no forcing function for now)

    (d/dt) x(t) = A x(t)                                   (37)

The solution is

    x(t) = e^{At} x_0                                      (38)

where x_0 is x(t) at t = 0 (the initial condition).

To show that Eqn (38) is the solution to (37), start with the definition of the matrix exponential

    e^{At} = I + A t + (1/2!) A^2 t^2 + (1/3!) A^3 t^3 + (1/4!) A^4 t^4 + (1/5!) A^5 t^5 + ...    (36, repeated)

and take the derivative w.r.t. time:

    (d/dt) e^{At} = 0 + A I + 2 (1/2!) A^2 t + 3 (1/3!) A^3 t^2 + 4 (1/4!) A^4 t^3 + 5 (1/5!) A^5 t^4 + ...    (39)

Adjusting the factorials for the integers 2, 3, ... from the differentiation, and factoring out one power of A:

    (d/dt) e^{At} = A I + A (A t) + A (1/2!) A^2 t^2 + A (1/3!) A^3 t^3 + A (1/4!) A^4 t^4 + ...
                  = A ( I + A t + (1/2!) A^2 t^2 + (1/3!) A^3 t^3 + (1/4!) A^4 t^4 + ... )
                  = A e^{At}                               (40)

So just as in the scalar case, where

    (d/dt) e^{at} = a e^{at}

for the matrix case we have

    (d/dt) e^{At} = A e^{At}

Choosing x(t) = e^{At} x_0, and plugging into differential equation (37), gives

    (d/dt) x(t) = (d/dt) e^{At} x_0 = A e^{At} x_0 = A x(t)    (41)

and so the matrix exponential gives the solution to the homogeneous matrix differential equation.

2.5.3 Using the eigensystem to solve for e^{At}

To solve for e^{At}, in principle one could evaluate the series expansion

    e^{At} = I + A t + (1/2!) A^2 t^2 + (1/3!) A^3 t^3 + (1/4!) A^4 t^4 + (1/5!) A^5 t^5 + ...    (36, repeated)

However, for large A or t the series will converge slowly, and for a general matrix A there is no simple expression for the elements of A^p.

For a diagonal matrix U, however, the powers U^2 ... U^p are easily expressed:

    U = [ λ_1   0        then  U^2 = [ λ_1^2   0            ...,  U^p = [ λ_1^p   0
           0   λ_2 ],                    0    λ_2^2 ],                     0     λ_2^p ]

Given a diagonal matrix U = diag(λ_1, λ_2, ..., λ_n), the matrix exponential is given by:

    e^{Ut} = I + U t + (1/2!) U^2 t^2 + (1/3!) U^3 t^3 + (1/4!) U^4 t^4 + ...
           = diag( 1 + λ_k t + (1/2!) λ_k^2 t^2 + ... ),   k = 1..n

Summing the terms on each diagonal entry,

    e^{Ut} = diag( 1 + λ_1 t + (1/2!) λ_1^2 t^2 + ...,  ...,  1 + λ_n t + (1/2!) λ_n^2 t^2 + ... )

and so

    e^{Ut} = [ e^{λ_1 t}
                         e^{λ_2 t}
                                   ...                     (42)
                                       e^{λ_n t} ]

The matrix exponential of a diagonal matrix is the diagonal matrix of element-wise exponentials.

Now consider again the matrix exponential of A:

    e^{At} = I + A t + (1/2!) A^2 t^2 + (1/3!) A^3 t^3 + ...

When V is full rank, expanding A with its eigen-decomposition, A = V U V^{-1}, and replacing I with V V^{-1}, gives:

    e^{At} = V V^{-1} + V U V^{-1} t + (1/2!) V U V^{-1} V U V^{-1} t^2 + (1/3!) V U V^{-1} V U V^{-1} V U V^{-1} t^3 + ...

Factoring out V on the left and V^{-1} on the right gives

    e^{At} = V ( I + U t + (1/2!) U V^{-1} V U t^2 + (1/3!) U V^{-1} V U V^{-1} V U t^3 + ... ) V^{-1}

The terms V^{-1} V cancel, and the inner term is e^{Ut}:

    e^{At} = V ( I + U t + (1/2!) U^2 t^2 + (1/3!) U^3 t^3 + ... ) V^{-1} = V e^{Ut} V^{-1}    (43)

Since U is diagonal, we can solve for e^{Ut} element-wise:

    e^{At} = V e^{Ut} V^{-1} = V [ e^{λ_1 t}
                                             e^{λ_2 t}
                                                       ...              (44)
                                                           e^{λ_n t} ] V^{-1}

When V is full rank, equation (44) solves for e^{At}, and is computed using n scalar exponentials.

Example:

    A = [  0.5  -0.3       t = 1.2
          -0.2   0.4 ],

>> [V, U] = eig(A)
V =
    0.8321    0.7071
   -0.5547    0.7071
U =
    0.7000         0
         0    0.2000

The matrix exponential e^{Ut}:

>> B = [ exp(U(1,1)*t)  0;  0  exp(U(2,2)*t) ]
B =
    2.3164         0
         0    1.2712

e^{At}:

>> C = V * B * inv(V)
C =
    1.8983   -0.6271
   -0.4180    1.6893

And directly computing:

>> D = expm(A*t)
D =
    1.8983   -0.6271
   -0.4180    1.6893

2.5.4 Conclusion on e^{At}

Since e^{At} is part of the solution to

    x'(t) = A x(t) + B u(t)

the eigenvectors and eigenvalues are needed to solve the matrix differential equation.

Note that a complete set of eigenvectors is required to form V^{-1} for Eqn (44). To find e^{At} we need a complete set of eigenvectors (this is where generalized eigenvectors will come in).

3 Repeated eigenvalues

Matrices can have repeated eigenvalues, for example:

>> A = [ 2 1; 0 2]
A =
     2     1
     0     2
>> [V,U] = eig(A)
V =
    1.0000   -1.0000
         0    0.0000
U =
     2     0
     0     2

When there are repeated eigenvalues:

1. We are assured to have at least 1 independent eigenvector.
2. There may be fewer independent eigenvectors than eigenvalues.

Definitions:

- The algebraic multiplicity of an eigenvalue is the number of times the eigenvalue is repeated.
- The geometric multiplicity is the number of independent eigenvectors corresponding to the eigenvalue: dim null (A - λ I).

Consider the example above (A = [2 1; 0 2]):
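The two multiplicities are easy to compute numerically (a sketch, not from the notes), using the rank-nullity relation dim null(A - λI) = n - rank(A - λI):

```python
import numpy as np

# For A = [[2,1],[0,2]] the eigenvalue 2 has algebraic multiplicity 2,
# but geometric multiplicity dim null(A - 2I) = 2 - rank(A - 2I) = 1,
# so one eigenvector is "missing".
A = np.array([[2.0, 1.0],
              [0.0, 2.0]])
lam = 2.0

M = A - lam * np.eye(2)
geometric_multiplicity = 2 - np.linalg.matrix_rank(M)

assert geometric_multiplicity == 1
```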

    Eigenvalue: 2.0
    Algebraic multiplicity: 2
    Geometric multiplicity: 1
    Number of missing eigenvectors: 1

Recall the eigen-decomposition of a matrix: A = V U V^{-1}. The eigen-decomposition only exists if V is invertible, that is, if there is a complete set of independent eigenvectors.

3.1 Analysis of the structure of the eigensystem of a matrix

Analysis of the eigensystem of a matrix proceeds by completing table 1.

1. Group the λ into k sets of repeated eigenvalues (one set for each unique value of λ). The number of eigenvalues in the k-th set is called the algebraic multiplicity, and is given by m_k.

2. Determine the number of independent eigenvectors corresponding to λ_k by evaluating q(A - λ_k I) = dim null(A - λ_k I). This is called the geometric multiplicity, and is given by g_k. If m_k ≥ 2, it is possible that there are fewer independent eigenvectors than eigenvalues:

       1 ≤ g_k ≤ m_k                                       (45)

3. If m_k > g_k for any k, the Jordan form and generalized eigenvectors are required.

    k | λ_k | m_k | g_k | # Needed Generalized Evecs, m_k - g_k
    --+-----+-----+-----+--------------------------------------
    1 |     |     |     |
    2 |     |     |     |
    . |     |     |     |

    Table 1: Analysis of the structure of the eigensystem of A.

Example:

>> A = [ 2 3 4 ; 0 2 1 ; 0 0 2 ]
A =
     2     3     4
     0     2     1
     0     0     2

Recall: for triangular and diagonal matrices, the eigenvalues are the diagonal elements.

>> [V, U] = eig(A)
V =
    1.0000   -1.0000    1.0000
         0    0.0000   -0.0000
         0         0    0.0000
U =
     2     0     0
     0     2     0
     0     0     2
>> RoundByRatCommand(V)
ans =
     1    -1     1
     0     0     0
     0     0     0

Find g_k:

    (A - λ_1 I) = (A - 2I) = [ 0  3  4
                               0  0  1                     (46)
                               0  0  0 ]

    dim null (A - λ_1 I) = g_1 = 1                         (47)

The analysis of the structure of the eigensystem of matrix A is seen in table 2. We see that there is one eigenvalue that is triply repeated. Since dim null (A - λ_k I) = 1, there is one eigenvector. The Jordan form will be required.

    k | λ_k | m_k | g_k | # Needed Generalized Evecs, m_k - g_k
    --+-----+-----+-----+--------------------------------------
    1 |  2  |  3  |  1  |  2

    Table 2: Analysis of the structure of the eigensystem of A.
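The Table 2 entries can be reproduced numerically (a sketch, not from the notes):

```python
import numpy as np

# Reproduce the Table 2 analysis for the triangular example A,
# computing m_k and g_k for the triply repeated eigenvalue 2.
A = np.array([[2.0, 3.0, 4.0],
              [0.0, 2.0, 1.0],
              [0.0, 0.0, 2.0]])

lam = 2.0
m = 3                                               # algebraic multiplicity: 2 appears 3 times
g = 3 - np.linalg.matrix_rank(A - lam * np.eye(3))  # geometric multiplicity, dim null(A - 2I)

assert (m, g) == (3, 1)
assert m - g == 2    # two generalized eigenvectors needed: Jordan form required
```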

4 The Jordan form

4.1 Nature of the Jordan form

Recall that the matrix exponential utilizes the eigen-decomposition, which has the form

    A = V U V^{-1}                                         (48)

But Eqn (48) is only valid when A has a complete set of eigenvectors. When A does not have a complete set of eigenvectors, we complete the set with generalized eigenvectors, giving the Jordan form decomposition of A:

    A = M J M^{-1}                                         (49)

where J is a block-diagonal matrix and M contains the regular eigenvectors of A, as well as one or more generalized eigenvectors, to make a complete set and assure that M^{-1} exists.

- When A has repeated eigenvalues it may have missing regular eigenvectors (g_k < m_k).
- Analysis of e^{At} then requires decomposing matrix A using the Jordan form.

4.1.1 An example where the Jordan form arises

With scalar differential equations, we know that equations with repeated roots give solutions of the form y(t) = c_1 e^{λ_1 t} + c_2 t e^{λ_1 t}. For example,

    y''(t) + 6 y'(t) + 9 y(t) = 0                          (50)

has the characteristic equation s^2 + 6s + 9 = 0, which has the roots s = {-3, -3}. The solution to Eqn (50) is:

    y(t) = c_1 e^{-3t} + c_2 t e^{-3t}                     (51)

But Eqn (44) for e^{At} has no terms of the form t e^{-3t}. And yet Eqn (50) is simply represented in state space with x'(t) = A x(t):

    x(t) = [ y'(t)       x'(t) = (d/dt) [ y'(t)   = [ -6  -9 ] [ y'(t)       (52)
             y(t) ],                       y(t) ]     1   0 ]    y(t) ]

Yet the solution to Eqn (52) is (as always for x'(t) = A x(t)) x(t) = e^{At} x(0). So how can e^{At} have a term of the form t e^{-3t}?

Consider e^{Jt} for this matrix J:

    J = [ λ  1
          0  λ ]                                           (53)

The expression for e^{Jt} is

    e^{Jt} = I + J t + (1/2!) J^2 t^2 + ... + (1/k!) J^k t^k + ...    (54)

           = [ 1 0 ]     [ λ 1 ]   t^2 [ λ^2  2λ  ]   t^3 [ λ^3  3λ^2 ]         t^k [ λ^k  k λ^{k-1} ]
             [ 0 1 ] + t [ 0 λ ] + --- [ 0    λ^2 ] + --- [ 0    λ^3  ] + ... + --- [ 0    λ^k      ] + ...    (55)
                                    2!                 3!                        k!

The (1,1) and (2,2) elements give rise to summations of the form

    1 + Σ_{k=1}^{∞} (1/k!) λ^k t^k = e^{λt}                (56)

and the (2,1) element of e^{Jt} is 0.

Now consider the summation for the (1,2) element, which is given as

    0 + t/1! + Σ_{k=2}^{∞} (1/k!) k λ^{k-1} t^k = t ( 1 + Σ_{k=2}^{∞} (1/(k-1)!) λ^{k-1} t^{k-1} )
                                                = t ( 1 + Σ_{k=1}^{∞} (1/k!) λ^k t^k ) = t e^{λt}

So if J has the form of Eqn (53), then

    e^{Jt} = expm( [ λ  1 ] t ) = [ e^{λt}  t e^{λt} ]     (57)
                   [ 0  λ ]       [ 0       e^{λt}  ]

By the argument of Eqns (53)-(57) above,

    expm( [ λ  1  0 ]     )   [ e^{λt}  t e^{λt}  (1/2) t^2 e^{λt} ]
          [ 0  λ  1 ] t     = [ 0       e^{λt}    t e^{λt}         ]    (58)
          [ 0  0  λ ]         [ 0       0         e^{λt}           ]

which gives terms of the form t^2 e^{λt}, and

    expm( [ λ  1  0  0 ]     )
          [ 0  λ  1  0 ] t                                 (59)
          [ 0  0  λ  1 ]
          [ 0  0  0  λ ]

gives terms of the form t^3 e^{λt}, etc.

For our specific example, A decomposes according to

    A = M J M^{-1}                                         (60)

with

    A = [ -6  -9       then  M = [ -0.949   -0.0316       J = [ -3   1
           1   0 ],                 0.316    0.0949 ],          0  -3 ]

And

    e^{At} = M e^{Jt} M^{-1} = M [ e^{-3t}  t e^{-3t} ] M^{-1}    (61)
                                 [ 0        e^{-3t}   ]

So the solution x(t) = e^{At} x(0) will have terms of the form e^{-3t} and t e^{-3t}, as needed!

4.2 Constructing the Jordan Form

Matrix A has been transformed to the Jordan form (sometimes called the Jordan canonical form) when

    J = M^{-1} A M                                         (62)

- The columns of M are regular eigenvectors and generalized eigenvectors.
- J is a block-diagonal matrix composed of m_k x m_k Jordan blocks along the main diagonal.

Each block on the diagonal of J is called a Jordan block. Eqns (53), (58) and (59) give examples of 2x2, 3x3 and 4x4 Jordan blocks.

4.2.1 Regular and Generalized Eigenvectors of A

The regular eigenvectors are the eigenvectors we have considered all along; they satisfy the relationship

    A v = λ_k v   or   (A - λ_k I) v = 0                   (63)

From Eqn (63), a set of independent regular eigenvectors is given by the null space of (A - λ_k I).

The generalized eigenvectors form chains starting with a regular eigenvector. The generalized eigenvectors satisfy the relationship

    A V^{l+1}_{k,j} = λ_k V^{l+1}_{k,j} + V^l_{k,j}                     (64)

Or, rearranging,

    (A − λ_k I) V^{l+1}_{k,j} = V^l_{k,j}                               (65)

where V^{l+1}_{k,j} is the next generalized eigenvector in a chain (see Bay Eqn (4.14)). Each chain of generalized eigenvectors is anchored by a regular eigenvector. In this notation,

    V^1_{k,j}                                                           (66)

is the first element of a chain; it is a regular eigenvector. It is the j-th regular eigenvector of the k-th distinct eigenvalue. The l = 1 designates that V^1_{k,j} is the first eigenvector in a chain, so it must be a regular eigenvector.

Eqn (65) is an example of a recursive relationship: it is an equation that is applied repeatedly to get all elements of the chain. It has the solution

    V^{l+1}_{k,j} = (A − λ_k I)^# V^l_{k,j}                             (67)

where (A − λ_k I)^# denotes the pseudo-inverse of (A − λ_k I).

The method presented here to determine the Jordan form is the bottom-up method presented in Bay, section 4.4.3.
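Eqns (65)-(67) can be exercised on a small defective matrix. The 2x2 example below is an assumption for illustration (λ = 3 repeated twice, with a single regular eigenvector), showing one application of the recursion:

```python
import numpy as np

# Illustrative defective matrix (not from the notes): eigenvalue 3, multiplicity 2.
A = np.array([[ 4.0, 1.0],
              [-1.0, 2.0]])
lam = 3.0
B = A - lam * np.eye(2)

_, s, Vh = np.linalg.svd(B)
v1 = Vh[-1]                       # regular eigenvector, anchors the chain
v2 = np.linalg.pinv(B) @ v1       # Eqn (67): next link of the chain

assert np.allclose(B @ v1, 0, atol=1e-10)    # (A - λI) v1 = 0, Eqn (63)
assert np.allclose(B @ v2, v1, atol=1e-10)   # (A - λI) v2 = v1, Eqn (65)
```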

4.2.2 First Jordan form example

Consider e^{At}, with A given as:

>> A = [ 3 3 3 ; -3 3 -3 ; 3 0 6 ]
A =
     3     3     3
    -3     3    -3
     3     0     6

First look at the eigenvalues

>> U = eig(A)
U =
    3.0000
    6.0000
    3.0000

%% This command rounds off values to nearby rational numbers,
%% which may be integers
>> U = RoundByRatCommand(U)
U =
     3
     6
     3

A has a repeated eigenvalue; we can make a table analyzing the structure of the eigensystem of A:

    k   λ_k   m_k   g_k      # Needed Gen. Evecs, m_k − g_k
    1    3     2    1 or 2   0 or 1
    2    6     1    1        0

Table 3: Analysis of the structure of the eigensystem of A.

Table 3 shows that A has two distinct eigenvalues, and we don't yet know if λ_1 has 1 or 2 independent eigenvectors.

Evaluate the geometric multiplicity of λ_1

>> lambda1=3; lambda2=6; I = eye(3);
>> v1 = null(A-lambda1*I); v1 = v1/v1(1)
v1 =
     1    %% Eigenvector, scaled so the
     1    %% first element is an integer
    -1

The geometric multiplicity is the dimension of the null space in which the eigenvectors lie. For λ_1, g_1 = 1. Putting this information into the table:

    k   λ_k   m_k   g_k   # Needed Generalized Evecs, m_k − g_k
    1    3     2     1    1
    2    6     1     1    0

Table 4: Analysis of the structure of the eigensystem of A.

The total number of eigenvectors (regular + generalized) needed for λ_k is m_k. The number of regular eigenvectors is g_k. The regular eigenvectors get the notation:

    V^1_{k,1}, V^1_{k,2}, ..., V^1_{k,g_k}

where j = 1 ... g_k. The number of needed generalized eigenvectors corresponding to λ_k is m_k − g_k.

In the example, we need 1 generalized eigenvector for the k = 1 eigenvalue. In this case for k = 1 we have only one regular eigenvector, so it must serve as the first element, or anchor, of the chain of generalized eigenvectors

    regular eigenvector:      V^1_{1,1} solves (A − λ_1 I) V^1_{1,1} = 0
    generalized eigenvector:  V^2_{1,1} solves (A − λ_1 I) V^2_{1,1} = V^1_{1,1}     (68)

In Matlab

>> lambda1=3; lambda2=6; I = eye(3);

%% Find the first regular eigenvector
>> V111 = null(A-lambda1*I); V111=V111/V111(1)
V111 =
    1.0000
    1.0000
   -1.0000

%% Find the generalized eigenvector by solving Eqn (68)
>> V112 = pinv(A-lambda1*I)*V111
V112 =
   -0.3333
    0.3333
    0.0000

%% Find the regular eigenvector for lambda2
>> V211 = null(A-lambda2*I); V211=V211/V211(2)
V211 =
         0
    1.0000
   -1.0000

Put the eigenvectors (regular and generalized) together in the M matrix. The regular and generalized eigenvectors of a chain must be put in order.

For each k and j, put the vectors V^l_{k,j} into M, starting with l = 1 and going to the end of the chain. Put in the chain corresponding to each regular eigenvector (j) for each distinct eigenvalue (k). Chains may have a length of 1. For the example,

    M = [ V^1_{1,1}  V^2_{1,1}  V^1_{2,1} ]

>> M = [V111, V112, V211]
M =
    1.0000   -0.3333         0
    1.0000    0.3333    1.0000
   -1.0000         0   -1.0000

J has 2 Jordan blocks

    J = [ 3  1 |  0 ]
        [ 0  3 |  0 ]
        [ ------+-- ]
        [ 0  0 |  6 ]

>> J = inv(M) * A * M
>> J = RoundByRatCommand(J)
J =
     3     1     0
     0     3     0
     0     0     6

and

    e^{At} = M e^{Jt} M⁻¹                                               (69)

For a system governed by ẋ(t) = A x(t), and considering the J matrix, the output of the system will have solutions of the form

    y(t) = c₁ e^{3t} + c₂ t e^{3t} + c₃ e^{6t}                          (70)

where the first two terms correspond to the first Jordan block, and the last term to the second Jordan block.
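The whole first example can be replayed in Python/NumPy. This is a sketch of the Matlab session above (the SVD-based null_space helper is a stand-in for Matlab's null, and pinv for Matlab's pinv):

```python
import numpy as np

def null_space(B, tol=1e-10):
    """Orthonormal basis for null(B) via SVD."""
    _, s, Vh = np.linalg.svd(B)
    rank = int(np.sum(s > tol))
    return Vh[rank:].T

A = np.array([[ 3.0, 3.0,  3.0],
              [-3.0, 3.0, -3.0],
              [ 3.0, 0.0,  6.0]])
I = np.eye(3)
lam1, lam2 = 3.0, 6.0

V111 = null_space(A - lam1 * I)[:, 0]
V111 = V111 / V111[0]                         # scale first entry to 1
V112 = np.linalg.pinv(A - lam1 * I) @ V111    # generalized eigenvector, Eqn (68)
V211 = null_space(A - lam2 * I)[:, 0]         # regular eigenvector of lambda2

M = np.column_stack([V111, V112, V211])
J = np.linalg.inv(M) @ A @ M
assert np.allclose(J, [[3, 1, 0],
                       [0, 3, 0],
                       [0, 0, 6]], atol=1e-8)
```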

Figure 2: Illustration of chains of eigenvectors. Three eigenvectors, two chains; one chain has two elements. (k = 1, j = 1, λ_1 = 3: V^1_{1,1} → V^2_{1,1}. k = 2, j = 1, λ_2 = 6: V^1_{2,1}.)

4.2.3 More on Jordan blocks

A matrix in Jordan canonical form has a block-diagonal structure, with

  - Eigenvalues on the main diagonal
  - Ones on the super diagonal within each block. A 2x2 block has 1 one, a 3x3 block has 2 ones, etc.

One Jordan block corresponds to each regular eigenvector:

  - If the regular eigenvector has no generalized eigenvectors, then it creates a 1x1 block.
  - If the regular eigenvector anchors a chain with one generalized eigenvector, then it creates a 2x2 block, etc.

Each Jordan block corresponds to:

  - 1x1 block: a regular eigenvector
  - n x n block, n ≥ 2: a chain anchored by a regular eigenvector, with n − 1 generalized eigenvectors

Using the V^l_{k,j} notation, if we look at the structure of the M matrix, we can determine the layout of Jordan blocks. For example, if

    M = [ V^1_{1,1}  V^1_{1,2}  V^2_{1,2}  V^1_{2,1}  V^2_{2,1} ]

then the blocks of J are arranged:

    J = [ 1x1          ]
        [      2x2     ]
        [          2x2 ]

4.2.4 The Jordan form, a second example

A =
     3    -1     1     1     0     0
     1     1    -1    -1     0     0
     0     0     2     0     1     1
     0     0     0     2    -1    -1
     0     0     0     0     1     1
     0     0     0     0     1     1

λ_1 = 0; λ_2 = 2 is repeated 5 times.

>> U = eig(A)
U =
    2.0000
    2.0000
    2.0000
    2.0000
    2.0000
         0

>> lambda1=0; lambda2=2; I = eye(6);
>> [Row, Col, Null, LNull] = spaces(A-lambda2*I);
>> g2 = rank(Null)
g2 =
     2

(A − λ_2 I) has a 2-dimensional null space, so there are 2 independent regular eigenvectors.

Null =
         0    0.7071
         0    0.7071
   -0.7071    0.0000
    0.7071    0.0000
    0.0000         0
    0.0000         0

For convenience, scale the eigenvectors to get integer values

>> V211 = Null(:,1)/Null(3,1)
V211 =
     0
     0
     1
    -1
     0
     0

>> V221 = Null(:,2)/Null(1,2)
V221 =
     1
     1
     0
     0
     0
     0

Check that the regular eigenvectors are actually eigenvectors for λ_2 = 2

>> %% Check that the eigenvectors are indeed eigenvectors
>> %% These norms come out to zero; very small would be sufficient
>> NDiff1 = norm( A*V211 - lambda2*V211 )
NDiff1 =
     0
>> NDiff2 = norm( A*V221 - lambda2*V221 )
NDiff2 =
     0

Note: All vectors from null(A − λ_2 I) are eigenvectors. For example,

>> x = 0.3*V211 + 0.4*V221
x =
    0.4000
    0.4000
    0.3000
   -0.3000
         0
         0
>> NDiffx = norm( A*x - lambda2*x )
NDiffx =
     0

Make a table of the structure of the eigensystem.

    k   λ_k   m_k   g_k   # Needed Gen. Evecs, m_k − g_k
    1    0     1     1    0
    2    2     5     2    3

Table 5: Structure of the eigensystem of A.

We need 3 generalized eigenvectors to have a complete set. These three will be in chains anchored by one or both of the regular eigenvectors of λ_2.

The equation giving generalized eigenvectors is

    (A − λ_k I) V^{l+1}_{k,j} = V^l_{k,j}                               (71)

This is simply the relation

    B x = y                                                             (72)

with B = (A − λ_k I), y = V^l_{k,j} and x = V^{l+1}_{k,j}. We know some things about the solutions of Eqn (72):

1. For an exact solution to exist, V^l_{k,j} must lie in the column space of (A − λ_k I).

2. If we find a solution V^{l+1}_{k,j}, it is not unique. We can add any component from the null space of (A − λ_k I), and it will still be a solution.

Considering again the example problem, check that V211 and V221 lie in the column space of (A − λ_2 I) by checking that the projection of each onto the column space is equal to the original vector

>> [Row, Col, Null, LNull] = spaces(A-lambda2*I);
>> NIsInColumnSpaceV211 = norm( Col*Col'*V211 - V211 )
NIsInColumnSpaceV211 =
   1.1102e-16    %% V211 is in the column space of (A-lambda2*I)
>> NIsInColumnSpaceV221 = norm( Col*Col'*V221 - V221 )
NIsInColumnSpaceV221 =
   1.1430e-15    %% V221 is in the column space of (A-lambda2*I)

Both vectors lie in the column space of (A − λ_2 I), so each will have at least one generalized eigenvector.

Find V^2_{2,1}

>> V212 = pinv(A-lambda2*I)*V211
V212 =
         0
         0
    0.0000
   -0.0000
    0.5000
    0.5000

%% Check that V212 is a generalized eigenvector
>> NDiffV212 = norm( (A-lambda2*I)*V212 - V211 )
NDiffV212 =
   2.7581e-16

Yes, V^2_{2,1} is a generalized eigenvector. Test to see if V^2_{2,1} is in the column space of (A − λ_2 I)

>> norm( Col*Col'*V212 - V212 )
ans =
    0.7071

No, so there is no V^3_{2,1}.

Now evaluate V^2_{2,2}

>> V222 = pinv(A-lambda2*I)*V221
V222 =
    0.5000
   -0.5000
   -0.0000
   -0.0000
    0.0000
    0.0000

>> %% Check that V222 is a gen. eigenvector
>> NDiffV222 = norm( (A-lambda2*I)*V222 - V221 )
NDiffV222 =
   6.2804e-16

Yes, V^2_{2,2} is a generalized eigenvector.

Now check to see that V^2_{2,2} is in the column space of (A − λ_2 I)

>> NIsInColumnSpaceV222 = norm( Col*Col'*V222 - V222 )
NIsInColumnSpaceV222 =
   4.2999e-16

Yes, so there is a V^3_{2,2}. This will be the third generalized eigenvector

>> V223 = pinv(A-lambda2*I)*V222
V223 =
    0.0000
   -0.0000
    0.2500
    0.2500
   -0.0000
   -0.0000

The chains of eigenvectors are seen in figure 3.

Figure 3: Illustration of chains of eigenvectors for this 6x6 example. (k = 1, j = 1, λ_1 = 0: V^1_{1,1}. k = 2, j = 1, λ_2 = 2: V^1_{2,1} → V^2_{2,1}. k = 2, j = 2, λ_2 = 2: V^1_{2,2} → V^2_{2,2} → V^3_{2,2}.)

Now build the M matrix. First we need the regular eigenvector corresponding to λ_1

>> V111 = null(A-lambda1*I);
>> V111 = V111/V111(5)
V111 =
     0
     0
     0
     0
     1
    -1

Put in the chains of E-vecs, starting each chain with its regular E-vec

>> M = [ [V111] [V211 V212] [V221 V222 V223] ]
>> M = RoundByRatCommand(M)
M =
         0         0         0    1.0000    0.5000         0
         0         0         0    1.0000   -0.5000         0
         0    1.0000         0         0         0    0.2500
         0   -1.0000         0         0         0    0.2500
    1.0000         0    0.5000         0         0         0
   -1.0000         0    0.5000         0         0         0

Now find J

>> J = inv(M)*A*M;
>> J = RoundByRatCommand(J)
J =
     0     0     0     0     0     0
     0     2     1     0     0     0
     0     0     2     0     0     0
     0     0     0     2     1     0
     0     0     0     0     2     1
     0     0     0     0     0     2

Interpreting the result: J has 3 Jordan blocks

    J = [ 0 |      |         ]
        [---+------+---------]
        [   | 2  1 |         ]
        [   | 0  2 |         ]
        [---+------+---------]
        [   |      | 2  1  0 ]
        [   |      | 0  2  1 ]
        [   |      | 0  0  2 ]

Correspondingly, M has 3 chains of eigenvectors (of lengths 1, 2 and 3):

M =
         0 |       0         0 |  1.0000    0.5000         0
         0 |       0         0 |  1.0000   -0.5000         0
         0 |  1.0000         0 |       0         0    0.2500
         0 | -1.0000         0 |       0         0    0.2500
    1.0000 |       0    0.5000 |       0         0         0
   -1.0000 |       0    0.5000 |       0         0         0

The first eigenvector in each chain is a regular eigenvector.
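The 6x6 example can likewise be replayed end-to-end. The sketch below hard-codes the scaled regular eigenvectors from the notes, builds the chains with the pseudo-inverse, and checks the resulting Jordan form:

```python
import numpy as np

A = np.array([[3, -1,  1,  1,  0,  0],
              [1,  1, -1, -1,  0,  0],
              [0,  0,  2,  0,  1,  1],
              [0,  0,  0,  2, -1, -1],
              [0,  0,  0,  0,  1,  1],
              [0,  0,  0,  0,  1,  1]], dtype=float)
lam1, lam2 = 0.0, 2.0
B = A - lam2 * np.eye(6)
P = np.linalg.pinv(B)

# Regular eigenvectors, scaled as in the notes
V111 = np.array([0, 0, 0, 0, 1, -1], dtype=float)
V211 = np.array([0, 0, 1, -1, 0, 0], dtype=float)
V221 = np.array([1, 1, 0, 0, 0, 0], dtype=float)
assert np.allclose(A @ V111, lam1 * V111)
assert np.allclose(B @ V211, 0) and np.allclose(B @ V221, 0)

# Chains by repeated application of Eqn (67)
V212 = P @ V211                  # chain of length 2
V222 = P @ V221                  # chain of length 3 ...
V223 = P @ V222

M = np.column_stack([V111, V211, V212, V221, V222, V223])
J = np.linalg.inv(M) @ A @ M
J_expected = np.array([[0, 0, 0, 0, 0, 0],
                       [0, 2, 1, 0, 0, 0],
                       [0, 0, 2, 0, 0, 0],
                       [0, 0, 0, 2, 1, 0],
                       [0, 0, 0, 0, 2, 1],
                       [0, 0, 0, 0, 0, 2]], dtype=float)
assert np.allclose(J, J_expected, atol=1e-8)
```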

4.3 One more twist, freedom to choose the regular eigenvector

Fact: If a matrix A has repeated eigenvalues with g_k > 1 independent eigenvectors, the g_k eigenvectors form a vector subspace

    X_k = null(A − λ_k I)                                               (73)

Any vector from X_k is an eigenvector.

When m_k ≥ 3, it is possible that g_k ≥ 2 and we still need to find a generalized eigenvector. In this case dim null(A − λ_k I) = g_k ≥ 2, and any vector from the 2-dimensional (or larger) null space of (A − λ_k I) is an eigenvector.

Consider the generating equation for the generalized eigenvector

    (A − λ_k I) V^2_{k,j} = V^1_{k,j}                                   (74)

The anchor V^1_{k,j} must also lie in the column space of (A − λ_k I).

A regular eigenvector that anchors a chain of generalized eigenvectors must lie in 2 spaces at once:

  - The null space of (A − λ_k I) ............. to be a regular e-vec of A.
  - The column space of (A − λ_k I) ........... to generate a generalized e-vec of A.

When g_k ≥ 2, we have the freedom to choose the anchor for the chain of generalized eigenvectors, not just from a list V^1_{k,1}, V^1_{k,2}, ...

It may be that we have valid eigenvectors V^1_{k,1} and V^1_{k,2}, neither one of which lies in the column space of (A − λ_k I)!

We can choose the regular eigenvector as any vector from the null space of (A − λ_k I). By forming the intersection of the null and column spaces of (A − λ_k I), we can find the needed regular eigenvector:

    W = col(A − λ_k I) ∩ null(A − λ_k I) ,    V^1_{k,j} ∈ W             (75)

4.3.1 Example where regular E-vecs do not lie in the column space of (A − λ_k I)

Consider the matrix

    A = [ 2  1  1  1 ]
        [ 0  4  1  3 ]
        [ 0  1  2  2 ]                                                  (76)
        [ 0  1  0  0 ]

Analyzing the structure of its eigensystem

>> RoundByRatCommand( eig(A) )
ans =
     2
     2
     2
     2

>> I = eye(4); lambda1 = 2;
>> [Row, Col, Null, Lnull] = spaces(A - lambda1*I)
Col =
    0.3055   -0.7118
    0.7342   -0.2468
   -0.4287   -0.4650
   -0.4287   -0.4650
Null =
         0    1.0000
    0.8165         0
   -0.4082         0
   -0.4082         0

So the structure of the eigensystem is given in table 6

    k   λ_k   m_k   g_k   # Needed Gen. Evecs, m_k − g_k
    1    2     4     2    2

Table 6: Analysis of the structure of the eigensystem of A.

A first choice for eigenvectors is the two basis vectors of the null space of (A − λ_k I)

>> v1 = RoundByRatCommand( Null(:,1) / Null(3,1) )
>> v2 = RoundByRatCommand( Null(:,2) )
v1 =
     0
    -2
     1
     1
v2 =
     1
     0
     0
     0

Determine if the eigenvectors lie in the column space of (A − λ_k I)

>> NIsInColumnSpaceV1 = norm( Col*Col'*v1 - v1 )
NIsInColumnSpaceV1 =
    0.6325
>> NIsInColumnSpaceV2 = norm( Col*Col'*v2 - v2 )
NIsInColumnSpaceV2 =
    0.6325

No, neither eigenvector lies in the column space of (A − λ_k I). But what about the possibility that there exists another eigenvector which lies in both the null space and the column space of (A − λ_k I):

    x₁ = a₁ v₁ + a₂ v₂

First, consider the possibilities. The universe is R⁴, or 4D, with dim col(A − λ_k I) = 2 and dim null(A − λ_k I) = 2. So there are 3 possibilities:

1. Two 2D spaces can fit in a 4D universe and not intersect, so it is possible that col(A − λ_k I) ∩ null(A − λ_k I) = {0}

2. It is possible that the intersection is 1-D

3. It is possible that the intersection is 2-D

Previously, we have seen how to form the intersection of two subspaces. Given sets of basis vectors U = [u₁, u₂, ..., u_nu] and V = [v₁, v₂, ..., v_nv], vectors in the intersection

    W = U ∩ V                                                           (77)

are solutions to

    w_i = a₁ u₁ + a₂ u₂ + ... + a_nu u_nu = b₁ v₁ + b₂ v₂ + ... + b_nv v_nv   (78)

for some non-zero values a₁ ... a_nu, b₁ ... b_nv. Where U and V are the sets of basis vectors on spaces U and V, the coefficients [a₁ ... a_nu, b₁ ... b_nv] must solve

    [ U, −V ] [ a₁ ... a_nu  b₁ ... b_nv ]ᵀ = 0                         (79)

That is, the coefficient vector must lie in the null space of [Col, −Null], where [Col] and [Null] are sets of basis vectors on the column and null spaces of (A − λ_k I).

>> CoeffVec = null([Col, -Null])
CoeffVec =
    0.7033
   -0.0736
    0.6547
    0.2673

Since the null space of [Col, −Null] is one dimensional, the intersection of the column and null spaces is 1-D.
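The intersection construction of Eqns (77)-(79) is easy to package as a small function. The sketch below tests it on an assumed simple example (two planes in R³ meeting in a line), not on the notes' matrix:

```python
import numpy as np

def null_space(B, tol=1e-10):
    """Orthonormal basis for null(B) via SVD."""
    _, s, Vh = np.linalg.svd(B)
    rank = int(np.sum(s > tol))
    return Vh[rank:].T

def intersect(U, V):
    """Basis for span(U) ∩ span(V): null space of [U, -V], then w = U a (Eqn 79)."""
    C = null_space(np.hstack([U, -V]))
    return U @ C[:U.shape[1], :]

# Illustrative subspaces (assumed): the x-y plane and the y-z plane in R^3
U = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
V = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
W = intersect(U, V)
assert W.shape[1] == 1               # the intersection is one-dimensional
w = W[:, 0] / W[1, 0]                # scale: the intersection is the y axis
assert np.allclose(w, [0.0, 1.0, 0.0])
```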

Now find w₁, a vector in both the column and null spaces of (A − λ_k I)

>> w1 = Col*CoeffVec(1:2)
>> w1 = RoundByRatCommand( w1/w1(1) )
w1 =
     1
     2
    -1
    -1

Following the principle of "check everything," verify that w₁ is a regular eigenvector (by construction it already lies in the needed column space)

>> NIsEigenvectorW1 = norm( A*w1 - lambda1*w1 )
NIsEigenvectorW1 =
     0

w₁ is constructed in the column space of (A − λ_k I), and the fact that it is an eigenvector shows it is also in the null space.

Because the intersection of the column and null spaces is 1-dimensional, (A − λ_k I) has exactly one regular eigenvector that can anchor a chain, so the chain must have length 3 (2 generalized E-vecs are needed).

Compute a candidate for the first generalized eigenvector, V^2_{1,1}, as the solution to (A − λ_k I) V^2_{1,1} = V^1_{1,1}

>> V111 = w1;
>> v3 = pinv(A - lambda1*I) * V111
v3 =
         0
    0.3333
    0.3333
    0.3333

To find the remaining generalized eigenvector, V^3_{1,1}, solve

    (A − λ_k I) V^3_{1,1} = V^2_{1,1}                                   (80)

which requires that V^2_{1,1} be in the column space of (A − λ_k I)

>> NIsInColumnSpaceV112 = norm( Col*Col'*v3 - v3 )
NIsInColumnSpaceV112 =
    0.4216

It is not! Vector v₃ is a candidate generalized eigenvector, but we cannot use it for V^2_{1,1}, because it does not lead to V^3_{1,1}.

Going back to the generating Eqn, V^2_{1,1} must simultaneously be part of two solutions:

    (A − λ_k I) V^2_{1,1} = V^1_{1,1}                                   (81)

and

    (A − λ_k I) V^3_{1,1} = V^2_{1,1}                                   (82)

Vector v₃ is a particular solution to Eqn (81), but it is not the only solution. Any vector

    V^2_{1,1} = v₃ + b₁ n₁ + b₂ n₂                                      (83)

is an exact solution to (81), where n₁ and n₂ are basis vectors for the null space of (A − λ_k I). We must find a vector V^2_{1,1} that is given by Eqn (83), but also lies in the column space of (A − λ_k I), to provide a solution to Eqn (82).

To find a value for V^2_{1,1} that is in the column space of (A − λ_k I), we need a solution to

    V^2_{1,1} = v₃ + b₁ n₁ + b₂ n₂ = a₁ c₁ + a₂ c₂                      (84)

where c₁ and c₂ are the basis vectors on the column space of (A − λ_k I). The solution to Eqn (84) is found by writing

    [ c₁  c₂  −n₁  −n₂ ] [ a₁  a₂  b₁  b₂ ]ᵀ = v₃                       (85)

1. Find the coefficient vector

>> CoeffVec2 = pinv( [Col, -Null] ) * v3
CoeffVec2 =
   -0.0880
   -0.8406
   -0.2333
    0.5714

(a) Determine the candidate vector

>> v3b = v3 + Null * CoeffVec2(3:4)
v3b =
    0.5714
    0.1429
    0.4286
    0.4286

(b) Check to be sure the new candidate is in the column space of (A − λ_k I)

>> NIsInColumnSpaceV112b = norm( Col*Col'*v3b - v3b )
NIsInColumnSpaceV112b =
   6.0044e-16

Yes!

Set V^2_{1,1} and compute V^3_{1,1}

>> V112 = v3b
>> V113 = pinv(A - lambda1*I) * V112
V112 =
    0.5714
    0.1429
    0.4286
    0.4286
V113 =
         0
    0.1905
    0.6905
   -0.3095

Build the M matrix (V^1_{1,2} is any independent regular eigenvector) and compute J.

>> M = [ V111, V112, V113, V121 ]
M =
    1.0000    0.5714         0    1.0000
    2.0000    0.1429    0.1905         0
   -1.0000    0.4286    0.6905         0
   -1.0000    0.4286   -0.3095         0

>> J = RoundByRatCommand( inv(M) * A * M )
J =
     2     1     0     0
     0     2     1     0
     0     0     2     0
     0     0     0     2

J has a 3x3 block and a 1x1 block. Done!

Recapping, with m_k = 4 and g_k = 2:

1. Regular E-vec V^1_{1,1} had to be a regular E-vec, but also had to satisfy Eqn (74) to lead to V^2_{1,1}

2. Not any V^2_{1,1} would do; it had to be the one that could lead to V^3_{1,1}

Figure 4: Illustration of chains of eigenvectors for this 4x4 example. (k = 1, j = 1, λ_1 = 2: V^1_{1,1} → V^2_{1,1} → V^3_{1,1}. k = 1, j = 2, λ_1 = 2: V^1_{1,2}.)

All of our work on vector spaces, leading up to the four fundamental spaces, was required to find the solution of ẋ(t) = A x(t) with this A matrix!
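The full bottom-up procedure of this section can be rehearsed on a synthetic matrix whose Jordan form is known in advance. Everything below is an illustrative sketch: the test matrix is built as A = T J T⁻¹ from an assumed T (not the notes' example), then the Jordan form is recovered by the anchor-selection and column-space-correction steps:

```python
import numpy as np

def orth_and_null(B, tol=1e-10):
    """Orthonormal bases for the column space and null space of B, via SVD."""
    Uc, s, Vh = np.linalg.svd(B)
    r = int(np.sum(s > tol))
    return Uc[:, :r], Vh[r:].T

# Synthetic test matrix with known Jordan form diag(J3(2), [2]):
Jb = np.array([[2, 1, 0, 0],
               [0, 2, 1, 0],
               [0, 0, 2, 0],
               [0, 0, 0, 2]], dtype=float)
T = np.tril(np.ones((4, 4)))         # assumed similarity transform, det = 1
A = T @ Jb @ np.linalg.inv(T)
lam = 2.0
B = A - lam * np.eye(4)
Col, Nul = orth_and_null(B)
P = np.linalg.pinv(B)

# 1) anchor: a regular eigenvector in col(B) ∩ null(B), per Eqn (75)
C = orth_and_null(np.hstack([Col, -Nul]))[1]
w1 = Col @ C[:Col.shape[1], 0]

# 2) first generalized eigenvector, corrected into col(B) so the chain continues
v2 = P @ w1
coeff = np.linalg.pinv(np.hstack([Col, -Nul])) @ v2
v2 = v2 + Nul @ coeff[Col.shape[1]:]   # Eqn (83): add a null-space component

# 3) second generalized eigenvector
v3 = P @ v2

# 4) a second regular eigenvector, independent of w1 (null basis minus w1 component)
cands = Nul - np.outer(w1 / (w1 @ w1), w1 @ Nul)
v4 = cands[:, np.argmax(np.linalg.norm(cands, axis=0))]

M = np.column_stack([w1, v2, v3, v4])
J = np.linalg.inv(M) @ A @ M
assert np.allclose(J, Jb, atol=1e-6)   # the known Jordan form is recovered
```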

4.4 Summary of the Jordan Form

Square matrices always have n eigenvalues, λ_i. The regular eigenvectors are given as the null space of (A − λ_i I).

For a repeated eigenvalue λ_k:

  - The algebraic multiplicity, m_k, is the number of times λ_k is repeated
  - The geometric multiplicity, g_k, is the dimension of null(A − λ_k I)

When eigenvalues are repeated, we may not have enough independent regular eigenvectors (g_k < m_k), in which case the Jordan form is required.

The Jordan form corresponds to scalar differential equations with repeated roots and solutions of the form

    y(t) = a₁ e^{λ₁ t} + a₂ t e^{λ₁ t} + a₃ t² e^{λ₁ t} + ...

For repeated eigenvalues, regular eigenvectors give rise to chains of generalized eigenvectors. The generalized eigenvectors are solutions to

    (A − λ_k I) v^{l+1}_{k,j} = v^l_{k,j}                               (86)

where Eqn (86) is applied repeatedly as needed to obtain m_k eigenvectors.

Examples have shown several characteristics of eigensystems with repeated roots.
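The two multiplicities can be computed side by side; a small sketch (the 2x2 test matrix is an assumption for illustration):

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [0.0, 3.0]])             # illustrative defective matrix
lam = 3.0

evals = np.linalg.eigvals(A)
m = int(np.sum(np.isclose(evals, lam)))     # algebraic multiplicity m_k
_, s, _ = np.linalg.svd(A - lam * np.eye(2))
g = int(np.sum(s < 1e-10))                  # geometric multiplicity g_k = nullity
assert (m, g) == (2, 1)    # defective: m_k - g_k = 1 generalized eigenvector needed
```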

4.4.1 Why Matlab does not have a numeric Jordan command

Strikingly, Matlab has no numerical routine to find the generalized eigenvectors or Jordan form (standard Matlab has no jordan() routine!).

This is because the Jordan form calculation is numerically very sensitive: a small perturbation in A produces a large change in the chains of eigenvectors. This sensitivity is true of the differential equations themselves;

    ÿ(t) + 6 ẏ(t) + 9.00001 y(t) = 0

has two distinct roots!

Consider the stages where a decision must be made:

  - When there are two eigenvalues with λ_a ≈ λ_b, are they repeated or distinct?
  - What is the dimension of null(A − λI)?
  - Does v^l_{k,j} lie in the column space of (A − λI), or does it not?
  - Is v^{l+1}_{k,j} independent of the existing eigenvectors?

There is no known numerical routine to find the Jordan form that is sufficiently numerically robust to be included by Mathworks in Matlab.

The Matlab symbolic algebra package does have a jordan() routine. It runs symbolically on rational numbers to operate without round-off error, for example:

    A = [ 21/107   52/12    3/2
          0        119/120  8/5
          1/1      11/12    13/14 ]
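The sensitivity claim is easy to reproduce numerically: perturbing the coefficient 9 by 1e-5 splits the repeated root at s = −3 into a distinct (complex) pair.

```python
import numpy as np

# Characteristic equation of the ODE above: s^2 + 6 s + 9.00001 = 0.
# Unperturbed (9 exactly), this has a double root at s = -3; the tiny
# perturbation makes the discriminant negative, splitting the roots.
r = np.roots([1.0, 6.0, 9.00001])
assert abs(r[0] - r[1]) > 1e-3           # the roots are distinct
assert np.allclose(r.real, -3.0, atol=1e-2)
```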

5 Conclusions

To solve

    ẋ(t) = A x(t) + B u(t)                                              (87)

the solution includes

    x(t) = e^{At} x(0)                                                  (88)

When we have a complete set of eigenvectors, we can diagonalize A, and

    e^{At} = V e^{Ut} V⁻¹                                               (89)

with

    U = V⁻¹ A V                                                         (90)

When we lack a complete set of eigenvectors, the Jordan form is used, with

    e^{At} = M e^{Jt} M⁻¹                                               (91)

with

    J = M⁻¹ A M                                                         (92)

6 Review questions and skills

1. In what fundamental space do the regular eigenvectors lie?

2. Given the eigenvalues of a matrix, analyze the structure of the eigensystem
   (a) Determine the number of required generalized eigenvectors

3. Indicate the generating equation for the generalized eigenvectors.

4. Indicate in what fundamental space the vectors of the generating equations must lie.

5. When g_k ≥ 2, and no regular eigenvector lies in the column space of (A − λ_k I), what steps can be taken?

6. When additional generalized eigenvectors are needed, and v, a candidate generalized eigenvector, does not lie in the column space of (A − λ_k I), what steps can be taken?