
Matrix Exponentials
Math 246, Fall 2009, Professor David Levermore

We now consider the homogeneous constant-coefficient, vector-valued initial-value problem

(1)    dx/dt = Ax ,    x(t_I) = x^I ,

where A is a constant n×n real matrix. A special fundamental matrix associated with this problem is the solution Φ(t) of the matrix-valued initial-value problem

(2)    dΦ/dt = AΦ ,    Φ(0) = I ,

where I is the n×n identity matrix. We assert that Φ(t) satisfies

    (i)  Φ(t + s) = Φ(t)Φ(s) for every t and s in R,
    (ii) Φ(t)Φ(-t) = I for every t in R.

Assertion (i) follows because both sides satisfy the matrix-valued initial-value problem

    dΨ/dt = AΨ ,    Ψ(0) = Φ(s) ,

and are therefore equal. Assertion (ii) follows by setting s = -t in assertion (i) and using the fact Φ(0) = I. The fundamental matrix Φ(t) is therefore called the exponential of tA and is commonly denoted as either e^{tA} or exp(tA). It is easy to check that the solution of the initial-value problem (1) is given by

    x(t) = e^{(t - t_I)A} x^I .

The Taylor expansion of e^{tA} about t = 0 is

(3)    e^{tA} = Σ_{k=0}^{∞} (1/k!) t^k A^k = I + tA + (1/2) t^2 A^2 + (1/6) t^3 A^3 + (1/24) t^4 A^4 + ⋯ ,

where we define A^0 = I. Recall that the Taylor expansion of e^{at} is

    e^{at} = Σ_{k=0}^{∞} (1/k!) a^k t^k = 1 + at + (1/2) a^2 t^2 + (1/6) a^3 t^3 + (1/24) a^4 t^4 + ⋯ .

Motivated by this fact, the book defines e^{tA} by the infinite series (3).

Matrix KEY Identity. Given any polynomial

    p(z) = π_0 z^m + π_1 z^{m-1} + ⋯ + π_{m-1} z + π_m

and any n×n matrix A, we define the n×n matrix p(A) by

    p(A) = π_0 A^m + π_1 A^{m-1} + ⋯ + π_{m-1} A + π_m I .

Because for every nonnegative integer k one has

    (d^k/dt^k) e^{tA} = A^k e^{tA} ,

it follows from the definition of p(A) that

(4)    p(d/dt) e^{tA} = p(A) e^{tA} .

This is the matrix version of the KEY identity. Just as the scalar KEY identity allowed us to construct explicit solutions to higher-order linear differential equations with constant coefficients, the matrix KEY identity allows us to construct explicit solutions to first-order linear differential systems with a constant coefficient matrix.
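Because the series (3) converges for every square matrix, truncating it gives a quick numerical sanity check of assertion (i). A minimal NumPy sketch (the sample matrix and the truncation length are arbitrary choices for illustration, not from the notes):

```python
import numpy as np

def expm_series(A, t, terms=30):
    """Approximate e^{tA} by truncating the Taylor series (3)."""
    total = np.eye(A.shape[0])
    term = np.eye(A.shape[0])        # k = 0 term: A^0 = I
    for k in range(1, terms):
        term = term @ (t * A) / k    # running term becomes (1/k!) t^k A^k
        total = total + term
    return total

A = np.array([[3.0, 2.0], [2.0, 3.0]])
# Property (i): Phi(t + s) = Phi(t) Phi(s)
print(np.allclose(expm_series(A, 0.3) @ expm_series(A, 0.4),
                  expm_series(A, 0.7)))  # True
```

Truncating at 30 terms is far more than enough here; for matrices with large entries the partial sums converge more slowly, which is one reason the methods developed below are preferable in practice.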

Computing the Matrix Exponential

Given any n×n matrix A, there are many ways to compute e^{tA} that are easier than evaluating the infinite series (3). The book gives a method that is based on computing the eigenvectors and (sometimes) the generalized eigenvectors of the matrix A. This method requires a different approach depending on whether the eigenvalues of the real matrix A are real, complex conjugate, or have multiplicity greater than one. These approaches are covered in Sections 7.5, 7.6, and 7.8, but those sections do not cover all the possible cases that can arise. Here we will give a different method that covers all possible cases with a single approach. Moreover, this method is generally much faster to carry out than the book's method when n is not too large.

This method begins by identifying a polynomial p(z) such that p(A) = 0. Such a polynomial is said to annihilate A. The Cayley-Hamilton Theorem states that one such polynomial is the characteristic polynomial of A, which we define by

(5)    p_A(z) = det(zI - A) .

This polynomial has degree n. Because det(zI - A) = (-1)^n det(A - zI), this definition of p_A(z) coincides with the book's definition when n is even, and is its negative when n is odd. Both conventions are common. We have chosen the convention that makes p_A(z) monic. What matters most about p_A(z) is its roots and their multiplicity, which are the same for both conventions. These roots are called the eigenvalues of A.

The Cayley-Hamilton Theorem states that

(6)    p_A(A) = 0 .

We will not prove this for general n×n matrices. However, it is easy to verify for 2×2 matrices by a direct calculation. Consider the general 2×2 matrix

        [ a11  a12 ]
    A = [ a21  a22 ] .

Its characteristic polynomial is

    p_A(z) = det(zI - A) = det [ z - a11    -a12   ]
                               [ -a21      z - a22 ]
           = (z - a11)(z - a22) - a21 a12
           = z^2 - (a11 + a22) z + (a11 a22 - a21 a12)
           = z^2 - tr(A) z + det(A) ,

where tr(A) = a11 + a22 is the trace of A. Then a direct calculation shows that

    p_A(A) = A^2 - (a11 + a22) A + (a11 a22 - a21 a12) I

           = [ a11^2 + a12 a21     (a11 + a22) a12  ]   [ (a11 + a22) a11   (a11 + a22) a12 ]
             [ (a11 + a22) a21     a21 a12 + a22^2  ] - [ (a11 + a22) a21   (a11 + a22) a22 ]

             + [ a11 a22 - a21 a12          0          ]
               [ 0                  a11 a22 - a21 a12  ]

           = 0 ,

which verifies (6) for 2×2 matrices.
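The 2×2 verification above is easy to replicate numerically. A small NumPy sketch (the sample entries are arbitrary):

```python
import numpy as np

# Verify p_A(A) = A^2 - tr(A) A + det(A) I = 0 for a 2x2 matrix,
# mirroring the direct calculation in the notes.
A = np.array([[1.0, 2.0], [3.0, 4.0]])
pA_of_A = A @ A - np.trace(A) * A + np.linalg.det(A) * np.eye(2)
print(np.allclose(pA_of_A, 0))  # True
```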

By the above paragraph, you can always find a polynomial p(z) of degree m ≤ n that annihilates A. For this polynomial, we see from the matrix KEY identity (4) that

    p(d/dt) e^{tA} = p(A) e^{tA} = 0 .

This means that each entry of e^{tA} is a solution of the m-th order scalar homogeneous linear differential equation with constant coefficients

(7)    p(d/dt) y = 0 .

If y_1(t), y_2(t), …, y_m(t) is a fundamental set of solutions to this equation then a general solution of it is

    y = Σ_{j=1}^{m} c_j y_j(t) ,

where c_1, c_2, …, c_m are arbitrary constants. It follows that e^{tA} must have the form

(8)    e^{tA} = Σ_{j=1}^{m} C_j y_j(t) ,

where C_1, C_2, …, C_m are constant n×n matrices. These constant matrices may be determined by taking derivatives of (8) with respect to t and evaluating them at t = 0. The k-th derivative of (8) evaluated at t = 0 gives

(9)    A^k = Σ_{j=1}^{m} C_j y_j^{(k)}(0) .

Because y_1(t), y_2(t), …, y_m(t) is a fundamental set of solutions to (7), the constant matrices C_1, C_2, …, C_m are determined by (9) for k = 0, 1, …, m - 1.

For example, if p(z) has m simple roots λ_1, λ_2, …, λ_m, then one can choose the fundamental set of solutions to (7) given by y_j(t) = e^{λ_j t} for j = 1, 2, …, m. Then (9) becomes the system of m linear equations

(10)    A^k = Σ_{j=1}^{m} C_j λ_j^k ,    for k = 0, 1, …, m - 1 .

This system may be solved for the constant matrices C_1, C_2, …, C_m, and the result placed into (8) to obtain

    e^{tA} = Σ_{j=1}^{m} C_j e^{λ_j t} .

Example. Compute e^{tA} for

        [ 3  2 ]
    A = [ 2  3 ] .

Solution. Because A is 2×2, its characteristic polynomial is

    p(z) = det(zI - A) = z^2 - tr(A) z + det(A) = z^2 - 6z + 5 = (z - 1)(z - 5) .

Its roots are 1 and 5. System (10) then becomes

    I = C_1 + C_2 ,    A = C_1 + 5 C_2 .

This system can be easily solved to find

    C_1 = (1/4)(5I - A) = (1/2) [  1  -1 ]        C_2 = (1/4)(A - I) = (1/2) [ 1  1 ]
                                [ -1   1 ] ,                                 [ 1  1 ] .

Formula (8) then yields

    e^{tA} = C_1 e^t + C_2 e^{5t} = (1/2) [ e^t + e^{5t}    e^{5t} - e^t  ]
                                          [ e^{5t} - e^t    e^t + e^{5t} ] .

Exponentials of Two-by-Two Matrices

There are simple formulas for the exponential of a general 2×2 real matrix

        [ a11  a12 ]
    A = [ a21  a22 ] .

Because A is 2×2, its characteristic polynomial is

    p(z) = det(zI - A) = z^2 - tr(A) z + det(A) .

Upon completing the square we see that

    p(z) = (z - µ)^2 - δ ,

where the mean µ and discriminant δ are given by

    µ = tr(A)/2 ,    δ = (tr(A)/2)^2 - det(A) .

There are three cases, which are distinguished by the sign of δ.

If δ > 0 then p(z) has the simple real roots µ - ν and µ + ν, where ν = √δ. In this case

(11)    e^{tA} = e^{µt} [ I cosh(νt) + (A - µI) sinh(νt)/ν ] .

If δ < 0 then p(z) has the complex conjugate roots µ - iν and µ + iν, where ν = √(-δ). In this case

(12)    e^{tA} = e^{µt} [ I cos(νt) + (A - µI) sin(νt)/ν ] .

If δ = 0 then p(z) has the double real root µ. In this case

(13)    e^{tA} = e^{µt} [ I + (A - µI) t ] .

For 2×2 matrices you will find it faster to apply these formulas than to set up and solve for C_1 and C_2. These formulas are easy to remember. Notice that formulas (11) and (12) are similar: the first uses hyperbolic functions when p(z) has simple real roots, while the second uses trigonometric functions when p(z) has complex conjugate roots. Formula (13) is the limiting case of both (11) and (12) as ν → 0.
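The three-case formulas (11)-(13) are straightforward to code. A NumPy sketch, checked against SciPy's general-purpose matrix exponential (two of the sample matrices come from the examples in these notes; the triangular one for the δ = 0 case is an arbitrary illustration):

```python
import numpy as np
from scipy.linalg import expm

def expm_2x2(A, t):
    """e^{tA} for a real 2x2 matrix A via formulas (11)-(13)."""
    mu = np.trace(A) / 2.0
    delta = mu ** 2 - np.linalg.det(A)
    B = A - mu * np.eye(2)
    if delta > 0:                      # simple real roots mu +/- nu
        nu = np.sqrt(delta)
        F = np.eye(2) * np.cosh(nu * t) + B * np.sinh(nu * t) / nu
    elif delta < 0:                    # complex conjugate roots mu +/- i nu
        nu = np.sqrt(-delta)
        F = np.eye(2) * np.cos(nu * t) + B * np.sin(nu * t) / nu
    else:                              # double real root mu
        F = np.eye(2) + B * t
    return np.exp(mu * t) * F

for A in (np.array([[3.0, 2.0], [2.0, 3.0]]),     # delta > 0 (example above)
          np.array([[6.0, 5.0], [-5.0, -2.0]]),   # delta < 0 (example below)
          np.array([[2.0, 1.0], [0.0, 2.0]])):    # delta = 0 (illustration)
    print(np.allclose(expm_2x2(A, 0.4), expm(0.4 * A)))  # True, three times
```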

Example. Compute e^{tA} for

        [ 3  2 ]
    A = [ 2  3 ] .

Solution. Because A is 2×2, its characteristic polynomial is

    p(z) = det(zI - A) = z^2 - tr(A) z + det(A) = z^2 - 6z + 5 = (z - 3)^2 - 4 .

It has the real roots 3 ± 2. By (11) with µ = 3 and ν = 2 we see that

    e^{tA} = e^{3t} [ I cosh(2t) + (A - 3I) sinh(2t)/2 ]

           = e^{3t} ( [ 1  0 ] cosh(2t) + [ 0  1 ] sinh(2t) )
                    ( [ 0  1 ]            [ 1  0 ]          )

           = e^{3t} [ cosh(2t)  sinh(2t) ]
                    [ sinh(2t)  cosh(2t) ] .

Example. Compute e^{tA} for

        [  6   5 ]
    A = [ -5  -2 ] .

Solution. Because A is 2×2, its characteristic polynomial is

    p(z) = det(zI - A) = z^2 - tr(A) z + det(A) = z^2 - 4z + 13 = (z - 2)^2 + 3^2 .

It has the conjugate roots 2 ± 3i. By (12) with µ = 2 and ν = 3 we see that

    e^{tA} = e^{2t} [ I cos(3t) + (A - 2I) sin(3t)/3 ]

           = e^{2t} ( [ 1  0 ] cos(3t) + (1/3) [  4   5 ] sin(3t) )
                    ( [ 0  1 ]                 [ -5  -4 ]         )

           = e^{2t} [ cos(3t) + (4/3) sin(3t)      (5/3) sin(3t)           ]
                    [ -(5/3) sin(3t)               cos(3t) - (4/3) sin(3t) ] .

Use of Natural Fundamental Sets

The natural fundamental set of solutions to (7) consists of the solutions y_1(t), y_2(t), …, y_m(t) such that for each j = 1, 2, …, m the solution y_j(t) satisfies the initial conditions

(14)    y_j^{(k-1)}(0) = δ_jk    for k = 1, 2, …, m ,

where δ_jk is the Kronecker delta, which is defined by

    δ_jk = 1 when j = k ,    δ_jk = 0 when j ≠ k .

If y_1(t), y_2(t), …, y_m(t) is the natural fundamental set of solutions to (7) then (9) with k - 1 replacing k becomes

    A^{k-1} = Σ_{j=1}^{m} C_j y_j^{(k-1)}(0) = Σ_{j=1}^{m} C_j δ_jk = C_k    for k = 1, 2, …, m .

In that case (8) becomes

(15)    e^{tA} = Σ_{j=1}^{m} A^{j-1} y_j(t) .

If the natural fundamental set of solutions to (7) is either known or easily found, then this is the shortest route to computing e^{tA} when m is not too large.

Example. Compute e^{tA} for

        [  0   2  1 ]
    A = [ -2   0  2 ]
        [ -1  -2  0 ] .

Solution. The characteristic polynomial of A is

    p(z) = det(zI - A) = z^3 + 9z = z(z^2 + 9) .

Its roots are 0 and ±3i. The associated higher-order equation is

    y''' + 9y' = 0 .

By (14) its natural fundamental set of solutions y_1(t), y_2(t), and y_3(t) satisfies the initial conditions

    y_1(0) = 1 ,    y_2(0) = 0 ,    y_3(0) = 0 ,
    y_1'(0) = 0 ,   y_2'(0) = 1 ,   y_3'(0) = 0 ,
    y_1''(0) = 0 ,  y_2''(0) = 0 ,  y_3''(0) = 1 .

You can solve these three initial-value problems to find

(16)    y_1(t) = 1 ,    y_2(t) = sin(3t)/3 ,    y_3(t) = (1 - cos(3t))/9 .

We will see a more efficient way to find these solutions a bit later, so we do not give the details here. Given these solutions, formula (15) yields

    e^{tA} = I y_1(t) + A y_2(t) + A^2 y_3(t)

           = I + [  0   2  1 ] sin(3t)/3 + [ -5  -2   4 ] (1 - cos(3t))/9
                 [ -2   0  2 ]             [ -2  -8  -2 ]
                 [ -1  -2  0 ]             [  4  -2  -5 ]

           = [ 4/9 + (5/9)cos(3t)                    -2/9 + (2/9)cos(3t) + (2/3)sin(3t)    4/9 - (4/9)cos(3t) + (1/3)sin(3t) ]
             [ -2/9 + (2/9)cos(3t) - (2/3)sin(3t)    1/9 + (8/9)cos(3t)                    -2/9 + (2/9)cos(3t) + (2/3)sin(3t) ]
             [ 4/9 - (4/9)cos(3t) - (1/3)sin(3t)     -2/9 + (2/9)cos(3t) - (2/3)sin(3t)    4/9 + (5/9)cos(3t) ] .
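Formula (15) with the natural fundamental set (16) can be verified numerically against SciPy's general-purpose matrix exponential; a sketch (the evaluation time t is arbitrary):

```python
import numpy as np
from scipy.linalg import expm

# The 3x3 example: p(z) = z^3 + 9z, natural fundamental set (16).
A = np.array([[0.0, 2.0, 1.0],
              [-2.0, 0.0, 2.0],
              [-1.0, -2.0, 0.0]])
t = 0.8
y1 = 1.0
y2 = np.sin(3 * t) / 3
y3 = (1 - np.cos(3 * t)) / 9
F = np.eye(3) * y1 + A * y2 + (A @ A) * y3    # formula (15)
print(np.allclose(F, expm(t * A)))  # True
```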

Three Examples

Formulas (11), (12), and (13) for the exponential of 2×2 matrices can be easily derived using the natural fundamental set of solutions to the equation

    p(d/dt) y = 0 ,    where p(z) = (z - µ)^2 - δ .

There are three cases, which are distinguished by the sign of δ.

If δ > 0 then p(z) has the simple real roots µ - ν and µ + ν, where ν = √δ. In this case the natural fundamental set of solutions is

(17)    y_1(t) = e^{µt} cosh(νt) - µ e^{µt} sinh(νt)/ν ,    y_2(t) = e^{µt} sinh(νt)/ν .

If δ < 0 then p(z) has the complex conjugate roots µ - iν and µ + iν, where ν = √(-δ). In this case the natural fundamental set of solutions is

(18)    y_1(t) = e^{µt} cos(νt) - µ e^{µt} sin(νt)/ν ,    y_2(t) = e^{µt} sin(νt)/ν .

If δ = 0 then p(z) has the double real root µ. In this case the natural fundamental set of solutions is

(19)    y_1(t) = e^{µt} - µ e^{µt} t ,    y_2(t) = e^{µt} t .

Then by (15), formulas (11), (12), and (13) are obtained by plugging the natural fundamental sets of solutions (17), (18), and (19) respectively into

    e^{tA} = I y_1(t) + A y_2(t) .

Notice that (19) is the limiting case of both (17) and (18) as ν → 0.

Remark. The above examples show that, once the natural fundamental set of solutions is found for the associated higher-order equation, employing formula (15) is straightforward. It requires only computing A^k up to k = m - 1 and some additions. For m > 2 this requires (m - 2)n^3 multiplications, which grows fast as m and n get large. (Often m = n.) However, for small systems like the ones you will face in this course, it is generally the fastest method. It will become even faster once you learn how to quickly generate the natural fundamental set of solutions for the associated higher-order equation from its Green function.

Generating Natural Fundamental Sets with Green Functions

In each of the natural fundamental sets of solutions given by (17), (18), and (19), the solutions y_1(t) and y_2(t) are related by

    y_1(t) = y_2'(t) - 2µ y_2(t) .

This is an instance of a more general fact. For the m-th order equation

(20)    p(d/dt) y = 0 ,    where p(z) = z^m + π_1 z^{m-1} + ⋯ + π_{m-1} z + π_m ,

one can generate its entire natural fundamental set of solutions from the Green function g(t) associated with (20). Recall that the Green function g(t) satisfies the initial-value problem

(21)    p(d/dt) g = 0 ,    g(0) = g'(0) = ⋯ = g^{(m-2)}(0) = 0 ,    g^{(m-1)}(0) = 1 .
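The Green function of (21) can also be produced numerically by rewriting p(d/dt)g = 0 as a first-order system for the state (g, g', …, g^{(m-1)}) and starting that state at (0, …, 0, 1). A sketch for p(z) = z^3 + 9z, the polynomial of the example that follows, using SciPy's matrix exponential on the companion matrix:

```python
import numpy as np
from scipy.linalg import expm

# Companion matrix of y''' + 9y' = 0 for the state (y, y', y'').
C = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, -9.0, 0.0]])
t = 1.3
# Green function initial state: g(0) = g'(0) = 0, g''(0) = 1.
g = (expm(t * C) @ np.array([0.0, 0.0, 1.0]))[0]
print(np.isclose(g, (1 - np.cos(3 * t)) / 9))  # True: matches the closed form
```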

The natural fundamental set of solutions is then given by the recipe

(22)    y_m(t) = g(t) ,
        y_{m-1}(t) = g'(t) + π_1 g(t) ,
        y_{m-2}(t) = g''(t) + π_1 g'(t) + π_2 g(t) ,
            ⋮
        y_2(t) = g^{(m-2)}(t) + π_1 g^{(m-3)}(t) + ⋯ + π_{m-3} g'(t) + π_{m-2} g(t) ,
        y_1(t) = g^{(m-1)}(t) + π_1 g^{(m-2)}(t) + π_2 g^{(m-3)}(t) + ⋯ + π_{m-2} g'(t) + π_{m-1} g(t) .

This entire set is thereby generated by the solution of the single initial-value problem (21).

Example. Show that (16) is indeed the natural fundamental set of solutions to the equation y''' + 9y' = 0.

Solution. By (21) the Green function g(t) satisfies the initial-value problem

    g''' + 9g' = 0 ,    g(0) = g'(0) = 0 ,    g''(0) = 1 .

The characteristic polynomial of this equation is p(z) = z^3 + 9z, which has roots 0 and ±3i. We therefore seek a solution in the form

    g(t) = c_1 + c_2 cos(3t) + c_3 sin(3t) .

Because

    g'(t) = -3 c_2 sin(3t) + 3 c_3 cos(3t) ,    g''(t) = -9 c_2 cos(3t) - 9 c_3 sin(3t) ,

the initial conditions for g(t) then yield the algebraic system

    g(0) = c_1 + c_2 = 0 ,    g'(0) = 3 c_3 = 0 ,    g''(0) = -9 c_2 = 1 .

The solution of this system is c_1 = 1/9, c_2 = -1/9, and c_3 = 0, whereby the Green function is

    g(t) = (1 - cos(3t))/9 .

Because p(z) = z^3 + 9z, we read off from (20) that π_1 = 0, π_2 = 9, and π_3 = 0. Then by recipe (22) the natural fundamental set of solutions is given by

    y_3(t) = g(t) = (1 - cos(3t))/9 ,
    y_2(t) = g'(t) + 0 · g(t) = sin(3t)/3 ,
    y_1(t) = g''(t) + 0 · g'(t) + 9 g(t) = cos(3t) + 1 - cos(3t) = 1 .

This is indeed the set given by (16).

Example. Given the fact that p(z) = z^3 - 4z annihilates A, compute e^{tA} for

        [ 0  1  0  1 ]
    A = [ 1  0  1  0 ]
        [ 0  1  0  1 ]
        [ 1  0  1  0 ] .

Solution. Because you are told that p(z) = z^3 - 4z annihilates A, you do not have to compute the characteristic polynomial of A. By (21) the Green function g(t) satisfies the initial-value problem

    g''' - 4g' = 0 ,    g(0) = g'(0) = 0 ,    g''(0) = 1 .

The characteristic polynomial of this equation is p(z) = z^3 - 4z, which has roots 0 and ±2. We therefore seek a solution in the form

    g(t) = c_1 + c_2 e^{2t} + c_3 e^{-2t} .

Because

    g'(t) = 2 c_2 e^{2t} - 2 c_3 e^{-2t} ,    g''(t) = 4 c_2 e^{2t} + 4 c_3 e^{-2t} ,

the initial conditions for g(t) then yield the algebraic system

    g(0) = c_1 + c_2 + c_3 = 0 ,    g'(0) = 2 c_2 - 2 c_3 = 0 ,    g''(0) = 4 c_2 + 4 c_3 = 1 .

The solution of this system is c_1 = -1/4 and c_2 = c_3 = 1/8, whereby the Green function is

    g(t) = -1/4 + (1/8) e^{2t} + (1/8) e^{-2t} = (cosh(2t) - 1)/4 .

Because p(z) = z^3 - 4z, we read off from (20) that π_1 = 0, π_2 = -4, and π_3 = 0. Then by recipe (22) the natural fundamental set of solutions is given by

    y_3(t) = g(t) = (cosh(2t) - 1)/4 ,
    y_2(t) = g'(t) + 0 · g(t) = sinh(2t)/2 ,
    y_1(t) = g''(t) + 0 · g'(t) - 4 g(t) = cosh(2t) - (cosh(2t) - 1) = 1 .

Given these solutions, formula (15) yields

    e^{tA} = I y_1(t) + A y_2(t) + A^2 y_3(t)

           = I + [ 0  1  0  1 ] sinh(2t)/2 + [ 2  0  2  0 ] (cosh(2t) - 1)/4
                 [ 1  0  1  0 ]              [ 0  2  0  2 ]
                 [ 0  1  0  1 ]              [ 2  0  2  0 ]
                 [ 1  0  1  0 ]              [ 0  2  0  2 ]

           = (1/2) [ 1 + cosh(2t)   sinh(2t)       cosh(2t) - 1   sinh(2t)     ]
                   [ sinh(2t)       1 + cosh(2t)   sinh(2t)       cosh(2t) - 1 ]
                   [ cosh(2t) - 1   sinh(2t)       1 + cosh(2t)   sinh(2t)     ]
                   [ sinh(2t)       cosh(2t) - 1   sinh(2t)       1 + cosh(2t) ] .

Remark. The polynomial p(z) = z^3 - 4z used in the above example is not the characteristic polynomial of A. With some effort you can check that p_A(z) = z^4 - 4z^2. We saved quite a bit of work in computing e^{tA} by using p(z) = z^3 - 4z as the annihilating polynomial rather than p_A(z) = z^4 - 4z^2 because it has a lower degree. Given the characteristic polynomial p_A(z) of a matrix A, every annihilating polynomial p(z) will have the same roots as p_A(z), but these roots might have lower multiplicity. If p(z) has simple roots then it has the lowest degree possible for an annihilating polynomial of A. If p_A(z) has roots that are not simple then it pays to find an annihilating polynomial of lower degree. If A is either symmetric (A^T = A) or skew-symmetric (A^T = -A) then it has an annihilating polynomial with simple roots. In the example above, because A is symmetric and p_A(z) has roots -2, 0, 0, and 2, we know that p(z) = (z + 2) z (z - 2) = z^3 - 4z is an annihilating polynomial. Because each root of p(z) is simple, it has the lowest degree possible for an annihilating polynomial of A.

Justification of Recipe (22)

The following justification of recipe (22) is included for completeness. It was not covered in lecture and you do not need to know this argument. However, you should find the recipe itself quite useful.

We see from (21) that g(t) is a solution of

(23)    p(d/dt) y = y^{(m)} + π_1 y^{(m-1)} + π_2 y^{(m-2)} + ⋯ + π_{m-1} y' + π_m y = 0 ,

so that all of its derivatives are too. Each y_j(t) defined by (22) must also be a solution of (23) because it is a linear combination of g(t) and its derivatives. The only thing that remains to be checked is that the initial conditions (14) are satisfied.

Because y_m(t) = g(t), we see from (21) that the initial conditions (14) hold for y_m(t). The key step is to show that if the initial conditions (14) hold for y_{j+1}(t) for some j < m then they hold for y_j(t). Once this is done then we can argue that because the initial conditions (14) hold for y_m(t), they also hold for y_{m-1}(t), which implies they also hold for y_{m-2}(t), and so on down to y_1(t).

We now prove the key step. We suppose that for some j < m the initial conditions (14) hold for y_{j+1}(t). This is the same as

(24)    y_{j+1}^{(k)}(0) = δ_jk    for k = 0, 1, …, m - 1 .

Because y_{j+1}(t) satisfies (23), it follows from the above that

(25)    0 = p(d/dt) y_{j+1}(t) |_{t=0} = y_{j+1}^{(m)}(0) + Σ_{k=0}^{m-1} π_{m-k} y_{j+1}^{(k)}(0)
          = y_{j+1}^{(m)}(0) + Σ_{k=0}^{m-1} π_{m-k} δ_jk = y_{j+1}^{(m)}(0) + π_{m-j} .

We see from recipe (22) that y_j(t) is related to y_{j+1}(t) by

    y_j(t) = y_{j+1}'(t) + π_{m-j} g(t) .

We evaluate the (k - 1)-st derivative of this relation at t = 0 to obtain

    y_j^{(k-1)}(0) = y_{j+1}^{(k)}(0) + π_{m-j} g^{(k-1)}(0)    for k = 1, 2, …, m .

Because g(t) satisfies the initial conditions in (21), we see from (24) that this becomes

    y_j^{(k-1)}(0) = δ_jk    for k = 1, 2, …, m - 1 ,

while we see from (25) that for k = m it becomes

    y_j^{(m-1)}(0) = y_{j+1}^{(m)}(0) + π_{m-j} = 0 .

The initial conditions (14) thereby hold for y_j(t). This completes the proof of the key step, which completes the justification of recipe (22).
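The 4×4 example lends itself to a numerical check: p(z) = z^3 - 4z indeed annihilates A even though it has lower degree than the characteristic polynomial, and formula (15) with the fundamental set produced by recipe (22) reproduces e^{tA}. A NumPy/SciPy sketch (the evaluation time t is arbitrary):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0, 0.0, 1.0],
              [1.0, 0.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0],
              [1.0, 0.0, 1.0, 0.0]])

# p(A) = A^3 - 4A = 0, although p_A(z) = z^4 - 4z^2 has degree 4.
print(np.allclose(A @ A @ A - 4 * A, 0))  # True

# Formula (15) with y1 = 1, y2 = sinh(2t)/2, y3 = (cosh(2t) - 1)/4.
t = 0.6
F = np.eye(4) + A * np.sinh(2 * t) / 2 + (A @ A) * (np.cosh(2 * t) - 1) / 4
print(np.allclose(F, expm(t * A)))  # True
```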