FIRST-ORDER SYSTEMS OF ORDINARY DIFFERENTIAL EQUATIONS II: Homogeneous Linear Systems with Constant Coefficients


David Levermore
Department of Mathematics
University of Maryland
28 January 2012

Because the presentation of this material in lecture will differ from that in the book, I felt that notes that closely follow the lecture presentation might be appreciated.

Contents

3. Matrix Exponentials
   3.1. Introduction
   3.2. Computing Matrix Exponentials
   3.3. Two-by-Two Matrix Exponentials
   3.4. Natural Fundamental Sets from Green Functions (not covered)
   3.5. Matrix Exponential Formula (not covered)
   3.6. Hermite Interpolation Methods (not covered)
4. Eigen Methods
   4.1. Eigenpairs
   4.2. Special Solutions of First-Order Systems
   4.3. Two-by-Two Fundamental Matrices
   4.4. Diagonalizable Matrices
   4.5. Computing Matrix Exponentials by Diagonalization
   4.6. Generalized Eigenpairs (not covered)
5. Linear Planar Systems
   5.1. Phase-Plane Portraits
   5.2. Classification of Phase-Plane Portraits
   5.3. Mean-Discriminant Plane
   5.4. Stability of the Origin

3. Matrix Exponentials

3.1. Introduction. Consider the vector-valued initial-value problem

(3.1)  dx/dt = A x,   x(t_I) = x_I,

where A is a constant, n×n, real matrix. Let Φ(t) be the natural fundamental matrix associated with (3.1) for the initial time 0. It satisfies the matrix-valued initial-value problem

(3.2)  dΦ/dt = A Φ,   Φ(0) = I,

where I is the n×n identity matrix. We assert that Φ(t) satisfies

  (i)  Φ(t + s) = Φ(t) Φ(s) for every t and s in R,
  (ii) Φ(t) Φ(-t) = I for every t in R.

Assertion (i) follows because both sides satisfy the matrix-valued initial-value problem dΨ/dt = A Ψ, Ψ(0) = Φ(s), and are therefore equal. Assertion (ii) follows by setting s = -t in assertion (i) and using the fact Φ(0) = I.

Properties (i) and (ii) look like the properties satisfied by the real-valued exponential function e^{at}, but here they are satisfied by the matrix-valued function Φ(t). They motivate the following definition.

Definition 3.1. The matrix exponential of A is the solution of the matrix-valued initial-value problem (3.2). It is commonly denoted either as e^{tA} or as exp(tA).

Given e^{tA}, the solution of the initial-value problem (3.1) is given by x(t) = e^{(t - t_I)A} x_I, while a general solution of the first-order differential system in (3.1) is given by x(t) = e^{tA} c, where c = (c_1, c_2, ..., c_n)^T.

In this section and the next we will present methods by which you can compute e^{tA}.

Remark. It is important to realize that for every n ≥ 2 the matrix exponential e^{tA} is not the matrix you obtain by exponentiating each entry! This means that if

A = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix},  then  e^{tA} ≠ \begin{pmatrix} e^{t a_{11}} & e^{t a_{12}} \\ e^{t a_{21}} & e^{t a_{22}} \end{pmatrix}.

In particular, the initial condition of (3.2) implies that e^{0A} = I, while the matrix on the right-hand side above has every entry equal to 1 when t = 0.
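The defining initial-value problem (3.2) and the Remark above are easy to check numerically. Here is a minimal sketch, not part of the original notes, assuming NumPy and SciPy are available; the matrix and time value are only illustrations. It integrates the matrix-valued problem (3.2), compares the result with scipy.linalg.expm, and confirms that entrywise exponentiation gives something different.

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

A = np.array([[3.0, 2.0],
              [2.0, 3.0]])   # a 2-by-2 matrix used in several examples below
t = 0.7

# e^{tA} as the solution of the matrix-valued IVP (3.2): dPhi/dt = A Phi, Phi(0) = I,
# integrated as a flattened matrix ODE.
def rhs(_, phi_flat):
    return (A @ phi_flat.reshape(2, 2)).reshape(-1)

sol = solve_ivp(rhs, [0.0, t], np.eye(2).reshape(-1), rtol=1e-10, atol=1e-12)
Phi_t = sol.y[:, -1].reshape(2, 2)

print(np.allclose(Phi_t, expm(t * A), atol=1e-6))   # True: both give e^{tA}
print(np.allclose(np.exp(t * A), expm(t * A)))      # False: entrywise exp is NOT e^{tA}
```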

You can see from (3.2) that for every positive integer k one has the identity

(3.3)  D^k e^{tA} = A^k e^{tA},  where D = d/dt.

The Taylor expansion of e^{tA} about t = 0 is thereby

(3.4)  e^{tA} = Σ_{k=0}^∞ (1/k!) t^k A^k = I + tA + (t^2/2) A^2 + (t^3/3!) A^3 + (t^4/4!) A^4 + ...,

where we define A^0 = I. Recall that the Taylor expansion of e^{at} is

  e^{at} = Σ_{k=0}^∞ (1/k!) a^k t^k = 1 + at + (a^2/2) t^2 + (a^3/3!) t^3 + (a^4/4!) t^4 + ....

Motivated by the similarity of these expansions, the textbook defines e^{tA} by the infinite series (3.4). This approach dances around questions about the convergence of the infinite series, questions that are seldom favorites of students. We avoid such questions by defining e^{tA} to be the solution of the matrix-valued initial-value problem (3.2), which is also more central to the material of this course.

3.2. Computing Matrix Exponentials. Given any real n×n matrix A, there are many ways to compute e^{tA} that are easier than evaluating the infinite series (3.4). The textbook gives a method that is based on computing the eigenvectors (and sometimes the generalized eigenvectors) of A. This method requires different approaches depending on whether the eigenvalues of the real matrix A are real, complex conjugate pairs, or have multiplicity greater than one. These approaches are covered in Sections 7.5, 7.6, and 7.8 of the textbook, but those sections do not cover all the possible cases that can arise. We will cover that method in Section 4 of these notes. Here we present another method that covers all possible cases with a single approach. Moreover, this method is generally much faster to carry out than the textbook's method when n is not too large. Before giving the method, we give its three ingredients: a matrix version of the Key Identity, the Cayley-Hamilton Theorem from linear algebra, and natural fundamental sets of solutions of higher-order differential operators.

3.2.1. Matrix Key Identity. Just as the scalar Key Identity allowed us to construct explicit solutions to higher-order linear differential equations with constant coefficients, a matrix Key Identity will allow us to construct explicit solutions to first-order linear differential systems with a constant coefficient matrix.

Definition 3.2. Given any polynomial p(z) = π_0 z^m + π_1 z^{m-1} + ... + π_{m-1} z + π_m and any n×n matrix A, we define the n×n matrix p(A) by

  p(A) = π_0 A^m + π_1 A^{m-1} + ... + π_{m-1} A + π_m I.

The polynomial p(z) is said to annihilate A if p(A) = 0.

It follows from (3.3) and the definition of p(A) that

(3.5)  p(D) e^{tA} = p(A) e^{tA}.

This is the matrix version of the Key Identity.

Let p(z) be a polynomial of degree m that annihilates A. We see from the matrix Key Identity (3.5) that p(D) e^{tA} = p(A) e^{tA} = 0. This means that each entry of e^{tA} is a solution of the m-th order scalar homogeneous linear differential equation with constant coefficients

(3.6)  p(D) e^{tA} = 0.

Moreover, you can see from the derivative identity (3.3) that each entry of e^{tA} satisfies initial conditions that can be read off from

(3.7)  D^k e^{tA} |_{t=0} = A^k e^{tA} |_{t=0} = A^k,  for k = 0, 1, ..., m-1.

This shows that the entries of e^{tA} can be obtained by solving the initial-value problems (3.6)-(3.7), provided you can find a polynomial that annihilates A.

3.2.2. Cayley-Hamilton Theorem. You can always find a polynomial that annihilates the matrix A because the Cayley-Hamilton Theorem states that one such polynomial is the characteristic polynomial of A.

Definition 3.3. The characteristic polynomial of an n×n matrix A is

(3.8)  p_A(z) = det(Iz - A).

This polynomial has degree n and is easy to compute when n is not large.

Because det(zI - A) = (-1)^n det(A - zI), this definition of p_A(z) coincides with the textbook's definition when n is even, and is its negative when n is odd. Both conventions are common. We have chosen the convention that makes p_A(z) monic. What matters most about p_A(z) is its roots and their multiplicity, which are the same for both conventions. As we will see in Section 4 of these notes, these roots are the eigenvalues of A. An important fact about characteristic polynomials is the following.

Theorem 3.1 (Cayley-Hamilton Theorem). For every n×n matrix A

(3.9)  p_A(A) = 0.

In other words, every n×n matrix is annihilated by its characteristic polynomial.

Reason. We will not prove this theorem for general n×n matrices. However, it is easy to verify for 2×2 matrices by a direct calculation. Consider the general 2×2 matrix

  A = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}.

Its characteristic polynomial is

  p_A(z) = det(Iz - A) = det \begin{pmatrix} z - a_{11} & -a_{12} \\ -a_{21} & z - a_{22} \end{pmatrix}
         = (z - a_{11})(z - a_{22}) - a_{21} a_{12}
         = z^2 - (a_{11} + a_{22}) z + (a_{11} a_{22} - a_{21} a_{12})
         = z^2 - tr(A) z + det(A),

where tr(A) = a_{11} + a_{22} is the trace of A. Then a direct calculation shows that

  p_A(A) = A^2 - (a_{11} + a_{22}) A + (a_{11} a_{22} - a_{21} a_{12}) I
         = \begin{pmatrix} a_{11}^2 + a_{12} a_{21} & (a_{11} + a_{22}) a_{12} \\ (a_{11} + a_{22}) a_{21} & a_{21} a_{12} + a_{22}^2 \end{pmatrix}
           - \begin{pmatrix} (a_{11} + a_{22}) a_{11} & (a_{11} + a_{22}) a_{12} \\ (a_{11} + a_{22}) a_{21} & (a_{11} + a_{22}) a_{22} \end{pmatrix}
           + \begin{pmatrix} a_{11} a_{22} - a_{21} a_{12} & 0 \\ 0 & a_{11} a_{22} - a_{21} a_{12} \end{pmatrix}
         = 0,

which verifies (3.9) for 2×2 matrices.

3.2.3. Natural Fundamental Sets of Solutions. The solutions of the initial-value problems (3.6)-(3.7) are given by

(3.10)  e^{tA} = N_0(t) I + N_1(t) A + ... + N_{m-1}(t) A^{m-1},

where N_0(t), N_1(t), ..., N_{m-1}(t) is the natural fundamental set of solutions associated with the m-th order differential operator p(D) and the initial time 0. Earlier in the course we showed that you can compute the natural fundamental set N_0(t), N_1(t), ..., N_{m-1}(t) by solving the general initial-value problem

(3.11)  p(D) y = 0,  y(0) = y_0, y'(0) = y_1, ..., y^{(m-1)}(0) = y_{m-1}.

Because p(D) y = 0 is an m-th order linear equation with constant coefficients, you do this by first factoring p(z) and using our recipe to generate a fundamental set of solutions Y_1(t), Y_2(t), ..., Y_m(t). You then determine c_1, c_2, ..., c_m so that the general solution,

  y(t) = c_1 Y_1(t) + c_2 Y_2(t) + ... + c_m Y_m(t),

satisfies the general initial conditions of (3.11). By grouping the terms that multiply each y_k, you can express the solution of the general initial-value problem (3.11) as

  y(t) = N_0(t) y_0 + N_1(t) y_1 + ... + N_{m-1}(t) y_{m-1}.

You can then read off N_0(t), N_1(t), ..., N_{m-1}(t) from this expression.

3.2.4. Our Method. Our method to compute matrix exponentials has three steps.

1. Find a polynomial p(z) that annihilates A. Let m denote its degree.
2. Compute N_0(t), N_1(t), ..., N_{m-1}(t), the natural fundamental set of solutions associated with the m-th order differential operator p(D) and the initial time 0.
3. Compute the matrix exponential e^{tA} by formula (3.10).

By the Cayley-Hamilton Theorem, you can always find a polynomial of degree n that annihilates A, namely the characteristic polynomial of A given by (3.8). For now, this is how you will complete Step 1. Later we will see that sometimes you can find a polynomial of smaller degree that also annihilates A. However, when the characteristic polynomial has only simple roots then no polynomial of smaller degree annihilates A.

Once you have a polynomial of degree m ≤ n that annihilates A, you compute the natural fundamental set of solutions N_0(t), ..., N_{m-1}(t) by solving the general initial-value problem (3.11). For now, this is how you can complete Step 2. We will present an alternative way to do it in Section 3.4.

Once you have the natural fundamental set N_0(t), ..., N_{m-1}(t), you just have to apply formula (3.10) to compute e^{tA}. If m > 2 this requires computing the powers of A that appear and doing the matrix addition. If m = 2 this only requires doing the matrix addition because only I and A appear in formula (3.10). We now illustrate this method with examples.

Example. Compute e^{tA} for A = \begin{pmatrix} 3 & 2 \\ 2 & 3 \end{pmatrix}.

Solution. Because A is 2×2, its characteristic polynomial is

  p(z) = det(Iz - A) = z^2 - tr(A) z + det(A) = z^2 - 6z + 5 = (z - 5)(z - 1).

Its roots are 5 and 1. The associated general initial-value problem is

  y'' - 6y' + 5y = 0,  y(0) = y_0, y'(0) = y_1.

Its general solution is y(t) = c_1 e^{5t} + c_2 e^{t}. Because y'(t) = 5 c_1 e^{5t} + c_2 e^{t}, the general initial conditions imply

  y_0 = y(0) = c_1 + c_2,  y_1 = y'(0) = 5 c_1 + c_2.

Upon solving this system for c_1 and c_2 you find that c_1 = (y_1 - y_0)/4 and c_2 = (5 y_0 - y_1)/4. The solution of the general initial-value problem is therefore

  y(t) = ((y_1 - y_0)/4) e^{5t} + ((5 y_0 - y_1)/4) e^{t} = ((5 e^{t} - e^{5t})/4) y_0 + ((e^{5t} - e^{t})/4) y_1.

You then read off that the natural fundamental set is

  N_0(t) = (5 e^{t} - e^{5t})/4,  N_1(t) = (e^{5t} - e^{t})/4.

Formula (3.10) then yields

  e^{tA} = N_0(t) I + N_1(t) A = ((5 e^{t} - e^{5t})/4) \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} + ((e^{5t} - e^{t})/4) \begin{pmatrix} 3 & 2 \\ 2 & 3 \end{pmatrix}
         = (1/2) \begin{pmatrix} e^{5t} + e^{t} & e^{5t} - e^{t} \\ e^{5t} - e^{t} & e^{5t} + e^{t} \end{pmatrix}.

Example. Compute e^{tA} for A = \begin{pmatrix} -3 & 1 \\ -4 & 1 \end{pmatrix}.

Solution. Because A is 2×2, its characteristic polynomial is

  p(z) = det(Iz - A) = z^2 - tr(A) z + det(A) = z^2 + 2z + 1 = (z + 1)^2.

It has the double real root -1. The associated general initial-value problem is

  y'' + 2y' + y = 0,  y(0) = y_0, y'(0) = y_1.

Its general solution is y(t) = c_1 e^{-t} + c_2 t e^{-t}. Because y'(t) = -c_1 e^{-t} - c_2 t e^{-t} + c_2 e^{-t}, the general initial conditions imply

  y_0 = y(0) = c_1,  y_1 = y'(0) = -c_1 + c_2.

Upon solving this system for c_1 and c_2 you find that c_1 = y_0 and c_2 = y_0 + y_1. The solution of the general initial-value problem is therefore

  y(t) = y_0 e^{-t} + (y_0 + y_1) t e^{-t} = (1 + t) e^{-t} y_0 + t e^{-t} y_1.

You then read off that the natural fundamental set is

  N_0(t) = (1 + t) e^{-t},  N_1(t) = t e^{-t}.

Formula (3.10) then yields

  e^{tA} = (1 + t) e^{-t} I + t e^{-t} A = e^{-t} \begin{pmatrix} 1 - 2t & t \\ -4t & 1 + 2t \end{pmatrix}.

Example. Compute e^{tA} for A = \begin{pmatrix} -2 & 5 \\ -5 & 6 \end{pmatrix}.

Solution. Because A is 2×2, its characteristic polynomial is

  p(z) = det(Iz - A) = z^2 - tr(A) z + det(A) = z^2 - 4z + 13 = (z - 2)^2 + 3^2.

It has the conjugate roots 2 ± i3. The associated general initial-value problem is

  y'' - 4y' + 13y = 0,  y(0) = y_0, y'(0) = y_1.

Its general solution is y(t) = c_1 e^{2t} cos(3t) + c_2 e^{2t} sin(3t). Because

  y'(t) = 2 c_1 e^{2t} cos(3t) - 3 c_1 e^{2t} sin(3t) + 2 c_2 e^{2t} sin(3t) + 3 c_2 e^{2t} cos(3t),

the general initial conditions imply

  y_0 = y(0) = c_1,  y_1 = y'(0) = 2 c_1 + 3 c_2.

Upon solving this system for c_1 and c_2 you find that c_1 = y_0 and c_2 = (y_1 - 2 y_0)/3. The solution of the general initial-value problem is therefore

  y(t) = y_0 e^{2t} cos(3t) + ((y_1 - 2 y_0)/3) e^{2t} sin(3t)
       = e^{2t} ( cos(3t) - (2/3) sin(3t) ) y_0 + (1/3) e^{2t} sin(3t) y_1.

You then read off that the natural fundamental set is

  N_0(t) = e^{2t} ( cos(3t) - (2/3) sin(3t) ),  N_1(t) = (1/3) e^{2t} sin(3t).

Formula (3.10) then yields

  e^{tA} = N_0(t) I + N_1(t) A = e^{2t} \begin{pmatrix} cos(3t) - (4/3) sin(3t) & (5/3) sin(3t) \\ -(5/3) sin(3t) & cos(3t) + (4/3) sin(3t) \end{pmatrix}.

Example. Compute e^{tA} for A = \begin{pmatrix} 0 & 1 & 2 \\ -1 & 0 & 2 \\ -2 & -2 & 0 \end{pmatrix}.

Solution. The characteristic polynomial of A is

  p(z) = det(Iz - A) = det \begin{pmatrix} z & -1 & -2 \\ 1 & z & -2 \\ 2 & 2 & z \end{pmatrix} = z (z^2 + 4) + (z + 4) - 2 (2 - 2z) = z^3 + 9z = z (z^2 + 9).

Its roots are 0 and ±i3. The associated general initial-value problem is

  y''' + 9y' = 0,  y(0) = y_0, y'(0) = y_1, y''(0) = y_2.

Its general solution is y(t) = c_1 + c_2 cos(3t) + c_3 sin(3t). Because

  y'(t) = -3 c_2 sin(3t) + 3 c_3 cos(3t),  y''(t) = -9 c_2 cos(3t) - 9 c_3 sin(3t),

the general initial conditions imply

  y_0 = y(0) = c_1 + c_2,  y_1 = y'(0) = 3 c_3,  y_2 = y''(0) = -9 c_2.

Upon solving this system for c_1, c_2, and c_3 you find that

  c_1 = (9 y_0 + y_2)/9,  c_2 = -y_2/9,  c_3 = y_1/3.

The solution of the general initial-value problem is therefore

  y(t) = (9 y_0 + y_2)/9 - (y_2/9) cos(3t) + (y_1/3) sin(3t) = y_0 + (sin(3t)/3) y_1 + ((1 - cos(3t))/9) y_2.

You then read off that the natural fundamental set is

  N_0(t) = 1,  N_1(t) = sin(3t)/3,  N_2(t) = (1 - cos(3t))/9.

Formula (3.10) then yields

  e^{tA} = N_0(t) I + N_1(t) A + N_2(t) A^2
         = I + (sin(3t)/3) \begin{pmatrix} 0 & 1 & 2 \\ -1 & 0 & 2 \\ -2 & -2 & 0 \end{pmatrix} + ((1 - cos(3t))/9) \begin{pmatrix} -5 & -4 & 2 \\ -4 & -5 & -2 \\ 2 & -2 & -8 \end{pmatrix}
         = (1/9) \begin{pmatrix} 4 + 5 cos(3t) & -4 + 4 cos(3t) + 3 sin(3t) & 2 - 2 cos(3t) + 6 sin(3t) \\ -4 + 4 cos(3t) - 3 sin(3t) & 4 + 5 cos(3t) & -2 + 2 cos(3t) + 6 sin(3t) \\ 2 - 2 cos(3t) - 6 sin(3t) & -2 + 2 cos(3t) - 6 sin(3t) & 1 + 8 cos(3t) \end{pmatrix}.

Remark. The above examples show that, once the natural fundamental set of solutions is found for the associated higher-order equation, employing formula (3.10) is straightforward. It requires only computing A^k up to k = m - 1 and some matrix addition. For m > 2 this requires (m - 2) n^3 multiplications, which grows fast as m and n get large. (Often m = n.) However, for small systems like the ones you will face in this course, it is generally the fastest method.
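As a quick numerical check on formula (3.10), the following sketch (not part of the original notes; NumPy and SciPy assumed, values illustrative) evaluates N_0(t) I + N_1(t) A for the first example above and compares it with scipy.linalg.expm.

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[3.0, 2.0],
              [2.0, 3.0]])
t = 1.3

# Natural fundamental set for p(D) = D^2 - 6D + 5 found in the first example.
N0 = (5.0 * np.exp(t) - np.exp(5.0 * t)) / 4.0
N1 = (np.exp(5.0 * t) - np.exp(t)) / 4.0

# Formula (3.10) with m = 2: e^{tA} = N0(t) I + N1(t) A.
E = N0 * np.eye(2) + N1 * A
print(np.allclose(E, expm(t * A)))   # True
```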

3.3. Two-by-Two Matrix Exponentials. We will now apply our method to derive simple formulas for the exponential of any real matrix A that is annihilated by a quadratic polynomial p(z) = z^2 + π_1 z + π_2. This includes the general 2×2 real matrix

  A = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix},

which is annihilated by its characteristic polynomial,

  p(z) = det(Iz - A) = z^2 - tr(A) z + det(A).

Upon completing the square we see that

  p(z) = z^2 + π_1 z + π_2 = (z - µ)^2 - δ,

where the mean µ and discriminant δ are given by

  µ = -π_1/2,  δ = µ^2 - π_2.

There are three cases, which are distinguished by the sign of δ.

- If δ > 0 then p(z) has the simple real roots µ - ν and µ + ν, where ν = √δ. In this case the natural fundamental set of solutions is found to be

(3.12)  N_0(t) = e^{µt} cosh(νt) - µ e^{µt} sinh(νt)/ν,  N_1(t) = e^{µt} sinh(νt)/ν.

- If δ < 0 then p(z) has the complex conjugate roots µ - iν and µ + iν, where ν = √(-δ). In this case the natural fundamental set of solutions is found to be

(3.13)  N_0(t) = e^{µt} cos(νt) - µ e^{µt} sin(νt)/ν,  N_1(t) = e^{µt} sin(νt)/ν.

- If δ = 0 then p(z) has the double real root µ. In this case the natural fundamental set of solutions is found to be

(3.14)  N_0(t) = e^{µt} - µ e^{µt} t,  N_1(t) = e^{µt} t.

Notice that (3.14) is the limiting case of both (3.12) and (3.13) as ν → 0 because

  lim_{ν→0} cosh(νt) = lim_{ν→0} cos(νt) = 1,  lim_{ν→0} sinh(νt)/ν = lim_{ν→0} sin(νt)/ν = t.

Because p(z) is quadratic, m = 2 in formula (3.10) and it reduces to e^{tA} = N_0(t) I + N_1(t) A. When the natural fundamental sets (3.12), (3.13), and (3.14) respectively are plugged into this formula, you obtain the following formulas.

- If δ > 0 then p(z) has the simple real roots µ - ν and µ + ν, where ν = √δ. In this case

(3.15)  e^{tA} = e^{µt} [ cosh(νt) I + (sinh(νt)/ν) (A - µI) ].

- If δ < 0 then p(z) has the complex conjugate roots µ - iν and µ + iν, where ν = √(-δ). In this case

(3.16)  e^{tA} = e^{µt} [ cos(νt) I + (sin(νt)/ν) (A - µI) ].

- If δ = 0 then p(z) has the double real root µ. In this case

(3.17)  e^{tA} = e^{µt} [ I + t (A - µI) ].

For any matrix that is annihilated by a quadratic polynomial you will find that it is faster to apply these formulas than to set up and solve the general initial-value problem for N_0(t) and N_1(t). These formulas are easy to remember. Notice that formulas (3.15) and (3.16) are similar: the first uses hyperbolic functions when p(z) has simple real roots, while the second uses trigonometric functions when p(z) has complex conjugate roots. Formula (3.17) is the limiting case of both (3.15) and (3.16) as ν → 0.

Remark. For 2×2 matrices µ = (1/2) tr(A), while tr(I) = 2. In that case the trace of the matrix A - µI that appears in each of the formulas is zero. You should use this fact as a check whenever you use these formulas to compute e^{tA} for 2×2 matrices.

We will illustrate this approach on the same 2×2 matrices used in the last subsection.

Example. Compute e^{tA} for A = \begin{pmatrix} 3 & 2 \\ 2 & 3 \end{pmatrix}.

Solution. Because A is 2×2, its characteristic polynomial is

  p(z) = det(Iz - A) = z^2 - tr(A) z + det(A) = z^2 - 6z + 5 = (z - 3)^2 - 2^2.

It has the real roots 3 ± 2. By (3.15) with µ = 3 and ν = 2 we see that

  e^{tA} = e^{3t} [ cosh(2t) I + (sinh(2t)/2) (A - 3I) ]
         = e^{3t} [ cosh(2t) \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} + sinh(2t) \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} ]
         = e^{3t} \begin{pmatrix} cosh(2t) & sinh(2t) \\ sinh(2t) & cosh(2t) \end{pmatrix}.

Remark. At first glance this may not look like the same answer that we got in the last subsection, but it is! It is simply expressed in terms of hyperbolic functions rather than just exponentials. Either form of the answer is correct.

Example. Compute e^{tA} for A = \begin{pmatrix} -3 & 1 \\ -4 & 1 \end{pmatrix}.

Solution. Because A is 2×2, its characteristic polynomial is

  p(z) = det(Iz - A) = z^2 - tr(A) z + det(A) = z^2 + 2z + 1 = (z + 1)^2.

It has the double real root -1. By (3.17) with µ = -1 we see that

  e^{tA} = e^{-t} [ I + t (A + I) ] = e^{-t} [ \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} + t \begin{pmatrix} -2 & 1 \\ -4 & 2 \end{pmatrix} ] = e^{-t} \begin{pmatrix} 1 - 2t & t \\ -4t & 1 + 2t \end{pmatrix}.

Example. Compute e^{tA} for A = \begin{pmatrix} -2 & 5 \\ -5 & 6 \end{pmatrix}.

Solution. Because A is 2×2, its characteristic polynomial is

  p(z) = det(Iz - A) = z^2 - tr(A) z + det(A) = z^2 - 4z + 13 = (z - 2)^2 + 3^2.

It has the conjugate roots 2 ± i3. By (3.16) with µ = 2 and ν = 3 we see that

  e^{tA} = e^{2t} [ cos(3t) I + (sin(3t)/3) (A - 2I) ]
         = e^{2t} [ cos(3t) \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} + (sin(3t)/3) \begin{pmatrix} -4 & 5 \\ -5 & 4 \end{pmatrix} ]
         = e^{2t} \begin{pmatrix} cos(3t) - (4/3) sin(3t) & (5/3) sin(3t) \\ -(5/3) sin(3t) & cos(3t) + (4/3) sin(3t) \end{pmatrix}.
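The three formulas (3.15)-(3.17) are easy to implement. The sketch below (not part of the original notes; the helper name is ours, NumPy and SciPy assumed) selects the case from the sign of the discriminant and checks the result against scipy.linalg.expm on the three matrices used above.

```python
import numpy as np
from scipy.linalg import expm

def expm_2x2(A, t):
    """e^{tA} for a real 2x2 matrix A via formulas (3.15)-(3.17)."""
    mu = 0.5 * np.trace(A)                 # mean
    delta = mu**2 - np.linalg.det(A)       # discriminant
    B = A - mu * np.eye(2)                 # trace-free part (tr B = 0 is a good check)
    if delta > 0:                          # simple real roots mu +/- nu
        nu = np.sqrt(delta)
        N0, N1 = np.cosh(nu * t), np.sinh(nu * t) / nu
    elif delta < 0:                        # conjugate roots mu +/- i nu
        nu = np.sqrt(-delta)
        N0, N1 = np.cos(nu * t), np.sin(nu * t) / nu
    else:                                  # double real root mu
        N0, N1 = 1.0, t
    return np.exp(mu * t) * (N0 * np.eye(2) + N1 * B)

for A in ([[3, 2], [2, 3]], [[-3, 1], [-4, 1]], [[-2, 5], [-5, 6]]):
    A = np.array(A, dtype=float)
    print(np.allclose(expm_2x2(A, 0.9), expm(0.9 * A)))   # True for each case
```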

3.4. Natural Fundamental Sets from Green Functions. If A is an n×n matrix that is annihilated by a polynomial of degree m > 2 and n is not too large, then the hardest step in our method for computing e^{tA} is generating the natural fundamental set N_0(t), ..., N_{m-1}(t) by solving the general initial-value problem (3.11). Here we present an alternative way to generate these solutions.

Observe that in each of the natural fundamental sets of solutions given by (3.12), (3.13), and (3.14), the solutions N_0(t) and N_1(t) are related by

  N_0(t) = N_1'(t) - 2µ N_1(t).

This is an instance of a more general fact. For the m-th order equation

(3.18)  p(D) y = 0,  where p(z) = z^m + π_1 z^{m-1} + ... + π_{m-1} z + π_m,

one can generate its entire natural fundamental set of solutions from the Green function g(t) associated with (3.18). Recall that the Green function g(t) satisfies the initial-value problem

(3.19)  p(D) g = 0,  g(0) = g'(0) = ... = g^{(m-2)}(0) = 0,  g^{(m-1)}(0) = 1.

The natural fundamental set of solutions is then given by the recipe

(3.20)  N_{m-1}(t) = g(t),
        N_{m-2}(t) = N_{m-1}'(t) + π_1 g(t),
        ...
        N_0(t) = N_1'(t) + π_{m-1} g(t).

This entire set is thereby generated from the solution g of the initial-value problem (3.19).

Remark. It is clear that solving the initial-value problem (3.19) for the Green function g(t) requires less algebra than solving the general initial-value problem (3.11) directly for N_0(t), ..., N_{m-1}(t). The trade-off is that you now have to compute the m - 1 derivatives required by recipe (3.20).

Example. Compute the natural fundamental set of solutions to the equation y''' + 9y' = 0.

Solution. By (3.19) the Green function g(t) satisfies the initial-value problem

  g''' + 9g' = 0,  g(0) = g'(0) = 0,  g''(0) = 1.

The characteristic polynomial of this equation is p(z) = z^3 + 9z, which has roots 0, ±i3. We therefore seek a solution in the form

  g(t) = c_1 + c_2 cos(3t) + c_3 sin(3t).

Because

  g'(t) = -3 c_2 sin(3t) + 3 c_3 cos(3t),  g''(t) = -9 c_2 cos(3t) - 9 c_3 sin(3t),

the initial conditions for g(t) yield the algebraic system

  g(0) = c_1 + c_2 = 0,  g'(0) = 3 c_3 = 0,  g''(0) = -9 c_2 = 1.

Solving this system gives c_1 = 1/9, c_2 = -1/9, and c_3 = 0, whereby the Green function is

  g(t) = (1 - cos(3t))/9.

Because p(z) = z^3 + 9z, we read off from (3.18) that π_1 = 0, π_2 = 9, and π_3 = 0. Then by recipe (3.20) the natural fundamental set of solutions is given by

  N_2(t) = g(t) = (1 - cos(3t))/9,
  N_1(t) = N_2'(t) + 0 · g(t) = sin(3t)/3,
  N_0(t) = N_1'(t) + 9 g(t) = cos(3t) + (1 - cos(3t)) = 1.

Remark. Earlier we computed the above natural fundamental set by solving the general initial-value problem (3.11). You should compare that calculation with the one above.

Example. Given the fact that p(z) = z^3 - 4z annihilates A, compute e^{tA} for

  A = \begin{pmatrix} 0 & 1 & 0 & 1 \\ 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 \\ 1 & 0 & 1 & 0 \end{pmatrix}.

Solution. Because you are told that p(z) = z^3 - 4z annihilates A, you do not have to compute the characteristic polynomial of A. By (3.19) the Green function g(t) satisfies the initial-value problem

  g''' - 4g' = 0,  g(0) = g'(0) = 0,  g''(0) = 1.

The characteristic polynomial of this equation is p(z) = z^3 - 4z, which has roots 0, ±2. We therefore seek a solution in the form

  g(t) = c_1 + c_2 e^{2t} + c_3 e^{-2t}.

Because

  g'(t) = 2 c_2 e^{2t} - 2 c_3 e^{-2t},  g''(t) = 4 c_2 e^{2t} + 4 c_3 e^{-2t},

the initial conditions for g(t) yield the algebraic system

  g(0) = c_1 + c_2 + c_3 = 0,  g'(0) = 2 c_2 - 2 c_3 = 0,  g''(0) = 4 c_2 + 4 c_3 = 1.

The solution of this system is c_1 = -1/4 and c_2 = c_3 = 1/8, whereby the Green function is

  g(t) = (e^{2t} + e^{-2t})/8 - 1/4 = (cosh(2t) - 1)/4.

Because p(z) = z^3 - 4z, we read off from (3.18) that π_1 = 0, π_2 = -4, and π_3 = 0. Then by recipe (3.20) the natural fundamental set of solutions is given by

  N_2(t) = g(t) = (cosh(2t) - 1)/4,
  N_1(t) = N_2'(t) + 0 · g(t) = sinh(2t)/2,
  N_0(t) = N_1'(t) - 4 g(t) = cosh(2t) - (cosh(2t) - 1) = 1.

Given this set of solutions, formula (3.10) yields

  e^{tA} = N_0(t) I + N_1(t) A + N_2(t) A^2
         = I + (sinh(2t)/2) \begin{pmatrix} 0 & 1 & 0 & 1 \\ 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 \\ 1 & 0 & 1 & 0 \end{pmatrix} + ((cosh(2t) - 1)/4) \begin{pmatrix} 2 & 0 & 2 & 0 \\ 0 & 2 & 0 & 2 \\ 2 & 0 & 2 & 0 \\ 0 & 2 & 0 & 2 \end{pmatrix}
         = (1/2) \begin{pmatrix} 1 + cosh(2t) & sinh(2t) & cosh(2t) - 1 & sinh(2t) \\ sinh(2t) & 1 + cosh(2t) & sinh(2t) & cosh(2t) - 1 \\ cosh(2t) - 1 & sinh(2t) & 1 + cosh(2t) & sinh(2t) \\ sinh(2t) & cosh(2t) - 1 & sinh(2t) & 1 + cosh(2t) \end{pmatrix}.

Remark. The polynomial p(z) = z^3 - 4z that was used in the previous example is not the characteristic polynomial of A. You can check that p_A(z) = z^4 - 4z^2. We saved a lot of work by using p(z) = z^3 - 4z as the annihilating polynomial rather than p_A(z) = z^4 - 4z^2 because p(z) has smaller degree. Given the characteristic polynomial p_A(z) of any matrix A, you can seek a polynomial p(z) of smaller degree that also annihilates A, guided by the following facts.

- Every polynomial p(z) that annihilates A has the same roots as p_A(z), but these roots might have smaller multiplicity.
- If p_A(z) has roots that are not simple then it pays to seek a polynomial of smaller degree that annihilates A.
- If p(z) has simple roots then it has the smallest degree possible for a polynomial that annihilates A.
- If A is either symmetric (A^T = A), skew-symmetric (A^T = -A), or normal (A^T A = A A^T) then there is a polynomial with simple roots that annihilates A.

In the last example, because A is symmetric and p_A(z) has roots -2, 0, 0, and 2, we know that p(z) = (z + 2) z (z - 2) = z^3 - 4z is an annihilating polynomial with simple roots. It thereby has the smallest degree possible for a polynomial that annihilates A.
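The recipe can also be checked numerically. The sketch below (not part of the original notes; NumPy and SciPy assumed) verifies both that p(z) = z^3 - 4z annihilates the matrix of the last example and that formula (3.10), with the natural fundamental set found above, reproduces scipy.linalg.expm.

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0., 1., 0., 1.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [1., 0., 1., 0.]])
t = 0.8

# Natural fundamental set obtained from the Green function of p(D) = D^3 - 4D.
N0 = 1.0
N1 = np.sinh(2 * t) / 2
N2 = (np.cosh(2 * t) - 1) / 4

# Formula (3.10) with the degree-3 annihilating polynomial (m = 3 < n = 4).
E = N0 * np.eye(4) + N1 * A + N2 * (A @ A)
print(np.allclose(E, expm(t * A)))            # True
print(np.allclose(A @ A @ A, 4 * A))          # True: p(z) = z^3 - 4z annihilates A
```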

Justification of Recipe (3.20). The following justification of recipe (3.20) is included for completeness. It was not covered in lecture and you do not need to know it. However, you should find the recipe itself quite useful.

Suppose that the Green function g(t) satisfies the initial-value problem (3.19) while the functions N_0(t), N_1(t), ..., N_{m-1}(t) are defined by recipe (3.20). We want to show that for each j = 0, 1, ..., m-1 the function N_j(t) is the solution of

(3.21)  p(D) y = y^{(m)} + π_1 y^{(m-1)} + π_2 y^{(m-2)} + ... + π_{m-1} y' + π_m y = 0

that satisfies the initial conditions

(3.22)  N_j^{(k)}(0) = δ_{jk}  for k = 0, 1, ..., m-1.

Here δ_{jk} is the Kronecker delta, which is defined by δ_{jk} = 1 when j = k and δ_{jk} = 0 when j ≠ k.

Because N_{m-1}(t) = g(t), we see from (3.19) that it satisfies the differential equation (3.21) and the initial conditions (3.22) for j = m-1. The key step is to show that if N_j(t) satisfies (3.21) and (3.22) for some positive j < m then so does N_{j-1}(t). Once this step is done we can argue that because N_{m-1}(t) has these properties, so does N_{m-2}(t), which implies so does N_{m-3}(t), and so on down to N_0(t).

We now prove the key step. Suppose that N_j(t) satisfies (3.21) and (3.22) for some positive j < m. Because N_j(t) satisfies (3.21), so does N_j'(t). Because N_{j-1}(t) is given by recipe (3.20) as a linear combination of N_j'(t) and g(t), while N_j'(t) and g(t) satisfy (3.21), we see that N_{j-1}(t) also satisfies the differential equation (3.21). The only thing that remains to be checked is that N_{j-1}(t) satisfies the initial conditions (3.22).

Because N_j(t) satisfies (3.21) and (3.22), we see that

(3.23)  0 = p(D) N_j(t) |_{t=0} = N_j^{(m)}(0) + Σ_{k=0}^{m-1} π_{m-k} N_j^{(k)}(0) = N_j^{(m)}(0) + Σ_{k=0}^{m-1} π_{m-k} δ_{jk} = N_j^{(m)}(0) + π_{m-j}.

Because N_{j-1}(t) is given by recipe (3.20) to be N_{j-1}(t) = N_j'(t) + π_{m-j} g(t), we evaluate the (k-1)-st derivative of this relation at t = 0 to obtain

  N_{j-1}^{(k-1)}(0) = N_j^{(k)}(0) + π_{m-j} g^{(k-1)}(0)  for k = 1, 2, ..., m.

Because g(t) satisfies the initial conditions in (3.19), we see from (3.22) that this becomes

  N_{j-1}^{(k-1)}(0) = δ_{jk}  for k = 1, 2, ..., m-1,

while we see from (3.23) that for k = m it becomes

  N_{j-1}^{(m-1)}(0) = N_j^{(m)}(0) + π_{m-j} = 0.

The initial conditions (3.22) thereby hold for N_{j-1}(t). This completes the proof of the key step, which completes the justification of recipe (3.20).

3.5. Matrix Exponential Formula. We now present a formula for the exponential of any n×n matrix A. This formula will be the basis for a method presented in the next subsection that computes e^{tA} efficiently when n is not too large. It is also related to the method of generalized eigenvectors that computes e^{tA} efficiently when n is large.

Let p(z) be any monic polynomial of degree m ≤ n that annihilates A. The formula for e^{tA} will be given explicitly in terms of A and the roots of p(z). Suppose that p(z) has l roots λ_1, λ_2, ..., λ_l (possibly complex) with multiplicities m_1, m_2, ..., m_l respectively. This means that p(z) has the factored form

(3.24)  p(z) = Π_{k=1}^{l} (z - λ_k)^{m_k},

and that m_1 + m_2 + ... + m_l = m. Here we understand that λ_j ≠ λ_k if j ≠ k. Then

(3.25a)  e^{tA} = Σ_{k=1}^{l} e^{λ_k t} [ I + t (A - λ_k I) + ... + (t^{m_k - 1}/(m_k - 1)!) (A - λ_k I)^{m_k - 1} ] Q_k,

where the n×n matrices Q_k are defined by

(3.25b)  Q_k = Π_{j=1, j≠k}^{l} ( (A - λ_j I)/(λ_k - λ_j) )^{m_j}  for every k = 1, 2, ..., l.

Before we justify formula (3.25), let us consider its structure. It can be recast as e^{tA} = h(A), where h(z) is the time-dependent polynomial

(3.26a)  h(z) = Σ_{k=1}^{l} e^{λ_k t} [ 1 + t (z - λ_k) + ... + (t^{m_k - 1}/(m_k - 1)!) (z - λ_k)^{m_k - 1} ] q_k(z),

with the time-independent polynomials q_k(z) defined by

(3.26b)  q_k(z) = Π_{j=1, j≠k}^{l} ( (z - λ_j)/(λ_k - λ_j) )^{m_j}  for every k = 1, 2, ..., l.

Because the degree of each q_k(z) is m - m_k, it follows from (3.26a) that the degree of h(z) is at most m - 1. It can be checked that for every k = 1, 2, ..., l the polynomial h(z) satisfies the m_k relations

(3.27)  h(λ_k) = e^{λ_k t},  h'(λ_k) = t e^{λ_k t},  ...,  h^{(m_k - 1)}(λ_k) = t^{m_k - 1} e^{λ_k t}.

These relations state that the functions h(z) and e^{zt} and their derivatives with respect to z through order m_k - 1 agree at z = λ_k. Because these m_k relations hold for each k, this gives a total of m relations that h(z) satisfies. The theory of Hermite interpolation states that these m relations uniquely specify the polynomial h(z) at every time t; it is called the associated Hermite interpolant of e^{zt}.

Justification of Formula (3.25). Let Φ(t) denote the right-hand side of (3.25a). We will show that Φ(t) satisfies the matrix-valued initial-value problem (3.2), and thereby conclude that Φ(t) = e^{tA} as asserted by formula (3.25).

We first show that Φ(0) = I. By setting t = 0 in the right-hand side of (3.25a) we get Φ(0) = h(A), where the polynomial h(z) is obtained by setting t = 0 in formula (3.26a), namely

  h(z) = Σ_{k=1}^{l} q_k(z),

where the polynomials q_k(z) are defined by (3.26b). The polynomial h(z) has degree less than m. By (3.27), for every k = 1, 2, ..., l this polynomial satisfies the m_k conditions

  h(λ_k) = 1,  h'(λ_k) = 0,  ...,  h^{(m_k - 1)}(λ_k) = 0.

Because the constant polynomial 1 satisfies these conditions and has degree less than m, we conclude by the uniqueness of the Hermite interpolant that h(z) = 1. Therefore Φ(0) = h(A) = I, which is the initial condition of (3.2).

Next we show that Φ(t) also satisfies the differential system of (3.2). Because p(z) annihilates A and has the factored form (3.24), it follows that

  (A - λ_k I)^{m_k} Q_k = 0  for every k = 1, 2, ..., l.

By using this fact we see that

  dΦ/dt(t) = Σ_{k=1}^{l} λ_k e^{λ_k t} [ I + t (A - λ_k I) + ... + (t^{m_k - 1}/(m_k - 1)!) (A - λ_k I)^{m_k - 1} ] Q_k
           + Σ_{k=1}^{l} e^{λ_k t} [ (A - λ_k I) + ... + (t^{m_k - 2}/(m_k - 2)!) (A - λ_k I)^{m_k - 1} ] Q_k
           = Σ_{k=1}^{l} λ_k e^{λ_k t} [ I + t (A - λ_k I) + ... + (t^{m_k - 1}/(m_k - 1)!) (A - λ_k I)^{m_k - 1} ] Q_k
           + Σ_{k=1}^{l} e^{λ_k t} (A - λ_k I) [ I + t (A - λ_k I) + ... + (t^{m_k - 1}/(m_k - 1)!) (A - λ_k I)^{m_k - 1} ] Q_k
           = A Φ(t).

Therefore Φ(t) also satisfies the differential system of (3.2). Because Φ(t) satisfies the matrix-valued initial-value problem (3.2), we conclude that Φ(t) = e^{tA}, thereby proving formula (3.25).

Remark. Naively plugging A into formula (3.25) is an extremely inefficient way to evaluate e^{tA}. This is because for each k = 1, ..., l it requires m_k - 1 matrix multiplications to compute (A - λ_k I)^j for j = 2, ..., m_k, which adds to m - l matrix multiplications, while formula (3.25b) requires l - 2 matrix multiplications to compute each Q_k, which adds to l^2 - 2l. Naively evaluating formula (3.25) thereby requires at least l^2 - 3l + m matrix multiplications. Because each matrix multiplication requires n^3 multiplications of numbers, naively evaluating formula (3.25) when l = m = n therefore requires n^5 - 2n^4 multiplications of numbers. This is 81 multiplications for n = 3 and 512 multiplications for n = 4. Ouch!
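For an annihilating polynomial with simple roots the bracketed sums in (3.25a) reduce to I, so the formula becomes e^{tA} = Σ_k e^{λ_k t} Q_k. The sketch below (not part of the original notes; the helper name is ours, NumPy and SciPy assumed) evaluates this form directly and checks it against scipy.linalg.expm.

```python
import numpy as np
from scipy.linalg import expm

def expm_spectral(A, roots, t):
    """Formula (3.25) when the annihilating polynomial has simple roots:
    e^{tA} = sum_k e^{lambda_k t} Q_k with Q_k from (3.25b)."""
    n = A.shape[0]
    E = np.zeros((n, n), dtype=complex)
    for k, lam_k in enumerate(roots):
        Qk = np.eye(n, dtype=complex)
        for j, lam_j in enumerate(roots):
            if j != k:
                Qk = Qk @ (A - lam_j * np.eye(n)) / (lam_k - lam_j)
        E += np.exp(lam_k * t) * Qk
    return E

A = np.array([[0., 1., 0., 1.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [1., 0., 1., 0.]])
# p(z) = z^3 - 4z annihilates A and has the simple roots -2, 0, 2.
E = expm_spectral(A, [-2.0, 0.0, 2.0], 0.5)
print(np.allclose(E.real, expm(0.5 * A)), np.allclose(E.imag, 0))   # True True
```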

3.6. Hermite Interpolation Methods. We can compute e^{tA} from formula (3.25) more efficiently by adapting methods for evaluating the Hermite interpolant h(z) efficiently. Moreover, these methods can be applied to compute other functions of matrices, like high powers.

3.6.1. Computing Hermite Interpolants. Let f(z) be any function of the complex variable z. An Hermite interpolant h(z) of f(z) is a polynomial of degree at most m - 1 that satisfies m conditions. These m conditions are encoded by a list of m points z_1, z_2, ..., z_m. These are listed with multiplicity and clustered, so that if z_j = z_k for some j < k then z_j = z_{j+1} = ... = z_{k-1} = z_k. The interpolation conditions take the form

(3.28)  h^{(k-j)}(z_k) = f^{(k-j)}(z_k) if z_j = z_k,  for every 1 ≤ j ≤ k ≤ m.

These relations state that h(z) and f(z) agree at z = z_k to the order of the multiplicity of z_k. The way to evaluate h(z) efficiently is to first express it in the Newton form

(3.29a)  h(z) = Σ_{k=0}^{m-1} h_{k+1} p_k(z),

where the polynomials p_k(z) are defined by

(3.29b)  p_0(z) = 1,  p_k(z) = Π_{j=1}^{k} (z - z_j)  for k = 1, ..., m-1.

Notice that each p_k(z) is a monic polynomial of degree k. The coefficients h_k are determined by completing a so-called divided-difference table of the form

  z_1   h[z_1]
  z_2   h[z_2]   h[z_1, z_2]
  z_3   h[z_3]   h[z_2, z_3]   h[z_1, z_2, z_3]
  ...
  z_m   h[z_m]   h[z_{m-1}, z_m]   h[z_{m-2}, z_{m-1}, z_m]   ...   h[z_1, ..., z_m]

where the first column of the table is seeded with the entries h[z_k] = f(z_k) and the entries of subsequent columns are obtained for every 1 ≤ j < k ≤ m from the divided-difference formula

(3.29c)  h[z_j, ..., z_k] = ( h[z_{j+1}, ..., z_k] - h[z_j, ..., z_{k-1}] ) / (z_k - z_j),  if z_j ≠ z_k;
         h[z_j, ..., z_k] = (1/(k-j)!) f^{(k-j)}(z_k),  if z_j = z_k.

Notice that the entry h[z_j, ..., z_k] depends only on the entry to its left and the entry above that one. After the table is completed, the coefficients h_k in (3.29a) are read off from the top entries of each column as

(3.29d)  h_k = h[z_1, ..., z_k]  for k = 1, ..., m.

3.6.2. Application to Matrix Exponentials. We can now evaluate formula (3.25) by applying recipe (3.29) to the function f(z) = e^{tz}. Let A be an n×n real matrix and p(z) be a polynomial of degree m ≤ n that annihilates A. Let {z_1, ..., z_m} be the roots of p(z) listed with multiplicity and clustered, so that if z_j = z_k for some j < k then z_j = z_{j+1} = ... = z_{k-1} = z_k. Compute the divided-difference table whose first column is seeded with h[z_k] = e^{t z_k} and whose subsequent columns are given by (3.29c). Read off the functions h_1(t), h_2(t), ..., h_m(t) from the top entries of each column as prescribed by (3.29d). Then set

(3.30a)  e^{tA} = h(A) = Σ_{k=0}^{m-1} h_{k+1}(t) P_k,

where P_k = p_k(A) and the polynomials p_k(z) are defined by (3.29b). The matrices P_k can be computed efficiently by the recipe

(3.30b)  P_0 = I,  P_1 = A - z_1 I,  P_k = P_{k-1} (A - z_k I)  for k = 2, ..., m-1.

This approach requires only m - 2 matrix multiplications to compute the P_k. Evaluating formula (3.30) when m = n therefore requires n^4 - 2n^3 multiplications of numbers. This is 27 for n = 3 and 128 for n = 4. Much better!

Remark. This method is closely related to the Putzer method for computing e^{tA}. That method also uses formula (3.30), but rather than computing h_1(t), h_2(t), ..., h_m(t) by using a divided-difference table, it obtains them by solving the system of differential equations

  h_1' = z_1 h_1,  h_1(0) = 1,
  h_k' = z_k h_k + h_{k-1},  h_k(0) = 0  for k = 2, ..., m.

Of course, the resulting h_1(t), ..., h_m(t) are the same, but the divided-difference table gets to them faster. Moreover, the divided-difference table easily extends to other functions of matrices, whereas the Putzer method does not.
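A short implementation of the divided-difference table (3.29c) for f(z) = e^{tz}, including the confluent (repeated-root) case, together with formula (3.30), might look like the following sketch (not part of the original notes; names are ours, NumPy and SciPy assumed).

```python
import numpy as np
from math import factorial
from scipy.linalg import expm

def expm_hermite(A, roots, t):
    """e^{tA} via the divided-difference table (3.29c) for f(z) = e^{tz}
    and the Newton form (3.30).  `roots` are the roots of a polynomial that
    annihilates A, listed with multiplicity and clustered."""
    z = [complex(r) for r in roots]
    m, n = len(z), A.shape[0]

    # d[k][j] = h[z_{k-j}, ..., z_k]; the first column is h[z_k] = e^{t z_k}.
    d = [[np.exp(t * zk)] for zk in z]
    for k in range(1, m):
        for j in range(1, k + 1):
            if z[k - j] == z[k]:
                # confluent case of (3.29c): f^{(j)}(z_k)/j! with f(z) = e^{tz}
                d[k].append(t**j * np.exp(t * z[k]) / factorial(j))
            else:
                d[k].append((d[k][j - 1] - d[k - 1][j - 1]) / (z[k] - z[k - j]))

    # Formula (3.30): e^{tA} = sum_k h_{k+1} P_k with P_k = P_{k-1}(A - z_k I).
    E = np.zeros((n, n), dtype=complex)
    P = np.eye(n, dtype=complex)
    for k in range(m):
        E += d[k][k] * P                    # h_{k+1} = h[z_1, ..., z_{k+1}]
        P = P @ (A - z[k] * np.eye(n))
    return E

# Example: the matrix with the double root -1 from the last subsection.
A = np.array([[-3.0, 1.0], [-4.0, 1.0]])
E = expm_hermite(A, [-1.0, -1.0], 0.4)
print(np.allclose(E.real, expm(0.4 * A)))    # True
```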

For matrices annihilated by a quadratic polynomial, formula (3.30) does not require any matrix multiplications, and it recovers the formulas for matrix exponentials that we derived earlier. There are two cases to consider.

Fact. If A is annihilated by a quadratic polynomial with a double root λ_1 then

  e^{tA} = e^{λ_1 t} I + t e^{λ_1 t} (A - λ_1 I).

Reason. Set z_1 = z_2 = λ_1. Then by (3.30b)

  P_0 = I,  P_1 = A - λ_1 I,

while by (3.29c) the divided-difference table is simply

  λ_1   e^{λ_1 t}
  λ_1   e^{λ_1 t}   t e^{λ_1 t}

We read off from the top entries of the columns that h_1(t) = e^{λ_1 t} and h_2(t) = t e^{λ_1 t}, whereby the result follows from (3.30a).

Remark. The above fact recovers formula (3.17) for the matrix exponential when the quadratic polynomial p(z) has a double root λ_1.

Fact. If A is annihilated by a quadratic polynomial with distinct roots λ_1 and λ_2 then

  e^{tA} = e^{λ_1 t} I + ((e^{λ_2 t} - e^{λ_1 t})/(λ_2 - λ_1)) (A - λ_1 I).

Reason. Set z_1 = λ_1 and z_2 = λ_2. Then by (3.30b)

  P_0 = I,  P_1 = A - λ_1 I,

while by (3.29c) the divided-difference table is simply

  λ_1   e^{λ_1 t}
  λ_2   e^{λ_2 t}   (e^{λ_2 t} - e^{λ_1 t})/(λ_2 - λ_1)

We read off from the top entries of the columns that

  h_1(t) = e^{λ_1 t},  h_2(t) = (e^{λ_2 t} - e^{λ_1 t})/(λ_2 - λ_1),

whereby the result follows from (3.30a).

Remark. The above fact recovers formulas (3.15) and (3.16) for the matrix exponential when the quadratic polynomial p(z) has distinct roots λ_1 and λ_2. For example, if λ_1 and λ_2 are real with λ_1 = µ - ν and λ_2 = µ + ν then

  e^{tA} = e^{(µ-ν)t} I + ((e^{(µ+ν)t} - e^{(µ-ν)t})/(2ν)) (A - (µ - ν) I)
         = ((e^{(µ+ν)t} + e^{(µ-ν)t})/2) I + ((e^{(µ+ν)t} - e^{(µ-ν)t})/(2ν)) (A - µI)
         = e^{µt} [ cosh(νt) I + (sinh(νt)/ν) (A - µI) ].

Similarly, if λ_1 and λ_2 are a conjugate pair with λ_1 = µ - iν and λ_2 = µ + iν then

  e^{tA} = e^{(µ-iν)t} I + ((e^{(µ+iν)t} - e^{(µ-iν)t})/(2iν)) (A - (µ - iν) I)
         = ((e^{(µ+iν)t} + e^{(µ-iν)t})/2) I + ((e^{(µ+iν)t} - e^{(µ-iν)t})/(2iν)) (A - µI)
         = e^{µt} [ cos(νt) I + (sin(νt)/ν) (A - µI) ].

It is clear that if λ_1 and λ_2 are a conjugate pair then computing e^{tA} by this approach is usually slower than simply applying formula (3.16), because of the complex arithmetic that is introduced.

For matrices annihilated by a cubic polynomial, formula (3.30) requires only one matrix multiplication. There are three cases to consider.

Fact. If A is annihilated by a cubic polynomial with a triple root λ_1 then

  e^{tA} = e^{λ_1 t} I + t e^{λ_1 t} (A - λ_1 I) + (1/2) t^2 e^{λ_1 t} (A - λ_1 I)^2.

Reason. Set z_1 = z_2 = z_3 = λ_1. Then by (3.30b)

  P_0 = I,  P_1 = A - λ_1 I,  P_2 = (A - λ_1 I)^2,

while by (3.29c) the divided-difference table is simply

  λ_1   e^{λ_1 t}
  λ_1   e^{λ_1 t}   t e^{λ_1 t}
  λ_1   e^{λ_1 t}   t e^{λ_1 t}   (1/2) t^2 e^{λ_1 t}

We read off from the top entries of the columns that

  h_1(t) = e^{λ_1 t},  h_2(t) = t e^{λ_1 t},  h_3(t) = (1/2) t^2 e^{λ_1 t},

whereby the result follows from (3.30a).

Fact. If A is annihilated by a cubic polynomial with a double root λ_1 and a simple root λ_2 then

  e^{tA} = e^{λ_1 t} I + t e^{λ_1 t} (A - λ_1 I) + (1/(λ_2 - λ_1)) [ (e^{λ_2 t} - e^{λ_1 t})/(λ_2 - λ_1) - t e^{λ_1 t} ] (A - λ_1 I)^2.

Reason. Set z_1 = z_2 = λ_1 and z_3 = λ_2. Then by (3.30b)

  P_0 = I,  P_1 = A - λ_1 I,  P_2 = (A - λ_1 I)^2,

while by (3.29c) the divided-difference table is simply

  λ_1   e^{λ_1 t}
  λ_1   e^{λ_1 t}   t e^{λ_1 t}
  λ_2   e^{λ_2 t}   (e^{λ_2 t} - e^{λ_1 t})/(λ_2 - λ_1)   (1/(λ_2 - λ_1)) [ (e^{λ_2 t} - e^{λ_1 t})/(λ_2 - λ_1) - t e^{λ_1 t} ]

We read off from the top entries of the columns that

  h_1(t) = e^{λ_1 t},  h_2(t) = t e^{λ_1 t},  h_3(t) = (1/(λ_2 - λ_1)) [ (e^{λ_2 t} - e^{λ_1 t})/(λ_2 - λ_1) - t e^{λ_1 t} ],

whereby the result follows from (3.30a).

Fact. If A is annihilated by a cubic polynomial with distinct roots λ_1, λ_2, and λ_3 then

  e^{tA} = e^{λ_1 t} I + ((e^{λ_2 t} - e^{λ_1 t})/(λ_2 - λ_1)) (A - λ_1 I)
         + (1/(λ_3 - λ_1)) [ (e^{λ_3 t} - e^{λ_2 t})/(λ_3 - λ_2) - (e^{λ_2 t} - e^{λ_1 t})/(λ_2 - λ_1) ] (A - λ_1 I)(A - λ_2 I).

Reason. Set z_1 = λ_1, z_2 = λ_2, and z_3 = λ_3. Then by (3.30b)

  P_0 = I,  P_1 = A - λ_1 I,  P_2 = (A - λ_1 I)(A - λ_2 I),

while by (3.29c) the divided-difference table is simply

  λ_1   e^{λ_1 t}
  λ_2   e^{λ_2 t}   (e^{λ_2 t} - e^{λ_1 t})/(λ_2 - λ_1)
  λ_3   e^{λ_3 t}   (e^{λ_3 t} - e^{λ_2 t})/(λ_3 - λ_2)   (1/(λ_3 - λ_1)) [ (e^{λ_3 t} - e^{λ_2 t})/(λ_3 - λ_2) - (e^{λ_2 t} - e^{λ_1 t})/(λ_2 - λ_1) ]

We read off from the top entries of the columns that

  h_1(t) = e^{λ_1 t},  h_2(t) = (e^{λ_2 t} - e^{λ_1 t})/(λ_2 - λ_1),
  h_3(t) = (1/(λ_3 - λ_1)) [ (e^{λ_3 t} - e^{λ_2 t})/(λ_3 - λ_2) - (e^{λ_2 t} - e^{λ_1 t})/(λ_2 - λ_1) ],

whereby the result follows from (3.30a).

3.6.3. Application to Other Functions of Matrices. We can use the same method to compute other functions of matrices. Let A be an n×n real matrix and p(z) be a polynomial of degree m ≤ n that annihilates A. Let {z_1, ..., z_m} be the roots of p(z) listed with multiplicity and clustered, so that if z_j = z_k for some j < k then z_j = z_{j+1} = ... = z_{k-1} = z_k. Let f(z) be a function that is defined and differentiable at every complex z. (Such functions are called entire.) For example, let f(z) = z^r for some positive integer r, where we think of r as large. Compute the divided-difference table whose first column is seeded with h[z_k] = f(z_k) and whose subsequent columns are given by (3.29c). Read off the coefficients h_1, h_2, ..., h_m from the top entries of each column as prescribed by (3.29d). Then set

(3.31)  f(A) = h(A) = Σ_{k=0}^{m-1} h_{k+1} P_k,

where the matrices P_k are computed by (3.30b). In this way you can compute A^1000 efficiently when n is not too large.
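The same table with f(z) = z^1000 gives a cheap way to compute a high power. The sketch below (not part of the original notes; NumPy assumed, values illustrative) does this for the 4×4 matrix of Section 3.4, whose annihilating polynomial z^3 - 4z has simple roots, and checks against numpy.linalg.matrix_power.

```python
import numpy as np

# A is annihilated by p(z) = z^3 - 4z, whose simple roots are -2, 0, 2, so
# formula (3.31) with f(z) = z^1000 needs only a 3-point divided-difference table.
A = np.array([[0., 1., 0., 1.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [1., 0., 1., 0.]])
z = [-2.0, 0.0, 2.0]
f = lambda w: w ** 1000

# Divided differences for distinct nodes (no confluent entries needed here).
h1 = f(z[0])
h2 = (f(z[1]) - f(z[0])) / (z[1] - z[0])
d23 = (f(z[2]) - f(z[1])) / (z[2] - z[1])
h3 = (d23 - h2) / (z[2] - z[0])

I = np.eye(4)
A1000 = h1 * I + h2 * (A - z[0] * I) + h3 * (A - z[0] * I) @ (A - z[1] * I)
print(np.allclose(A1000, np.linalg.matrix_power(A, 1000)))   # True
```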

4. Eigen Methods

4.1. Eigenpairs. Let A be a real n×n matrix. A number λ (possibly complex) is an eigenvalue of A if there exists a nonzero vector v (possibly complex) such that

(4.1)  A v = λ v.

Each such vector is an eigenvector associated with λ, and (λ, v) is an eigenpair of A.

Fact 1. If (λ, v) is an eigenpair of A then so is (λ, αv) for every complex α ≠ 0. In other words, if v is an eigenvector associated with an eigenvalue λ of A then so is αv for every complex α ≠ 0. In particular, eigenvectors are not unique.

Reason. Because (λ, v) is an eigenpair of A you know that (4.1) holds. It follows that

  A (αv) = α A v = α λ v = λ (αv).

Because the scalar α and the vector v are nonzero, the vector αv is also nonzero. Therefore (λ, αv) is also an eigenpair of A.

4.1.1. Finding Eigenvalues. Recall that the characteristic polynomial of A is defined by p_A(z) = det(zI - A). It has the form

  p_A(z) = z^n + π_1 z^{n-1} + π_2 z^{n-2} + ... + π_{n-1} z + π_n,

where the coefficients π_1, π_2, ..., π_n are real. In other words, it is a real monic polynomial of degree n. One can show that in general

  π_1 = -tr(A),  π_n = (-1)^n det(A),

where tr(A) is the sum of the diagonal entries of A. In particular, when n = 2 one has

  p_A(z) = z^2 - tr(A) z + det(A).

Because det(zI - A) = (-1)^n det(A - zI), this definition of p_A(z) coincides with the book's definition when n is even, and is its negative when n is odd. Both conventions are common. We have chosen the convention that makes p_A(z) monic. What matters most about p_A(z) is its roots and their multiplicity, which are the same for both conventions.

Fact 2. A number λ is an eigenvalue of A if and only if p_A(λ) = 0. In other words, the eigenvalues of A are the roots of p_A(z).

Reason. If λ is an eigenvalue of A then by (4.1) there exists a nonzero vector v such that (λI - A) v = λv - Av = 0. It follows that p_A(λ) = det(λI - A) = 0. Conversely, if p_A(λ) = det(λI - A) = 0 then there exists a nonzero vector v such that (λI - A) v = 0. It follows that λv - Av = (λI - A) v = 0, whereby λ and v satisfy (4.1), which implies λ is an eigenvalue of A.

Fact 2 shows that the eigenvalues of an n×n matrix A can be found if you can find all the roots of the characteristic polynomial of A. Because the degree of this characteristic polynomial is n, and because every polynomial of degree n has exactly n roots counting multiplicity, the n×n matrix A therefore must have at least one eigenvalue and can have at most n eigenvalues.

Example. Find the eigenvalues of A = \begin{pmatrix} 3 & 2 \\ 2 & 3 \end{pmatrix}.

Solution. The characteristic polynomial of A is p_A(z) = z^2 - 6z + 5 = (z - 1)(z - 5). By Fact 2 the eigenvalues of A are 1 and 5.

Example. Find the eigenvalues of A = \begin{pmatrix} 4 & 1 \\ -1 & 2 \end{pmatrix}.

Solution. The characteristic polynomial of A is p_A(z) = z^2 - 6z + 9 = (z - 3)^2. By Fact 2 the only eigenvalue of A is 3.

Example. Find the eigenvalues of A = \begin{pmatrix} 3 & 2 \\ -2 & 3 \end{pmatrix}.

Solution. The characteristic polynomial of A is p_A(z) = z^2 - 6z + 13 = (z - 3)^2 + 2^2. By Fact 2 the eigenvalues of A are 3 + i2 and 3 - i2.

Remark. The above examples show all the possibilities that can arise for 2×2 matrices: a 2×2 matrix A can have either two real eigenvalues, one real eigenvalue, or a conjugate pair of eigenvalues.

4.1.2. Finding Eigenvectors for 2×2 Matrices. Once you have found the eigenvalues of a matrix you can find all the eigenvectors associated with each eigenvalue by finding a general nonzero solution of (4.1). For an n×n matrix with n distinct eigenvalues this means solving n homogeneous linear algebraic systems, which might take some time. However, there is a trick that allows you to quickly find all the eigenvectors associated with each eigenvalue of any 2×2 matrix A without solving any linear systems. The trick is based on the Cayley-Hamilton Theorem, which states that p_A(A) = 0. The eigenvalues λ_1 and λ_2 are the roots of p_A(z), so that p_A(z) = (z - λ_1)(z - λ_2). Hence, by the Cayley-Hamilton Theorem

(4.2)  0 = p_A(A) = (A - λ_1 I)(A - λ_2 I) = (A - λ_2 I)(A - λ_1 I).

It follows that every nonzero column of A - λ_2 I is an eigenvector associated with λ_1, and that every nonzero column of A - λ_1 I is an eigenvector associated with λ_2. This observation also applies to the case where λ_1 = λ_2 and A - λ_1 I ≠ 0. Of course, if A - λ_1 I = 0 then every nonzero vector is an eigenvector.

Example. Find an eigenpair for each of the eigenvalues of A = \begin{pmatrix} 3 & 2 \\ 2 & 3 \end{pmatrix}.

Solution. We have already showed that the eigenvalues of A are 1 and 5. Compute

  A - I = \begin{pmatrix} 2 & 2 \\ 2 & 2 \end{pmatrix},  A - 5I = \begin{pmatrix} -2 & 2 \\ 2 & -2 \end{pmatrix}.

Every column of A - 5I is proportional to (1, -1)^T, while every column of A - I is proportional to (1, 1)^T. It follows from (4.2) that eigenpairs of A are

  ( 1, (1, -1)^T )  and  ( 5, (1, 1)^T ).

Example. Find an eigenpair for each of the eigenvalues of A = \begin{pmatrix} 4 & 1 \\ -1 & 2 \end{pmatrix}.

Solution. We have already showed that the only eigenvalue of A is 3. Compute

  A - 3I = \begin{pmatrix} 1 & 1 \\ -1 & -1 \end{pmatrix}.

Every column of A - 3I is proportional to (1, -1)^T. It follows from (4.2) that an eigenpair of A is

  ( 3, (1, -1)^T ).

Example. Find an eigenpair for each of the eigenvalues of A = \begin{pmatrix} 3 & 2 \\ -2 & 3 \end{pmatrix}.

Solution. We have already showed that the eigenvalues of A are 3 + i2 and 3 - i2. Compute

  A - (3 + i2) I = \begin{pmatrix} -i2 & 2 \\ -2 & -i2 \end{pmatrix},  A - (3 - i2) I = \begin{pmatrix} i2 & 2 \\ -2 & i2 \end{pmatrix}.

Every column of A - (3 - i2) I is proportional to (1, i)^T, while every column of A - (3 + i2) I is proportional to (1, -i)^T. It follows from (4.2) that eigenpairs of A are

  ( 3 + i2, (1, i)^T )  and  ( 3 - i2, (1, -i)^T ).

Notice that in the above example the eigenvectors associated with 3 - i2 are complex conjugates of those associated with 3 + i2. This illustrates a particular instance of the following general fact.

Fact 3. If (λ, v) is an eigenpair of the real matrix A then so is (conj(λ), conj(v)).

Reason. Because (λ, v) is an eigenpair of A you know by (4.1) that A v = λ v. Because A is real, the complex conjugate of this equation is A conj(v) = conj(λ) conj(v), where conj(v) is nonzero because v is nonzero. It follows that (conj(λ), conj(v)) is an eigenpair of A.

The foregoing examples illustrate particular instances of the following general facts.

Fact 4. Let λ be an eigenvalue of the real matrix A. If λ is real then it has a real eigenvector. If λ is not real then none of its eigenvectors are real.

Reason. Let v be any eigenvector associated with λ, so that (λ, v) is an eigenpair of A. Let λ = µ + iν and v = u + iw, where µ and ν are real numbers and u and w are real vectors. One then has

  A u + i A w = A v = λ v = (µ + iν)(u + iw) = (µu - νw) + i(µw + νu),

which is equivalent to

  A u - µ u = -ν w,  and  A w - µ w = ν u.

If ν = 0 then u and w will be real eigenvectors associated with λ whenever they are nonzero. But at least one of u and w must be nonzero because v = u + iw is nonzero. Conversely, if ν ≠ 0 and w = 0 then the second equation above implies u = 0 too, which contradicts the fact that at least one of u and w must be nonzero. Hence, if ν ≠ 0 then w ≠ 0, so v cannot be real.
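The 2×2 trick (4.2) is easy to confirm numerically. The sketch below (not part of the original notes; NumPy assumed, matrix illustrative) takes a column of A - λ_2 I, checks that it is an eigenvector for λ_1, and compares the eigenvalues with numpy.linalg.eig.

```python
import numpy as np

# Check the 2x2 trick (4.2): columns of A - lambda_2 I are eigenvectors for lambda_1.
A = np.array([[3.0, 2.0],
              [-2.0, 3.0]])
lam = np.roots([1, -np.trace(A), np.linalg.det(A)])   # roots of z^2 - tr(A) z + det(A)
lam1, lam2 = lam
v1 = (A - lam2 * np.eye(2))[:, 0]                     # a column of A - lambda_2 I
print(np.allclose(A @ v1, lam1 * v1))                 # True: (lam1, v1) is an eigenpair

# numpy.linalg.eig returns the same eigenvalues (possibly in a different order).
w, V = np.linalg.eig(A)
print(np.allclose(np.sort_complex(w), np.sort_complex(lam)))   # True
```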

4.2. Special Solutions of First-Order Systems. Eigenvalues and eigenvectors can be used to construct special solutions of first-order differential systems with a constant coefficient matrix. The system we study is

(4.3)  dx/dt = A x,

where x(t) is a vector and A is a real n×n matrix. We begin with the following fact.

Fact 5. If (λ, v) is an eigenpair of A then a solution of system (4.3) is

(4.4)  x(t) = e^{λt} v.

Reason. Because λ v = A v, a direct calculation shows that

  dx/dt = d/dt ( e^{λt} v ) = e^{λt} λ v = e^{λt} A v = A ( e^{λt} v ) = A x,

whereby x(t) given by (4.4) solves (4.3).

If (λ, v) is a real eigenpair of A then recipe (4.4) will yield a real solution of (4.3). But if λ is an eigenvalue of A that is not real then recipe (4.4) will not yield a real solution. However, if we also use the solution associated with the conjugate eigenpair (conj(λ), conj(v)) then we can construct two real solutions.

Fact 6. Let (λ, v) be an eigenpair of A with λ = µ + iν and v = u + iw, where µ and ν are real numbers while u and w are real vectors. Then two real solutions of system (4.3) are

(4.5)  x_1(t) = Re( e^{λt} v ) = e^{µt} ( u cos(νt) - w sin(νt) ),
       x_2(t) = Im( e^{λt} v ) = e^{µt} ( w cos(νt) + u sin(νt) ).

Reason. Because (λ, v) is an eigenpair of A, by Fact 3 so is (conj(λ), conj(v)). By recipe (4.4) two solutions of (4.3) are e^{λt} v and e^{conj(λ)t} conj(v), which are complex conjugates of each other. Because system (4.3) is linear, it follows that two real solutions of (4.3) are given by

  x_1(t) = Re( e^{λt} v ) = ( e^{λt} v + e^{conj(λ)t} conj(v) )/2,  x_2(t) = Im( e^{λt} v ) = ( e^{λt} v - e^{conj(λ)t} conj(v) )/(2i).

Because λ = µ + iν and v = u + iw we see that

  e^{λt} v = e^{µt} ( cos(νt) + i sin(νt) )( u + iw ) = e^{µt} [ ( u cos(νt) - w sin(νt) ) + i ( w cos(νt) + u sin(νt) ) ],

whereby x_1(t) and x_2(t) are its real and imaginary parts, yielding (4.5).

Recall that if you have n linearly independent real solutions x_1(t), x_2(t), ..., x_n(t) of system (4.3) then you can construct a fundamental matrix Ψ(t) by

  Ψ(t) = ( x_1(t)  x_2(t)  ...  x_n(t) ).

Recipes (4.4) and (4.5) will often, but not always, yield enough solutions to do this.

Example. Use eigenpairs to construct real solutions of dx/dt = Ax, where A = \begin{pmatrix} 3 & 2 \\ 2 & 3 \end{pmatrix}. If possible, use these solutions to construct a fundamental matrix Ψ(t) for this system.

Solution. By a previous example we know that A has the real eigenpairs

  ( 1, (1, -1)^T )  and  ( 5, (1, 1)^T ).

By recipe (4.4) the system has the real solutions

  x_1(t) = e^{t} (1, -1)^T,  x_2(t) = e^{5t} (1, 1)^T.

These solutions are linearly independent because

  W[x_1, x_2](0) = det \begin{pmatrix} 1 & 1 \\ -1 & 1 \end{pmatrix} = 2 ≠ 0.

A fundamental matrix for the system is therefore given by

  Ψ(t) = ( x_1(t)  x_2(t) ) = \begin{pmatrix} e^{t} & e^{5t} \\ -e^{t} & e^{5t} \end{pmatrix}.

Example. Use eigenpairs to construct real solutions of dx/dt = Ax, where A = \begin{pmatrix} 4 & 1 \\ -1 & 2 \end{pmatrix}. If possible, use these solutions to construct a fundamental matrix Ψ(t) for this system.

Solution. By a previous example we know that A has the eigenpair ( 3, (1, -1)^T ). By recipe (4.4) the system has the real solution

  x(t) = e^{3t} (1, -1)^T.

Because we have not constructed two linearly independent solutions, we cannot yet construct a fundamental matrix for this system.

Example. Use eigenpairs to construct real solutions of dx/dt = Ax, where A = \begin{pmatrix} 3 & 2 \\ -2 & 3 \end{pmatrix}. If possible, use these solutions to construct a fundamental matrix Ψ(t) for this system.

Solution. By a previous example we know that A has the conjugate eigenpairs

  ( 3 + i2, (1, i)^T )  and  ( 3 - i2, (1, -i)^T ).

Because

  e^{(3+i2)t} (1, i)^T = e^{3t} ( cos(2t) + i sin(2t) ) (1, i)^T = e^{3t} ( cos(2t) + i sin(2t), -sin(2t) + i cos(2t) )^T,

by recipe (4.5) the system has the real solutions

  x_1(t) = e^{3t} ( cos(2t), -sin(2t) )^T,  x_2(t) = e^{3t} ( sin(2t), cos(2t) )^T.

These solutions are linearly independent because

  W[x_1, x_2](0) = det \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = 1 ≠ 0.

A fundamental matrix for the system is therefore given by

  Ψ(t) = ( x_1(t)  x_2(t) ) = e^{3t} \begin{pmatrix} cos(2t) & sin(2t) \\ -sin(2t) & cos(2t) \end{pmatrix}.

Remark. Recipes (4.4) and (4.5) will always yield n linearly independent real solutions of system (4.3) whenever p_A(z) has only simple roots, as was the case in the first and third examples above. They will typically fail to do so whenever p_A(z) has a root with multiplicity greater than 1, as was the case in the second example above.

Finally, whenever you can construct a fundamental matrix Ψ(t) for system (4.3) from special solutions, you can compute e^{tA} by using the fact that

  e^{tA} = Ψ(t) Ψ(0)^{-1}.

It is easy to check that the right-hand side above satisfies the matrix-valued initial-value problem (3.2), so it must be equal to e^{tA}.

Example. Compute e^{tA} for A = \begin{pmatrix} 3 & 2 \\ 2 & 3 \end{pmatrix}.

Solution. In the first example given above we constructed a fundamental matrix Ψ(t) for the associated linear system. We obtained

  Ψ(t) = \begin{pmatrix} e^{t} & e^{5t} \\ -e^{t} & e^{5t} \end{pmatrix}.

Because

  Ψ(0) = \begin{pmatrix} 1 & 1 \\ -1 & 1 \end{pmatrix}  and  det Ψ(0) = 2,

we see that

  Ψ(0)^{-1} = (1/2) \begin{pmatrix} 1 & -1 \\ 1 & 1 \end{pmatrix},

whereby

  e^{tA} = Ψ(t) Ψ(0)^{-1} = \begin{pmatrix} e^{t} & e^{5t} \\ -e^{t} & e^{5t} \end{pmatrix} (1/2) \begin{pmatrix} 1 & -1 \\ 1 & 1 \end{pmatrix} = (1/2) \begin{pmatrix} e^{t} + e^{5t} & -e^{t} + e^{5t} \\ -e^{t} + e^{5t} & e^{t} + e^{5t} \end{pmatrix}.

Example. Compute e^{tA} for A = \begin{pmatrix} 3 & 2 \\ -2 & 3 \end{pmatrix}.

Solution. In the third example given above we constructed a fundamental matrix Ψ(t) for the associated linear system. We obtained

  Ψ(t) = e^{3t} \begin{pmatrix} cos(2t) & sin(2t) \\ -sin(2t) & cos(2t) \end{pmatrix}.

Because

  Ψ(0) = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = I,

we see that Ψ(0)^{-1} = I, whereby

  e^{tA} = Ψ(t) Ψ(0)^{-1} = Ψ(t) I = Ψ(t) = e^{3t} \begin{pmatrix} cos(2t) & sin(2t) \\ -sin(2t) & cos(2t) \end{pmatrix}.
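The identity e^{tA} = Ψ(t) Ψ(0)^{-1} can be checked numerically whenever the eigenvectors give a fundamental matrix. The sketch below (not part of the original notes; NumPy and SciPy assumed) builds Ψ(t) from the eigenpairs returned by numpy.linalg.eig and compares with scipy.linalg.expm.

```python
import numpy as np
from scipy.linalg import expm

# Build a fundamental matrix from eigenpairs (recipe (4.4)) and use e^{tA} = Psi(t) Psi(0)^{-1}.
A = np.array([[3.0, 2.0],
              [2.0, 3.0]])
w, V = np.linalg.eig(A)              # columns of V are eigenvectors (scaling may differ from the notes)
t = 0.6

Psi_t = V @ np.diag(np.exp(w * t))   # columns are e^{lambda_k t} v_k
Psi_0 = V                            # Psi(0)
E = Psi_t @ np.linalg.inv(Psi_0)
print(np.allclose(E, expm(t * A)))   # True (A has simple real eigenvalues here)
```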

4.3. Two-by-Two Fundamental Matrices. Recipes (4.4) and (4.5) will always yield n linearly independent real solutions of system (4.3), with which you can construct a real fundamental matrix, whenever the characteristic polynomial p_A(z) has only simple roots. For 2×2 matrices they will fail to do so whenever p_A(z) = (z - µ)^2 and A - µI ≠ 0. In that case µ is the only eigenvalue of A and recipe (4.4) yields the real solution

  x_1(t) = e^{µt} v,

where the vector v is proportional to any nonzero column of A - µI. Then if w is any vector that is not proportional to v, you can construct a second solution by the recipe

(4.6)  x_2(t) = e^{µt} w + t e^{µt} (A - µI) w.

This is just e^{tA} w, where e^{tA} is given by (3.17). You can always take the vector w to be proportional to the transpose of any nonzero row of A - µI. You can then construct the real fundamental matrix Ψ(t) = ( x_1(t)  x_2(t) ). This will yield a fundamental matrix only when A is 2×2.

Example. Construct a real fundamental matrix for the system

  dx/dt = A x,  where A = \begin{pmatrix} 4 & 1 \\ -1 & 2 \end{pmatrix}.

Solution. The characteristic polynomial of A is

  p_A(z) = z^2 - tr(A) z + det(A) = z^2 - 6z + 9 = (z - 3)^2.

Because

  A - 3I = \begin{pmatrix} 1 & 1 \\ -1 & -1 \end{pmatrix},

we see that A has the eigenpair ( 3, (1, -1)^T ). Recipe (4.4) yields the real solution

  x_1(t) = e^{3t} (1, -1)^T.

Recipe (4.6) with w = (1, 1)^T yields the second real solution

  x_2(t) = e^{3t} (1, 1)^T + t e^{3t} (A - 3I)(1, 1)^T = e^{3t} (1, 1)^T + t e^{3t} (2, -2)^T = e^{3t} (1 + 2t, 1 - 2t)^T.

A real fundamental matrix for the system is therefore given by

  Ψ(t) = e^{3t} \begin{pmatrix} 1 & 1 + 2t \\ -1 & 1 - 2t \end{pmatrix}.
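Recipe (4.6) can be checked the same way. The sketch below (not part of the original notes; NumPy and SciPy assumed) builds the fundamental matrix of the example above and confirms that Ψ(t) Ψ(0)^{-1} agrees with scipy.linalg.expm.

```python
import numpy as np
from scipy.linalg import expm

# Defective 2x2 case: build Psi(t) from recipes (4.4) and (4.6), check e^{tA} = Psi(t) Psi(0)^{-1}.
A = np.array([[4.0, 1.0],
              [-1.0, 2.0]])
mu = 3.0                          # double eigenvalue, p_A(z) = (z - 3)^2
v = np.array([1.0, -1.0])         # a nonzero column of A - mu*I
w = np.array([1.0, 1.0])          # not proportional to v

def Psi(t):
    x1 = np.exp(mu * t) * v
    x2 = np.exp(mu * t) * w + t * np.exp(mu * t) * ((A - mu * np.eye(2)) @ w)
    return np.column_stack([x1, x2])

t = 0.7
E = Psi(t) @ np.linalg.inv(Psi(0.0))
print(np.allclose(E, expm(t * A)))   # True
```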


More information

Putzer s Algorithm. Norman Lebovitz. September 8, 2016

Putzer s Algorithm. Norman Lebovitz. September 8, 2016 Putzer s Algorithm Norman Lebovitz September 8, 2016 1 Putzer s algorithm The differential equation dx = Ax, (1) dt where A is an n n matrix of constants, possesses the fundamental matrix solution exp(at),

More information

Lecture 6. Eigen-analysis

Lecture 6. Eigen-analysis Lecture 6 Eigen-analysis University of British Columbia, Vancouver Yue-Xian Li March 7 6 Definition of eigenvectors and eigenvalues Def: Any n n matrix A defines a LT, A : R n R n A vector v and a scalar

More information

and let s calculate the image of some vectors under the transformation T.

and let s calculate the image of some vectors under the transformation T. Chapter 5 Eigenvalues and Eigenvectors 5. Eigenvalues and Eigenvectors Let T : R n R n be a linear transformation. Then T can be represented by a matrix (the standard matrix), and we can write T ( v) =

More information

HIGHER-ORDER LINEAR ORDINARY DIFFERENTIAL EQUATIONS IV: Variable Coefficient Nonhomogeneous Case

HIGHER-ORDER LINEAR ORDINARY DIFFERENTIAL EQUATIONS IV: Variable Coefficient Nonhomogeneous Case HIGHER-ORDER LINEAR ORDINARY DIFFERENTIAL EQUATIONS IV: Variable Coefficient Nonhomogeneous Case David Levermore Department of Mathematics University of Maryland 15 March 2009 Because the presentation

More information

MATHEMATICS 217 NOTES

MATHEMATICS 217 NOTES MATHEMATICS 27 NOTES PART I THE JORDAN CANONICAL FORM The characteristic polynomial of an n n matrix A is the polynomial χ A (λ) = det(λi A), a monic polynomial of degree n; a monic polynomial in the variable

More information

4. Linear transformations as a vector space 17

4. Linear transformations as a vector space 17 4 Linear transformations as a vector space 17 d) 1 2 0 0 1 2 0 0 1 0 0 0 1 2 3 4 32 Let a linear transformation in R 2 be the reflection in the line = x 2 Find its matrix 33 For each linear transformation

More information

Symmetric and anti symmetric matrices

Symmetric and anti symmetric matrices Symmetric and anti symmetric matrices In linear algebra, a symmetric matrix is a square matrix that is equal to its transpose. Formally, matrix A is symmetric if. A = A Because equal matrices have equal

More information

HIGHER-ORDER LINEAR ORDINARY DIFFERENTIAL EQUATIONS II: Nonhomogeneous Case and Vibrations

HIGHER-ORDER LINEAR ORDINARY DIFFERENTIAL EQUATIONS II: Nonhomogeneous Case and Vibrations HIGHER-ORDER LINEAR ORDINARY DIFFERENTIAL EQUATIONS II: Nonhomogeneous Case and Vibrations David Levermore Department of Mathematics University of Maryland 22 October 28 Because the presentation of this

More information

Eigenvalues and Eigenvectors

Eigenvalues and Eigenvectors Sec. 6.1 Eigenvalues and Eigenvectors Linear transformations L : V V that go from a vector space to itself are often called linear operators. Many linear operators can be understood geometrically by identifying

More information

Announcements Wednesday, November 7

Announcements Wednesday, November 7 Announcements Wednesday, November 7 The third midterm is on Friday, November 16 That is one week from this Friday The exam covers 45, 51, 52 53, 61, 62, 64, 65 (through today s material) WeBWorK 61, 62

More information

MATH 583A REVIEW SESSION #1

MATH 583A REVIEW SESSION #1 MATH 583A REVIEW SESSION #1 BOJAN DURICKOVIC 1. Vector Spaces Very quick review of the basic linear algebra concepts (see any linear algebra textbook): (finite dimensional) vector space (or linear space),

More information

Second In-Class Exam Solutions Math 246, Professor David Levermore Thursday, 31 March 2011

Second In-Class Exam Solutions Math 246, Professor David Levermore Thursday, 31 March 2011 Second In-Class Exam Solutions Math 246, Professor David Levermore Thursday, 31 March 211 (1) [6] Give the interval of definition for the solution of the initial-value problem d 4 y dt 4 + 7 1 t 2 dy dt

More information

Remark By definition, an eigenvector must be a nonzero vector, but eigenvalue could be zero.

Remark By definition, an eigenvector must be a nonzero vector, but eigenvalue could be zero. Sec 6 Eigenvalues and Eigenvectors Definition An eigenvector of an n n matrix A is a nonzero vector x such that A x λ x for some scalar λ A scalar λ is called an eigenvalue of A if there is a nontrivial

More information

Ir O D = D = ( ) Section 2.6 Example 1. (Bottom of page 119) dim(v ) = dim(l(v, W )) = dim(v ) dim(f ) = dim(v )

Ir O D = D = ( ) Section 2.6 Example 1. (Bottom of page 119) dim(v ) = dim(l(v, W )) = dim(v ) dim(f ) = dim(v ) Section 3.2 Theorem 3.6. Let A be an m n matrix of rank r. Then r m, r n, and, by means of a finite number of elementary row and column operations, A can be transformed into the matrix ( ) Ir O D = 1 O

More information

Systems of Second Order Differential Equations Cayley-Hamilton-Ziebur

Systems of Second Order Differential Equations Cayley-Hamilton-Ziebur Systems of Second Order Differential Equations Cayley-Hamilton-Ziebur Characteristic Equation Cayley-Hamilton Cayley-Hamilton Theorem An Example Euler s Substitution for u = A u The Cayley-Hamilton-Ziebur

More information

systems of linear di erential If the homogeneous linear di erential system is diagonalizable,

systems of linear di erential If the homogeneous linear di erential system is diagonalizable, G. NAGY ODE October, 8.. Homogeneous Linear Differential Systems Section Objective(s): Linear Di erential Systems. Diagonalizable Systems. Real Distinct Eigenvalues. Complex Eigenvalues. Repeated Eigenvalues.

More information

20D - Homework Assignment 5

20D - Homework Assignment 5 Brian Bowers TA for Hui Sun MATH D Homework Assignment 5 November 8, 3 D - Homework Assignment 5 First, I present the list of all matrix row operations. We use combinations of these steps to row reduce

More information

MTH 102: Linear Algebra Department of Mathematics and Statistics Indian Institute of Technology - Kanpur. Problem Set

MTH 102: Linear Algebra Department of Mathematics and Statistics Indian Institute of Technology - Kanpur. Problem Set MTH 102: Linear Algebra Department of Mathematics and Statistics Indian Institute of Technology - Kanpur Problem Set 6 Problems marked (T) are for discussions in Tutorial sessions. 1. Find the eigenvalues

More information

Calculating determinants for larger matrices

Calculating determinants for larger matrices Day 26 Calculating determinants for larger matrices We now proceed to define det A for n n matrices A As before, we are looking for a function of A that satisfies the product formula det(ab) = det A det

More information

av 1 x 2 + 4y 2 + xy + 4z 2 = 16.

av 1 x 2 + 4y 2 + xy + 4z 2 = 16. 74 85 Eigenanalysis The subject of eigenanalysis seeks to find a coordinate system, in which the solution to an applied problem has a simple expression Therefore, eigenanalysis might be called the method

More information

18.06 Problem Set 8 Solution Due Wednesday, 22 April 2009 at 4 pm in Total: 160 points.

18.06 Problem Set 8 Solution Due Wednesday, 22 April 2009 at 4 pm in Total: 160 points. 86 Problem Set 8 Solution Due Wednesday, April 9 at 4 pm in -6 Total: 6 points Problem : If A is real-symmetric, it has real eigenvalues What can you say about the eigenvalues if A is real and anti-symmetric

More information

GRE Subject test preparation Spring 2016 Topic: Abstract Algebra, Linear Algebra, Number Theory.

GRE Subject test preparation Spring 2016 Topic: Abstract Algebra, Linear Algebra, Number Theory. GRE Subject test preparation Spring 2016 Topic: Abstract Algebra, Linear Algebra, Number Theory. Linear Algebra Standard matrix manipulation to compute the kernel, intersection of subspaces, column spaces,

More information

Eigenvalues and Eigenvectors. Review: Invertibility. Eigenvalues and Eigenvectors. The Finite Dimensional Case. January 18, 2018

Eigenvalues and Eigenvectors. Review: Invertibility. Eigenvalues and Eigenvectors. The Finite Dimensional Case. January 18, 2018 January 18, 2018 Contents 1 2 3 4 Review 1 We looked at general determinant functions proved that they are all multiples of a special one, called det f (A) = f (I n ) det A. Review 1 We looked at general

More information

HIGHER-ORDER LINEAR ORDINARY DIFFERENTIAL EQUATIONS IV: Laplace Transform Method David Levermore Department of Mathematics University of Maryland

HIGHER-ORDER LINEAR ORDINARY DIFFERENTIAL EQUATIONS IV: Laplace Transform Method David Levermore Department of Mathematics University of Maryland HIGHER-ORDER LINEAR ORDINARY DIFFERENTIAL EQUATIONS IV: Laplace Transform Method David Levermore Department of Mathematics University of Maryland 9 December Because the presentation of this material in

More information

The minimal polynomial

The minimal polynomial The minimal polynomial Michael H Mertens October 22, 2015 Introduction In these short notes we explain some of the important features of the minimal polynomial of a square matrix A and recall some basic

More information

Chapter 7. Homogeneous equations with constant coefficients

Chapter 7. Homogeneous equations with constant coefficients Chapter 7. Homogeneous equations with constant coefficients It has already been remarked that we can write down a formula for the general solution of any linear second differential equation y + a(t)y +

More information

OHSx XM511 Linear Algebra: Solutions to Online True/False Exercises

OHSx XM511 Linear Algebra: Solutions to Online True/False Exercises This document gives the solutions to all of the online exercises for OHSx XM511. The section ( ) numbers refer to the textbook. TYPE I are True/False. Answers are in square brackets [. Lecture 02 ( 1.1)

More information

BASIC ALGORITHMS IN LINEAR ALGEBRA. Matrices and Applications of Gaussian Elimination. A 2 x. A T m x. A 1 x A T 1. A m x

BASIC ALGORITHMS IN LINEAR ALGEBRA. Matrices and Applications of Gaussian Elimination. A 2 x. A T m x. A 1 x A T 1. A m x BASIC ALGORITHMS IN LINEAR ALGEBRA STEVEN DALE CUTKOSKY Matrices and Applications of Gaussian Elimination Systems of Equations Suppose that A is an n n matrix with coefficents in a field F, and x = (x,,

More information

Chapter 5. Eigenvalues and Eigenvectors

Chapter 5. Eigenvalues and Eigenvectors Chapter 5 Eigenvalues and Eigenvectors Section 5. Eigenvectors and Eigenvalues Motivation: Difference equations A Biology Question How to predict a population of rabbits with given dynamics:. half of the

More information

MATH 1553-C MIDTERM EXAMINATION 3

MATH 1553-C MIDTERM EXAMINATION 3 MATH 553-C MIDTERM EXAMINATION 3 Name GT Email @gatech.edu Please read all instructions carefully before beginning. Please leave your GT ID card on your desk until your TA scans your exam. Each problem

More information

Math 3191 Applied Linear Algebra

Math 3191 Applied Linear Algebra Math 9 Applied Linear Algebra Lecture 9: Diagonalization Stephen Billups University of Colorado at Denver Math 9Applied Linear Algebra p./9 Section. Diagonalization The goal here is to develop a useful

More information

HIGHER-ORDER LINEAR ORDINARY DIFFERENTIAL EQUATIONS IV: Laplace Transform Method. David Levermore Department of Mathematics University of Maryland

HIGHER-ORDER LINEAR ORDINARY DIFFERENTIAL EQUATIONS IV: Laplace Transform Method. David Levermore Department of Mathematics University of Maryland HIGHER-ORDER LINEAR ORDINARY DIFFERENTIAL EQUATIONS IV: Laplace Transform Method David Levermore Department of Mathematics University of Maryland 6 April Because the presentation of this material in lecture

More information

a 11 a 12 a 11 a 12 a 13 a 21 a 22 a 23 . a 31 a 32 a 33 a 12 a 21 a 23 a 31 a = = = = 12

a 11 a 12 a 11 a 12 a 13 a 21 a 22 a 23 . a 31 a 32 a 33 a 12 a 21 a 23 a 31 a = = = = 12 24 8 Matrices Determinant of 2 2 matrix Given a 2 2 matrix [ ] a a A = 2 a 2 a 22 the real number a a 22 a 2 a 2 is determinant and denoted by det(a) = a a 2 a 2 a 22 Example 8 Find determinant of 2 2

More information

Advanced Engineering Mathematics Prof. Pratima Panigrahi Department of Mathematics Indian Institute of Technology, Kharagpur

Advanced Engineering Mathematics Prof. Pratima Panigrahi Department of Mathematics Indian Institute of Technology, Kharagpur Advanced Engineering Mathematics Prof. Pratima Panigrahi Department of Mathematics Indian Institute of Technology, Kharagpur Lecture No. #07 Jordan Canonical Form Cayley Hamilton Theorem (Refer Slide Time:

More information

Supplement til SDL, Blok

Supplement til SDL, Blok Supplement til SDL, Blok 1 2007 NOTES TO THE COURSE ON ORDINARY DIFFERENTIAL EQUATIONS Inst. f. Matematiske Fag Gerd Grubb August 2007 This is a supplement to the text in [BN], short for F. Brauer and

More information

Section 9.3 Phase Plane Portraits (for Planar Systems)

Section 9.3 Phase Plane Portraits (for Planar Systems) Section 9.3 Phase Plane Portraits (for Planar Systems) Key Terms: Equilibrium point of planer system yꞌ = Ay o Equilibrium solution Exponential solutions o Half-line solutions Unstable solution Stable

More information

Differential equations

Differential equations Differential equations Math 7 Spring Practice problems for April Exam Problem Use the method of elimination to find the x-component of the general solution of x y = 6x 9x + y = x 6y 9y Soln: The system

More information

arxiv: v1 [math.na] 5 May 2011

arxiv: v1 [math.na] 5 May 2011 ITERATIVE METHODS FOR COMPUTING EIGENVALUES AND EIGENVECTORS MAYSUM PANJU arxiv:1105.1185v1 [math.na] 5 May 2011 Abstract. We examine some numerical iterative methods for computing the eigenvalues and

More information

Eigenspaces and Diagonalizable Transformations

Eigenspaces and Diagonalizable Transformations Chapter 2 Eigenspaces and Diagonalizable Transformations As we explored how heat states evolve under the action of a diffusion transformation E, we found that some heat states will only change in amplitude.

More information

Applied Mathematics 205. Unit V: Eigenvalue Problems. Lecturer: Dr. David Knezevic

Applied Mathematics 205. Unit V: Eigenvalue Problems. Lecturer: Dr. David Knezevic Applied Mathematics 205 Unit V: Eigenvalue Problems Lecturer: Dr. David Knezevic Unit V: Eigenvalue Problems Chapter V.2: Fundamentals 2 / 31 Eigenvalues and Eigenvectors Eigenvalues and eigenvectors of

More information

EIGENVALUES AND EIGENVECTORS

EIGENVALUES AND EIGENVECTORS EIGENVALUES AND EIGENVECTORS Diagonalizable linear transformations and matrices Recall, a matrix, D, is diagonal if it is square and the only non-zero entries are on the diagonal This is equivalent to

More information

AMS526: Numerical Analysis I (Numerical Linear Algebra)

AMS526: Numerical Analysis I (Numerical Linear Algebra) AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 16: Eigenvalue Problems; Similarity Transformations Xiangmin Jiao Stony Brook University Xiangmin Jiao Numerical Analysis I 1 / 18 Eigenvalue

More information

Old Math 330 Exams. David M. McClendon. Department of Mathematics Ferris State University

Old Math 330 Exams. David M. McClendon. Department of Mathematics Ferris State University Old Math 330 Exams David M. McClendon Department of Mathematics Ferris State University Last updated to include exams from Fall 07 Contents Contents General information about these exams 3 Exams from Fall

More information

Section 5.5. Complex Eigenvalues

Section 5.5. Complex Eigenvalues Section 5.5 Complex Eigenvalues A Matrix with No Eigenvectors Consider the matrix for the linear transformation for rotation by π/4 in the plane. The matrix is: A = 1 ( ) 1 1. 2 1 1 This matrix has no

More information

Remark 1 By definition, an eigenvector must be a nonzero vector, but eigenvalue could be zero.

Remark 1 By definition, an eigenvector must be a nonzero vector, but eigenvalue could be zero. Sec 5 Eigenvectors and Eigenvalues In this chapter, vector means column vector Definition An eigenvector of an n n matrix A is a nonzero vector x such that A x λ x for some scalar λ A scalar λ is called

More information

Spectral Theorem for Self-adjoint Linear Operators

Spectral Theorem for Self-adjoint Linear Operators Notes for the undergraduate lecture by David Adams. (These are the notes I would write if I was teaching a course on this topic. I have included more material than I will cover in the 45 minute lecture;

More information

What s Eigenanalysis? Matrix eigenanalysis is a computational theory for the matrix equation

What s Eigenanalysis? Matrix eigenanalysis is a computational theory for the matrix equation Eigenanalysis What s Eigenanalysis? Fourier s Eigenanalysis Model is a Replacement Process Powers and Fourier s Model Differential Equations and Fourier s Model Fourier s Model Illustrated What is Eigenanalysis?

More information

Lecture 15, 16: Diagonalization

Lecture 15, 16: Diagonalization Lecture 15, 16: Diagonalization Motivation: Eigenvalues and Eigenvectors are easy to compute for diagonal matrices. Hence, we would like (if possible) to convert matrix A into a diagonal matrix. Suppose

More information

LINEAR EQUATIONS OF HIGHER ORDER. EXAMPLES. General framework

LINEAR EQUATIONS OF HIGHER ORDER. EXAMPLES. General framework Differential Equations Grinshpan LINEAR EQUATIONS OF HIGHER ORDER. EXAMPLES. We consider linear ODE of order n: General framework (1) x (n) (t) + P n 1 (t)x (n 1) (t) + + P 1 (t)x (t) + P 0 (t)x(t) = 0

More information

235 Final exam review questions

235 Final exam review questions 5 Final exam review questions Paul Hacking December 4, 0 () Let A be an n n matrix and T : R n R n, T (x) = Ax the linear transformation with matrix A. What does it mean to say that a vector v R n is an

More information

Eigenvalues and Eigenvectors: An Introduction

Eigenvalues and Eigenvectors: An Introduction Eigenvalues and Eigenvectors: An Introduction The eigenvalue problem is a problem of considerable theoretical interest and wide-ranging application. For example, this problem is crucial in solving systems

More information

c Igor Zelenko, Fall

c Igor Zelenko, Fall c Igor Zelenko, Fall 2017 1 18: Repeated Eigenvalues: algebraic and geometric multiplicities of eigenvalues, generalized eigenvectors, and solution for systems of differential equation with repeated eigenvalues

More information

APPM 2360 Section Exam 3 Wednesday November 19, 7:00pm 8:30pm, 2014

APPM 2360 Section Exam 3 Wednesday November 19, 7:00pm 8:30pm, 2014 APPM 2360 Section Exam 3 Wednesday November 9, 7:00pm 8:30pm, 204 ON THE FRONT OF YOUR BLUEBOOK write: () your name, (2) your student ID number, (3) lecture section, (4) your instructor s name, and (5)

More information

Systems of Linear Differential Equations Chapter 7

Systems of Linear Differential Equations Chapter 7 Systems of Linear Differential Equations Chapter 7 Doreen De Leon Department of Mathematics, California State University, Fresno June 22, 25 Motivating Examples: Applications of Systems of First Order

More information

4.1 Eigenvalues, Eigenvectors, and The Characteristic Polynomial

4.1 Eigenvalues, Eigenvectors, and The Characteristic Polynomial Linear Algebra (part 4): Eigenvalues, Diagonalization, and the Jordan Form (by Evan Dummit, 27, v ) Contents 4 Eigenvalues, Diagonalization, and the Jordan Canonical Form 4 Eigenvalues, Eigenvectors, and

More information

Department of Mathematics IIT Guwahati

Department of Mathematics IIT Guwahati Stability of Linear Systems in R 2 Department of Mathematics IIT Guwahati A system of first order differential equations is called autonomous if the system can be written in the form dx 1 dt = g 1(x 1,

More information

Diagonalization. Hung-yi Lee

Diagonalization. Hung-yi Lee Diagonalization Hung-yi Lee Review If Av = λv (v is a vector, λ is a scalar) v is an eigenvector of A excluding zero vector λ is an eigenvalue of A that corresponds to v Eigenvectors corresponding to λ

More information

9.1 Eigenanalysis I Eigenanalysis II Advanced Topics in Linear Algebra Kepler s laws

9.1 Eigenanalysis I Eigenanalysis II Advanced Topics in Linear Algebra Kepler s laws Chapter 9 Eigenanalysis Contents 9. Eigenanalysis I.................. 49 9.2 Eigenanalysis II................. 5 9.3 Advanced Topics in Linear Algebra..... 522 9.4 Kepler s laws................... 537

More information

Online Exercises for Linear Algebra XM511

Online Exercises for Linear Algebra XM511 This document lists the online exercises for XM511. The section ( ) numbers refer to the textbook. TYPE I are True/False. Lecture 02 ( 1.1) Online Exercises for Linear Algebra XM511 1) The matrix [3 2

More information

Maths for Signals and Systems Linear Algebra in Engineering

Maths for Signals and Systems Linear Algebra in Engineering Maths for Signals and Systems Linear Algebra in Engineering Lectures 13-14, Tuesday 11 th November 2014 DR TANIA STATHAKI READER (ASSOCIATE PROFFESOR) IN SIGNAL PROCESSING IMPERIAL COLLEGE LONDON Eigenvectors

More information

EE263: Introduction to Linear Dynamical Systems Review Session 5

EE263: Introduction to Linear Dynamical Systems Review Session 5 EE263: Introduction to Linear Dynamical Systems Review Session 5 Outline eigenvalues and eigenvectors diagonalization matrix exponential EE263 RS5 1 Eigenvalues and eigenvectors we say that λ C is an eigenvalue

More information

Diagonalization of Matrices

Diagonalization of Matrices LECTURE 4 Diagonalization of Matrices Recall that a diagonal matrix is a square n n matrix with non-zero entries only along the diagonal from the upper left to the lower right (the main diagonal) Diagonal

More information

MIT Final Exam Solutions, Spring 2017

MIT Final Exam Solutions, Spring 2017 MIT 8.6 Final Exam Solutions, Spring 7 Problem : For some real matrix A, the following vectors form a basis for its column space and null space: C(A) = span,, N(A) = span,,. (a) What is the size m n of

More information

6 Linear Equation. 6.1 Equation with constant coefficients

6 Linear Equation. 6.1 Equation with constant coefficients 6 Linear Equation 6.1 Equation with constant coefficients Consider the equation ẋ = Ax, x R n. This equating has n independent solutions. If the eigenvalues are distinct then the solutions are c k e λ

More information

Eigenvectors and Hermitian Operators

Eigenvectors and Hermitian Operators 7 71 Eigenvalues and Eigenvectors Basic Definitions Let L be a linear operator on some given vector space V A scalar λ and a nonzero vector v are referred to, respectively, as an eigenvalue and corresponding

More information

Systems of Linear ODEs

Systems of Linear ODEs P a g e 1 Systems of Linear ODEs Systems of ordinary differential equations can be solved in much the same way as discrete dynamical systems if the differential equations are linear. We will focus here

More information

Math 21b Final Exam Thursday, May 15, 2003 Solutions

Math 21b Final Exam Thursday, May 15, 2003 Solutions Math 2b Final Exam Thursday, May 5, 2003 Solutions. (20 points) True or False. No justification is necessary, simply circle T or F for each statement. T F (a) If W is a subspace of R n and x is not in

More information

Linear Algebra Review (Course Notes for Math 308H - Spring 2016)

Linear Algebra Review (Course Notes for Math 308H - Spring 2016) Linear Algebra Review (Course Notes for Math 308H - Spring 2016) Dr. Michael S. Pilant February 12, 2016 1 Background: We begin with one of the most fundamental notions in R 2, distance. Letting (x 1,

More information

Linear Algebra and ODEs review

Linear Algebra and ODEs review Linear Algebra and ODEs review Ania A Baetica September 9, 015 1 Linear Algebra 11 Eigenvalues and eigenvectors Consider the square matrix A R n n (v, λ are an (eigenvector, eigenvalue pair of matrix A

More information

LINEAR ALGEBRA BOOT CAMP WEEK 2: LINEAR OPERATORS

LINEAR ALGEBRA BOOT CAMP WEEK 2: LINEAR OPERATORS LINEAR ALGEBRA BOOT CAMP WEEK 2: LINEAR OPERATORS Unless otherwise stated, all vector spaces in this worksheet are finite dimensional and the scalar field F has characteristic zero. The following are facts

More information

CAAM 335 Matrix Analysis

CAAM 335 Matrix Analysis CAAM 335 Matrix Analysis Solutions to Homework 8 Problem (5+5+5=5 points The partial fraction expansion of the resolvent for the matrix B = is given by (si B = s } {{ } =P + s + } {{ } =P + (s (5 points

More information