Homogeneous Constant Matrix Systems, Part II

4 Homogeneous Constant Matrix Systems, Part II

Let us now expand our discussions begun in the previous chapter, and consider homogeneous constant matrix systems whose matrices either have complex eigenvalues or have incomplete sets of eigenvectors.

4.1 Solutions Corresponding to Complex Eigenvalues

Eigenpairs of Real Matrices

Remember that the real and imaginary parts of a complex number z are those real numbers x and y, respectively, such that z = x + iy, and that the complex conjugate z* of z is z* = x - iy. All these notions extend to matrices having complex components in the obvious manner: Given a matrix M whose entries are complex, we define the complex conjugate M* of M to be the matrix formed by replacing each entry of M with that entry's complex conjugate, and the real and imaginary parts of M are simply the corresponding matrices of the real and imaginary parts of the entries of M.

It is easy to show that the standard identities for the conjugates of complex numbers (such as (z*)* = z, (az)* = (a*)(z*) and z = w ⇔ z* = w*) also hold for the analogous expressions involving matrices and column vectors. In particular, for any complex value r,

    (Au)* = (A*)(u*)  and  (ru)* = r* u* .

But if (as we've been assuming) the components of A are real numbers, then

    A* = [[a11, a12, ..., a1N], ..., [aN1, aN2, ..., aNN]]* = [[a11*, a12*, ..., a1N*], ..., [aN1*, aN2*, ..., aNN*]] = A ,

since each real entry ajk equals its own conjugate,

and thus,

    Au = ru  ⇒  (Au)* = (ru)*  ⇒  (A*)(u*) = r* u*  ⇒  A(u*) = r* u* ,

giving us the first part of the following lemma:

Lemma 4.1
Let A be a real N×N constant matrix. Then (r, u) is an eigenpair for A if and only if (r*, u*) is an eigenpair for A. Moreover, if r is not real, then the real and imaginary parts of u form a linearly independent set.

The proof of the second part (the claimed linear independence) is not difficult, but may distract us from our core narrative. So we'll place it in an appendix for later verification (see lemma 4.8 in section 4.6).

The Corresponding Real-Valued Solutions

Let's now look at the solutions to x' = Ax when the N×N matrix A has complex eigenvalues, still assuming that A is a real constant matrix. From lemma 4.1, we know that the complex eigenpairs occur in complex conjugate pairs; so let (r, u) and (r*, u*) be such a pair, with

    r = λ + iω ,  r* = λ - iω

and

    u = [a1 + ib1, a2 + ib2, ..., aN + ibN]^T = a + ib ,  u* = a - ib ,

where a = [a1, ..., aN]^T and b = [b1, ..., bN]^T are real column vectors.

So one fundamental pair of solutions to x' = Ax corresponding to the complex conjugate pair of eigenvalues λ + iω and λ - iω is

    x(t) = u e^{rt} = [a + ib] e^{(λ+iω)t}  and  x*(t) = u* e^{r*t} = [a - ib] e^{(λ-iω)t} ,

which we could rewrite in terms of sines and cosines since

    e^{(λ±iω)t} = e^{λt} e^{±iωt} = e^{λt} [cos(ωt) ± i sin(ωt)] .

Recall that we had a similar situation arise when solving single homogeneous linear differential equations with constant coefficients (see chapters 6 and 8). And here, just as there, we prefer to have solutions that do not involve complex values. So let's try adapting what we did there to get a corresponding fundamental pair of solutions to x' = Ax in terms of real-valued functions only.

We start by splitting our two solutions into their real and imaginary parts:

    [a ± ib] e^{(λ±iω)t} = [a ± ib] e^{λt} [cos(ωt) ± i sin(ωt)]
                         = e^{λt} [a cos(ωt) - b sin(ωt)] ± i e^{λt} [b cos(ωt) + a sin(ωt)] .

Letting

    x_R(t) = the real part of x(t) = e^{λt} [a cos(ωt) - b sin(ωt)]            (4.1a)

and

    x_I(t) = the imaginary part of x(t) = e^{λt} [b cos(ωt) + a sin(ωt)] ,     (4.1b)

we can write our two solutions to x' = Ax as

    x(t) = x_R(t) + i x_I(t)  and  x*(t) = x_R(t) - i x_I(t) .

In turn, we can easily solve this pair of equations for x_R and x_I, obtaining

    x_R(t) = (1/2) x(t) + (1/2) x*(t)  and  x_I(t) = -(i/2) x(t) + (i/2) x*(t) .

Being linear combinations of solutions to the homogeneous linear system x' = Ax, x_R and x_I must also be solutions to this system of differential equations. Note that both are in terms of real-valued functions only. And it is a fairly easy exercise (using the second part of lemma 4.1) to show that {x_R, x_I} is a linearly independent pair. It is this pair that we will use as the pair of real-valued solutions corresponding to eigenvalues λ ± iω.

Theorem 4.2
Let A be a real constant N×N matrix. Then the complex eigenvalues of A occur as complex conjugate pairs. Moreover, if u is an eigenvector corresponding to eigenvalue λ + iω (with ω ≠ 0), then a corresponding pair of real-valued solutions to x' = Ax is given by {x_R(t), x_I(t)} where

    x_R(t) = the real part of x(t) ,  x_I(t) = the imaginary part of x(t)

and

    x(t) = u e^{(λ+iω)t} = u e^{λt} [cos(ωt) + i sin(ωt)] .

By the way, memorizing formulas (4.1a) and (4.1b) for x_R(t) and x_I(t) is not necessary, or even advisable for most people. Instead, just expand u e^{(λ+iω)t} into its real and imaginary parts, and use those parts for x_R(t) and x_I(t).
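None of the following code appears in the original notes; it is a minimal sympy sketch of the recipe just described, using the values λ = 2, ω = 3 and u = [1, 0, i]^T from example 4.1 below as assumed illustration data.

    import sympy as sp

    t = sp.symbols('t', real=True)

    # Assumed illustration values (taken from example 4.1 below):
    # lambda = 2, omega = 3, eigenvector u = [1, 0, i]^T.
    lam, omega = 2, 3
    u = sp.Matrix([1, 0, sp.I])

    # The complex solution u e^{(lambda + i omega) t}, in rectangular form.
    x = (u * sp.exp((lam + sp.I*omega)*t)).applyfunc(sp.expand_complex)

    x_R = sp.simplify(x.applyfunc(sp.re))  # real part: one real-valued solution
    x_I = sp.simplify(x.applyfunc(sp.im))  # imaginary part: a second real-valued solution

    print(x_R)  # Matrix([[exp(2*t)*cos(3*t)], [0], [-exp(2*t)*sin(3*t)]])
    print(x_I)  # Matrix([[exp(2*t)*sin(3*t)], [0], [exp(2*t)*cos(3*t)]])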

Example 4.1: Let's solve the system

    x'(t) =  2x + 3z
    y'(t) = -4y
    z'(t) = -3x + 2z ,

that is, x' = Ax with A = [[2, 0, 3], [0, -4, 0], [-3, 0, 2]].

Setting up the characteristic equation:

    det[A - rI] = det [[2-r, 0, 3], [0, -4-r, 0], [-3, 0, 2-r]]
                = (-4 - r)[(2 - r)^2 + 9] = -(r + 4)[r^2 - 4r + 13] = 0 .

Hence, either

    r + 4 = 0  or  r^2 - 4r + 13 = 0 ,

which you can easily solve, obtaining

    r1 = -4 ,  r2 = 2 + i3  and  r3 = 2 - i3

as the eigenvalues for our matrix.

By now, you should have no problem in showing that u1 = [0, 1, 0]^T is an eigenvector corresponding to eigenvalue r1 = -4, giving us

    x1(t) = [0, 1, 0]^T e^{-4t}

as one solution to our system.

For the pair of solutions corresponding to the conjugate pair of eigenvalues r2 = 2 + i3 and r3 = 2 - i3, we first need to find an eigenvector u corresponding to r2 = 2 + i3. Letting u = [x, y, z]^T, we see that

    0 = [A - r2 I] u
      = [[2-(2+i3), 0, 3], [0, -4-(2+i3), 0], [-3, 0, 2-(2+i3)]] [x, y, z]^T
      = [[-3i, 0, 3], [0, -6-3i, 0], [-3, 0, -3i]] [x, y, z]^T .

So x, y and z must satisfy the algebraic system

    -3ix + 0y + 3z = 0
     0x - (6 + 3i)y + 0z = 0
    -3x + 0y - 3iz = 0 .

The second equation in this system clearly tells us that y = 0. The first and third equations are equivalent (if this is not obvious, multiply the third by i). So x and z must satisfy

    -3ix + 3z = 0 ,

which can be rewritten as

    z = ix .

Thus, all the eigenvectors corresponding to eigenvalue 2 + i3 are given by

    u = [x, y, z]^T = [x, 0, ix]^T = x [1, 0, i]^T

where x ≠ 0 is arbitrary. Picking x = 1, we have the single eigenvector

    u = [1, 0, i]^T .

So, a single solution corresponding to the conjugate pair of eigenvalues r2 = 2 + i3 and r3 = 2 - i3 is

    x(t) = u e^{r2 t} = [1, 0, i]^T e^{(2+i3)t} = [1, 0, i]^T e^{2t} [cos(3t) + i sin(3t)] .

To find the corresponding pair of real-valued solutions, we first expand out the last formula for x(t):

    x(t) = [1, 0, i]^T e^{2t} [cos(3t) + i sin(3t)]
         = e^{2t} [cos(3t) + i sin(3t), 0, i cos(3t) - sin(3t)]^T
         = e^{2t} [cos(3t), 0, -sin(3t)]^T + i e^{2t} [sin(3t), 0, cos(3t)]^T .

Taking the real and imaginary parts, we get

    x_R(t) = e^{2t} [cos(3t), 0, -sin(3t)]^T  and  x_I(t) = e^{2t} [sin(3t), 0, cos(3t)]^T .

So, we now have a set

    { x1(t), x_R(t), x_I(t) }
      = { [0, 1, 0]^T e^{-4t} , e^{2t} [cos(3t), 0, -sin(3t)]^T , e^{2t} [sin(3t), 0, cos(3t)]^T }

of solutions to our 3×3 homogeneous linear system of differential equations. It should be clear that this set is linearly independent (if it's not clear, compute the Wronskian at t = 0). Hence, it is a fundamental set of solutions to our system, and a general solution is given by

    x(t) = c1 [0, 1, 0]^T e^{-4t} + c2 e^{2t} [cos(3t), 0, -sin(3t)]^T + c3 e^{2t} [sin(3t), 0, cos(3t)]^T .
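As a quick numerical sanity check of this example (an addition, not part of the notes), the sketch below assumes numpy and confirms both the eigenvalues of A and that x_R really satisfies x' = Ax.

    import numpy as np

    A = np.array([[ 2.0,  0.0, 3.0],
                  [ 0.0, -4.0, 0.0],
                  [-3.0,  0.0, 2.0]])

    # The eigenvalues should be -4 and 2 +/- 3i, in some order.
    print(np.linalg.eigvals(A))

    # Check that x_R(t) = e^{2t}[cos 3t, 0, -sin 3t]^T solves x' = Ax by
    # comparing A x_R(t) against the hand-computed derivative at sample times.
    def x_R(t):
        return np.exp(2*t) * np.array([np.cos(3*t), 0.0, -np.sin(3*t)])

    def x_R_prime(t):
        # product rule on e^{2t}[cos 3t, 0, -sin 3t]^T
        return np.exp(2*t) * np.array([2*np.cos(3*t) - 3*np.sin(3*t),
                                       0.0,
                                       -2*np.sin(3*t) - 3*np.cos(3*t)])

    for t in (0.0, 0.7, 1.3):
        assert np.allclose(A @ x_R(t), x_R_prime(t))
    print("x_R solves x' = Ax at the sampled times")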

4.2 Two-Dimensional Phase Portraits with Complex Eigenvalues

Let us now see what patterns are formed by the trajectories of a 2×2 system x' = Ax when A has complex eigenvalues λ + iω and λ - iω with ω ≠ 0. As before, we'll let

    x(t) = [x(t), y(t)]^T .

The General Solutions

In the previous section, we saw that, if u = a + ib is any eigenvector corresponding to eigenvalue λ + iω, then {x_R, x_I} is a linearly independent pair of solutions to x' = Ax where

    x_R(t) = e^{λt} [a cos(ωt) - b sin(ωt)]  and  x_I(t) = e^{λt} [b cos(ωt) + a sin(ωt)] .

Hence,

    x(t) = c1 x_R(t) + c2 x_I(t)    (4.2)

is a general solution for our system x' = Ax.

However, for no apparent reason, let's also consider C x_R(t - t0) where C and t0 are two real constants. Using some basic trigonometric identities, you can easily confirm that

    C x_R(t - t0) = C e^{λ[t - t0]} [ a cos(ω[t - t0]) - b sin(ω[t - t0]) ]
                  = C e^{-λt0} cos(ωt0) e^{λt} [a cos(ωt) - b sin(ωt)]
                      + C e^{-λt0} sin(ωt0) e^{λt} [b cos(ωt) + a sin(ωt)]
                  = c1 x_R(t) + c2 x_I(t)

where

    c1 = ρ cos(θ)  and  c2 = ρ sin(θ)

with

    ρ = C e^{-λt0}  and  θ = ωt0 .

So, C x_R(t - t0) is a solution of x' = Ax for any pair of real constants C and t0. Moreover, if we are first given real values c1 and c2, we can then find real constants C and t0 so that

    C x_R(t - t0) = c1 x_R(t) + c2 x_I(t)

by simply taking the polar coordinates (ρ, θ) of the point with Cartesian coordinates (c1, c2), and setting

    t0 = θ/ω  and then  C = ρ e^{λt0} .
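The conversion between the constants (c1, c2) and (C, t0) is easy to test numerically. The following sketch is an addition (not from the notes), and the values of λ, ω, a and b in it are arbitrary assumed samples.

    import numpy as np

    lam, omega = 0.5, 2.0              # assumed sample values, not from the notes
    a = np.array([1.0, 1.0])           # assumed real part of an eigenvector
    b = np.array([0.0, -1.0])          # assumed imaginary part

    def x_R(t): return np.exp(lam*t) * (a*np.cos(omega*t) - b*np.sin(omega*t))
    def x_I(t): return np.exp(lam*t) * (b*np.cos(omega*t) + a*np.sin(omega*t))

    # Given c1 and c2, recover (C, t0) from the polar coordinates of (c1, c2).
    c1, c2 = -1.2, 0.8
    rho, theta = np.hypot(c1, c2), np.arctan2(c2, c1)
    t0 = theta / omega
    C = rho * np.exp(lam*t0)

    for t in np.linspace(-1.0, 1.0, 5):
        assert np.allclose(C * x_R(t - t0), c1*x_R(t) + c2*x_I(t))
    print("C x_R(t - t0) matches c1 x_R(t) + c2 x_I(t)")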

This means that we have two general solutions at our disposal.

Theorem 4.3
Let A be a real 2×2 matrix with complex eigenvalues, and let

    r = λ + iω  and  r* = λ - iω  with ω ≠ 0 ,

    x_R(t) = e^{λt} [a cos(ωt) - b sin(ωt)]  and  x_I(t) = e^{λt} [b cos(ωt) + a sin(ωt)]

where a and b are, respectively, the real part and the imaginary part of an eigenvector corresponding to r. Then two general solutions for x' = Ax are

    c1 x_R(t) + c2 x_I(t)  and  C x_R(t - t0)

where c1, c2, C and t0 are arbitrary (real) constants.

It immediately follows that, for the A's being considered now, all the trajectories of x' = Ax are simply scaled versions of the trajectory of x_R(t). Consequently, if we can get a reasonable sketch of one nonzero trajectory, all the others in our phase portrait can be obtained by just scaling the one already sketched. We will use this in constructing the phase portraits that follow. There are three cases to consider: λ = 0, λ > 0 and λ < 0.

Trajectories when λ = 0

If λ = 0, then e^{λt} = 1 and

    x_R(t) = a cos(ωt) - b sin(ωt) .

From a simple observation,

    x_R(t + 2π/ω) = a cos(ω[t + 2π/ω]) - b sin(ω[t + 2π/ω])
                  = a cos(ωt + 2π) - b sin(ωt + 2π)
                  = a cos(ωt) - b sin(ωt) = x_R(t) ,

we know the components of x_R(t) (call them x_R(t) and y_R(t)) are periodic with a period of 2π/ω. That is, (x_R(t), y_R(t)) repeatedly goes through the same points on the XY plane as t increases by increments of 2π/ω. With a little thought, you will realize that this means these trajectories are closed loops.

Moreover, by the same sort of calculations just done, you can easily verify that

    x_R(t + π/ω) = -x_R(t)  for all t ,

telling us that, for each point on the trajectory of x_R, there is another point on the trajectory with the origin right in the middle of these two points. So, not only is this trajectory a closed loop, this loop must be centered about the origin. In fact, this loop is an ellipse centered about the origin (trust me on this, or turn to section 4.6 where it is proven).

As a result of all this, sketching a phase portrait is fairly easy. First sketch a minimal direction field. Then sketch an elliptic trajectory about the origin using the direction field to get the general shape and the direction of travel. Finally, scale this trajectory to get a sequence of concentric elliptic trajectories. Include the dot at the origin for the critical point, and you have a phase portrait for this system. Note that we don't actually need the eigenvector u = a + ib to sketch a phase portrait. And the only reason you need the eigenvalue r = iω is to verify that it is imaginary!

Example 4.2: You can easily verify that

    A = [[1, -1], [2, -1]]

has eigenvalues r± = ±i with corresponding eigenvectors

    u± = a ± ib  where  a = [1, 1]^T  and  b = [0, -1]^T .

So general solutions to our system are given by

    x(t) = c1 x_R(t) + c2 x_I(t)  and  x(t) = C x_R(t - t0)

where

    x_R(t) = the real part of u+ e^{it} = a cos(t) - b sin(t) = [cos(t), cos(t) + sin(t)]^T

and

    x_I(t) = the imaginary part of u+ e^{it} = b cos(t) + a sin(t) = [sin(t), sin(t) - cos(t)]^T .

Since the eigenvalues are imaginary, we know the trajectories are ellipses centered about the origin. To get a better idea of the shape of these ellipses and the direction of travel along these ellipses, we'll compute x' at a few well-chosen points using x' = Ax:

    At (1, 0):   x' = [[1, -1], [2, -1]] [1, 0]^T  = [1, 2]^T
    At (0, 1):   x' = [[1, -1], [2, -1]] [0, 1]^T  = [-1, -1]^T
    At (1, 1):   x' = [[1, -1], [2, -1]] [1, 1]^T  = [0, 1]^T
    At (1, -1):  x' = [[1, -1], [2, -1]] [1, -1]^T = [2, 3]^T .

Then we sketch corresponding direction arrows at these points, and at other points using the symmetries in the direction field discussed in section 39.4. That gives the minimal direction field sketched in figure 4.1a. Sketching an ellipse that best matches this direction field then gives the trajectory also sketched in figure 4.1a. Alternatively, you can plot x_R(t) for a few well-chosen values of t, and get the direction of travel from the derivative of x_R(t) at some convenient value of t.
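For readers who want to confirm the closed loops numerically, here is a rough sketch (not part of the original notes) that integrates x' = Ax with a hand-rolled Runge-Kutta step over one period 2π, using the matrix from example 4.2, and checks that the trajectory returns to its starting point.

    import numpy as np

    A = np.array([[1.0, -1.0],
                  [2.0, -1.0]])
    print(np.linalg.eigvals(A))        # approximately +1j and -1j

    # lambda = 0 and omega = 1, so every trajectory should close up after 2*pi.
    def rk4_orbit(x0, T=2*np.pi, n=20000):
        x, h = np.array(x0, dtype=float), T/n
        for _ in range(n):
            k1 = A @ x
            k2 = A @ (x + 0.5*h*k1)
            k3 = A @ (x + 0.5*h*k2)
            k4 = A @ (x + h*k3)
            x = x + (h/6)*(k1 + 2*k2 + 2*k3 + k4)
        return x

    x0 = np.array([1.0, 0.0])
    print(np.allclose(rk4_orbit(x0), x0, atol=1e-6))   # True: the loop closes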

[Figure 4.1: (a) A minimal direction field with one matching elliptic trajectory, and (b) a phase portrait of the system in example 4.2.]

Scaling this one ellipse by various constants, adding the dot for the critical point and erasing the direction field then gives the phase portrait in figure 4.1b for our system.

In practice, we may not always need to sketch the trajectory for x_R(t) as carefully as we did in the above example. Often, it is sufficient to simply note that the trajectories are all concentric ellipses centered at the origin, and to determine the direction of travel on each by simply finding the direction arrow at one point.

In this case, the critical point (0, 0) is not called a node or saddle point. For obvious reasons, we call it a center. Observe that the corresponding equilibrium solution is stable, but not asymptotically stable, since the trajectory of any solution x(t) with x(t0) ≠ 0 for some t0 is an ellipse right around (0, 0).

By the way, when sketching a trajectory for a cos(ωt) - b sin(ωt), it is tempting to suspect that the vectors a and b directly correspond to the two axes of the elliptic trajectory. Well, in general, they don't. To see this, just contemplate the last example and figure 4.1a.

Trajectories when λ > 0

If λ > 0, then all the trajectories are given by scaling the single trajectory of

    x_R(t) = e^{λt} [a cos(ωt) - b sin(ωt)] .

As we just saw, the [a cos(ωt) - b sin(ωt)] factor in this formula for x_R(t) repeatedly traces out an ellipse as t varies. Multiplying by e^{λt} (which increases to +∞ as t → +∞ and decreases to 0 as t → -∞) converts this ellipse into a spiral that spirals outward as t → +∞ and spirals inward to (0, 0) as t → -∞, wrapping around the origin infinitely many times in either case. Whether the trajectories are spiralling out in the clockwise or counterclockwise direction can be determined by finding the direction of travel indicated by x' computed at a convenient point via x' = Ax. As usual, a minimal direction field may aid your sketching.

[Figure 4.2: (a) A minimal direction field with one matching spiral trajectory, and (b) a phase portrait of the system in example 4.3, in which λ > 0.]

Example 4.3: Let us just consider sketching a phase portrait for x' = Ax when

    A = [[2, 3], [-3, 2]] .

As you can easily verify, A has eigenvalues r = 2 ± i3. The fact that these eigenvalues are complex with positive real parts tells us that the trajectories of x' = Ax are spirals with the direction of travel being away from the origin. Computing x' at a few well-chosen points:

    At (1, 0):   x' = [[2, 3], [-3, 2]] [1, 0]^T  = [2, -3]^T
    At (0, 1):   x' = [[2, 3], [-3, 2]] [0, 1]^T  = [3, 2]^T
    At (1, 1):   x' = [[2, 3], [-3, 2]] [1, 1]^T  = [5, -1]^T
    At (1, -1):  x' = [[2, 3], [-3, 2]] [1, -1]^T = [-1, -5]^T .

From these vectors and the symmetries discussed in section 39.4, we get the minimal direction field sketched in figure 4.2a, which also includes a spiral trajectory matching that direction field. Note that this trajectory is spiralling outwards in a clockwise direction. Adding more spirals matching this minimal direction field (and erasing the direction field) then gives us the phase portrait in figure 4.2b.

For these systems, the origin is classified as a spiral point (for obvious reasons), and is clearly unstable.
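The reasoning used in the last two examples can be packaged as a small helper. The function below is one possible sketch (an addition, not from the notes); it relies on the fact that the velocity at the point (1, 0) is the first column of A, so the sign of the lower-left entry of A gives the rotation sense.

    import numpy as np

    def classify(A):
        """Rough classifier for a real 2x2 matrix with complex eigenvalues."""
        r = np.linalg.eigvals(A)
        if abs(r[0].imag) < 1e-12:
            return "real eigenvalues; not the complex case"
        lam = r[0].real
        kind = ("center" if abs(lam) < 1e-12
                else "spiral point, unstable" if lam > 0
                else "spiral point, asymptotically stable")
        # The velocity at (1, 0) is the first column of A, so the sign of
        # A[1, 0] gives the rotation sense around the origin.
        sense = "counterclockwise" if A[1, 0] > 0 else "clockwise"
        return kind + ", " + sense

    print(classify(np.array([[2.0, 3.0], [-3.0, 2.0]])))   # spiral point, unstable, clockwise
    print(classify(np.array([[1.0, -1.0], [2.0, -1.0]])))  # center, counterclockwise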

[Figure 4.3: A phase portrait for x' = Ax for a case where A has complex eigenvalues with negative real parts.]

Trajectories when λ < 0

The only significant difference between this case and the previous is that e^{λt} decreases to 0 as t → +∞ and increases to +∞ as t → -∞. All this does is to change the direction of travel. We still get spiral trajectories, but with the direction of travel on each being towards the origin. The result is a phase portrait similar to that sketched in figure 4.3. The origin is still called a spiral point, but you can clearly see that the equilibrium solution x(t) = 0 is asymptotically stable.

4.3 Second Solutions When the Set of Eigenvectors is Incomplete

Up to now, we've assumed that our matrix A has a complete set of eigenvectors. That is, we've assumed that, for each eigenvalue r,

    the algebraic multiplicity of r = the geometric multiplicity of r

where (just in case you forgot)

    the algebraic multiplicity of r = the multiplicity of r as a root of the characteristic polynomial det[A - rI] ,

while

    the geometric multiplicity of r = the number of eigenvectors in any basis for the eigenspace of r .

Now, let's suppose that an eigenvalue r has algebraic multiplicity of two or more, but geometric multiplicity one, with u being a corresponding eigenvector. That gives us one solution

    x1(t) = u e^{rt}

to x' = Ax. It should seem reasonable to expect that there is another solution x2 corresponding to this eigenvalue other than a constant multiple of x1. Let's look for this second solution.

Based on what we know about second solutions to homogeneous linear differential equations with constant coefficients (from chapter 6), we might suspect that a second solution can be generated by just multiplying the first by t,

    x2(t) = t x1(t) = t u e^{rt} .

Alas, it doesn't work. Try it yourself to see what happens. You get a leftover term that does not cancel out. But the attempt may lead you to try adding something to deal with that leftover term. So let us try

    x2(t) = [t u + w] e^{rt}

where w is a yet unknown vector:

    d/dt x2(t) = A x2
    ⇒  d/dt ( [t u + w] e^{rt} ) = A [t u + w] e^{rt}
    ⇒  [u + t r u + r w] e^{rt} = [t r u + A w] e^{rt}
    ⇒  u + t r u + r w = t r u + A w
    ⇒  u + r w = A w
    ⇒  A w - r w = u .

And since A w - r w = A w - r I w = [A - rI] w, we can rewrite our last equation as

    [A - rI] w = u .

This is what w must satisfy!

Example 4.4: Let us consider the system x' = Ax with

    A = [[3, 2], [0, 3]] .

Solving for the eigenvalues,

    0 = det[A - rI] = det [[3 - r, 2], [0, 3 - r]] = (r - 3)^2 ,

we get a single eigenvalue r = 3 with algebraic multiplicity 2. Any eigenvector u = [x, y]^T then must satisfy

    [0, 0]^T = [A - 3I] u = [[0, 2], [0, 0]] [x, y]^T = [2y, 0]^T ,

which tells us that y = 0 and x is arbitrary. So any eigenvector of this matrix can be written as

    u = [x, 0]^T = x [1, 0]^T .

Taking x = 1 gives us the single eigenvector

    u = [1, 0]^T .

Thus, r = 3 is an eigenvalue with algebraic multiplicity 2 but geometric multiplicity 1. One solution to x' = Ax is then given by

    x1(t) = u e^{rt} = [1, 0]^T e^{3t} .

According to our derivation above, another solution is given by

    x2(t) = [t u + w] e^{3t}

where w satisfies

    [A - 3I] w = u .

To find such a w, plug w = [x, y]^T into the last equation, and solve for x and y:

    [A - 3I] w = u  ⇒  [[0, 2], [0, 0]] [x, y]^T = [1, 0]^T  ⇒  [2y, 0]^T = [1, 0]^T .

Hence, y = 1/2, x is arbitrary, and

    w = [x, 1/2]^T = [0, 1/2]^T + x [1, 0]^T .

That is, w is [0, 1/2]^T plus any constant multiple of the one eigenvector u. For simplicity, let us take that constant multiple to be x = 0. Then, for our second solution, we have

    x2 = [t u + w] e^{rt} = ( t [1, 0]^T + [0, 1/2]^T ) e^{3t} = [t, 1/2]^T e^{3t} .

Clearly, x2 is not a constant multiple of x1. Hence, our system x' = Ax has

    { [1, 0]^T e^{3t} , [t, 1/2]^T e^{3t} }

as a fundamental pair of solutions, and

    x(t) = c1 [1, 0]^T e^{3t} + c2 [t, 1/2]^T e^{3t} = ( c1 [1, 0]^T + c2 [t, 1/2]^T ) e^{3t}

as a general solution.
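If you would rather let a computer grind out w, here is a minimal sympy sketch (not from the notes) reproducing the computation in this example.

    import sympy as sp

    A = sp.Matrix([[3, 2],
                   [0, 3]])
    u = sp.Matrix([1, 0])                      # the eigenvector found above
    x, y = sp.symbols('x y')
    w = sp.Matrix([x, y])

    eqs = (A - 3*sp.eye(2))*w - u              # [A - 3I]w = u, as residuals
    print(sp.solve(list(eqs), [x, y], dict=True))   # [{y: 1/2}]: y = 1/2, x free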

The key to our finding a second solution lay in finding a w satisfying

    [A - rI] w = u

when (r, u) is an eigenpair for A. You may have some concern about this being possible. After all, since r is an eigenvalue, det[A - rI] = 0, telling us that the matrix A - rI is not invertible. So what guarantee is there that the above key equation has a solution w? Well, for 2×2 matrices there is the following lemma, which you'll get to verify yourself in exercise 4.3 at the end of this chapter. (Much more general versions of this lemma will be discussed in section 4.5.)

Lemma 4.4
Let A be a 2×2 matrix with eigenpair (r, u). If r has algebraic multiplicity two, but geometric multiplicity of only one, then there is a w, which is not a constant multiple of u, such that

    [A - rI] w = u .

Do note that the w in the above lemma is not unique. After all, if [A - rI]w = u for some w, then, for any constant c, the vector w + cu also satisfies the equation, simply because

    [A - rI](w + cu) = [A - rI]w + c [A - rI]u = u + c 0 = u .

Back to differential equations: As an immediate corollary of the above lemma and the work done before the previous example, we have:

Theorem 4.5
Let A be a 2×2 matrix with eigenpair (r, u). If r has algebraic multiplicity two, but geometric multiplicity of only one, then the general solution to x' = Ax is given by

    x(t) = c1 u e^{rt} + c2 [t u + w] e^{rt}

where w is any vector satisfying

    [A - rI] w = u .

You may wonder if we can expand this discussion to find third and fourth solutions when the algebraic multiplicity is greater than two. The answer is yes, but let's look at the trajectories of the solutions we've found when A is a simple 2×2 matrix before discussing this expansion.
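This non-uniqueness is easy to confirm symbolically. The following sketch (an addition, not from the notes) reuses the matrix from example 4.4.

    import sympy as sp

    A = sp.Matrix([[3, 2], [0, 3]])
    u = sp.Matrix([1, 0])
    w = sp.Matrix([0, sp.Rational(1, 2)])
    c = sp.symbols('c')

    # [A - 3I](w + c u) = u for every c, because [A - 3I]u = 0.
    print(sp.simplify((A - 3*sp.eye(2))*(w + c*u) - u))   # Matrix([[0], [0]])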

4.4 Two-Dimensional Phase Portraits with Incomplete Sets of Eigenvectors

If A is a 2×2 matrix with a single eigenvalue r having algebraic multiplicity two, but geometric multiplicity one, then, as we just saw, the general solution to x' = Ax is

    x(t) = c1 u e^{rt} + c2 [t u + w] e^{rt}

where u is an eigenvector corresponding to eigenvalue r, and w satisfies [A - rI]w = u.

The trajectories of this when c2 = 0 are simply the straight-line trajectories corresponding to the eigenpair (r, u), and, along with dotting the critical point, these are the first trajectories to sketch in our phase portrait.

To simplify our sketching of the trajectories when c2 ≠ 0, let's rewrite the above formula for x as

    x(t) = c2 [ (α u + w) e^{rt} + t u e^{rt} ]  where  α = c1/c2 .

Note that

    x(0) = c2 (α u + w) .

This may help us make rough sketches of the trajectories. To help find the direction of travel and any tangents, we can use either the derivative,

    dx/dt = c2 [ ((rα + 1) u + r w) e^{rt} + r t u e^{rt} ] ,

or, to remove any distracting scaling effects, we can use any positive multiple of this. Let us use

    T(t) = (e^{-rt}/|t|) dx/dt = c2 [ ((rα + 1) u + r w)/|t| + (t/|t|) r u ] .

Note that, while

    lim_{t→-∞} T(t) = -c2 r u ,

we have

    lim_{t→+∞} T(t) = +c2 r u .

This tells us that, if t+ is a very large positive value and t- is a very large negative value, then the tangents to the trajectories at these two values of t are nearly parallel to the eigenvector u, but point in directly opposite directions.

Trajectories when r > 0

If r > 0 and c2 ≠ 0, then

    lim_{t→-∞} x(t) = lim_{t→-∞} c2 [ (α u + w) e^{rt} + t u e^{rt} ] = 0

[Figure 4.4: Trajectories for a system x' = Ax when all the eigenvectors of A are parallel to the X axis, assuming (a) the eigenvalue is positive, and (b) the eigenvalue is negative.]

and

    lim_{t→+∞} ||x(t)|| = lim_{t→+∞} || c2 [(α u + w) + t u] e^{rt} || = ∞ .

So each trajectory appears to start at (0, 0) and continues out infinitely far from the origin as t goes from -∞ to +∞. This, alone, tells us that the direction of travel on each nonequilibrium trajectory is away from the origin.

Now, in our general comments, we saw that, if x(t) is a solution with c2 ≠ 0, then

    x(0) = c2 (α u + w)  and  lim_{t→±∞} T(t) = ±c2 r u .

In particular, if c2 > 0, then the trajectory of x(t) appears to start at the origin, tangent to u but directed in the direction opposite to u. As t increases to t = 0, x(t) traces a path that bends in the direction of w to pass through the point corresponding to c2(α u + w). As t continues to get ever larger, this path continues to bend and slowly flattens out until the direction of travel is in nearly the same direction as u. If c2 < 0, then the trajectory of x(t) is just a mirror reflection of the trajectory of a corresponding solution having positive c2.

The end result is a phase portrait similar to that sketched in figure 4.4a. This is a phase portrait for the system x' = Ax with

    A = [[3, 2], [0, 3]]

from example 4.4. In that example, we found that A only had r = 3 as an eigenvalue, and that all eigenvectors were constant multiples of u = [1, 0]^T. We also obtained w = [0, 1/2]^T.

Do note, however, that you don't really need to know w to sketch this phase portrait. You can (as we've done several times before) start by sketching the straight-line trajectories for the solutions ±u e^{rt}, add a minimal direction field, and then use this direction field to sketch all the

other nonequilibrium trajectories. Just keep in mind that these trajectories start at the origin tangent to u and bend around to ultimately be headed in the opposite direction. And don't forget the dot at the origin for the equilibrium solution.

In these cases, the equilibrium solution x(t) = 0 is clearly an unstable equilibrium. The critical point (0, 0) is classified as a node. Those who wish to be more precise refer to it as an improper node.

By the way, we've not sketched enough of the trajectories in figure 4.4a to really see that the tangents become parallel to the eigenvector as |t| gets large. That is because the magnitude of x(t) increases like e^{rt}, much faster than T(t) approaches ±c2 r u. Consequently, in practice, if you use a scale large enough to see the tangents being almost parallel to the eigenvector at large values of |t|, then the behavior of the trajectories near the origin becomes almost invisible.

Trajectories when r < 0

The analysis just done assuming r > 0 can be redone with r < 0. The only real difference is that the directions of travel will be reversed. With r < 0, the direction of travel will be towards the origin. The result will be a phase portrait similar to that in figure 4.4b. The origin is a node (again, "improper"), and the equilibrium solution x(t) = 0 is now a stable equilibrium.

Trajectories when r = 0

This is left as an exercise (exercise 4.4 at the end of this chapter).

4.5 Complete Sets of Solutions Corresponding to Incomplete Sets of Eigenvectors

It is relatively easy to extend the ideas presented in section 4.3 for finding a second solution corresponding to an eigenvalue with geometric multiplicity one but algebraic multiplicity of two. Well, it's relatively easy at least as long as the geometric multiplicity remains one and we know enough linear algebra. To begin, let's state the appropriate generalization of lemma 4.4:

Lemma 4.6
Let A be an N×N matrix with eigenpair (r, u). If r has algebraic multiplicity m greater than one, but geometric multiplicity of only one, then there is a linearly independent set of m vectors

    { w^m, w^{m-1}, ..., w^2, w^1 }  with  w^1 = u

such that

    [A - rI] w^k = w^{k-1}  for  k = 2, 3, ..., m .

However, there is no v such that [A - rI]v = w^m.

Unfortunately, the arguments outlined in exercise 4.3 for proving lemma 4.4 do not readily extend to the more general cases considered by the above lemma. To verify this lemma, we really should delve deeper into the general theory of linear algebra and develop such concepts as

the Jordan form of a matrix and generalized eigenvectors, concepts rarely mentioned in elementary linear algebra courses. But that would take us far outside the appropriate bounds of this text. So either trust me, or (better yet) take a more advanced course in linear algebra and learn how to prove the above lemma yourself.

Back to differential equations: Let A be an N×N matrix with eigenpair (r, u), and assume r has algebraic multiplicity m greater than one, but geometric multiplicity of only one. By the arguments already given in section 4.3, it immediately follows that, if the w^k's are as given in the above lemma, then the first two solutions corresponding to eigenvalue r are

    x1(t) = w^1 e^{rt}  and  x2(t) = [ t w^1 + w^2 ] e^{rt}

where w^1 = u, and w^2 is any column vector satisfying

    [A - rI] w^2 = w^1 .

And if the algebraic multiplicity of r is greater than two, then you can easily extend the computations done in section 4.3 to show that a third solution is given by

    x3(t) = [ (t^2/2) w^1 + t w^2 + w^3 ] e^{rt}

where w^3 is any column vector satisfying

    [A - rI] w^3 = w^2

(see exercise 4.5). And once you've done that, the road to generating a linearly independent set of m solutions to x' = Ax should be clear, at least when r has algebraic multiplicity m and geometric multiplicity one. Traveling that road is left to the interested reader.

But what if the geometric multiplicity is greater than one? Then we must appeal to the following generalization of the last lemma:

Lemma 4.7
Let r be an eigenvalue of an N×N matrix A, and assume the algebraic and geometric multiplicities of r are m and μ, respectively, with m > μ > 1. Then for some positive integer K, there are K positive integers γ1, γ2, ..., γK, along with corresponding linearly independent sets

    { w^{k,γk}, w^{k,γk-1}, ..., w^{k,1} }  for  k = 1, 2, ..., K ,

such that:

1. γ1 + γ2 + ... + γK = m.

2. The set of all the w^{k,j}'s is linearly independent.

3. For k = 1, ..., K,
   (a) (r, w^{k,1}) is an eigenpair for A, and
   (b) [A - rI] w^{k,j} = w^{k,j-1} for j = 2, ..., γk.

Again, you either must trust me on this lemma, or take a suitable course on linear algebra.

When the situation is as described in the above lemma, then each set

    { w^{k,γk}, w^{k,γk-1}, ..., w^{k,1} }

generates a linearly independent set of solutions to x' = Ax,

    { x^{k,γk}, x^{k,γk-1}, ..., x^{k,1} } ,

following the scheme just discussed for the case where the geometric multiplicity is one:

    x^{k,1}(t) = w^{k,1} e^{rt} ,
    x^{k,2}(t) = [ t w^{k,1} + w^{k,2} ] e^{rt} ,
    x^{k,3}(t) = [ (t^2/2) w^{k,1} + t w^{k,2} + w^{k,3} ] e^{rt} ,
    ...

One major difference, however, is that you do not know the initial eigenvector w^{k,1}. All you know is that it must be some linear combination

    w^{k,1} = a1 u1 + a2 u2 + ... + aμ uμ

of the μ eigenvectors u1, u2, ..., uμ originally found corresponding to eigenvalue r. So finding a suitable linear combination has to be part of solving

    [A - rI] w^{k,2} = w^{k,1}

for each k. In addition, you do not initially know the γk's. So you just have to iteratively solve for the w^{k,j}'s until you reach the point where

    [A - rI] w^{k,j} = w^{k,j-1}

has no solution w^{k,j}. You then know γk = j - 1 for that k and j.

Example 4.5: Consider the system x' = Ax where

    A = [[4, 1, 0], [0, 4, 0], [0, 1, 4]] .

It is easily verified that

    det(A - rI) = (4 - r)^3 .

So this matrix has one eigenvalue, r = 4, and this eigenvalue has algebraic multiplicity three. It is also easily seen that all eigenvectors are linear combinations of the two eigenvectors

    u1 = [1, 0, 0]^T  and  u2 = [0, 0, 1]^T .

So the geometric multiplicity of r = 4 is two. From this and our last lemma, it follows that we have a linearly independent set

    { w^{1,1}, w^{2,1}, w^{2,2} }

with w^{1,1} and w^{2,1} being eigenvectors of A (hence, linear combinations of u1 and u2), and

    [A - 4I] w^{2,2} = w^{2,1} .

We also know that the corresponding fundamental set of solutions to x' = Ax is { x^{1,1}, x^{2,1}, x^{2,2} } where

    x^{1,1}(t) = w^{1,1} e^{4t} ,  x^{2,1}(t) = w^{2,1} e^{4t}  and  x^{2,2}(t) = [ t w^{2,1} + w^{2,2} ] e^{4t} .

Now, consider the equation

    [A - 4I] w^{2,2} = w^{2,1} .

Since w^{2,1} must be an eigenvector, it must be a linear combination of the two eigenvectors already obtained,

    w^{2,1} = a u1 + b u2

for some, yet unknown, choice of a and b. Letting w^{2,2} = [x, y, z]^T, we now have

    [A - 4I] w^{2,2} = a u1 + b u2
    ⇒  [[0, 1, 0], [0, 0, 0], [0, 1, 0]] [x, y, z]^T = a [1, 0, 0]^T + b [0, 0, 1]^T
    ⇒  [y, 0, y]^T = [a, 0, b]^T .

Thus, x, y, z, a and b can be any values satisfying

    y = a = b .

Taking a = b = y = 1 and x = z = 0 then gives us

    w^{2,1} = u1 + u2 = [1, 0, 1]^T  and  w^{2,2} = [x, y, z]^T = [0, 1, 0]^T .

For w^{1,1} we can take any linear combination of u1 and u2 such that { w^{1,1}, w^{2,1}, w^{2,2} } is linearly independent. An obvious choice is

    w^{1,1} = u1 - u2 = [1, 0, -1]^T .

Finally, then, we can write out the general solution to x' = Ax as

    x(t) = c1 x^{1,1}(t) + c2 x^{2,1}(t) + c3 x^{2,2}(t)

where

    x^{1,1}(t) = w^{1,1} e^{4t} = [1, 0, -1]^T e^{4t} ,
    x^{2,1}(t) = w^{2,1} e^{4t} = [1, 0, 1]^T e^{4t}

and

    x^{2,2}(t) = [ t w^{2,1} + w^{2,2} ] e^{4t} = ( t [1, 0, 1]^T + [0, 1, 0]^T ) e^{4t} .
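The bookkeeping in this example can also be delegated to sympy. Below is a rough sketch (an addition, not part of the notes) that imposes [A - 4I]w^{2,2} = a u1 + b u2 and solves for x, y, z, a and b simultaneously, recovering the condition a = b found above.

    import sympy as sp

    A = sp.Matrix([[4, 1, 0],
                   [0, 4, 0],
                   [0, 1, 4]])
    B = A - 4*sp.eye(3)

    u1 = sp.Matrix([1, 0, 0])    # the two independent eigenvectors
    u2 = sp.Matrix([0, 0, 1])

    x, y, z, a, b = sp.symbols('x y z a b')
    w22 = sp.Matrix([x, y, z])

    # Solve [A - 4I] w22 = a u1 + b u2; solvability forces a = b.
    eqs = B*w22 - (a*u1 + b*u2)
    print(sp.solve(list(eqs), [x, y, z, a, b], dict=True))   # [{a: y, b: y}]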

Finally, we should note that the eigenvalue in question may be complex, r = λ + iω. If A is real (as we've been assuming), then all of the above still holds, and, following the discussion at the start of this chapter, we can find a complete set of real-valued solutions to x' = Ax corresponding to r and r* by taking the real and imaginary parts of the complex-valued solutions generated by the procedures just discussed.

4.6 Appendix for Sections 4.1 and 4.2

In this section, we will verify two claims made in sections 4.1 and 4.2; namely,

1. the claim (made in lemma 4.1) that the real and imaginary parts of a complex eigenvector form a linearly independent set, and

2. the claim (made in section 4.2) that the trajectory of x_R is an ellipse about the origin.

Linear Independence of Two Vectors

Here is the claim and the proof:

Lemma 4.8
Let (r, u) be an eigenpair for a real N×N matrix A, and assume r is not real. Then the real and imaginary parts of u form a linearly independent set.

PROOF: Let λ and ω be, respectively, the real and imaginary parts of r, and let a and b be the real and imaginary parts of u, so that

    u = a + ib  and  u* = a - ib ,

and

    r - r* = (λ + iω) - (λ - iω) = 2iω .

Since we are assuming r is not real, we must have

    r - r* ≠ 0 .    (4.3)

For the moment, suppose {a, b} is not linearly independent. This would mean that either b = 0 or that a is a constant multiple of b, a = γb. If b = 0, then

    u = a = u* .

On the other hand, if a = γb, then

    u = a + ib = (γ + i)b  and  u* = a - ib = (γ - i)b .

From the fact that eigenvectors are nonzero, we know that u, u* and γ ± i are nonzero. Hence, we would have

    u = [(γ + i)/(γ - i)] u* .

Either way, if {a, b} is not linearly independent, then u must be a constant nonzero multiple of u*, and, thus, u* must be an eigenvector corresponding to r as well as to r*. But then,

    (r - r*) u* = r u* - r* u* = A u* - A u* = 0 ,

which, since u* ≠ 0, tells us that

    r - r* = 0 ,

contrary to inequality (4.3), which followed immediately from the basic assumption that r is not real. So it is not possible to have both that r is not real and {a, b} is not linearly independent. If r is not real, then it cannot be true that {a, b} is not linearly independent; that is, if r is not real, {a, b} must be linearly independent.

Ellipticity of Trajectories

In section 4.2, it was claimed that the trajectory of

    x_R(t) = a cos(ωt) - b sin(ωt)

is an ellipse when (iω, u) is an eigenpair and a and b are the real and imaginary parts of u. Since, as we just saw, {a, b} is linearly independent (and, hence, so is {a, -b}), this claim will follow immediately from the following lemma.

Lemma 4.9
Let x(t) = [x(t), y(t)]^T be given by

    x(t) = v cos(ωt) + w sin(ωt)

where ω is a nonzero real number and {v, w} is a linearly independent pair of vectors whose components are real numbers. Then the path traced out by (x(t), y(t)) on the Cartesian plane as t varies is an ellipse centered at the origin.

Our approach to verifying this lemma will be to verify that the x and y components of x(t) satisfy an equation for such an ellipse. So let us start with a little review (and possibly an extension) of what you know about equations for ellipses.

Coordinate Equations for Ellipses

Recall that the canonical equation for an ellipse centered at the origin and with its axes parallel to the coordinate axes is

    x^2/α^2 + y^2/β^2 = 1

where α and β are nonzero real numbers. With the same assumptions on α and β,

    x^2/α^2 - y^2/β^2 = 1  and  -x^2/α^2 + y^2/β^2 = 1

are equations for symmetric pairs of hyperbolas about the origin, and the graph of

    -x^2/α^2 - y^2/β^2 = 1

is empty (i.e., contains no points) since no (x, y) can satisfy this equation. Since these are the only possibilities, we have:

Lemma 4.10
Let a and c be real numbers, and assume the polynomial equation

    a x^2 + c y^2 = 1

has a nonempty graph. Then:

1. If ac > 0, the graph is an ellipse centered at the origin.

2. If ac < 0, the graph is a pair of hyperbolas.

Now assume we have a nonempty graph of a polynomial equation of the form

    A x^2 + B xy + C y^2 = 1    (4.4)

where A, B and C are real numbers. To deal with this, we invoke a new XY-coordinate system obtained by rotating the original xy-coordinate system by an angle θ as indicated in figure 4.5. If you check, you will find that the (x, y) and (X, Y) coordinates of any single point are related by

    [x, y]^T = [[cos(θ), -sin(θ)], [sin(θ), cos(θ)]] [X, Y]^T .    (4.5)

Using this, we can rewrite our polynomial equation (equation (4.4)) in terms of X and Y, obtaining

    a X^2 + b XY + c Y^2 = 1    (4.6)

where a, b and c are formulas involving cos(θ) and sin(θ). Now, the clever choice for θ is the one that makes b = 0, thus giving

    a X^2 + c Y^2 = 1    (4.7)

as the polynomial equation of our graph in terms of the XY-coordinate system. As an exercise (exercise 4.1, following the next lemma), you can show that this choice is possible, and that by this choice

    ac = AC - B^2/4 .
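Before moving on, a symbolic check may be reassuring. The sketch below (an addition, assuming sympy) rotates coordinates as in equation (4.5), reads off the new coefficients a, b and c, and verifies that ac - b^2/4 equals AC - B^2/4 for every θ, which gives part (c) of the exercise once θ is chosen so that b = 0.

    import sympy as sp

    A, B, C, theta, X, Y = sp.symbols('A B C theta X Y')

    # Rotation (4.5): (x, y) in terms of the rotated coordinates (X, Y).
    x = sp.cos(theta)*X - sp.sin(theta)*Y
    y = sp.sin(theta)*X + sp.cos(theta)*Y

    Q = sp.expand(A*x**2 + B*x*y + C*y**2)     # the quadratic form in X and Y
    P = sp.Poly(Q, X, Y)
    a = P.coeff_monomial(X**2)
    b = P.coeff_monomial(X*Y)
    c = P.coeff_monomial(Y**2)

    # a*c - b^2/4 equals A*C - B^2/4 for EVERY theta.
    print(sp.trigsimp(sp.expand(a*c - b**2/4 - (A*C - B**2/4))))   # 0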

[Figure 4.5: An XY-coordinate system obtained by rotating the xy-coordinate system by an angle θ.]

This and lemma 4.10 then yield:

Lemma 4.11
Let A, B and C be real numbers, and assume the polynomial equation

    A x^2 + B xy + C y^2 = 1

has a nonempty graph. Then:

1. If AC - B^2/4 > 0, the graph is an ellipse centered at the origin.

2. If AC - B^2/4 < 0, the graph is a pair of hyperbolas.

Exercise 4.1:
a: Find the formulas for a, b and c when using the change of variables given by equation (4.5) to convert equation (4.4) to equation (4.6).

b: Show that equation (4.4) reduces to equation (4.7) under the change of variables given by equation (4.5) if

    θ = (1/2) Arctan( B/(A - C) )  if A ≠ C ,  and  θ = π/4  if A = C .

c: Assume that θ is as above, and verify that

    AC - B^2/4 = ac .

Verifying Lemma 4.9

Now consider

    x(t) = v cos(ωt) + w sin(ωt)

assuming ω is some nonzero real number, and

    v = [v1, v2]^T  and  w = [w1, w2]^T

is a linearly independent pair of vectors whose components are real numbers. Rewriting the above formula for x in component form, we see that

    [x(t), y(t)]^T = [v1 cos(ωt) + w1 sin(ωt), v2 cos(ωt) + w2 sin(ωt)]^T .

That is, (x, y) is the point on the trajectory given by x(t) if and only if

    [x, y]^T = M [cos(ωt), sin(ωt)]^T  where  M = [[v1, w1], [v2, w2]] .    (4.8)

Since the two columns of M are the column vectors v and w, and {v, w} is linearly independent, we know (from linear algebra) that

    δ = det(M) = v1 w2 - w1 v2 ≠ 0 .

This tells us M is invertible. The inverse of M is easily found, and using it, we can then invert relation (4.8), obtaining

    [cos(ωt), sin(ωt)]^T = (1/δ) [[w2, -w1], [-v2, v1]] [x, y]^T = (1/δ) [w2 x - w1 y, -v2 x + v1 y]^T .

From this and our favorite trigonometric identity, we have

    1 = cos^2(ωt) + sin^2(ωt) = (1/δ^2)(w2 x - w1 y)^2 + (1/δ^2)(-v2 x + v1 y)^2 ,

which, after a little algebra, simplifies to

    A x^2 + B xy + C y^2 = 1

with

    A = [(w2)^2 + (v2)^2]/δ^2 ,  B = -2[w1 w2 + v1 v2]/δ^2  and  C = [(w1)^2 + (v1)^2]/δ^2 .

From lemma 4.11 we know this trajectory is an ellipse centered at the origin if AC - B^2/4 > 0. Remarkably, when you carry out the details of the computation, you get

    AC - B^2/4 = (1/δ^4)[ ((w1)^2 + (v1)^2)((w2)^2 + (v2)^2) - (w1 w2 + v1 v2)^2 ]
               = (1/δ^4)(v1 w2 - w1 v2)^2 = 1/δ^2 > 0 .

So, yes, the trajectory is an ellipse centered at the origin, and lemma 4.9 is confirmed.

Exercise 4.2: Fill in the details of the computations in the above proof.
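To see this computation in action numerically, here is a short sketch (not from the notes); the vectors v and w and the value of ω are arbitrary assumed samples.

    import numpy as np

    omega = 2.0                         # assumed sample value
    v = np.array([2.0, 0.5])            # assumed linearly independent pair
    w = np.array([-1.0, 1.5])

    delta = v[0]*w[1] - w[0]*v[1]       # det of M = [v w]; nonzero by independence

    # Coefficients from the proof above.
    A = (w[1]**2 + v[1]**2) / delta**2
    B = -2*(w[0]*w[1] + v[0]*v[1]) / delta**2
    C = (w[0]**2 + v[0]**2) / delta**2

    for t in np.linspace(0.0, 3.0, 7):
        x, y = v*np.cos(omega*t) + w*np.sin(omega*t)
        assert np.isclose(A*x*x + B*x*y + C*y*y, 1.0)

    print("A C - B^2/4 =", A*C - B*B/4)   # equals 1/delta^2 > 0: an ellipse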

Additional Exercises

(Many problems to be added!)

4.3. The main goal of this exercise is to verify lemma 4.4 of section 4.3.

a. Assume A is any N×N matrix, and let B = A - γI for some scalar γ. Show that (r, u) is an eigenpair for A if and only if (ρ, u) is an eigenpair for B with r = ρ + γ.

b. For the rest of this exercise, assume A is a 2×2 constant matrix, and set B = A - rI where r is an eigenvalue for A with corresponding eigenvector u = [u1, u2]^T.

   i. Using the first part of this exercise set, verify that Bu = 0.

   ii. Then, using the fact that Bu = 0, find formulas for the components of B.

   iii. Using the formulas just found for the components of B, show that there is a vector a = [a1, a2]^T such that

        Bv = (u1 v2 - u2 v1) a  for each  v = [v1, v2]^T .

   iv. Assume that the a just found is nonzero, and, using the above, verify that it is an eigenvector for both B and A, and determine the eigenvalue r2 for A corresponding to eigenvector a.

   v. Let (r2, a) be as just found, and suppose {u, a} is linearly independent. Show that r2 ≠ r.

   vi. Let (r2, a) be as just found, and suppose a = 0. Show that there is a linearly independent pair {u, v} where (r, v) is an eigenpair for A.

c. Let A, (r, u) and a be as above, and assume, in addition, that eigenvalue r has algebraic multiplicity two but geometric multiplicity one. Using the above:

   i. Show that a must be a nonzero constant multiple of u.

   ii. Verify that there is a nonzero w satisfying

        (A - rI)w = u

      with w not being a constant multiple of u.

Additional Exercises Chapter & Page: 4 7 44 Assume r = is the only eigenvalue for some real constant matrix A Further assume that every eigenvector for A is a multiple of u, and recall from exercise 397 on page 39 7 that every point on the straight line parallel to u through the origin is a critical point for x = Ax a Describe the general solution to x = Ax b Describe the trajectories of the nonequilibrium solutions c For each of the following choices of A, sketch a phase portrait for x = Ax i TBD ii TBD iii TBD 45 Let (r, u) be an eigenpair for an N N matrix A Assume r has geometric multiplicity one but algebraic multiplicity three or greater We know that two solutions to x = Ax are given by where w is any vector satisfying x (t) = ue rt and x (t) = [ tu + w ] e rt [A ri]w = u a Set x 3 (t) = [ t u + αtw + w ] e rt and derive the fact that x 3 is a solution to x = Ax if α = and [A ri] w = w b Assume the algebraic multiplicity of r is at least four, and derive the conditions for α, β and w 3 so that is a solution to x = Ax x 4 (t) = [ t 3 u + αt w + βtw + w 3] e rt

Some Answers to Some of the Exercises

WARNING! Most of the following answers were prepared hastily and late at night. They have not been properly proofread! Errors are likely!

4.3 b iv. r2 = r + u1 a2 - u2 a1

4.4 a. x(t) = c1 u + c2 [t u + w] where w is any solution to Aw = u

4.4 b. They are the lines parallel to u that do not pass through the origin.