Chapter 2 Linear autonomous ODEs

2.1 Linearity

Linear ODEs form an important class of ODEs. They are characterized by the fact that the vector field f : R^m × R^p × R → R^m is linear at constant values of the parameters and time, that is, for all x, y ∈ R^m and a, b ∈ R,

    f(ax + by, λ, t) = a f(x, λ, t) + b f(y, λ, t).

This implies that the vector field may be represented by an m × m matrix A(λ, t), so that dx/dt = A(λ, t)x. In this chapter we focus on the case that the matrix A does not depend on time (so that the ODE is autonomous):

    dx/dt = A(λ)x.

It turns out that:

Proposition 2.1.1. The flow of an ODE is linear if and only if the corresponding vector field is linear.

Proof. The flow Φ^{t,t_0}_λ is linear if Φ^{t,t_0}_λ(ax + by) = a Φ^{t,t_0}_λ(x) + b Φ^{t,t_0}_λ(y) for all a, b ∈ R and x, y ∈ R^m. From the formal definition of the flow map as the solution of the ODE through a given initial condition, it follows immediately that linearity of Φ in x is equivalent to linearity of f in x.

2.2 Explicit solution: exponential

We consider the linear autonomous ODE

    dx/dt = Ax,    (2.2.1)

where x ∈ R^m and A ∈ gl(m, R) is an m × m matrix with real entries. In the case that m = 1, when A ∈ R, the flow map Φ^t of the linear ODE is equal to multiplication by exp(At). It turns out that we can write the flow map of a general linear autonomous ODE in this same form, if we define the exponential of a linear map as follows.

Definition 2.2.1. Let L ∈ gl(m, R); then

    exp(L) := Σ_{k=0}^∞ (1/k!) L^k.

You should recognize the familiar Taylor expansion of the exponential function in the case that m = 1. In the formula, L^k represents the matrix product of k consecutive copies of L (with L^0 = I). It follows that exp(L) is again a linear map, which can be represented by an m × m matrix. Moreover, it is invertible, with inverse exp(−L).

Technically, one may worry about whether the entries of the matrix exp(L) are indeed all finite. It is in fact not too hard to find an upper bound for each of the entries of exp(L), given that the entries of the matrix L are finite (see the exercises).

Theorem 2.2.2. The linear autonomous ODE (2.2.1) with initial condition x(0) = y has the unique solution x(t) = exp(At)y.

Proof. We use the definition of the derivative:

    d/dt exp(tA) = lim_{h→0} [exp((t+h)A) − exp(tA)]/h
                 = lim_{h→0} exp(tA) [exp(hA) − I]/h
                 = lim_{h→0} exp(tA) (1/h) Σ_{n=1}^∞ (hA)^n/n!
                 = lim_{h→0} exp(tA) (1/h) [ hA + Σ_{n=2}^∞ (hA)^n/n! ]
                 = lim_{h→0} exp(tA) [ A + h Σ_{j=0}^∞ h^j A^{j+2}/(j+2)! ]
                 = A exp(tA),

where we use that A commutes with exp(tA). Since y is constant, it thus follows that

    d/dt x(t) = A exp(At) y = A x(t).

To show that the solution is unique, suppose that x̃(t) is a solution. Using the product rule, we calculate

    d/dt [exp(−tA) x̃(t)] = [d/dt exp(−tA)] x̃(t) + exp(−tA) dx̃(t)/dt = −A exp(−tA) x̃(t) + exp(−tA) A x̃(t) = 0.

Hence x̃(t) = exp(tA) c, where c ∈ R^m is some constant. We observe that this constant is entirely determined to be equal to y by the initial condition x(0) = y. Hence the solution x(t) = exp(At) y is unique.

It is easy to extend this result to the case of an initial condition x(τ) = y, yielding x(t) = exp(A(t − τ)) y.

Corollary 2.2.3 (Existence and uniqueness). For the linear autonomous ODE (2.2.1), through each point y of the phase space there passes exactly one solution at time t = τ, namely x(t) = exp(A(t − τ)) y.
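As a quick numerical illustration of Theorem 2.2.2 (not part of the original notes; the matrix A and the initial condition y below are arbitrary choices), the following sketch compares exp(At)y, computed with SciPy's matrix exponential, against a direct numerical integration of dx/dt = Ax.

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

# Illustrative matrix and initial condition (arbitrary choices).
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
y0 = np.array([1.0, 0.0])
t = 1.5

# Flow map applied to y0: x(t) = exp(At) y0.
x_exact = expm(A * t) @ y0

# Independent check: integrate dx/dt = Ax numerically.
sol = solve_ivp(lambda s, x: A @ x, (0.0, t), y0, rtol=1e-10, atol=1e-12)
x_numeric = sol.y[:, -1]

print(np.allclose(x_exact, x_numeric, atol=1e-6))  # expected: True
```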

2.3 Computation of the flow map

2.3.1 The planar case

We consider the calculation of the flow map exp(At). Before dealing with matters in higher dimensions, we first focus on the case of ODEs on the plane R². Recall the definition of the exponential of a linear map (matrix) in Definition 2.2.1.

Example 2.3.1 (Hyperbolic). Let

    A = ( λ_1  0 ; 0  λ_2 )

with λ_1, λ_2 ∈ R. Then At = ( λ_1 t  0 ; 0  λ_2 t ) and A^k = ( λ_1^k  0 ; 0  λ_2^k ), so that

    exp(At) = Σ_{k=0}^∞ (At)^k/k! = ( Σ_k λ_1^k t^k/k!   0 ; 0   Σ_k λ_2^k t^k/k! ) = ( e^{λ_1 t}  0 ; 0  e^{λ_2 t} ).

Example 2.3.2 (Elliptic). Let

    A = ( 0  −β ; β  0 )

with β ∈ R. Then A² = ( −β²  0 ; 0  −β² ) = −β² I, so that for all k ∈ N,

    A^{2k} = (−1)^k β^{2k} I,    A^{2k+1} = (−1)^k β^{2k+1} ( 0  −1 ; 1  0 ),

and therefore

    exp(At) = ( Σ_k (−1)^k (βt)^{2k}/(2k)!    −Σ_k (−1)^k (βt)^{2k+1}/(2k+1)! ; Σ_k (−1)^k (βt)^{2k+1}/(2k+1)!    Σ_k (−1)^k (βt)^{2k}/(2k)! ) = ( cos βt  −sin βt ; sin βt  cos βt ).
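The two closed forms above are easy to check numerically; a minimal sketch (the values of λ_1, λ_2, β and t are assumed for illustration):

```python
import numpy as np
from scipy.linalg import expm

t, lam1, lam2, beta = 0.8, -1.0, 2.0, 3.0   # arbitrary illustrative values

# Hyperbolic case: A = diag(lam1, lam2).
A_hyp = np.diag([lam1, lam2])
assert np.allclose(expm(A_hyp * t), np.diag([np.exp(lam1 * t), np.exp(lam2 * t)]))

# Elliptic case: A = ((0, -beta), (beta, 0)) generates a rotation scaled by nothing.
A_ell = np.array([[0.0, -beta], [beta, 0.0]])
R = np.array([[np.cos(beta * t), -np.sin(beta * t)],
              [np.sin(beta * t),  np.cos(beta * t)]])
assert np.allclose(expm(A_ell * t), R)
```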

Example 2.3.3 (Jordan block). Let

    A = ( λ  1 ; 0  λ )

with λ ∈ R. Then

    A^k = ( λ^k  k λ^{k−1} ; 0  λ^k ),

so that

    exp(At) = Σ_{k=0}^∞ (At)^k/k! = ( Σ_k (λt)^k/k!   t Σ_{k≥1} (λt)^{k−1}/(k−1)! ; 0   Σ_k (λt)^k/k! ) = ( e^{λt}  t e^{λt} ; 0  e^{λt} ).

Distinct real eigenvalues

We consider a linear autonomous ODE dx/dt = Ax with A ∈ gl(2, R) and x ∈ R². Suppose that A has a real eigenvalue µ with corresponding eigenvector v ≠ 0. If we consider an initial condition on the linear subspace V generated by v (a line), then we find that V is an invariant subspace for the flow, that is, Φ^t(V) = V. In fact, we may express the ODE restricted to V as dv/dt = µv, and the flow map of this restricted ODE is multiplication by exp(µt).

Suppose now that A has two distinct real eigenvalues λ_1 ≠ λ_2. Then the corresponding eigenvectors v_1 and v_2 must be linearly independent, i.e. v_1 ≠ a v_2 for all a ∈ R. In other words, v_1 and v_2 span the plane R². As a consequence, we may write any point as a linear combination of the two eigenvectors,

    x = a v_1 + b v_2,    (2.3.1)

and by linearity of the flow map, knowing the flow in the directions of the eigenvectors, we obtain

    Φ^t(x) = Φ^t(a v_1 + b v_2) = a Φ^t(v_1) + b Φ^t(v_2) = a e^{λ_1 t} v_1 + b e^{λ_2 t} v_2.    (2.3.2)

In order to find the matrix expression for Φ^t we now need to find expressions for the coefficients a and b in (2.3.1). We do this by taking the standard inner product of both sides with the vectors v_i^⊥, where v_i^⊥ is perpendicular to the eigenvector v_i, for i = 1, 2:

    a = (v_2^⊥ · x)/(v_2^⊥ · v_1),    b = (v_1^⊥ · x)/(v_1^⊥ · v_2).

On a more abstract level, we may think of the vectors a v_1 and b v_2 as being obtained from the vector x by projection:

    P_1 x = a v_1 = v_1 (v_2^⊥ · x)/(v_2^⊥ · v_1),    P_2 x = b v_2 = v_2 (v_1^⊥ · x)/(v_1^⊥ · v_2).

Note that projections P are linear maps that satisfy the property P² = P. As they are linear, we can represent them by matrices:

    P_1 = v_1 (v_2^⊥)^T / ((v_2^⊥)^T v_1),    P_2 = v_2 (v_1^⊥)^T / ((v_1^⊥)^T v_2).

So in terms of these expressions we can rewrite (2.3.2) as

    Φ^t = e^{λ_1 t} P_1 + e^{λ_2 t} P_2.    (2.3.3)
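A small numerical check of (2.3.3), assuming an illustrative matrix with two distinct real eigenvalues (not necessarily the one used in the notes) and building the projections exactly as described above:

```python
import numpy as np
from scipy.linalg import expm

# Illustrative 2x2 matrix with two distinct real eigenvalues (an assumption).
A = np.array([[1.0, 1.0],
              [0.0, 2.0]])
lam, V = np.linalg.eig(A)
v1, v2 = V[:, 0], V[:, 1]

def perp(v):
    """Rotate a planar vector by 90 degrees to obtain a perpendicular vector."""
    return np.array([-v[1], v[0]])

# Projections P1, P2 as in the text: range span(v_i), kernel span(v_j), j != i.
P1 = np.outer(v1, perp(v2)) / (perp(v2) @ v1)
P2 = np.outer(v2, perp(v1)) / (perp(v1) @ v2)

t = 0.7
flow = np.exp(lam[0] * t) * P1 + np.exp(lam[1] * t) * P2
print(np.allclose(flow, expm(A * t)))                         # expected: True
print(np.allclose(P1 @ P1, P1), np.allclose(P1 @ P2, 0.0))    # idempotent, complementary
```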

Example 2.3.4. Let A = ( 1  1 ; 0  2 ). Then the eigenvalues are λ_1 = 1 and λ_2 = 2, with corresponding eigenvectors v_1 = (1, 0)^T and v_2 = (1, 1)^T. Any vector in R² can be decomposed into a linear combination of these eigenvectors as follows:

    ( x ; y ) = (x − y) ( 1 ; 0 ) + y ( 1 ; 1 ).

Hence

    exp(At) ( x ; y ) = (x − y) e^{t} ( 1 ; 0 ) + y e^{2t} ( 1 ; 1 ) = ( (x − y) e^{t} + y e^{2t} ; y e^{2t} ) = ( e^{t}   e^{2t} − e^{t} ; 0   e^{2t} ) ( x ; y ),

so that

    exp(At) = ( e^{t}   −e^{t} + e^{2t} ; 0   e^{2t} ).

We now construct the projections using the formulas obtained above. Choosing the perpendicular vectors for instance as v_1^⊥ = (0, 1)^T and v_2^⊥ = (1, −1)^T, we obtain

    P_1 = v_1 (v_2^⊥)^T / ((v_2^⊥)^T v_1) = ( 1  −1 ; 0  0 ),    P_2 = v_2 (v_1^⊥)^T / ((v_1^⊥)^T v_2) = ( 0  1 ; 0  1 ),

and indeed exp(At) = e^{t} P_1 + e^{2t} P_2. The answer in Example 2.3.1 can be evaluated in the same way.

We note that the above procedure can also be carried out in the higher-dimensional case. In the m-dimensional case, if A has m distinct real eigenvalues λ_1, …, λ_m, then

    exp(At) = Σ_{k=1}^m e^{λ_k t} P_k,

where P_k is the projection onto the one-dimensional subspace spanned by the eigenvector for the eigenvalue λ_k, with kernel equal to the (m − 1)-dimensional subspace spanned by the collection of all other eigenvectors.

Projections P are characterized by the following properties:
- P is linear;
- range(P): where to project to;
- ker(P): what should be projected away;
- P² = P.

Proposition 2.3.5 (Projection formula). A formula for the m × m matrix representing the projection with a given range and kernel can be found as follows. Let the vectors u_1, …, u_k form a basis for the range of the projection, and assemble these vectors as the columns of the m × k matrix A. The range and the kernel are complementary subspaces, so the kernel has dimension m − k; it follows that the orthogonal complement of the kernel has dimension k. Let w_1, …, w_k form a basis for the orthogonal complement of the kernel of the projection, and assemble these vectors as the columns of the m × k matrix B. Then the projection onto the range is given by

    P = A (B^T A)^{−1} B^T.

Proof. Exercise.

It is readily verified that the formulas we obtained for the projections in the two-dimensional case are special cases of this general result.
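A sketch of Proposition 2.3.5 in code; the helper name oblique_projection and the example subspaces in R³ are assumptions for illustration, with U and W playing the roles of the matrices A and B in the proposition:

```python
import numpy as np

def oblique_projection(U, W):
    """Projection onto col(U) whose kernel is the orthogonal complement of col(W),
    following the formula P = U (W^T U)^{-1} W^T of Proposition 2.3.5."""
    return U @ np.linalg.inv(W.T @ U) @ W.T

# Example in R^3: project onto a plane (range) along a chosen line (kernel).
U = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])      # basis of the range
kernel_dir = np.array([1.0, 1.0, 1.0])                  # direction to project away
# W spans the orthogonal complement of the kernel (rows 2-3 of the SVD's V^T):
W = np.linalg.svd(kernel_dir.reshape(1, 3))[2][1:].T    # 3x2, orthogonal to kernel_dir

P = oblique_projection(U, W)
print(np.allclose(P @ P, P))               # idempotent
print(np.allclose(P @ kernel_dir, 0.0))    # the kernel direction is projected away
print(np.allclose(P @ U, U))               # the range is fixed
```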

Complex conjugate pair of eigenvalues

The eigenvalues of a matrix with real entries need not be real, but in that case they come in complex conjugate pairs λ = α + iβ, λ̄ = α − iβ with α, β ∈ R. Namely, eigenvalues λ of a matrix A ∈ gl(m, R) are roots of the characteristic polynomial, that is, they satisfy

    det(A − λI) = 0.    (2.3.4)

As A has only real entries, it follows by taking the complex conjugate of the entire equation that if λ satisfies (2.3.4), then so does its complex conjugate λ̄.

The eigenvectors of a real matrix for complex eigenvalues are never in R^m. This can be seen by taking the complex conjugate of the eigenvalue equation Av = λv, from which it follows that if v is an eigenvector for the eigenvalue λ, then v̄ is an eigenvector for the eigenvalue λ̄. Hence for each pair of complex conjugate eigenvalues λ, λ̄ we have a complex conjugate pair of eigenvectors v, v̄ ∈ C^m. Here we use the natural identification of R^m as a subspace of C^m.

Although practical, it is maybe not so elegant to go outside our real phase space R^m. In analogy with the case of real eigenvalues, we are interested in identifying an invariant subspace of R^m associated with the complex eigenvalues. The complex eigenvectors v, v̄ span a complex vector space whose real subspace is given by

    E := span_C(v, v̄) ∩ R^m = span_R(v + v̄, i(v − v̄)),

a two-dimensional real subspace. It is easily verified that E is indeed an A-invariant subspace: let λ = α + iβ and v = x + iy (with x, y ∈ R^m); then

    A( a(v + v̄) + b i(v − v̄) ) = a(Av + Av̄) + i b (Av − Av̄)
                                = a(λv + λ̄v̄) + i b (λv − λ̄v̄)
                                = 2a(αx − βy) − 2b(αy + βx)
                                = (2aα − 2bβ) x − (2aβ + 2bα) y
                                = (aα − bβ)(v + v̄) + (aβ + bα) i(v − v̄).
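The invariance of E can also be verified numerically. The construction below (illustrative matrices B and T, not taken from the notes) builds a 3 × 3 real matrix with a complex conjugate pair of eigenvalues and checks that A maps E = span(v + v̄, i(v − v̄)) = span(Re v, Im v) into itself:

```python
import numpy as np

# Build a real 3x3 matrix with eigenvalues 0.5 +/- 2i and -1 by conjugating a
# block matrix with an invertible T, so the invariant plane is not axis-aligned.
alpha, beta = 0.5, 2.0
B = np.array([[alpha, -beta, 0.0],
              [beta,  alpha, 0.0],
              [0.0,   0.0,  -1.0]])
T = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
A = T @ B @ np.linalg.inv(T)

lam, V = np.linalg.eig(A)
k = np.argmax(np.abs(lam.imag))          # pick a genuinely complex eigenvalue
v = V[:, k]

# E is spanned by the real and imaginary parts of the eigenvector v.
E = np.column_stack([v.real, v.imag])    # 3x2 real basis of E
C, *_ = np.linalg.lstsq(E, A @ E, rcond=None)
print(np.allclose(E @ C, A @ E))         # True: A maps E into itself
```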

In particular, we may choose the vectors (v + v̄)/‖v + v̄‖ and i(v − v̄)/‖i(v − v̄)‖ as an orthonormal basis of E. Let us now assume that m = 2 and that A has the two complex conjugate eigenvalues α ± iβ. Then, according to the above formula, with respect to the above-mentioned basis A takes the form

    A = ( α  −β ; β  α ).    (2.3.5)

We note that the eigenvalues of this matrix are indeed α ± iβ, as required. In a similar way to Example 2.3.2, the exponential can be computed:

    exp(At) = e^{αt} ( cos βt  −sin βt ; sin βt  cos βt ).

We could in fact make use of the complex eigenvectors to derive the following analogue of the expression for the flow map in the case of distinct real eigenvalues:

    exp(At) = e^{(α+iβ)t} P_v + e^{(α−iβ)t} P_v̄,

where, with perpendicularity defined with respect to the standard (Hermitian) inner product on C², ⟨v, w⟩ = Σ_i v̄_i w_i (with v = Σ_i v_i e_i, w = Σ_i w_i e_i and {e_i} an orthonormal basis),

    P_v = v (v̄^⊥)^* / ((v̄^⊥)^* v),    P_v̄ = P̄_v.

Example 2.3.6. In the case of (2.3.5), the eigenvector of A for the eigenvalue λ = α + iβ is v = (i, 1)^T, with v̄ = (−i, 1)^T, and we may choose v̄^⊥ = (i, 1)^T = v. Then we obtain

    P_v = v v^* / (v^* v) = (1/2) ( 1  i ; −i  1 ),

so that

    e^{(α+iβ)t} P_v + e^{(α−iβ)t} P_v̄ = e^{αt} ( (e^{iβt} + e^{−iβt})/2    i(e^{iβt} − e^{−iβt})/2 ; −i(e^{iβt} − e^{−iβt})/2    (e^{iβt} + e^{−iβt})/2 ) = e^{αt} ( cos βt  −sin βt ; sin βt  cos βt ).
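A numerical counterpart of Example 2.3.6, for assumed illustrative values of α, β and t:

```python
import numpy as np
from scipy.linalg import expm

alpha, beta, t = 0.3, 2.0, 1.1          # arbitrary illustrative values
A = np.array([[alpha, -beta],
              [beta,  alpha]])

v = np.array([1j, 1.0])                 # eigenvector for lambda = alpha + i*beta
lam = alpha + 1j * beta

# Spectral projection P_v = v v* / (v* v) and its complex conjugate.
Pv = np.outer(v, v.conj()) / (v.conj() @ v)
flow = np.exp(lam * t) * Pv + np.exp(lam.conjugate() * t) * Pv.conj()

rotation = np.exp(alpha * t) * np.array([[np.cos(beta * t), -np.sin(beta * t)],
                                         [np.sin(beta * t),  np.cos(beta * t)]])
print(np.allclose(flow, rotation), np.allclose(flow, expm(A * t)))  # True True
```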

Example 2.3.7. Let A = ( 1  2 ; −3  1 ). Then one eigenvalue is equal to 1 + i√6, with corresponding eigenvector v = (2, i√6)^T, and the other eigenvalue and eigenvector are their complex conjugates. We now construct the projections using the formulas obtained above. Choosing v̄^⊥ = (i√6, −2)^T, we obtain

    P_v = v (v̄^⊥)^* / ((v̄^⊥)^* v) = (1/(2i√6)) ( i√6  2 ; −3  i√6 ),

so that

    exp((1 + i√6)t) P_v + exp((1 − i√6)t) P_v̄ = e^{t} [ e^{i√6 t} (1/(2i√6)) ( i√6  2 ; −3  i√6 ) + c.c. ] = e^{t} ( cos(√6 t)   (√6/3) sin(√6 t) ; −(√6/2) sin(√6 t)   cos(√6 t) ).

Jordan block

The matrix in Example 2.3.3 has the property that there is only one eigenvalue and (up to scalar multiples) only one eigenvector, even though it is a 2 × 2 matrix. It is useful to introduce a few definitions.

Definition 2.3.8 (Algebraic multiplicity). If a polynomial can be written as p(r) = (r − λ)^k q(r) with q(λ) ≠ 0, then λ is called a root of p of algebraic multiplicity k. In the calculation of eigenvalues, the terminology "algebraic multiplicity of an eigenvalue" is used with reference to the characteristic polynomial obtained from det(A − λI) = 0.

Definition 2.3.9 (Geometric multiplicity of eigenvalues). An eigenvalue has geometric multiplicity k if the eigenspace associated with this eigenvalue has dimension k.

In case the algebraic and geometric multiplicities of an eigenvalue are not equal to each other, it is useful to introduce the notion of a generalized eigenspace.

Definition 2.3.10 (Generalized eigenspace). The generalized eigenspace E_λ for an eigenvalue λ of a square matrix A, with algebraic multiplicity n, is equal to ker((A − λI)^n).

In Example 2.3.3, λ is an eigenvalue of algebraic multiplicity 2 and geometric multiplicity 1. The eigenspace for λ is spanned by the vector (1, 0)^T, whereas the generalized eigenspace for λ is equal to the entire R².

The following observation generalizes the fact that eigenspaces are flow invariant.

Proposition 2.3.11. Generalized eigenspaces of A ∈ gl(m, R) are flow invariant for the ODE dx/dt = Ax, x ∈ R^m.

Proof. Let n denote the algebraic multiplicity of an eigenvalue λ of A, and let w ∈ E_λ, the generalized eigenspace. Then, since (A − λI)^n w = 0 and A commutes with (A − λI)^n, Aw also satisfies (A − λI)^n (Aw) = 0, so that Aw ∈ ker((A − λI)^n) = E_λ. This shows that solutions with initial conditions in E_λ remain within E_λ.
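The following sketch (an illustrative 3 × 3 matrix, not from the notes) computes an eigenspace and a generalized eigenspace as kernels, and checks the flow invariance asserted in Proposition 2.3.11:

```python
import numpy as np
from scipy.linalg import expm, null_space

# Eigenvalue 2 with algebraic multiplicity 2 but geometric multiplicity 1,
# plus a simple eigenvalue -1 (illustrative choice).
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, -1.0]])
lam, n = 2.0, 2                                   # eigenvalue and its algebraic multiplicity

eigenspace = null_space(A - lam * np.eye(3))                                  # dimension 1
gen_eigenspace = null_space(np.linalg.matrix_power(A - lam * np.eye(3), n))   # dimension 2
print(eigenspace.shape[1], gen_eigenspace.shape[1])                           # 1 2

# Flow invariance: (A - lam I)^n annihilates exp(At) w for any w in E_lam.
t = 0.4
w = gen_eigenspace @ np.array([1.0, -2.0])        # some point of E_lam
moved = expm(A * t) @ w
print(np.allclose(np.linalg.matrix_power(A - lam * np.eye(3), n) @ moved, 0.0))  # True
```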

Let us now consider the general case of a matrix A ∈ gl(2, R) with a real eigenvalue λ of algebraic multiplicity two and geometric multiplicity one, and let v denote the eigenvector. (In the case that the algebraic and geometric multiplicities are both equal to two, there are two linearly independent eigenvectors and the method presented for distinct real eigenvalues can be applied.)

The generalized eigenspace E_λ is spanned by v and another vector, which we call w, where w ∉ ker(A − λI). Since w ∈ ker((A − λI)²), it follows that (A − λI)w ∈ ker(A − λI), so that

    Aw = λw + cv

for some c ≠ 0 (otherwise w ∈ ker(A − λI)). By rescaling w one can set c = 1 without loss of generality. We recall that also Av = λv. Hence, with respect to the basis {v, w}, A takes the form

    A = ( λ  1 ; 0  λ ).

Namely, let x = av + bw; then

    Ax = aAv + bAw = (aλ + b)v + bλw,

so that in the coordinates (a, b) with respect to the basis {v, w}, A indeed acts as the matrix ( λ  1 ; 0  λ ).

Considering the ODE dx/dt = Ax and writing x(t) = v(t) v + w(t) w with respect to the basis {v, w}, we are left to solve

    dv(t)/dt = λ v(t) + w(t),    dw(t)/dt = λ w(t).

The last equation, with initial value w(0) = w_0, admits the solution w(t) = e^{λt} w_0, which leaves us to solve dv(t)/dt = λ v(t) + e^{λt} w_0. With initial condition v(0) = v_0 this ODE has the solution v(t) = e^{λt} v_0 + t e^{λt} w_0, and the resulting flow is given by

    Φ^t = ( e^{λt}  t e^{λt} ; 0  e^{λt} ),    so that indeed    Φ^t ( v_0 ; w_0 ) = ( e^{λt} v_0 + t e^{λt} w_0 ; e^{λt} w_0 ).    (2.3.6)
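Formula (2.3.6) can be confirmed directly against the matrix exponential; a minimal check with an assumed value of λ:

```python
import numpy as np
from scipy.linalg import expm

lam, t = -0.7, 2.0                     # arbitrary illustrative values
A = np.array([[lam, 1.0],
              [0.0, lam]])

# Flow predicted by (2.3.6) in the basis {v, w}.
Phi = np.array([[np.exp(lam * t), t * np.exp(lam * t)],
                [0.0,             np.exp(lam * t)]])
print(np.allclose(expm(A * t), Phi))   # expected: True
```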

2.3.2 Jordan normal form

In the previous section we have seen how to solve linear autonomous ODEs. From the methodology described above it already transpired that the problem decomposes into ODEs on generalized eigenspaces. On each of these eigenspaces one can choose convenient coordinates. We have already seen some examples:

- If λ ∈ R and the geometric multiplicity of λ is equal to the algebraic multiplicity, the eigenvectors for λ span E_λ. Choosing these as a basis yields a diagonal matrix A.
- If λ ∉ R and the geometric multiplicity of λ is equal to the algebraic multiplicity, the linear combinations of complex eigenvectors v + v̄ and i(v − v̄) span E_λ. Choosing these as a basis yields a block-diagonal matrix A with 2 × 2 blocks of the form (2.3.5).

In case the geometric multiplicity of an eigenvalue is less than its algebraic multiplicity, we obtain so-called Jordan blocks (hence the use of this terminology in the example above). You should be familiar with these from M2P2 in the case of matrices with complex coefficients. For real matrices analogous results can be proven, and the Jordan blocks take the form

    λ ∈ R (real):          ( λ  1         )
                           (    λ  ⋱      )
                           (       ⋱   1  )
                           (           λ  )

    λ = α ± iβ (complex):  ( R_λ  I           )
                           (      R_λ  ⋱      )
                           (           ⋱   I  )
                           (              R_λ ),    where R_λ = ( α  −β ; β  α )

and I denotes the 2 × 2 identity matrix.

Choice of coordinates, or coordinate transformation

In the above we have presented the Jordan normal form as arising naturally from a convenient choice of basis for the coordinates. Alternatively, we can also take the viewpoint that any matrix A can be transformed into Jordan normal form by an appropriate linear coordinate transformation. Suppose e_1, …, e_m is a basis for R^m in terms of which we have written our ODE. We consider a new basis ê_1, …, ê_m obtained by a linear transformation T ∈ Gl(m, R) (the group of invertible linear maps from R^m to itself): ê_j = T e_j. We consider the consequence for the matrix representation of the linear vector field. We can write points of R^m in two ways, as Σ_{i=1}^m x_i e_i = Σ_{i=1}^m y_i ê_i. Let x = (x_1, …, x_m) and y = (y_1, …, y_m); then x and y are related by x = Ty. For the ODE this means

    dx/dt = T dy/dt = Ax = ATy,    so that    dy/dt = Ã y,    where Ã = T^{−1} A T.

So the Jordan normal form result implies that there exists a linear coordinate transformation, represented by some T ∈ Gl(m, R), that conjugates A to a matrix Ã which has the structure of the real Jordan normal form. Often, if there is no specific significance attached to any particular choice of coordinates, it is assumed that linear vector fields are in Jordan normal form.
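A quick numerical illustration of the coordinate transformation (an illustrative A and T are assumed): the flows of A and Ã = T^{−1}AT are conjugate, exp(At) = T exp(Ãt) T^{−1}.

```python
import numpy as np
from scipy.linalg import expm

# Arbitrary illustrative matrix and invertible coordinate change T.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
T = np.array([[1.0, 1.0],
              [0.0, 2.0]])

A_tilde = np.linalg.inv(T) @ A @ T     # the vector field written in y-coordinates
t = 0.9
# The flows are conjugate by the same coordinate change.
print(np.allclose(expm(A * t), T @ expm(A_tilde * t) @ np.linalg.inv(T)))  # True
```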

2.3.3 Jordan-Chevalley decomposition

There is another property of linear maps that may be of use when analysing the exponential of a linear map. We need to introduce the notions of semi-simple and nilpotent linear maps (or matrices).

Definition 2.3.12 (Semi-simple and nilpotent linear maps). A linear map A ∈ gl(m, R) is called semi-simple if the geometric and algebraic multiplicities of each of its eigenvalues are equal to each other. A linear map A ∈ gl(m, R) is called nilpotent if there exists a p ∈ N such that A^p = 0 (the null matrix).

We now consider the following decomposition.

Definition 2.3.13 (Jordan-Chevalley decomposition). Let A ∈ gl(m, R). Then A = S + N is called the Jordan-Chevalley decomposition of A if S is semi-simple, N is nilpotent, and N and S commute, i.e. NS = SN.

Theorem 2.3.14. Every A ∈ gl(m, R) admits a unique Jordan-Chevalley decomposition.

Remark 2.3.15. Please note that the above notions and results also hold for complex matrices in gl(m, C).

Note that the Jordan-Chevalley decomposition of a matrix in Jordan normal form consists of its diagonal (semi-simple) and off-diagonal (nilpotent) parts (note that these parts are indeed semi-simple and nilpotent, respectively, and that they commute). For our purposes here (the computation of exp(At)), the Jordan-Chevalley decomposition tells us that, if A = S + N is the Jordan-Chevalley decomposition, then

    exp(At) = exp(St) exp(Nt),

so that one can focus on exponentiating semi-simple and nilpotent matrices separately. In this respect we note that the exponential exp(Nt), where N is nilpotent with p the smallest integer such that N^p = 0, takes the form

    exp(Nt) = Σ_{k=0}^∞ (Nt)^k/k! = Σ_{k=0}^{p−1} (Nt)^k/k!,

and thus is polynomial in t.

Example 2.3.16. The Jordan-Chevalley decomposition of the matrix A = ( λ  1 ; 0  λ ) is A = S + N, where

    S = ( λ  0 ; 0  λ )    and    N = ( 0  1 ; 0  0 ).

Consequently, we have

    exp(St) = ( e^{λt}  0 ; 0  e^{λt} )    and    exp(Nt) = I + Nt = ( 1  t ; 0  1 ),

so that, in correspondence with (2.3.6),

    exp(At) = exp(St) exp(Nt) = ( e^{λt}  t e^{λt} ; 0  e^{λt} ).
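For the matrix of Example 2.3.16 the decomposition and the factorisation of the exponential can be checked as follows (an assumed value of λ):

```python
import numpy as np
from scipy.linalg import expm

lam, t = 0.5, 1.3                          # arbitrary illustrative values
A = np.array([[lam, 1.0],
              [0.0, lam]])

# Jordan-Chevalley decomposition of this Jordan block: semi-simple + nilpotent.
S = lam * np.eye(2)                        # semi-simple (diagonal) part
N = A - S                                  # nilpotent part, N @ N = 0
assert np.allclose(N @ N, 0.0) and np.allclose(S @ N, N @ S)

# Since S and N commute, exp(At) factors, and exp(Nt) is polynomial in t.
expNt = np.eye(2) + N * t                  # the series terminates: I + Nt
print(np.allclose(expm(A * t), expm(S * t) @ expNt))   # expected: True
```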

2.4 Stability

Linear autonomous ODEs always have an equilibrium at 0, since there we have dx/dt = A·0 = 0, and thus 0 is always a fixed point of the flow Φ^t = exp(At). One of the elementary things one may want to understand about the dynamics is whether points in the neighbourhood of an equilibrium have a tendency to stay nearby, or whether they have a tendency to wander off. We call the former type stable. The formal definition is:

Definition 2.4.1 (Lyapunov and asymptotic stability). An equilibrium (or fixed point) x is Lyapunov stable if for every neighbourhood U of x there exist neighbourhoods V and V′ of x with V′ ⊆ V ⊆ U such that the forward-time orbits of all initial conditions x_0 ∈ V′ remain in V for all time. A special case of Lyapunov stability is asymptotic stability, which requires in addition that all initial conditions in V′ converge to x as time goes to infinity.

Certain properties of the eigenvalues of a linear ODE are directly related to stability.

Proposition 2.4.2. Let A ∈ gl(m, R). The trivial fixed point 0 of the flow Φ^t = exp(At) is not Lyapunov stable if the real part of some eigenvalue of A is positive. The fixed point is asymptotically stable if and only if the real parts of all eigenvalues of A are negative.

Proof. We first note that generalized eigenspaces are flow invariant. Whenever there is an eigenvalue with positive real part, all initial conditions within the corresponding generalized eigenspace, apart from the trivial fixed point itself, tend to infinity as time goes forward, which contradicts stability. If all eigenvalues have negative real part, the flow on each generalized eigenspace is contracting towards 0, which implies asymptotic stability. If there exists an eigenvalue with zero real part, then the flow on the corresponding generalized eigenspace contradicts asymptotic stability. Namely, on eigenspaces of eigenvalues with zero real part the flow is trivial (the identity map) if the eigenvalue is real, and a (generally skewed) rotation in the case of a purely imaginary complex conjugate pair of eigenvalues. If the geometric multiplicity of such eigenvalues is less than their algebraic multiplicity, then, due to the fact that the flow has factors which are polynomial in time, most initial conditions in the generalized eigenspace do not stay in a neighbourhood of the equilibrium as t → ∞.
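Proposition 2.4.2 translates into a simple numerical test; the function name and the example matrices below are assumptions for illustration:

```python
import numpy as np
from scipy.linalg import expm

def asymptotically_stable(A):
    """Criterion of Proposition 2.4.2: the origin is asymptotically stable
    iff every eigenvalue of A has negative real part."""
    return bool(np.all(np.linalg.eigvals(A).real < 0))

A_stable = np.array([[-1.0, 3.0], [0.0, -2.0]])
A_unstable = np.array([[0.1, 1.0], [-1.0, 0.1]])
print(asymptotically_stable(A_stable), asymptotically_stable(A_unstable))  # True False

# Consistency check: for the stable matrix the flow contracts for large t.
print(np.linalg.norm(expm(A_stable * 10.0)) < 1e-3)    # True
```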

Consider a linear autonomous ODE dx/dt = Ax with A ∈ gl(m, R) and x ∈ R^m. We may write the phase space R^m as a direct sum of generalized eigenspaces E_λ, where we adopt the convention that in case λ ∉ R, E_λ denotes the invariant subspace associated with the pair of eigenvalues λ and λ̄:

    R^m = ⊕_j E_{λ_j}

(recall that a sum of vector spaces is a direct sum if each component of the sum intersects the sum of the other components only trivially). We define the stable and unstable subspaces of R^m as the direct sums of the generalized eigenspaces for eigenvalues with negative and positive real parts, respectively:

    E^s = ⊕_{Re λ_j < 0} E_{λ_j},    E^u = ⊕_{Re λ_j > 0} E_{λ_j}.

The direct sum of the remaining generalized eigenspaces, for eigenvalues with zero real part, is called the centre subspace:

    E^c = ⊕_{Re λ_j = 0} E_{λ_j}.

The flow on the stable subspace is asymptotically stable, and the flow on the unstable subspace is asymptotically unstable (the inverse flow is asymptotically stable). The flow on the centre subspace is neither asymptotically stable nor asymptotically unstable, as there is always a subspace on which the flow is either stationary (corresponding to a zero eigenvalue) or a (generally skewed) rotation (purely imaginary complex conjugate pair of eigenvalues). The flow on the centre subspace E^c is Lyapunov stable in case there are no nontrivial Jordan blocks.

In general it is not easy to prove that an equilibrium is Lyapunov stable or asymptotically stable, but we will see a few instances where it is possible. In the exercises you will find some discussion of Lyapunov functions, which can be used to prove stability. The difficulty with that method is that there is no general way of constructing Lyapunov functions, so that in the end one needs to rely on some particular insight or intuition in order to construct one.

Finally, we give a simple example of Lyapunov stable equilibria.

Example 2.4.3. Consider dx/dt = Ax with x ∈ R² and

    A = ( a  −1 ; 1  a ).

Then x = 0 is an equilibrium point. The eigenvalues of A are a ± i. If a ≤ 0, the equilibrium is Lyapunov stable: we readily compute that ‖Φ^t(x)‖ = e^{at}‖x‖ for all x ∈ R². In case a = 0 the flow is a rotation around the origin and, whereas the equilibrium is Lyapunov stable, it is not asymptotically stable. If a < 0, the equilibrium is asymptotically stable.

2.5 Phase portraits

We finally consider some geometrical properties of the flow that apply to linear as well as nonlinear autonomous ODEs. We recall that the right-hand side of the ODE

    dx/dt = f(x),    x ∈ R^m,    (2.5.1)

provides for each point of the phase space the vector f(x); accordingly, we call f a vector field. The ODE further specifies that we identify the vector f(x) with dx/dt. The following property is essential.

Proposition 2.5.1. Let x(t) be a solution of (2.5.1). Then for each t, the vector f(x(t)) is tangent to the solution curve at x(t).

Figure 2.1: Examples of linear planar vector fields [made with Maple], in (from left to right) the elliptic, hyperbolic and Jordan block cases. Part of one solution curve is also plotted, to illustrate the fact that such curves are everywhere tangent to the vector field.

Proof. We present the proof for a planar ODE (m = 2); the arguments are similar when m > 2. First we find an expression for the tangent vector. Suppose that the solution curve x(t) = (x(t), y(t)) has the implicit form F(x, y) = 0. To find the tangent at (x, y) = x(t), we evaluate the function F at the point (x + δx, y + δy), with δx and δy small, and write the Taylor series expansion

    F(x + δx, y + δy) = F(x, y) + F_x(x, y) δx + F_y(x, y) δy + O(|(δx, δy)|²).

In order to stay on the curve, this expression needs to be equal to zero. We already have F(x, y) = 0, since (x, y) lies on the curve, so to first order we need to satisfy

    ( F_x(x, y) ; F_y(x, y) ) · ( δx ; δy ) = 0.

The vector on the left-hand side of this inner product is known as the gradient ∇F(x, y). The vector on the right-hand side is the best linear approximation to the curve at (x, y), which is known as the tangent. We thus observe that the tangent vector is perpendicular to the gradient of the function that defines the curve.

We finally relate this to the ODE. Taking the derivative of F with respect to t, and noting that F(x(t), y(t)) = 0 by definition, we obtain

    d/dt F(x, y) = F_x(x, y) ẋ + F_y(x, y) ẏ = ∇F(x, y) · ẋ = 0.

Thus ẋ = f(x) is perpendicular to the gradient, and hence is tangent to the solution curve x(t).

In the case of planar vector fields, we often provide a sketch of the flow by drawing a representative set of solution curves, with arrows on them indicating the direction of the flow. Such a sketch is called a phase portrait.
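A phase portrait of a linear planar vector field can also be sketched numerically; the following Matplotlib snippet (with an assumed illustrative matrix A) draws the vector field together with a few solution curves:

```python
import numpy as np
import matplotlib.pyplot as plt

# Linear planar vector field f(x) = A x (hyperbolic example, arbitrary choice).
A = np.array([[1.0, 1.0],
              [0.0, -2.0]])

x, y = np.meshgrid(np.linspace(-4, 4, 20), np.linspace(-4, 4, 20))
u = A[0, 0] * x + A[0, 1] * y
v = A[1, 0] * x + A[1, 1] * y

fig, ax = plt.subplots(figsize=(5, 5))
ax.streamplot(x, y, u, v, color="steelblue", density=1.2)    # solution curves with arrows
ax.quiver(x[::2, ::2], y[::2, ::2], u[::2, ::2], v[::2, ::2],
          angles="xy", alpha=0.4)                            # the vector field itself
ax.set_xlabel("x")
ax.set_ylabel("y")
ax.set_title("Phase portrait of dx/dt = Ax")
plt.show()
```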