
1 Linear Systems

The goal of this chapter is to study linear systems of ordinary differential equations:

    ẋ = Ax,  x(0) = x_0,    (1)

where x ∈ R^n, A is an n × n matrix and ẋ = dx/dt = (dx_1/dt, ..., dx_n/dt)^T. It will be shown that the unique solution of Eq. (1) is given by x(t) = e^{At} x_0, where e^{At} is an n × n matrix defined by its Taylor series. A good portion of this chapter is concerned with the computation of e^{At} in terms of the eigenvalues and eigenvectors of A.

1.1 Uncoupled Linear Systems

Let's start the solution of the linear system (1) with the simplest case, where the system contains only one equation (i.e. n = 1):

    ẋ = ax,  x(0) = c.

The method of separation of variables immediately gives x(t) = c e^{at}.
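This closed form is easy to sanity-check numerically: a central finite difference of x(t) = c e^{at} should match a x(t). A minimal sketch, with illustrative values of a and c:

```python
import math

# Scalar linear IVP: x' = a*x, x(0) = c, with claimed solution x(t) = c*e^{a t}.
a, c = -0.7, 2.0                      # illustrative values, not from the text
x = lambda t: c * math.exp(a * t)

# A central finite difference approximates x'(t); it should agree with a*x(t).
t, h = 1.3, 1e-6
dx_dt = (x(t + h) - x(t - h)) / (2.0 * h)
residual = abs(dx_dt - a * x(t))
```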

2 × 2 Uncoupled Systems: an Example

To move one step forward, let's consider the following 2 × 2 uncoupled system:

    ẋ_1 = -x_1,  ẋ_2 = 2x_2,  x_1(0) = c_1,  x_2(0) = c_2.

Its solution is easily found to be:

    x_1(t) = c_1 e^{-t},  x_2(t) = c_2 e^{2t},

or in matrix form:

    x(t) = [e^{-t} 0; 0 e^{2t}] c =: e^{At} c.
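Since the system is uncoupled, each component can be verified independently against its own scalar equation. A quick numerical sketch (the system is taken as ẋ_1 = -x_1, ẋ_2 = 2x_2, and the initial values are illustrative):

```python
import math

# Uncoupled system x1' = -x1, x2' = 2*x2, solved by x1 = c1 e^{-t}, x2 = c2 e^{2t}.
c1, c2 = 1.5, -0.5                    # illustrative initial values
x1 = lambda t: c1 * math.exp(-t)
x2 = lambda t: c2 * math.exp(2.0 * t)

h = 1e-6
def deriv(f, t):
    """Central finite-difference approximation of f'(t)."""
    return (f(t + h) - f(t - h)) / (2.0 * h)

t = 0.8
res1 = abs(deriv(x1, t) - (-x1(t)))   # residual of the first equation
res2 = abs(deriv(x2, t) - 2.0 * x2(t))  # residual of the second equation
```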

The General Case

Clearly, the same procedure can be applied to solve uncoupled systems of any size. For example, the solution of the following 3 × 3 system:

    ẋ_1 = x_1,  ẋ_2 = x_2,  ẋ_3 = -x_3,  x_1(0) = c_1,  x_2(0) = c_2,  x_3(0) = c_3,

is given by

    x(t) = [e^t 0 0; 0 e^t 0; 0 0 e^{-t}] c =: e^{At} c.

Phase Plane Analysis

Before wrapping up this section, let's introduce some notation that will be useful in the study of the linear system ẋ = Ax. Let's first consider the 2 × 2 system:

    ẋ = Ax,  A = [-1 0; 0 2].

Recall it has the solution:

    x(t) = [e^{-t} 0; 0 e^{2t}] c.

Clearly, the above formula describes the dynamics of each of the system components x_1 and x_2.

Phase Portrait

In many cases, however, we are interested in the dynamics of the entire system x = (x_1, x_2)^T, not just that of the individual components x_1 or x_2. To gain a better understanding of the entire system, we eliminate the variable t from the solution representation so that a single formula involving only x_1 and x_2 results:

    x_2 = c_1^2 c_2 / x_1^2.

For any fixed c_1, c_2, the above equation defines a curve in the x_1 x_2-plane, the so-called phase plane. The set of all solution curves for all possible values of c_1 and c_2 constitutes a phase portrait of the linear system ẋ = Ax.
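Equivalently, eliminating t says that the quantity x_1^2 x_2 is constant along every solution curve, equal to c_1^2 c_2. A small numerical sketch (illustrative constants, with the solution formulas x_1 = c_1 e^{-t}, x_2 = c_2 e^{2t}):

```python
import math

# Along x1 = c1 e^{-t}, x2 = c2 e^{2t}, the product x1^2 * x2 is constant and
# equals c1^2 * c2 -- i.e. the phase-plane curve x2 = c1^2 c2 / x1^2.
c1, c2 = 2.0, 3.0                     # illustrative constants
samples = []
for t in (0.0, 0.4, 1.1, 2.5):
    x1 = c1 * math.exp(-t)
    x2 = c2 * math.exp(2.0 * t)
    samples.append(x1 * x1 * x2)
spread = max(samples) - min(samples)  # should be zero up to rounding
```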

The Phase Portrait of the 2 × 2 System

The phase portrait of the 2 × 2 system is shown below.

Figure 1: The phase portrait of the 2 × 2 system.

Vector Field

The direction of the motion of the solution x = (x_1, x_2)^T along the solution curves can be read off from the explicit formulas x_1(t) = c_1 e^{-t}, x_2(t) = c_2 e^{2t}. On the other hand, it can also be determined directly from the right-hand side

    f(x) = Ax = [-1 0; 0 2] x

of the system, which defines a vector field on the phase plane. The vector field must be tangent to the solution curves at every point x on the phase plane.
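Tangency can be checked pointwise: the velocity of the solution, approximated by a finite difference, should coincide with f evaluated at the same point. A sketch for the diagonal system taken as ẋ = diag(-1, 2) x (illustrative initial values):

```python
import math

# Vector field f(x) = A x with A = diag(-1, 2); solutions x1 = c1 e^{-t}, x2 = c2 e^{2t}.
c1, c2 = 1.0, 2.0                     # illustrative initial values
h = 1e-6

def solution(t):
    return (c1 * math.exp(-t), c2 * math.exp(2.0 * t))

def field(p):
    """Right-hand side A x evaluated at the point p."""
    return (-p[0], 2.0 * p[1])

t = 0.6
p = solution(t)
ahead, behind = solution(t + h), solution(t - h)
velocity = ((ahead[0] - behind[0]) / (2.0 * h),
            (ahead[1] - behind[1]) / (2.0 * h))
err = max(abs(velocity[0] - field(p)[0]), abs(velocity[1] - field(p)[1]))
```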

The Vector Field of the 2 × 2 System

The vector field of the 2 × 2 system is shown below.

Figure 2: The vector field of the 2 × 2 system.

The Phase Portrait of the 3 × 3 System

Similar analysis can be carried out for more general linear systems. For example, the following figure shows the phase portrait of the 3 × 3 system ẋ = Ax where A = diag[1, 1, -1]:

Figure 3: The phase portrait of the 3 × 3 system.

1.2 Diagonalization

In the last section, we saw how to solve uncoupled linear systems of the form ẋ = Ax = diag[λ_1, ..., λ_n] x. The purpose of this and the following sections is to develop solution techniques for general, coupled linear systems, where the matrix A is not necessarily diagonal. The key is to reduce A to its diagonal form or, in more general situations, to its Jordan form.

Matrices with Real Distinct Eigenvalues

Let's start with the simple case where A has real, distinct eigenvalues. The following theorem provides the basis for the solution of the linear system ẋ = Ax.

Theorem 1. If the eigenvalues λ_1, λ_2, ..., λ_n of an n × n matrix A are real and distinct, then any set of corresponding eigenvectors {v_1, v_2, ..., v_n} forms a basis for R^n, the matrix P = [v_1, v_2, ..., v_n] is invertible, and

    P^{-1} A P = diag[λ_1, λ_2, ..., λ_n].

The proof of the theorem can be found in any standard linear algebra text, for example Lowenthal [Lo].

Matrices with Real Distinct Eigenvalues (Cont'd)

Using the above theorem, we may solve the linear system ẋ = Ax by introducing the change of variables y = P^{-1} x. It reduces the original system to an uncoupled linear system:

    ẏ = diag[λ_1, ..., λ_n] y,

and the solution of the original system can then be easily found:

    x(t) = P E(t) P^{-1} x(0),

where E(t) is the diagonal matrix E(t) = diag[e^{λ_1 t}, ..., e^{λ_n t}].
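The recipe x(t) = P E(t) P^{-1} x(0) is easy to try on a concrete matrix. The sketch below uses an illustrative A = [2 1; 1 2] (not from the text), whose eigenvalues are 1 and 3 with eigenvectors (1, -1) and (1, 1), and checks that the resulting x(t) satisfies ẋ = Ax:

```python
import math

# Illustrative A with real distinct eigenvalues: A = [[2,1],[1,2]],
# eigenvalues 1, 3; the columns of P are the eigenvectors (1,-1) and (1,1).
A = [[2.0, 1.0], [1.0, 2.0]]
P = [[1.0, 1.0], [-1.0, 1.0]]
Pinv = [[0.5, -0.5], [0.5, 0.5]]      # inverse of P (det P = 2)
lams = (1.0, 3.0)
x0 = (1.0, 2.0)                       # illustrative initial condition

def mat_vec(M, v):
    return (M[0][0]*v[0] + M[0][1]*v[1], M[1][0]*v[0] + M[1][1]*v[1])

def x(t):
    """x(t) = P diag(e^{lam_i t}) P^{-1} x0."""
    y0 = mat_vec(Pinv, x0)
    y = (y0[0] * math.exp(lams[0]*t), y0[1] * math.exp(lams[1]*t))
    return mat_vec(P, y)

h, t = 1e-6, 0.5
xt = x(t)
xp, xm = x(t + h), x(t - h)
deriv = ((xp[0]-xm[0])/(2.0*h), (xp[1]-xm[1])/(2.0*h))
Ax = mat_vec(A, xt)
err = max(abs(deriv[0] - Ax[0]), abs(deriv[1] - Ax[1]))
```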

Example

As an example, consider the linear system

    ẋ_1 = -x_1 - 3x_2,  ẋ_2 = 2x_2,  x_1(0) = c_1,  x_2(0) = c_2.

Using the procedure described above, the solution is found to be

    x_1(t) = c_1 e^{-t} + c_2 (e^{-t} - e^{2t}),  x_2(t) = c_2 e^{2t}.
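A numerical check of this closed form (a sketch, taking the system to be ẋ_1 = -x_1 - 3x_2, ẋ_2 = 2x_2 with solution x_1 = c_1 e^{-t} + c_2(e^{-t} - e^{2t}), x_2 = c_2 e^{2t}; the initial values are illustrative):

```python
import math

# System x1' = -x1 - 3*x2, x2' = 2*x2, with claimed solution
# x1(t) = c1 e^{-t} + c2 (e^{-t} - e^{2t}),  x2(t) = c2 e^{2t}.
c1, c2 = 0.7, -1.2                    # illustrative initial values

def x1(t):
    return c1 * math.exp(-t) + c2 * (math.exp(-t) - math.exp(2.0 * t))

def x2(t):
    return c2 * math.exp(2.0 * t)

# Finite-difference residuals of both equations at an interior time.
h, t = 1e-6, 0.9
r1 = abs((x1(t+h) - x1(t-h)) / (2.0*h) - (-x1(t) - 3.0 * x2(t)))
r2 = abs((x2(t+h) - x2(t-h)) / (2.0*h) - 2.0 * x2(t))
```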

Example (Cont'd)

The phase portrait of the above system is shown below.

Figure 4: The phase portrait of the example.

Stable and Unstable Subspaces

Note that the subspaces spanned by the eigenvectors v_1 and v_2 of the matrix A determine the stable and unstable subspaces of the linear system ẋ = Ax, according to the following definition.

Definition 2. Suppose that the n × n matrix A has k negative eigenvalues λ_1, ..., λ_k and n - k positive eigenvalues λ_{k+1}, ..., λ_n, and that these eigenvalues are distinct. Let {v_1, ..., v_n} be a corresponding set of eigenvectors. Then the stable and unstable subspaces of the linear system, E^s and E^u, are the linear subspaces spanned by {v_1, ..., v_k} and {v_{k+1}, ..., v_n} respectively; i.e.,

    E^s = span{v_1, ..., v_k},  E^u = span{v_{k+1}, ..., v_n}.

1.3 Exponentials of Operators

In the last section, we saw how to solve the linear system ẋ = Ax when A has real distinct eigenvalues, or more generally when A is diagonalizable. The purpose of this and the following sections is to study the general case where A is not necessarily diagonalizable. The key is to define the matrix exponential e^{At} and verify the identity

    d/dt e^{At} = A e^{At}.

Matrices as Linear Operators

We shall define e^{At} through the Taylor series

    e^{At} = Σ_{k=0}^∞ (1/k!) A^k t^k,

but first we need to make sure that the series converges in an appropriate norm. To introduce a norm (i.e. a "measure of size") for an n × n matrix A, we view it as a linear operator T that maps an element of R^n (i.e. an n-vector) to another element of R^n:

    T : R^n → R^n,  T(x) = Ax.

It can be shown that the converse is also true, i.e. any linear operator that maps R^m to R^n can be identified with an n × m matrix. So matrices are indeed synonymous with linear operators.

Operator Norm

For a linear operator T : R^n → R^n, we define the operator norm:

    ||T|| = sup_{x ≠ 0} ||T(x)|| / ||x||,

where ||x|| denotes the Euclidean norm of x ∈ R^n:

    ||x|| = (x_1^2 + ... + x_n^2)^{1/2}.

It can be readily verified that the operator norm has the following equivalent definitions:

    ||T|| = sup_{||x|| ≤ 1} ||T(x)||  or  ||T|| = sup_{||x|| = 1} ||T(x)||.

Remark. The induced norm of the matrix representation A of the operator T is called the 2-norm of A.
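The supremum can be estimated by sampling unit vectors. For a diagonal matrix the operator norm is exactly max_i |λ_i|, which gives an exact value to compare against. A sketch with illustrative diagonal entries:

```python
import math, random

# Operator T(x) = A x with diagonal A = diag(-3, 2); its operator norm is
# exactly max(|-3|, |2|) = 3.
random.seed(0)                        # deterministic sampling
d1, d2 = -3.0, 2.0                    # illustrative diagonal entries
exact = max(abs(d1), abs(d2))

best = 0.0
for _ in range(2000):
    v = (random.uniform(-1.0, 1.0), random.uniform(-1.0, 1.0))
    n = math.hypot(v[0], v[1])
    if n == 0.0:
        continue
    u = (v[0] / n, v[1] / n)          # unit vector, ||u|| = 1
    best = max(best, math.hypot(d1 * u[0], d2 * u[1]))
```

Sampling only ever underestimates the supremum, so `best` approaches `exact` from below as more unit vectors are tried.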

Properties of the Operator Norm

The operator norm has all of the usual properties of a norm, namely for any linear operators S, T : R^n → R^n,

(a) ||T|| ≥ 0, and ||T|| = 0 iff T = 0 (positive definiteness)
(b) ||aT|| = |a| ||T|| for any a ∈ R (absolute homogeneity)
(c) ||S + T|| ≤ ||S|| + ||T|| (triangle inequality or subadditivity)

It can be shown that the space L(R^n) of linear operators T : R^n → R^n equipped with the norm || · || is a complete normed space, or in other words, a Banach space. The convergence of a sequence of operators T_k ∈ L(R^n) can then be defined in terms of the norm.

Convergence in Operator Norm

Definition 3. A sequence of linear operators T_k ∈ L(R^n) is said to converge to a linear operator T ∈ L(R^n) as k → ∞, i.e.,

    lim_{k→∞} T_k = T,

if for any ε > 0 there exists an N such that ||T - T_k|| < ε for all k ≥ N.

Now we can show that the infinite Taylor series

    e^{Tt} = Σ_{k=0}^∞ (1/k!) T^k t^k

converges in the operator norm.

The Operator Exponential e^{Tt}

Theorem 4. Given T ∈ L(R^n) and t_0 > 0, the series

    e^{Tt} := Σ_{k=0}^∞ (1/k!) T^k t^k

is absolutely and uniformly convergent for all |t| ≤ t_0. Moreover, ||e^{Tt}|| ≤ e^{||T|| |t|}.

To prove this theorem, we need the following lemma.

Lemma 5. For S, T ∈ L(R^n) and x ∈ R^n,
(a) ||T(x)|| ≤ ||T|| ||x||
(b) ||TS|| ≤ ||T|| ||S||
(c) ||T^k|| ≤ ||T||^k for k = 0, 1, 2, ...
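The bound ||e^{Tt}|| ≤ e^{||T|| |t|} is transparent for a diagonal matrix, where both sides are explicit: ||e^{At}|| = max_i e^{λ_i t} while ||A|| = max_i |λ_i|. A sketch with illustrative eigenvalues:

```python
import math

# For diagonal A = diag(l1, l2): e^{At} = diag(e^{l1 t}, e^{l2 t}), so
# ||e^{At}|| = max_i e^{li t}, while ||A|| = max_i |li|.
l1, l2 = -3.0, 2.0                    # illustrative eigenvalues
normA = max(abs(l1), abs(l2))

checks = []
for t in (0.0, 0.3, 1.0, 2.5):
    norm_expAt = max(math.exp(l1 * t), math.exp(l2 * t))
    bound = math.exp(normA * abs(t))
    checks.append(norm_expAt <= bound + 1e-12)
```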

The Matrix Exponential e^{At}

By identifying an n × n matrix A with a linear operator T ∈ L(R^n) via the relation T(x) = Ax, we may define the matrix exponential e^{At} as follows.

Definition 6. Let A be an n × n matrix. Then for t ∈ R, e^{At} is the n × n matrix defined by the Taylor series

    e^{At} = Σ_{k=0}^∞ (1/k!) A^k t^k.

As will be shown later, the matrix exponential e^{At} can be computed in terms of the eigenvalues and eigenvectors of A.
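Definition 6 can be implemented verbatim by summing partial sums of the series (an illustrative sketch, not an efficient or numerically robust way to compute e^{At}). For the nilpotent matrix A = [0 1; 0 0] the series terminates, giving e^A = I + A, and for a diagonal matrix it should reproduce diag[e^{λ_i t}]:

```python
import math

def mat_mul(X, Y):
    """Product of two 2x2 matrices given as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def expm_series(A, t, terms=40):
    """Partial sum of the Taylor series e^{At} = sum_k (1/k!) A^k t^k."""
    result = [[1.0, 0.0], [0.0, 1.0]]     # k = 0 term: the identity
    power = [[1.0, 0.0], [0.0, 1.0]]
    for k in range(1, terms):
        power = mat_mul(power, A)         # power is now A^k
        coef = t ** k / math.factorial(k)
        result = [[result[i][j] + coef * power[i][j] for j in range(2)]
                  for i in range(2)]
    return result

# Nilpotent example: A^2 = 0, so the series terminates and e^A = I + A.
E = expm_series([[0.0, 1.0], [0.0, 0.0]], 1.0)

# Diagonal example: the series should reproduce diag(e^{0.5*2}, e^{-1*2}).
D = expm_series([[0.5, 0.0], [0.0, -1.0]], 2.0)
```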

Properties of the Matrix Exponential

We next establish some basic properties of the operator exponential e^T in order to facilitate the computation of the corresponding matrix exponential e^A.

Proposition 7. If P and T are linear operators on R^n and S = P T P^{-1}, then e^S = P e^T P^{-1}.

Corollary 8. If P^{-1} A P = diag[λ_j], then e^{At} = P diag[e^{λ_j t}] P^{-1}.

Proposition 9. If S and T are linear operators on R^n which commute, i.e., ST = TS, then e^{S+T} = e^S e^T = e^T e^S.

Corollary 10. If T is a linear operator on R^n, the inverse of the linear operator e^T is given by (e^T)^{-1} = e^{-T}.

Properties of the Matrix Exponential (Cont'd)

Corollary 11 (Complex Conjugate Eigenvalues). If

    A = [a -b; b a],

then

    e^A = e^a [cos b  -sin b; sin b  cos b].

Corollary 12 (Nontrivial Jordan Block). If

    A = [a b; 0 a],

then

    e^A = e^a [1 b; 0 1].
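Both corollaries can be checked against the series definition of e^A directly (a sketch; the values a = 0.5, b = 1.2 are illustrative):

```python
import math

def mat_mul(X, Y):
    """Product of two 2x2 matrices given as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def expm(A, terms=40):
    """e^A summed directly from its Taylor series (fine for small 2x2 inputs)."""
    result = [[1.0, 0.0], [0.0, 1.0]]
    power = [[1.0, 0.0], [0.0, 1.0]]
    for k in range(1, terms):
        power = mat_mul(power, A)
        c = 1.0 / math.factorial(k)
        result = [[result[i][j] + c * power[i][j] for j in range(2)]
                  for i in range(2)]
    return result

a, b = 0.5, 1.2                       # illustrative values

# Corollary 11: A = [a -b; b a]  ->  e^A = e^a * (rotation by angle b).
E1 = expm([[a, -b], [b, a]])
R = [[math.cos(b), -math.sin(b)], [math.sin(b), math.cos(b)]]
err1 = max(abs(E1[i][j] - math.exp(a) * R[i][j])
           for i in range(2) for j in range(2))

# Corollary 12: A = [a b; 0 a]  ->  e^A = e^a * [1 b; 0 1].
E2 = expm([[a, b], [0.0, a]])
J = [[1.0, b], [0.0, 1.0]]
err2 = max(abs(E2[i][j] - math.exp(a) * J[i][j])
           for i in range(2) for j in range(2))
```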

Matrix Exponential for 2 × 2 Matrices

It will be shown in Section 1.8 that, for any 2 × 2 matrix A, there is an invertible 2 × 2 matrix P (whose columns consist of generalized eigenvectors of A) such that the matrix B = P^{-1} A P has one of the following forms:

    B = [λ 0; 0 μ],  B = [λ 1; 0 λ],  or  B = [a -b; b a].

It then follows that e^{At} = P e^{Bt} P^{-1}, where

    e^{Bt} = [e^{λt} 0; 0 e^{μt}],  e^{Bt} = e^{λt} [1 t; 0 1],  or  e^{Bt} = e^{at} [cos bt  -sin bt; sin bt  cos bt].