1 Linear Systems

The goal of this chapter is to study linear systems of ordinary differential equations:
$$\dot{x} = Ax, \qquad x(0) = x_0, \qquad (1)$$
where $x \in \mathbb{R}^n$, $A$ is an $n \times n$ matrix, and
$$\dot{x} = \frac{dx}{dt} = \left( \frac{dx_1}{dt}, \ldots, \frac{dx_n}{dt} \right)^T.$$
It will be shown that the unique solution of Eq. (1) is given by $x(t) = e^{At} x_0$, where $e^{At}$ is an $n \times n$ matrix defined by its Taylor series. A good portion of this chapter is concerned with the computation of $e^{At}$ in terms of the eigenvalues and eigenvectors of $A$.
1.1 Uncoupled Linear Systems

Let's start the solution of the linear system (1) with the simplest case, where the system contains only one equation (i.e. $n = 1$):
$$\dot{x} = ax, \qquad x(0) = c.$$
The method of separation of variables immediately gives $x(t) = c\, e^{at}$.
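As a quick sanity check, the closed-form solution can be compared against a crude numerical integration. The sketch below (the values of $a$, $c$, and $t$ are arbitrary test choices, not from the text) integrates $\dot{x} = ax$ with the forward Euler method and compares the result to $c\, e^{at}$:

```python
import math

# Forward-Euler integration of x' = a*x, x(0) = c, as a numerical
# sanity check of the closed-form solution x(t) = c * e^(a t).
# The values a, c, t below are illustrative choices, not from the text.
def euler(a, c, t, steps=100_000):
    dt = t / steps
    x = c
    for _ in range(steps):
        x += dt * a * x   # one Euler step: x_{n+1} = x_n + dt * a * x_n
    return x

a, c, t = -0.5, 2.0, 1.0
exact = c * math.exp(a * t)
approx = euler(a, c, t)
assert abs(exact - approx) < 1e-3
```

The Euler error shrinks as the step count grows, while the closed-form answer is exact, which is the point of having the formula.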
A 2 × 2 Uncoupled System: an Example

To move one step forward, let's consider the following 2 × 2 uncoupled system:
$$\dot{x}_1 = -x_1, \qquad \dot{x}_2 = 2x_2, \qquad x_1(0) = c_1, \quad x_2(0) = c_2.$$
Its solution is easily found to be
$$x_1(t) = c_1 e^{-t}, \qquad x_2(t) = c_2 e^{2t},$$
or in matrix form:
$$x(t) = \begin{bmatrix} e^{-t} & 0 \\ 0 & e^{2t} \end{bmatrix} c =: e^{At} c.$$
The General Case

Clearly, the same procedure can be applied to solve uncoupled systems of any size. For example, the solution of the following 3 × 3 system:
$$\dot{x}_1 = x_1, \qquad \dot{x}_2 = x_2, \qquad \dot{x}_3 = -x_3, \qquad x_i(0) = c_i, \quad i = 1, 2, 3,$$
is given by
$$x(t) = \begin{bmatrix} e^{t} & 0 & 0 \\ 0 & e^{t} & 0 \\ 0 & 0 & e^{-t} \end{bmatrix} c =: e^{At} c.$$
Phase Plane Analysis

Before wrapping up this section, let's introduce some notation that will be useful in the study of the linear system $\dot{x} = Ax$. Let's first consider the 2 × 2 system:
$$\dot{x} = Ax, \qquad A = \begin{bmatrix} -1 & 0 \\ 0 & 2 \end{bmatrix}.$$
Recall that it has the solution
$$x(t) = \begin{bmatrix} e^{-t} & 0 \\ 0 & e^{2t} \end{bmatrix} c.$$
Clearly, the above formula describes the dynamics of each of the system components $x_1$ and $x_2$.
Phase Portrait

In many cases, however, we are interested in the dynamics of the entire system $x = (x_1, x_2)^T$ besides that of the individual components $x_1$ or $x_2$. To gain a better understanding of the entire system, we eliminate the variable $t$ from the solution representation so that a single formula involving only $x_1$ and $x_2$ results:
$$x_2 = \frac{c_1^2\, c_2}{x_1^2}.$$
For any fixed $c_1 \neq 0$ and $c_2$, the above equation defines a curve in the $x_1 x_2$-plane, the so-called phase plane. The set of all solution curves for all possible values of $c_1$ and $c_2$ constitutes a phase portrait of the linear system $\dot{x} = Ax$.
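The elimination of $t$ can be checked numerically: with the solution $x_1(t) = c_1 e^{-t}$, $x_2(t) = c_2 e^{2t}$ of the diagonal system above, the point $(x_1(t), x_2(t))$ should stay on the curve $x_2 = c_1^2 c_2 / x_1^2$ for every $t$. A minimal sketch (the values of $c_1$, $c_2$ are arbitrary nonzero test choices):

```python
import math

# Check that the trajectory (x1, x2) = (c1 e^{-t}, c2 e^{2t}) stays on the
# phase-plane curve x2 = (c1^2 * c2) / x1^2 at several times t.
c1, c2 = 1.5, 0.8   # arbitrary test values; c1 must be nonzero
for t in [0.0, 0.3, 1.0, 2.5]:
    x1 = c1 * math.exp(-t)
    x2 = c2 * math.exp(2 * t)
    assert math.isclose(x2, (c1**2 * c2) / x1**2, rel_tol=1e-9)
```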
The Phase Portrait of the 2 × 2 System

The phase portrait of the 2 × 2 system is shown below.

Figure 1: The phase portrait of the 2 × 2 system.
Vector Field

The direction of the motion of the solution $x = (x_1, x_2)^T$ along the solution curves can be read off from the explicit formulas $x_1(t) = c_1 e^{-t}$, $x_2(t) = c_2 e^{2t}$. On the other hand, it can also be determined directly from the right-hand side
$$f(x) = Ax = \begin{bmatrix} -1 & 0 \\ 0 & 2 \end{bmatrix} x$$
of the system, which defines a vector field on the phase plane. The vector field must be tangent to the solution curves at every point $x$ in the phase plane.
The Vector Field of the 2 × 2 System

The vector field of the 2 × 2 system is shown below.

Figure 2: The vector field of the 2 × 2 system.
The Phase Portrait of the 3 × 3 System

A similar analysis can be carried out for more general linear systems. For example, the following figure shows the phase portrait of the 3 × 3 system $\dot{x} = Ax$ where $A = \mathrm{diag}[1, 1, -1]$:

Figure 3: The phase portrait of the 3 × 3 system.
1.2 Diagonalization

In the last section, we saw how to solve uncoupled linear systems of the form $\dot{x} = Ax = \mathrm{diag}[\lambda_1, \ldots, \lambda_n]\, x$. The purpose of this and the following sections is to develop solution techniques for general, coupled linear systems where the matrix $A$ is not necessarily diagonal. The key is to reduce $A$ to its diagonal form or, in more general situations, to its Jordan form.
Matrices with Real Distinct Eigenvalues

Let's start with the simple case where $A$ has real, distinct eigenvalues. The following theorem provides the basis for the solution of the linear system $\dot{x} = Ax$.

Theorem 1. If the eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$ of an $n \times n$ matrix $A$ are real and distinct, then any set of corresponding eigenvectors $\{v_1, v_2, \ldots, v_n\}$ forms a basis for $\mathbb{R}^n$, the matrix $P = [v_1, v_2, \ldots, v_n]$ is invertible, and
$$P^{-1} A P = \mathrm{diag}[\lambda_1, \lambda_2, \ldots, \lambda_n].$$

The proof of the theorem can be found in any standard linear algebra text, for example Lowenthal [Lo].
Matrices with Real Distinct Eigenvalues (Cont'd)

Using the above theorem, we may solve the linear system $\dot{x} = Ax$ by introducing the change of variable $y = P^{-1} x$. It reduces the original system to an uncoupled linear system:
$$\dot{y} = \mathrm{diag}[\lambda_1, \ldots, \lambda_n]\, y,$$
and the solution of the original system can then be easily found:
$$x(t) = P E(t) P^{-1} x(0),$$
where $E(t)$ is the diagonal matrix $E(t) = \mathrm{diag}\left[ e^{\lambda_1 t}, \ldots, e^{\lambda_n t} \right]$.
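The change of variable can be sketched in a few lines of code. The 2 × 2 matrix below, $A = \begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix}$, is an illustrative choice (not from the text); its eigenpairs $\lambda_1 = 3$, $v_1 = (1,1)$ and $\lambda_2 = 1$, $v_2 = (1,-1)$, and the inverse of $P = [v_1, v_2]$, were computed by hand:

```python
import math

# Sketch of x(t) = P E(t) P^{-1} x(0) for a concrete 2x2 example
# (illustrative, not from the text): A = [[2, 1], [1, 2]] has
# eigenvalues 3 and 1 with eigenvectors (1, 1) and (1, -1).
def solve(x0, t):
    P    = [[1.0, 1.0], [1.0, -1.0]]   # P = [v1 v2]
    Pinv = [[0.5, 0.5], [0.5, -0.5]]   # P^{-1}, computed by hand
    E = [math.exp(3 * t), math.exp(1 * t)]      # E(t) = diag(e^{λ1 t}, e^{λ2 t})
    y0 = [Pinv[0][0]*x0[0] + Pinv[0][1]*x0[1],  # y(0) = P^{-1} x(0)
          Pinv[1][0]*x0[0] + Pinv[1][1]*x0[1]]
    y = [E[0]*y0[0], E[1]*y0[1]]                # uncoupled evolution of y
    return [P[0][0]*y[0] + P[0][1]*y[1],        # x(t) = P y(t)
            P[1][0]*y[0] + P[1][1]*y[1]]

# Cross-check against a fine Euler integration of x' = A x.
def euler(x0, t, steps=200_000):
    dt = t / steps
    x1, x2 = x0
    for _ in range(steps):
        x1, x2 = x1 + dt * (2*x1 + x2), x2 + dt * (x1 + 2*x2)
    return [x1, x2]

xa = solve([1.0, 0.0], 0.5)
xb = euler([1.0, 0.0], 0.5)
assert abs(xa[0] - xb[0]) < 1e-3 and abs(xa[1] - xb[1]) < 1e-3
```

The three matrix products mirror the three steps of the method: transform the initial condition, evolve the uncoupled system, and transform back.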
Example

As an example, consider the linear system
$$\dot{x}_1 = -x_1 - 3x_2, \qquad \dot{x}_2 = 2x_2, \qquad x_1(0) = c_1, \quad x_2(0) = c_2.$$
Using the procedure described above, the solution is found to be
$$x_1(t) = c_1 e^{-t} + c_2 \left( e^{-t} - e^{2t} \right), \qquad x_2(t) = c_2 e^{2t}.$$
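The solution $x_1(t) = c_1 e^{-t} + c_2(e^{-t} - e^{2t})$, $x_2(t) = c_2 e^{2t}$ can be verified by direct substitution into $\dot{x}_1 = -x_1 - 3x_2$, $\dot{x}_2 = 2x_2$. A numerical version of that check, approximating the derivatives with central differences at a few test times ($c_1$, $c_2$ are arbitrary):

```python
import math

# Central-difference check that
#   x1(t) = c1 e^{-t} + c2 (e^{-t} - e^{2t}),   x2(t) = c2 e^{2t}
# satisfies x1' = -x1 - 3 x2 and x2' = 2 x2.
c1, c2, h = 1.0, 2.0, 1e-5   # arbitrary test values and step size

def x1(t): return c1 * math.exp(-t) + c2 * (math.exp(-t) - math.exp(2 * t))
def x2(t): return c2 * math.exp(2 * t)

for t in [0.0, 0.5, 1.0]:
    d1 = (x1(t + h) - x1(t - h)) / (2 * h)   # numerical x1'(t)
    d2 = (x2(t + h) - x2(t - h)) / (2 * h)   # numerical x2'(t)
    assert abs(d1 - (-x1(t) - 3 * x2(t))) < 1e-5
    assert abs(d2 - 2 * x2(t)) < 1e-5
```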
Example (Cont'd)

The phase portrait of the above system is shown below.

Figure 4: The phase portrait of the example.
Stable and Unstable Subspaces

Note that the subspaces spanned by the eigenvectors $v_1$ and $v_2$ of the matrix $A$ determine the stable and unstable subspaces of the linear system $\dot{x} = Ax$, according to the following definition.

Definition 2. Suppose that the $n \times n$ matrix $A$ has $k$ negative eigenvalues $\lambda_1, \ldots, \lambda_k$ and $n - k$ positive eigenvalues $\lambda_{k+1}, \ldots, \lambda_n$, and that these eigenvalues are distinct. Let $\{v_1, \ldots, v_n\}$ be a corresponding set of eigenvectors. Then the stable and unstable subspaces of the linear system, $E^s$ and $E^u$, are the linear subspaces spanned by $\{v_1, \ldots, v_k\}$ and $\{v_{k+1}, \ldots, v_n\}$ respectively; i.e.,
$$E^s = \mathrm{span}\{v_1, \ldots, v_k\}, \qquad E^u = \mathrm{span}\{v_{k+1}, \ldots, v_n\}.$$
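The definition is easy to watch in action. Taking the example system $\dot{x}_1 = -x_1 - 3x_2$, $\dot{x}_2 = 2x_2$ (eigenvalues $-1$ and $2$), hand-computed eigenvectors are $v_1 = (1, 0)$ and $v_2 = (-1, 1)$, so $E^s = \mathrm{span}\{v_1\}$ and $E^u = \mathrm{span}\{v_2\}$. A sketch integrating the system from initial points in each subspace (the integrator and time horizon are illustrative choices):

```python
import math

# Solutions starting in E^s = span{(1, 0)} should decay, and solutions
# starting in E^u = span{(-1, 1)} should grow, for the example system
# x1' = -x1 - 3 x2, x2' = 2 x2 with eigenvalues -1 and 2.
def flow(x0, t, steps=100_000):
    # crude Euler integration; returns the Euclidean norm of x(t)
    dt = t / steps
    x1, x2 = x0
    for _ in range(steps):
        x1, x2 = x1 + dt * (-x1 - 3 * x2), x2 + dt * (2 * x2)
    return math.hypot(x1, x2)

assert flow((1.0, 0.0), 3.0) < 1.0    # started in E^s: norm shrinks
assert flow((-1.0, 1.0), 3.0) > 1.0   # started in E^u: norm grows
```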
1.3 Exponentials of Operators

In the last section, we saw how to solve the linear system $\dot{x} = Ax$ when $A$ has real distinct eigenvalues, or more generally when $A$ is diagonalizable. The purpose of this and the following sections is to study the general case where $A$ is not necessarily diagonalizable. The key is to define the matrix exponential $e^{At}$ and verify the identity
$$\frac{d}{dt} e^{At} = A e^{At}.$$
Matrices as Linear Operators

We shall define $e^{At}$ through the Taylor series
$$e^{At} = \sum_{k=0}^{\infty} \frac{1}{k!} A^k t^k,$$
but first we need to make sure that the series converges in an appropriate norm. To introduce a norm (i.e. a "measure") for an $n \times n$ matrix $A$, we view it as a linear operator $T$ that maps an element of $\mathbb{R}^n$ (i.e. an $n$-vector) to another element of $\mathbb{R}^n$:
$$T : \mathbb{R}^n \to \mathbb{R}^n, \qquad T(x) = Ax.$$
It can be shown that the converse is also true, i.e. any linear operator that maps $\mathbb{R}^m$ to $\mathbb{R}^n$ can be identified with an $n \times m$ matrix. So matrices are indeed synonyms for linear operators.
Operator Norm

For a linear operator $T : \mathbb{R}^n \to \mathbb{R}^n$, we define the operator norm:
$$\|T\| = \sup_{x \neq 0} \frac{\|T(x)\|}{\|x\|},$$
where $\|x\|$ denotes the Euclidean norm of $x \in \mathbb{R}^n$:
$$\|x\| = \sqrt{x_1^2 + \cdots + x_n^2}.$$
It can be readily verified that the operator norm has the following equivalent definitions:
$$\|T\| = \sup_{\|x\| \le 1} \|T(x)\| \qquad \text{or} \qquad \|T\| = \sup_{\|x\| = 1} \|T(x)\|.$$

Remark. The induced norm of the matrix representation $A$ of the operator $T$ is called the 2-norm of $A$.
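For a diagonal matrix the 2-norm is easy to compute by hand: it equals the largest absolute value of the diagonal entries. The sketch below (a brute-force illustration, not an efficient algorithm) estimates $\sup_{\|x\|=1} \|Ax\|$ for $A = \mathrm{diag}(-1, 2)$ by sampling unit vectors $x = (\cos\theta, \sin\theta)$, and recovers the hand-computed value $2$:

```python
import math

# The operator (2-)norm of diag(d1, d2) is max(|d1|, |d2|).
# Estimate sup over the unit circle by sampling x = (cos θ, sin θ).
d1, d2 = -1.0, 2.0   # the diagonal example used earlier in the chapter
best = 0.0
for k in range(10_000):
    th = 2 * math.pi * k / 10_000
    x = (math.cos(th), math.sin(th))
    best = max(best, math.hypot(d1 * x[0], d2 * x[1]))  # |Ax| for unit x
assert abs(best - 2.0) < 1e-3
```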
Properties of the Operator Norm

The operator norm has all of the usual properties of a norm, namely for any linear operators $S, T : \mathbb{R}^n \to \mathbb{R}^n$,

(a) $\|T\| \ge 0$, and $\|T\| = 0$ iff $T = 0$ (positive definiteness);
(b) $\|aT\| = |a|\, \|T\|$ for any $a \in \mathbb{R}$ (homogeneity);
(c) $\|S + T\| \le \|S\| + \|T\|$ (triangle inequality or subadditivity).

It can be shown that the space $L(\mathbb{R}^n)$ of linear operators $T : \mathbb{R}^n \to \mathbb{R}^n$ equipped with the norm $\|\cdot\|$ is a complete normed space, or in other words, a Banach space. The convergence of a sequence of operators $T_k \in L(\mathbb{R}^n)$ can then be defined in terms of the norm.
Convergence in Operator Norm

Definition 3. A sequence of linear operators $T_k \in L(\mathbb{R}^n)$ is said to converge to a linear operator $T \in L(\mathbb{R}^n)$ as $k \to \infty$, i.e.,
$$\lim_{k \to \infty} T_k = T,$$
if for any $\varepsilon > 0$, there exists an $N$ such that $\|T - T_k\| < \varepsilon$ for all $k \ge N$.

Now we can show that the infinite Taylor series
$$e^{Tt} = \sum_{k=0}^{\infty} \frac{1}{k!} T^k t^k$$
converges in the operator norm.
The Operator Exponential $e^{Tt}$

Theorem 4. Given $T \in L(\mathbb{R}^n)$ and $t_0 > 0$, the series
$$e^{Tt} := \sum_{k=0}^{\infty} \frac{1}{k!} T^k t^k$$
is absolutely and uniformly convergent for all $|t| \le t_0$. Moreover, $\|e^{Tt}\| \le e^{\|T\|\, |t|}$.

To prove this theorem, we need the following lemma.

Lemma 5. For $S, T \in L(\mathbb{R}^n)$ and $x \in \mathbb{R}^n$,

(a) $\|T(x)\| \le \|T\|\, \|x\|$;
(b) $\|TS\| \le \|T\|\, \|S\|$;
(c) $\|T^k\| \le \|T\|^k$ for $k = 0, 1, 2, \ldots$
The Matrix Exponential $e^{At}$

By identifying an $n \times n$ matrix $A$ with a linear operator $T \in L(\mathbb{R}^n)$ via the relation $T(x) = Ax$, we may define the matrix exponential $e^{At}$ as follows.

Definition 6. Let $A$ be an $n \times n$ matrix. Then for $t \in \mathbb{R}$, $e^{At}$ is the $n \times n$ matrix defined by the Taylor series
$$e^{At} = \sum_{k=0}^{\infty} \frac{1}{k!} A^k t^k.$$

As will be shown later, the matrix exponential $e^{At}$ can be computed in terms of the eigenvalues and eigenvectors of $A$.
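The definition is directly computable: truncating the series at a large $N$ gives an approximation of $e^{At}$. The sketch below does this for a 2 × 2 matrix and checks the result against the known answer for the diagonal case $A = \mathrm{diag}(-1, 2)$, where $e^{At} = \mathrm{diag}(e^{-t}, e^{2t})$. (Truncated Taylor summation is used here only to illustrate the definition; production codes use more robust algorithms.)

```python
import math

# Truncated Taylor series e^{At} ≈ Σ_{k=0}^{N} (t^k / k!) A^k for 2x2 A,
# checked against the exact answer for the diagonal case A = diag(-1, 2).
def mat_mul(X, Y):
    return [[X[0][0]*Y[0][0] + X[0][1]*Y[1][0], X[0][0]*Y[0][1] + X[0][1]*Y[1][1]],
            [X[1][0]*Y[0][0] + X[1][1]*Y[1][0], X[1][0]*Y[0][1] + X[1][1]*Y[1][1]]]

def expm(A, t, N=30):
    S = [[1.0, 0.0], [0.0, 1.0]]   # running sum; k = 0 term is the identity
    P = [[1.0, 0.0], [0.0, 1.0]]   # running power A^k
    for k in range(1, N + 1):
        P = mat_mul(P, A)
        c = t**k / math.factorial(k)
        S = [[S[i][j] + c * P[i][j] for j in range(2)] for i in range(2)]
    return S

E = expm([[-1.0, 0.0], [0.0, 2.0]], 1.0)
assert abs(E[0][0] - math.exp(-1)) < 1e-9
assert abs(E[1][1] - math.exp(2)) < 1e-9
assert abs(E[0][1]) < 1e-12 and abs(E[1][0]) < 1e-12
```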
Properties of the Matrix Exponential

We next establish some basic properties of the operator exponential $e^T$ in order to facilitate the computation of the corresponding matrix exponential $e^A$.

Proposition 7. If $P$ and $T$ are linear operators on $\mathbb{R}^n$ and $S = PTP^{-1}$, then $e^S = P e^T P^{-1}$.

Corollary 8. If $P^{-1} A P = \mathrm{diag}[\lambda_j]$, then $e^{At} = P\, \mathrm{diag}[e^{\lambda_j t}]\, P^{-1}$.

Proposition 9. If $S$ and $T$ are linear operators on $\mathbb{R}^n$ which commute, i.e., $ST = TS$, then $e^{S+T} = e^S e^T = e^T e^S$.

Corollary 10. If $T$ is a linear operator on $\mathbb{R}^n$, the inverse of the linear operator $e^T$ is given by $(e^T)^{-1} = e^{-T}$.
Properties of the Matrix Exponential (Cont'd)

Corollary 11 (Complex Conjugate Eigenvalues). If
$$A = \begin{bmatrix} a & -b \\ b & a \end{bmatrix},$$
then
$$e^A = e^a \begin{bmatrix} \cos b & -\sin b \\ \sin b & \cos b \end{bmatrix}.$$

Corollary 12 (Nontrivial Jordan Block). If
$$A = \begin{bmatrix} a & b \\ 0 & a \end{bmatrix},$$
then
$$e^A = e^a \begin{bmatrix} 1 & b \\ 0 & 1 \end{bmatrix}.$$
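Corollary 12 can be checked by brute force: summing the Taylor series for $A = \begin{bmatrix} a & b \\ 0 & a \end{bmatrix}$ should reproduce $e^a \begin{bmatrix} 1 & b \\ 0 & 1 \end{bmatrix}$. A sketch with arbitrary test values of $a$ and $b$:

```python
import math

# Brute-force check of Corollary 12: for A = [[a, b], [0, a]], the
# Taylor series of e^A should equal e^a * [[1, b], [0, 1]].
a, b = 0.7, -1.3   # arbitrary test values
A = [[a, b], [0.0, a]]
S = [[1.0, 0.0], [0.0, 1.0]]   # running sum, starts at the identity
P = [[1.0, 0.0], [0.0, 1.0]]   # running power A^k
for k in range(1, 40):
    P = [[P[0][0]*A[0][0] + P[0][1]*A[1][0], P[0][0]*A[0][1] + P[0][1]*A[1][1]],
         [P[1][0]*A[0][0] + P[1][1]*A[1][0], P[1][0]*A[0][1] + P[1][1]*A[1][1]]]
    c = 1.0 / math.factorial(k)
    S = [[S[i][j] + c * P[i][j] for j in range(2)] for i in range(2)]
exact = [[math.exp(a), b * math.exp(a)], [0.0, math.exp(a)]]
for i in range(2):
    for j in range(2):
        assert abs(S[i][j] - exact[i][j]) < 1e-9
```

The check works because $A^k = \begin{bmatrix} a^k & k a^{k-1} b \\ 0 & a^k \end{bmatrix}$, so the off-diagonal series sums to $b\, e^a$.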
Matrix Exponential for 2 × 2 Matrices

It will be shown in Section 1.8 that, for any 2 × 2 matrix $A$, there is an invertible 2 × 2 matrix $P$ (whose columns consist of generalized eigenvectors of $A$) such that the matrix $B = P^{-1} A P$ has one of the following forms:
$$B = \begin{bmatrix} \lambda & 0 \\ 0 & \mu \end{bmatrix}, \qquad B = \begin{bmatrix} \lambda & 1 \\ 0 & \lambda \end{bmatrix}, \qquad \text{or} \qquad B = \begin{bmatrix} a & -b \\ b & a \end{bmatrix}.$$
It then follows that $e^{At} = P e^{Bt} P^{-1}$ where, respectively,
$$e^{Bt} = \begin{bmatrix} e^{\lambda t} & 0 \\ 0 & e^{\mu t} \end{bmatrix}, \qquad e^{Bt} = e^{\lambda t} \begin{bmatrix} 1 & t \\ 0 & 1 \end{bmatrix}, \qquad \text{or} \qquad e^{Bt} = e^{at} \begin{bmatrix} \cos bt & -\sin bt \\ \sin bt & \cos bt \end{bmatrix}.$$
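The rotation case, the least obvious of the three, can also be verified numerically: summing the Taylor series for $Bt$ with $B = \begin{bmatrix} a & -b \\ b & a \end{bmatrix}$ should reproduce $e^{at} \begin{bmatrix} \cos bt & -\sin bt \\ \sin bt & \cos bt \end{bmatrix}$. A sketch at one arbitrary choice of $a$, $b$, and $t$:

```python
import math

# Check the rotation case: for B = [[a, -b], [b, a]], the claim is
# e^{Bt} = e^{at} [[cos bt, -sin bt], [sin bt, cos bt]].
a, b, t = 0.3, 1.1, 0.8   # arbitrary test values
B = [[a, -b], [b, a]]
S = [[1.0, 0.0], [0.0, 1.0]]   # running Taylor sum for e^{Bt}
P = [[1.0, 0.0], [0.0, 1.0]]   # running power B^k
for k in range(1, 40):
    P = [[P[0][0]*B[0][0] + P[0][1]*B[1][0], P[0][0]*B[0][1] + P[0][1]*B[1][1]],
         [P[1][0]*B[0][0] + P[1][1]*B[1][0], P[1][0]*B[0][1] + P[1][1]*B[1][1]]]
    c = t**k / math.factorial(k)
    S = [[S[i][j] + c * P[i][j] for j in range(2)] for i in range(2)]
e = math.exp(a * t)
exact = [[e * math.cos(b * t), -e * math.sin(b * t)],
         [e * math.sin(b * t),  e * math.cos(b * t)]]
for i in range(2):
    for j in range(2):
        assert abs(S[i][j] - exact[i][j]) < 1e-9
```

This is the matrix form of the identity $e^{(a+ib)t} = e^{at}(\cos bt + i \sin bt)$, with $B$ playing the role of $a + ib$.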