Lecture Notes for Math 524
Dr. Michael Y. Li
October 19, 2009

These notes are based on the lecture notes of Professor James S. Muldowney, the books of Hale, Coppel, Coddington and Levinson, and Perko. They are for the use of students in my graduate ODE class.

Chapter 2. Linear Systems

2.1 Basic Theory

Consider a linear system of $n$ differential equations
\[
x_i'(t) = \sum_{j=1}^n a_{ij}(t)\,x_j(t) + f_i(t), \qquad i = 1, \dots, n,
\]
where $a_{ij}$, $f_i$ are real-valued (or complex-valued) continuous functions on an interval $I$. Let
\[
A(t) = \begin{pmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & & \vdots \\ a_{n1} & \cdots & a_{nn} \end{pmatrix}, \qquad
x = \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix}, \qquad
f = \begin{pmatrix} f_1 \\ \vdots \\ f_n \end{pmatrix}.
\]
We can write the linear system in matrix form
\[
x'(t) = A(t)\,x(t) + f(t). \tag{NH}
\]
When $f \equiv 0$, we have the corresponding homogeneous system (while (NH) is called the non-homogeneous system)
\[
x'(t) = A(t)\,x(t). \tag{H}
\]
We make the basic assumption that $A(t)$ and $f(t)$ are continuous functions over an interval $I$.

We will also study linear $n$-th order differential equations (recall the second order linear oscillators in the Introduction)
\[
a_n(t)x^{(n)}(t) + a_{n-1}(t)x^{(n-1)}(t) + \cdots + a_1(t)x'(t) + a_0(t)x(t) = g(t), \tag{1}
\]
where $x(t)$ is a scalar-valued function and $a_i$, $g$ are all scalar-valued functions, continuous on an interval $I$. Assume $a_n(t) \neq 0$. Set $b_j = a_{n-j}/a_n$, $j = 1, \dots, n$,
\[
u(t) = \begin{pmatrix} x(t) \\ x'(t) \\ \vdots \\ x^{(n-1)}(t) \end{pmatrix}, \qquad
A(t) = \begin{pmatrix}
0 & 1 & 0 & \cdots & 0 \\
0 & 0 & 1 & \cdots & 0 \\
\vdots & & & \ddots & \vdots \\
0 & 0 & 0 & \cdots & 1 \\
-b_n & -b_{n-1} & -b_{n-2} & \cdots & -b_1
\end{pmatrix}.
\]
Then the linear equation (1) can be written as a first order linear system
\[
u'(t) = A(t)u(t) + f(t), \qquad f(t) = \begin{pmatrix} 0 \\ \vdots \\ 0 \\ g(t)/a_n(t) \end{pmatrix}.
\]

Example 2.1. A first order linear equation ($n = 1$) can be completely integrated:
\[
x'(t) = a(t)x(t) + f(t),
\]
\[
e^{-\int_{t_0}^t a(s)\,ds}\bigl(x'(t) - a(t)x(t)\bigr) = e^{-\int_{t_0}^t a(s)\,ds} f(t),
\]
\[
\Bigl[e^{-\int_{t_0}^t a(s)\,ds}\,x(t)\Bigr]' = e^{-\int_{t_0}^t a(s)\,ds} f(t).
\]
Integrating, we have
\[
e^{-\int_{t_0}^t a(s)\,ds}\,x(t) - x(t_0) = \int_{t_0}^t e^{-\int_{t_0}^s a(\tau)\,d\tau} f(s)\,ds,
\]
so that
\[
x(t) = e^{\int_{t_0}^t a(s)\,ds}\Bigl[x(t_0) + \int_{t_0}^t e^{-\int_{t_0}^s a(\tau)\,d\tau} f(s)\,ds\Bigr]
     = x(t_0)\,e^{\int_{t_0}^t a(s)\,ds} + \int_{t_0}^t e^{\int_s^t a(\tau)\,d\tau} f(s)\,ds.
\]

Proposition 2.1 (Superposition Principle). The following hold for the linear systems (NH) and (H).
(i) If $x_1(t)$, $x_2(t)$ are two solutions of (H) and $c_1$, $c_2$ are two scalars, then $c_1 x_1(t) + c_2 x_2(t)$ is also a solution of (H).
(ii) If $x(t)$ is a solution of (H) and $y(t)$ is a solution of (NH), then $x(t) + y(t)$ is a solution of (NH).

Corollary 2.2. The set of all solutions to the homogeneous system (H) is a vector space.

Let $\|\cdot\|$ be a vector norm on $\mathbb{R}^n$. It induces a matrix norm for all $n \times n$ matrices by
\[
\|A\| = \sup\{\|Ax\| : x \in \mathbb{R}^n,\ \|x\| = 1\}.
\]
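The closed-form solution in Example 2.1 can be sanity-checked numerically. The sketch below uses the illustrative choices $a \equiv 1$, $f \equiv 1$, $t_0 = 0$, $x_0 = 0$ (all hypothetical), evaluates the formula by a midpoint-rule quadrature, and compares with the exact solution $x(t) = e^t - 1$ of $x' = x + 1$, $x(0) = 0$.

```python
import math

# Sketch: check the integrating-factor formula from Example 2.1 on
# x'(t) = a(t)x(t) + f(t) with a == 1, f == 1, x(0) = 0 (illustrative choices),
# whose exact solution is x(t) = e^t - 1.

def solution(t, t0=0.0, x0=0.0, n=10000):
    # x(t) = e^{int_{t0}^t a} x0 + int_{t0}^t e^{int_s^t a} f(s) ds,
    # evaluated here with a == 1, f == 1 by a midpoint rule.
    h = (t - t0) / n
    integral = sum(math.exp(t - (t0 + (k + 0.5) * h)) * h for k in range(n))
    return math.exp(t - t0) * x0 + integral

print(abs(solution(1.0) - (math.e - 1)))  # small discretization error
```

The quadrature reproduces $e - 1$ to within the midpoint-rule error, as expected.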
Theorem 2.3. Assume that $A(t)$ and $f(t)$ are continuous for $t \in I$. Then, for any $t_0 \in I$ and $x_0 \in \mathbb{R}^n$, the solution to
\[
x'(t) = A(t)x(t) + f(t), \qquad x(t_0) = x_0,
\]
exists for all $t \in I$ and is unique.

Proof. Integrating $x'(t) = A(t)x(t) + f(t)$ from $t_0$ to $t$ and taking norms, we have, for $t_0 \le t \le b$,
\[
\|x(t)\| \le \|x_0\| + \int_{t_0}^t \|f(s)\|\,ds + \int_{t_0}^t \|A(s)\|\,\|x(s)\|\,ds
\le \|x_0\| + \int_{t_0}^b \|f(s)\|\,ds + \int_{t_0}^t \|A(s)\|\,\|x(s)\|\,ds.
\]
Gronwall's lemma implies
\[
\|x(t)\| \le \Bigl[\|x_0\| + \int_{t_0}^b \|f(s)\|\,ds\Bigr]\,e^{\int_{t_0}^t \|A(s)\|\,ds}, \qquad t_0 \le t \le b.
\]
This implies that $x(t)$ exists on any compact subinterval $[t_0, b]$ of $I$ (the interval $[a, t_0]$ is treated similarly). Therefore $x(t)$ exists on $I$. The uniqueness follows from the fact that $F(t, x) = A(t)x + f(t)$ is Lipschitz in $x$ on any compact subset of $I \times \mathbb{R}^n$. $\square$

Corollary 2.4. If $x(t)$ is a solution of (H), then $x(t_0) = 0 \implies x(t) = 0$ for all $t \in I$.

Theorem 2.5. The set of solutions of (H) is a vector space of dimension $n$.

Proof. Let $x_i(t)$, $i = 1, \dots, n$, be $n$ solutions of (H) such that $x_i(t_0) = e_i$, where $\{e_1, \dots, e_n\}$ is a basis of $\mathbb{R}^n$. Let $c_i$, $i = 1, \dots, n$, be scalars. Set
\[
x(t) = c_1 x_1(t) + \cdots + c_n x_n(t). \tag{2}
\]
Then
\[
x(t_0) = c_1 e_1 + \cdots + c_n e_n. \tag{3}
\]
By Corollary 2.4, $x(t) \equiv 0$ if and only if $x(t_0) = 0$, if and only if $c_i = 0$, $i = 1, \dots, n$. Therefore the $x_i(t)$ are linearly independent. Furthermore, since any vector $x(t_0)$ can be written as in (3), by uniqueness, each solution of (H) can be written as in (2). Therefore $\{x_1(t), \dots, x_n(t)\}$ is a basis of the solution space. $\square$
2.2 Matrix Solutions

An $n \times n$ matrix-valued function $X(t)$ is a fundamental matrix (fm) of (H) if
\[
X'(t) = A(t)X(t) \tag{4}
\]
and $X(t)$ is non-singular for all $t \in I$. The proof of Theorem 2.5 also implies the following.

Theorem 2.6. The following are equivalent:
(i) $X(t)$ is a fm of (H).
(ii) The $n$ columns of $X(t)$ are linearly independent solutions of (H).
(iii) $X(t)$ satisfies (4) and $X(t_0)$ is nonsingular for some $t_0 \in I$.

A matrix solution $X(t, t_0)$ is called a principal matrix solution of (H) at $t_0$ if
\[
\frac{\partial}{\partial t}X(t, t_0) = A(t)\,X(t, t_0), \qquad X(t_0, t_0) = I_{n \times n}.
\]
The solution $x(t) = x(t; t_0, x_0)$ of (H) can be written as
\[
x(t) = X(t, t_0)\,x_0.
\]
Furthermore, by uniqueness,
\[
X(t, t_0) = X(t, s)X(s, t_0) \qquad \text{and} \qquad X(t, t_0) = X(t)X^{-1}(t_0),
\]
where $X(t)$ is any fm of (H).

Exercise. Let $X(t)$ be a fm of (H).
(1) Show that $Y(t)$ is a fm of (H) if and only if $Y(t) = X(t)C$ for some constant non-singular matrix $C$.
(2) Under what circumstances is $CX(t)$ a fm of (H)?

Theorem 2.7 (Abel-Liouville-Jacobi Formula). Let $X(t)$ be a fm of (H). Then
\[
\det(X(t)) = \det(X(t_0))\,e^{\int_{t_0}^t \operatorname{tr}(A(s))\,ds}. \tag{5}
\]
Proof. Show that $\frac{d}{dt}\det(X(t)) = \operatorname{tr}(A(t))\det(X(t))$. (Exercise) $\square$
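The Abel-Liouville-Jacobi formula (5) is easy to test numerically. The sketch below takes the constant-coefficient case, where $X(t) = e^{At}$ is a fm, approximates $e^{At}$ by a truncated power series for an arbitrary illustrative $2 \times 2$ matrix $A$, and compares $\det X(t)$ with $e^{t\operatorname{tr}A}$.

```python
# Sketch: verify the Abel-Liouville-Jacobi formula (5) for a constant-coefficient
# system, where X(t) = e^{At} is a fundamental matrix and det X(t) should equal
# det X(0) * e^{t tr A} = e^{t tr A}.  The matrix A is an arbitrary illustrative choice.
import math

A = [[1.0, 2.0], [3.0, 4.0]]

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def expm(A, t, terms=30):
    # e^{At} by truncated power series; accurate for moderate ||At||
    At = [[a * t for a in row] for row in A]
    result = [[1.0, 0.0], [0.0, 1.0]]
    term = [[1.0, 0.0], [0.0, 1.0]]
    for n in range(1, terms):
        term = [[v / n for v in row] for row in mat_mul(term, At)]
        result = [[result[i][j] + term[i][j] for j in range(2)] for i in range(2)]
    return result

t = 0.5
X = expm(A, t)
det_X = X[0][0] * X[1][1] - X[0][1] * X[1][0]
print(det_X, math.exp(t * (A[0][0] + A[1][1])))  # the two values agree
```

Both numbers come out equal to $e^{2.5}$ up to truncation error, as (5) predicts.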
2.3 Higher Order Equations

Let $\varphi_1(t), \dots, \varphi_m(t)$ be smooth functions. The Wronskian $W(t; \varphi_1, \dots, \varphi_m)$ of $\varphi_1, \dots, \varphi_m$ is defined as
\[
W(t; \varphi_1, \dots, \varphi_m) =
\begin{vmatrix}
\varphi_1 & \varphi_2 & \cdots & \varphi_m \\
\varphi_1' & \varphi_2' & \cdots & \varphi_m' \\
\vdots & \vdots & & \vdots \\
\varphi_1^{(m-1)} & \varphi_2^{(m-1)} & \cdots & \varphi_m^{(m-1)}
\end{vmatrix}. \tag{6}
\]
The functions $\varphi_1, \dots, \varphi_m$ are linearly independent over an interval $I$ if $W(t; \varphi_1, \dots, \varphi_m) \neq 0$ for all $t \in I$.

Consider an $n$-th order equation
\[
x^{(n)} + b_1 x^{(n-1)} + \cdots + b_{n-1}x' + b_n x = 0, \tag{7}
\]
and the corresponding system
\[
u' = Au, \tag{8}
\]
where $u = (x, x', \dots, x^{(n-1)})^T$ and
\[
A(t) = \begin{pmatrix}
0 & 1 & 0 & \cdots & 0 \\
0 & 0 & 1 & \cdots & 0 \\
\vdots & & & \ddots & \vdots \\
0 & 0 & 0 & \cdots & 1 \\
-b_n & -b_{n-1} & -b_{n-2} & \cdots & -b_1
\end{pmatrix}. \tag{9}
\]
Applying Theorems 2.5, 2.6, and 2.7 to system (8), we have the following.

Theorem 2.8. The set of solutions of an $n$-th order linear equation is a vector space of dimension $n$. Let $\varphi_1(t), \dots, \varphi_n(t)$ be a fundamental set of solutions, and $W(t) = W(t; \varphi_1, \dots, \varphi_n)$. Then
\[
W(t) = W(t_0)\,e^{-\int_{t_0}^t b_1(s)\,ds}. \tag{10}
\]
Remark. A set of solutions $\varphi_1(t), \dots, \varphi_m(t)$ to a linear homogeneous equation is linearly independent over $I$ if and only if $W(t; \varphi_1, \dots, \varphi_m) \neq 0$ for all $t \in I$, if and only if $W(t_0; \varphi_1, \dots, \varphi_m) \neq 0$ for some $t_0 \in I$, by uniqueness.

2.4 Adjoint Equation

Let $A^*$ be the conjugate transpose of $A$. The linear system
\[
z'(t) = -A^*(t)\,z(t) \tag{H$^*$}
\]
is called the adjoint equation of (H).

Lemma 2.9. Let $Y(t)$ be a non-singular $n \times n$ matrix-valued differentiable function. Then
\[
(Y^{-1})' = -Y^{-1}Y'Y^{-1}.
\]
Proof. Differentiating $Y(t)Y^{-1}(t) = I$, we have
\[
Y'Y^{-1} + Y(Y^{-1})' = 0 \quad \text{(why does $(Y^{-1})'$ exist?)},
\]
which leads to the claim. $\square$

Theorem 2.10. Let $X(t)$ be a fm of (H). Then
(i) $Z(t) = (X(t)^*)^{-1}$ is a fm of the adjoint equation (H$^*$).
(ii) $Y(t)$ is a fm of (H$^*$) if and only if $X^*(t)Y(t) = C$, where $C$ is constant and non-singular.

Proof. We have
\[
(X^{-1})' = -X^{-1}X'X^{-1} = -X^{-1}AXX^{-1} = -X^{-1}A, \tag{11}
\]
and taking conjugate transposes,
\[
\bigl((X^*)^{-1}\bigr)' = -A^*(X^*)^{-1}. \tag{12}
\]
This establishes the first claim. By Theorem 2.6, $Y(t)$ is a fm of (H$^*$) if and only if $Y(t) = (X^*)^{-1}(t)C$ with $C$ non-singular, if and only if $X^*(t)Y(t) = C$. $\square$

Definition. (H) is self-adjoint if $A(t) = -A^*(t)$.

Corollary 2.11. Let $X(t)$ be a fm of (H). Then (H) is self-adjoint if and only if $X^*(t)X(t) = C$ for some constant and non-singular $C$.

Exercise. Suppose that $x(t)$ and $y(t)$ are solutions of $x' = A(t)x$ and its adjoint equation $y' = -A^*(t)y$, respectively. Prove that $x^*(t)\,y(t) = \text{const}$.

2.5 Non-homogeneous Equations: the Variation of Constants Formula

Consider a linear system
\[
x'(t) = A(t)\,x(t) + f(t). \tag{NH}
\]
Let $X(t)$ be a fm of the corresponding homogeneous system
\[
x'(t) = A(t)\,x(t). \tag{H}
\]
By the superposition principle, to find the general solution of (NH) it is sufficient to find a particular solution $x_p(t)$ of (NH), to which we superpose the general solution of (H). To find $x_p(t)$, we use the method of variation of parameters. Assume that
\[
x_p(t) = X(t)y(t). \tag{13}
\]
We determine the right choice of $y(t)$ by plugging (13) into (NH):
\[
x_p' = X'y + Xy' = A(t)Xy + Xy' = A(t)x_p + Xy' \overset{\text{want}}{=} A(t)x_p + f.
\]
We find $Xy' = f$, or $y' = X^{-1}f$. Thus
\[
y(t) = c + \int_{t_0}^t X^{-1}(s)f(s)\,ds.
\]
Therefore
\[
x_p(t) = X(t)\Bigl[c + \int_{t_0}^t X^{-1}(s)f(s)\,ds\Bigr].
\]
Therefore, the solution to (NH) satisfying the initial condition $x(t_0) = x_0$ has the form
\[
x(t) = X(t)X^{-1}(t_0)\,x_0 + \int_{t_0}^t X(t)X^{-1}(s)f(s)\,ds \tag{14}
\]
\[
\phantom{x(t)} = X(t, t_0)\,x_0 + \int_{t_0}^t X(t, s)f(s)\,ds, \tag{15}
\]
which is called the variation of constants formula.

2.6 Linear Systems with Constant Coefficients

Consider
\[
x'(t) = A\,x(t), \tag{16}
\]
where $A$ is a constant $n \times n$ matrix. Let $P$ be an $n \times n$ invertible matrix. The change of variables $x = Py$ transforms (16) into the linear system
\[
y' = P^{-1}APy. \tag{17}
\]
Linear systems (16) and (17) are equivalent. By choosing an appropriate $P$, the matrix $J = P^{-1}AP$ can take various canonical forms, which makes system (17) much easier to analyze.

First, we introduce the exponential $e^A$ of an $n \times n$ matrix $A$:
\[
e^A = \sum_{n=0}^{\infty} \frac{A^n}{n!} = I + A + \frac{A^2}{2} + \cdots. \tag{18}
\]
The infinite series converges in matrix norm, since
\[
\Bigl\|\sum_{k=m}^n \frac{A^k}{k!}\Bigr\| \le \sum_{k=m}^n \frac{\|A\|^k}{k!},
\]
and the right-hand sum can be made arbitrarily small when $m$, $n$ are sufficiently large, due to the convergence of the power series $\sum x^n/n!$.

Theorem 2.12. $e^A$ has the following properties:
(1) $e^A$ is invertible and $(e^A)^{-1} = e^{-A}$.
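As a quick check of the variation of constants formula (14)-(15), the sketch below uses the hypothetical diagonal system $x' = \operatorname{diag}(-1, 2)x + (1, 0)^T$ with $x(0) = 0$. Then $X(t, s) = \operatorname{diag}(e^{-(t-s)}, e^{2(t-s)})$, and the first component of the formula reduces to $\int_0^t e^{-(t-s)}\,ds = 1 - e^{-t}$.

```python
import math

# Sketch: the variation-of-constants formula (14)-(15) for the illustrative
# diagonal system x' = diag(-1, 2)x + (1, 0)^T, x(0) = 0.  Here
# X(t, s) = diag(e^{-(t-s)}, e^{2(t-s)}), so x1(t) = 1 - e^{-t} and x2(t) = 0.

def x_via_formula(t, n=20000):
    # midpoint-rule quadrature of the integral term in (15)
    h = t / n
    x1 = sum(math.exp(-(t - (k + 0.5) * h)) * 1.0 * h for k in range(n))
    x2 = 0.0  # second component of f is 0, and x0 = 0
    return x1, x2

x1, x2 = x_via_formula(1.0)
print(x1, 1 - math.exp(-1.0))  # the quadrature matches the closed form
```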
(2) If $AB = BA$, then $e^{A+B} = e^A e^B$. (Also give an example where $e^{A+B} \neq e^A e^B$.)
(3) If $P$ is an $n \times n$ invertible matrix, then $e^{P^{-1}AP} = P^{-1}e^A P$.
(4) If $A$ is diagonalizable, so is $e^A$.
(5) If $Ax = \lambda x$ for some $x \neq 0$, then $e^A x = e^{\lambda}x$.
(6) If $A$ is symmetric, then $e^A$ is positive definite.
(7) $\det(e^A) = e^{\operatorname{tr}A}$.
(8) $\frac{d}{dt}e^{At} = Ae^{At} = e^{At}A$.
(9) $e^{At}$ is a fundamental matrix of $x' = Ax$.
(10) If $X(t)$ is any fundamental matrix of $x' = Ax$, then $e^{At} = X(t)X^{-1}(0)$. (Hint: use the uniqueness of the solution to the IVP.)

Proof. Exercise. $\square$

In the following, we will use several canonical forms $J = P^{-1}AP$ to investigate the fm $e^{At} = Pe^{Jt}P^{-1}$ of (16).

Proposition 2.13 (Jordan Canonical Form). There exists an invertible matrix $P$ such that
\[
J = P^{-1}AP = \operatorname{diag}(J_0, J_1, \dots, J_k), \tag{19}
\]
where
\[
J_0 = \begin{pmatrix} \lambda_1 & & \\ & \ddots & \\ & & \lambda_q \end{pmatrix}, \qquad
J_i = \begin{pmatrix} \lambda_{q+i} & & & \\ 1 & \lambda_{q+i} & & \\ & \ddots & \ddots & \\ & & 1 & \lambda_{q+i} \end{pmatrix}_{r_i \times r_i},
\]
$q + r_1 + \cdots + r_k = n$, and the $\lambda_i$ are eigenvalues of $A$. Note that the $\lambda_j$ need not all be distinct. The eigenvalues $\lambda_1, \dots, \lambda_q$ that appear in $J_0$ are called of simple type.

Exercise. Give an example of a matrix whose eigenvalues are of simple type but not simple.
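Property (2) asks for an example where $e^{A+B} \neq e^A e^B$. One minimal sketch uses a pair of non-commuting nilpotent matrices, for which all three exponentials can be computed exactly by hand: with $A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$ and $B = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}$ we have $A^2 = B^2 = 0$, so $e^A = I + A$ and $e^B = I + B$, while $(A+B)^2 = I$ gives $e^{A+B} = (\cosh 1)I + (\sinh 1)(A+B)$.

```python
import math

# Sketch: the example requested in property (2).  A and B below do not commute;
# since A^2 = B^2 = 0, e^A = I + A and e^B = I + B exactly, while
# e^{A+B} = [[cosh 1, sinh 1], [sinh 1, cosh 1]] because (A+B)^2 = I.

eA = [[1.0, 1.0], [0.0, 1.0]]            # I + A, since A^2 = 0
eB = [[1.0, 0.0], [1.0, 1.0]]            # I + B, since B^2 = 0
eA_eB = [[sum(eA[i][k] * eB[k][j] for k in range(2)) for j in range(2)]
         for i in range(2)]              # = [[2, 1], [1, 1]]
eApB = [[math.cosh(1.0), math.sinh(1.0)],
        [math.sinh(1.0), math.cosh(1.0)]]

print(eA_eB[0][0], eApB[0][0])  # 2.0 vs cosh(1) ~ 1.543: e^{A+B} != e^A e^B
```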
Proposition 2.14. Let $J_i$ be as defined above. Then
\[
e^{Jt} = \operatorname{diag}(e^{J_0 t}, e^{J_1 t}, \dots, e^{J_k t}), \tag{20}
\]
\[
e^{J_0 t} = \operatorname{diag}(e^{\lambda_1 t}, \dots, e^{\lambda_q t}), \tag{21}
\]
\[
e^{J_i t} = e^{\lambda_{q+i} t}
\begin{pmatrix}
1 & & & \\
t & 1 & & \\
\vdots & \ddots & \ddots & \\
\frac{t^{r_i - 1}}{(r_i - 1)!} & \cdots & t & 1
\end{pmatrix}. \tag{22}
\]

Proof. For the third relation, let $J = \lambda I + N$, where
\[
N = \begin{pmatrix} 0 & & & \\ 1 & 0 & & \\ & \ddots & \ddots & \\ & & 1 & 0 \end{pmatrix}_{r \times r}.
\]
Then $N^r = 0$ and $N^{r-1} \neq 0$; such a matrix is called nilpotent of order $r$. Since $\lambda I$ and $N$ commute ($\lambda I N = N \lambda I$),
\[
e^{Jt} = e^{(\lambda I + N)t} = e^{\lambda I t}e^{Nt} = e^{\lambda t}e^{Nt} = e^{\lambda t}\Bigl[I + Nt + \cdots + \frac{N^{r-1}t^{r-1}}{(r-1)!}\Bigr].
\]
Computing $N^k$, $k = 1, \dots, r - 1$, leads to the desired relation for $e^{Jt}$. $\square$

In summary, we have the following result.

Theorem 2.15. Every solution of (17) has the form
\[
y(t) = \sum_{i=1}^q c_i e^{\lambda_i t} + \sum_{i=1}^k P_{r_i}(t)\,e^{\lambda_{q+i} t}, \tag{23}
\]
where the $c_i$ are constant vectors and the $P_{r_i}(t)$ are vector-valued polynomials of degree no greater than $r_i - 1$.

From this theorem we immediately have the next two results on the asymptotic behaviour of solutions.

Theorem 2.16. All solutions $x(t)$ of (16) satisfy $x(t) \to 0$ as $t \to \infty$ if and only if all eigenvalues of $A$ have negative real part.

Theorem 2.17. All solutions of (16) are bounded for $t \ge 0$ if and only if $\|X(t)\| \le M$ for $t \ge 0$, for a fm $X(t)$ and some constant $M$; and this holds if and only if (a) all the eigenvalues of $A$ have nonpositive real parts, and (b) all eigenvalues $\lambda$ with zero real part are of simple type (i.e. appear in $J_0$ of the Jordan Canonical Form).

Proof. Exercise. $\square$
2.7 Calculation of $e^{At}$

Definition. A vector $v \neq 0$ in $\mathbb{R}^n$ is said to be a generalized eigenvector with respect to an eigenvalue $\lambda$ if $(\lambda I - A)^k v = 0$ for some $1 \le k \le r$, where $r$ is the multiplicity of $\lambda$.

For instance, the matrix $A = \begin{pmatrix} 2 & 0 \\ 1 & 2 \end{pmatrix}$ has a double eigenvalue $\lambda = 2$ but only one linearly independent eigenvector $v_0 = (0, 1)^T$. It has a generalized eigenvector $v = (1, 0)^T$ that is linearly independent of $v_0$ and satisfies $(2I - A)^2 v = 0$.

The following result is standard in linear algebra.

Theorem 2.18. Let $\lambda_1, \dots, \lambda_n$ be the eigenvalues of $A$, counting multiplicities. Then there exists a set of $n$ linearly independent generalized eigenvectors $\{u_1, \dots, u_n\}$. Let $P = (u_1, \dots, u_n)$. Then $A = S + N$, where $P^{-1}SP = \operatorname{diag}(\lambda_1, \dots, \lambda_n)$ and $N$ is nilpotent of order $k \le n$. Moreover, $NS = SN$.

If all generalized eigenvectors are eigenvectors, then $A$ has $n$ linearly independent eigenvectors. In this case $N = 0$ and $P^{-1}AP = \operatorname{diag}(\lambda_1, \dots, \lambda_n)$, i.e. $A$ is diagonalizable.

This result can simplify the calculation of $e^{At}$.

Theorem 2.19. Let $\lambda_1, \dots, \lambda_n$ be the eigenvalues of $A$, counting multiplicities, and $u_1, \dots, u_n$ a set of linearly independent generalized eigenvectors. Set $P = (u_1, \dots, u_n)$. Then
\[
e^{At} = P \operatorname{diag}(e^{\lambda_1 t}, \dots, e^{\lambda_n t})\,P^{-1}\Bigl[I + Nt + \cdots + \frac{(Nt)^{k-1}}{(k-1)!}\Bigr]. \tag{24}
\]
In particular, when $A$ has an eigenvalue $\lambda$ of multiplicity $n$, we can choose $P$ to be the identity matrix and have the following:
\[
e^{At} = e^{\lambda t}\Bigl[I + Nt + \cdots + \frac{(Nt)^{k-1}}{(k-1)!}\Bigr]. \tag{25}
\]

Examples. (1) Find $e^{At}$ for
\[
A = \begin{pmatrix} 3 & 1 \\ -1 & 1 \end{pmatrix}.
\]
Solution. $A$ has a double eigenvalue $\lambda = 2$. Thus
\[
S = \begin{pmatrix} 2 & 0 \\ 0 & 2 \end{pmatrix} \quad \text{and} \quad N = A - S = \begin{pmatrix} 1 & 1 \\ -1 & -1 \end{pmatrix},
\]
with $N^2 = 0$. Therefore, by (25),
\[
e^{At} = e^{2t}[I + Nt] = e^{2t}\begin{pmatrix} 1 + t & t \\ -t & 1 - t \end{pmatrix}.
\]

(2) Find $e^{At}$ for
\[
A = \begin{pmatrix} 1 & 0 & 0 \\ -1 & 2 & 0 \\ 1 & 1 & 2 \end{pmatrix}.
\]
Solution. The eigenvalues of $A$ are $\lambda_1 = 1$, $\lambda_2 = \lambda_3 = 2$. The corresponding eigenvectors are $v_1 = (1, 1, -2)^T$ and $v_2 = (0, 0, 1)^T$. We find a generalized eigenvector for $\lambda_2 = 2$ that is linearly independent of $v_2$ by solving the system $(A - 2I)^2 v = 0$, i.e.
\[
\begin{pmatrix} 1 & 0 & 0 \\ 1 & 0 & 0 \\ -2 & 0 & 0 \end{pmatrix} v = 0.
\]
We get $v_3 = (0, 1, 0)^T$. Thus
\[
P = \begin{pmatrix} 1 & 0 & 0 \\ 1 & 0 & 1 \\ -2 & 1 & 0 \end{pmatrix} \quad \text{and} \quad P^{-1} = \begin{pmatrix} 1 & 0 & 0 \\ 2 & 0 & 1 \\ -1 & 1 & 0 \end{pmatrix}.
\]
Therefore,
\[
S = P \operatorname{diag}(1, 2, 2)\,P^{-1} = \begin{pmatrix} 1 & 0 & 0 \\ -1 & 2 & 0 \\ 2 & 0 & 2 \end{pmatrix}, \qquad
N = A - S = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ -1 & 1 & 0 \end{pmatrix},
\]
and $N^2 = 0$. Therefore, by (24),
\[
e^{At} = P \operatorname{diag}(e^t, e^{2t}, e^{2t})\,P^{-1}[I + Nt]
= \begin{pmatrix} e^t & 0 & 0 \\ e^t - e^{2t} & e^{2t} & 0 \\ -2e^t + (2 - t)e^{2t} & te^{2t} & e^{2t} \end{pmatrix}.
\]

In the case of complex eigenvalues, one can choose a real matrix $P$ as follows. See Perko, page 36.

Theorem 2.20. Let $A$ be a real $2n \times 2n$ matrix with complex eigenvalues $\lambda_j = a_j + ib_j$ and $\bar\lambda_j = a_j - ib_j$, $j = 1, \dots, n$. There exists a basis (of $\mathbb{C}^{2n}$) of generalized eigenvectors $w_j =$
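The closed form of $e^{At}$ obtained in Example (2) can be checked against the defining power series (18). The sketch below compares, for instance, the $(3,1)$ entry $-2e^t + (2 - t)e^{2t}$ with a truncated-series evaluation at the arbitrary time $t = 0.7$.

```python
import math

# Sketch: numerically check the closed form of e^{At} from Example (2),
# e.g. its (3,1) entry -2e^t + (2 - t)e^{2t}, against a truncated power series.

A = [[1.0, 0.0, 0.0], [-1.0, 2.0, 0.0], [1.0, 1.0, 2.0]]

def expm(A, t, terms=40):
    n = len(A)
    At = [[a * t for a in row] for row in A]
    result = [[float(i == j) for j in range(n)] for i in range(n)]
    term = [[float(i == j) for j in range(n)] for i in range(n)]
    for k in range(1, terms):
        # term <- term * At / k, so term = (At)^k / k!
        term = [[sum(term[i][m] * At[m][j] for m in range(n)) / k
                 for j in range(n)] for i in range(n)]
        result = [[result[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    return result

t = 0.7
E = expm(A, t)
closed = -2 * math.exp(t) + (2 - t) * math.exp(2 * t)
print(E[2][0], closed)  # agree to high accuracy
```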
$u_j + iv_j$ and $\bar w_j = u_j - iv_j$, $j = 1, \dots, n$. The set $\{u_1, v_1, \dots, u_n, v_n\}$ is a basis of $\mathbb{R}^{2n}$. Set $P = (u_1, v_1, \dots, u_n, v_n)$. Then $P$ is invertible and $A = S + N$, where
\[
P^{-1}SP = \operatorname{diag}\begin{pmatrix} a_j & -b_j \\ b_j & a_j \end{pmatrix},
\]
and $N^{k+1} = 0$ for some $k \le 2n - 1$. Moreover, $NS = SN$.

Applying this result, we have the following.

Theorem 2.21. Under the assumptions of the above theorem, we have
\[
e^{At} = P \operatorname{diag}\Bigl(e^{a_j t}\begin{pmatrix} \cos b_j t & -\sin b_j t \\ \sin b_j t & \cos b_j t \end{pmatrix}\Bigr)P^{-1}\Bigl[I + Nt + \cdots + \frac{N^k t^k}{k!}\Bigr]. \tag{26}
\]

Exercise. Calculate $e^{At}$ for the following matrices:
\[
\text{(a)} \begin{pmatrix} \lambda & 0 \\ 0 & \mu \end{pmatrix} \qquad
\text{(b)} \begin{pmatrix} \lambda & 0 \\ 1 & \lambda \end{pmatrix} \qquad
\text{(c)} \begin{pmatrix} a & -b \\ b & a \end{pmatrix}
\]

2.8 Two-dimensional Systems

In this section, we analyze two-dimensional linear systems with constant coefficients in great detail. Let $A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$. Then its eigenvalues satisfy $\lambda^2 - p\lambda + q = 0$, and thus
\[
\lambda_{1,2} = \tfrac{1}{2}\bigl[p \pm \sqrt{\Delta}\bigr], \quad \text{where} \quad p = a + d = \operatorname{tr}A = \lambda_1 + \lambda_2, \quad q = ad - bc = \det A = \lambda_1\lambda_2, \quad \Delta = p^2 - 4q.
\]
We first discuss the different cases of the Jordan canonical form $J$ of $A$. We assume that the system $x' = Ax$ has been reduced to its canonical form $y' = Jy$ by the change of variables $x = Py$. For each case of $J$, we solve the system to get $y(t) = (y_1(t), y_2(t))$. The solution curves $y(t)$ for different initial data are then plotted in the $y_1y_2$ coordinate plane as parametric curves; the result is the so-called phase portrait. We can then compare the phase portrait in the $y_1y_2$ plane to that in the $x_1x_2$ plane.

Case I: Distinct eigenvalues of the same sign
(a) $\lambda_2 < \lambda_1 < 0$ ($\Delta > 0$, $p < 0$, $q > 0$)
\[
\begin{pmatrix} y_1' \\ y_2' \end{pmatrix} = \begin{pmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{pmatrix}\begin{pmatrix} y_1 \\ y_2 \end{pmatrix},
\qquad y_1(t) = A e^{\lambda_1 t},\ A = y_1(0); \qquad y_2(t) = B e^{\lambda_2 t},\ B = y_2(0).
\]
$A \neq 0$, $B = 0 \implies$ the $y_1$-axis; $A = 0$, $B \neq 0 \implies$ the $y_2$-axis;
\[
A \neq 0,\ B \neq 0 \implies \frac{y_2(t)}{y_1(t)} = \frac{B}{A}\,e^{(\lambda_2 - \lambda_1)t} \to 0 \ (t \to \infty),
\qquad y_2 = B\Bigl(\frac{y_1}{A}\Bigr)^{\lambda_2/\lambda_1} \quad \text{(parabolas)}.
\]
(b) $0 < \lambda_1 < \lambda_2$ ($\Delta > 0$, $p > 0$, $q > 0$)

The phase portraits for both cases (a) and (b) are depicted in the following figure. The origin $(0, 0)$ is called a stable node and an unstable node, respectively.

[Figure: phase portrait of a node.]

Case II: Equal eigenvalues of simple type
(a) $\lambda_1 = \lambda_2 = p/2 < 0$, $\Delta = 0$, $\operatorname{rank}(\lambda I - A) = 0$
\[
\begin{pmatrix} y_1' \\ y_2' \end{pmatrix} = \begin{pmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{pmatrix}\begin{pmatrix} y_1 \\ y_2 \end{pmatrix},
\qquad y_1(t) = A e^{\lambda_1 t},\ A = y_1(0); \qquad y_2(t) = B e^{\lambda_2 t},\ B = y_2(0).
\]
$A = 0$, $B \neq 0 \implies$ the $y_2$-axis; $A \neq 0$, $B = 0 \implies$ the $y_1$-axis;
$A \neq 0$, $B \neq 0 \implies y_1(t)/y_2(t) = A/B$, a straight line.
(b) $\lambda_1 = \lambda_2 = p/2 > 0$, $\Delta = 0$, $\operatorname{rank}(\lambda I - A) = 0$
The phase portraits for both cases (a) and (b) are depicted in the following figure. The origin $(0, 0)$ is a stable node and an unstable node, respectively.

[Figure: phase portrait of a node with equal eigenvalues of simple type.]

Case III: Equal eigenvalues of non-simple type
(a) $\lambda_1 = \lambda_2 = p/2 < 0$, $\Delta = 0$, $\operatorname{rank}(\lambda I - A) = 1$
\[
\begin{pmatrix} y_1' \\ y_2' \end{pmatrix} = \begin{pmatrix} \lambda_1 & 0 \\ 1 & \lambda_2 \end{pmatrix}\begin{pmatrix} y_1 \\ y_2 \end{pmatrix},
\qquad y_1(t) = A e^{\lambda_1 t},\ A = y_1(0); \qquad y_2(t) = (At + B)\,e^{\lambda_2 t},\ B = y_2(0).
\]
$A = 0$, $B \neq 0 \implies$ the $y_2$-axis; $A \neq 0 \implies y_2(t) = 0$ at $t = -B/A$;
\[
A \neq 0,\ B \neq 0 \implies \frac{y_1(t)}{y_2(t)} = \frac{A}{At + B} \to 0 \ (t \to \infty).
\]
(b) $\lambda_1 = \lambda_2 = p/2 > 0$, $\Delta = 0$, $\operatorname{rank}(\lambda I - A) = 1$

The phase portraits for both cases (a) and (b) are depicted in the following figure. The origin $(0, 0)$ is called a degenerate stable node and a degenerate unstable node, respectively.

[Figure: phase portrait of a degenerate node.]
Case IV: Distinct eigenvalues of opposite sign: $\lambda_1 < 0 < \lambda_2$ ($\Delta > 0$, $q < 0$)
\[
\begin{pmatrix} y_1' \\ y_2' \end{pmatrix} = \begin{pmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{pmatrix}\begin{pmatrix} y_1 \\ y_2 \end{pmatrix},
\qquad y_1(t) = A e^{\lambda_1 t},\ A = y_1(0); \qquad y_2(t) = B e^{\lambda_2 t},\ B = y_2(0).
\]
$A = 0$, $B \neq 0 \implies$ the $y_2$-axis; $A \neq 0$, $B = 0 \implies$ the $y_1$-axis;
\[
A \neq 0,\ B \neq 0 \implies y_2 = B\Bigl(\frac{y_1}{A}\Bigr)^{\lambda_2/\lambda_1} \quad \text{(hyperbola-like curves, since $\lambda_2/\lambda_1 < 0$)}.
\]
The phase portrait is depicted in the following figure. The origin $(0, 0)$ is called a saddle point. The $y_1$-axis is the stable manifold and the $y_2$-axis is the unstable manifold.

[Figure: phase portrait of a saddle.]

Case V: Complex eigenvalues $\lambda = a \pm ib$ ($\Delta < 0$, $p = 2a \neq 0$)
\[
\begin{pmatrix} y_1' \\ y_2' \end{pmatrix} = \begin{pmatrix} a & -b \\ b & a \end{pmatrix}\begin{pmatrix} y_1 \\ y_2 \end{pmatrix},
\]
\[
y_1(t) = e^{at}(c_1 \cos bt - c_2 \sin bt) = c\,e^{at}\cos(bt + \theta),
\]
\[
y_2(t) = e^{at}(c_2 \cos bt + c_1 \sin bt) = c\,e^{at}\sin(bt + \theta),
\]
where $c^2 = c_1^2 + c_2^2$ and $\tan\theta = c_2/c_1$.
(a) $a < 0$: $y_1(t) \to 0$, $y_2(t) \to 0$ as $t \to \infty$.
(b) $a > 0$: $y_1(t) \to 0$, $y_2(t) \to 0$ as $t \to -\infty$.

The phase portraits for both cases (a) and (b) are depicted in the following figure. The origin $(0, 0)$ is called a stable focus (or spiral) and an unstable focus (or spiral), respectively.
[Figure: phase portrait of a focus (spiral).]

Case VI: Pure imaginary eigenvalues $\lambda = \pm ib$ ($\Delta < 0$, $p = 0$)

The phase portrait is depicted in the following figure. All orbits are concentric circles except the origin. The origin $(0, 0)$ is called a center.

[Figure: phase portrait of a center.]

In Cases I through V, $A$ is nonsingular and both eigenvalues have nonzero real part; in these cases, the origin is called a hyperbolic equilibrium. When an eigenvalue is zero or purely imaginary, the origin is non-hyperbolic. The case of a zero eigenvalue needs to be dealt with separately (Exercise).

Let $x' = Ax$ and $y' = By$ be two $n$-dimensional linear systems with principal matrix solutions $X(t)$ and $Y(t)$, respectively. The two systems are called topologically conjugate if there exists a homeomorphism $h : \mathbb{R}^n \to \mathbb{R}^n$ such that $h(X(t)x_0) = Y(t)h(x_0)$ for all $x_0 \in \mathbb{R}^n$ and $t \in \mathbb{R}$. The mapping $h$ maps trajectories of $x' = Ax$ to those of $y' = By$. Two topologically conjugate differential equations are considered qualitatively the same. For instance, a linear system with constant coefficients $x' = Ax$ is topologically conjugate to $y' = Jy$, where $J = P^{-1}AP$ is the Jordan canonical form, and the mapping $h$ is given by $y = h(x) = P^{-1}x$.
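The case analysis of Section 2.8 can be summarized in a small routine that classifies the origin from $p = \operatorname{tr}A$, $q = \det A$ and $\Delta = p^2 - 4q$. This is only a sketch: in the borderline case $\Delta = 0$ it cannot distinguish simple type (Case II) from non-simple type (Case III), since that requires the rank of $\lambda I - A$, not just $p$ and $q$.

```python
# Sketch: classify the origin of x' = Ax for a 2x2 matrix A = [[a, b], [c, d]]
# using the trace-determinant data p = tr A, q = det A, D = p^2 - 4q of Cases I-VI.

def classify(a, b, c, d):
    p, q = a + d, a * d - b * c
    disc = p * p - 4 * q
    if q < 0:
        return "saddle"                              # Case IV
    if q == 0:
        return "non-hyperbolic (zero eigenvalue)"    # needs separate treatment
    if disc < 0:
        if p == 0:
            return "center"                          # Case VI
        return "stable focus" if p < 0 else "unstable focus"  # Case V
    kind = "stable node" if p < 0 else "unstable node"
    if disc > 0:
        return kind                                  # Case I
    # disc == 0: Case II or III; distinguishing them needs rank(lambda*I - A)
    return "degenerate or star " + kind

print(classify(0.0, 1.0, -1.0, 0.0))   # center
print(classify(3.0, 1.0, -1.0, 1.0))   # degenerate or star unstable node
```

For the matrix of Example (1) above ($\Delta = 0$, $\operatorname{rank}(2I - A) = 1$), the finer classification of Case III would give a degenerate unstable node.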
2.9 Periodic Linear Systems: the Floquet Theory

In this section, we investigate linear systems
\[
x'(t) = A(t)\,x(t), \tag{27}
\]
where
\[
A(t + T) = A(t), \qquad t \in \mathbb{R}. \tag{28}
\]
First of all, we establish by an example a major difference between periodic linear systems and systems with constant coefficients.

Example (Markus and Yamabe). Let
\[
A(t) = \begin{pmatrix} -1 + \frac{3}{2}\cos^2 t & 1 - \frac{3}{2}\cos t \sin t \\[2pt] -1 - \frac{3}{2}\sin t \cos t & -1 + \frac{3}{2}\sin^2 t \end{pmatrix}.
\]
The eigenvalues of $A(t)$ are $\lambda_{1,2}(t) = \frac{1}{4}\bigl[-1 \pm i\sqrt{7}\bigr]$, so $\operatorname{Re}\lambda_{1,2}(t) = -\frac{1}{4} < 0$ for all $t \in \mathbb{R}$. Based on our experience with systems of constant coefficients in the previous sections, we might mistakenly expect that solutions to $x' = A(t)x$ decay exponentially as $t \to \infty$. However, one can verify that
\[
x(t) = e^{t/2}\begin{pmatrix} -\cos t \\ \sin t \end{pmatrix}
\]
is a solution to this linear system, and this solution is unbounded as $t \to \infty$. The lesson from this example: for linear systems with time-dependent coefficients, the eigenvalues of $A(t)$ no longer give relevant information on the asymptotic behaviour of solutions.

Proposition 2.22. If $C$ is an $n \times n$ matrix (real or complex) with $\det(C) \neq 0$, then there exists a matrix $B$ such that $C = e^B$. (We may define $B = \log C$.)

Proof. If $C = e^B$ and $P$ is nonsingular, then $P^{-1}CP = e^{P^{-1}BP}$. We may thus assume that $C$ is in Jordan canonical form, $C = \operatorname{diag}(J_1, \dots, J_k)$. We need only look at two types of Jordan blocks.

1. $J = \operatorname{diag}(\lambda_1, \dots, \lambda_q)$. Choose $B = \log J = \operatorname{diag}(\log\lambda_1, \dots, \log\lambda_q)$. (Why is $\log\lambda_i$ defined? Is it a real number? Is it unique?)

2. $J = \lambda I + N = \lambda\bigl(I + \frac{N}{\lambda}\bigr)$, $N^r = 0$. Choose
\[
B = \log J = (\log\lambda)\,I - \sum_{j=1}^{r-1} \frac{(-N)^j}{j\,\lambda^j}.
\]
The sum in the above relation comes from the power series of $\log(1 + x)$. In both cases one can verify that $e^B = J$ (Exercise). In the case $C = \operatorname{diag}(J_1, \dots, J_k)$, we can choose $B = \operatorname{diag}(\log J_1, \dots, \log J_k)$.
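The Markus and Yamabe example can be verified numerically. The sketch below checks that $x(t) = e^{t/2}(-\cos t, \sin t)^T$ satisfies $x' = A(t)x$ at a few sample times, even though both eigenvalues of $A(t)$ have real part $-\frac{1}{4}$ for every $t$.

```python
import math

# Sketch: verify the Markus-Yamabe example: although Re lambda(t) = -1/4
# for every t, x(t) = e^{t/2}(-cos t, sin t)^T solves x' = A(t)x and grows.

def A(t):
    c, s = math.cos(t), math.sin(t)
    return [[-1 + 1.5 * c * c, 1 - 1.5 * c * s],
            [-1 - 1.5 * s * c, -1 + 1.5 * s * s]]

def x(t):
    return [-math.exp(t / 2) * math.cos(t), math.exp(t / 2) * math.sin(t)]

def x_prime(t):
    # derivative of x(t), computed by hand
    e = math.exp(t / 2)
    return [e * (math.sin(t) - 0.5 * math.cos(t)),
            e * (math.cos(t) + 0.5 * math.sin(t))]

for t in [0.0, 0.7, 2.3]:
    Ax = [sum(A(t)[i][j] * x(t)[j] for j in range(2)) for i in range(2)]
    print(max(abs(Ax[i] - x_prime(t)[i]) for i in range(2)))  # ~0 each time
```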
Exercise. For any real matrix $D$ with $\det D \neq 0$, there exists a real matrix $B$ such that $e^B = D^2$.

Exercise. If $\lambda_1, \dots, \lambda_n$ are the eigenvalues of $A$, then $e^{\lambda_1}, \dots, e^{\lambda_n}$ are the eigenvalues of $e^A$. If $\lambda_i \neq 0$ for all $i$, then the $\log\lambda_i$ are the eigenvalues of $\log A$.

The following result is the core of this section.

Theorem 2.23 (Floquet). Every fundamental matrix of (27) has the form
\[
X(t) = P(t)\,e^{Bt}, \tag{29}
\]
where $P(t)$ and $B$ are $n \times n$ matrices, $P(t + T) = P(t)$ for all $t$, and $B$ is a constant matrix.

Proof. If $X(t)$ is a fm, so is $X(t + T)$, due to the periodicity of $A(t)$. Thus there exists a constant nonsingular matrix $C$ such that $X(t + T) = X(t)\,C$. Then, by Proposition 2.22, there exists $B$ such that $C = e^{BT}$. Set $P(t) = X(t)e^{-Bt}$. We only need to verify that $P(t)$ is $T$-periodic. In fact,
\[
P(t + T) = X(t + T)\,e^{-B(t+T)} = X(t)\,e^{BT}e^{-BT}e^{-Bt} = X(t)e^{-Bt} = P(t). \qquad \square
\]

A linear system $x' = B(t)x$ is said to be reducible if there exists a linear transformation $x = P(t)y$ that transforms the system into a linear system with constant coefficients $y' = B_1 y$. One implication of the Floquet theorem is that any periodic linear system can be transformed into a linear system with constant coefficients $y' = By$ by a periodic linear transformation $x = P(t)y$.

Corollary 2.24. There exists a nonsingular periodic change of variables $x = P(t)y$ that transforms (27) into a system with constant coefficients.

Proof. Let $P(t)$ and $B$ be as in (29), and let $x = P(t)y$ in (27). Then $y$ satisfies $y' = P^{-1}(AP - P')y$. Since $P = Xe^{-Bt}$, we have $P' = X'e^{-Bt} - Xe^{-Bt}B = AP - PB$, so $y' = By$, and this proves the result. $\square$

Exercise. Show that $B$ in the Floquet representation (29) can always be chosen to be a real matrix if $A(t)$ is real, provided the periodicity requirement on $P$ is relaxed to $P(t + 2T) = P(t)$.

Definition. A monodromy matrix of (27) is a matrix $C$ associated with a fm $X(t)$ through $X(t + T) = X(t)C$. If we choose $X(t)$ such that $X(0) = I$, then $X(T)$ is a monodromy matrix. The eigenvalues $\rho$ of a monodromy matrix are called the Floquet multipliers of (27). Any $\lambda$ such that $\rho = e^{\lambda T}$ is called a Floquet exponent.
We can choose the exponents to be eigenvalues of $B$, for any matrix $B$ such that $C = e^{BT}$. The real parts of the Floquet exponents are uniquely defined, though the exponents themselves are not (Why?). The Floquet multipliers do not depend on the particular monodromy matrix chosen (Why?).
Corollary 2.25. Let $C$ be a monodromy matrix. Then
\[
\det C = e^{\int_0^T \operatorname{tr}A(s)\,ds}.
\]
Proof. This follows from the Liouville formula (5). (Exercise) $\square$

Theorem 2.26. $\lambda$ is a Floquet exponent of (27) if and only if there is a nontrivial solution of (27) of the form $p(t)e^{\lambda t}$, where $p(t + T) = p(t)$. In particular, there is a nontrivial periodic solution of (27) of period $T$ (or $2T$) if and only if there is a Floquet multiplier $1$ (or $-1$).

Proof. If $p(t)e^{\lambda t}$, with $p(t + T) = p(t) \not\equiv 0$, satisfies (27), then there is $x_0 \neq 0$ such that $p(t)e^{\lambda t} = P(t)e^{Bt}x_0\ (= X(t)x_0)$. The periodicity of $p$ then gives $P(t)e^{Bt}\bigl[e^{BT} - e^{\lambda T}I\bigr]x_0 = 0$, and thus $\det(e^{BT} - e^{\lambda T}I) = 0$. Conversely, if $\det(e^{BT} - e^{\lambda T}I) = 0$, choose $\lambda$ to be an eigenvalue of $B$ and $x_0 \neq 0$ a corresponding eigenvector, so that $\bigl[e^{BT} - e^{\lambda T}I\bigr]x_0 = 0$. Then $e^{Bt}x_0 = e^{\lambda t}x_0$ for all $t$, and $P(t)e^{Bt}x_0 = P(t)x_0\,e^{\lambda t} = p(t)e^{\lambda t}$ is the desired solution. The last assertion is obvious (Exercise). $\square$

The Floquet representation (29) provides a way to characterize the asymptotic behaviour of solutions to a periodic linear system using the Floquet multipliers (or, equivalently, the Floquet exponents).

Theorem 2.27. (1) A necessary and sufficient condition that all solutions $x(t)$ of (27) satisfy $x(t) \to 0$ as $t \to \infty$ is that all Floquet multipliers satisfy $|\rho_i| < 1$, $i = 1, \dots, n$ (or, equivalently, all Floquet exponents satisfy $\operatorname{Re}\mu_i < 0$, $i = 1, \dots, n$).
(2) A necessary and sufficient condition that all solutions $x(t)$ of (27) satisfy $\|x(t)\| \le M$ for all $t \ge 0$ is that all Floquet multipliers satisfy $|\rho_i| \le 1$, $i = 1, \dots, n$, and every $\rho_i$ with $|\rho_i| = 1$ is of simple type (or, equivalently, all Floquet exponents satisfy $\operatorname{Re}\mu_i \le 0$, $i = 1, \dots, n$, and every $\mu_i$ with $\operatorname{Re}\mu_i = 0$ is of simple type).

Proof. The proof is parallel to that of the case of constant coefficients (Exercise). $\square$

The problem of determining the Floquet multipliers (or exponents) is an extremely difficult one, especially when $n > 2$. For a recent development in this direction, see J. S. Muldowney, Rocky Mountain J. Math. 20 (1990), 857.
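Floquet multipliers can be computed numerically as the eigenvalues of the monodromy matrix $X(T)$ with $X(0) = I$. The sketch below does this for the Markus and Yamabe example above (whose coefficients have period $T = \pi$), integrating $X' = A(t)X$ with a hand-rolled classical RK4 scheme; one multiplier has modulus $e^{\pi/2} > 1$, consistent with the unbounded solution found there, and their product agrees with Corollary 2.25.

```python
import math

# Sketch: Floquet multipliers of the Markus-Yamabe system, obtained by
# integrating X' = A(t)X, X(0) = I, over one period T = pi with classical RK4,
# then taking the eigenvalues of the monodromy matrix C = X(T).

def A(t):
    c, s = math.cos(t), math.sin(t)
    return [[-1 + 1.5 * c * c, 1 - 1.5 * c * s],
            [-1 - 1.5 * s * c, -1 + 1.5 * s * s]]

def rhs(t, X):
    a = A(t)
    return [[sum(a[i][k] * X[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def add(X, Y, h):
    return [[X[i][j] + h * Y[i][j] for j in range(2)] for i in range(2)]

X = [[1.0, 0.0], [0.0, 1.0]]
T, n = math.pi, 2000
h = T / n
t = 0.0
for _ in range(n):
    k1 = rhs(t, X)
    k2 = rhs(t + h / 2, add(X, k1, h / 2))
    k3 = rhs(t + h / 2, add(X, k2, h / 2))
    k4 = rhs(t + h, add(X, k3, h))
    X = [[X[i][j] + h / 6 * (k1[i][j] + 2 * k2[i][j] + 2 * k3[i][j] + k4[i][j])
          for j in range(2)] for i in range(2)]
    t += h

tr = X[0][0] + X[1][1]
det = X[0][0] * X[1][1] - X[0][1] * X[1][0]
disc = tr * tr - 4 * det
rho1 = (tr + math.copysign(math.sqrt(abs(disc)), tr)) / 2  # eigenvalues are real here
rho2 = det / rho1
print(rho1, rho2)  # ~ -e^{pi/2} and -e^{-pi}: a multiplier of modulus > 1
```

The multiplier $-e^{\pi/2}$ corresponds to the explicit solution $e^{t/2}(-\cos t, \sin t)^T$, which satisfies $x(t + \pi) = -e^{\pi/2}x(t)$, and $\det C = \rho_1\rho_2 = e^{-\pi/2} = e^{\int_0^\pi \operatorname{tr}A(s)\,ds}$ since $\operatorname{tr}A(t) \equiv -\frac{1}{2}$.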