Chapter III
Stability of Linear Systems

1. Stability and state transition matrix
2. Time-invariant systems
3. Time-varying (non-autonomous) systems
In this chapter, we study the Lyapunov stability of systems described by linear vector differential equations. The results presented here not only enable us to obtain necessary and sufficient conditions for the stability of linear systems, but also pave the way to deriving Lyapunov's linearization method, which is presented in the next chapter.

1 Stability and state transition matrix

Consider a system described by the linear vector differential equation

    ẋ(t) = A(t)x(t),  t ≥ 0    (1)

The system (1) is autonomous if A(·) is constant as a function of time; otherwise it is non-autonomous. It is clear that 0 is always an equilibrium of the system (1). Further, 0 is an isolated equilibrium if A(t) is nonsingular for some t ≥ 0.
The general solution of (1) is given by

    x(t) = Φ(t, t₀)x(t₀)    (2)

where Φ(·,·) is the state transition matrix associated with A(·); it is the unique solution of

    (d/dt)Φ(t, t₀) = A(t)Φ(t, t₀),  t ≥ t₀ ≥ 0    (3)
    Φ(t₀, t₀) = I,  t₀ ≥ 0    (4)

With the aid of this explicit characterization of the solutions of (1), it is possible to derive some useful conditions for the stability of the equilibrium 0. Since these conditions involve the state transition matrix Φ, they are not of much computational value, because in general it is impossible to derive an analytical expression for Φ. Nevertheless, they are of conceptual value, enabling one to understand the mechanisms of stability and instability in linear systems.
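Although no analytical expression for Φ is available in general, equations (3)–(4) can always be integrated numerically. The sketch below (an illustration, not part of the text's own toolset; the matrix A and the step count are arbitrary choices) integrates the matrix differential equation (3) with a classical Runge–Kutta scheme and, for a constant A, compares the result against the closed form Φ(t, t₀) = exp[A(t − t₀)].

```python
import numpy as np

def transition_matrix(A_of_t, t0, t, steps=2000):
    """Integrate (d/dt) Phi(t, t0) = A(t) Phi(t, t0), Phi(t0, t0) = I,
    with the classical fourth-order Runge-Kutta scheme."""
    n = A_of_t(t0).shape[0]
    Phi = np.eye(n)
    h = (t - t0) / steps
    s = t0
    for _ in range(steps):
        k1 = A_of_t(s) @ Phi
        k2 = A_of_t(s + h / 2) @ (Phi + h / 2 * k1)
        k3 = A_of_t(s + h / 2) @ (Phi + h / 2 * k2)
        k4 = A_of_t(s + h) @ (Phi + h * k3)
        Phi = Phi + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
        s += h
    return Phi

# Constant A, so Phi(t, t0) should equal exp(A (t - t0)).
A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # eigenvalues -1, -2
Phi = transition_matrix(lambda t: A, 0.0, 1.0)

# Closed form via the eigendecomposition of A (A is diagonalizable here).
w, V = np.linalg.eig(A)
Phi_exact = (V @ np.diag(np.exp(w)) @ np.linalg.inv(V)).real
```

For a genuinely time-varying A(·), the same routine applies unchanged; only the closed-form comparison is lost.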
Theorem 1.1. The equilibrium 0 is stable if and only if, for each t₀ ≥ 0,

    sup_{t ≥ t₀} ‖Φ(t, t₀)‖ᵢ =: m(t₀) < ∞    (5)

where ‖·‖ᵢ denotes the induced norm of a matrix.

Proof: Assume that (5) is true, and let ɛ > 0, t₀ ≥ 0 be specified. If we define δ(ɛ, t₀) = ɛ/m(t₀), then

    ‖x(t₀)‖ < δ  ⟹  ‖x(t)‖ ≤ ‖Φ(t, t₀)‖ᵢ ‖x(t₀)‖ < m(t₀)δ = ɛ    (6)

This shows that the equilibrium 0 is stable.

Conversely, assume that (5) is false, so that ‖Φ(t, t₀)‖ᵢ is an unbounded function of t for some t₀ ≥ 0. To show that 0 is an unstable equilibrium, let ɛ > 0 be any positive number, and let δ be an arbitrary positive number. It is shown that one can choose an x(t₀) in the ball B_δ such that the resulting solution x(t) satisfies ‖x(t)‖ ≥ ɛ for some t ≥ t₀.
Select a δ₁ in the open interval (0, δ). Since ‖Φ(t, t₀)‖ᵢ is unbounded as a function of t, there exists a t ≥ t₀ such that

    ‖Φ(t, t₀)‖ᵢ > ɛ/δ₁    (7)

Next, select a vector v of norm one such that

    ‖Φ(t, t₀)v‖ = ‖Φ(t, t₀)‖ᵢ    (8)

This is possible in view of the definition of the induced norm. Finally, let x(t₀) = δ₁v. Then x(t₀) ∈ B_δ. Moreover,

    ‖x(t)‖ = ‖Φ(t, t₀)x(t₀)‖ = δ₁‖Φ(t, t₀)v‖ = δ₁‖Φ(t, t₀)‖ᵢ > ɛ    (9)

Hence, the equilibrium 0 is unstable.

Remark 1.1. Note that, in the case of linear systems, instability does indeed imply that some solution trajectories actually blow up. This is in contrast to the case of nonlinear systems, where instability of 0 can be accompanied by the boundedness of all solutions.
Necessary and sufficient conditions for uniform stability are given next:

Theorem 1.2. The equilibrium 0 is uniformly stable if and only if

    sup_{t₀ ≥ 0} sup_{t ≥ t₀} ‖Φ(t, t₀)‖ᵢ =: m₀ < ∞    (10)

Proof: Suppose m₀ is finite. Then for any ɛ > 0 there exists a δ = ɛ/m₀, independent of t₀, such that

    ‖x₀‖ < δ, t₀ ≥ 0  ⟹  ‖x(t, t₀, x₀)‖ < ɛ,  ∀t ≥ t₀

Hence 0 is uniformly stable.

Conversely, suppose m(t₀) is unbounded as a function of t₀. Then at least one component of Φ(·,·), say the ij-th component, has the property that

    sup_{t ≥ t₀} |φᵢⱼ(t, t₀)| is unbounded as a function of t₀    (11)

Let x₀ = eⱼ, the j-th elementary vector. Then (11) implies that the quantity ‖x(t)‖/‖x₀‖ = ‖Φ(t, t₀)x₀‖/‖x₀‖ cannot be bounded independently of t₀. Hence, 0 is not uniformly stable.
The next theorem characterizes uniform asymptotic stability.

Theorem 1.3. The equilibrium 0 is (globally) uniformly asymptotically stable if and only if

    sup_{t₀ ≥ 0} sup_{t ≥ t₀} ‖Φ(t, t₀)‖ᵢ =: m₀ < ∞    (12)
    ‖Φ(t + t₀, t₀)‖ᵢ → 0 as t → ∞, uniformly in t₀    (13)

Proof: Assume (12) holds; then 0 is uniformly stable. Also, if (13) holds, then the ratio ‖Φ(t, t₀)x₀‖/‖x₀‖ approaches 0 uniformly in t₀, so 0 is uniformly attractive. Hence, 0 is uniformly asymptotically stable. The converse is left as an exercise.
The next theorem shows that, for linear systems, uniform asymptotic stability is equivalent to exponential stability.

Theorem 1.4. The equilibrium 0 is uniformly asymptotically stable if and only if there exist constants m, λ > 0 such that

    ‖Φ(t, t₀)‖ᵢ ≤ m exp[−λ(t − t₀)],  ∀t ≥ t₀    (14)

Proof: Assume (14) is satisfied. Then clearly (12) and (13) are also true, whence 0 is uniformly asymptotically stable by Theorem 1.3.

Conversely, suppose (12) and (13) are true. Then there exist finite constants µ and T such that

    ‖Φ(t, t₀)‖ᵢ ≤ µ,  ∀t ≥ t₀ ≥ 0    (15)
    ‖Φ(t + t₀, t₀)‖ᵢ ≤ 1/2,  ∀t ≥ T, ∀t₀ ≥ 0    (16)

In particular, (16) implies that

    ‖Φ(T + t₀, t₀)‖ᵢ ≤ 1/2,  ∀t₀ ≥ 0    (17)

Now, given any t₀ and any t ≥ t₀, pick an integer k such that
t₀ + kT ≤ t ≤ t₀ + (k + 1)T. Then

    Φ(t, t₀) = Φ(t, t₀ + kT)Φ(t₀ + kT, t₀ + (k − 1)T) ··· Φ(t₀ + T, t₀)    (18)

Hence

    ‖Φ(t, t₀)‖ᵢ ≤ ‖Φ(t, t₀ + kT)‖ᵢ ∏_{j=1}^{k} ‖Φ(t₀ + jT, t₀ + (j − 1)T)‖ᵢ ≤ µ2⁻ᵏ ≤ 2µ2^{−(t−t₀)/T}

Hence (14) is satisfied if we define

    m = 2µ and λ = (log 2)/T

This completes the proof.

This section contains several results that relate the stability properties of a linear system to its state transition matrix. Since these results require an explicit expression of the state transition matrix, they are not of much use for testing purposes. Nevertheless, they do provide some insight.
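For a time-invariant example, the bound (14) can be exhibited explicitly: if A = VDV⁻¹ is diagonalizable and Hurwitz, then ‖Φ(t, 0)‖₂ = ‖V e^{Dt} V⁻¹‖₂ ≤ κ(V) e^{−λt}, with λ = −maxᵢ Re λᵢ and κ(V) the condition number of V. The sketch below (the matrix is an arbitrary Hurwitz choice for illustration) verifies this bound at sampled times.

```python
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # Hurwitz: eigenvalues -1, -2
w, V = np.linalg.eig(A)
Vinv = np.linalg.inv(V)

lam = -max(w.real)                         # decay rate lambda in (14)
m = np.linalg.cond(V)                      # m = ||V||_2 ||V^{-1}||_2

def Phi(t):
    # state transition matrix exp(A t), computed via the eigendecomposition
    return (V @ np.diag(np.exp(w * t)) @ Vinv).real

# Check ||Phi(t)||_2 <= m exp(-lam t) on a grid of sample times.
ts = np.linspace(0.0, 10.0, 101)
bound_holds = all(
    np.linalg.norm(Phi(t), 2) <= m * np.exp(-lam * t) + 1e-9 for t in ts
)
```

The constants (m, λ) produced this way are generally conservative; (14) only asserts that *some* such pair exists.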
2 Time-invariant systems

Throughout this section, attention is restricted to linear time-invariant systems of the form

    ẋ(t) = Ax(t)    (19)

In this special case, Lyapunov theory is very complete, as we shall see.

Theorem 2.1. (1) The equilibrium 0 of (19) is exponentially stable if and only if all the eigenvalues of A have negative real parts. (2) The equilibrium 0 of (19) is stable if and only if all the eigenvalues of A have nonpositive real parts and, in addition, every eigenvalue with zero real part is a simple zero of the minimal polynomial of A.

Proof: The state transition matrix Φ(t, t₀) of the system (19) is given by

    Φ(t, t₀) = exp[A(t − t₀)]    (20)
where exp(·) is the matrix exponential. Furthermore, exp(At) can be expressed as

    exp(At) = Σ_{i=1}^{r} Σ_{j=1}^{m_i} p_{ij}(A) t^{j−1} exp(λ_i t)    (21)

where r is the number of distinct eigenvalues of A; λ₁, ..., λ_r are the distinct eigenvalues; m_i is the multiplicity of the eigenvalue λ_i; and the p_{ij} are interpolating polynomials. The stated conditions for stability and for asymptotic stability now follow readily from Theorems 1.2 and 1.3.

Example 2.1.
1. A = [ 1  1
         0  2 ]
The eigenvalues are λ₁ = 1 and λ₂ = 2. Hence the equilibrium 0 is unstable.
2. A = [ −1   1
          0  −2 ]
The eigenvalues are λ₁ = −1 and λ₂ = −2. Hence the equilibrium 0 is exponentially stable.

3. A = [ 1   1
         0  −2 ]
The eigenvalues are λ₁ = 1 and λ₂ = −2. Hence the equilibrium 0 is unstable.

4. A = [  0  1
         −1  0 ]
The eigenvalues are λ₁ = i and λ₂ = −i. Hence the equilibrium 0 is stable.
5. A = [  1  1
         −1  1 ]
The eigenvalues are λ₁ = 1 + i and λ₂ = 1 − i. Hence the equilibrium 0 is unstable.

Thus, in the case of linear time-invariant systems of the form (19), the stability status of the equilibrium 0 can be ascertained by studying the eigenvalues of A. However, it is possible to formulate an entirely different approach to the problem, based on the use of quadratic Lyapunov functions. This theory is of interest in itself, and is also useful in studying nonlinear systems via linearization methods.
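The five verdicts of Example 2.1 can be reproduced mechanically from the eigenvalues. The sketch below is illustrative: the sign conventions of the matrices follow the stability conclusions stated in the example, and the zero-real-part case is handled only for simple eigenvalues (as in case 4), in line with Theorem 2.1.

```python
import numpy as np

def classify(A, tol=1e-9):
    """Classify the equilibrium 0 of xdot = A x from the eigenvalues of A.
    Assumes eigenvalues on the imaginary axis, if any, are simple."""
    re = np.linalg.eigvals(A).real
    if np.all(re < -tol):
        return "exponentially stable"
    if np.any(re > tol):
        return "unstable"
    return "stable"

examples = {
    "1": np.array([[1.0, 1.0], [0.0, 2.0]]),
    "2": np.array([[-1.0, 1.0], [0.0, -2.0]]),
    "3": np.array([[1.0, 1.0], [0.0, -2.0]]),
    "4": np.array([[0.0, 1.0], [-1.0, 0.0]]),
    "5": np.array([[1.0, 1.0], [-1.0, 1.0]]),
}
verdicts = {k: classify(A) for k, A in examples.items()}
```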
Given the system (19), the idea is to choose a Lyapunov function candidate of the form

    V(x) = xᵀPx    (22)

where P is a real symmetric matrix. Then V̇ is given by

    V̇(x) = ẋᵀPx + xᵀPẋ = −xᵀQx    (23)

where

    AᵀP + PA = −Q    (24)

Equation (24) is commonly known as the Lyapunov matrix equation. By means of this equation, it is possible to study the stability properties of the equilibrium 0 of the system (19). For example, if a pair of matrices (P, Q) satisfying (24) can be found such that both P and Q are positive definite, then V is positive definite and radially unbounded, and V̇ is negative definite. Hence, by Theorem 3.4 in Chapter 3, 0 is globally exponentially stable. On the other hand, if a pair (P, Q) can be found such that Q is positive definite and P has at least one nonpositive eigenvalue, then V̇ is negative definite, while V assumes nonpositive values arbitrarily close to the origin. Hence, by the instability theorem, the origin is unstable.
There are two ways in which (24) can be tackled:
1) Given a matrix A, one can pick a particular matrix P and study the properties of the matrix Q resulting from (24).
2) Given a matrix A, one can pick a particular matrix Q and study the matrix P resulting from (24).
One difficulty with selecting Q and trying to find the corresponding P is that, depending on the matrix A, (24) may not have a unique solution for P. The next result gives necessary and sufficient conditions under which (24) has a unique solution corresponding to each Q.

Lemma 2.1. Let A ∈ Rⁿˣⁿ, and let {λ₁, λ₂, ..., λₙ} denote the (not necessarily distinct) eigenvalues of A. Then (24) has a unique solution for P corresponding to each Q ∈ Rⁿˣⁿ if and only if

    λᵢ + λⱼ ≠ 0,  ∀i, j    (25)

On the basis of Lemma 2.1, one can state the following corollary:

Corollary 2.1. If, for some choice of Q ∈ Rⁿˣⁿ, Equation (24) does not have a unique solution P, then the origin is not asymptotically stable.

Proof: If all the eigenvalues of A have negative real parts, then (25) is satisfied, and (24) has a unique solution for every Q. Hence, if (24) fails to have a unique solution for some Q, then A must have an eigenvalue with nonnegative real part, and the origin is not asymptotically stable.
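Condition (25) has a convenient computational form. Vectorizing (24), the linear map P ↦ AᵀP + PA is represented by the matrix I ⊗ Aᵀ + Aᵀ ⊗ I, whose eigenvalues are exactly the sums λᵢ + λⱼ; the map is invertible precisely when (25) holds. The sketch below (with arbitrary illustrative matrices) shows the operator becoming singular when (25) fails.

```python
import numpy as np

def lyap_operator(A):
    # Matrix of the linear map P -> A^T P + P A acting on vec(P).
    n = A.shape[0]
    I = np.eye(n)
    return np.kron(I, A.T) + np.kron(A.T, I)

# A1 has eigenvalues +i and -i, so lambda_1 + lambda_2 = 0 and (25) fails:
# the operator is singular and (24) cannot have a unique solution.
A1 = np.array([[0.0, 1.0], [-1.0, 0.0]])
singular = np.linalg.matrix_rank(lyap_operator(A1)) < 4

# A2 is Hurwitz (eigenvalues -1, -2); (25) holds and the operator is invertible.
A2 = np.array([[-1.0, 1.0], [0.0, -2.0]])
invertible = np.linalg.matrix_rank(lyap_operator(A2)) == 4
```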
The following lemma provides an alternate characterization of the solutions of (24). Note that a matrix A is called Hurwitz if all of its eigenvalues have negative real parts.

Lemma 2.2. Let A be a Hurwitz matrix. Then, for each Q ∈ Rⁿˣⁿ, the corresponding unique solution of (24) is given by

    P = ∫₀^∞ exp(Aᵀt) Q exp(At) dt    (26)

Proof: If A is Hurwitz, then the condition (25) is satisfied, and (24) has a unique solution for P corresponding to each Q ∈ Rⁿˣⁿ. Moreover, if A is Hurwitz, then the integral on the right side of (26) is well-defined. Let M denote this integral. It is now shown that

    AᵀM + MA = −Q    (27)
By the uniqueness of solutions to (24), it then follows that P = M. To prove (27), observe that

    AᵀM + MA = ∫₀^∞ [Aᵀ exp(Aᵀt) Q exp(At) + exp(Aᵀt) Q exp(At) A] dt
             = ∫₀^∞ (d/dt)[exp(Aᵀt) Q exp(At)] dt
             = [exp(Aᵀt) Q exp(At)]₀^∞
             = −Q

since exp(At) → 0 as t → ∞. This completes the proof.

Remark 2.1. Note that the above lemma also provides a convenient way to compute infinite integrals of the form (26).
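As an illustration of Remark 2.1 (with an arbitrary Hurwitz A chosen for the demonstration), the sketch below computes ∫₀^∞ exp(Aᵀt) Q exp(At) dt in two ways: by solving (24) through its vectorized (Kronecker) form, and by direct numerical quadrature of the integrand, and checks that the two agree.

```python
import numpy as np

A = np.array([[-1.0, 1.0], [0.0, -2.0]])   # Hurwitz
Q = np.eye(2)
n = A.shape[0]

# Route 1: solve A^T P + P A = -Q via the vectorized form.
K = np.kron(np.eye(n), A.T) + np.kron(A.T, np.eye(n))
P = np.linalg.solve(K, -Q.flatten()).reshape(n, n)

# Route 2: trapezoidal quadrature of the integrand on [0, T], with T
# large enough that the exponentially decaying tail is negligible.
w, V = np.linalg.eig(A)
Vinv = np.linalg.inv(V)

def integrand(t):
    E = (V @ np.diag(np.exp(w * t)) @ Vinv).real   # exp(A t)
    return E.T @ Q @ E

ts = np.linspace(0.0, 40.0, 4001)
h = ts[1] - ts[0]
vals = [integrand(t) for t in ts]
P_quad = sum(0.5 * h * (a + b) for a, b in zip(vals[:-1], vals[1:]))
```

Solving the Lyapunov equation is both exact and far cheaper than the quadrature, which is the point of the remark.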
We can now state one of the main results on the Lyapunov matrix equation:

Theorem 2.2. Given a matrix A ∈ Rⁿˣⁿ, the following three statements are equivalent:
(1) A is a Hurwitz matrix.
(2) There exists some positive definite matrix Q ∈ Rⁿˣⁿ such that (24) has a corresponding unique positive definite solution P.
(3) For every positive definite matrix Q ∈ Rⁿˣⁿ, (24) has a unique positive definite solution P.

Proof: (3) ⟹ (2): Obvious.

(2) ⟹ (1): Suppose (2) is true for some particular Q. Then we can apply Theorem 3.3 in Chapter 3, with the Lyapunov function candidate V(x) = xᵀPx. Then V̇(x) = −xᵀQx, and one can conclude that 0 is asymptotically stable. By Theorem 2.1, this implies that A is Hurwitz.
(1) ⟹ (3): Suppose A is Hurwitz, and let Q be positive definite but otherwise arbitrary. By Lemma 2.2, Equation (24) has a corresponding unique solution P given by (26). It only remains to show that P is positive definite, i.e. that

    xᵀPx > 0,  ∀x ≠ 0    (28)

For this purpose, factor Q in the form MᵀM, where M is nonsingular. With Q = MᵀM, (26) becomes

    P = ∫₀^∞ exp(Aᵀt) MᵀM exp(At) dt    (29)

Thus, for any x ∈ Rⁿ,

    xᵀPx = ∫₀^∞ xᵀ exp(Aᵀt) MᵀM exp(At) x dt = ∫₀^∞ ‖M exp(At) x‖₂² dt ≥ 0    (30)

where ‖·‖₂ denotes the Euclidean norm. Next, if xᵀPx = 0, then

    M exp(At) x = 0,  ∀t ≥ 0    (31)

Substituting t = 0 gives Mx = 0, which implies x = 0. Hence, P is positive definite.

Remark 2.2. Theorem 2.2 is very important in that it enables one to determine the stability status of the equilibrium 0 in the following manner: Given A ∈ Rⁿˣⁿ, pick Q ∈ Rⁿˣⁿ to be any positive definite matrix (a logical choice is the identity matrix), and attempt to solve (24) for P.
(a) If (24) has no solution or a non-unique solution, then 0 is not asymptotically stable.
(b) If P is unique but not positive definite, then once again 0 is not asymptotically stable.
(c) If P is unique and positive definite, then 0 is asymptotically stable.
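The procedure of Remark 2.2 can be sketched as a small routine. This is an illustration under stated assumptions: the Lyapunov equation is solved through its Kronecker (vectorized) form with Q = I, positive definiteness is checked through eigenvalues, and the three test matrices are arbitrary examples of cases (c), (b), and (a) respectively.

```python
import numpy as np

def lyapunov_test(A, tol=1e-9):
    """Stability test of Remark 2.2 with Q = I: attempt to solve
    A^T P + P A = -I for P and inspect the result."""
    n = A.shape[0]
    K = np.kron(np.eye(n), A.T) + np.kron(A.T, np.eye(n))
    if np.linalg.matrix_rank(K) < n * n:
        # case (a): no unique solution
        return "not asymptotically stable", None
    P = np.linalg.solve(K, -np.eye(n).flatten()).reshape(n, n)
    if np.all(np.linalg.eigvalsh((P + P.T) / 2) > tol):
        # case (c): P unique and positive definite
        return "asymptotically stable", P
    # case (b): P unique but not positive definite
    return "not asymptotically stable", P

verdict1, _ = lyapunov_test(np.array([[-1.0, 1.0], [0.0, -2.0]]))  # Hurwitz
verdict2, _ = lyapunov_test(np.array([[1.0, 1.0], [0.0, 2.0]]))    # unstable
verdict3, _ = lyapunov_test(np.array([[0.0, 1.0], [-1.0, 0.0]]))   # (25) fails
```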
Example 2.2. In this example we demonstrate the steps required in applying the Lyapunov stability test. Consider the continuous-time time-invariant system

    [ẋ₁(t)]   [ 0   1   0 ] [x₁(t)]
    [ẋ₂(t)] = [ 0   0   1 ] [x₂(t)]    (32)
    [ẋ₃(t)]   [−1  −2  −3 ] [x₃(t)]

It is easy to check, using the MATLAB function eig, that the eigenvalues of this system are λ₁ = −2.3247, λ₂ = −0.3376 + i0.5623, λ₃ = −0.3376 − i0.5623, and hence this system is asymptotically stable. In order to apply the Lyapunov method, we first choose a positive definite matrix Q. The standard initial guess for Q is the identity, i.e. Q = I₃. With the help of the MATLAB function lyap (used for solving the algebraic Lyapunov equation), we can execute the statement

    P = lyap(A', Q)

(note the transpose: lyap(A, Q) solves AP + PAᵀ = −Q, whereas (24) involves AᵀP + PA) and obtain the solution

    P = [ 2.3  2.1  0.5
          2.1  4.6  1.3
          0.5  1.3  0.6 ]
Examining the positive definiteness of the matrix P (all eigenvalues of P must be positive), we find that the eigenvalues of this matrix are 6.1827, 1.1149, and 0.2024; hence P is positive definite, and the Lyapunov test indicates that the system under consideration is asymptotically stable.

It can be seen from this particular example that the Lyapunov stability test is not numerically very efficient, since we first have to solve the linear algebraic Lyapunov equation and then test the positive definiteness of the matrix P, which requires finding its eigenvalues. Of course, we could instead find the eigenvalues of the matrix A directly and determine the system's stability from that information. Indeed, the Lyapunov stability test is not the method of choice for testing the stability of linear systems whose system matrix is given numerically. However, it is a very useful concept in theoretical considerations, e.g. in proving other stability results.
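The numbers of Example 2.2 can be reproduced without MATLAB. The sketch below solves (24) for the matrix of (32) through the Kronecker form (an assumed stand-in for lyap) and recovers the same P; the displayed entries of P happen to be exact.

```python
import numpy as np

A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-1.0, -2.0, -3.0]])
Q = np.eye(3)

# Solve A^T P + P A = -Q, the equation behind the call lyap(A', Q).
K = np.kron(np.eye(3), A.T) + np.kron(A.T, np.eye(3))
P = np.linalg.solve(K, -Q.flatten()).reshape(3, 3)

eigs_A = np.linalg.eigvals(A)    # should lie in the open left half plane
eigs_P = np.linalg.eigvalsh(P)   # should all be positive (P > 0)
```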
Theorem 2.2 shows that P is positive definite whenever A is Hurwitz and Q is positive definite. The next result shows that, under certain conditions, P is positive definite even when Q is only positive semi-definite.

Lemma 2.3. Suppose A ∈ Rⁿˣⁿ is Hurwitz, C ∈ Rᵐˣⁿ, and

    rank L = n,  where  L = [ C
                              CA
                              ⋮
                              CA^{n−1} ]    (33)

Under these conditions, the equation

    AᵀP + PA = −CᵀC    (34)

has a unique positive definite solution P.

Proof: Existence and uniqueness of P follow from Lemma 2.1. To show that P is positive definite, suppose xᵀPx = 0. By (26) with Q = CᵀC, this implies

    C exp(At) x = 0,  ∀t ≥ 0

Let f(t) = C exp(At) x. Then f as well as all its derivatives are
identically zero. In particular, evaluating f and its first n − 1 derivatives at t = 0 gives Lx = 0, which by (33) implies x = 0. Hence, P is positive definite.

Example 2.3. Consider the same system matrix as in Example 2.2, with the matrix Q₁ obtained from

    Q₁ = CᵀC = [0]
               [0] [0 0 1] = [ 0  0  0
               [1]            0  0  0
                              0  0  1 ]

The rank of the matrix L is 3, so L has full rank. Then the Lyapunov algebraic equation

    AᵀP₁ + P₁A = −Q₁    (35)

has the positive definite solution

    P₁ = [ 0.1  0.2  0
           0.2  0.7  0.1
           0    0.1  0.2 ]

which can be confirmed by finding the eigenvalues of P₁; hence the linear system under consideration is asymptotically stable.
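Example 2.3 can likewise be checked directly. The sketch below builds the matrix L of (33), confirms its rank, solves (35) with the same assumed Kronecker-based solver as before, and verifies that P₁ is positive definite.

```python
import numpy as np

A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-1.0, -2.0, -3.0]])
C = np.array([[0.0, 0.0, 1.0]])

# Observability-type matrix L = [C; CA; CA^2] from (33).
L = np.vstack([C, C @ A, C @ A @ A])
rank_L = np.linalg.matrix_rank(L)

# Solve A^T P1 + P1 A = -C^T C, i.e. equation (35).
Q1 = C.T @ C
K = np.kron(np.eye(3), A.T) + np.kron(A.T, np.eye(3))
P1 = np.linalg.solve(K, -Q1.flatten()).reshape(3, 3)
eigs_P1 = np.linalg.eigvalsh(P1)
```

Note that Q₁ itself is only positive semi-definite; it is the rank condition (33) that forces P₁ to be positive definite.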
3 Time-varying systems

Here, we are interested in the stability of the time-varying system

    ẋ(t) = A(t)x(t),  t ≥ 0    (36)

In the case of linear time-varying systems, the stability status of the equilibrium 0 can be ascertained, in principle at least, by studying the state transition matrix.

Existence of quadratic Lyapunov functions. For time-invariant systems, it has been shown that if 0 is exponentially stable, then a quadratic Lyapunov function exists. A similar result is now proved for time-varying systems, under the assumption that 0 is exponentially stable. This result is based on two preliminary lemmas.
Lemma 3.1. Suppose Q : R₊ → Rⁿˣⁿ is continuous and bounded, and that the equilibrium 0 of (36) is uniformly asymptotically stable. Then, for each t ≥ 0, the matrix

    P(t) = ∫_t^∞ Φᵀ(τ, t) Q(τ) Φ(τ, t) dτ    (37)

is well-defined; moreover, P(t) is bounded as a function of t.

Proof: The assumption of uniform asymptotic stability implies that 0 is exponentially stable (Theorem 1.4). Thus, there exist constants m, λ > 0 such that

    ‖Φ(τ, t)‖ᵢ ≤ m exp[−λ(τ − t)],  ∀τ ≥ t ≥ 0    (38)

This bound, together with the boundedness of Q(·), proves the lemma.
Lemma 3.2. Suppose that, in addition to the assumptions of Lemma 3.1, the following conditions also hold:
(1) Q(t) is symmetric and positive definite for each t ≥ 0; moreover, there exists a constant α > 0 such that

    αxᵀx ≤ xᵀQ(t)x,  ∀t ≥ 0, ∀x ∈ Rⁿ    (39)

(2) The matrix A(·) is bounded, i.e.

    m₀ := sup_{t ≥ 0} ‖A(t)‖_{i,2} < ∞    (40)

Under these conditions, the matrix P(t) defined in (37) is positive definite for each t ≥ 0; moreover, there exists a constant β > 0 such that

    βxᵀx ≤ xᵀP(t)x,  ∀t ≥ 0, ∀x ∈ Rⁿ    (41)
Theorem 3.1. Suppose Q(·) and A(·) satisfy the hypotheses of Lemma 3.2. Then, for each function Q(·) satisfying the hypotheses, the function V(t, x) = xᵀP(t)x is a Lyapunov function establishing the exponential stability of the equilibrium 0.

Proof: With V(t, x) defined as above, we have

    V̇(t, x) = xᵀ[Ṗ(t) + Aᵀ(t)P(t) + P(t)A(t)]x

It is easy to verify, by differentiating (37), that

    Ṗ(t) = −Aᵀ(t)P(t) − P(t)A(t) − Q(t)    (42)

Hence

    V̇(t, x) = −xᵀQ(t)x

Thus the functions V and V̇ satisfy all the needed conditions.
Example 3.1. Consider the linear time-varying system

    ẋ₁(t) = ½(cos t − e^{sin t}) x₁(t) + sin²t · x₂(t)
    ẋ₂(t) = −sin²t · x₁(t) + ½(cos t − e^{sin t}) x₂(t)

Then

    A(t) = [ ½(cos t − e^{sin t})      sin²t
             −sin²t                    ½(cos t − e^{sin t}) ]

Taking Q = I, a simple calculation shows that

    P(t) = [ e^{−sin t}      0
             0               e^{−sin t} ]

is a solution of (42). The system is exponentially stable.
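The claim of Example 3.1 can be verified numerically. The sketch below (assuming the sign conventions written above for A(t)) evaluates the residual of (42) for Q = I and P(t) = e^{−sin t} I at sampled times and checks that it vanishes to rounding error; P(t) is also bounded above and below, as Lemma 3.2 requires.

```python
import numpy as np

def A(t):
    # system matrix of Example 3.1
    a = 0.5 * (np.cos(t) - np.exp(np.sin(t)))
    b = np.sin(t) ** 2
    return np.array([[a, b], [-b, a]])

def P(t):
    # candidate solution P(t) = exp(-sin t) I of (42)
    return np.exp(-np.sin(t)) * np.eye(2)

def Pdot(t):
    # derivative of P(t)
    return -np.cos(t) * np.exp(-np.sin(t)) * np.eye(2)

Q = np.eye(2)

# Residual of (42): Pdot(t) + A(t)^T P(t) + P(t) A(t) + Q should vanish.
residuals = [
    np.abs(Pdot(t) + A(t).T @ P(t) + P(t) @ A(t) + Q).max()
    for t in np.linspace(0.0, 20.0, 201)
]
max_residual = max(residuals)
```

The check works because the skew-symmetric part of A(t) cancels in Aᵀ(t)P(t) + P(t)A(t) when P(t) is a multiple of the identity.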