9 Analysis of stability and H∞ norms

Consider the causal, linear, time-invariant system

  ẋ(t) = Ax(t) + Bu(t)
  y(t) = Cx(t)

Denote the transfer function G(s) := C(sI − A)^{-1}B.

Theorem 85  The following statements are equivalent:

1. A is stable, and ‖G‖_∞ < 1.

2. A is stable, and the matrix

     [  A        BB^T ]
     [ -C^T C   -A^T  ]

   has no imaginary-axis eigenvalues.

3. A is stable, and there exists a matrix X ∈ R^{n×n} such that X = X^T, A + BB^T X is stable, and

     A^T X + XA + XBB^T X + C^T C = 0.

Proof: 1 ⇔ 2 follows directly from the homework. Using Theorem 19, it is clear that 2 ⇔ 3.

The norms can also be characterized in terms of Riccati inequalities.

Theorem 86  The following statements are equivalent:

1. A is stable, and ‖G‖_∞ < 1.

2. A is stable, and for some ε > 0,

     ‖ [ C ; εI_n ] (sI − A)^{-1} B ‖_∞ < 1,

   where [ C ; εI_n ] denotes C stacked above εI_n (the output is augmented with εx).
3. A is stable, and there exists a matrix X ∈ R^{n×n} such that X = X^T, A + BB^T X is stable, and

     A^T X + XA + XBB^T X + C^T C < 0.

4. A is stable, and there exists a matrix X ∈ R^{n×n} such that X = X^T and

     A^T X + XA + XBB^T X + C^T C < 0.

5. There exists a matrix X ∈ R^{n×n} such that X = X^T > 0 and

     A^T X + XA + XBB^T X + C^T C < 0.

6. There exists a matrix X ∈ R^{n×n} such that X = X^T > 0 and

     [ A^T X + XA + C^T C    XB ]
     [ B^T X                 -I ]  < 0.

7. There exists a matrix X ∈ R^{n×n} such that X = X^T > 0 and

     [ A^T X + XA    XB    C^T ]
     [ B^T X         -I    0   ]
     [ C             0     -I  ]  < 0.

Proof:

1 ⇒ 2: Since A is stable, ‖(sI − A)^{-1}B‖_∞ is finite. Hence, for some ε > 0, the condition holds.

2 ⇒ 3: Use Theorem 85 (that 1 ⇒ 3), applied to the augmented system, to conclude that there exists an X = X^T such that A + BB^T X is stable and

     A^T X + XA + XBB^T X + C^T C = -ε² I < 0.

3 ⇒ 4: Obvious.

4 ⇒ 5: Let W > 0 be such that

     A^T X + XA = -XBB^T X - C^T C - W.
Since A is stable, and the pair (A, [ B^T X ; C ; W^{1/2} ]) is observable (automatic, since W > 0), it follows from Theorem 3 that X > 0.

5 ⇒ 1: There are two easy ways to do this, each involving a completion of the square, in either the time domain or the frequency domain. This is in a later homework.

5 ⇔ 6 ⇔ 7: Schur complements.

For systems with a D term, the Hamiltonian matrix from the homework needs to be modified, and you should take some time to do this. In this case, the correct version of the inequality theorem is:

Theorem 87  The following statements are equivalent:

1. The continuous-time system described by

     ẋ = Ax + Bu
     y = Cx + Du

   is internally stable, and has ‖T_yu‖_∞ < 1.

2. σ̄(D) < 1, A is stable, and there is a matrix X = X^T such that

     A + BB^T X + BD^T (I − DD^T)^{-1} (C + DB^T X)

   is stable and

     [A + BD^T (I − DD^T)^{-1} C]^T X + X [A + BD^T (I − DD^T)^{-1} C]
       + XB [I + D^T (I − DD^T)^{-1} D] B^T X + C^T (I − DD^T)^{-1} C = 0.

3. σ̄(D) < 1 and there exists a matrix X = X^T > 0 such that

     [A + BD^T (I − DD^T)^{-1} C]^T X + X [A + BD^T (I − DD^T)^{-1} C]
       + XB [I + D^T (I − DD^T)^{-1} D] B^T X + C^T (I − DD^T)^{-1} C < 0.
4. There exists a matrix X ∈ R^{n×n} such that X = X^T > 0 and

     [ A^T X + XA    XB    C^T ]
     [ B^T X         -I    D^T ]
     [ C             D     -I  ]  < 0.
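As a concrete illustration of Theorem 85 (the D = 0 case), the Hamiltonian eigenvalue test can be compared against a crude frequency-grid estimate of ‖G‖_∞. This is a minimal numerical sketch, assuming numpy; the system data is invented for illustration:

```python
import numpy as np

# Invented 2-state example: A stable, output gain scaled so ||G||_inf < 1.
A = np.array([[-1.0, 2.0], [0.0, -3.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[0.2, 0.1]])

def hinf_less_than_one(A, B, C):
    """Theorem 85, condition 2: with A stable, ||G||_inf < 1 iff the
    Hamiltonian [[A, BB^T], [-C^T C, -A^T]] has no imaginary-axis eigenvalues."""
    H = np.block([[A, B @ B.T], [-C.T @ C, -A.T]])
    return not np.any(np.abs(np.linalg.eigvals(H).real) < 1e-9)

def grid_norm(A, B, C, wmax=100.0, N=2000):
    """Crude frequency-grid estimate of sup_w sigma_max(G(jw))."""
    n = A.shape[0]
    return max(np.linalg.svd(C @ np.linalg.solve(1j * w * np.eye(n) - A, B),
                             compute_uv=False)[0]
               for w in np.linspace(0.0, wmax, N))

print(hinf_less_than_one(A, B, C), grid_norm(A, B, C))        # norm ~ 0.37
print(hinf_less_than_one(A, B, 10 * C), grid_norm(A, B, 10 * C))
```

Scaling C by 10 pushes the peak gain above 1, and the Hamiltonian then acquires imaginary-axis eigenvalues, as the theorem predicts.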
State Feedback / Full Information H∞: Riccati Inequality Formulation

The material is motivated by earlier work, namely Petersen (IEEE Transactions on Automatic Control, 1987, vol. 32, pp. 427–429), Zhou and Khargonekar (Systems and Control Letters, 1988, vol. 11, pp. 85–91), Becker (ACC '93; PhD thesis; and Systems and Control Letters, 1994, vol. 23, pp. 205–215), and Gahinet (CDC '92, ACC '93).

There are two problems considered in this section: State-Feedback and Full-Information. In this section, we also use some very special orthogonality assumptions which make the formulae much cleaner. Later, we will show how to relax the orthogonality assumptions.

The generalized plant for the State-Feedback problem is

  [ ẋ ]   [ A    B1   B2  ] [ x ]
  [ e ] = [ C1   0    D12 ] [ d ]        (9.37)
  [ y ]   [ I    0    0   ] [ u ]

The orthogonality assumptions are that D12^T D12 = I_nu and D12^T C1 = 0.

The generalized plant for the Full-Information problem is

  [ ẋ  ]   [ A    B1   B2  ]
  [ e  ] = [ C1   0    D12 ] [ x ]
  [ y1 ]   [ I    0    0   ] [ d ]        (9.38)
  [ y2 ]   [ 0    I    0   ] [ u ]

Again, we impose the restrictions D12^T D12 = I_nu, D12^T C1 = 0.
Synthesis Result

Theorem 88  Consider the generalized plants P_SF (state-feedback case, equation (9.37)) and P_FI (full-information case, equation (9.38)). The following statements are equivalent:

1. For the plant P_SF, there exists a linear, constant-gain feedback law u(t) = F1 y(t) = F1 x(t) that achieves closed-loop internal stability and ‖T_ed‖_∞ < 1.

2. For the plant P_FI, there exists a linear, dynamic feedback law u = K1 y1 + K2 y2 = K1 x + K2 d that achieves closed-loop internal stability and ‖T_ed‖_∞ < 1.

3. There exists a matrix X ∈ R^{n×n}, X = X^T > 0, such that

     A^T X + XA + X (B1 B1^T − B2 B2^T) X + C1^T C1 < 0.

4. There exists a matrix Y ∈ R^{n×n}, Y = Y^T > 0, such that

     [ Y A^T + A Y + (B1 B1^T − B2 B2^T)    Y C1^T ]
     [ C1 Y                                 -I     ]  < 0.

5. There exists a matrix Y ∈ R^{n×n}, Y = Y^T > 0, such that

     Y A^T + A Y + (B1 B1^T − B2 B2^T) + Y C1^T C1 Y < 0.

Remark: Hence, for this special H∞ control design problem, there is no advantage in using dynamic controllers, or even in using the information in the disturbance measurement. This is essentially due to the fact that the disturbance affects the error only through the state (D11 = 0), and hence the state contains all of the useful information about the disturbance.
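A toy numerical instance of Theorem 88 may help. This is a hedged sketch (numpy assumed; all plant data invented for illustration): for this particular system, X = I happens to satisfy condition 3, and the gain u = −B2^T X x from the proof of 3 ⇒ 1 then gives closed-loop stability with peak gain below 1, checked crudely on a frequency grid.

```python
import numpy as np

# Invented plant data satisfying D12^T D12 = I, D12^T C1 = 0.
A   = np.array([[-1.0, 1.0], [0.0, -2.0]])
B1  = np.array([[0.1], [0.1]])
B2  = np.array([[0.0], [1.0]])
C1  = np.array([[0.1, 0.0], [0.0, 0.0]])
D12 = np.array([[0.0], [1.0]])

X = np.eye(2)                                   # happens to satisfy condition 3
Ric = A.T @ X + X @ A + X @ (B1 @ B1.T - B2 @ B2.T) @ X + C1.T @ C1
assert np.max(np.linalg.eigvalsh(Ric)) < 0

F = -B2.T @ X                                   # the gain from the proof of 3 => 1
Acl = A + B2 @ F
assert np.max(np.linalg.eigvals(Acl).real) < 0  # internal stability

Ccl = C1 + D12 @ F                              # T_ed = Ccl (sI - Acl)^{-1} B1
peak = max(np.linalg.svd(Ccl @ np.linalg.solve(1j * w * np.eye(2) - Acl, B1),
                         compute_uv=False)[0]
           for w in np.linspace(0.0, 50.0, 1000))
print("closed-loop peak gain ~", round(peak, 3))
```

For real problems one would compute X from the Riccati inequality rather than guess it; here the point is only that the constant gain −B2^T X achieves the norm bound, as the theorem asserts.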
Proof:

2 ⇒ 3: Let the controller state-space realization be

  [ ẋc ]   [ Ac   B1c   B2c ] [ xc ]
  [ u  ] = [ Cc   D1c   D2c ] [ y1 ]
                              [ y2 ]

where the state dimension of the controller is nc ≥ 0. Since y1 = x and y2 = d, the closed-loop state-space data is simply affine in the controller data:

  [ A_cl   B_cl ]   [ [A 0; 0 0_nc]    [B1; 0] ]   [ [B2 0; 0 I_nc] ]
  [ C_cl   D_cl ] = [ [C1 0]           0       ] + [ [D12 0]        ] K_ss

where K_ss collects the controller data (its column blocks act on x, xc, and d). By assumption, the closed-loop system is stable and has ‖T_ed‖_∞ < 1. Hence, by the main analysis lemma (item 7 of Theorem 86, applied to the closed loop), there is a matrix X ∈ R^{(n+nc)×(n+nc)}, X = X^T > 0, such that for the matrices N, L, R and K_ss defined as

  N := [ [A 0; 0 0_nc]^T X + X [A 0; 0 0_nc]    X [B1; 0]    [C1 0]^T ]
       [ [B1; 0]^T X                            -I_nd        0        ]
       [ [C1 0]                                 0            -I_ne    ]

  L := [ X [B2 0; 0 I_nc] ]        R := [ I_{n+nc}   0      0 ]        K_ss := [ D1c   Cc   D2c ]
       [ 0                ]             [ 0          I_nd   0 ]                [ B1c   Ac   B2c ]
       [ [D12 0]          ]

we have

  N + L K_ss R + R^T K_ss^T L^T < 0.

Note that the columns of L do not span all of R^{n+nc+nd+ne}. Hence, for this matrix to be negative definite, by proper choice of X and K_ss, it follows that some portion
of N must be negative definite already (since L cannot affect it in all directions). What is orthogonal to L? Let Y := X^{-1} and partition X and Y as

  X = [ X11     X12 ]        Y = [ Y11     Y12 ]
      [ X12^T   X22 ]            [ Y12^T   Y22 ]

It is easy to show (do it) that

  L_⊥ := [ 0      0       Y11        ]
         [ 0      0       Y12^T      ]
         [ I_nd   0       0          ]
         [ 0      D12_⊥   -D12 B2^T  ]

is full column rank, and spans the orthogonal complement of the range of L. Here D12_⊥ is any matrix with orthonormal columns spanning the null space of D12^T, so that D12_⊥ D12_⊥^T = I − D12 D12^T. Hence, we must have

  L_⊥^T N L_⊥ = L_⊥^T (N + L K_ss R + R^T K_ss^T L^T) L_⊥ < 0.

Multiply this out (remember that D12^T C1 = 0, which also implies (I − D12 D12^T) C1 = C1, so that C1^T D12_⊥ D12_⊥^T C1 = C1^T C1). Then, do an obvious Schur complement to get

  Y11 A^T + A Y11 + Y11 C1^T C1 Y11 + B1 B1^T − B2 B2^T < 0.

In doing this, you need to use the fact that X11 Y11 + X12 Y12^T = I_n, which follows from XY = I. Then, multiply both sides by Y11^{-1} to get the desired Riccati inequality (i.e., the X ∈ R^{n×n} in the desired Riccati inequality is just Y11^{-1}).

3 ⇒ 1: Verify that u(t) = −B2^T X x(t) works.

1 ⇒ 2: Use the constant-gain state feedback, K1 := F1, K2 := 0.

3 ⇔ 4: Y := X^{-1} and Schur complement.

4 ⇔ 5: Schur complement.
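Several of the steps above (items 5 through 7 of Theorem 86, and the 3 ⇔ 4 and 4 ⇔ 5 equivalences here) are Schur-complement manipulations. A small numerical sanity check of the pattern, with invented data (numpy assumed): for any symmetric X > 0, negativity of the Riccati expression in item 5 and of the block matrix in item 6 of Theorem 86 must coincide.

```python
import numpy as np

# Invented data; only the Schur-complement equivalence is being exercised.
A = np.array([[-2.0, 1.0], [0.0, -1.0]])
B = np.array([[0.5], [0.5]])
C = np.array([[0.3, 0.0]])

def negdef(M):
    """True iff the symmetric part of M is negative definite."""
    return np.max(np.linalg.eigvalsh((M + M.T) / 2)) < 0

def riccati_expr(X):                     # item 5 left-hand side
    return A.T @ X + X @ A + X @ B @ B.T @ X + C.T @ C

def lmi(X):                              # item 6 block matrix
    return np.block([[A.T @ X + X @ A + C.T @ C, X @ B],
                     [B.T @ X, -np.eye(B.shape[1])]])

rng = np.random.default_rng(0)
for _ in range(200):
    W = rng.standard_normal((2, 2))
    X = W @ W.T + 0.01 * np.eye(2)       # random X = X^T > 0
    assert negdef(riccati_expr(X)) == negdef(lmi(X))
print("item 5 <=> item 6 agreed on all samples")
```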
Maximal solutions of Riccati equations

For reference, the results from Gohberg, Lancaster and Rodman are restated, for real symmetric (rather than complex hermitian) solutions. Suppose that A ∈ R^{n×n}, C ∈ R^{n×n}, D ∈ R^{n×n}, with D = D^T ≥ 0, C = C^T and (A, D) stabilizable. Consider the matrix equation

  XDX − A^T X − XA − C = 0        (9.39)

Definition 89  A symmetric solution X_+ = X_+^T of (9.39) is maximal if X_+ ≥ X for any other symmetric solution X of (9.39).

Theorem 90  If there is a symmetric solution to (9.39), then there is a maximal solution X_+. The maximal solution satisfies

  max_i Re λ_i(A − DX_+) ≤ 0.

Theorem 91  If there is a symmetric solution to (9.39), then there is a sequence {X_j}_{j=1}^∞ such that for each j:

• X_j = X_j^T and 0 ≤ X_j D X_j − A^T X_j − X_j A − C
• X_j ≥ X for any X = X^T solving (9.39)
• A − DX_j is stable
• lim_{j→∞} X_j = X_+

Theorem 92  If there are symmetric solutions to (9.39), then for every symmetric C̃ ≥ C, there exist symmetric solutions (and hence a maximal solution X̃_+) to

  XDX − A^T X − XA − C̃ = 0.

Moreover, the maximal solutions are related by X̃_+ ≥ X_+.
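The monotonicity in Theorem 92 can be observed numerically. A hedged sketch (numpy assumed; the data is invented, and `maximal_solution` is our own helper name): when the Hamiltonian has no imaginary-axis eigenvalues, the maximal solution coincides with the stabilizing one and can be read off the stable invariant subspace.

```python
import numpy as np

# Invented data with D = BB^T >= 0 and (A, D) stabilizable.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
D = np.array([[0.0, 0.0], [0.0, 1.0]])   # = BB^T for B = [0; 1]
C = np.eye(2)

def maximal_solution(A, D, C):
    """Maximal solution of XDX - A^T X - XA - C = 0, computed here (for data
    with no imaginary-axis Hamiltonian eigenvalues) as the stabilizing
    solution taken from the stable invariant subspace."""
    n = A.shape[0]
    H = np.block([[A, -D], [-C, -A.T]])
    w, V = np.linalg.eig(H)
    Vs = V[:, w.real < 0]
    return np.real(Vs[n:, :] @ np.linalg.inv(Vs[:n, :]))

Xp = maximal_solution(A, D, C)
Xp_big = maximal_solution(A, D, C + np.eye(2))   # C~ = C + I >= C
assert np.min(np.linalg.eigvalsh(Xp_big - Xp)) > -1e-9   # X~_+ >= X_+
print(np.round(Xp, 4), np.round(Xp_big, 4), sep="\n")
```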
Inequalities and Equalities

Theorem 93  Suppose that A ∈ R^{n×n}, B ∈ R^{n×m} and Q = Q^T ∈ R^{n×n}, with (A, B) stabilizable. Let a matrix function R : R^{n×n} → R^{n×n} be defined as

  R(X) := A^T X + XA − XBB^T X + Q

Then the following statements are equivalent:

1. There exists an X ∈ R^{n×n} such that X = X^T and R(X) > 0.

2. There exists a unique X_+ ∈ R^{n×n} such that X_+ = X_+^T, R(X_+) = 0, and A − BB^T X_+ is stable.

3. The matrix

     [  A      -BB^T ]
     [ -Q      -A^T  ]

   has no imaginary-axis eigenvalues.

Moreover, if any of these are satisfied, then for any Y = Y^T with R(Y) ≥ 0, it must be that Y ≤ X_+. Also, if any of the three conditions are satisfied, then there exists a sequence {X_j}_{j=1}^∞ such that

  R(X_j) > 0,   lim_{j→∞} X_j = X_+.
Proof:

1 ⇒ 2: By hypothesis, there is an X = X^T ∈ R^{n×n} such that

  A^T X + XA − XBB^T X + Q =: W > 0

Hence, X is a symmetric solution to

  XBB^T X − A^T X − XA − (Q − W) = 0        (9.40)

By [GohLR] (Theorem 92, since Q ≥ Q − W), there is a maximal solution X_+ = X_+^T to the equation

  X_+ BB^T X_+ − A^T X_+ − X_+ A − Q = 0        (9.41)

and X_+ ≥ X for any symmetric solution X to equation (9.40). Now, note that

  A^T X_+ + X_+ A − X_+ BB^T X_+ + Q = 0
  A^T X + XA − XBB^T X + Q = W

Subtracting these equations gives

  A^T (X_+ − X) + (X_+ − X) A − X_+ BB^T X_+ + XBB^T X = −W        (9.42)

Define Δ := X_+ − X. Equation (9.42) can be rearranged into

  (A − BB^T X_+)^T Δ + Δ (A − BB^T X_+) = −Δ BB^T Δ − W

The right-hand side is clearly negative definite, and since Δ = X_+ − X ≥ 0, it follows by standard Lyapunov theory that A − BB^T X_+ is stable. (If (A − BB^T X_+) v = λv, then 2 Re(λ) v*Δv = −v*(Δ BB^T Δ + W) v < 0, forcing v*Δv > 0 and Re λ < 0.) In fact, it then follows that Δ > 0, so that X_+ > X.

2 ⇒ 3: Obvious, by early results on Riccati equations.

3 ⇒ 1: By continuity, there is an ε > 0 such that

  [  A            -BB^T ]
  [ -(Q − εI)     -A^T  ]

has no imaginary-axis eigenvalues. Since (A, B) is stabilizable, Theorem 17 applies, and there exists a matrix X = X^T ∈ R^{n×n} such that

  A^T X + XA − XBB^T X + (Q − εI) = 0        (9.43)
with A − BB^T X stable. Note that (9.43) can be rewritten as

  R(X) = εI > 0

as desired.
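The stabilizing solution of condition 2 can be computed directly from the stable invariant subspace of the matrix in condition 3. A hedged sketch (numpy assumed; A, B, Q invented; `stabilizing_solution` is our own helper name, not a library routine):

```python
import numpy as np

# Invented data: (A, B) controllable and Q = I > 0, so condition 3 holds.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)

def stabilizing_solution(A, B, Q):
    """X_+ with A^T X + XA - XBB^T X + Q = 0 and A - BB^T X_+ stable,
    built from the stable invariant subspace of the Hamiltonian."""
    n = A.shape[0]
    H = np.block([[A, -B @ B.T], [-Q, -A.T]])
    w, V = np.linalg.eig(H)
    Vs = V[:, w.real < 0]                 # n stable eigenvalues (condition 3)
    return np.real(Vs[n:, :] @ np.linalg.inv(Vs[:n, :]))

Xp = stabilizing_solution(A, B, Q)
resid = A.T @ Xp + Xp @ A - Xp @ B @ B.T @ Xp + Q
assert np.allclose(resid, 0, atol=1e-8)                       # R(X_+) = 0
assert np.max(np.linalg.eigvals(A - B @ B.T @ Xp).real) < 0   # stabilizing
print(np.round(Xp, 4))
```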
Additional Results

Finally, if Y = Y^T has R(Y) ≥ 0, then following the same argument gives

  (A − BB^T X_+)^T Δ_Y + Δ_Y (A − BB^T X_+) = −Δ_Y BB^T Δ_Y − R(Y)

where Δ_Y := X_+ − Y. Since A − BB^T X_+ is stable, and the right-hand side is negative semi-definite, it follows that Δ_Y ≥ 0.

For the last statement, choose γ̄ > 0 such that for all γ ∈ (0, γ̄], the matrix

  [  A            -BB^T ]
  [ -(Q − γI)     -A^T  ]

has no imaginary-axis eigenvalues. This is possible by continuity of eigenvalues. Hence, for each such γ, the 3rd condition of the theorem is true (with Q replaced by Q − γI), and hence all of the conditions are true. Let X_+^γ be the unique symmetric matrix satisfying

  A^T X_+^γ + X_+^γ A − X_+^γ BB^T X_+^γ + (Q − γI) = 0,   Re [λ_i (A − BB^T X_+^γ)] < 0

Note that for each γ, R(X_+^γ) = γI > 0, hence X_+^γ ≤ X_+. But also, if 0 < γ2 ≤ γ1 ≤ γ̄, then

  X_+^{γ2} ≥ X_+^{γ1}

Hence, the function X_+^γ (on (0, γ̄]) is bounded above by X_+, and is increasing as γ decreases. Hence, there is a limit X̄ satisfying

  lim_{γ→0} X_+^γ = X̄,   X̄ = X̄^T,   R(X̄) = 0,   X̄ ≤ X_+,   Re λ_i (A − BB^T X̄) ≤ 0

But the eigenvalue distribution of

  [  A      -BB^T ]
  [ -Q      -A^T  ]
implies that there are no eigenvalues on the imaginary axis; hence it must in fact be that

  Re λ_i (A − BB^T X̄) < 0

so that X̄ = X_+, by the uniqueness in condition 2.
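The limiting construction above can be watched numerically. A sketch (numpy assumed; data invented; `stabilizing_solution` is the same helper idea as before): solve the perturbed equation with Q − γI for shrinking γ and confirm the claimed monotone convergence to X_+ from below.

```python
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)

def stabilizing_solution(A, B, Q):
    """Stabilizing solution of A^T X + XA - XBB^T X + Q = 0 via the
    stable invariant subspace of the Hamiltonian."""
    n = A.shape[0]
    H = np.block([[A, -B @ B.T], [-Q, -A.T]])
    w, V = np.linalg.eig(H)
    Vs = V[:, w.real < 0]
    return np.real(Vs[n:, :] @ np.linalg.inv(Vs[:n, :]))

Xp = stabilizing_solution(A, B, Q)
prev = None
for gamma in [0.5, 0.1, 0.01, 0.001]:
    Xg = stabilizing_solution(A, B, Q - gamma * np.eye(2))
    # X_+^gamma <= X_+, and increasing as gamma decreases
    assert np.min(np.linalg.eigvalsh(Xp - Xg)) > -1e-9
    if prev is not None:
        assert np.min(np.linalg.eigvalsh(Xg - prev)) > -1e-9
    prev = Xg
print("gap at gamma = 0.001:", np.max(np.abs(prev - Xp)))
```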
H∞ Control Riccati Equalities

The generalized plant for the State-Feedback problem is

  [ ẋ ]   [ A    B1   B2  ] [ x ]
  [ e ] = [ C1   0    D12 ] [ d ]        (9.44)
  [ y ]   [ I    0    0   ] [ u ]

The generalized plant for the Full-Information problem is

  [ ẋ  ]   [ A    B1   B2  ]
  [ e  ] = [ C1   0    D12 ] [ x ]
  [ y1 ]   [ I    0    0   ] [ d ]        (9.45)
  [ y2 ]   [ 0    I    0   ] [ u ]

In both problems, we impose the orthogonality assumptions D12^T D12 = I_nu, D12^T C1 = 0. The theorem about the solvability of the H∞ synthesis problem for these generalized plants stated that the control synthesis problem is solvable if and only if there exists a matrix X ∈ R^{n×n}, X = X^T > 0, such that

  A^T X + XA + X (B1 B1^T − B2 B2^T) X + C1^T C1 < 0.

Using Theorem 93, we can connect these theorems to Riccati equalities, rather than inequalities.
Theorem 94  Consider the state-feedback control problem described earlier. Suppose that (A, C1) is observable. The following statements are equivalent:

1. There exists a matrix X ∈ R^{n×n} with X = X^T > 0 and

     A^T X + XA + X (B1 B1^T − B2 B2^T) X + C1^T C1 < 0.

2. There exists a matrix X̃ ∈ R^{n×n} with X̃ = X̃^T > 0 such that

     A^T X̃ + X̃ A + X̃ (B1 B1^T − B2 B2^T) X̃ + C1^T C1 = 0,
     Re λ_i [A + (B1 B1^T − B2 B2^T) X̃] < 0.

3. The state-feedback or full-information synthesis problems (as stated in Theorems 21–24) are solvable with constant-gain, or dynamic, linear feedback laws.

Moreover, if any of these conditions are true, then for any ε > 0, there exists a matrix X_ε > 0 such that

  A^T X_ε + X_ε A + X_ε (B1 B1^T − B2 B2^T) X_ε + C1^T C1 < 0

with X_ε ≥ X̃ and σ̄(X_ε − X̃) < ε.

Proof:

1 ⇒ 2: By hypothesis, there is a matrix X = X^T > 0 such that

  A^T X + XA + X (B1 B1^T − B2 B2^T) X + C1^T C1 < 0

Multiply on the left and right by X^{-1} =: Y to give

  Y A^T + AY + B1 B1^T − B2 B2^T + Y C1^T C1 Y < 0

Multiply by −1 and rearrange:

  (−A^T)^T Y + Y (−A^T) − Y C1^T C1 Y + (B2 B2^T − B1 B1^T) > 0
In terms of Theorem 93, let

  Theorem 93        Current Data
  X          :=     Y
  A          :=     −A^T
  B          :=     C1^T
  Q          :=     B2 B2^T − B1 B1^T

Since (A, C1) is observable, it follows that (−A^T, C1^T) is stabilizable. Apply Theorem 93, since Y = Y^T and R(Y) > 0 in this data. Hence there is a matrix Y_+ = Y_+^T solving

  −A Y_+ − Y_+ A^T − Y_+ C1^T C1 Y_+ + (B2 B2^T − B1 B1^T) = 0        (9.46)

with −A^T − C1^T C1 Y_+ stable, and Y_+ ≥ Y = X^{-1} > 0. Manipulating the Riccati equation (9.46) gives

  A + (B1 B1^T − B2 B2^T) Y_+^{-1} = Y_+ (−A^T − C1^T C1 Y_+) Y_+^{-1}

which shows that A + (B1 B1^T − B2 B2^T) Y_+^{-1} is indeed stable. Defining X̃ := Y_+^{-1} (and multiplying (9.46) on the left and right by Y_+^{-1}) completes the proof.

2 ⇒ 3: Exercise; see Lemma 6, page 836 of [DoyGKF].

3 ⇒ 1: Follows easily from Theorems 21–24.

The final statement follows straightforwardly from the convergent sequence described (and constructed) at the end of Theorem 93.
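The substitution table can be exercised numerically on toy state-feedback data (all values invented; numpy assumed; `stabilizing_solution` is our own helper name): for this system Y = X^{-1} = I is feasible in condition 1, and the dual stabilizing solution Y_+ yields X̃ = Y_+^{-1} satisfying the equality and stability requirements of condition 2.

```python
import numpy as np

# Invented plant data (same style as the earlier toy example).
A  = np.array([[-1.0, 1.0], [0.0, -2.0]])
B1 = np.array([[0.1], [0.1]])
B2 = np.array([[0.0], [1.0]])
C1 = np.array([[0.1, 0.0], [0.0, 0.0]])   # (A, C1) is observable

def stabilizing_solution(A, B, Q):
    """Stabilizing solution of A^T X + XA - XBB^T X + Q = 0 (Theorem 93)."""
    n = A.shape[0]
    H = np.block([[A, -B @ B.T], [-Q, -A.T]])
    w, V = np.linalg.eig(H)
    Vs = V[:, w.real < 0]
    return np.real(Vs[n:, :] @ np.linalg.inv(Vs[:n, :]))

# Theorem 93 data: A := -A^T, B := C1^T, Q := B2 B2^T - B1 B1^T
Yp = stabilizing_solution(-A.T, C1.T, B2 @ B2.T - B1 @ B1.T)
assert np.min(np.linalg.eigvalsh(Yp)) > 0      # Y_+ >= Y = X^{-1} = I > 0

Xt = np.linalg.inv(Yp)                          # the X~ of condition 2
res = A.T @ Xt + Xt @ A + Xt @ (B1 @ B1.T - B2 @ B2.T) @ Xt + C1.T @ C1
assert np.allclose(res, 0, atol=1e-8)
assert np.max(np.linalg.eigvals(A + (B1 @ B1.T - B2 @ B2.T) @ Xt).real) < 0
print(np.round(Xt, 4))
```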