Lecture 2
Anders Helmersson, anders.helmersson@liu.se
ISY/Reglerteknik, Linköpings universitet
Today's topics

- Norms
- Representation of dynamic systems
- Lyapunov equations
- Gramians
- Balancing
- Model reduction
- H_2 and H∞ norms
Small gain theorem

If G is stable and ||G|| < 1, then the closed-loop system is stable. Which norm should we use?
Norms

p-norms for vectors:

    ||x||_p = ( sum_i |x_i|^p )^{1/p}

Most common:
- p = 1: the sum of the absolute values of the elements;
- p = 2: the Euclidean norm of the vector x;
- p = ∞: the maximum of the absolute values.

Induced norms, e.g. the maximum singular value:

    ||A|| = sup_{x ≠ 0} ||Ax||_2 / ||x||_2 = σ̄(A)
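As a numerical illustration (a numpy sketch, not part of the lecture; the matrix is randomly chosen), the induced 2-norm equals the largest singular value: no direction x beats σ̄(A), and the top right singular vector attains it.

```python
import numpy as np

# Sketch: sup ||Ax||_2 / ||x||_2 equals sigma_bar(A), the largest
# singular value (illustrative random matrix).
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))

U, S, Vt = np.linalg.svd(A)
sigma_max = S[0]

# No sampled direction exceeds sigma_max ...
xs = rng.standard_normal((3, 1000))
ratios = np.linalg.norm(A @ xs, axis=0) / np.linalg.norm(xs, axis=0)
assert ratios.max() <= sigma_max + 1e-12

# ... and the top right singular vector v1 attains it: ||A v1|| = sigma_max.
v1 = Vt[0]
assert abs(np.linalg.norm(A @ v1) - sigma_max) < 1e-12
```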
Singular values

Singular value decomposition:

    A = U Σ V*,   Σ = diag(σ_1, σ_2, ..., σ_n) ∈ R^{n×n},   U*U = I,   V*V = I,

assuming σ_1 ≥ σ_2 ≥ ... ≥ σ_n ≥ 0. Then

    (A*A) V = (U Σ V*)* U Σ V* V = V Σ U* U Σ = V Σ^2,

which means that the entries of Σ^2 are the eigenvalues of A*A. Numerically, this is not a good method to compute Σ.
Singular values

Also,

    [0 A; A* 0] [U U; V -V] = [UΣ -UΣ; VΣ VΣ] = [U U; V -V] [Σ 0; 0 -Σ]

and, consequently, ±σ_i are the eigenvalues of

    [0 A; A* 0]
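This can be confirmed numerically (a numpy sketch, not from the slides; the matrix is randomly chosen): the symmetric block matrix has exactly the eigenvalues ±σ_i.

```python
import numpy as np

# Sketch: the eigenvalues of [0 A; A^T 0] are +/- the singular values of A.
rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))

sv = np.linalg.svd(A, compute_uv=False)
M = np.block([[np.zeros((3, 3)), A], [A.T, np.zeros((3, 3))]])
ev = np.sort(np.linalg.eigvalsh(M))  # M is symmetric, so eigvalsh applies

expected = np.sort(np.concatenate([-sv, sv]))
assert np.allclose(ev, expected)
```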
Singular values and LMIs

We can also use a linear matrix inequality (LMI) to find the maximum singular value: find the minimum γ for which

    [γI A; A* γI] ⪰ 0.

Here ⪰ 0 means positive semidefinite (all eigenvalues ≥ 0). Also,

    min{γ : γI - A*A ⪰ 0} = σ̄^2(A)
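A quick numerical check of both statements (a numpy sketch, not from the slides; random matrix): the block LMI becomes feasible exactly at γ = σ̄(A), while the scalar condition γI - A*A ⪰ 0 becomes feasible at γ = σ̄²(A).

```python
import numpy as np

# Sketch: [gamma*I A; A^T gamma*I] >= 0 iff gamma >= sigma_bar(A).
rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3))
s_max = np.linalg.svd(A, compute_uv=False)[0]

def lmi_psd(gamma):
    M = np.block([[gamma * np.eye(3), A], [A.T, gamma * np.eye(3)]])
    return np.linalg.eigvalsh(M).min() >= -1e-10

assert lmi_psd(s_max)              # feasible exactly at gamma = sigma_bar(A)
assert not lmi_psd(0.99 * s_max)   # infeasible just below it

# gamma*I - A^T A is PSD first at gamma = sigma_bar(A)^2:
assert np.linalg.eigvalsh(s_max**2 * np.eye(3) - A.T @ A).min() >= -1e-10
```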
Representation of dynamic systems

State-space representation:

    ẋ = Ax + Bu
    y = Cx + Du

Transfer function:

    G(s) = C(sI - A)^{-1} B + D    (1)

Sometimes we write this as

    [ẋ; y] = [A B; C D] [x; u]    (2)

or

    G = [A B; C D]    (3)
Serial and parallel connection

    G_1 = [A_1 B_1; C_1 D_1],   G_2 = [A_2 B_2; C_2 D_2]    (4)

Serial connection:

    G_1 G_2 = [A_1  B_1 C_2  B_1 D_2;  0  A_2  B_2;  C_1  D_1 C_2  D_1 D_2]    (5)

Parallel connection:

    G_1 + G_2 = [A_1  0  B_1;  0  A_2  B_2;  C_1  C_2  D_1 + D_2]    (6)
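The formulas (5) and (6) can be verified numerically (a numpy sketch, not part of the lecture; the two 2-state systems and the helper tf_eval are illustrative): the connected realizations match the product and sum of the individual frequency responses.

```python
import numpy as np

def tf_eval(A, B, C, D, s):
    """Evaluate G(s) = D + C (sI - A)^{-1} B at a complex frequency s."""
    return D + C @ np.linalg.solve(s * np.eye(A.shape[0]) - A, B)

# two randomly generated 2-state SISO systems (illustrative only)
rng = np.random.default_rng(3)
def rand_sys():
    return (rng.standard_normal((2, 2)), rng.standard_normal((2, 1)),
            rng.standard_normal((1, 2)), rng.standard_normal((1, 1)))

A1, B1, C1, D1 = rand_sys()
A2, B2, C2, D2 = rand_sys()

# serial connection G1*G2 (u -> G2 -> G1 -> y), eq. (5)
As = np.block([[A1, B1 @ C2], [np.zeros((2, 2)), A2]])
Bs = np.vstack([B1 @ D2, B2])
Cs = np.hstack([C1, D1 @ C2])
Ds = D1 @ D2

# parallel connection G1 + G2, eq. (6)
Ap = np.block([[A1, np.zeros((2, 2))], [np.zeros((2, 2)), A2]])
Bp = np.vstack([B1, B2])
Cp = np.hstack([C1, C2])
Dp = D1 + D2

s = 2.0j
g1 = tf_eval(A1, B1, C1, D1, s)
g2 = tf_eval(A2, B2, C2, D2, s)
assert np.allclose(tf_eval(As, Bs, Cs, Ds, s), g1 @ g2)
assert np.allclose(tf_eval(Ap, Bp, Cp, Dp, s), g1 + g2)
```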
Lyapunov equations

The linear equation

    A^T X + X A + Q = 0    (7)

is a Lyapunov equation. It has a unique solution if A and -A do not have any common eigenvalues; for instance, if A has all its eigenvalues in the left half plane. In Matlab we can solve the equation using X = lyap(A', Q) (note that lyap(A, Q) solves A X + X A^T + Q = 0).
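The same equation can be solved in Python (a scipy sketch, not the lecture's Matlab call; the example A and Q are chosen for illustration). scipy's solver handles A X + X A^H = Q, so we pass A^T and -Q to match equation (7).

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Solve A^T X + X A + Q = 0 via scipy, which solves a x + x a^H = q.
A = np.array([[-1.0, 0.0], [1.0, -5.0]])
Q = np.eye(2)
X = solve_continuous_lyapunov(A.T, -Q)

# residual check of eq. (7)
assert np.allclose(A.T @ X + X @ A + Q, 0)
```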
Observability gramian L_o

The observability gramian is defined by

    A^T L_o + L_o A + C^T C = 0.    (8)

Regard the following system (u = 0):

    ẋ = Ax
    y = Cx

Introduce V_o = x^T L_o x and take its time derivative:

    V̇_o = ẋ^T L_o x + x^T L_o ẋ = x^T (A^T L_o + L_o A) x = -x^T C^T C x = -y^T y = -||y||_2^2
Observability gramian L_o

Integrate V̇_o:

    ∫_0^T V̇_o dt = V_o(x(T)) - V_o(x(0)) = -∫_0^T ||y(t)||_2^2 dt.

If we assume that the system is stable and let T → ∞, then x(T), and hence V_o(x(T)), approaches zero. Thus

    V_o(x(0)) = x^T(0) L_o x(0) = ∫_0^∞ ||y(t)||_2^2 dt

The gramian L_o includes information about how much a certain state becomes visible (as energy) in the output signal.
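The energy interpretation can be checked numerically (a numpy/scipy sketch, not from the lecture; the system and initial state are chosen for illustration): x(0)^T L_o x(0) matches the integrated output energy.

```python
import numpy as np
from scipy.linalg import expm, solve_continuous_lyapunov

# Sketch: x0^T Lo x0 equals the output energy of x' = Ax, y = Cx from x(0) = x0.
A = np.array([[-1.0, 0.0], [1.0, -5.0]])
C = np.array([[0.0, 1.0]])
Lo = solve_continuous_lyapunov(A.T, -C.T @ C)  # A^T Lo + Lo A + C^T C = 0

x0 = np.array([1.0, -2.0])
# integrate ||y(t)||^2 = (C e^{At} x0)^2 by the trapezoidal rule
t = np.linspace(0.0, 20.0, 20001)
y2 = np.array([((C @ expm(A * ti) @ x0) ** 2).item() for ti in t])
energy = float(np.sum((y2[1:] + y2[:-1]) / 2 * np.diff(t)))

assert np.isclose(energy, x0 @ Lo @ x0, rtol=1e-3)
```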
Controllability gramian L_c

The controllability gramian is defined by

    A L_c + L_c A^T + B B^T = 0.    (9)

Introduce V_c(x) = x^T L_c^{-1} x. This is a measure of the smallest energy needed in the input to obtain a certain state x. As before, take the derivative of V_c along ẋ = Ax + Bu:

    V̇_c(x) = ẋ^T L_c^{-1} x + x^T L_c^{-1} ẋ
            = u^T B^T L_c^{-1} x + x^T L_c^{-1} B u + x^T (A^T L_c^{-1} + L_c^{-1} A) x
            = -(u - B^T L_c^{-1} x)^T (u - B^T L_c^{-1} x) + u^T u ≤ u^T u

where A^T L_c^{-1} + L_c^{-1} A = -L_c^{-1} B B^T L_c^{-1} follows from multiplying (9) by L_c^{-1} from both sides.
Controllability gramian L_c

The control law u_opt = B^T L_c^{-1} x gives the smallest control energy. The energy needed for reaching a certain state (starting in x(-∞) = 0) is given by

    ||u||^2 = V_c(x) + ||u - B^T L_c^{-1} x||^2    (10)

Thus ||u||^2 ≥ V_c(x) = x^T L_c^{-1} x, with equality when u = u_opt.
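Equation (9) also characterizes L_c as an integral; a numpy/scipy sketch (not from the lecture, illustrative system) confirming that the Lyapunov solution agrees with L_c = ∫_0^∞ e^{At} B B^T e^{A^T t} dt:

```python
import numpy as np
from scipy.linalg import expm, solve_continuous_lyapunov

# Sketch: Lc from the Lyapunov equation equals the gramian integral.
A = np.array([[-1.0, 0.0], [1.0, -5.0]])
B = np.array([[1.0], [0.0]])
Lc = solve_continuous_lyapunov(A, -B @ B.T)  # A Lc + Lc A^T + B B^T = 0

# trapezoidal approximation of int_0^inf e^{At} B B^T e^{A^T t} dt
t = np.linspace(0.0, 20.0, 10001)
M = np.array([expm(A * ti) @ B @ B.T @ expm(A.T * ti) for ti in t])
Lc_num = np.sum((M[1:] + M[:-1]) / 2, axis=0) * (t[1] - t[0])

assert np.allclose(Lc_num, Lc, atol=1e-3)
```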
Balancing

The representation of a system is not unique:

    G = [A B; C D]    (11)

Using a similarity transformation, x̂ = Tx, it can also be written as

    G = [Â B̂; Ĉ D̂] = [T A T^{-1}  T B; C T^{-1}  D]    (12)

The transformation also affects the gramians:

    L̂_o = T^{-T} L_o T^{-1}    (13)
    L̂_c = T L_c T^T    (14)
Balancing

Choose T in a clever way such that

    L̂_o = L̂_c = Σ = diag(σ_1, σ_2, ..., σ_n)    (15)

Then

    σ_i^2 = λ_i(L̂_o L̂_c) = λ_i(L_o L_c).    (16)

To obtain this, factor L_o and L_c, for instance using Cholesky decomposition: L_o = U_o^T U_o and L_c = U_c^T U_c. Then compute U Σ V^T = U_o U_c^T using singular value decomposition, where U U^T = V V^T = I. The transformation matrix is given by

    T = Σ^{-1/2} U^T U_o.
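The recipe can be implemented directly (a numpy/scipy sketch, not from the lecture; the system is chosen for illustration): after transforming with T = Σ^{-1/2} U^T U_o, both gramians equal Σ.

```python
import numpy as np
from scipy.linalg import cholesky, solve_continuous_lyapunov, svd

# Sketch of the balancing transformation for an illustrative system.
A = np.array([[-1.0, 0.0], [1.0, -5.0]])
B = np.array([[1.0], [0.0]])
C = np.array([[0.0, 1.0]])

Lo = solve_continuous_lyapunov(A.T, -C.T @ C)
Lc = solve_continuous_lyapunov(A, -B @ B.T)

Uo = cholesky(Lo)                    # Lo = Uo^T Uo (upper triangular)
Uc = cholesky(Lc)                    # Lc = Uc^T Uc
U, s, Vt = svd(Uo @ Uc.T)
Sigma = np.diag(s)
T = np.diag(s ** -0.5) @ U.T @ Uo    # T = Sigma^{-1/2} U^T Uo

Ti = np.linalg.inv(T)
assert np.allclose(Ti.T @ Lo @ Ti, Sigma, atol=1e-8)  # eq. (13) balanced
assert np.allclose(T @ Lc @ T.T, Sigma, atol=1e-8)    # eq. (14) balanced
```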
An example of balancing

    G(s) = 1/((s+1)(s+5)) = [-1 0 1; 1 -5 0; 0 1 0]
         = [-0.5946 -1.336 0.3656; 1.336 -5.405 -0.3656; 0.3656 0.3656 0]

with Σ = diag[σ_1, σ_2] = diag[0.1124, 0.0124]. In Matlab this is done by

    g = tf(1,[1 1])*tf(1,[1 5]);
    [gb,sig] = balreal(g);

Here σ_2 is small compared to σ_1. If we remove the corresponding state, x_2, the H∞ error is limited by 2σ_2:

    ||G - Ĝ_1||_∞ ≤ 2σ_2,   Ĝ_1(s) = [-0.5946 0.3656; 0.3656 0]    (17)
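The same example can be reproduced without Matlab (a numpy/scipy sketch of balreal plus truncation, not the lecture's workflow): balance the series realization of 1/((s+1)(s+5)), drop the second state, and check the 2σ_2 bound on a frequency grid.

```python
import numpy as np
from scipy.linalg import cholesky, solve_continuous_lyapunov, svd

# Sketch: balance G(s) = 1/((s+1)(s+5)), truncate x2, check the 2*sigma_2 bound.
A = np.array([[-1.0, 0.0], [1.0, -5.0]])
B = np.array([[1.0], [0.0]])
C = np.array([[0.0, 1.0]])

Lo = solve_continuous_lyapunov(A.T, -C.T @ C)
Lc = solve_continuous_lyapunov(A, -B @ B.T)
U, s, Vt = svd(cholesky(Lo) @ cholesky(Lc).T)  # s holds the Hankel singular values
T = np.diag(s ** -0.5) @ U.T @ cholesky(Lo)

Ab, Bb, Cb = T @ A @ np.linalg.inv(T), T @ B, C @ np.linalg.inv(T)

def g(A, B, C, w):
    return (C @ np.linalg.solve(1j * w * np.eye(len(A)) - A, B))[0, 0]

w = np.logspace(-2, 3, 2000)
err = np.array([abs(g(A, B, C, wi) - g(Ab[:1, :1], Bb[:1], Cb[:, :1], wi))
                for wi in w])
assert err.max() <= 2 * s[1] + 1e-9  # truncation error within the 2*sigma_2 bound
print(s)  # approx [0.1124, 0.0124]
```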
[Figure: singular value plot (dB) of G, G - Ĝ_1, and the bound 2σ_2, over 0.1-100 rad/sec.]
Model reduction

Model reduction can be done by truncating the states that have the smallest Hankel singular values:

    ||G_n - Ĝ_r||_∞ ≤ 2 sum_{i=r+1}^n σ_i    (18)

This upper bound can be halved by using a different scheme (optimal Hankel norm approximation) that also modifies the D matrix:

    σ_{r+1} ≤ ||G_n - Ĝ_r||_∞ ≤ sum_{i=r+1}^n σ_i    (19)
[Figure: with modified D (optimal Hankel norm approximation), the singular value plot shows ||G - Ĝ_1||_∞ = σ_2, i.e. the error reaches the lower bound.]
H_2 norm

    e → G → z

Let e = δ (Dirac pulse) or white noise. The impulse input gives

    ẋ = Ax + Bδ(t),   i.e.   x(0) = B.

Energy in the output:

    V(x(0)) = x^T(0) L_o x(0) = B^T L_o B = ||G||_2^2 = C L_c C^T
H_2 norm

In the MIMO case

    ||G||_2^2 = tr B^T L_o B = tr C L_c C^T

The H_2 norm can thus be computed by solving a linear matrix (Lyapunov) equation.
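Both gramian formulas give the same value, which can be checked numerically (a numpy/scipy sketch, not from the lecture). For G(s) = 1/((s+1)(s+5)) the exact value ||G||_2^2 = 1/60 can also be obtained from the frequency-domain integral, assuming my partial-fraction evaluation of (1/2π)∫|G(jω)|² dω is correct.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Sketch: tr(B^T Lo B) = tr(C Lc C^T) = ||G||_2^2 for G(s) = 1/((s+1)(s+5)).
A = np.array([[-1.0, 0.0], [1.0, -5.0]])
B = np.array([[1.0], [0.0]])
C = np.array([[0.0, 1.0]])

Lo = solve_continuous_lyapunov(A.T, -C.T @ C)
Lc = solve_continuous_lyapunov(A, -B @ B.T)
h2_sq_a = np.trace(B.T @ Lo @ B)
h2_sq_b = np.trace(C @ Lc @ C.T)

assert np.isclose(h2_sq_a, h2_sq_b)
assert np.isclose(h2_sq_a, 1 / 60)  # (1/2pi) int |G(jw)|^2 dw = 1/60
```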
H∞ norm

The H∞ norm is a measure of the maximum gain of a stable system G over all frequencies ω:

    ||G||_∞ = max_ω |G(jω)|    (20)

(in the MIMO case, max_ω σ̄(G(jω))). The L∞ norm is defined in the same way but also applies to unstable systems. The H∞ norm cannot be computed directly (as the H_2 norm can), but we can test whether ||G||_∞ < γ.
H∞ norm

    G(s) = [A B; C D] = D + C(sI - A)^{-1} B

Assume that ||G||_∞ < γ. If G is SISO then

    Φ(jω) = γ^2 - σ^2(G(jω)) = γ^2 - G(jω)* G(jω) > 0   for all ω.

If γ ≤ ||G||_∞ then Φ(jω) has at least one zero for ω ∈ R.
H∞ norm

This gives

    Φ(s) = γ^2 - G~(s) G(s) = γ^2 - G(-s)^T G(s)

where

    G~(s) = [-A^T -C^T; B^T D^T]

In state-space form:

    Φ(s) = [A 0 B; -C^T C -A^T -C^T D; -D^T C -B^T γ^2 I - D^T D]

Requirement 1: γ > σ̄(D) = sqrt(λ_max(D^T D)), so that γ^2 I - D^T D is invertible. If Φ(s) has a zero on the imaginary axis then Φ^{-1}(s) must have a pole on the imaginary axis.
Inverse of systems

We need to compute the inverse of a state-space system. The inverse of a system can be written as

    G^{-1}(s) = [A B; C D]^{-1} = [A - B D^{-1} C  B D^{-1}; -D^{-1} C  D^{-1}]

since...
Inverse of systems

    G^{-1}(s) G(s) = [A - B D^{-1} C  B D^{-1} C  B D^{-1} D;  0  A  B;  -D^{-1} C  D^{-1} C  D^{-1} D]
                   = [A - B D^{-1} C  B D^{-1} C  B;  0  A  B;  -D^{-1} C  D^{-1} C  I]

Applying the similarity transformation T = [I -I; 0 I] to the states gives

    G^{-1}(s) G(s) = [A - B D^{-1} C  0  0;  0  A  B;  -D^{-1} C  0  I] = I

since the first state is uncontrollable and the second is unobservable.
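The inverse realization can be checked numerically (a numpy sketch, not from the lecture; the system and the helper tf_eval are illustrative): G^{-1}(s) G(s) evaluates to the identity at a sample frequency.

```python
import numpy as np

def tf_eval(A, B, C, D, s):
    """Evaluate G(s) = D + C (sI - A)^{-1} B at a complex frequency s."""
    return D + C @ np.linalg.solve(s * np.eye(A.shape[0]) - A, B)

# random 2x2 system (illustrative; D is invertible with probability one)
rng = np.random.default_rng(4)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 2))
C = rng.standard_normal((2, 3))
D = rng.standard_normal((2, 2))

# inverse realization: [A - B D^{-1} C, B D^{-1}; -D^{-1} C, D^{-1}]
Di = np.linalg.inv(D)
Ai, Bi, Ci, Dd = A - B @ Di @ C, B @ Di, -Di @ C, Di

s = 1.7j
assert np.allclose(tf_eval(Ai, Bi, Ci, Dd, s) @ tf_eval(A, B, C, D, s), np.eye(2))
```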
H∞ norm

Applying the inverse formula to the realization of Φ(s), let H be the A matrix of Φ^{-1}(s):

    H = [A 0; -C^T C -A^T] + [B; -C^T D] R^{-1} [D^T C  B^T],   R = γ^2 I - D^T D
      = [A + B R^{-1} D^T C   B R^{-1} B^T;  -C^T C - C^T D R^{-1} D^T C   -A^T - C^T D R^{-1} B^T]

If H has eigenvalues on the imaginary axis, then γ ≤ ||G||_∞.
Example of the H∞ norm

Let G(s) = 1/(1 + 0.1s + s^2):

    γ          eig H
    10         ±1j, ±0.9950j
    10.1       ±0.0066 ± 0.9975j
    10.0125    ±ε ± 0.9974j

[Figure: singular value plot of 1/(s^2 + s/10 + 1), peaking at about 20 dB near ω = 1 rad/sec.]
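The γ-iteration suggested by this table can be automated by bisection (a numpy sketch, not the lecture's Matlab workflow). For this example D = 0, so R = γ²I and the Hamiltonian simplifies; the bisection converges to the analytic resonance peak 1/(2ζ√(1-ζ²)) with ζ = 0.05.

```python
import numpy as np

# Sketch: bisection on gamma using the Hamiltonian eigenvalue test for
# G(s) = 1/(s^2 + 0.1 s + 1). With D = 0: H = [A, B B^T/gamma^2; -C^T C, -A^T].
A = np.array([[0.0, 1.0], [-1.0, -0.1]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

def has_imaginary_eig(gamma):
    H = np.block([[A, B @ B.T / gamma**2], [-C.T @ C, -A.T]])
    return bool(np.any(np.abs(np.linalg.eigvals(H).real) < 1e-7))

lo, hi = 2.0, 20.0  # bracket: lo < ||G||_inf < hi
while hi - lo > 1e-4:
    mid = 0.5 * (lo + hi)
    if has_imaginary_eig(mid):
        lo = mid  # eigenvalues on the axis: gamma <= ||G||_inf
    else:
        hi = mid  # no eigenvalues on the axis: gamma > ||G||_inf
hinf = hi
print(hinf)  # close to 1/(2*0.05*sqrt(1 - 0.05**2)) = 10.0125
```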
Induced norms

The H∞ norm can also be described as an induced L_2 or l_2 norm. For a discrete-time system, if γ > ||G||_∞ then

    sum_{t=0}^T ( y^T(t) y(t) - γ^2 u^T(t) u(t) ) < 0   for all u ≠ 0 and all T,

for the system

    [x(t+1); y(t)] = [A B; C D] [x(t); u(t)].
Induced norms

Introduce a Lyapunov function V(x) = x^T P x, where P = P^T > 0 is positive definite:

    sum_{t=0}^T ( y^T y - γ^2 u^T u )
      = sum_{t=0}^T ( y^T y - γ^2 u^T u + V(x(t+1)) - V(x(t)) ) + V(x(0)) - V(x(T+1))
      = sum_{t=0}^T [x(t); u(t)]^T ( [A^T P A - P  A^T P B; B^T P A  B^T P B - γ^2 I] + [C^T; D^T][C D] ) [x(t); u(t)]
        + V(x(0)) - V(x(T+1))
LMIs

Ensure that

    [A^T P A - P  A^T P B; B^T P A  B^T P B - γ^2 I] + [C^T; D^T][C D] ⪯ 0

by choosing an appropriate matrix P = P^T > 0. The smallest γ for which we can find such a P gives an upper bound of ||G||_∞. This is a linear matrix inequality (LMI), and finding P is a convex problem.
LMIs

For continuous-time systems:

    [P A + A^T P  P B; B^T P  -γ^2 I] + [C^T; D^T][C D] ⪯ 0

The smallest γ for which we can find such a P = P^T > 0 gives an upper bound of ||G||_∞.
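In practice the smallest γ is found with an SDP solver; as a solver-free numerical check (a numpy sketch, not from the lecture; the example system and the Riccati-based construction of P are my choices), for γ slightly above ||G||_∞ a feasible P can be built from the stable invariant subspace of the Hamiltonian H, and the continuous-time LMI then holds.

```python
import numpy as np

# Sketch: continuous-time bounded-real LMI check with D = 0 for
# G(s) = 1/(s^2 + 0.1 s + 1), whose H-infinity norm is about 10.0125.
A = np.array([[0.0, 1.0], [-1.0, -0.1]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
gamma = 11.0  # just above ||G||_inf

# P from the stable invariant subspace of the Hamiltonian (Riccati solution)
H = np.block([[A, B @ B.T / gamma**2], [-C.T @ C, -A.T]])
w, V = np.linalg.eig(H)
stable = V[:, w.real < 0]          # n eigenvectors with Re(lambda) < 0
X, Y = stable[:2, :], stable[2:, :]
P = np.real(Y @ np.linalg.inv(X))
P = 0.5 * (P + P.T)                # symmetrize against roundoff

LMI = np.block([[P @ A + A.T @ P + C.T @ C, P @ B],
                [B.T @ P, -gamma**2 * np.eye(1)]])
assert np.linalg.eigvalsh(LMI).max() <= 1e-6  # LMI satisfied (<= 0)
assert np.linalg.eigvalsh(P).min() > 0        # P positive definite
```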