Output Feedback and State Feedback
EL2620 Nonlinear Control, Lecture 10

Today's topics:
- Exact feedback linearization
- Input-output linearization
- Lyapunov-based control design methods

Consider the system

  ẋ = f(x,u),  y = h(x)

Output feedback: find u = k(y) such that the closed-loop system
ẋ = f(x, k(h(x))) has nice properties.

State feedback: find u = l(x) such that ẋ = f(x, l(x)) has nice
properties.

k and l may include dynamics.

Nonlinear Controllers

- Nonlinear dynamical controller: ż = a(z,y), u = c(z)
- Linear dynamics, static nonlinearity: ż = Az + By, u = c(z)
- Linear controller: ż = Az + By, u = Cz

What if x is not measurable? Nonlinear Observers

For ẋ = f(x,u), y = h(x), the simplest observer is the open-loop copy

  d/dt x̂ = f(x̂, u)

With feedback correction, as in the linear case,

  d/dt x̂ = f(x̂, u) + K(y − h(x̂))

Choices of K:
- Linearize f at x₀ and find K for the linearization
- Linearize f at x̂(t) and find K = K(x̂) for the linearization

The second case is called the Extended Kalman Filter.
Exact Feedback Linearization

Some state feedback control approaches:
- Exact feedback linearization
- Input-output linearization
- Lyapunov-based design (back-stepping control)

Consider a nonlinear control-affine system. The idea is to use a
state-feedback controller u(x) that makes the closed-loop system linear.

Example 1: For

  ẋ = cos x − x³ + u

the state-feedback controller

  u(x) = −cos x + x³ − kx + v

yields the linear system ẋ = −kx + v.

Another Example

  ẋ₁ = a sin x₂
  ẋ₂ = −x₁² + u

How do we cancel the term sin x₂? Transform the states into a
linearizable form: z₁ = x₁, z₂ = ẋ₁ = a sin x₂, which yields

  ż₁ = z₂,  ż₂ = a cos x₂ · (−x₁² + u)

and the linearizing control becomes

  u(x) = x₁² + v/(a cos x₂),  x₂ ∈ (−π/2, π/2)

Diffeomorphisms

A nonlinear state transformation z = T(x) with
- T invertible for x in the domain of interest
- T and T⁻¹ continuously differentiable
is called a diffeomorphism.

Definition: A nonlinear system is feedback linearizable if there exists
a diffeomorphism T whose domain contains the origin and which transforms
the system into the form

  ż = Az + Bγ(x)(u − α(x))

with (A,B) controllable and γ(x) nonsingular for all x in the domain of
interest.
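The cancellation in Example 1 is easy to check numerically. Below is a minimal sketch (plain Python with forward-Euler integration; the gain k = 2, v = 0, and the initial state are my choices, not from the slides): after the cancellation the closed loop behaves exactly like ẋ = −kx.

```python
import math

def simulate(k=2.0, x0=5.0, dt=1e-3, t_end=5.0):
    """Euler simulation of xdot = cos(x) - x^3 + u with the
    linearizing feedback u = -cos(x) + x^3 - k*x (v = 0)."""
    x = x0
    for _ in range(int(t_end / dt)):
        u = -math.cos(x) + x**3 - k * x   # cancels the nonlinear terms
        xdot = math.cos(x) - x**3 + u     # closed loop: xdot = -k*x
        x += dt * xdot
    return x

print(abs(simulate()))  # close to 0: the state decays like 5*exp(-2t)
```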
Exact vs. Input-Output Linearization

The example again, but now with an output:

  ẋ₁ = a sin x₂,  ẋ₂ = −x₁² + u,  y = x₂

The control law u = x₁² + v/(a cos x₂) yields

  ż₁ = z₂,  ż₂ = v,  y = sin⁻¹(z₂/a)

which is nonlinear in the output. If we want a linear input-output
relationship we could instead use

  u = x₁² + v

to obtain

  ẋ₁ = a sin x₂,  ẋ₂ = v,  y = x₂

which is linear from v to y (but what about the unobservable state x₁?)

Input-Output Linearization

Use state feedback u(x) to make the control-affine system

  ẋ = f(x) + g(x)u,  y = h(x)

linear from the input v to the output y.

The general idea: differentiate the output y = h(x) p times, until the
control u appears explicitly in y⁽ᵖ⁾, and then determine u so that
y⁽ᵖ⁾ = v, i.e., G(s) = 1/sᵖ.

Example: controlled van der Pol equation

  ẋ₁ = x₂
  ẋ₂ = −x₁ + ǫ(1 − x₁²)x₂ + u
  y = x₁

Differentiate the output:

  ẏ = ẋ₁ = x₂
  ÿ = ẋ₂ = −x₁ + ǫ(1 − x₁²)x₂ + u

The state feedback controller

  u = x₁ − ǫ(1 − x₁²)x₂ + v

yields ÿ = v.

Lie Derivatives

Consider the nonlinear SISO system

  ẋ = f(x) + g(x)u,  x ∈ Rⁿ, u ∈ R
  y = h(x),  y ∈ R

The derivative of the output is

  ẏ = (dh/dx)ẋ = (dh/dx)(f(x) + g(x)u) = L_f h(x) + L_g h(x)u

where L_f h(x) and L_g h(x) are the Lie derivatives (L_f h is the
derivative of h along the vector field of ẋ = f(x)).

Repeated derivatives:

  L_f^k h(x) = (d(L_f^{k−1} h)/dx) f(x),  L_g L_f h(x) = (d(L_f h)/dx) g(x)
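The claim ÿ = v for the van der Pol example can be checked by simulation. A forward-Euler sketch (the step size, ǫ = 1, and the constant input v = 1 are my choices): starting from rest, the output should grow like v·t²/2, as a double integrator.

```python
def vdp_io_linearized(v=1.0, eps=1.0, dt=1e-4, t_end=2.0):
    """Controlled van der Pol with the input-output linearizing feedback
    u = x1 - eps*(1 - x1^2)*x2 + v, which gives ydotdot = v for y = x1."""
    x1, x2 = 0.0, 0.0
    for _ in range(int(t_end / dt)):
        u = x1 - eps * (1.0 - x1**2) * x2 + v
        dx1 = x2
        dx2 = -x1 + eps * (1.0 - x1**2) * x2 + u   # collapses to dx2 = v
        x1 += dt * dx1
        x2 += dt * dx2
    return x1  # y(t_end); the double integrator predicts v*t^2/2

print(vdp_io_linearized())  # approximately 1.0 * 2.0^2 / 2 = 2.0
```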
Lie Derivatives and Relative Degree

The relative degree p of a system is defined as the number of
integrators between the input and the output, i.e., the number of times
y must be differentiated for the input u to appear.

A linear system

  Y(s)/U(s) = (b₀sᵐ + … + b_m) / (sⁿ + a₁sⁿ⁻¹ + … + a_n)

has relative degree p = n − m.

A nonlinear system has relative degree p if

  L_g L_f^{i−1} h(x) = 0, i = 1, …, p−1;  L_g L_f^{p−1} h(x) ≠ 0, ∀x ∈ D

Example: The controlled van der Pol equation

  ẋ₁ = x₂
  ẋ₂ = −x₁ + ǫ(1 − x₁²)x₂ + u
  y = x₁

Differentiating the output:

  ẏ = ẋ₁ = x₂
  ÿ = ẋ₂ = −x₁ + ǫ(1 − x₁²)x₂ + u

Thus, the system has relative degree p = 2.

The Input-Output Linearizing Control

Consider an nth order SISO system with relative degree p.
Differentiating the output y = h(x):

  ẏ = (dh/dx)ẋ = L_f h(x) + L_g h(x)u,  where L_g h(x) = 0 since p > 1
  ⋮
  y⁽ᵖ⁾ = L_f^p h(x) + L_g L_f^{p−1} h(x) u

and hence the state-feedback controller

  u = (1 / L_g L_f^{p−1} h(x)) (−L_f^p h(x) + v)

results in the linear input-output system y⁽ᵖ⁾ = v.
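The Lie-derivative test for relative degree can be automated symbolically. A sketch using sympy (the helper `lie` and the symbol names are mine, not from the slides) for the van der Pol example with y = x₁: L_g h = 0 while L_g L_f h ≠ 0, so p = 2.

```python
import sympy as sp

x1, x2, eps = sp.symbols('x1 x2 eps')
x = sp.Matrix([x1, x2])
f = sp.Matrix([x2, -x1 + eps * (1 - x1**2) * x2])   # drift vector field
g = sp.Matrix([0, 1])                               # input vector field
h = x1                                              # output y = h(x)

def lie(scalar, vec):
    """Lie derivative of a scalar field along a vector field."""
    return (sp.Matrix([scalar]).jacobian(x) * vec)[0]

Lgh = sp.simplify(lie(h, g))            # 0: u absent in ydot
LgLfh = sp.simplify(lie(lie(h, f), g))  # nonzero: u appears in ydotdot
print(Lgh, LgLfh)
```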
Zero Dynamics

Note that the order of the linearized input-output system is p, the
relative degree of the system. Thus, if p < n, then n − p states are
unobservable in y.

The dynamics of the n − p states not observable in the linearized
dynamics of y are called the zero dynamics. They correspond to the
dynamics of the system when y is forced to be zero for all times.

A system with unstable zero dynamics is called non-minimum phase (and
should not be input-output linearized!).

Van der Pol Again

  ẋ₁ = x₂
  ẋ₂ = −x₁ + ǫ(1 − x₁²)x₂ + u

With y = x₁, the relative degree is p = n = 2 and there are no zero
dynamics, so we can transform the system into ÿ = v. Try it yourself!

With y = x₂, the relative degree is p = 1 < n and the zero dynamics are
given by ẋ₁ = 0, which is not asymptotically stable (but bounded).

Lyapunov-Based Control Design Methods

  ẋ = f(x,u)

- Find a stabilizing state feedback u = u(x)
- Verify stability through a control Lyapunov function

The methods depend on the structure of f. Here we limit the discussion
to back-stepping control design, which requires f to have a certain
structure, discussed later.

A Simple Introductory Example

Consider

  ẋ = cos x − x³ + u

and apply the linearizing control

  u = −cos x + x³ − kx

Choose the Lyapunov candidate V(x) = x²/2:

  V(x) > 0,  V̇ = −kx² < 0

Thus, the system is globally asymptotically stable. But the term x³ in
the control law may require large control moves!
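The bounded-but-not-asymptotically-stable zero dynamics for y = x₂ can be seen in simulation. In this sketch (forward Euler; choosing v = −y to drive the output to zero is my addition) the output x₂ decays, while the internal state x₁ settles at x₁(0) + x₂(0) rather than at the origin:

```python
def vdp_y_x2(x1_0=1.0, x2_0=2.0, eps=1.0, dt=1e-4, t_end=10.0):
    """Van der Pol I/O-linearized around y = x2 (relative degree 1),
    with outer loop v = -y so that ydot = -y."""
    x1, x2 = x1_0, x2_0
    for _ in range(int(t_end / dt)):
        v = -x2
        u = x1 - eps * (1.0 - x1**2) * x2 + v   # cancels all of x2dot
        dx1 = x2
        dx2 = -x1 + eps * (1.0 - x1**2) * x2 + u   # = v = -x2
        x1 += dt * dx1
        x2 += dt * dx2
    return x1, x2

x1f, x2f = vdp_y_x2()
print(x1f, x2f)  # y -> 0, but x1 settles near x1_0 + x2_0 = 3, not at 0
```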
The Same Example

Now try the control law

  u = −cos x − kx

for ẋ = cos x − x³ + u. Choose the same Lyapunov candidate V(x) = x²/2:

  V(x) > 0,  V̇ = −x⁴ − kx² < 0

Thus, this controller also gives global asymptotic stability (and a more
negative V̇).

Simulating the Two Controllers

[Figure: state trajectories and control inputs for the linearizing and
non-linearizing controllers, simulated with x(0) = 10; the state axis
runs 0–10 and the input axis −200–1000 over 0–5 s.]

The linearizing control is slower and uses excessive input.

Back-Stepping Control Design

We want to design a state feedback u = u(x) that stabilizes the system

  ẋ₁ = f(x₁) + g(x₁)x₂
  ẋ₂ = u                                                            (1)

at x = 0, where f(0) = 0.

Idea: view the system as a cascade connection. Design a controller first
for the inner loop and then for the outer loop.

Suppose the partial system ẋ₁ = f(x₁) + g(x₁)v can be stabilized by
v = φ(x₁), and that there exists a Lyapunov function V₁ = V₁(x₁) such
that

  V̇₁(x₁) = (dV₁/dx₁)(f(x₁) + g(x₁)φ(x₁)) ≤ −W(x₁)

for some positive definite function W. This is a critical assumption in
back-stepping control!
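The comparison in the figure can be reproduced numerically. A forward-Euler sketch (k = 1 and the step size are my choices) measuring the peak input magnitude of each controller starting from x(0) = 10:

```python
import math

def run(cancel_x3, k=1.0, x0=10.0, dt=1e-4, t_end=5.0):
    """Simulate xdot = cos(x) - x^3 + u with u = -cos(x) - k*x, optionally
    also cancelling the -x^3 term (the fully linearizing controller)."""
    x, u_max = x0, 0.0
    for _ in range(int(t_end / dt)):
        u = -math.cos(x) - k * x + (x**3 if cancel_x3 else 0.0)
        u_max = max(u_max, abs(u))
        x += dt * (math.cos(x) - x**3 + u)
    return x, u_max

x_lin, u_lin = run(True)    # cancels x^3: closed loop xdot = -k*x
x_non, u_non = run(False)   # keeps -x^3: closed loop xdot = -x^3 - k*x
print(u_lin, u_non)
```

Both runs converge, but the x³-cancelling controller demands an input roughly two orders of magnitude larger, matching the plots.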
The Trick

Equation (1) can be rewritten as

  ẋ₁ = f(x₁) + g(x₁)φ(x₁) + g(x₁)[x₂ − φ(x₁)]
  ẋ₂ = u

Introduce the new state ζ = x₂ − φ(x₁) and the control v = u − φ̇:

  ẋ₁ = f(x₁) + g(x₁)φ(x₁) + g(x₁)ζ
  ζ̇ = v

where

  φ̇(x₁) = (dφ/dx₁) ẋ₁ = (dφ/dx₁)(f(x₁) + g(x₁)x₂)

Consider V₂(x₁,x₂) = V₁(x₁) + ζ²/2. Then

  V̇₂(x₁,x₂) = (dV₁/dx₁)(f(x₁) + g(x₁)φ(x₁)) + (dV₁/dx₁)g(x₁)ζ + ζv
            ≤ −W(x₁) + (dV₁/dx₁)g(x₁)ζ + ζv

Choosing

  v = −(dV₁/dx₁)g(x₁) − kζ,  k > 0

gives

  V̇₂(x₁,x₂) ≤ −W(x₁) − kζ²

Hence, x = 0 is asymptotically stable for (1) with the control law
u = φ̇ + v. If V₁ is radially unbounded, then stability is global.

Back-Stepping Lemma

Lemma: Let z = (x₁, …, x_{k−1})ᵀ and

  ż = f(z) + g(z)x_k
  ẋ_k = u

Assume φ(0) = 0, f(0) = 0, that ż = f(z) + g(z)φ(z) is stable, and that
V(z) is a Lyapunov function for it (with V̇ ≤ −W). Then

  u = (dφ/dz)(f(z) + g(z)x_k) − (dV/dz)g(z) − (x_k − φ(z))

stabilizes x = 0, with V(z) + (x_k − φ(z))²/2 being a Lyapunov function.
Strict Feedback Systems

The Back-Stepping Lemma can be applied to stabilize systems on strict
feedback form:

  ẋ₁ = f₁(x₁) + g₁(x₁)x₂
  ẋ₂ = f₂(x₁,x₂) + g₂(x₁,x₂)x₃
  ẋ₃ = f₃(x₁,x₂,x₃) + g₃(x₁,x₂,x₃)x₄
  ⋮
  ẋₙ = fₙ(x₁,…,xₙ) + gₙ(x₁,…,xₙ)u

where g_k ≠ 0. Note: ẋ₁, …, ẋ_k do not depend on x_{k+2}, …, xₙ.

2 minute exercise: Give an example of a linear system ẋ = Ax + Bu on
strict feedback form.

Back-Stepping

The Back-Stepping Lemma can be applied recursively to a system on strict
feedback form. Back-stepping generates stabilizing feedbacks
φ_k(x₁,…,x_k) (equal to u in the Back-Stepping Lemma) and Lyapunov
functions

  V_k(x₁,…,x_k) = V_{k−1}(x₁,…,x_{k−1}) + [x_k − φ_{k−1}]²/2

by stepping back from x₁ to u (see Khalil pp. 593–594 for details).
Back-stepping results in the final state feedback u = φₙ(x₁,…,xₙ).

Example

Design a back-stepping controller for

  ẋ₁ = x₁² + x₂,  ẋ₂ = x₃,  ẋ₃ = u

Step 0: Verify the strict feedback form.

Step 1: Consider first the subsystem

  ẋ₁ = x₁² + x₂,  ẋ₂ = u₁

where φ₁(x₁) = −x₁² − x₁ stabilizes the first equation
(ẋ₁ = x₁² + φ₁(x₁) = −x₁). With V₁(x₁) = x₁²/2, the Back-Stepping Lemma
gives

  u₁ = (−2x₁ − 1)(x₁² + x₂) − x₁ − (x₂ + x₁² + x₁) = φ₂(x₁,x₂)
  V₂ = x₁²/2 + (x₂ + x₁² + x₁)²/2
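Step 1 is a direct application of the Back-Stepping Lemma, and the algebra can be verified symbolically. A sympy sketch (symbol names are mine) confirming the lemma formula reproduces the φ₂ stated above:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
f, g = x1**2, 1                 # x1dot = f(x1) + g(x1)*x2
phi1 = -x1**2 - x1              # makes x1dot = x1^2 + phi1 = -x1
V1 = x1**2 / 2

# Back-Stepping Lemma: u = dphi/dx1*(f + g*x2) - dV1/dx1*g - (x2 - phi1)
phi2 = sp.expand(sp.diff(phi1, x1) * (f + g * x2)
                 - sp.diff(V1, x1) * g - (x2 - phi1))
expected = sp.expand((-2*x1 - 1)*(x1**2 + x2) - x1 - (x2 + x1**2 + x1))
print(sp.simplify(phi2 - expected))  # 0: matches the slides' phi2
```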
Step 2: Applying the Back-Stepping Lemma to

  ẋ₁ = x₁² + x₂
  ẋ₂ = x₃
  ẋ₃ = u

gives

  u = u₂ = (dφ₂/dz)(f(z) + g(z)x₃) − (dV₂/dz)g(z) − (x₃ − φ₂(z))
         = (∂φ₂/∂x₁)(x₁² + x₂) + (∂φ₂/∂x₂)x₃ − ∂V₂/∂x₂ − (x₃ − φ₂(x₁,x₂))

which globally stabilizes the system.

Next Lecture

- Gain scheduling
- Nonlinear controllability (how to park a car)
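The resulting controller u₂ can be checked in simulation. A forward-Euler sketch (the initial state, step size, and the hand-expanded partial derivatives of φ₂ are mine); all three states should be driven toward the origin:

```python
def backstep_u(x1, x2, x3):
    """Backstepping control for x1dot = x1^2 + x2, x2dot = x3, x3dot = u."""
    phi2 = (-2*x1 - 1)*(x1**2 + x2) - x1 - (x2 + x1**2 + x1)
    dp_dx1 = -6*x1**2 - 4*x1 - 2*x2 - 2    # d(phi2)/dx1, expanded by hand
    dp_dx2 = -2*x1 - 2                     # d(phi2)/dx2
    dV2_dx2 = x2 + x1**2 + x1              # d(V2)/dx2
    return dp_dx1*(x1**2 + x2) + dp_dx2*x3 - dV2_dx2 - (x3 - phi2)

def simulate_backstep(x=(0.5, 0.5, 0.5), dt=1e-4, t_end=15.0):
    x1, x2, x3 = x
    for _ in range(int(t_end / dt)):
        u = backstep_u(x1, x2, x3)
        x1, x2, x3 = (x1 + dt*(x1**2 + x2), x2 + dt*x3, x3 + dt*u)
    return x1, x2, x3

print(simulate_backstep())  # all three states near 0
```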