Output Feedback and State Feedback. EL2620 Nonlinear Control, Lecture 10.

Output Feedback and State Feedback
EL2620 Nonlinear Control, Lecture 10
- Exact feedback linearization
- Input-output linearization
- Lyapunov-based control design methods

System:
  ẋ = f(x,u),  y = h(x)

Output feedback: find u = k(y) such that the closed-loop system ẋ = f(x, k(h(x))) has nice properties.
State feedback: find u = l(x) such that ẋ = f(x, l(x)) has nice properties.
k and l may include dynamics.

Nonlinear Observers
What if x is not measurable? Simplest observer:
  dx̂/dt = f(x̂, u)
Feedback correction, as in the linear case (see the sketch below):
  dx̂/dt = f(x̂, u) + K(y − h(x̂))
Choices of K:
- Linearize f at x0 and find K for the linearization.
- Linearize f at x̂(t) and find K = K(x̂) for the linearization.
The second case is called the Extended Kalman Filter.

Nonlinear Controllers
- Nonlinear dynamical controller: ż = a(z, y), u = c(z)
- Linear dynamics, static nonlinearity: ż = Az + By, u = c(z)
- Linear controller: ż = Az + By, u = Cz
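Not part of the original slides: a minimal Python sketch of the observer with output correction, dx̂/dt = f(x̂,u) + K(y − h(x̂)). The pendulum model, the constant gain K (picked by hand for the linearization at the origin, i.e. the first option above), the initial conditions and the Euler discretization are all illustrative assumptions.

```python
# Observer sketch: xhat_dot = f(xhat, u) + K * (y - h(xhat))
# for a hypothetical pendulum  x1_dot = x2,  x2_dot = -sin(x1) - 0.1*x2 + u,  y = x1.
# K is chosen ad hoc so that A - K*C is stable for the linearization at x = 0;
# recomputing K from the linearization at xhat(t) would give an EKF-style observer.
import numpy as np

def f(x, u):
    return np.array([x[1], -np.sin(x[0]) - 0.1 * x[1] + u])

def h(x):
    return x[0]

K = np.array([2.0, 2.0])        # observer gain (assumption, not from the lecture)

dt, T = 1e-3, 10.0
x = np.array([0.5, 0.0])        # true state, unknown to the observer
xhat = np.array([0.0, 0.0])     # observer state
for _ in range(int(T / dt)):
    u = 0.0                     # any known input works; zero for simplicity
    y = h(x)                    # measurement
    x = x + dt * f(x, u)                                  # plant, Euler step
    xhat = xhat + dt * (f(xhat, u) + K * (y - h(xhat)))   # observer, Euler step

print("final estimation error:", x - xhat)
```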

Some state feedback control approaches
- Exact feedback linearization
- Input-output linearization
- Lyapunov-based design (backstepping control)

Exact Feedback Linearization
Consider the nonlinear control-affine system ẋ = f(x) + g(x)u.
Idea: use a state-feedback controller u(x) to make the system linear.

Example 1: For
  ẋ = cos x − x³ + u
the state-feedback controller
  u(x) = −cos x + x³ − kx + v
yields the linear system ẋ = −kx + v (simulated in the sketch below).

Another Example
  ẋ1 = a sin x2
  ẋ2 = −x1² + u
How do we cancel the term sin x2? Perform a transformation of states into linearizable form,
  z1 = x1,  z2 = ẋ1 = a sin x2,
which yields
  ż1 = z2,  ż2 = a cos x2 · (−x1² + u),
and the linearizing control becomes
  u(x) = x1² + v/(a cos x2),  x2 ∈ (−π/2, π/2)

Diffeomorphisms
A nonlinear state transformation z = T(x) with
- T invertible for x in the domain of interest,
- T and T⁻¹ continuously differentiable,
is called a diffeomorphism.

Definition: A nonlinear system is feedback linearizable if there exists a diffeomorphism T whose domain contains the origin and which transforms the system into the form
  ẋ = Ax + Bγ(x)(u − α(x))
with (A, B) controllable and γ(x) nonsingular for all x in the domain of interest.
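A quick numerical check of Example 1 (not from the slides): with the cancelling feedback, the closed loop should match ẋ = −kx + v exactly. The values of k, v and x(0), and the use of scipy, are assumptions made for illustration.

```python
# Example 1 sketch: u = -cos(x) + x**3 - k*x + v should make
# x_dot = cos(x) - x**3 + u behave exactly like the linear system x_dot = -k*x + v.
import numpy as np
from scipy.integrate import solve_ivp

k, v = 2.0, 0.5                       # arbitrary gain and external input

def closed_loop(t, x):
    u = -np.cos(x[0]) + x[0] ** 3 - k * x[0] + v
    return [np.cos(x[0]) - x[0] ** 3 + u]

def linear_model(t, x):
    return [-k * x[0] + v]

t_eval = np.linspace(0.0, 5.0, 200)
sol_nl = solve_ivp(closed_loop, (0, 5), [3.0], t_eval=t_eval, rtol=1e-9)
sol_lin = solve_ivp(linear_model, (0, 5), [3.0], t_eval=t_eval, rtol=1e-9)
print("max deviation from the linear model:", np.max(np.abs(sol_nl.y - sol_lin.y)))
```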

Exact vs. Input-Output Linearization
The example again, but now with an output:
  ẋ1 = a sin x2,  ẋ2 = −x1² + u,  y = x2
The control law u = x1² + v/(a cos x2) yields
  ż1 = z2,  ż2 = v,  y = sin⁻¹(z2/a),
which is nonlinear in the output. If we want a linear input-output relationship we could instead use
  u = x1² + v
to obtain
  ẋ1 = a sin x2,  ẋ2 = v,  y = x2,
which is linear from v to y (but what about the unobservable state x1?).

Input-Output Linearization
Use state feedback u(x) to make the control-affine system
  ẋ = f(x) + g(x)u,  y = h(x)
linear from the input v to the output y. The general idea: differentiate the output y = h(x) p times until the control u appears explicitly in y^(p), and then determine u so that y^(p) = v, i.e., G(s) = 1/s^p.

Example: controlled van der Pol equation
  ẋ1 = x2
  ẋ2 = −x1 + ε(1 − x1²)x2 + u
  y = x1
Differentiate the output:
  ẏ = ẋ1 = x2
  ÿ = ẋ2 = −x1 + ε(1 − x1²)x2 + u
The state feedback controller
  u = x1 − ε(1 − x1²)x2 + v
gives ÿ = v.

Lie Derivatives
Consider the nonlinear SISO system
  ẋ = f(x) + g(x)u,  x ∈ R^n, u ∈ R
  y = h(x),  y ∈ R
The derivative of the output is
  ẏ = (dh/dx)ẋ = (dh/dx)(f(x) + g(x)u) = L_f h(x) + L_g h(x)u,
where L_f h(x) and L_g h(x) are the Lie derivatives (L_f h is the derivative of h along the vector field of ẋ = f(x)).
Repeated derivatives:
  L_f^k h(x) = (d(L_f^{k−1} h)/dx) f(x),  L_g L_f h(x) = (d(L_f h)/dx) g(x)
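The Lie derivatives above can be computed symbolically. A small sketch (not from the slides, sympy assumed) for the van der Pol example:

```python
# Lie derivatives for the controlled van der Pol system:
#   f = (x2, -x1 + eps*(1 - x1**2)*x2),  g = (0, 1),  h = x1.
import sympy as sp

x1, x2, eps = sp.symbols("x1 x2 epsilon")
x = sp.Matrix([x1, x2])
f = sp.Matrix([x2, -x1 + eps * (1 - x1**2) * x2])
g = sp.Matrix([0, 1])
h = x1

def lie(vec, fun):
    # L_vec fun = (d fun / dx) * vec
    return (sp.Matrix([fun]).jacobian(x) * vec)[0]

print("L_f h     =", sp.simplify(lie(f, h)))           # x2  (= y_dot)
print("L_g h     =", sp.simplify(lie(g, h)))           # 0   (u does not appear in y_dot)
print("L_f^2 h   =", sp.simplify(lie(f, lie(f, h))))   # -x1 + eps*(1 - x1**2)*x2
print("L_g L_f h =", sp.simplify(lie(g, lie(f, h))))   # 1   (u appears in y_ddot)
```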

Lie derivatives and relative degree
The relative degree p of a system is the number of integrators between the input and the output (the number of times y must be differentiated for the input u to appear).

A linear system
  Y(s)/U(s) = (b0 s^m + ... + bm) / (s^n + a1 s^{n−1} + ... + an)
has relative degree p = n − m.

A nonlinear system has relative degree p if
  L_g L_f^{i−1} h(x) = 0,  i = 1, ..., p−1;  L_g L_f^{p−1} h(x) ≠ 0,  for all x ∈ D.

Example
The controlled van der Pol equation
  ẋ1 = x2
  ẋ2 = −x1 + ε(1 − x1²)x2 + u
  y = x1
Differentiating the output:
  ẏ = ẋ1 = x2
  ÿ = ẋ2 = −x1 + ε(1 − x1²)x2 + u
Thus, the system has relative degree p = 2.

The input-output linearizing control
Consider an nth order SISO system with relative degree p. Differentiating the output y = h(x):
  ẏ = (dh/dx)ẋ = L_f h(x) + L_g h(x)u,  where L_g h(x) = 0
  ...
  y^(p) = L_f^p h(x) + L_g L_f^{p−1} h(x) u
Hence the state-feedback controller
  u = (1 / (L_g L_f^{p−1} h(x))) (−L_f^p h(x) + v)
results in the linear input-output system y^(p) = v.
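As a sanity check (not from the slides), the relative-degree test and the linearizing feedback can be evaluated symbolically for the van der Pol example; the expected result is p = 2 and u = x1 − ε(1 − x1²)x2 + v. The loop is assumed to terminate, i.e. the system is assumed to have a well-defined relative degree.

```python
# Relative degree and input-output linearizing control, evaluated with sympy.
import sympy as sp

x1, x2, eps, v = sp.symbols("x1 x2 epsilon v")
x = sp.Matrix([x1, x2])
f = sp.Matrix([x2, -x1 + eps * (1 - x1**2) * x2])
g = sp.Matrix([0, 1])
h = x1

def lie(vec, fun):
    return (sp.Matrix([fun]).jacobian(x) * vec)[0]

# find the relative degree p: smallest p such that L_g L_f^{p-1} h != 0
p = 1
Lf_prev = h                     # holds L_f^{p-1} h
while sp.simplify(lie(g, Lf_prev)) == 0:
    Lf_prev = lie(f, Lf_prev)
    p += 1

# u = (-L_f^p h + v) / (L_g L_f^{p-1} h)
u = (-lie(f, Lf_prev) + v) / lie(g, Lf_prev)
print("relative degree p =", p)
print("linearizing feedback u =", sp.simplify(u))
```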

Zero Dynamics
Note that the order of the linearized system is p, the relative degree of the system. Thus, if p < n, then n − p states are unobservable in y.
- The dynamics of the n − p states not observable in the linearized dynamics of y are called the zero dynamics.
- They correspond to the dynamics of the system when y is forced to be zero for all times.
- A system with unstable zero dynamics is called non-minimum phase (and should not be input-output linearized!).

van der Pol again
  ẋ1 = x2
  ẋ2 = −x1 + ε(1 − x1²)x2 + u
With y = x1 the relative degree is p = n = 2 and there are no zero dynamics, so we can transform the system into ÿ = v. Try it yourself!
With y = x2 the relative degree is p = 1 < n and the zero dynamics are given by ẋ1 = 0, which is not asymptotically stable (but bounded).

Lyapunov-Based Control Design Methods
  ẋ = f(x,u)
- Find a stabilizing state feedback u = u(x).
- Verify stability through a control Lyapunov function.
- Methods depend on the structure of f. Here we limit the discussion to back-stepping control design, which requires f of a certain structure, discussed later.

A simple introductory example
Consider
  ẋ = cos x − x³ + u
Apply the linearizing control
  u = −cos x + x³ − kx
Choose the Lyapunov candidate V(x) = x²/2:
  V(x) > 0,  V̇ = −kx² < 0
Thus, the system is globally asymptotically stable. But the term x³ in the control law may require large control moves!
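A one-line symbolic check of the Lyapunov claim above (not from the slides, sympy assumed): with the linearizing control and V = x²/2, V̇ should reduce to −kx².

```python
# Check that Vdot = -k*x**2 for V = x**2/2 under u = -cos(x) + x**3 - k*x.
import sympy as sp

x, k = sp.symbols("x k", real=True)
u = -sp.cos(x) + x**3 - k * x
xdot = sp.cos(x) - x**3 + u          # closed-loop dynamics
V = x**2 / 2
Vdot = sp.simplify(sp.diff(V, x) * xdot)
print("Vdot =", Vdot)                # -k*x**2, negative definite for k > 0
```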

The same example
  ẋ = cos x − x³ + u
Now try the control law
  u = −cos x − kx
Choose the same Lyapunov candidate V(x) = x²/2:
  V(x) > 0,  V̇ = −x⁴ − kx² < 0
Thus, this controller is also globally asymptotically stabilizing (and gives a more negative V̇).

Simulating the two controllers
Simulation with x(0) = 10.
[Figure: state trajectory x(t) and control input u(t) over 0 to 5 s for the linearizing and non-linearizing controllers.]
The linearizing control is slower and uses excessive input.
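The comparison in the figure can be reproduced qualitatively with the sketch below (not from the slides). The gain k = 1 and the use of scipy are assumptions; the cancelling controller injects the huge x³ term at t = 0, while u = −cos x − kx lets the −x³ term do the stabilizing work.

```python
# Compare the two controllers from x(0) = 10 on x_dot = cos(x) - x**3 + u.
import numpy as np
from scipy.integrate import solve_ivp

k = 1.0
u_lin = lambda x: -np.cos(x) + x**3 - k * x     # exact linearization
u_simple = lambda x: -np.cos(x) - k * x         # keeps the helpful -x**3 term

def plant(u_fun):
    return lambda t, x: [np.cos(x[0]) - x[0]**3 + u_fun(x[0])]

t = np.linspace(0.0, 5.0, 500)
for name, u_fun in [("linearizing", u_lin), ("non-linearizing", u_simple)]:
    sol = solve_ivp(plant(u_fun), (0, 5), [10.0], t_eval=t, rtol=1e-8)
    peak_u = max(abs(u_fun(xi)) for xi in sol.y[0])
    print(f"{name:16s} peak |u| = {peak_u:8.1f},  x(5) = {sol.y[0, -1]:.4f}")
```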

Equation (1) can be rewritten as The Trick ẋ 1 = f(x 1 ) + g(x 1 )φ(x 1 ) + g(x 1 )[x 2 φ(x 1 )] ẋ 2 = u u x 2 x 1 g(x 1 ) φ(x 1 ) f + gφ Introduce new state ζ = x 2 φ(x 1 ) and control v = u φ: where ẋ 1 = f(x 1 ) + g(x 1 )φ(x 1 ) + g(x 1 )ζ ζ = v φ(x 1 ) = dφ ẋ 1 = dφ ) (f(x 1 ) + g(x 1 )x 2 u ζ x 1 g(x 1 ) φ(x 1 ) f + gφ Lecture 10 25 Lecture 10 26 Consider V 2 (x 1,x 2 ) = V 1 (x 1 ) + ζ 2 /2. Then, V 2 (x 1,x 2 ) = dv ( ) 1 f(x 1 ) + g(x 1 )φ(x 1 ) + dv 1 g(x 1 )ζ + ζv Choosing gives W(x 1 ) + dv 1 g(x 1 )ζ + ζv v = dv 1 g(x 1 ) kζ, k > 0 V 2 (x 1,x 2 ) W(x 1 ) kζ 2 Hence, x = 0 is asymptotically stable for (1) with control law u(x) = φ(x) + v(x). If V 1 radially unbounded, then global stability. Back-Stepping Lemma Lemma: Let z = (x 1,...,x k 1 ) T and Assume φ(0) = 0, f(0) = 0, ż = f(z) + g(z)x k ẋ k = u ż = f(z) + g(z)φ(z) stable, and V (z) a Lyapunov fcn (with V W ). Then, u = dφ ) (f(z) + g(z)x k dv dz dz g(z) (x k φ(z)) stabilizes x = 0 with V (z) + (x k φ(z)) 2 /2 being a Lyapunov fcn. Lecture 10 27 Lecture 10 28

Strict Feedback Systems Back-stepping Lemma can be applied to stabilize systems on strict feedback form: ẋ 1 = f 1 (x 1 ) + g 1 (x 1 )x 2 ẋ 2 = f 2 (x 1,x 2 ) + g 2 (x 1,x 2 )x 3 ẋ 3 = f 3 (x 1,x 2,x 3 ) + g 3 (x 1,x 2,x 3 )x 4. ẋ n = f n (x 1,...,x n ) + g n (x 1,...,x n )u 2 minute exercise: Give an example of a linear system ẋ = Ax + Bu on strict feedback form. where g k 0 Note: x 1,...,x k do not depend on x k+2,...,x n. Lecture 10 29 Lecture 10 30 Back-Stepping Back-Stepping Lemma can be applied recursively to a system on strict feedback form. Back-stepping generates stabilizing feedbacks φ k (x 1,...,x k ) (equal to u in Back-Stepping Lemma) and Lyapunov functions V k (x 1,...,x k ) = V k 1 (x 1,...,x k 1 ) + [x k φ k 1 ] 2 /2 by stepping back from x 1 to u (see Khalil pp. 593 594 for details). Back-stepping results in the final state feedback u = φ n (x 1,...,x n ) Design back-stepping controller for Example ẋ 1 = x 2 1 + x 2, ẋ 2 = x 3, ẋ 3 = u Step 0 Verify strict feedback form Step 1 Consider first subsystem ẋ 1 = x 2 1 + φ 1 (x 1 ), ẋ 2 = u 1 where φ 1 (x 1 ) = x 2 1 x 1 stabilizes the first equation. With V 1 (x 1 ) = x 2 1/2, Back-Stepping Lemma gives u 1 = ( 2x 1 1)(x 2 1 + x 2 ) x 1 (x 2 + x 2 1 + x 1 ) = φ 2 (x 1,x 2 ) V 2 = x 2 1/2 + (x 2 + x 2 1 + x 1 ) 2 /2 Lecture 10 31 Lecture 10 32

Back-Stepping
The Back-Stepping Lemma can be applied recursively to a system on strict feedback form. Back-stepping generates stabilizing feedbacks φk(x1, ..., xk) (equal to u in the Back-Stepping Lemma) and Lyapunov functions
  Vk(x1, ..., xk) = V_{k−1}(x1, ..., x_{k−1}) + [xk − φ_{k−1}]²/2
by stepping back from x1 to u (see Khalil pp. 593-594 for details). Back-stepping results in the final state feedback u = φn(x1, ..., xn).

Example
Design a back-stepping controller for
  ẋ1 = x1² + x2,  ẋ2 = x3,  ẋ3 = u

Step 0: Verify strict feedback form.

Step 1: Consider the first subsystem
  ẋ1 = x1² + φ1(x1),  ẋ2 = u1,
where φ1(x1) = −x1² − x1 stabilizes the first equation. With V1(x1) = x1²/2, the Back-Stepping Lemma gives
  u1 = (−2x1 − 1)(x1² + x2) − x1 − (x2 + x1² + x1) = φ2(x1, x2)
  V2 = x1²/2 + (x2 + x1² + x1)²/2

Step 2: Applying the Back-Stepping Lemma to
  ẋ1 = x1² + x2
  ẋ2 = x3
  ẋ3 = u
gives
  u = u2 = (dφ2/dz)(f(z) + g(z)x3) − (dV2/dz)g(z) − (x3 − φ2(z))
         = (∂φ2/∂x1)(x1² + x2) + (∂φ2/∂x2)x3 − ∂V2/∂x2 − (x3 − φ2(x1, x2)),
which globally stabilizes the system (these expressions are reproduced symbolically in the sketch after the outline below).

Next Lecture
- Gain scheduling
- Nonlinear controllability (how to park a car)
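Sketch (not from the slides, sympy assumed) carrying out the two recursive steps of the example symbolically; step 1 reproduces φ2 and V2 from the slide, step 2 gives the final feedback u.

```python
# Recursive back-stepping for x1_dot = x1**2 + x2, x2_dot = x3, x3_dot = u.
import sympy as sp

x1, x2, x3 = sp.symbols("x1 x2 x3", real=True)

# Step 1: subsystem x1_dot = x1**2 + x2 with "input" x2.
phi1 = -x1**2 - x1
V1 = x1**2 / 2
phi2 = sp.diff(phi1, x1) * (x1**2 + x2) - sp.diff(V1, x1) * 1 - (x2 - phi1)
V2 = V1 + (x2 - phi1)**2 / 2

# Step 2: z = (x1, x2) with "input" x3; f(z) + g(z)*x3 = (x1**2 + x2, x3).
u = (sp.diff(phi2, x1) * (x1**2 + x2) + sp.diff(phi2, x2) * x3
     - sp.diff(V2, x2) * 1 - (x3 - phi2))

print("phi2 =", sp.expand(phi2))
print("u    =", sp.expand(u))
```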