EL2620 Nonlinear Control. Lecture notes. Karl Henrik Johansson, Bo Wahlberg and Elling W. Jacobsen. This revision December 2012


EL2620 Nonlinear Control. Lecture notes. Karl Henrik Johansson, Bo Wahlberg and Elling W. Jacobsen. This revision December 2012. Automatic Control, KTH, Stockholm, Sweden


Preface

Many people have contributed to these lecture notes on nonlinear control. They were originally developed by Bo Bernhardsson and Karl Henrik Johansson, and later revised by Bo Wahlberg and myself. Contributions and comments by Mikael Johansson, Ola Markusson, Ragnar Wallin, Henning Schmidt, Krister Jacobsson, Björn Johansson and Torbjörn Nordling are gratefully acknowledged.

Elling W. Jacobsen, Stockholm, December 2012


EL2620 Nonlinear Control, Automatic Control Lab, KTH

Disposition: 7.5 credits, lp 2; 28 h lectures, 28 h exercises, 3 homeworks

Instructors:
Elling W. Jacobsen, lectures and course responsible, jacobsen@kth.se
Farhad Farokhi, Martin Andreasson, teaching assistants, farakhi@kth.se, mandreas@kth.se
Hanna Holmqvist, course administration, hanna.holmqvist@ee.kth.se
STEX (entrance floor, Osquldasv.), course material, stex@s3.kth.se

Lecture 1: Practical information; course outline; linear vs nonlinear systems; nonlinear differential equations

Course Goal: To provide participants with a solid theoretical foundation of nonlinear control systems combined with a good engineering understanding. After the course you should be able to: understand common nonlinear control phenomena; apply the most powerful nonlinear analysis methods; use some practical nonlinear control design methods.

Today's Goal: You should be able to: define a nonlinear dynamic system; describe distinctive phenomena in nonlinear dynamic systems; transform differential equations to first-order form; derive equilibrium points.

Course Information
All info and handouts are available at the course homepage (KTH Social). Homeworks are compulsory and have to be handed in on time (we are strict on this!). Written 5 h exam on January 8, 2014.

Course Outline
Introduction: nonlinear models and phenomena, computer simulation (L1-L2)
Feedback analysis: linearization, stability theory, describing functions (L3-L6)
Control design: compensation, high-gain design, Lyapunov methods (L7-L10)
Alternatives: gain scheduling, optimal control, neural networks, fuzzy control (L11-L13)
Summary (L14)

Material
Textbook: Khalil, Nonlinear Systems, Prentice Hall, 3rd ed., 2002. Optional but highly recommended. Only references to Khalil will be given.
Lecture notes: Copies of transparencies (from previous year)
Exercises: Classroom and home exercises. Two course compendia sold by STEX (also available at the course homepage)
Homeworks: 3 computer exercises to hand in
Software: Matlab / Simulink
Alternative textbooks (decreasing mathematical rigour): Sastry, Nonlinear Systems: Analysis, Stability and Control; Vidyasagar, Nonlinear Systems Analysis; Slotine & Li, Applied Nonlinear Control; Glad & Ljung, Reglerteori: flervariabla och olinjära metoder.

Linear Systems
Definition: Let M be a signal space. The system S : M → M is linear if for all u, v ∈ M and α ∈ R
S(αu) = αS(u) (scaling)
S(u + v) = S(u) + S(v) (superposition)
Example: Linear time-invariant systems
ẋ(t) = Ax(t) + Bu(t), y(t) = Cx(t), x(0) = 0
y(t) = ∫₀ᵗ g(τ)u(t − τ)dτ ⟺ Y(s) = G(s)U(s)
Notice the importance of having zero initial conditions.

Linear Systems Have Nice Properties
Local stability = global stability: stable if and only if all eigenvalues of A (or poles of G(s)) are in the left half-plane
Superposition: enough to know a step (or impulse) response
Frequency analysis possible: sinusoidal inputs give sinusoidal outputs, Y(iω) = G(iω)U(iω)

Linear Models may be too Crude Approximations
Example: Positioning of a ball on a beam, with beam angle φ and ball position x.
Nonlinear model: m ẍ(t) = mg sin φ(t)
Linear model: ẍ(t) = g φ(t)

Can the ball move 0.1 meter in 0.1 seconds from steady state? The linear model (step response with φ(t) ≡ φ₀) gives x(t) ≈ g φ₀ t²/2 ≈ 0.05 φ₀ at t = 0.1, so that φ₀ ≈ 0.1/0.05 = 2 rad. An unrealistic answer, clearly outside the linear region! The linear model is valid only if sin φ ≈ φ. One must consider the nonlinear model, and possibly also include other nonlinearities such as centripetal force, saturation, friction, etc.

Linear Models are not Rich Enough
Linear models cannot describe many phenomena seen in nonlinear systems.
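The back-of-the-envelope calculation above is easy to check numerically. A minimal sketch (the value g = 9.81 m/s² and the error measure are my choices, not from the slides):

```python
import math

def x_linear(phi, t, g=9.81):
    """Ball position from the linearized model x'' = g*phi (constant angle)."""
    return 0.5 * g * phi * t**2

def x_nonlinear(phi, t, g=9.81):
    """Ball position from m*x'' = m*g*sin(phi) (constant angle)."""
    return 0.5 * g * math.sin(phi) * t**2

# Angle needed to move 0.1 m in 0.1 s according to the linear model:
phi = 0.1 / x_linear(1.0, 0.1)          # about 2 rad, far outside sin(phi) = phi
err = abs(x_nonlinear(phi, 0.1) - 0.1) / 0.1
print(phi, err)                          # nonlinear model falls well short of 0.1 m
```

Asking the linearized model for a 0.1 m move in 0.1 s demands an angle of about 2 rad, and at that angle the true (nonlinear) model misses the target by more than 50 percent.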

Stability Can Depend on the Reference Signal
Example: Control system with valve characteristic f(u) = u². Block diagram: reference r, summation, controller 1/s (motor), valve u², process 1/(s + 1)², unit negative feedback to the output y. Simulink block diagram: step r, 1/s, u², 1/((s+1)(s+1)), gain −1.

Step responses for different reference amplitudes r show qualitatively different behavior: for small r the output settles, while for larger r the response deteriorates and eventually diverges. Stability depends on the amplitude of the reference signal! (The linearized gain of the valve increases with increasing amplitude.)

Multiple Equilibria
Example: Chemical reactor with conversion x1, temperature x2 and coolant temperature xc:
ẋ1 = −x1 exp(x2) + f(1 − x1)
ẋ2 = x1 exp(x2) − ǫ f (x2 − xc)
with f = 0.7, ǫ = 0.4. The existence of multiple stable equilibria for the same input gives a hysteresis effect (the steady-state temperature x2, plotted against the coolant temperature xc, traces a hysteresis loop).

Stable Periodic Solutions
Example: Position control of a motor with back-lash. Simulink diagram: constant reference, sum, P-controller, motor 1/(5s² + s), back-lash, feedback −1.
Motor: G(s) = 1/(s(1 + 5s)). Controller: K = 5.

The back-lash induces an oscillation. The period and amplitude are independent of the initial conditions: simulations of the output y from three different initial conditions converge to the same oscillation. How can such oscillations be predicted and avoided?

Harmonic Distortion
Example: Sinusoidal response of a saturation. The input a sin t gives the output
y(t) = Σ_{k=1}^∞ A_k sin(kt)
The saturation generates harmonics of the input frequency, visible as multiple peaks in the output spectrum (shown for a small and a large input amplitude a).

Automatic Tuning of PID Controllers
A relay in feedback with the process induces a desired oscillation, whose frequency and amplitude are used to choose the PID parameters.

Example: Electrical power distribution. Nonlinearities such as rectifiers, switched electronics, and transformers give rise to harmonic distortion:
Total Harmonic Distortion = Σ_{k=2}^∞ (energy in tone k) / (energy in tone 1)

Example: Electrical amplifiers. Efficient amplifiers work in the nonlinear region. This introduces spectrum leakage, which is a problem in cellular systems. There is a trade-off between efficiency and linearity.

Subharmonics
Example: Duffing's equation
ÿ + ẏ + y − y³ = a sin(ωt)
The forced response can contain frequency components below the driving frequency ω.

Nonlinear Differential Equations
Definition: A solution to
ẋ(t) = f(x(t)), x(0) = x0 (1)
over an interval [0, T] is a C¹ function x : [0, T] → Rⁿ such that (1) is fulfilled.
When does there exist a solution? When is the solution unique?
Example: ẋ = Ax, x(0) = x0, gives x(t) = exp(At)x0.

Existence Problems
Example: The differential equation ẋ = x², x(0) = x0 > 0, has the solution
x(t) = x0/(1 − x0 t), t < 1/x0
The solution is not defined for tf = 1/x0: the solution interval depends on the initial condition!
Recall the trick: ẋ = x² gives dx/x² = dt. Integrating,
1/x(0) − 1/x(t) = −t, so x(t) = x0/(1 − x0 t)

Finite Escape Time
Simulation of dx/dt = x² for various initial conditions x0 shows finite escape time: the solution blows up at t = 1/x0.
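The finite escape time is easy to reproduce numerically. A sketch (the initial condition and the stopping point just before the escape time are arbitrary choices):

```python
# Compare the analytic solution x(t) = x0/(1 - x0*t) of dx/dt = x^2
# with a numerical integration stopped just before the escape time t_f = 1/x0.
import numpy as np
from scipy.integrate import solve_ivp

x0 = 2.0
t_f = 1.0 / x0                       # escape time depends on the initial condition

sol = solve_ivp(lambda t, x: x**2, [0.0, 0.99 * t_f], [x0],
                rtol=1e-10, atol=1e-12)

analytic = x0 / (1.0 - x0 * sol.t)   # valid only for t < t_f
print(sol.y[0, -1])                  # already 100x the initial value just before t_f
```

At t = 0.99/x0 the state has grown from 2 to 200; a fraction of a percent later it is infinite, which is why a fixed simulation horizon independent of x0 is dangerous here.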

Uniqueness Problems
Example: ẋ = √x, x(0) = 0, has many solutions:
x(t) = (t − C)²/4 for t > C, x(t) = 0 for t ≤ C
for any C ≥ 0.

Physical Interpretation
Consider the reverse example, i.e., the water-tank lab process with
ẋ = −√x, x(T) = 0
where x is the water level. It is then impossible to know at what time t < T the level was x(t) = x0 > 0. Hint: reverse time, s = T − t, ds = −dt, and thus dx/ds = −dx/dt.

Lipschitz Continuity
Definition: f : Rⁿ → Rⁿ is Lipschitz continuous at x0 if there exist L, r > 0 such that for all x, y ∈ Br(x0) = {z ∈ Rⁿ : ||z − x0|| < r},
||f(x) − f(y)|| ≤ L ||x − y||
Here the Euclidean norm is ||x||₂ = √(x1² + ... + xn²), and L bounds the slope of f.

Local Existence and Uniqueness
Theorem: If f is Lipschitz continuous, then there exists δ > 0 such that ẋ(t) = f(x(t)), x(0) = x0 has a unique solution in Br(x0) over [0, δ].
Proof: See Khalil, Appendix C.1. Based on the contraction mapping theorem.
Remarks:
δ = δ(r, L)
f being continuous is not sufficient (cf. the tank example)
f being C¹ implies Lipschitz continuity (with L = max over Br(x0) of ||f′(x)||)

State-Space Models
State x, input u, output y.
General: f(x, u, y, ẋ, u̇, ẏ, ...) = 0
Explicit: ẋ = f(x, u), y = h(x)
Affine in u: ẋ = f(x) + g(x)u, y = h(x)
Linear: ẋ = Ax + Bu, y = Cx

Transformation to First-Order System
Given a differential equation in y with highest derivative dⁿy/dtⁿ, express the equation in the state x = (y, dy/dt, ..., d^(n−1)y/dt^(n−1))ᵀ.
Example: Pendulum, MR²θ̈ + kθ̇ + MgR sin θ = 0. With x = (θ, θ̇)ᵀ:
ẋ1 = x2
ẋ2 = −(k/(MR²)) x2 − (g/R) sin x1

Transformation to Autonomous System
A nonautonomous system ẋ = f(x, t) can always be transformed to an autonomous system by introducing x_{n+1} = t:
ẋ = f(x, x_{n+1})
ẋ_{n+1} = 1

Equilibria
Definition: A point (x0, u0, y0) is an equilibrium if a solution starting in (x0, u0, y0) stays there forever. This corresponds to setting all derivatives to zero:
General: f(x0, u0, y0, 0, 0, ...) = 0
Explicit: 0 = f(x0, u0), y0 = h(x0)
Affine in u: 0 = f(x0) + g(x0)u0, y0 = h(x0)
Linear: 0 = Ax0 + Bu0, y0 = Cx0
Often the equilibrium is defined only through the state x0.
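The pendulum rewrite above is exactly the form a numerical solver needs. A sketch (the parameter values M, R, k and the initial angle are made up for illustration):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Pendulum M*R^2*theta'' + k*theta' + M*g*R*sin(theta) = 0
# in first-order form with state x = (theta, theta_dot).
M, R, k, g = 1.0, 1.0, 0.5, 9.81

def f(t, x):
    theta, omega = x
    return [omega, -(k / (M * R**2)) * omega - (g / R) * np.sin(theta)]

sol = solve_ivp(f, [0.0, 20.0], [1.0, 0.0], rtol=1e-8, atol=1e-10)

def energy(x):
    """Kinetic plus potential energy of the pendulum."""
    theta, omega = x
    return 0.5 * M * R**2 * omega**2 + M * g * R * (1 - np.cos(theta))

# Damping (k > 0) dissipates energy, so the pendulum settles at theta = 0.
print(energy(sol.y[:, 0]), energy(sol.y[:, -1]))
```

With k > 0 the total energy decreases monotonically along the solution, a fact that returns as the Lyapunov argument in Lecture 4.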

Multiple Equilibria
Example: Pendulum, MR²θ̈ + kθ̇ + MgR sin θ = 0. Setting θ̈ = θ̇ = 0 gives sin θ0 = 0 and thus θ0 = kπ, k = 0, ±1, ±2, ...
Alternatively, in first-order form
ẋ1 = x2
ẋ2 = −(k/(MR²)) x2 − (g/R) sin x1
setting ẋ1 = ẋ2 = 0 gives x2 = 0 and sin(x1) = 0.

When do we need Nonlinear Analysis & Design?
When the system is strongly nonlinear, or not linearizable
When the range of operation is large
When distinctive nonlinear phenomena are relevant
When we want to push performance to the limit

Some Common Nonlinearities in Control Systems
Absolute value, math functions (e.g., e^u), saturation, sign, dead zone, look-up table, relay, backlash, Coulomb & viscous friction (all available as Simulink blocks).

Next Lecture
Simulation in Matlab
Linearization
Phase-plane analysis


EL2620 Nonlinear Control, Lecture 2
Wrap-up of Lecture 1: nonlinear systems and phenomena
Modeling and simulation in Simulink
Phase-plane analysis

Today's Goal
You should be able to
Model and simulate in Simulink
Linearize using Simulink
Do phase-plane analysis using pplane (or another tool)

Analysis Through Simulation
Simulation tools:
ODEs, ẋ = f(t, x, u): ACSL, Simnon, Simulink
DAEs, F(t, ẋ, x, u) = 0: Omsim, Dymola, Modelica
Special-purpose simulation tools: Spice, EMTP, ADAMS, gPROMS

Simulink
> matlab
>> simulink

An Example in Simulink
File -> New -> Model. Double click on Continuous and drag in a Transfer Fcn block; add Step (in Sources) and Scope (in Sinks); connect the blocks with the mouse. Set the transfer function to 1/(s+1). Then choose Simulation -> Parameters.

Choose Simulation Parameters
Don't forget Apply.

Save Results to Workspace
In stepmodel.mdl, add a Clock and To Workspace blocks for t, y and u. Check the save format of the output blocks ('Array' instead of 'Structure'). Then
>> plot(t,y)

How To Get Better Accuracy
Modify Refine, the Absolute and Relative Tolerances, and the integration method. Refine adds interpolation points between the computed solution points.

Use Scripts to Document Simulations
If the block diagram is saved to stepmodel.mdl, the following script file simstepmodel.m simulates the system:

open_system('stepmodel')
set_param('stepmodel','RelTol','1e-3')
set_param('stepmodel','AbsTol','1e-6')
set_param('stepmodel','Refine', ... )
tic
sim('stepmodel',6)
toc
subplot(2,1,1),plot(t,y),title('y')
subplot(2,1,2),plot(t,u),title('u')

Example: Two-Tank System
The system consists of two identical tank models in series:
ḣ = (u − q)/A
q = a √(2gh)
where u is the inflow, q the outflow and h the level. In Simulink, each tank is a subsystem built from a Gain 1/A, a Sum, an Integrator 1/s and an Fcn block computing q; the outflow of the first tank is the inflow of the second.

Nonlinear Control System
Example: Control system with valve characteristic f(u) = u² (same as in Lecture 1). Simulink block diagram: step r, controller 1/s, valve u², process 1/((s+1)(s+1)), feedback −1.

Linearization in Simulink
Linearize about an equilibrium (x0, u0, y0):
>> A=2.7e-3; a=7e-6; g=9.8;
>> [x0,u0,y0]=trim('twotank',[0.1;0.1],[],0.1)
>> [aa,bb,cc,dd]=linmod('twotank',x0,u0);
>> sys=ss(aa,bb,cc,dd);
>> bode(sys)
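Outside Simulink, the same trim-and-linearize step can be sketched with a finite-difference Jacobian. A Python stand-in for trim/linmod, using the tank parameters from the Matlab snippet (the equilibrium level h0 = 0.1 and the difference step are my choices):

```python
import numpy as np

A_tank, a, g = 2.7e-3, 7e-6, 9.8

def f(x, u):
    """Two tanks in series: returns (dh1/dt, dh2/dt)."""
    h1, h2 = x
    q1 = a * np.sqrt(2 * g * h1)
    q2 = a * np.sqrt(2 * g * h2)
    return np.array([(u - q1) / A_tank, (q1 - q2) / A_tank])

# "trim": at equilibrium both outflows equal the inflow, so h1 = h2 = h0
# and the corresponding input is u0 = a*sqrt(2*g*h0).
h0 = 0.1
u0 = a * np.sqrt(2 * g * h0)
x0 = np.array([h0, h0])

# "linmod": numerical Jacobian df/dx at (x0, u0) by forward differences.
eps = 1e-8
J = np.column_stack([(f(x0 + eps * e, u0) - f(x0, u0)) / eps
                     for e in np.eye(2)])
print(np.linalg.eigvals(J))   # both eigenvalues negative: stable equilibrium
```

The eigenvalues of the Jacobian are real and negative, matching the intuition that a leaking tank returns to its steady level.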

Differential Equation Editor
dee is a Simulink-based differential equation editor:
>> dee
Run the demonstrations deedemo1, deedemo2, deedemo3, deedemo4.

Phase-Plane Analysis
Download ICTools, or download DFIELD and PPLANE. DFIELD/PPLANE was the preferred tool last year!

Homework 1
Use your favorite phase-plane analysis tool. Follow the instructions in the Exercise Compendium on how to write the report; see the course homepage for a report example. The report should be short and include only the necessary plots. Write in English.

EL2620 Nonlinear Control, Lecture 3
Stability definitions
Linearization
Phase-plane analysis
Periodic solutions

Today's Goal
You should be able to
Explain local and global stability
Linearize around equilibria and trajectories
Sketch phase portraits for two-dimensional systems
Classify equilibria into nodes, focuses, saddle points, and center points
Analyze stability of periodic solutions through Poincaré maps

Local Stability
Consider ẋ = f(x) with f(0) = 0.
Definition: The equilibrium x = 0 is stable if for all ǫ > 0 there exists δ = δ(ǫ) > 0 such that
||x(0)|| < δ implies ||x(t)|| < ǫ for all t ≥ 0
If x = 0 is not stable, it is called unstable.

Asymptotic Stability
Definition: The equilibrium x = 0 is asymptotically stable if it is stable and δ can be chosen such that
||x(0)|| < δ implies lim_{t→∞} x(t) = 0
The equilibrium is globally asymptotically stable if it is stable and lim_{t→∞} x(t) = 0 for all x(0).

Linearization Around a Trajectory
Let (x0(t), u0(t)) denote a solution to ẋ = f(x, u) and consider a nearby solution (x(t), u(t)) = (x0(t) + x̃(t), u0(t) + ũ(t)):
ẋ(t) = f(x0(t) + x̃(t), u0(t) + ũ(t))
= f(x0(t), u0(t)) + (∂f/∂x)(x0(t), u0(t)) x̃(t) + (∂f/∂u)(x0(t), u0(t)) ũ(t) + O(||(x̃, ũ)||²)

Hence, for small (x̃, ũ), approximately
x̃'(t) = A(t) x̃(t) + B(t) ũ(t)
where
A(t) = (∂f/∂x)(x0(t), u0(t)), B(t) = (∂f/∂u)(x0(t), u0(t))
Note that A and B are time dependent. However, if (x0(t), u0(t)) ≡ (x0, u0) is an equilibrium, then A and B are constant.

Example
A rocket with height h(t), velocity v(t), mass m(t), thrust-related input u(t) and exhaust velocity ve:
ḣ(t) = v(t)
v̇(t) = −g + ve u(t)/m(t)
ṁ(t) = −u(t)
Let x0(t) = (h0(t), v0(t), m0(t))ᵀ with u0(t) ≡ u0 > 0 be a solution, so m0(t) = m0(0) − u0 t. The linearization has the nonzero entries
∂v̇/∂m = −ve u0/(m0(0) − u0 t)², ∂v̇/∂u = ve/(m0(0) − u0 t)
so A(t) and B(t) are time varying.

Pointwise Left Half-Plane Eigenvalues of A(t) Do Not Impose Stability
A(t) = [ −1 + α cos²t, 1 − α sin t cos t; −1 − α sin t cos t, −1 + α sin²t ], α > 0
The pointwise eigenvalues are constant in time,
λ(t) = λ = (α − 2 ± √(α² − 4))/2
which are stable (in the left half-plane) for 0 < α < 2. However,
x(t) = [ e^{(α−1)t} cos t, e^{−t} sin t; −e^{(α−1)t} sin t, e^{−t} cos t ] x(0)
is an unbounded solution for α > 1.

Lyapunov's Linearization Method
Theorem: Let x0 be an equilibrium of ẋ = f(x) with f ∈ C¹. Denote A = (∂f/∂x)(x0) and α(A) = max Re(λ(A)).
If α(A) < 0, then x0 is asymptotically stable.
If α(A) > 0, then x0 is unstable.
This is the fundamental result behind linear systems theory! The case α(A) = 0 needs further investigation. The theorem is also called Lyapunov's Indirect Method. A proof is given next lecture.

Example
The linearization of
ẋ1 = −x1² + x1 + sin x2
ẋ2 = cos x2 − x1³ − 5x2
at the equilibrium x0 = (1, 0)ᵀ is given by
A = [ −1 1; −3 −5 ], λ(A) = {−2, −4}
x0 is thus an asymptotically stable equilibrium of the nonlinear system.

Linear Systems Revival
d/dt [x1; x2] = A [x1; x2]
Analytic solution: x(t) = e^{At} x(0), t ≥ 0. If A is diagonalizable, then
e^{At} = V e^{Λt} V⁻¹ = [v1 v2] diag(e^{λ1 t}, e^{λ2 t}) [v1 v2]⁻¹
where v1, v2 are the eigenvectors of A (Av = λv, etc.). This implies that
x(t) = c1 e^{λ1 t} v1 + c2 e^{λ2 t} v2
where the constants c1 and c2 are given by the initial conditions.

Example: Two Real Negative Eigenvalues
Given eigenvalues λ1 < λ2 < 0 with corresponding eigenvectors v1 and v2. Solution:
x(t) = c1 e^{λ1 t} v1 + c2 e^{λ2 t} v2
Slow eigenvalue/vector: x(t) ≈ c2 e^{λ2 t} v2 for large t; the state moves along the slow eigenvector towards x = 0 for large t.
Fast eigenvalue/vector: x(t) ≈ c1 e^{λ1 t} v1 + c2 v2 for small t; the state moves along the fast eigenvector for small t.
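The linearization in the example can be checked with a finite-difference Jacobian. A sketch for the system as reconstructed above (the central-difference step size is an arbitrary choice):

```python
import numpy as np

def f(x):
    """The example system from the slide."""
    x1, x2 = x
    return np.array([-x1**2 + x1 + np.sin(x2),
                     np.cos(x2) - x1**3 - 5 * x2])

x0 = np.array([1.0, 0.0])      # equilibrium: f(x0) = 0

# Central-difference Jacobian at x0.
eps = 1e-7
A = np.column_stack([(f(x0 + eps * e) - f(x0 - eps * e)) / (2 * eps)
                     for e in np.eye(2)])
print(np.sort(np.linalg.eigvals(A).real))   # both eigenvalues negative
```

Both eigenvalues come out at −4 and −2, so Lyapunov's indirect method gives asymptotic stability of x0.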

Phase-Plane Analysis for Linear Systems
The location of the eigenvalues λ(A) determines the characteristics of the trajectories. Six cases:
Im λi = 0: stable node (λ1, λ2 < 0); unstable node (λ1, λ2 > 0); saddle point (λ1 < 0 < λ2)
Im λi ≠ 0: stable focus (Re λi < 0); unstable focus (Re λi > 0); center point (Re λi = 0)

Equilibrium Points for Linear Systems
(Phase portraits of the six cases, together with the corresponding eigenvalue locations in the complex plane.)

Example: Unstable Focus
ẋ = [σ −ω; ω σ] x, σ, ω > 0, λ1,2 = σ ± iω
x(t) = e^{At} x(0) = e^{σt} [cos ωt, −sin ωt; sin ωt, cos ωt] x(0)
In polar coordinates r = √(x1² + x2²), θ = arctan(x2/x1) (x1 = r cos θ, x2 = r sin θ):
ṙ = σr
θ̇ = ω
(Phase portraits for λ1,2 = 1 ± i, an unstable focus, and λ1,2 = −0.3 ± i, a stable focus.)

Example: Stable Node
ẋ = Ax with (λ1, λ2) = (−1, −2) and eigenvectors v1, v2 (the matrix entries are lost in this transcription). v1 is the slow direction and v2 is the fast one: trajectories first move along the fast eigenvector (lines x2 = x1 + c) and then approach the origin along the slow eigenvector (x2 = 0).

1 minute exercise: What is the phase portrait if λ1 = λ2? Hint: two cases; only one linearly independent eigenvector, or all vectors are eigenvectors.

Phase-Plane Analysis for Nonlinear Systems
Close to equilibrium points: nonlinear system ≈ linear system.
Theorem: Assume ẋ = f(x) = Ax + g(x), with lim_{||x||→0} ||g(x)||/||x|| = 0. If ż = Az has a focus, node, or saddle point, then ẋ = f(x) has the same type of equilibrium at the origin.
Remark: If the linearized system has a center, then the nonlinear system has either a center or a focus.

How to Draw Phase Portraits
By hand:
1. Find the equilibria.
2. Sketch the local behavior around each equilibrium.
3. Sketch (ẋ1, ẋ2) for some other points; notice that dx2/dx1 = ẋ2/ẋ1.
4. Try to find possible periodic orbits.
5. Guess solutions.
By computer: Matlab dee or pplane.

Phase-Locked Loop
A PLL tracks the phase θin(t) of a signal sin(t) = A sin[ωt + θin(t)]. Block diagram: phase detector K sin(θin − θout), filter 1/(1 + sT), and VCO modeled as an integrator 1/s producing θout.

Phase-Plane Analysis of PLL
Let (x1, x2) = (θout, θ̇out), K, T > 0, and θin(t) ≡ θin. Then
ẋ1(t) = x2(t)
ẋ2(t) = −(1/T) x2(t) + (K/T) sin(θin − x1(t))
The equilibria are (θin + nπ, 0), since
ẋ1 = 0 gives x2 = 0
ẋ2 = 0 gives sin(θin − x1) = 0, i.e., x1 = θin + nπ

Classification of Equilibria
Linearization gives the following characteristic equations:
n even: λ² + (1/T) λ + K/T = 0
K > 1/(4T) gives a stable focus; 0 < K < 1/(4T) gives a stable node
n odd: λ² + (1/T) λ − K/T = 0
Saddle points for all K, T > 0

Phase-Plane for PLL
(K, T) = (1/2, 1): focuses at (2kπ, 0), saddle points at ((2k + 1)π, 0).

Periodic Solutions
Example of an asymptotically stable periodic solution:
ẋ1 = x1 − x2 − x1(x1² + x2²)
ẋ2 = x1 + x2 − x2(x1² + x2²)    (1)

Periodic solution: polar coordinates. With x1 = r cos θ, x2 = r sin θ,
ẋ1 = ṙ cos θ − r sin θ θ̇
ẋ2 = ṙ sin θ + r cos θ θ̇
implies
[ṙ; θ̇] = (1/r) [r cos θ, r sin θ; −sin θ, cos θ] [ẋ1; ẋ2]
Now, (1) gives
ẋ1 = r(1 − r²) cos θ − r sin θ
ẋ2 = r(1 − r²) sin θ + r cos θ
and hence
ṙ = r(1 − r²)
θ̇ = 1
Only r = 1 is a stable equilibrium of the r-dynamics: the unit circle is an asymptotically stable periodic orbit.

A system has a periodic solution if for some T > 0
x(t + T) = x(t), for all t
A periodic orbit is the image of x in the phase portrait. When does there exist a periodic solution? When is it stable? Note that x(t) ≡ const is by convention not regarded as periodic.

Flow
The solution of ẋ = f(x) is sometimes denoted φt(x0) to emphasize the dependence on the initial point x0 ∈ Rⁿ. φt(·) is called the flow.

Poincaré Map
Assume φt(x*) is a periodic solution with period T. Let Σ ⊂ Rⁿ be an (n − 1)-dimensional hyperplane transverse to f at x*.
Definition: The Poincaré map P : Σ → Σ is
P(x) = φ_{τ(x)}(x)
where τ(x) is the time of first return to Σ.

Existence of Periodic Orbits
A point x* such that P(x*) = x* corresponds to a periodic orbit. Such an x* is called a fixed point of P.

1 minute exercise: What does a fixed point of P^k correspond to?

Stable Periodic Orbit
The linearization of P around x* gives a matrix W such that
P(x) ≈ x* + W(x − x*), for x close to x*
One eigenvalue satisfies λj(W) = 1 for some j (the direction along the orbit).
If |λi(W)| < 1 for all i ≠ j, then the corresponding periodic orbit is asymptotically stable.
If |λi(W)| > 1 for some i, then the periodic orbit is unstable.

Example: Stable Unit Circle
Rewrite (1) in polar coordinates:
ṙ = r(1 − r²)
θ̇ = 1
Choose Σ = {(r, θ) : r > 0, θ = 2πk}. The solution is
φt(r0, θ0) = ( [1 + (r0⁻² − 1)e^{−2t}]^{−1/2}, t + θ0 )
The first return time from any point (r, θ) ∈ Σ is τ(r, θ) = 2π.

The Poincaré map is
P(r, θ) = ( [1 + (r⁻² − 1)e^{−2·2π}]^{−1/2}, θ + 2π )
(r, θ) = (1, 2πk) is a fixed point. The periodic solution corresponding to (r(t), θ(t)) = (1, t) is asymptotically stable, because
W = dP/d(r, θ) (1, 2πk) = [ e^{−4π}, 0; 0, 1 ]
A stable periodic orbit (as we already knew for this example)!
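A numerical version of this Poincaré map is a one-line integration: since θ̇ = 1, one revolution takes exactly 2π, so the map is obtained by integrating the r-dynamics over that interval and comparing with the closed-form expression (a sketch; the sample radii are arbitrary):

```python
import numpy as np
from scipy.integrate import solve_ivp

def P(r0):
    """Poincare map for r' = r(1 - r^2): integrate over one period 2*pi."""
    sol = solve_ivp(lambda t, r: r * (1 - r**2), [0.0, 2 * np.pi], [r0],
                    rtol=1e-10, atol=1e-12)
    return sol.y[0, -1]

def P_exact(r0):
    """Closed-form map [1 + (r0^-2 - 1) e^{-4*pi}]^{-1/2}."""
    return (1 + (r0**-2 - 1) * np.exp(-4 * np.pi))**-0.5

for r0 in (0.5, 1.0, 2.0):
    print(r0, P(r0), P_exact(r0))   # the map contracts everything towards r = 1
```

The contraction factor e^{−4π} is about 3.5e-6, so a single return to the section already places the state essentially on the unit circle: a vivid picture of why |λ| < 1 for the transverse eigenvalue means asymptotic stability.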


EL2620 Nonlinear Control, Lecture 4
Lyapunov methods for stability analysis

Today's Goal
You should be able to
Prove local and global stability of equilibria using Lyapunov's method
Prove stability of a set (e.g., a periodic orbit) using LaSalle's invariant set theorem

Alexandr Mihailovich Lyapunov (1857-1918)
Master thesis: On the stability of ellipsoidal forms of equilibrium of rotating fluids, St. Petersburg University, 1884.
Doctoral thesis: The general problem of the stability of motion, 1892.
Formalized the idea: if the total energy is dissipated, the system must be stable.

A Motivating Example
A mass m on a nonlinear spring with nonlinear damping. Balance of forces yields
m ẍ = −b ẋ|ẋ| − k0 x − k1 x³, b, k0, k1 > 0
where −bẋ|ẋ| is the damping force and −k0x − k1x³ the spring force.
Total energy = kinetic + potential energy:
V = mv²/2 + ∫₀ˣ F_spring ds
V(x, ẋ) = mẋ²/2 + k0x²/2 + k1x⁴/4 > 0, V(0, 0) = 0
Change in energy along any solution x(t):
d/dt V(x, ẋ) = mẋẍ + k0xẋ + k1x³ẋ = −b|ẋ|³ < 0, for ẋ ≠ 0

Stability Definitions
Recall from Lecture 3 that an equilibrium x = 0 of ẋ = f(x) is
Locally stable, if for every ǫ > 0 there exists δ > 0 such that ||x(0)|| < δ implies ||x(t)|| < ǫ, t ≥ 0
Locally asymptotically stable, if locally stable and ||x(0)|| < δ implies lim_{t→∞} x(t) = 0
Globally asymptotically stable, if asymptotically stable for all x(0) ∈ Rⁿ

Lyapunov Stability Theorem
Theorem: Let ẋ = f(x), f(0) = 0, and 0 ∈ Ω ⊆ Rⁿ. If there exists a C¹ function V : Ω → R such that
(1) V(0) = 0
(2) V(x) > 0 for all x ∈ Ω, x ≠ 0
(3) V̇(x) ≤ 0 for all x ∈ Ω
then x = 0 is locally stable. Furthermore, if also
(4) V̇(x) < 0 for all x ∈ Ω, x ≠ 0
then x = 0 is locally asymptotically stable.
The result is called Lyapunov's Direct Method.

Lyapunov Function
A function V that fulfills (1)-(3) is called a Lyapunov function. Condition (3) means that V is non-increasing along all trajectories in Ω:
V̇(x) = (d/dt) V(x) = (∂V/∂x) ẋ = (∂V/∂x) f(x) ≤ 0

Conservation and Dissipation
Conservation of energy: V̇(x) = (∂V/∂x) f(x) = 0, i.e., the vector field f(x) is everywhere orthogonal to the normal ∂V/∂x of the level surface V(x) = c.
Example: total energy of a lossless mechanical system, or the total fluid in a closed system.
Dissipation of energy: V̇(x) = (∂V/∂x) f(x) ≤ 0, i.e., the vector field f(x) and the normal ∂V/∂x of the level surface V(x) = c make an obtuse angle.
Example: total energy of a mechanical system with damping, or the total fluid in a system that leaks.

Geometric Interpretation
The vector field points into the sublevel sets {z : V(z) ≤ c}, so trajectories can only go to lower values of V(x).

Boundedness: For any trajectory x(t),
V(x(t)) = V(x(0)) + ∫₀ᵗ V̇(x(τ)) dτ ≤ V(x(0))
which means that the whole trajectory lies in the set {z : V(z) ≤ V(x(0))}. For stability it is thus important that the sublevel sets {z : V(z) ≤ c} are locally bounded.

Example: Pendulum
Is the origin stable for a mathematical pendulum?
ẋ1 = x2
ẋ2 = −(g/l) sin x1
Lyapunov function candidate: V(x) = (1 − cos x1) g/l + x2²/2
(1) V(0) = 0
(2) V(x) > 0 for −2π < x1 < 2π and (x1, x2) ≠ 0
(3) V̇(x) = (g/l) ẋ1 sin x1 + x2 ẋ2 = (g/l) x2 sin x1 − x2 (g/l) sin x1 = 0, for all x
Hence, x = 0 is locally stable. Note that x = 0 is not asymptotically stable, so, of course, (4) is not fulfilled: V̇(x) ≡ 0. Conservation of energy!

5 minute exercise: Consider Example 2 from Lecture 3:
ẋ1 = x2
ẋ2 = −x1 − ǫ x1² x2
For what values of ǫ is the equilibrium (0, 0) locally stable? Hint: try the standard Lyapunov function V(x) = xᵀx. Can you say something about global stability of the equilibrium?

Lyapunov Theorem for Global Asymptotic Stability
Theorem: Let ẋ = f(x) and f(0) = 0. If there exists a C¹ function V : Rⁿ → R such that
(1) V(0) = 0
(2) V(x) > 0 for all x ≠ 0
(3) V̇(x) < 0 for all x ≠ 0
(4) V(x) → ∞ as ||x|| → ∞
then x = 0 is globally asymptotically stable.

Radial Unboundedness is Necessary
If condition (4) is not fulfilled, then global stability cannot be guaranteed.
Example: Assume V(x) = x1²/(1 + x1²) + x2² is a Lyapunov function for some system. Then ||x(t)|| → ∞ is possible even though V̇(x) < 0, as the contour plot of V(x) shows: a trajectory can drift off to infinity along the x1 axis while V keeps decreasing.

Somewhat Stronger Assumptions
Theorem: Let ẋ = f(x) and f(0) = 0. If there exist a C¹ function V : Rⁿ → R and α > 0 such that
(1) V(0) = 0
(2) V(x) > 0 for all x ≠ 0
(3) V̇(x) ≤ −αV(x) for all x
(4) the sublevel sets {x : V(x) ≤ c} are bounded for all c
then x = 0 is globally asymptotically stable.

Proof Idea
Assume x(t) ≠ 0 (otherwise x(τ) = 0 for all τ > t). Then
V̇(x)/V(x) ≤ −α
Integrating from 0 to t gives
log V(x(t)) − log V(x(0)) ≤ −αt, so V(x(t)) ≤ e^{−αt} V(x(0))
Hence V(x(t)) → 0 as t → ∞. Using the properties of V, it follows that x(t) → 0 as t → ∞.

Converse Lyapunov Theorems
Example: If the system is globally exponentially stable,
||x(t)|| ≤ M e^{−βt} ||x(0)||, M > 0, β > 0
then there is a Lyapunov function that proves that it is globally asymptotically stable. There exist also Lyapunov instability theorems!

Positive Definite Matrices
Definition: A matrix M is positive definite if xᵀMx > 0 for all x ≠ 0. It is positive semidefinite if xᵀMx ≥ 0 for all x.
Lemma: M = Mᵀ is positive definite if and only if λi(M) > 0 for all i; M = Mᵀ is positive semidefinite if and only if λi(M) ≥ 0 for all i.
Note that if M = Mᵀ is positive definite, then the Lyapunov function candidate V(x) = xᵀMx fulfills V(0) = 0 and V(x) > 0 for x ≠ 0.

Symmetric Matrices
Assume that M = Mᵀ. Then
λmin(M) ||x||² ≤ xᵀMx ≤ λmax(M) ||x||²
Hint: Use the factorization M = UΛUᵀ, where U is an orthogonal matrix (UUᵀ = I) and Λ = diag(λ1, ..., λn).

Lyapunov Stability for Linear Systems
Linear system: ẋ = Ax. Consider the quadratic function V(x) = xᵀPx, P = Pᵀ > 0:
V̇(x) = xᵀPẋ + ẋᵀPx = xᵀ(PA + AᵀP)x = −xᵀQx
Thus, V̇ < 0 for all x ≠ 0 if there exists Q = Qᵀ > 0 such that the Lyapunov equation
PA + AᵀP = −Q
holds. Global asymptotic stability: if Q is positive definite, then the Lyapunov Stability Theorem implies global asymptotic stability, and hence the eigenvalues of A must satisfy Re λi(A) < 0 for all i.

Converse Theorem for Linear Systems
If Re λi(A) < 0 for all i, then for every symmetric positive definite Q there exists a symmetric positive definite matrix P such that
PA + AᵀP = −Q
Proof: Choose P = ∫₀^∞ e^{Aᵀt} Q e^{At} dt. Then
AᵀP + PA = ∫₀^∞ ( Aᵀ e^{Aᵀt} Q e^{At} + e^{Aᵀt} Q e^{At} A ) dt
= ∫₀^∞ (d/dt)( e^{Aᵀt} Q e^{At} ) dt = [ e^{Aᵀt} Q e^{At} ]₀^∞ = −Q

Interpretation
Assume ẋ = Ax, x(0) = z. Then
∫₀^∞ xᵀ(t) Q x(t) dt = zᵀ ( ∫₀^∞ e^{Aᵀt} Q e^{At} dt ) z = zᵀPz
Thus v(z) = zᵀPz is the cost-to-go from the point z (no input) with an integral quadratic cost function using weighting matrix Q.

Lyapunov's Linearization Method
Recall from Lecture 3:
Theorem: Let x0 be an equilibrium of ẋ = f(x) with f ∈ C¹. Denote A = (∂f/∂x)(x0) and α(A) = max Re(λ(A)).
(1) If α(A) < 0, then x0 is asymptotically stable.
(2) If α(A) > 0, then x0 is unstable.
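The Lyapunov equation PA + AᵀP = −Q is a standard linear-algebra call. A sketch with an arbitrarily chosen Hurwitz A (note the sign/transpose convention: scipy's solve_continuous_lyapunov solves MX + XMᴴ = C, so the arguments below are transposed and negated to match PA + AᵀP = −Q):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[-1.0,  1.0],
              [-3.0, -5.0]])    # Hurwitz: eigenvalues -2 and -4
Q = np.eye(2)

# scipy solves M X + X M^H = C; pick M = A^T and C = -Q to obtain
# A^T P + P A = -Q.
P = solve_continuous_lyapunov(A.T, -Q)

print(P)
print(np.linalg.eigvalsh(P))    # all positive: P is positive definite
```

As the converse theorem promises, since A is Hurwitz the solution P is symmetric positive definite, so V(x) = xᵀPx certifies global asymptotic stability.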

Proof of (1) in Lyapunov's Linearization Method
Let f(x) = Ax + g(x), where lim_{||x||→0} ||g(x)||/||x|| = 0, and let P solve PA + AᵀP = −Q with Q = Qᵀ > 0. The Lyapunov function candidate V(x) = xᵀPx satisfies V(0) = 0, V(x) > 0 for x ≠ 0, and
V̇(x) = xᵀP f(x) + fᵀ(x)Px
= xᵀP[Ax + g(x)] + [xᵀAᵀ + gᵀ(x)]Px
= xᵀ(PA + AᵀP)x + 2xᵀPg(x)
= −xᵀQx + 2xᵀPg(x)
Since xᵀQx ≥ λmin(Q)||x||², we need to show that 2xᵀPg(x) < λmin(Q)||x||². For every γ > 0 there exists r > 0 such that
||g(x)|| < γ||x||, for ||x|| < r
Thus
V̇ < −λmin(Q)||x||² + 2γλmax(P)||x||²
which becomes strictly negative if we choose
γ < λmin(Q) / (2λmax(P))

LaSalle's Theorem for Global Asymptotic Stability
Theorem: Let ẋ = f(x) and f(0) = 0. If there exists a C¹ function V : Rⁿ → R such that
(1) V(0) = 0
(2) V(x) > 0 for all x ≠ 0
(3) V̇(x) ≤ 0 for all x
(4) V(x) → ∞ as ||x|| → ∞
(5) the only solution of ẋ = f(x) such that V̇(x(t)) ≡ 0 is x(t) = 0 for all t
then x = 0 is globally asymptotically stable.

A Motivating Example (cont'd)
m ẍ = −b ẋ|ẋ| − k0 x − k1 x³
V(x) = (2mẋ² + 2k0x² + k1x⁴)/4 > 0, V(0, 0) = 0
V̇(x) = −b|ẋ|³ ≤ 0
Assume that there is a trajectory with ẋ(t) ≡ 0 and x(t) ≠ 0. Then
(d/dt)ẋ(t) = −(k0/m) x(t) − (k1/m) x³(t) ≠ 0
which means that ẋ(t) cannot stay at zero. Hence, x(t) ≡ 0 is the only possible trajectory for which V̇(x) ≡ 0, and LaSalle's theorem gives global asymptotic stability.

Invariant Sets
Definition: A set M is invariant with respect to ẋ = f(x) if x(0) ∈ M implies that x(t) ∈ M for all t ≥ 0.
Definition: x(t) approaches a set M as t → ∞ if for each ǫ > 0 there is a T > 0 such that dist(x(t), M) < ǫ for all t > T. Here dist(p, M) = inf_{x∈M} ||p − x||.

LaSalle's Invariant Set Theorem
Theorem: Let Ω ⊂ Rⁿ be a compact set that is invariant with respect to ẋ = f(x). Let V : Rⁿ → R be a C¹ function such that V̇(x) ≤ 0 for x ∈ Ω. Let E be the set of points in Ω where V̇(x) = 0. If M is the largest invariant set in E, then every solution with x(0) ∈ Ω approaches M as t → ∞.
Note that V does not have to be a positive definite function.

Example: Periodic Orbit
Show that x(t) approaches {x : ||x|| = 1} ∪ {0} for
ẋ1 = x1 − x2 − x1(x1² + x2²)
ẋ2 = x1 + x2 − x2(x1² + x2²)
It is possible to show that Ω = {||x|| ≤ R} is invariant for sufficiently large R > 0. Let V(x) = (x1² + x2² − 1)². Then
V̇(x) = (∂V/∂x) f(x) = 2(x1² + x2² − 1) (d/dt)(x1² + x2²) = −4(x1² + x2² − 1)²(x1² + x2²) ≤ 0, x ∈ Ω
E = {x ∈ Ω : V̇(x) = 0} = {x : ||x|| = 1} ∪ {0}
The largest invariant set in E is M = E, because
(d/dt)(x1² + x2²) = 2(x1² + x2²)(1 − x1² − x2²) = 0, for x ∈ M
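The invariance argument is easy to illustrate in simulation (a sketch; the two initial conditions, one inside and one outside the unit circle, are arbitrary choices):

```python
import numpy as np
from scipy.integrate import solve_ivp

def f(t, x):
    """The periodic-orbit example from the slide."""
    x1, x2 = x
    r2 = x1**2 + x2**2
    return [x1 - x2 - x1 * r2, x1 + x2 - x2 * r2]

# Trajectories from inside and outside the circle both approach r = 1.
for x0 in ([0.1, 0.0], [2.0, 1.0]):
    sol = solve_ivp(f, [0.0, 20.0], x0, rtol=1e-9, atol=1e-12)
    r_final = np.hypot(*sol.y[:, -1])
    print(x0, r_final)   # close to 1.0 in both cases
```

Every nonzero initial condition in the simulation ends up on the unit circle, consistent with LaSalle's conclusion that trajectories approach M = {||x|| = 1} ∪ {0} (with the origin unstable).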

EL2620 Nonlinear Control, Lecture 5
Input-output stability: a system S maps an input signal u to an output signal y.

Today's Goal
You should be able to
derive the gain of a system
analyze stability using the Small Gain Theorem, the Circle Criterion, and passivity

History
Feedback loop: a linear system G(s) with a static nonlinearity f(y) in the feedback path. For what G(s) and f(·) is the closed-loop system stable?
Lur'e and Postnikov's problem (1944)
Aizerman's conjecture (1949) (false!)
Kalman's conjecture (1957) (false!)
Solution by Popov (1961) (led to the Circle Criterion)

Gain
Idea: Generalize the concept of gain to nonlinear dynamical systems. The gain γ of S is the largest amplification from u to y. Here S can be a constant, a matrix, a linear time-invariant system, etc.
Question: How should we measure the size of u and y?

Norms
A norm measures size.
Definition: A norm is a function ||·|| : Ω → R₊ such that for all x, y ∈ Ω
||x|| ≥ 0, and ||x|| = 0 if and only if x = 0
||x + y|| ≤ ||x|| + ||y||
||αx|| = |α| ||x||, for all α ∈ R
Examples: Euclidean norm ||x|| = √(x1² + ... + xn²); max norm ||x||∞ = max{|x1|, ..., |xn|}.

Gain of a Matrix
Every matrix M ∈ C^(n×n) has a singular value decomposition
M = UΣV*, where Σ = diag{σ1, ..., σn}, U*U = I, V*V = I
and σi are the singular values. The gain of M is the largest singular value of M:
σmax(M) = σ1 = sup_{x≠0} ||Mx|| / ||x||
where ||·|| is the Euclidean norm.

Eigenvalues are not Gains
The spectral radius of a matrix M,
ρ(M) = max_i |λi(M)|
is not a gain (nor a norm). Why? What amplification is described by the eigenvalues?

Signal Norms
A signal x is a function x : R₊ → R. A signal norm ||x||_k is a norm on the space of signals x.
Examples:
2-norm (energy norm): ||x||₂ = √( ∫₀^∞ |x(t)|² dt )
sup-norm: ||x||∞ = sup_{t∈R₊} |x(t)|

Parseval's Theorem
L₂ denotes the space of signals with bounded energy: ||x||₂ < ∞.
Theorem: If x, y ∈ L₂ have the Fourier transforms
X(iω) = ∫₀^∞ e^{−iωt} x(t) dt, Y(iω) = ∫₀^∞ e^{−iωt} y(t) dt
then
∫₀^∞ y(t)x(t) dt = (1/2π) ∫_{−∞}^∞ Y*(iω) X(iω) dω
In particular,
||x||₂² = ∫₀^∞ |x(t)|² dt = (1/2π) ∫_{−∞}^∞ |X(iω)|² dω
The energy calculated in the time domain equals the energy calculated in the frequency domain.

System Gain
A system S is a map from L₂ to L₂: y = S(u). The gain of S is defined as
γ(S) = sup_{u∈L₂, u≠0} ||y||₂ / ||u||₂ = sup_{u∈L₂, u≠0} ||S(u)||₂ / ||u||₂
Example: The gain of the scalar static system y(t) = αu(t) is
γ(α) = sup ||αu||₂ / ||u||₂ = sup |α| ||u||₂ / ||u||₂ = |α|

2 minute exercise: Show that γ(S1S2) ≤ γ(S1)γ(S2) for the series connection of S2 followed by S1.

Gain of a Static Nonlinearity
Lemma: A static nonlinearity f such that |f(x)| ≤ K|x| for all x, with f(x*) = Kx* for some x* ≠ 0, has gain γ(f) = K.
Proof: ||y||₂² = ∫₀^∞ f²(u(t)) dt ≤ K² ∫₀^∞ u²(t) dt = K² ||u||₂²
where u(t) = x* for t ∈ (0, 1) (and zero otherwise) gives equality, so γ(f) = sup ||y||₂/||u||₂ = K.

40 Gain of a Stable Linear System Lemma: γ(G) = sup_{u∈L2} ‖Gu‖2/‖u‖2 = sup_{ω∈(0,∞)} |G(iω)| Proof: Assume |G(iω)| ≤ K for ω ∈ (0, ∞) and |G(iω0)| = K for some ω0. Parseval's theorem gives ‖y‖2² = (1/2π) ∫ |Y(iω)|² dω = (1/2π) ∫ |G(iω)|² |U(iω)|² dω ≤ K² ‖u‖2² Arbitrarily close to equality by choosing u(t) close to sin ω0 t. Lecture 5 13 The Small Gain Theorem r1 e1 S1 S2 e2 r2 Theorem: Assume S1 and S2 are BIBO stable. If γ(S1)γ(S2) < 1, then the closed-loop system is BIBO stable from (r1, r2) to (e1, e2) Lecture 5 15 BIBO Stability Definition: S is bounded-input bounded-output (BIBO) stable if γ(S) < ∞. u y S γ(S) = sup_{u∈L2} ‖S(u)‖2/‖u‖2 Example: If ẋ = Ax is asymptotically stable then G(s) = C(sI − A)^(−1)B + D is BIBO stable. Lecture 5 14 Example Static Nonlinear Feedback r y G(s) Ky f(y) y f( ) G(s) = 2/(s + 1)², |f(y)| ≤ K|y|, ∀y, f(0) = 0. γ(G) = 2 and γ(f) ≤ K. Small Gain Theorem gives BIBO stability for K ∈ [0, 1/2). Lecture 5 16
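The value γ(G) = 2 used in the example can be reproduced by sampling |G(iω)| on a frequency grid (a sketch; the grid is illustrative and the supremum is attained at ω = 0):

```python
def G(w):
    # G(s) = 2/(s+1)^2 from the example, evaluated on the imaginary axis
    s = complex(0.0, w)
    return 2.0 / (s + 1.0) ** 2

# gamma(G) = sup_w |G(iw)| for a stable linear system
gamma = max(abs(G(k * 0.001)) for k in range(100000))
print(gamma)        # 2.0, attained at w = 0

# Small Gain Theorem: BIBO stability for sector gains K with K * gamma < 1
print(1.0 / gamma)  # 0.5
```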

41 Proof of the Small Gain Theorem ‖e1‖2 ≤ ‖r1‖2 + γ(S2)[‖r2‖2 + γ(S1)‖e1‖2] gives ‖e1‖2 ≤ (‖r1‖2 + γ(S2)‖r2‖2)/(1 − γ(S2)γ(S1)). γ(S2)γ(S1) < 1, ‖r1‖2 < ∞, ‖r2‖2 < ∞ give ‖e1‖2 < ∞. Similarly we get ‖e2‖2 ≤ (‖r2‖2 + γ(S1)‖r1‖2)/(1 − γ(S1)γ(S2)), so also e2 is bounded. Note: A formal proof requires the extended space L2e, see Khalil Lecture 5 17 The Nyquist Theorem [Nyquist plot of G(s)] Small Gain Theorem can be Conservative Let f(y) = Ky in the previous example. Then the Nyquist Theorem proves stability for all K ∈ [0, ∞), while the Small Gain Theorem only proves stability for K ∈ [0, 1/2). r y G(s) f( ) G(iω) Lecture 5 19 Theorem: If G has no poles in the right half plane and the Nyquist curve G(iω), ω ∈ [0, ∞), does not encircle −1, then the closed-loop system is stable. Lecture 5 18 r y G(s) The Circle Criterion k2y f(y) k1y y k1 k2 f( ) G(iω) Theorem: Assume that G(s) has no poles in the right half plane, and k1 ≤ f(y)/y ≤ k2, ∀y ≠ 0, f(0) = 0. If the Nyquist curve of G(s) does not encircle or intersect the circle defined by the points −1/k1 and −1/k2, then the closed-loop system is BIBO stable. Lecture 5 20

42 Example Static Nonlinear Feedback (cont'd) Ky f(y) y K G(iω) The circle is defined by −1/k1 = −∞ and −1/k2 = −1/K (the sector is [0, K], so the forbidden region degenerates into the half plane to the left of −1/K). Since min Re G(iω) = −1/4, the Circle Criterion gives that the system is BIBO stable if K ∈ (0, 4). Lecture 5 21 r1 G(s) k f( ) r2 R G̃(iω) Small Gain Theorem gives stability if |G̃(iω)| R < 1, where G̃ = G/(1 + kG) is stable (this has to be checked later). Hence, |1/G(iω) + k| > R Lecture 5 23 Proof of the Circle Criterion Let k = (k1 + k2)/2, f̃(y) = f(y) − ky, and r̃1 = r1 − k r2: |f̃(y)| ≤ (k2 − k1)/2 |y| =: R |y|, ∀y, f̃(0) = 0 [loop transformation: the feedback loop with G(s) and f( ) is redrawn as G̃ = G/(1 + kG) in feedback with f̃( )] Lecture 5 22 The curve 1/G(iω) and the circle {z ∈ C : |z + k| ≤ R}, mapped through z → 1/z, give the result (the circle through −1/k1 and −1/k2). Note that G̃ = G/(1 + kG) is stable since −1/k is inside the circle. G(iω) Note that G(s) may have poles on the imaginary axis, e.g., integrators are allowed Lecture 5 24
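The number min Re G(iω) = −1/4 in the example can be verified numerically for G(s) = 2/(s + 1)² (sketch; the frequency grid is illustrative). With the sector [0, K] the degenerate circle is the half plane to the left of −1/K, so the Nyquist curve avoids it exactly when K < −1/min Re G(iω) = 4:

```python
def G(w):
    s = complex(0.0, w)
    return 2.0 / (s + 1.0) ** 2

# Sweep the Nyquist curve and record its leftmost real part
min_re = min(G(k * 0.001).real for k in range(200000))
print(min_re)         # ≈ -0.25, attained at w = sqrt(3)
print(-1.0 / min_re)  # ≈ 4.0, the sector bound given by the Circle Criterion
```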

43 Passivity and BIBO Stability The main result: Feedback interconnections of passive systems are passive, and BIBO stable (under some additional mild criteria) Lecture 5 25 Passive System u y S Definition: Consider signals u, y : [, T ] R m. The system S is passive if y, u T, for all T > and all u and strictly passive if there exists ǫ > such that y, u T ǫ( y 2 T + u 2 T ), for all T > and all u Warning: There exist many other definitions for strictly passive Lecture 5 27 Scalar Product Scalar product for signals y and u y, u T = T y T (t)u(t)dt u y S If u and y are interpreted as vectors then y, u T = y T u T cos φ y T = y, y T - length of y, φ - angle between u and y Cauchy-Schwarz Inequality: y, u T y T u T Example: u = sin t and y = cos t are orthogonal if T = kπ, because cos φ = y, u T y T u T = Lecture minute exercise: Is the pure delay system y(t) = u(t θ) passive? Consider for instance the input u(t) = sin ((π/θ)t). Lecture 5 28

44 Example Passive Electrical Components u(t) = Ri(t) : u, i T = i = C du dt : u, i T = u = L di dt : u, i T = T T T Ri 2 (t)dt R i, i T u(t)c du dt dt = Cu2 (T ) 2 L di dt i(t)dt = Li2 (T ) 2 Lecture 5 29 Passivity of Linear Systems Theorem: An asymptotically stable linear system G(s) is passive if and only if Re G(iω), ω > It is strictly passive if and only if there exists ǫ > such that Re G(iω ǫ), ω > Example: G(s) = s + G(s) = s is strictly passive, is passive but not strictly passive G(iω) Lecture 5 3 Feedback of Passive Systems is Passive r e y S y2 e2 r2 S2 Lemma: If S and S2 are passive then the closed-loop system from (r, r2) to (y, y2) is also passive. Proof: y, e T = y, e T + y2, e2 T = y, r y2 T + y2, r2 + y T = y, r T + y2, r2 T = y, r T Hence, y, r T if y, e T and y2, e2 T. Lecture 5 3 A Strictly Passive System Has Finite Gain u y S Lemma: If S is strictly passive then γ(s) = sup u L 2 y 2 <. u 2 Proof: ǫ( y 2 + u 2 ) y, u y u = y 2 u 2 Hence, ǫ y 2 2 y 2 u 2, so y 2 ǫ u 2 Lecture 5 32

45 The Passivity Theorem r e y S y2 e2 r2 S2 Theorem: If S is strictly passive and S2 is passive, then the closed-loop system is BIBO stable from r to y. Lecture 5 33 The Passivity Theorem is a Small Phase Theorem r e y S y2 e2 r2 S2 φ φ2 S2 passive cos φ2 φ2 π/2 S strictly passive cos φ > φ < π/2 Lecture 5 35 Proof S strictly passive and S2 passive give ǫ ( y 2 T + e 2 T ) y, e T + y2, e2 T = y, r T Therefore y 2 T + r y2, r y2 T ǫ y, r T or y 2 T + y2 2 T 2 y2, r2 T + r 2 T ǫ y, r T Hence y 2 T 2 y2, r2 T + ( ǫ y, r T 2 + ) y T r T ǫ Let T and the result follows. Lecture 5 34 G(s) 2 minute exercise: Apply the Passivity Theorem and compare it with the Nyquist Theorem. What about conservativeness? [Compare the discussion on the Small Gain Theorem.] Lecture 5 36

46 Example Gain Adaptation Applications in telecommunication channel estimation and in noise cancellation etc. u θ Process G(s) y θ(t) G(s) ym Model Adaptation law: dθ dt = cu(t)[y m(t) y(t)], c >. Lecture 5 37 Gain Adaptation is BIBO Stable (θ θ )u ym y G(s) θ θ c s u S S is passive (see exercises). If G(s) is strictly passive, the closed-loop system is BIBO stable Lecture 5 39 Gain Adaptation Closed-Loop System u θ G(s) θ(t) G(s) y c θ s ym Lecture 5 38 Simulation of Gain Adaptation Let G(s) = s +, c =, u = sin t, and θ() =. 2 y, ym θ Lecture 5 4
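The simulation on this page can be reproduced with a forward-Euler sketch. The true process gain θ* = 2 and the initial estimate θ(0) = 0 are assumed values (they are not legible on the slide), and the sign of the update is taken so that the parameter error decreases, consistent with the passivity argument:

```python
import math

theta_star = 2.0          # assumed true gain of the process theta* G(s)
c, dt, T = 1.0, 1e-3, 60.0
y = ym = theta = 0.0      # theta(0) = 0 assumed
t = 0.0
while t < T:
    u = math.sin(t)
    y  += dt * (-y  + theta_star * u)   # process: theta* G(s) u, G = 1/(s+1)
    ym += dt * (-ym + theta      * u)   # adjustable model: theta(t) G(s) u
    theta += dt * c * u * (y - ym)      # gain-adaptation law
    t += dt

print(theta)   # approaches theta_star = 2
```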

47 Storage Function Consider the nonlinear control system ẋ = f(x, u), y = h(x) A storage function is a C function V : R n R such that V () = and V (x), x V (x) u T y, x, u Remark: V (T ) represents the stored energy in the system V (x(t )) }{{} stored energy at t = T T y(t)u(t)dt }{{} absorbed energy + V (x()) }{{} stored energy at t =, T > Lecture 5 4 Lyapunov vs. Passivity Storage function is a generalization of Lyapunov function Lyapunov idea: Energy is decreasing V Passivity idea: Increase in stored energy Added energy V u T y Lecture 5 43 Storage Function and Passivity Lemma: If there exists a storage function V for a system ẋ = f(x, u), y = h(x) with x() =, then the system is passive. Proof: For all T >, y, u T = T y(t)u(t)dt V (x(t )) V (x()) = V (x(t )) Lecture 5 42 Example KYP Lemma Consider an asymptotically stable linear system ẋ = Ax + Bu, y = Cx Assume there exists positive definite matrices P, Q such that A T P + P A = Q, B T P = C Consider V =.5x T P x. Then V =.5(ẋ T P x + x T P ẋ) =.5x T (A T P + P A)x + ub T P x =.5x T Qx + uy < uy, x and hence the system is strictly passive. This fact is part of the Kalman-Yakubovich-Popov lemma. Lecture 5 44
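For a concrete scalar instance of the KYP lemma, take ẋ = −x + u, y = x: then P = 1, Q = 2 satisfy AᵀP + PA = −Q and BᵀP = C, so V(x) = x²/2 is a storage function. The dissipation inequality ⟨y, u⟩_T ≥ V(x(T)) − V(x(0)) can be checked in simulation (sketch; the test input is arbitrary):

```python
import math

# Scalar system xdot = -x + u, y = x, storage function V = x^2/2
dt, T = 1e-3, 20.0
x, t, inner = 0.0, 0.0, 0.0
V0 = 0.5 * x * x
while t < T:
    u = math.sin(1.3 * t)     # arbitrary test input
    inner += x * u * dt       # running scalar product <y, u>_T with y = x
    x += dt * (-x + u)
    t += dt
VT = 0.5 * x * x

print(inner, VT - V0)   # absorbed energy exceeds the increase in stored energy
```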

48 Today s Goal You should be able to derive the gain of a system analyze stability using Small Gain Theorem Circle Criterion Passivity Lecture 5 45

49 EL2620 Nonlinear Control Lecture 6 Describing function analysis Lecture 6 1 Motivating Example r e u y G(s) 2 y u G(s) = 4/(s(s + 1)²) and u = sat e give a stable oscillation. How can the oscillation be predicted? Lecture 6 3 Today's Goal You should be able to Derive describing functions for static nonlinearities Analyze existence and stability of periodic solutions by describing function analysis Lecture 6 2 A Frequency Response Approach Nyquist / Bode: A (linear) feedback system will have sustained oscillations (center) if the loop gain is −1 at the frequency where the phase lag is 180° But, can we talk about the frequency response, in terms of gain and phase lag, of a static nonlinearity? Lecture 6 4

50 Fourier Series A periodic function u(t) = u(t + T ) has a Fourier series expansion u(t) = a 2 + n= = a 2 + n= (an cos nωt + bn sin nωt) a 2 n + b 2 n sin[nωt + arctan(an/bn)] where ω = 2π/T and an(ω) = 2 T T u(t) cos nωt dt, bn(ω) = 2 T T u(t) sin nωt dt Note: Sometimes we make the change of variable t φ/ω Lecture 6 5 Key Idea r e u y N.L. G(s) e(t) = A sin ωt gives u(t) = n= a 2 n + b 2 n sin[nωt + arctan(an/bn)] If G(inω) G(iω) for n 2, then n = suffices, so that y(t) G(iω) a 2 + b 2 sin[ωt + arctan(a/b) + arg G(iω)] That is, we assume all higher harmonics are filtered out by G Lecture 6 7 The Fourier Coefficients are Optimal The finite expansion ûk(t) = a k 2 + n= (an cos nωt + bn sin nωt) solves min û 2 T T [ u(t) û k(t) ] 2 dt Lecture 6 6 Definition of Describing Function The describing function is N(A, ω) = b (ω) + ia(ω) A e(t) u(t) N.L. e(t) û(t) N(A, ω) If G is low pass and a =, then û(t) = N(A, ω) A sin[ωt + arg N(A, ω)] can be used instead of u(t) to analyze the system. Amplitude dependent gain and phase shift! Lecture 6 8

51 Describing Function for a Relay u e e u H With u(φ) = H sgn(sin φ), φ = 2πt/T: a1 = (1/π) ∫₀^2π u(φ) cos φ dφ = 0, b1 = (1/π) ∫₀^2π u(φ) sin φ dφ = (2/π) ∫₀^π H sin φ dφ = 4H/π The describing function for a relay is thus N(A) = (b1(ω) + i a1(ω))/A = 4H/(πA) Lecture 6 9 Existence of Periodic Solutions e u y f( ) G(s) −1/N(A) G(iω) A Proposal: sustained oscillations if loop gain −1 and phase lag 180°: G(iω)N(A) = −1 ⟺ G(iω) = −1/N(A) The intersections of the curves G(iω) and −1/N(A) give ω and A for a possible periodic solution. Lecture 6 11 Odd Static Nonlinearities Assume f( ) and g( ) are odd (i.e. f(−e) = −f(e)) static nonlinearities with describing functions Nf and Ng. Then, Im Nf(A, ω) = 0, Nf(A, ω) = Nf(A), Nαf(A) = αNf(A), Nf+g(A) = Nf(A) + Ng(A) Lecture 6 10 Periodic Solutions in Relay System r e u y G(s) −1/N(A) G(iω) G(s) = 3/(s + 1)³ with feedback u = −sgn y No phase lag in f( ), arg G(iω) = −π for ω = √3 ≈ 1.7 |G(i√3)| = 3/8 = 1/N(A) = πA/4 ⟹ A = 12/(8π) ≈ 0.48 Lecture 6 12
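The relay Fourier coefficient b1 = 4H/π can be confirmed by numerical integration (sketch; H and the number of grid points are illustrative):

```python
import math

H = 2.0                 # relay amplitude (illustrative)
n = 100000
b1 = 0.0
for k in range(n):
    phi = 2 * math.pi * (k + 0.5) / n
    u = H if math.sin(phi) > 0 else -H        # relay output for e = A sin(phi)
    b1 += u * math.sin(phi) * (2 * math.pi / n)
b1 /= math.pi            # b1 = (1/pi) * integral of u(phi) sin(phi) dphi

print(b1, 4 * H / math.pi)   # the two agree, so N(A) = 4H/(pi A)
```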

52 The prediction via the describing function agrees very well with the true oscillations:.5 u y Note that G filters out almost all higher-order harmonics. Lecture 6 3 a = π b = π = 4A π = A π 2π 2π ( φ u(φ) cos φ dφ = u(φ) sin φ dφ = 4 π sin 2 φ dφ + 4D π ) 2φ + sin 2φ π/2 π/2 φ u(φ) sin φ dφ sin φ dφ Lecture 6 5 Describing Function for a Saturation u H D D e.5 e u H φ Let e(t) = A sin ωt = A sin φ. First set H = D. Then for φ (, π) { A sin φ, φ (, φ u(φ) = ) (π φ, π) D, φ (φ, π φ) where φ = arcsin D/A. Lecture 6 4 Hence, if H = D, then N(A) = ( ) 2φ + sin 2φ π If H D, then the rule Nαf(A) = αnf(a) gives N(A) = H ( ) 2φ + sin 2φ Dπ N(A) for H = D = Lecture 6 6
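The saturation describing function N(A) = (1/π)(2φ0 + sin 2φ0) (for H = D = 1, with φ0 = arcsin(1/A)) can be cross-checked against a direct numerical computation of b1/A:

```python
import math

def N_saturation(A):
    """Analytic describing function of the unit saturation (H = D = 1), A >= 1."""
    phi0 = math.asin(1.0 / A)
    return (2 * phi0 + math.sin(2 * phi0)) / math.pi

def N_numeric(A, n=100000):
    """b1/A computed by integrating sat(A sin phi) * sin(phi) numerically."""
    b1 = 0.0
    for k in range(n):
        phi = 2 * math.pi * (k + 0.5) / n
        u = max(-1.0, min(1.0, A * math.sin(phi)))   # unit saturation
        b1 += u * math.sin(phi) * (2 * math.pi / n)
    return b1 / (math.pi * A)

print(N_saturation(2.0), N_numeric(2.0))   # the two agree; also N(1) = 1
```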

53 5 minute exercise: What oscillation amplitude and frequency do the describing function analysis predict for the Motivating Example? Lecture 6 7 Stability of Periodic Solutions Ω G(Ω) /N(A) Assume that G(s) is stable. If G(Ω) encircles the point /N(A), then the oscillation amplitude is increasing. If G(Ω) does not encircle the point /N(A), then the oscillation amplitude is decreasing. Lecture 6 9 The Nyquist Theorem KG(s) G(iω) /K Assume that G is stable, and K is a positive gain. If G(iω) goes through the point /K the closed-loop system displays sustained oscillations If G(iω) encircles the point /K, then the closed-loop system is unstable (growing amplitude oscillations). If G(iω) does not encircle the point /K, then the closed-loop system is stable (damped oscillations) Lecture 6 8 An Unstable Periodic Solution G(Ω) /N(A) An intersection with amplitude A is unstable if A < A leads to decreasing amplitude and A > A leads to increasing. Lecture 6 2
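The 5-minute exercise can be answered numerically for the motivating example G(s) = 4/(s(s + 1)²), assuming a unit saturation (H = D = 1). The phase crossing arg G(iω) = −180° is at ω = 1, where |G(i)| = 2, and solving N(A) = 1/2 gives the predicted amplitude:

```python
import math

def G(w):
    s = complex(0.0, w)
    return 4.0 / (s * (s + 1.0) ** 2)

def N_sat(A):
    # describing function of the unit saturation, A >= 1
    phi0 = math.asin(1.0 / A)
    return (2 * phi0 + math.sin(2 * phi0)) / math.pi

# 1) frequency where G(iw) crosses the negative real axis (Im G = 0)
lo, hi = 0.5, 2.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    (lo, hi) = (mid, hi) if G(mid).imag < 0 else (lo, mid)
w_pred = 0.5 * (lo + hi)

# 2) amplitude where -1/N(A) meets the curve: N(A) = 1/|G(i w_pred)|
target = 1.0 / abs(G(w_pred))
a_lo, a_hi = 1.0, 10.0
for _ in range(60):
    A = 0.5 * (a_lo + a_hi)
    (a_lo, a_hi) = (A, a_hi) if N_sat(A) > target else (a_lo, A)
A_pred = 0.5 * (a_lo + a_hi)

print(w_pred)   # ≈ 1 rad/s
print(A_pred)   # ≈ 2.5
```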

54 Stable Periodic Solution in Relay System r e u y G(s) G(iω) /N(A) G(s) = (s + )2 (s + ) 3 with feedback u = sgn y gives one stable and one unstable limit cycle. The left most intersection corresponds to the stable one. Lecture 6 2 Describing Function for a Quantizer u e D e u Let e(t) = A sin ωt = A sin φ. Then for φ (, π) {, φ (, φ u(φ) = ), φ (φ, π φ) where φ = arcsin D/A. Lecture 6 23 Automatic Tuning of PID Controller Period and amplitude of relay feedback limit cycle can be used for autotuning: PID Σ A u Process y T Relay y u 5 Time Lecture 6 22 a = π b = π 2π 2π u(φ) cos φ dφ = u(φ) sin φdφ = 4 π π/2 φ sin φdφ = 4 π cos φ = 4 π D 2 /A 2 N(A) = {, A < D 4 D 2 /A 2, A D πa Lecture 6 24
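The quantizer/deadzone describing function N(A) = 4√(1 − D²/A²)/(πA) for A ≥ D can likewise be checked numerically (D, A and the unit output levels are illustrative):

```python
import math

D, A = 1.0, 2.0          # deadzone level and input amplitude, A > D
n = 100000
b1 = 0.0
for k in range(n):
    phi = 2 * math.pi * (k + 0.5) / n
    e = A * math.sin(phi)
    u = 0.0 if abs(e) < D else math.copysign(1.0, e)   # unit output levels
    b1 += u * math.sin(phi) * (2 * math.pi / n)
b1 /= math.pi

N_num = b1 / A
N_formula = 4.0 / (math.pi * A) * math.sqrt(1.0 - D * D / (A * A))
print(N_num, N_formula)   # the two agree
```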

55 Plot of Describing Function Quantizer N(A) for D = Notice that N(A).3/A for large amplitudes Lecture 6 25 Accuracy of Describing Function Analysis Control loop with friction F = sgn y: F Friction y ref C G y Corresponds to G s(s z) + GC = s 3 + 2s 2 + 2s + with feedback u = sgn y The oscillation depends on the zero at s = z. Lecture 6 27 Describing Function Pitfalls Describing function analysis can give erroneous results. A DF may predict a limit cycle even if one does not exist. A limit cycle may exist even if the DF does not predict it. The predicted amplitude and frequency are only approximations and can be far from the true values. Lecture y z = / y z = 4/3.2 z = 4/3.8 z = / DF gives period times and amplitudes (T, A) = (.4,.) and (7.3,.23), respectively. Accurate results only if y is close to sinusoidal! Lecture 6 28

56 2 minute exercise: What is N(A) for f(x) = x 2? Lecture 6 29 Analysis of Oscillations A Summary Time-domain: Poincaré maps and Lyapunov functions Rigorous results but only for simple examples Hard to use for large problems Frequency-domain: Describing function analysis Approximate results Powerful graphical methods Lecture 6 3 Harmonic Balance e(t) = A sin ωt u(t) f( ) A few more Fourier coefficients in the truncation ûk(t) = a k 2 + n= (an cos nωt + bn sin nωt) may give much better result. Describing function corresponds to k = and a =. Example: f(x) = x 2 gives u(t) = ( cos 2ωt)/2. Hence by considering a = and a2 = /2 we get the exact result. Lecture 6 3 Today s Goal You should be able to Derive describing functions for static nonlinearities Analyze existence and stability of periodic solutions by describing function analysis Lecture 6 32

57 EL262 Nonlinear Control Lecture 7 Compensation for saturation (anti-windup) Friction models Compensation for friction Lecture 7 The Problem with Saturating Actuator r e v PID u G y The feedback path is broken when u saturates Open loop behavior! Leads to problem when system and/or the controller are unstable Example: I-part in PID Recall: C PID (s) = K ( + Tis + T ds ) Lecture 7 3 Today s Goal You should be able to analyze and design Anti-windup for PID and state-space controllers Compensation for friction Lecture 7 2 Example Windup in PID Controller Time - u y PID controller without (dashed) and with (solid) anti-windup Lecture 7 4

58 Anti-Windup for PID Controller Anti-windup (a) with actuator output available and (b) without (a) y KT d s Actuator e v u K K T i T t s + e s (b) y KT d s Actuator model Actuator e v u K K T i + T t s e s Lecture 7 5 Anti-Windup for Observer-Based State Feedback Controller x m y Observer State feedback x ˆ Σ L Actuator sat v ˆx = Aˆx + B sat v + K(y C ˆx) v = L(xm ˆx) ˆx is estimate of process state, xm desired (model) state Need actuator model if sat v is not measurable Lecture 7 7 Anti-Windup is Based on Tracking When the control signal saturates, the integration state in the controller tracks the proper state The tracking time Tt is the design parameter of the anti-windup Common choices of Tt: Tt = Ti Tt = TiTd Remark: If < Tt Ti, then the integrator state becomes sensitive to the instances when es : t [ Ke(τ) I(t) = + e ] s(τ) dτ t es(τ) dτ Ti Tt Tt Lecture 7 6 State-space controller: Anti-Windup for General State-Space Controller ẋc = F xc + Gy u = Cxc + Dy Windup possible if F unstable and u saturates Idea: Rewrite representation of control law from (a) to (b) with the same input output relation, but where the unstable SA is replaced by a stable SB. If u saturates, then (b) behaves better than (a). (a) (b) y y u u S A S B Lecture 7 8
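The tracking idea can be illustrated with a small Euler simulation (a sketch, not the slides' example: an assumed integrator plant with actuator limits |u| ≤ 0.1, a PI controller with K = Ti = 1, and Tt = Ti for the anti-windup; Tt = ∞ disables it). The wound-up integrator produces a large overshoot, the tracking anti-windup a small one:

```python
def simulate(Tt):
    """PI control of an integrator plant with input saturation +-0.1.
    Tt is the anti-windup tracking time; Tt = inf means no anti-windup."""
    K, Ti = 1.0, 1.0
    dt, T = 1e-3, 30.0
    r = 1.0
    y = I = t = y_max = 0.0
    while t < T:
        e = r - y
        v = K * e + I                          # controller output
        u = max(-0.1, min(0.1, v))             # saturated actuator
        I += dt * (K * e / Ti + (u - v) / Tt)  # integrator with tracking term
        y += dt * u                            # integrator plant
        y_max = max(y_max, y)
        t += dt
    return y_max

over_no_aw = simulate(float('inf')) - 1.0
over_aw = simulate(1.0) - 1.0
print(over_no_aw, over_aw)   # anti-windup gives the much smaller overshoot
```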

59 Mimic the observer-based controller: ẋc = F xc + Gy + K(u Cxc Dy) = (F KC)xc + (G KD)y + Ku Choose K such that F = F KC has desired (stable) eigenvalues. Then use controller ẋc = Fxc + Gy + Ku u = sat(cxc + Dy) where G = G KD. Lecture 7 9 Controllers with Stable Zeros Most controllers are minimum phase, i.e. have zeros strictly in LHP ẋc = F xc + Gy u = Cxc + Dy u= ẋc = zero dynamics {}}{ (F GC/D) xc y = C/Dxc Thus, choose observer gain K = G/D F KC = F GC/D and the eigenvalues of the observer based controller becomes equal to the zeros of F (s) when u saturates Note that this implies G KD = in the figure on the previous slide, and we thus obtain P-feedback with gain D under saturation. Lecture 7 State-space controller without and with anti-windup: D F y G xc s C u D F KC y G KD s xc C v sat u K Lecture 7 Controller F (s) with Stable Zeros Let D = lims F (s) and consider the feedback implementation y u Σ sat + D F (s) D It is easy to show that transfer function from y to u with no saturation equals F (s)! If the transfer function (/F (s) /D) in the feedback loop is stable (stable zeros) No stability problems in case of saturation Lecture 7 2

60 Internal Model Control (IMC) IMC: apply feedback only when system G and model Ĝ differ! C(s) r u y Q(s) G(s) Ĝ(s) ŷ Assume G stable. Note: feedback from the model error y ŷ. Design: assume Ĝ G and choose Q stable with Q G. Lecture 7 3 IMC with Static Nonlinearity Include nonlinearity in model r u Q G v Ĝ y Choose Q G. Lecture 7 5 Example Ĝ(s) = Ts + Choose Q = T s + τs +, τ < T Gives the controller F = Q QĜ F = T s + τs = T τ ( + Ts ) PI-controller! Lecture 7 4 Example (cont d) Assume r = and abuse of Laplace transform notation u = Q(y Ĝv) = T s + τs + y + τs + v if u < u max (v = u): PI controller u = (T s + ) τs If u > u max (v = ±u max ): y No integration. u = T s + τs + y ± u max τs + An alternative way to implement anti-windup! Lecture 7 6
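The algebra of the example — that F = Q/(1 − QĜ) collapses to the PI controller (Ts + 1)/(τs) — can be confirmed numerically at a few test frequencies (T and τ are illustrative values):

```python
T, tau = 2.0, 0.5

def F_imc(s):
    Ghat = 1.0 / (T * s + 1.0)
    Q = (T * s + 1.0) / (tau * s + 1.0)
    return Q / (1.0 - Q * Ghat)       # IMC controller F = Q/(1 - Q*Ghat)

def F_pi(s):
    return (T * s + 1.0) / (tau * s)  # PI controller (Ts + 1)/(tau s)

for w in (0.1, 1.0, 10.0):
    s = complex(0.0, w)
    print(abs(F_imc(s) - F_pi(s)))    # ≈ 0 at every test frequency
```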

61 Other Anti-Windup Solutions Solutions above are all based on tracking. Other solutions include: Tune controller to avoid saturation Don t update controller states at saturation Conditionally reset integration state to zero Apply optimal control theory (Lecture 2) Lecture 7 7 Friction Friction is present almost everywhere Often bad: Friction in valves and other actuators Sometimes good: Friction in brakes Sometimes too small: Earthquakes Problems: How to model friction? How to compensate for friction? How to detect friction in control loops? Lecture 7 9 Bumpless Transfer Another application of the tracking idea is in the switching between automatic and manual control modes. PID with anti-windup and bumpless transfer: u c Tm T r s + e T r y PD u c s A M u + T r Note the incremental form of the manual control mode ( u uc/tm) Lecture 7 8 Stick-Slip Motion 2 x y F f 2 Fp.4 Time.2 ẏ y x 2 Time Fp F f 2 Time Lecture 7 2

62 Position Control of Servo with Friction F Friction x r u x PID Σ ms v s Time Time Time Lecture 7 2 Friction Modeling Lecture minute exercise: Which are the signals in the previous plots? Lecture 7 22 Stribeck Effect Friction increases with decreasing velocity (for low velocities) Stribeck (92) Steady state friction: Joint Friction [Nm] Velocity [rad/sec] Lecture 7 24

63 Classical Friction Models Advanced models capture various friction phenomena better Lecture 7 25 Integral Action Integral action compensates for any external disturbance Works if friction force changes slowly (v(t) const) If friction force changes quickly, then large integral action (small Ti) necessary. May lead to stability problem F Friction xr u v x PID ms s Lecture 7 27 Friction Compensation Lubrication Integral action Dither signal Model-based friction compensation Adaptive friction compensation The Knocker Lecture 7 26 Modified Integral Action Modify the integral part to I = K t ê(τ)dτ where Ti ê(t) η e(t) Advantage: Avoid that small static error introduces oscillation Disadvantage: Error won t go to zero Lecture 7 28

64 Dither Signal Avoid sticking at v = (where there is high friction) by adding high-frequency mechanical vibration (dither) F Friction xr u v x PID ms s Cf., mechanical maze puzzle (labyrintspel) Lecture 7 29 Adaptive Friction Compensation F Friction u PID + + ms v s x Friction estimator ˆF Coulomb friction model: F = a sgn v Friction estimator: ż = ku PID sgn v â = z km v ˆF = â sgn v Lecture 7 3 Model-Based Friction Compensation For process with friction F : mẍ = u F use control signal u = u PID + ˆF where u PID is the regular control signal and ˆF an estimate of F. Possible if: An estimate ˆF F is available u and F does apply at the same point Lecture 7 3 Adaptation converges: e = a â as t Proof: de dt = dâ dt = dz dt + km d dt v = ku PID sgn v + km v sgn v = k sgn v(u PID m v) = k sgn v(f ˆF ) = k(a â) = ke Remark: Careful with d dt v at v =. Lecture 7 32

65 .5 v P-controller vref.5 PI-controller u P-controller with adaptive friction compensation Lecture 7 33 Detection of Friction in Control Loops Friction is due to wear and increases with time Q: When should valves be maintained? Idea: Monitor loops automatically and estimate friction The Knocker Coulomb friction compensation and square wave dither Typical control signal u Hägglund: Patent and Innovation Cup winner Lecture u y time Horch: PhD thesis (2) and patent Lecture 7 35 Today s Goal You should be able to analyze and design Anti-windup for PID, state-space, and polynomial controllers Compensation for friction Lecture 7 36

66 Next Lecture Backlash Quantization Lecture 7 37

67 EL262 Nonlinear Control Lecture 8 Backlash Quantization Lecture 8 Linear and Angular Backlash xout Fout Fin D D xin 2D Tin Tout θin θout Lecture 8 3 Today s Goal You should be able to analyze and design for Backlash Quantization Lecture 8 2 Backlash Backlash (glapp) is present in most mechanical and hydraulic systems increasing with wear necessary for a gearbox to work in high temperature bad for control performance sometimes inducing oscillations Lecture 8 4

68 Backlash Model ẋout = {ẋin, in contact, otherwise in contact if xout xin = D and ẋin(xin xout) >. xout θout D D xin θin Multivalued output; current output depends on history. Thus, backlash is a dynamic phenomena. Lecture 8 5 Effects of Backlash P-control of motor angle with gearbox having backlash with D =.2 θ ref e u θin θ in θ out K + st s Lecture 8 7 Alternative Model Force (Torque) D xin xout (θin θout) Not equivalent to Backlash Model Lecture No backlash K =.25,, 4.6 Backlash K =.25,, Oscillations for K = 4 but not for K =.25 or K =. Why? Note that the amplitude will decrease with decreasing D, but never vanish Lecture 8 8

69 Describing Function for Backlash If A < D then N(A) = else Re N(A) = [ π + arcsin( 2D/A) π 2 ( D + 2( 2D/A) D A A Im N(A) = 4D ( D ) πa A )] Lecture 8 9 /N(A) for D =.2: Im -/N(A) Describing Function Analysis Nyquist Diagrams 4.8 Input and output of backlash Re -/N(A) Note that /N(A) as A (physical interpretation?) Lecture 8 Imaginary Axis /N(A) K =.25 K = 4 K = Real Axis K = 4, D =.2: DF analysis: Intersection at A =.33, ω =.24 Simulation: A =.33, ω = 2π/5. =.26 Describing function predicts oscillation well Lecture 8 replacements Stability Proof for Backlash System The describing function method is only approximate. Do there exist conditions that guarantee stability (of the steady-state)? e θin θ in θ out θ ref u K + st s Note that θin and θout will not converge to zero Q: What about θin and θout? Lecture 8 2

70 Rewrite the system: θin θout θin θout BL G(s) G(s) The block BL satisfies { θout = θin in contact otherwise Lecture 8 3 Backlash Compensation Mechanical solutions Deadzone Linear controller design Backlash inverse Lecture 8 5 Homework 2 Analyze this backlash system with input output stability results: θin BL θout G(s) Passivity Theorem BL is passive Small Gain Theorem BL has gain γ(bl) = Circle Criterion BL contained in sector [, ] Lecture 8 4 Linear Controller Design Introduce phase lead compensation: θ ref e u θin θ in θ out K +st 2 +st + st s Lecture 8 6

71 F (s) = K +st 2 with T =.5, T2 = 2.: +st Nyquist Diagrams y, with/without filter with filter Imaginary Axis -8 without filter Real Axis - -2 u, with/without filter Oscillation removed! Lecture 8 7 xin(t) = u + D if u(t) > u(t ) u if u(t) < u(t ) xin(t ) otherwise If D = D then perfect compensation (xout = u) If D < D then under-compensation (decreased backlash) If D > D then over-compensation (may give oscillation) Lecture 8 9 Backlash Inverse Idea: Let xin jump ±2D when ẋout should change sign xout u xin xout D D xin Lecture 8 8 Example Perfect Compensation Motor with backlash on input in feedback with PD-controller Lecture 8 2

72 Example Under-Compensation Lecture 8 2 Quantization What precision is needed in A/D and D/A converters? (8 4 bits?) What precision is needed in computations? (8 64 bits?) u y D/A G A/D /2 Quantization in A/D and D/A converters Quantization of parameters Roundoff, overflow, underflow in computations Lecture 8 23 Example Over-Compensation Lecture 8 22 Linear Model of Quantization Model quantization error as a uniformly distributed stochastic signal e independent of u with Var(e) = e 2 fe de = /2 /2 e 2 2 de = 2 e u y u Q + y May be reasonable if is small compared to the variations in u But, added noise can never affect stability while quantization can! Lecture 8 24

73 Describing Function for Deadzone Relay u e D e u -.8 Lecture 6 N(A) = {, A < D 4 D 2 /A 2, A > D πa Lecture A/Delta Describing Function for Quantizer y u Q y /2 u N(A) =, A < 2 ( ) 4 n 2 2i πa 2A 2n, < A < 2n+ 2 2 i= Lecture 8 26 N (A) The maximum value is 4/π.27 attained at A.7. Predicts oscillation if Nyquist curve intersects negative real axis to the left of π/4.79 Controller with gain margin > /.79 =.27 avoids oscillation Reducing reduces only the oscillation amplitude Lecture 8 27 Example Motor with P-controller. Quantization of process output with =.2 Nyquist of linear part (K & ZOH & G(s)) intersects at.5k: Stability for K < 2 without Q. Stable oscillation predicted for K > 2/.27 =.57 with Q. r K u y e s G(s) Q s Lecture 8 28

74 (a) K =.8 Output 5 (b) K =.2 Output 5 (c) K =.6 Output 5 Time Lecture 8 29 Example /s 2 & 2nd-Order Controller 2 Imaginary axis Quantization =.2 in A/D converter: Real axis A/D F D/A G Lecture 8 3 Output.2 Output Input Time Describing function: Ay =. and T = 39 Simulation: Ay =. and T = 28 Lecture 8 3 Quantization =. in D/A converter: Output Unquantized Input Time Describing function: Au =.5 and T = 39 Simulation: Au =.5 and T = 39 Lecture 8 32

75 Quantization Compensation Improve accuracy (larger word length) Avoid unstable controller and gain margins <.3 Use the tracking idea from anti-windup to improve D/A converter Digital Analog controller D/A + Use analog dither, oversampling, and digital lowpass filter to improve accuracy of A/D converter + A/D filter decim. Lecture 8 33 Today s Goal You should now be able to analyze and design for Backlash Quantization Lecture 8 34

76

77 EL262 Nonlinear Control Lecture 9 Nonlinear control design based on high-gain control Lecture 9 History of the Feedback Amplifier New York San Francisco communication link 94. High signal amplification with low distortion was needed. f( ) f( ) r y f( ) k Feedback amplifiers were the solution! Black, Bode, and Nyquist at Bell Labs Lecture 9 3 Today s Goal You should be able to analyze and design High-gain control systems Sliding mode controllers Lecture 9 2 Linearization Through High Gain Feedback α2e f(e) r y f( ) αe e K α f(e) e α2 α + αk r y α2 + α2k r choose K /α, yields y K r Lecture 9 4

78 A Word of Caution Nyquist: high loop-gain may induce oscillations (due to dynamics)! Lecture 9 5 Remark: How to Obtain f from f using Feedback v k u s f(u) u f( ) u = k ( v f(u) ) If k > large and df/du >, then u and = k ( v f(u) ) f(u) = v u = f (v) Lecture 9 7 Inverting Nonlinearities Compensation of static nonlinearity through inversion: Controller F (s) ˆf ( ) f( ) G(s) Should be combined with feedback as in the figure! Lecture 9 6 Example Linearization of Static Nonlinearity r e u y K f( ).8.6 y(r).4.2 f(u) Linearization of f(u) = u 2 through feedback. The case K = is shown in the plot: y(r) r. Lecture 9 8
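For the static loop of the example, the output solves y = f(K(r − y)). A bisection sketch shows the closed-loop output approaching r as K grows, which is the linearization-through-high-gain effect:

```python
def closed_loop_output(r, K, f):
    """Solve y = f(K*(r - y)) on [0, r] by bisection (for f(u) = u^2,
    g(y) = y - f(K*(r - y)) is increasing on [0, r])."""
    lo, hi = 0.0, r
    for _ in range(100):
        y = 0.5 * (lo + hi)
        if y - f(K * (r - y)) < 0.0:
            lo = y
        else:
            hi = y
    return 0.5 * (lo + hi)

f = lambda u: u * u          # the nonlinearity f(u) = u^2 from the example
for K in (1.0, 10.0, 100.0):
    print(K, closed_loop_output(0.5, K, f))   # y -> r = 0.5 as K grows
```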

79 The Sensitivity Function S = ( + GF ) The closed-loop system is G cl = G + GF r y G F Small perturbations dg in G gives dg cl dg = ( + GF ) 2 dg cl = G cl + GF dg G = S dg G S is the closed-loop sensitivity to open-loop perturbations. Lecture 9 9 Example Distortion Reduction Let G =, distortion dg/g =. r y G K Choose K =. S = ( + GK).. Then dg cl = S dg G cl G. feedback amplifiers in series give total amplification G tot = (G cl ) and total distortion dg tot = ( + 3 ). G tot Lecture 9 Distortion Reduction via Feedback The feedback reduces distortion in each link. Several links give distortion-free high gain. f( ) f( ) K K Lecture 9 Transcontinental Communication Revolution The feedback amplifier was patented by Black 937. Year Channels Loss (db) No amp s Lecture 9 2

80 Sensitivity and the Circle Criterion r r y GF f( ) GF (iω) Consider a circle C := {z C : z + = r}, r (, ). GF (iω) stays outside C if + GF (iω) > r S(iω) r Then, the Circle Criterion gives stability if + r f(y) y r Lecture 9 3 On Off Control On off control is the simplest control strategy. Common in temperature control, level control etc. r e u y G(s) The relay corresponds to infinitely high gain at the switching point. Lecture 9 5 Small Sensitivity Allows Large Uncertainty If S(iω) is small, we can choose r large (close to one). This corresponds to a large sector for f( ). Hence, S(iω) small implies low sensitivity to nonlinearities. k2y f(y) ky k = + r k2 = r y Lecture 9 4 A Control Design Idea Assume V (x) = x T P x, P = P T >, represents the energy of ẋ = Ax + Bu, u [, ] Choose u such that V decays as fast as possible: V = x T (A T P + P A)x + 2B T P xu is minimized if u = sgn(b T P x) (Notice that V = a + bu, i.e. just a segment of line in u, < u <. Hence the lowest value is at an endpoint, depending on the sign of the slope b. ) B T QP x = ẋ = Ax B ẋ = Ax + B Lecture 9 6

81 ẋ = { f + (x), σ(x) > f (x), σ(x) < Sliding Modes f + σ(x) > f σ(x) < The sliding mode is ẋ = αf + + ( α)f, where α satisfies αf n + + ( α)f n = for the normal projections of f +, f f αf + + ( α)f f + The sliding surface is S = {x : σ(x) = }. Lecture 9 7 For small x2 we have dx2 ẋ2(t) x, x x2 > dx ẋ2(t) x +, dx2 + x x2 < dx This implies the following behavior x2 > x2 = x = x = x2 < Lecture 9 9 Example ẋ = ( ) x + ( ) u = Ax + Bu u = sgn σ(x) = sgn x2 = sgn(cx) is equivalent to ẋ = Ax B, x 2 > Ax + B, x2 < Lecture 9 8 Sliding Mode Dynamics The dynamics along the sliding surface S is obtained by setting u = u eq [, ] such that x(t) stays on S. u eq is called the equivalent control. Lecture 9 2

82 Example (cont d) Finding u = u eq such that σ(x) = ẋ2 = on σ(x) = x2 = gives = ẋ2 = x x2 }{{} = + u eq = x + u eq u eq = x Insert this in the equation for ẋ: ẋ = x2 }{{} = + u eq = x gives the dynamics on the sliding surface S = {x : x2 = }. Lecture 9 2 Equivalent Control for Linear System ẋ = Ax + Bu u = sgn σ(x) = sgn(cx) Assume CB >. The sliding surface S = {x : Cx = } so = σ(x) = dσ ( ) f(x) + g(x)u = C ( ) Ax + Bu eq dx gives u eq = CAx/CB. Example (cont d): For the example: u eq = CAx/CB = ( ) x = x, because σ(x) = x2 =. (Same result as before.) Lecture 9 23 Deriving the Equivalent Control Assume ẋ = f(x) + g(x)u u = sgn σ(x) has a stable sliding surface S = {x : σ(x) = }. Then, for x S, = σ(x) = dσ dx dx dt = dσ ( ) f(x) + g(x)u dx The equivalent control is thus given by u eq = ( dσ dx g(x) ) dσ dx f(x) if the inverse exists. Lecture 9 22 Sliding Dynamics The dynamics on S = {x : Cx = } is given by ( ẋ = Ax + Bu eq = I CB BC ) Ax, under the constraint Cx =, where the eigenvalues of (I BC/CB)A are equal to the zeros of sg(s) = sc(si A) B. Remark: The condition that Cx = corresponds to the zero at s =, and thus this dynamic disappears on S = {x : Cx = }. Lecture 9 24

83 Proof ẋ = Ax + Bu y = Cx ẏ = CAx + CBu u = CB CAx CB ẏ ẋ = ( I CB BC ) Ax CB Bẏ Hence, the transfer function from ẏ to u equals CB + CB CA(sI ((I CB BC)A)) CB B but this transfer function is also /(sg(s)) Hence, the eigenvalues of (I BC/CB)A are equal to the zeros of sg(s). Lecture 9 25 Closed-Loop Stability Consider V (x) = σ 2 (x)/2 with σ(x) = p T x. Then, ( ) V = σ T (x) σ(x) = x T p p T f(x) + p T g(x)u With the chosen control law, we get V = µσ(x) sgn σ(x) < so x tend to σ(x) =. = σ(x) = px + + pn xn + pnxn = px (n ) n + + pn x () n + pnx () n where x (k) denote time derivative. Now p corresponds to a stable differential equation, and xn exponentially as t. The state relations xk = ẋk now give x exponentially as t. Lecture 9 27 Design of Sliding Mode Controller Idea: Design a control law that forces the state to σ(x) =. Choose σ(x) such that the sliding mode tends to the origin. Assume d dt x x2... = f(x) + g(x)u x... = f(x) + g(x)u xn xn Choose control law u = pt f(x) µ p T g(x) µ p T g(x) sgn σ(x), where µ > is a design parameter, σ(x) = p T x, and p T = ( p... pn) are the coefficients of a stable polynomial. Lecture 9 26 Time to Switch Consider an initial point x such that σ = σ(x) >. Since σ(x) σ(x) = µσ(x) sgn σ(x) it follows that as long as σ(x) > : σ(x) = µ Hence, the time to the first switch (σ(x) = ) is t s = σ µ < Note that t s as µ. Lecture 9 28
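The reaching time ts = σ(0)/μ and the sliding dynamics can be seen in a simulation sketch (an assumed double integrator ẋ1 = x2, ẋ2 = u with σ = x1 + x2, i.e. p = (1, 1) for the stable polynomial s + 1; then σ̇ = x2 + u, and u = −x2 − μ sgn σ gives σ̇ = −μ sgn σ):

```python
mu = 0.5
x1, x2 = 1.5, 0.0        # sigma(0) = 1.5, so the reaching time is 1.5/mu = 3
dt, t = 1e-4, 0.0
t_reach = None
while t < 10.0:
    sigma = x1 + x2
    if t_reach is None and abs(sigma) < 1e-3:
        t_reach = t                       # surface sigma = 0 reached
    u = -x2 - mu * (1.0 if sigma > 0 else -1.0)
    x1, x2 = x1 + dt * x2, x2 + dt * u    # double integrator (assumed plant)
    t += dt

print(t_reach)   # ≈ 3 = sigma(0)/mu
print(x1)        # on the surface x2 = -x1, so x1 decays like exp(-t)
```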

84 Example Sliding Mode Controller: Design a state-feedback controller for the second-order system ẋ = Ax + Bu, y = Cx (the matrix entries were lost in transcription). Choose p1 s + p2 = s + 1 so that σ(x) = x1 + x2. The controller is given by u = −p^T Ax/(p^T B) − µ/(p^T B) · sgn σ(x) = −2x1 − µ sgn(x1 + x2). Lecture 9 29

Phase Portrait: Simulation with µ = 0.5. [Figure: phase portrait in the (x1, x2) plane.] Note the sliding surface σ(x) = x1 + x2 = 0. Lecture 9 30

Time Plots: Initial condition x(0) = (1.5, 0)^T. [Figure: time plots of x1, x2 and u.] The simulation agrees well with the time to the first switch t_s = σ0/µ = 3 and the sliding dynamics ẏ = −y. Lecture 9 31

The Sliding Mode Controller is Robust: Assume that only a model ẋ = f(x) + ĝ(x)u of the true system ẋ = f(x) + g(x)u is known. Still, V̇ = σ(x)[ p^T(f ĝ^T − f g^T)p/(p^T ĝ) − µ (p^T g)/(p^T ĝ) · sgn σ(x) ] < 0 if sgn(p^T g) = sgn(p^T ĝ) and µ > 0 is sufficiently large. The closed-loop system is thus robust against model errors! (High-gain control with stable open-loop zeros.) Lecture 9 32

85 Comments on Sliding Mode Control: Efficient handling of model uncertainties. Often impossible to implement infinitely fast switching. Smooth versions exist through a low-pass filter or a boundary layer. Applications in robotics and vehicle control. Compare pulse-width modulated control signals. Lecture 9 33

Today's Goal: You should be able to analyze and design high-gain control systems and sliding mode controllers. Lecture 9 34

Next Lecture: Lyapunov design methods. Exact feedback linearization. Lecture 9 35

86

87 EL2620 Nonlinear Control Lecture 10: Exact feedback linearization; Input-output linearization; Lyapunov-based control design methods. Lecture 10 1

Output Feedback and State Feedback: ẋ = f(x, u), y = h(x). Output feedback: find u = k(y) such that the closed-loop system ẋ = f(x, k(h(x))) has nice properties. State feedback: find u = l(x) such that ẋ = f(x, l(x)) has nice properties. k and l may include dynamics. Lecture 10 2

Nonlinear Output Feedback Controllers: Nonlinear dynamical controller: ż = a(z, y), u = c(z). Linear dynamics, static nonlinearity: ż = Az + By, u = c(z). Linear controller: ż = Az + By, u = Cz. Lecture 10 3

Nonlinear Observers: What if x is not measurable? ẋ = f(x, u), y = h(x). Simplest observer: dx̂/dt = f(x̂, u). Feedback correction, as in the linear case: dx̂/dt = f(x̂, u) + K(y − h(x̂)). Choices of K: linearize f at x̂ = 0 and find K for the linearization; or linearize f at x̂(t) and find K = K(x̂) for the linearization. The second case is called the Extended Kalman Filter. Lecture 10 4

88 Some state feedback control approaches: Exact feedback linearization; Input-output linearization; Lyapunov-based design (backstepping control). Lecture 10 5

Exact Feedback Linearization: Consider the nonlinear control-affine system ẋ = f(x) + g(x)u. Idea: use a state-feedback controller u(x) to make the system linear. Example 1: For ẋ = cos x − x³ + u, the state-feedback controller u(x) = −cos x + x³ − kx + v yields the linear system ẋ = −kx + v. Lecture 10 6

Another Example: ẋ1 = a sin x2, ẋ2 = −x1² + u. How do we cancel the term sin x2? Perform a transformation of states into linearizable form: z1 = x1, z2 = ẋ1 = a sin x2 yields ż1 = z2, ż2 = a cos x2 · (−x1² + u), and the linearizing control becomes u(x) = x1² + v/(a cos x2), x2 ∈ (−π/2, π/2). Lecture 10 7

Diffeomorphisms: A nonlinear state transformation z = T(x), with T invertible for x in the domain of interest and T and T⁻¹ continuously differentiable, is called a diffeomorphism. Definition: A nonlinear system ẋ = f(x) + g(x)u is feedback linearizable if there exists a diffeomorphism T whose domain contains the origin and transforms the system into the form ẋ = Ax + Bγ(x)(u − α(x)) with (A, B) controllable and γ(x) nonsingular for all x in the domain of interest. Lecture 10 8
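Example 1 can be checked in simulation: with the linearizing feedback, the nonlinear plant should behave exactly like ẋ = −kx + v. The sketch below uses forward Euler (step size and gains are arbitrary choices) and compares against the closed-form linear response.

```python
# Sketch: with u = -cos(x) + x^3 - k*x + v, the plant xdot = cos(x) - x^3 + u
# reduces to xdot = -k*x + v. Euler simulation vs. the closed-form solution.
import math

k, v, dt, T = 1.0, 0.0, 1e-4, 5.0
x, x0 = 0.8, 0.8
for i in range(int(T / dt)):
    u = -math.cos(x) + x**3 - k * x + v   # linearizing feedback
    x += dt * (math.cos(x) - x**3 + u)    # true plant

x_lin = x0 * math.exp(-k * T)             # solution of xdot = -k*x
assert abs(x - x_lin) < 1e-3
print(x, x_lin)
```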

89 Exact vs. Input-Output Linearization: The example again, but now with an output: ẋ1 = a sin x2, ẋ2 = −x1² + u, y = x2. The control law u = x1² + v/(a cos x2) yields ż1 = z2, ż2 = v, y = arcsin(z2/a), which is nonlinear in the output. If we want a linear input-output relationship we could instead use u = x1² + v to obtain ẋ1 = a sin x2, ẋ2 = v, y = x2, which is linear from v to y. Caution: this control has rendered the state x1 unobservable. Lecture 10 9

Input-Output Linearization: Use state feedback u(x) to make the control-affine system ẋ = f(x) + g(x)u, y = h(x) linear from the input v to the output y. The general idea: differentiate the output y = h(x) p times until the control u appears explicitly in y^(p), and then determine u so that y^(p) = v, i.e., G(s) = 1/s^p. Lecture 10 10

Example: controlled van der Pol equation: ẋ1 = x2, ẋ2 = −x1 + ǫ(1 − x1²)x2 + u, y = x1. Differentiate the output: ẏ = ẋ1 = x2, ÿ = ẋ2 = −x1 + ǫ(1 − x1²)x2 + u. The state feedback controller u = x1 − ǫ(1 − x1²)x2 + v gives ÿ = v. Lecture 10 11

Lie Derivatives: Consider the nonlinear SISO system ẋ = f(x) + g(x)u, x ∈ R^n, u ∈ R, y = h(x), y ∈ R. The derivative of the output is ẏ = (dh/dx)ẋ = (dh/dx)(f(x) + g(x)u) = L_f h(x) + L_g h(x)u, where L_f h(x) and L_g h(x) are Lie derivatives (L_f h is the derivative of h along the vector field of ẋ = f(x)). Repeated derivatives: L_f^k h(x) = (d(L_f^{k−1} h)/dx) f(x), L_g L_f h(x) = (d(L_f h)/dx) g(x). Lecture 10 12

90 Lie derivatives and relative degree: The relative degree p of a system is defined as the number of integrators between the input and the output (the number of times y must be differentiated for the input u to appear). A linear system Y(s)/U(s) = (b0 s^m + … + bm)/(s^n + a1 s^{n−1} + … + an) has relative degree p = n − m. A nonlinear system has relative degree p if L_g L_f^i h(x) = 0, i = 0, …, p − 2, and L_g L_f^{p−1} h(x) ≠ 0 for all x ∈ D. Lecture 10 13

Example: The controlled van der Pol equation ẋ1 = x2, ẋ2 = −x1 + ǫ(1 − x1²)x2 + u, y = x1. Differentiating the output: ẏ = ẋ1 = x2, ÿ = ẋ2 = −x1 + ǫ(1 − x1²)x2 + u. Thus, the system has relative degree p = 2. Lecture 10 14

The input-output linearizing control: Consider an nth order SISO system with relative degree p: ẋ = f(x) + g(x)u, y = h(x). Differentiating the output repeatedly: ẏ = (dh/dx)ẋ = L_f h(x) + L_g h(x)u (where L_g h(x) = 0), …, y^(p) = L_f^p h(x) + L_g L_f^{p−1} h(x)u. Lecture 10 15

Hence the state-feedback controller u = (L_g L_f^{p−1} h(x))⁻¹ (−L_f^p h(x) + v) results in the linear input-output system y^(p) = v. Lecture 10 16
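The relative-degree test can be evaluated numerically. The sketch below approximates Lie derivatives of the van der Pol example with central finite differences (the finite-difference scheme and test point are my own choices) and checks L_g h = 0 while L_g L_f h ≠ 0, confirming p = 2.

```python
# Sketch: numeric Lie derivatives for the van der Pol example with y = x1.
# lie(h, vf, x) approximates (dh/dx) * vf(x) by central differences.
eps_vdp = 1.0  # epsilon parameter of the van der Pol term (arbitrary)

def f(x):  # drift vector field
    return [x[1], -x[0] + eps_vdp * (1 - x[0] ** 2) * x[1]]

def g(x):  # input vector field
    return [0.0, 1.0]

def lie(h, vf, x, d=1e-6):
    total = 0.0
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += d
        xm[i] -= d
        total += (h(xp) - h(xm)) / (2 * d) * vf(x)[i]
    return total

h = lambda x: x[0]                # output y = x1
Lfh = lambda x: lie(h, f, x)      # first Lie derivative along f

x = [0.3, -0.7]                   # an arbitrary test point
assert abs(lie(h, g, x)) < 1e-8          # L_g h = 0 -> u absent in ydot
assert abs(lie(Lfh, g, x) - 1.0) < 1e-3  # L_g L_f h = 1 != 0 -> p = 2
```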

91 Zero Dynamics: Note that the order of the linearized input-output system is p, corresponding to the relative degree of the system. Thus, if p < n, then n − p states are unobservable in y. The dynamics of the n − p states not observable in the linearized dynamics of y are called the zero dynamics. They correspond to the dynamics of the system when y is forced to be zero for all times. A system with unstable zero dynamics is called non-minimum phase (and should not be input-output linearized!). Lecture 10 17

van der Pol again: ẋ1 = x2, ẋ2 = −x1 + ǫ(1 − x1²)x2 + u. With y = x1, the relative degree is p = n = 2 and there are no zero dynamics; thus we can transform the system into ÿ = v. With y = x2, the relative degree is p = 1 < n and the zero dynamics are given by ẋ1 = 0, which is not asymptotically stable (but bounded). Lecture 10 18

Lyapunov-Based Control Design Methods: ẋ = f(x, u). Find a stabilizing state feedback u = u(x), and verify stability through a control Lyapunov function. The methods depend on the structure of f. Here we limit the discussion to back-stepping control design, which requires a certain structure of f, discussed later. Lecture 10 19

A simple introductory example: Consider ẋ = cos x − x³ + u. Apply the linearizing control u = −cos x + x³ − kx. Choose the Lyapunov candidate V(x) = x²/2: V(x) > 0, V̇ = −kx² < 0. Thus, the system is globally asymptotically stable. But the term x³ in the control law may require large control moves! Lecture 10 20

92 The same example: ẋ = cos x − x³ + u. Now try the control law u = −cos x − kx. Choose the same Lyapunov candidate V(x) = x²/2: V(x) > 0, V̇ = −x⁴ − kx² < 0. Thus, the system is also globally asymptotically stable (and V̇ is more negative). Lecture 10 21

Simulating the two controllers: Simulation with x(0) = 1. [Figure: state trajectory x and control input u for the linearizing and non-linearizing controllers.] The linearizing control is slower and uses excessive input. Thus, linearization can have a significant cost! Lecture 10 22

Back-Stepping Control Design: We want to design a state feedback u = u(x) that stabilizes ẋ1 = f(x1) + g(x1)x2, ẋ2 = u (1) at x = 0, with f(0) = 0. Idea: see the system as a cascade connection; design a controller first for the inner loop and then for the outer. [Figure: block diagram u → integrator → x2 → g(x1) → integrator → x1, with f(x1) fed back.] Lecture 10 23

Suppose the partial system ẋ1 = f(x1) + g(x1)v can be stabilized by v = φ(x1), and there exists a Lyapunov function V1 = V1(x1) such that V̇1(x1) = (dV1/dx1)(f(x1) + g(x1)φ(x1)) ≤ −W(x1) for some positive definite function W. This is a critical assumption in backstepping control! Lecture 10 24
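The cost of cancelling the useful −x³ term can be seen directly in simulation. The sketch below runs both control laws from the same (arbitrary) initial condition and compares the peak input magnitudes.

```python
# Sketch comparing the two control laws for xdot = cos(x) - x^3 + u:
# u_lin cancels the stabilizing -x^3 term (exact linearization), u_non keeps
# it. Both stabilize the origin, but u_lin demands a much larger input.
import math

def simulate(ctrl, x0=2.0, k=1.0, dt=1e-4, T=10.0):
    x, umax = x0, 0.0
    for i in range(int(T / dt)):
        u = ctrl(x, k)
        umax = max(umax, abs(u))
        x += dt * (math.cos(x) - x**3 + u)
    return x, umax

u_lin = lambda x, k: -math.cos(x) + x**3 - k * x   # linearizing
u_non = lambda x, k: -math.cos(x) - k * x          # keeps -x^3

xa, umax_lin = simulate(u_lin)
xb, umax_non = simulate(u_non)
assert abs(xa) < 1e-3 and abs(xb) < 1e-3   # both stabilize the origin
assert umax_lin > umax_non                 # linearization costs input
print(umax_lin, umax_non)
```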

93 The Trick: Equation (1) can be rewritten as ẋ1 = f(x1) + g(x1)φ(x1) + g(x1)[x2 − φ(x1)], ẋ2 = u. [Figure: equivalent block diagram with the stabilizing part f + gφ collected in the inner loop.] Lecture 10 25

Introduce the new state ζ = x2 − φ(x1) and the control v = u − φ̇: ẋ1 = f(x1) + g(x1)φ(x1) + g(x1)ζ, ζ̇ = v, where φ̇(x1) = (dφ/dx1)ẋ1 = (dφ/dx1)(f(x1) + g(x1)x2). [Figure: block diagram in the (x1, ζ) coordinates.] Lecture 10 26

Consider V2(x1, x2) = V1(x1) + ζ²/2. Then V̇2(x1, x2) = (dV1/dx1)(f(x1) + g(x1)φ(x1)) + (dV1/dx1)g(x1)ζ + ζv. Choosing v = −(dV1/dx1)g(x1) − kζ, k > 0, gives V̇2(x1, x2) ≤ −W(x1) − kζ². Hence, x = 0 is asymptotically stable for (1) with the control law u(x) = φ̇(x) + v(x). If V2 is radially unbounded, then global stability follows. Lecture 10 27

Back-Stepping Lemma: Lemma: Let z = (x1, …, x_{k−1})^T and ż = f(z) + g(z)xk, ẋk = u. Assume φ(0) = 0, f(0) = 0, that ż = f(z) + g(z)φ(z) is stable, and that V(z) is a Lyapunov function (with V̇ ≤ −W). Then u = (dφ/dz)(f(z) + g(z)xk) − (dV/dz)g(z) − (xk − φ(z)) stabilizes x = 0, with V(z) + (xk − φ(z))²/2 being a Lyapunov function. Lecture 10 28

94 Strict Feedback Systems: The Back-Stepping Lemma can be applied to stabilize systems on strict feedback form: ẋ1 = f1(x1) + g1(x1)x2, ẋ2 = f2(x1, x2) + g2(x1, x2)x3, ẋ3 = f3(x1, x2, x3) + g3(x1, x2, x3)x4, …, ẋn = fn(x1, …, xn) + gn(x1, …, xn)u, where gk ≠ 0. Note: ẋ1, …, ẋk do not depend on x_{k+2}, …, xn. Lecture 10 29

2 minute exercise: Give an example of a linear system ẋ = Ax + Bu on strict feedback form. Lecture 10 30

Back-Stepping: The Back-Stepping Lemma can be applied recursively to a system ẋ = f(x) + g(x)u on strict feedback form. Back-stepping generates stabilizing feedbacks φk(x1, …, xk) (equal to u in the Back-Stepping Lemma) and Lyapunov functions Vk(x1, …, xk) = V_{k−1}(x1, …, x_{k−1}) + [xk − φ_{k−1}]²/2 by stepping back from x1 to u (see Khalil for details). Back-stepping results in the final state feedback u = φn(x1, …, xn). Lecture 10 31

Example: Design a back-stepping controller for ẋ1 = x1² + x2, ẋ2 = x3, ẋ3 = u. Step 0: Verify strict feedback form. Step 1: Consider the first subsystem ẋ1 = x1² + x2 (with x2 viewed as the control), where φ1(x1) = −x1² − x1 stabilizes the first equation. With V1(x1) = x1²/2, the Back-Stepping Lemma gives u1 = (−2x1 − 1)(x1² + x2) − x1 − (x2 + x1² + x1) = φ2(x1, x2), V2 = x1²/2 + (x2 + x1² + x1)²/2. Lecture 10 32

95 Step 2: Applying the Back-Stepping Lemma on ẋ1 = x1² + x2, ẋ2 = x3, ẋ3 = u gives u = u2 = (dφ2/dz)(f(z) + g(z)x3) − (dV2/dz)g(z) − (x3 − φ2(z)) = (∂φ2/∂x1)(x1² + x2) + (∂φ2/∂x2)x3 − ∂V2/∂x2 − (x3 − φ2(x1, x2)), which globally stabilizes the system. Lecture 10 33
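The full back-stepping design above can be exercised in a simple Euler simulation (initial condition, step size and horizon are arbitrary). The partial derivatives of φ2 are worked out by hand from the Step 1 formula.

```python
# Sketch: simulate the back-stepping controller for
# x1' = x1^2 + x2, x2' = x3, x3' = u, and check that the state converges.
def phi2(x1, x2):
    return (-2 * x1 - 1) * (x1**2 + x2) - x1 - (x2 + x1**2 + x1)

def controller(x1, x2, x3):
    # hand-computed partial derivatives of phi2 and V2
    dphi2_dx1 = -2 * (x1**2 + x2) + (-2 * x1 - 1) * 2 * x1 - 2 - 2 * x1
    dphi2_dx2 = -2 * x1 - 2
    dV2_dx2 = x2 + x1**2 + x1      # V2 = x1^2/2 + (x2 + x1^2 + x1)^2/2
    return (dphi2_dx1 * (x1**2 + x2) + dphi2_dx2 * x3
            - dV2_dx2 - (x3 - phi2(x1, x2)))

x1, x2, x3 = 0.5, -0.2, 0.1
dt = 1e-3
for i in range(int(20.0 / dt)):
    u = controller(x1, x2, x3)
    x1, x2, x3 = (x1 + dt * (x1**2 + x2),
                  x2 + dt * x3,
                  x3 + dt * u)

assert abs(x1) < 1e-2 and abs(x2) < 1e-2 and abs(x3) < 1e-2
print(x1, x2, x3)
```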

96

97 EL2620 Nonlinear Control Lecture 11: Nonlinear controllability; Gain scheduling. Lecture 11 1

Today's Goal: You should be able to determine if a nonlinear system is controllable, and apply gain scheduling to simple examples. Lecture 11 2

Controllability: Definition: ẋ = f(x, u) is controllable if for any x0, x1 there exist T > 0 and u : [0, T] → R such that x(0) = x0 and x(T) = x1. Lecture 11 3

Linear Systems: Lemma: ẋ = Ax + Bu is controllable if and only if Wn = (B AB … A^{n−1}B) has full rank. Is there a corresponding result for nonlinear systems? Lecture 11 4

98 Controllable Linearization: Lemma: Let ż = Az + Bu be the linearization of ẋ = f(x) + g(x)u at x = 0 with f(0) = 0. If the linear system is controllable, then the nonlinear system is controllable in a neighborhood of the origin. Remark: Hence, if rank Wn = n, then there is an ǫ > 0 such that for every x1 ∈ Bǫ(0) there exists u : [0, T] → R so that x(T) = x1. A nonlinear system can be controllable even if the linearized system is not controllable. Lecture 11 5

Car Example: [Figure: car with position (x, y), heading ϕ and steering angle θ.] d/dt (x, y, ϕ, θ)^T = (0, 0, 0, 1)^T u1 + (cos(ϕ + θ), sin(ϕ + θ), sin(θ), 0)^T u2 = g1(z)u1 + g2(z)u2. Inputs: u1 steering wheel velocity, u2 forward velocity. Lecture 11 6

Linearization for u1 = u2 = 0 gives ż = Az + B1u1 + B2u2 with A = 0, B1 = (0, 0, 0, 1)^T and B2 = (cos(ϕ + θ), sin(ϕ + θ), sin(θ), 0)^T evaluated at the origin. rank Wn = rank (B AB … A^{n−1}B) = 2 < 4, so the linearization is not controllable. Still, the car is controllable! Linearization does not capture the controllability well enough. Lecture 11 7

Lie Brackets: The Lie bracket between vector fields f, g : R^n → R^n is the vector field defined by [f, g] = (∂g/∂x)f − (∂f/∂x)g. Example: f = (cos x2, x1)^T, g = (x1, 1)^T gives [f, g] = (∂g/∂x)f − (∂f/∂x)g = (cos x2, 0)^T − (−sin x2, x1)^T = (cos x2 + sin x2, −x1)^T. Lecture 11 8
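The rank statement for the car linearization can be checked directly: at the origin (ϕ = θ = 0) we have A = 0, B1 = (0,0,0,1)^T and B2 = (1,0,0,0)^T, so the controllability matrix only contains those two directions. The small Gaussian-elimination rank routine below is my own helper.

```python
# Sketch: rank of Wn = (B AB A^2B A^3B) for the car linearized at the origin.
def rank(M, tol=1e-9):
    # Gaussian-elimination rank of a list-of-rows matrix
    M = [row[:] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if abs(M[i][c]) > tol), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and abs(M[i][c]) > tol:
                fac = M[i][c] / M[r][c]
                M[i] = [a - fac * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

A = [[0.0] * 4 for _ in range(4)]
B = [[0.0, 1.0], [0.0, 0.0], [0.0, 0.0], [1.0, 0.0]]  # columns B1, B2

# Build the columns of Wn; since A = 0, all A^k B with k >= 1 vanish.
cols = [[B[i][j] for i in range(4)] for j in range(2)]
Ak = [row[:] for row in A]
for k in range(1, 4):
    cols += [[sum(Ak[i][m] * B[m][j] for m in range(4)) for i in range(4)]
             for j in range(2)]
    Ak = [[sum(Ak[i][m] * A[m][j] for m in range(4)) for j in range(4)]
          for i in range(4)]

Wn = [[col[i] for col in cols] for i in range(4)]   # 4 x 8 matrix
assert rank(Wn) == 2    # 2 < 4: the linearization is not controllable
```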

99 Lie Bracket Direction: For the system ẋ = g1(x)u1 + g2(x)u2, the control (u1, u2) = (1, 0) for t ∈ [0, ǫ), (0, 1) for t ∈ [ǫ, 2ǫ), (−1, 0) for t ∈ [2ǫ, 3ǫ), (0, −1) for t ∈ [3ǫ, 4ǫ) gives the motion x(4ǫ) = x(0) + ǫ²[g1, g2] + O(ǫ³). The system can move in the [g1, g2] direction! Lecture 11 9

Proof: 1. For t ∈ [0, ǫ], assuming ǫ small and x(0) = x0, a Taylor series yields x(ǫ) = x0 + g1(x0)ǫ + ½(dg1/dx)g1(x0)ǫ² + O(ǫ³). (1) 2. Similarly, for t ∈ [ǫ, 2ǫ], x(2ǫ) = x(ǫ) + g2(x(ǫ))ǫ + ½(dg2/dx)g2(x(ǫ))ǫ², and with x(ǫ) from (1) and g2(x(ǫ)) = g2(x0) + (dg2/dx)ǫg1(x0): x(2ǫ) = x0 + ǫ(g1(x0) + g2(x0)) + ǫ²(½(dg1/dx)(x0)g1(x0) + (dg2/dx)(x0)g1(x0) + ½(dg2/dx)(x0)g2(x0)) + O(ǫ³). Lecture 11 10

Proof, continued: 3. Similarly, for t ∈ [2ǫ, 3ǫ], x(3ǫ) = x0 + ǫg2 + ǫ²((dg2/dx)g1 − (dg1/dx)g2 + ½(dg2/dx)g2) + O(ǫ³). 4. Finally, for t ∈ [3ǫ, 4ǫ], x(4ǫ) = x0 + ǫ²((dg2/dx)g1 − (dg1/dx)g2) + O(ǫ³) = x0 + ǫ²[g1, g2] + O(ǫ³). Lecture 11 11

Car Example (Cont'd): g3 := [g1, g2] = (∂g2/∂x)g1 − (∂g1/∂x)g2 = (−sin(ϕ + θ), cos(ϕ + θ), cos(θ), 0)^T. Lecture 11 12

100 We can hence move the car in the g3 direction ("wriggle") by applying the control sequence (u1, u2) = {(1, 0), (0, 1), (−1, 0), (0, −1)}. [Figure: car wriggling in a parking spot.] Lecture 11 13

The car can also move in the direction g4 := [g3, g2] = (∂g2/∂x)g3 − (∂g3/∂x)g2 = …, whose intermediate expression involves sin(ϕ + 2θ) and cos(ϕ + 2θ) terms and whose first two components reduce to (−sin(ϕ), cos(ϕ)). The g4 direction corresponds to sideways movement. Lecture 11 14

Parking Theorem: You can get out of any parking lot that is ǫ > 0 bigger than your car by applying control corresponding to g4, that is, by applying the control sequence Wriggle, Drive, Wriggle⁻¹, Drive⁻¹. Lecture 11 15

2 minute exercise: What does the direction [g1, g2] correspond to for a linear system ẋ = g1(x)u1 + g2(x)u2 = B1u1 + B2u2? Lecture 11 16

101 The Lie Bracket Tree: [g1, g2]; [g1, [g1, g2]], [g2, [g1, g2]]; [g1, [g1, [g1, g2]]], [g2, [g1, [g1, g2]]], [g1, [g2, [g1, g2]]], [g2, [g2, [g1, g2]]]; … Lecture 11 17

Controllability Theorem: Theorem: The system ẋ = g1(x)u1 + g2(x)u2 is controllable if the Lie bracket tree (together with g1 and g2) spans R^n for all x. Remark: The system can be steered in any direction of the Lie bracket tree. Lecture 11 18

Example Unicycle: [Figure: unicycle with position (x1, x2) and heading θ.] d/dt (x1, x2, θ)^T = (cos θ, sin θ, 0)^T u1 + (0, 0, 1)^T u2. g1 = (cos θ, sin θ, 0)^T, g2 = (0, 0, 1)^T, [g1, g2] = (sin θ, −cos θ, 0)^T. Controllable because {g1, g2, [g1, g2]} spans R³. Lecture 11 19

2 minute exercise: Show that {g1, g2, [g1, g2]} spans R³ for the unicycle. Is the linearization of the unicycle controllable? Lecture 11 20

102 Example Rolling Penny: [Figure: penny rolling on a plane with position (x1, x2), heading θ and rolling angle φ.] d/dt (x1, x2, θ, φ)^T = (cos θ, sin θ, 0, 1)^T u1 + (0, 0, 1, 0)^T u2. Controllable because {g1, g2, [g1, g2], [g2, [g1, g2]]} spans R⁴. Lecture 11 21

When is Feedback Linearization Possible? Q: When can we transform ẋ = f(x) + g(x)u into ż = Az + bv by means of feedback u = α(x) + β(x)v and a change of variables z = T(x) (see previous lecture)? A: The answer requires Lie brackets and further concepts from differential geometry (see Khalil and PhD courses). [Figure: block diagram v → β → u → plant ẋ = f + gu → x → T → z, with α(x) feedback.] Lecture 11 22

Gain Scheduling: Control parameters depend on operating conditions. [Figure: block diagram where a gain schedule adjusts the controller parameters based on the operating condition, derived from the command signal, control signal and process output.] Example: PID controller with K = K(α), where α is the scheduling variable. Examples of scheduling variables are production rate, machine speed, Mach number, flow rate. Lecture 11 23

Valve Characteristics: [Figure: flow versus valve position for quick-opening, linear and equal-percentage valves.] Lecture 11 24

103 Nonlinear Valve Process: [Figure: control loop with reference uc, PI controller, a compensation based on the valve model f̂, the valve characteristic f(u), and linear process dynamics G(s); the compensation relies on f̂(u) ≈ f(u).] Lecture 11 25

Without gain scheduling: [Figure: step responses of the reference uc and output y at three operating levels; without the compensation the response varies strongly with the operating level.] Lecture 11 26

With gain scheduling: [Figure: step responses at the same three levels; with the compensation the response is similar at all levels.] Lecture 11 27

Flight Control: Pitch dynamics. [Figure: aircraft with pitch rate q = θ̇, pitch angle θ, velocity V, angle of attack α, normal acceleration Nz and elevator deflection δe.] Lecture 11 28
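The idea of the valve compensation above is a static inverse: if the valve characteristic were, say, f(u) = u⁴ on [0, 1] (a hypothetical choice; the actual characteristic in the figure is not recoverable), feeding the controller output through f̂⁻¹ makes the series connection linear:

```python
# Sketch of static valve compensation with an assumed characteristic.
v_char = lambda u: u ** 4            # hypothetical valve characteristic f
fhat_inv = lambda c: c ** 0.25       # inverse used for compensation

for c in [0.1, 0.25, 0.5, 0.9]:
    flow = v_char(fhat_inv(c))       # compensated valve
    assert abs(flow - c) < 1e-12     # linear from controller output to flow
```

In practice f̂ is only an approximation of f, so the compensated loop is close to, but not exactly, linear.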

104 Flight Control: Operating conditions parameterized by altitude and Mach number. [Figure: flight envelope with operating points.] Lecture 11 29

The Pitch Control Channel: [Figure: block diagram of the pitch control channel; the pitch stick command passes through filters and A/D conversion, and the gains K_DSE, K_SG, K_Q, K_NZ and K_QD are scheduled on indicated airspeed V_IAS, altitude H, Mach number M and gear position, with acceleration and pitch-rate feedback, lead/lag filters (1 + T1s)/(1 + T2s), and D/A conversion to the servos.] Lecture 11 30

Today's Goal: You should be able to determine if a nonlinear system is controllable, and apply gain scheduling to simple examples. Lecture 11 31

105 EL2620 Nonlinear Control Lecture 12: Optimal control. Lecture 12 1

Today's Goal: You should be able to design controllers based on optimal control theory. Lecture 12 2

Optimal Control Problems: Idea: formulate the control design problem as an optimization problem min_{u(t)} J(x, u, t), ẋ = f(t, x, u). + provides a systematic design framework; + applicable to nonlinear problems; + can deal with constraints; − difficult to formulate control objectives as a single objective function; − determining the optimal controller can be hard. Lecture 12 3

Example: Boat in Stream. Sail as far as possible in the x1 direction. Speed of the water is v(x2) with dv/dx2 = 1. Rudder angle control: u(t) ∈ U = {(u1, u2) : u1² + u2² = 1}. max_{u:[0,tf]→U} x1(tf), with ẋ1(t) = v(x2) + u1(t), ẋ2(t) = u2(t), x1(0) = x2(0) = 0. Lecture 12 4

106 Example: Resource Allocation. Maximization of stored profit. x(t) ∈ [0, ∞) production rate; u(t) ∈ [0, 1] portion of x reinvested; 1 − u(t) portion of x stored; γu(t)x(t) change of production rate (γ > 0); [1 − u(t)]x(t) amount of stored profit. max_{u:[0,tf]→[0,1]} ∫₀^{tf} [1 − u(t)]x(t)dt, with ẋ(t) = γu(t)x(t), x(0) = x0 > 0. Lecture 12 5

Example: Minimal Curve Length. Find the curve with minimal length between a given point and a line. Curve: (t, x(t)) with x(0) = a. Line: vertical through (tf, 0). min_{u:[0,tf]→R} ∫₀^{tf} √(1 + u²(t))dt, with ẋ(t) = u(t), x(0) = a. Lecture 12 6

Optimal Control Problem: Standard form: min_{u:[0,tf]→U} ∫₀^{tf} L(x(t), u(t)) dt + φ(x(tf)), with ẋ(t) = f(x(t), u(t)), x(0) = x0. Remarks: U ⊆ R^m is the set of admissible controls. Infinite-dimensional optimization problem: optimization over functions u : [0, tf] → U. Constraints on x come from the dynamics. Final time tf fixed (free later). Lecture 12 7

Pontryagin's Maximum Principle: Theorem: Introduce the Hamiltonian function H(x, u, λ) = L(x, u) + λ^T f(x, u). Suppose the optimal control problem above has the solution u* : [0, tf] → U and x* : [0, tf] → R^n. Then min_{u∈U} H(x*(t), u, λ(t)) = H(x*(t), u*(t), λ(t)), t ∈ [0, tf], where λ(t) solves the adjoint equation λ̇(t) = −H_x^T(x*(t), u*(t), λ(t)), λ(tf) = φ_x^T(x*(tf)). Moreover, the optimal control is given by u*(t) = arg min_{u∈U} H(x*(t), u, λ(t)). Lecture 12 8

107 Remarks: See a textbook, e.g., Glad and Ljung, for the proof. The outline is simply to note that every change of u(t) from the optimal u*(t) must increase the criterion, and then perform a clever Taylor expansion. Pontryagin's Maximum Principle provides a necessary condition: there may exist many or no solutions (cf. min_{u:[0,1]→R} x(1), ẋ = u, x(0) = 0). The Maximum Principle provides all possible candidates. The solution involves 2n ODEs with boundary conditions x(0) = x0 and λ(tf) = ∂φ^T/∂x(x*(tf)); often hard to solve explicitly. "Maximum" is due to Pontryagin's original formulation. Lecture 12 9

Example: Boat in Stream (cont'd). The Hamiltonian satisfies H = λ^T f = (λ1, λ2)(v(x2) + u1, u2)^T, with φ(x) = −x1, so H_x = (0, λ1). The adjoint equations λ̇1(t) = 0, λ1(tf) = −1, λ̇2(t) = −λ1(t), λ2(tf) = 0 have the solution λ1(t) = −1, λ2(t) = t − tf. Lecture 12 10

Optimal control: u*(t) = arg min_{u1²+u2²=1} λ1(t)(v(x2*(t)) + u1) + λ2(t)u2 = arg min_{u1²+u2²=1} λ1(t)u1 + λ2(t)u2. Hence u1*(t) = −λ1(t)/√(λ1²(t) + λ2²(t)), u2*(t) = −λ2(t)/√(λ1²(t) + λ2²(t)), or u1*(t) = 1/√(1 + (t − tf)²), u2*(t) = (tf − t)/√(1 + (t − tf)²). Lecture 12 11

Example: Resource Allocation (cont'd). min_{u:[0,tf]→[0,1]} ∫₀^{tf} [u(t) − 1]x(t)dt, with ẋ(t) = γu(t)x(t), x(0) = x0. The Hamiltonian satisfies H = L + λ^T f = (u − 1)x + λγux. The adjoint equation: λ̇(t) = 1 − u*(t) − λ(t)γu*(t), λ(tf) = 0. Lecture 12 12
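The boat control u* = −λ/‖λ‖ is the pointwise minimizer of the λ-dependent part of the Hamiltonian over the unit circle, which is easy to confirm numerically with the adjoint solution λ1 = −1, λ2 = t − tf (the value of tf below is arbitrary):

```python
# Sketch: verify u* = -lambda/|lambda| minimizes l1*u1 + l2*u2 on u1^2+u2^2=1.
import math

tf = 3.0
for t in [0.0, 1.0, 2.0, 2.9]:
    l1, l2 = -1.0, t - tf
    norm = math.hypot(l1, l2)
    u1, u2 = -l1 / norm, -l2 / norm          # candidate optimal control
    h_star = l1 * u1 + l2 * u2               # equals -norm
    assert abs(h_star + norm) < 1e-12
    # compare against a sweep of feasible controls on the unit circle
    for k in range(360):
        a = 2 * math.pi * k / 360
        assert h_star <= l1 * math.cos(a) + l2 * math.sin(a) + 1e-12
```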

108 Optimal control: u*(t) = arg min_{u∈[0,1]} (u − 1)x*(t) + λ(t)γux*(t) = arg min_{u∈[0,1]} u·x*(t)(1 + λ(t)γ) (since x*(t) > 0), so u*(t) = 1 if λ(t) ≤ −1/γ and u*(t) = 0 if λ(t) > −1/γ. For t ≈ tf, we have u*(t) = 0 (why?) and thus λ̇(t) = 1. For t < tf − 1/γ, we have u*(t) = 1 and thus λ̇(t) = −γλ(t). Lecture 12 13

[Figure: plots of λ(t) and u*(t).] u*(t) = 1 for t ∈ [0, tf − 1/γ], and u*(t) = 0 for t ∈ (tf − 1/γ, tf]. It is optimal to reinvest in the beginning. Lecture 12 14

5 minute exercise: Find the curve with minimal length by solving min_{u:[0,tf]→R} ∫₀^{tf} √(1 + u²(t))dt, with ẋ(t) = u(t), x(0) = a. Lecture 12 15

5 minute exercise II: Solve the optimal control problem min ∫₀¹ u⁴ dt + x(1), with ẋ = x + u, x(0) = 0. Lecture 12 16
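The "reinvest first, then store" structure of the solution can be sanity-checked by comparing the stored profit of the switching policy against the two constant policies (parameter values below are made up):

```python
# Sketch: compare stored profit J = int (1-u) x dt for three policies of the
# resource allocation problem, Euler simulation with arbitrary numbers.
gamma, tf, x0, dt = 1.0, 3.0, 1.0, 1e-4

def profit(policy):
    x, J, t = x0, 0.0, 0.0
    for i in range(int(tf / dt)):
        u = policy(t)
        J += (1 - u) * x * dt        # stored profit
        x += dt * gamma * u * x      # production dynamics
        t += dt
    return J

switch = profit(lambda t: 1.0 if t <= tf - 1 / gamma else 0.0)
store = profit(lambda t: 0.0)        # never reinvest
invest = profit(lambda t: 1.0)       # never store

assert switch > store > invest       # switching beats both extremes
print(switch, store, invest)
```

With γ = 1 and tf = 3 the switching policy earns roughly e² ≈ 7.4, versus 3 for always storing and 0 for always reinvesting.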

109 History: Calculus of Variations. Brachistochrone (shortest time) problem (1696): find the (frictionless) curve that takes a particle from A to B in the shortest time. dt = ds/v = √(dx² + dy²)/v = √(1 + y′(x)²)/√(2gy(x)) dx. Minimize J(y) = ∫_A^B √(1 + y′(x)²)/√(2gy(x)) dx. Solved by John and James Bernoulli, Newton, and l'Hospital. Find the curve enclosing the largest area (Euler). Lecture 12 17

History: Optimal Control. The space race (Sputnik, 1957). Pontryagin's Maximum Principle (1956). Bellman's Dynamic Programming (1957). Huge influence on engineering and other sciences: robotics (trajectory generation), aeronautics (satellite orbits), physics (Snell's law, conservation laws), finance (portfolio theory). Lecture 12 18

Goddard's Rocket Problem (1910): How to send a rocket as high up in the air as possible? d/dt (v, h, m)^T = ((u − D)/m − g, v, −γu)^T, (v(0), h(0), m(0)) = (0, 0, m0), g, γ > 0, u motor force, D = D(v, h) air resistance. Constraints: 0 ≤ u ≤ umax and m(tf) = m1 (empty). Optimization criterion: max_u h(tf). Lecture 12 19

Generalized form: min_{u:[0,tf]→U} ∫₀^{tf} L(x(t), u(t)) dt + φ(x(tf)), with ẋ(t) = f(x(t), u(t)), x(0) = x0, ψ(x(tf)) = 0. Note the differences compared to the standard form: the end time tf is free, and the final state is constrained: ψ(x(tf)) = x3(tf) − m1 = 0. Lecture 12 20

110 Solution to Goddard's Problem: Goddard's problem is on generalized form with x = (v, h, m)^T, L ≡ 0, φ(x) = −x2, ψ(x) = x3 − m1. D(v, h) ≡ 0: easy; let u(t) = umax until m(t) = m1. Burn fuel as fast as possible, because it costs energy to lift it. D(v, h) ≠ 0: hard; e.g., it can be optimal to have low speed when the air resistance is high, in order to burn fuel at a higher level. It took 50 years before a complete solution was presented. Lecture 12 21

General Pontryagin's Maximum Principle: Theorem: Suppose u* : [0, tf] → U and x* : [0, tf] → R^n are solutions to min_{u:[0,tf]→U} ∫₀^{tf} L(x(t), u(t)) dt + φ(tf, x(tf)), with ẋ(t) = f(x(t), u(t)), x(0) = x0, ψ(tf, x(tf)) = 0. Then there exist n0 ≥ 0 and µ ∈ R^n such that (n0, µ^T) ≠ 0 and min_{u∈U} H(x*(t), u, λ(t), n0) = H(x*(t), u*(t), λ(t), n0), t ∈ [0, tf], where H(x, u, λ, n0) = n0 L(x, u) + λ^T f(x, u). Lecture 12 22

λ̇(t) = −H_x^T(x*(t), u*(t), λ(t), n0), λ^T(tf) = n0 φ_x(tf, x*(tf)) + µ^T ψ_x(tf, x*(tf)), H(x*(tf), u*(tf), λ(tf), n0) = −n0 φ_t(tf, x*(tf)) − µ^T ψ_t(tf, x*(tf)). Remarks: tf may be a free variable; with fixed tf the condition on H(x*(tf), u*(tf), λ(tf), n0) is dropped. ψ = 0 defines the end-point constraints. Lecture 12 23

Example: Minimum Time Control. Bring the states of the double integrator to the origin as fast as possible: min_{u:[0,tf]→[−1,1]} ∫₀^{tf} 1 dt = min tf, with ẋ1(t) = x2(t), ẋ2(t) = u(t), ψ(x(tf)) = (x1(tf), x2(tf))^T = (0, 0)^T. The optimal control is the bang-bang control u*(t) = arg min_{u∈[−1,1]} 1 + λ1(t)x2(t) + λ2(t)u, i.e., u* = 1 if λ2(t) < 0 and u* = −1 if λ2(t) ≥ 0. Lecture 12 24
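The bang-bang solution of the minimum-time example can be exercised in simulation with a common switching-curve feedback form, u = −sgn(x1 + x2|x2|/2) (a standard closed-loop expression of the switch curve derived on the next slide; the initial condition below is my own choice, for which the analytic minimum time is 2):

```python
# Sketch: time-optimal double integrator via switching-curve feedback,
# simulated from (1, 0); the state should reach the origin at about t = 2.
dt, T = 1e-4, 2.5
x1, x2, t_hit = 1.0, 0.0, None
for i in range(int(T / dt)):
    s = x1 + x2 * abs(x2) / 2
    u = -1.0 if s > 0 else (1.0 if s < 0 else -x2)
    x1 += dt * x2
    x2 += dt * u
    if t_hit is None and abs(x1) < 5e-3 and abs(x2) < 5e-3:
        t_hit = (i + 1) * dt

assert t_hit is not None and abs(t_hit - 2.0) < 0.1
print(t_hit)
```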

111 Adjoint equations: λ̇1(t) = 0, λ̇2(t) = −λ1(t) give λ1(t) = c1, λ2(t) = c2 − c1t. With u(t) = ζ = ±1, we have x1(t) = x1(0) + x2(0)t + ζt²/2, x2(t) = x2(0) + ζt. Eliminating t gives the curves x1(t) ± x2(t)²/2 = const. These define the switch curve, where the optimal control switches. Lecture 12 25

Reference Generation using Optimal Control: The optimal control problem makes no distinction between open-loop control u*(t) and closed-loop control u*(t, x). We may use the optimal open-loop solution u*(t) as the reference value to a linear regulator, which keeps the system close to the desired trajectory. An efficient design method for nonlinear problems. Lecture 12 26

Linear Quadratic Control: min_{u:[0,∞)→R^m} ∫₀^∞ (x^T Qx + u^T Ru) dt, with ẋ = Ax + Bu, has the optimal solution u = −Lx, where L = R⁻¹B^T S and S > 0 is the solution to SA + A^T S + Q − SBR⁻¹B^T S = 0. Lecture 12 27

Properties of LQ Control: Stabilizing. The closed-loop system is stable with u = −α(t)Lx for α(t) ∈ [1/2, ∞) (infinite gain margin). Phase margin 60 degrees. If x is not measurable, then one may use a Kalman filter; this leads to linear quadratic Gaussian (LQG) control. But then the system may have arbitrarily poor robustness! (Doyle, 1978) Lecture 12 28
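In the scalar case the algebraic Riccati equation above reduces to a quadratic, which makes a small self-check possible (all parameter values are arbitrary):

```python
# Sketch: scalar LQ. For xdot = a*x + b*u with weights q, r, the Riccati
# equation 2*a*s + q - s^2*b^2/r = 0 gives s > 0, L = b*s/r, and the closed
# loop a - b*L must be stable.
import math

a, b, q, r = 1.0, 1.0, 1.0, 1.0

# positive root of -(b^2/r) s^2 + 2 a s + q = 0
A2, B2, C2 = -b * b / r, 2 * a, q
s = (-B2 - math.sqrt(B2 * B2 - 4 * A2 * C2)) / (2 * A2)
assert s > 0

# residual of the Riccati equation and closed-loop stability
assert abs(2 * a * s + q - s * s * b * b / r) < 1e-12
L = b * s / r
assert a - b * L < 0
print(s, L, a - b * L)
```

Note the open loop is unstable (a = 1 > 0) but the LQ feedback places the closed-loop pole at a − bL < 0.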

112 Tetra Pak Milk Race: Move milk in minimum time without spilling [Grundelius & Bernhardsson, 1999]. [Figure: packaging machine moving milk packages.] Lecture 12 29

Given the dynamics of the system and the maximum allowed slosh φ = 0.63, solve min_u ∫₀^{tf} 1 dt subject to the slosh constraint, where u is the acceleration. [Figure: acceleration and slosh trajectories.] Optimal time = 375 ms; the TetraPak solution took 540 ms. Lecture 12 30

Pros & Cons of Optimal Control: + systematic design procedure; + applicable to nonlinear control problems; + captures limitations (as optimization constraints); − hard to find suitable criteria; − hard to solve the equations that give the optimal controller. Lecture 12 31

SF2852 Optimal Control Theory: Period 3, 7.5 credits, Optimization and Systems Theory. Dynamic programming: discrete and continuous; principle of optimality; Hamilton-Jacobi-Bellman equation. Pontryagin's Maximum Principle: main results; special cases such as time-optimal control and LQ control. Numerical methods: numerical solution of optimal control problems. Applications: aeronautics, robotics, process control, bioengineering, economics, logistics. Lecture 12 32

113 Today's Goal: You should be able to design controllers based on optimal control theory, on standard form and on generalized form, and understand the possibilities and limitations of optimal control. Lecture 12 33

114

115 EL2620 Nonlinear Control Lecture 13: Fuzzy logic and fuzzy control; Artificial neural networks. Some slides copied from K.-E. Årzén and M. Johansson. Lecture 13 1

Today's Goal: You should understand the basics of fuzzy logic and fuzzy controllers, and understand simple neural networks. Lecture 13 2

Fuzzy Control: Many plants are manually controlled by experienced operators. Transferring process knowledge to a control algorithm is difficult. Idea: model the operator's control actions (instead of the plant) and implement them as rules (instead of as differential equations). Example of a rule: IF Speed is High AND Traffic is Heavy THEN Reduce Gas A Bit. Lecture 13 3

Model Controller Instead of Plant: Conventional control design: model the plant P, analyze the feedback, synthesize the controller C, implement the control algorithm. Fuzzy control design: model manual control, implement the control rules. [Figure: feedback loop with reference r, error e, controller C, control u, plant P, output y.] Lecture 13 4

116 Fuzzy Set Theory: Specify how well an object satisfies a (vague) description. Conventional set theory: x ∈ A or x ∉ A. Fuzzy set theory: x belongs to A to a certain degree µA(x). The membership function µA : Ω → [0, 1] expresses the degree to which x belongs to A. [Figure: membership functions µC (Cold) and µW (Warm) over temperature, with the transition between 0 and 25 degrees.] A fuzzy set is defined as (A, µA). Lecture 13 5

Example: Q1: Is the temperature x = 15 cold? A1: It is quite cold, since µC(15) = 2/3. Q2: Is x = 15 warm? A2: It is not really warm, since µW(15) = 1/3. [Figure: the membership functions evaluated at x = 15.] Lecture 13 6

Fuzzy Logic: How to calculate with fuzzy sets (A, µA)? Conventional logic: AND: A ∧ B; OR: A ∨ B; NOT: ¬A. Fuzzy logic: AND: µ_{A∧B}(x) = min(µA(x), µB(x)); OR: µ_{A∨B}(x) = max(µA(x), µB(x)); NOT: µ_{¬A}(x) = 1 − µA(x). This defines logic calculations such as X AND Y OR Z, and mimics human linguistic (approximate) reasoning [Zadeh, 1965]. Lecture 13 7

Example: Q1: Is it cold AND warm? Q2: Is it cold OR warm? [Figure: µ_{C∧W} = min(µC, µW) and µ_{C∨W} = max(µC, µW) over temperature.] Lecture 13 8

117 Fuzzy Control System: [Figure: loop r → Fuzzy Controller → u → Plant → y.] r, y, u : [0, ∞) → R are conventional signals. The fuzzy controller is a nonlinear mapping from y (and r) to u. Lecture 13 9

Fuzzy Controller: [Figure: y → Fuzzifier → Fuzzy Inference → Defuzzifier → u.] Fuzzifier: fuzzy set evaluation of y (and r). Fuzzy inference: fuzzy set calculations. Defuzzifier: map the fuzzy set to u. The fuzzifier and defuzzifier act as interfaces to the crisp signals. Lecture 13 10

Fuzzifier: Fuzzy set evaluation of the input y. Example y = 15: µC(15) = 2/3 and µW(15) = 1/3. [Figure: membership functions evaluated at y = 15.] Lecture 13 11

Fuzzy Inference: 1. Calculate the degree of fulfillment for each rule. 2. Calculate the fuzzy output of each rule. 3. Aggregate the rule outputs. Examples of fuzzy rules: Rule 1: IF y is Cold (1.) THEN u is High (2.). Rule 2: IF y is Warm (1.) THEN u is Low (2.). Lecture 13 12

118 1. Calculate the degree of fulfillment for each rule. [Figure: rule antecedents evaluated for the measured y.] Lecture 13 13

2. Calculate the fuzzy output of each rule. Note that µ is standard fuzzy-logic nomenclature for truth value. [Figure: each rule's output membership function limited by the rule's degree of fulfillment.] Lecture 13 14

3. Aggregate the rule outputs. [Figure: the rule outputs combined into one fuzzy set.] Lecture 13 15

Defuzzifier: [Figure: the aggregated fuzzy set mapped to a crisp control value u.] Lecture 13 16

119 Fuzzy Controller Summary: [Figure: y → Fuzzifier → Fuzzy Inference → Defuzzifier → u.] Fuzzifier: fuzzy set evaluation of y (and r). Fuzzy inference: fuzzy set calculations: 1. calculate the degree of fulfillment for each rule; 2. calculate the fuzzy output of each rule; 3. aggregate the rule outputs. Defuzzifier: map the fuzzy set to u. Lecture 13 17

Example: Fuzzy Control of Steam Engine. [Figure slide.] Lecture 13 18

[Figure slide.] Lecture 13 19

[Figure slide.] Lecture 13 20
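The fuzzify → infer → defuzzify chain above can be sketched as a minimal Mamdani-style controller. The membership shapes (Cold/Warm ramping between 10 and 25 degrees, to match µC(15) = 2/3) and the output sets are invented for illustration; the defuzzifier here uses the centroid of the max-aggregated set, one common choice.

```python
# Minimal Mamdani-style sketch: fuzzifier, min-inference, max-aggregation,
# centroid defuzzifier. Membership shapes are assumptions, not from the notes.

def mu_cold(x):
    return max(0.0, min(1.0, (25.0 - x) / 15.0))

def mu_warm(x):
    return 1.0 - mu_cold(x)

mu_low = lambda u: 1.0 - u          # output fuzzy sets on u in [0, 1]
mu_high = lambda u: u

def controller(y, n=1000):
    # Rule 1: IF y is Cold THEN u is High; Rule 2: IF y is Warm THEN u is Low
    w1, w2 = mu_cold(y), mu_warm(y)             # degrees of fulfillment
    num = den = 0.0
    for i in range(n + 1):                      # centroid of aggregated set
        u = i / n
        m = max(min(w1, mu_high(u)), min(w2, mu_low(u)))
        num += u * m
        den += m
    return num / den

assert abs(mu_cold(15.0) - 2.0 / 3.0) < 1e-12
assert controller(10.0) > 0.5 > controller(25.0)   # cold -> more heating
```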

120 [Figure slide.] Lecture 13 21

Rule-Based View of Fuzzy Control: [Figure slide.] Lecture 13 22

[Figure slide.] Lecture 13 23

Nonlinear View of Fuzzy Control: [Figure slide.] Lecture 13 24

121 Pros and Cons of Fuzzy Control: Advantages: user-friendly way to design nonlinear controllers; explicit representation of operator (process) knowledge; intuitive for non-experts in conventional control. Disadvantages: limited analysis and synthesis; sometimes hard to combine with classical control; not obvious how to include dynamics in the controller. Fuzzy control is a way to obtain a class of nonlinear controllers. Lecture 13 25

Neural Networks: How does the brain work? A network of computing components (neurons). Lecture 13 26

Neurons: Brain neuron and artificial neuron. [Figure: a biological neuron next to the artificial neuron model with inputs x1, …, xn, weights w1, …, wn, bias b, summation Σ and nonlinearity φ(·) producing y.] Lecture 13 27

Model of a Neuron: Inputs: x1, x2, …, xn. Weights: w1, w2, …, wn. Bias: b. Output: y = φ(b + Σ_{i=1}^n wi xi). Nonlinearity: φ(·). [Figure: block diagram of the neuron.] Lecture 13 28
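The neuron model y = φ(b + Σᵢ wᵢxᵢ) is a one-liner in code. The logistic nonlinearity below is my own choice; the slide leaves φ generic.

```python
# Sketch of the artificial neuron with a logistic nonlinearity.
import math

def neuron(x, w, b, phi=lambda s: 1.0 / (1.0 + math.exp(-s))):
    s = b + sum(wi * xi for wi, xi in zip(w, x))
    return phi(s)

y = neuron([1.0, -2.0], [0.5, 0.25], 0.0)
assert abs(y - 0.5) < 1e-12   # weighted sum is 0, so phi(0) = 0.5
print(y)  # 0.5
```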


More information

2006 Fall. G(s) y = Cx + Du

2006 Fall. G(s) y = Cx + Du 1 Class Handout: Chapter 7 Frequency Domain Analysis of Feedback Systems 2006 Fall Frequency domain analysis of a dynamic system is very useful because it provides much physical insight, has graphical

More information

Contents lecture 5. Automatic Control III. Summary of lecture 4 (II/II) Summary of lecture 4 (I/II) u y F r. Lecture 5 H 2 and H loop shaping

Contents lecture 5. Automatic Control III. Summary of lecture 4 (II/II) Summary of lecture 4 (I/II) u y F r. Lecture 5 H 2 and H loop shaping Contents lecture 5 Automatic Control III Lecture 5 H 2 and H loop shaping Thomas Schön Division of Systems and Control Department of Information Technology Uppsala University. Email: thomas.schon@it.uu.se,

More information

Lecture 9 Nonlinear Control Design. Course Outline. Exact linearization: example [one-link robot] Exact Feedback Linearization

Lecture 9 Nonlinear Control Design. Course Outline. Exact linearization: example [one-link robot] Exact Feedback Linearization Lecture 9 Nonlinear Control Design Course Outline Eact-linearization Lyapunov-based design Lab Adaptive control Sliding modes control Literature: [Khalil, ch.s 13, 14.1,14.] and [Glad-Ljung,ch.17] Lecture

More information

ECEN 420 LINEAR CONTROL SYSTEMS. Lecture 6 Mathematical Representation of Physical Systems II 1/67

ECEN 420 LINEAR CONTROL SYSTEMS. Lecture 6 Mathematical Representation of Physical Systems II 1/67 1/67 ECEN 420 LINEAR CONTROL SYSTEMS Lecture 6 Mathematical Representation of Physical Systems II State Variable Models for Dynamic Systems u 1 u 2 u ṙ. Internal Variables x 1, x 2 x n y 1 y 2. y m Figure

More information

1 Lyapunov theory of stability

1 Lyapunov theory of stability M.Kawski, APM 581 Diff Equns Intro to Lyapunov theory. November 15, 29 1 1 Lyapunov theory of stability Introduction. Lyapunov s second (or direct) method provides tools for studying (asymptotic) stability

More information

Prof. Krstic Nonlinear Systems MAE281A Homework set 1 Linearization & phase portrait

Prof. Krstic Nonlinear Systems MAE281A Homework set 1 Linearization & phase portrait Prof. Krstic Nonlinear Systems MAE28A Homework set Linearization & phase portrait. For each of the following systems, find all equilibrium points and determine the type of each isolated equilibrium. Use

More information

D(s) G(s) A control system design definition

D(s) G(s) A control system design definition R E Compensation D(s) U Plant G(s) Y Figure 7. A control system design definition x x x 2 x 2 U 2 s s 7 2 Y Figure 7.2 A block diagram representing Eq. (7.) in control form z U 2 s z Y 4 z 2 s z 2 3 Figure

More information

Lecture 4. Chapter 4: Lyapunov Stability. Eugenio Schuster. Mechanical Engineering and Mechanics Lehigh University.

Lecture 4. Chapter 4: Lyapunov Stability. Eugenio Schuster. Mechanical Engineering and Mechanics Lehigh University. Lecture 4 Chapter 4: Lyapunov Stability Eugenio Schuster schuster@lehigh.edu Mechanical Engineering and Mechanics Lehigh University Lecture 4 p. 1/86 Autonomous Systems Consider the autonomous system ẋ

More information

Control Systems I. Lecture 4: Diagonalization, Modal Analysis, Intro to Feedback. Readings: Emilio Frazzoli

Control Systems I. Lecture 4: Diagonalization, Modal Analysis, Intro to Feedback. Readings: Emilio Frazzoli Control Systems I Lecture 4: Diagonalization, Modal Analysis, Intro to Feedback Readings: Emilio Frazzoli Institute for Dynamic Systems and Control D-MAVT ETH Zürich October 13, 2017 E. Frazzoli (ETH)

More information

FRTN10 Multivariable Control Lecture 1

FRTN10 Multivariable Control Lecture 1 FRTN10 Multivariable Control Lecture 1 Anton Cervin Automatic Control LTH, Lund University Department of Automatic Control Founded 1965 by Karl Johan Åström (IEEE Medal of Honor) Approx. 50 employees Education

More information

Output Feedback and State Feedback. EL2620 Nonlinear Control. Nonlinear Observers. Nonlinear Controllers. ẋ = f(x,u), y = h(x)

Output Feedback and State Feedback. EL2620 Nonlinear Control. Nonlinear Observers. Nonlinear Controllers. ẋ = f(x,u), y = h(x) Output Feedback and State Feedback EL2620 Nonlinear Control Lecture 10 Exact feedback linearization Input-output linearization Lyapunov-based control design methods ẋ = f(x,u) y = h(x) Output feedback:

More information

Nonlinear System Analysis

Nonlinear System Analysis Nonlinear System Analysis Lyapunov Based Approach Lecture 4 Module 1 Dr. Laxmidhar Behera Department of Electrical Engineering, Indian Institute of Technology, Kanpur. January 4, 2003 Intelligent Control

More information

Lecture 4 Lyapunov Stability

Lecture 4 Lyapunov Stability Lecture 4 Lyapunov Stability Material Glad & Ljung Ch. 12.2 Khalil Ch. 4.1-4.3 Lecture notes Today s Goal To be able to prove local and global stability of an equilibrium point using Lyapunov s method

More information

Raktim Bhattacharya. . AERO 422: Active Controls for Aerospace Vehicles. Dynamic Response

Raktim Bhattacharya. . AERO 422: Active Controls for Aerospace Vehicles. Dynamic Response .. AERO 422: Active Controls for Aerospace Vehicles Dynamic Response Raktim Bhattacharya Laboratory For Uncertainty Quantification Aerospace Engineering, Texas A&M University. . Previous Class...........

More information

Exercises in Nonlinear Control Systems

Exercises in Nonlinear Control Systems Exercises in Nonlinear Control Systems Mikael Johansson Bo Bernhardsson Karl Henrik Johansson Created: 998 Latest update: November 9, 26 Introduction The exercises are divided into problem areas that roughly

More information

GATE EE Topic wise Questions SIGNALS & SYSTEMS

GATE EE Topic wise Questions SIGNALS & SYSTEMS www.gatehelp.com GATE EE Topic wise Questions YEAR 010 ONE MARK Question. 1 For the system /( s + 1), the approximate time taken for a step response to reach 98% of the final value is (A) 1 s (B) s (C)

More information

BIBO STABILITY AND ASYMPTOTIC STABILITY

BIBO STABILITY AND ASYMPTOTIC STABILITY BIBO STABILITY AND ASYMPTOTIC STABILITY FRANCESCO NORI Abstract. In this report with discuss the concepts of bounded-input boundedoutput stability (BIBO) and of Lyapunov stability. Examples are given to

More information

CDS Solutions to the Midterm Exam

CDS Solutions to the Midterm Exam CDS 22 - Solutions to the Midterm Exam Instructor: Danielle C. Tarraf November 6, 27 Problem (a) Recall that the H norm of a transfer function is time-delay invariant. Hence: ( ) Ĝ(s) = s + a = sup /2

More information

Georgia Institute of Technology Nonlinear Controls Theory Primer ME 6402

Georgia Institute of Technology Nonlinear Controls Theory Primer ME 6402 Georgia Institute of Technology Nonlinear Controls Theory Primer ME 640 Ajeya Karajgikar April 6, 011 Definition Stability (Lyapunov): The equilibrium state x = 0 is said to be stable if, for any R > 0,

More information

Solution of Additional Exercises for Chapter 4

Solution of Additional Exercises for Chapter 4 1 1. (1) Try V (x) = 1 (x 1 + x ). Solution of Additional Exercises for Chapter 4 V (x) = x 1 ( x 1 + x ) x = x 1 x + x 1 x In the neighborhood of the origin, the term (x 1 + x ) dominates. Hence, the

More information

Solution of Linear State-space Systems

Solution of Linear State-space Systems Solution of Linear State-space Systems Homogeneous (u=0) LTV systems first Theorem (Peano-Baker series) The unique solution to x(t) = (t, )x 0 where The matrix function is given by is called the state

More information

Lecture 8. Chapter 5: Input-Output Stability Chapter 6: Passivity Chapter 14: Passivity-Based Control. Eugenio Schuster.

Lecture 8. Chapter 5: Input-Output Stability Chapter 6: Passivity Chapter 14: Passivity-Based Control. Eugenio Schuster. Lecture 8 Chapter 5: Input-Output Stability Chapter 6: Passivity Chapter 14: Passivity-Based Control Eugenio Schuster schuster@lehigh.edu Mechanical Engineering and Mechanics Lehigh University Lecture

More information

STABILITY. Phase portraits and local stability

STABILITY. Phase portraits and local stability MAS271 Methods for differential equations Dr. R. Jain STABILITY Phase portraits and local stability We are interested in system of ordinary differential equations of the form ẋ = f(x, y), ẏ = g(x, y),

More information

Stabilization and Passivity-Based Control

Stabilization and Passivity-Based Control DISC Systems and Control Theory of Nonlinear Systems, 2010 1 Stabilization and Passivity-Based Control Lecture 8 Nonlinear Dynamical Control Systems, Chapter 10, plus handout from R. Sepulchre, Constructive

More information

Exercises Automatic Control III 2015

Exercises Automatic Control III 2015 Exercises Automatic Control III 205 Foreword This exercise manual is designed for the course "Automatic Control III", given by the Division of Systems and Control. The numbering of the chapters follows

More information

Nonlinear Autonomous Systems of Differential

Nonlinear Autonomous Systems of Differential Chapter 4 Nonlinear Autonomous Systems of Differential Equations 4.0 The Phase Plane: Linear Systems 4.0.1 Introduction Consider a system of the form x = A(x), (4.0.1) where A is independent of t. Such

More information

Stability of Parameter Adaptation Algorithms. Big picture

Stability of Parameter Adaptation Algorithms. Big picture ME5895, UConn, Fall 215 Prof. Xu Chen Big picture For ˆθ (k + 1) = ˆθ (k) + [correction term] we haven t talked about whether ˆθ(k) will converge to the true value θ if k. We haven t even talked about

More information

Engineering Tripos Part IIB Nonlinear Systems and Control. Handout 4: Circle and Popov Criteria

Engineering Tripos Part IIB Nonlinear Systems and Control. Handout 4: Circle and Popov Criteria Engineering Tripos Part IIB Module 4F2 Nonlinear Systems and Control Handout 4: Circle and Popov Criteria 1 Introduction The stability criteria discussed in these notes are reminiscent of the Nyquist criterion

More information

Nonlinear Control Lecture 5: Stability Analysis II

Nonlinear Control Lecture 5: Stability Analysis II Nonlinear Control Lecture 5: Stability Analysis II Farzaneh Abdollahi Department of Electrical Engineering Amirkabir University of Technology Fall 2010 Farzaneh Abdollahi Nonlinear Control Lecture 5 1/41

More information

EE363 homework 7 solutions

EE363 homework 7 solutions EE363 Prof. S. Boyd EE363 homework 7 solutions 1. Gain margin for a linear quadratic regulator. Let K be the optimal state feedback gain for the LQR problem with system ẋ = Ax + Bu, state cost matrix Q,

More information

MCE693/793: Analysis and Control of Nonlinear Systems

MCE693/793: Analysis and Control of Nonlinear Systems MCE693/793: Analysis and Control of Nonlinear Systems Systems of Differential Equations Phase Plane Analysis Hanz Richter Mechanical Engineering Department Cleveland State University Systems of Nonlinear

More information

3 Stability and Lyapunov Functions

3 Stability and Lyapunov Functions CDS140a Nonlinear Systems: Local Theory 02/01/2011 3 Stability and Lyapunov Functions 3.1 Lyapunov Stability Denition: An equilibrium point x 0 of (1) is stable if for all ɛ > 0, there exists a δ > 0 such

More information

L2 gains and system approximation quality 1

L2 gains and system approximation quality 1 Massachusetts Institute of Technology Department of Electrical Engineering and Computer Science 6.242, Fall 24: MODEL REDUCTION L2 gains and system approximation quality 1 This lecture discusses the utility

More information

EECS C128/ ME C134 Final Wed. Dec. 15, am. Closed book. Two pages of formula sheets. No calculators.

EECS C128/ ME C134 Final Wed. Dec. 15, am. Closed book. Two pages of formula sheets. No calculators. Name: SID: EECS C28/ ME C34 Final Wed. Dec. 5, 2 8- am Closed book. Two pages of formula sheets. No calculators. There are 8 problems worth points total. Problem Points Score 2 2 6 3 4 4 5 6 6 7 8 2 Total

More information

Converse Lyapunov theorem and Input-to-State Stability

Converse Lyapunov theorem and Input-to-State Stability Converse Lyapunov theorem and Input-to-State Stability April 6, 2014 1 Converse Lyapunov theorem In the previous lecture, we have discussed few examples of nonlinear control systems and stability concepts

More information

Nonlinear Control. Nonlinear Control Lecture # 6 Passivity and Input-Output Stability

Nonlinear Control. Nonlinear Control Lecture # 6 Passivity and Input-Output Stability Nonlinear Control Lecture # 6 Passivity and Input-Output Stability Passivity: Memoryless Functions y y y u u u (a) (b) (c) Passive Passive Not passive y = h(t,u), h [0, ] Vector case: y = h(t,u), h T =

More information

Complex Dynamic Systems: Qualitative vs Quantitative analysis

Complex Dynamic Systems: Qualitative vs Quantitative analysis Complex Dynamic Systems: Qualitative vs Quantitative analysis Complex Dynamic Systems Chiara Mocenni Department of Information Engineering and Mathematics University of Siena (mocenni@diism.unisi.it) Dynamic

More information

Topic # /31 Feedback Control Systems. Analysis of Nonlinear Systems Lyapunov Stability Analysis

Topic # /31 Feedback Control Systems. Analysis of Nonlinear Systems Lyapunov Stability Analysis Topic # 16.30/31 Feedback Control Systems Analysis of Nonlinear Systems Lyapunov Stability Analysis Fall 010 16.30/31 Lyapunov Stability Analysis Very general method to prove (or disprove) stability of

More information

ECE557 Systems Control

ECE557 Systems Control ECE557 Systems Control Bruce Francis Course notes, Version.0, September 008 Preface This is the second Engineering Science course on control. It assumes ECE56 as a prerequisite. If you didn t take ECE56,

More information

Nonlinear Control Systems

Nonlinear Control Systems Nonlinear Control Systems António Pedro Aguiar pedro@isr.ist.utl.pt 5. Input-Output Stability DEEC PhD Course http://users.isr.ist.utl.pt/%7epedro/ncs2012/ 2012 1 Input-Output Stability y = Hu H denotes

More information

Exam in Systems Engineering/Process Control

Exam in Systems Engineering/Process Control Department of AUTOMATIC CONTROL Exam in Systems Engineering/Process Control 27-6-2 Points and grading All answers must include a clear motivation. Answers may be given in English or Swedish. The total

More information

Transfer Functions. Chapter Introduction. 6.2 The Transfer Function

Transfer Functions. Chapter Introduction. 6.2 The Transfer Function Chapter 6 Transfer Functions As a matter of idle curiosity, I once counted to find out what the order of the set of equations in an amplifier I had just designed would have been, if I had worked with the

More information

Raktim Bhattacharya. . AERO 632: Design of Advance Flight Control System. Norms for Signals and Systems

Raktim Bhattacharya. . AERO 632: Design of Advance Flight Control System. Norms for Signals and Systems . AERO 632: Design of Advance Flight Control System Norms for Signals and. Raktim Bhattacharya Laboratory For Uncertainty Quantification Aerospace Engineering, Texas A&M University. Norms for Signals ...

More information

Event-triggered control subject to actuator saturation

Event-triggered control subject to actuator saturation Event-triggered control subject to actuator saturation GEORG A. KIENER Degree project in Automatic Control Master's thesis Stockholm, Sweden 212 XR-EE-RT 212:9 Diploma Thesis Event-triggered control subject

More information

TTK4150 Nonlinear Control Systems Solution 6 Part 2

TTK4150 Nonlinear Control Systems Solution 6 Part 2 TTK4150 Nonlinear Control Systems Solution 6 Part 2 Department of Engineering Cybernetics Norwegian University of Science and Technology Fall 2003 Solution 1 Thesystemisgivenby φ = R (φ) ω and J 1 ω 1

More information

E209A: Analysis and Control of Nonlinear Systems Problem Set 6 Solutions

E209A: Analysis and Control of Nonlinear Systems Problem Set 6 Solutions E9A: Analysis and Control of Nonlinear Systems Problem Set 6 Solutions Michael Vitus Gabe Hoffmann Stanford University Winter 7 Problem 1 The governing equations are: ẋ 1 = x 1 + x 1 x ẋ = x + x 3 Using

More information

MCE693/793: Analysis and Control of Nonlinear Systems

MCE693/793: Analysis and Control of Nonlinear Systems MCE693/793: Analysis and Control of Nonlinear Systems Lyapunov Stability - I Hanz Richter Mechanical Engineering Department Cleveland State University Definition of Stability - Lyapunov Sense Lyapunov

More information

Some solutions of the written exam of January 27th, 2014

Some solutions of the written exam of January 27th, 2014 TEORIA DEI SISTEMI Systems Theory) Prof. C. Manes, Prof. A. Germani Some solutions of the written exam of January 7th, 0 Problem. Consider a feedback control system with unit feedback gain, with the following

More information

Math 312 Lecture Notes Linear Two-dimensional Systems of Differential Equations

Math 312 Lecture Notes Linear Two-dimensional Systems of Differential Equations Math 2 Lecture Notes Linear Two-dimensional Systems of Differential Equations Warren Weckesser Department of Mathematics Colgate University February 2005 In these notes, we consider the linear system of

More information

Identification Methods for Structural Systems

Identification Methods for Structural Systems Prof. Dr. Eleni Chatzi System Stability Fundamentals Overview System Stability Assume given a dynamic system with input u(t) and output x(t). The stability property of a dynamic system can be defined from

More information

154 Chapter 9 Hints, Answers, and Solutions The particular trajectories are highlighted in the phase portraits below.

154 Chapter 9 Hints, Answers, and Solutions The particular trajectories are highlighted in the phase portraits below. 54 Chapter 9 Hints, Answers, and Solutions 9. The Phase Plane 9.. 4. The particular trajectories are highlighted in the phase portraits below... 3. 4. 9..5. Shown below is one possibility with x(t) and

More information

Step Response Analysis. Frequency Response, Relation Between Model Descriptions

Step Response Analysis. Frequency Response, Relation Between Model Descriptions Step Response Analysis. Frequency Response, Relation Between Model Descriptions Automatic Control, Basic Course, Lecture 3 November 9, 27 Lund University, Department of Automatic Control Content. Step

More information

Richiami di Controlli Automatici

Richiami di Controlli Automatici Richiami di Controlli Automatici Gianmaria De Tommasi 1 1 Università degli Studi di Napoli Federico II detommas@unina.it Ottobre 2012 Corsi AnsaldoBreda G. De Tommasi (UNINA) Richiami di Controlli Automatici

More information

Chapter III. Stability of Linear Systems

Chapter III. Stability of Linear Systems 1 Chapter III Stability of Linear Systems 1. Stability and state transition matrix 2. Time-varying (non-autonomous) systems 3. Time-invariant systems 1 STABILITY AND STATE TRANSITION MATRIX 2 In this chapter,

More information

ANSWERS Final Exam Math 250b, Section 2 (Professor J. M. Cushing), 15 May 2008 PART 1

ANSWERS Final Exam Math 250b, Section 2 (Professor J. M. Cushing), 15 May 2008 PART 1 ANSWERS Final Exam Math 50b, Section (Professor J. M. Cushing), 5 May 008 PART. (0 points) A bacterial population x grows exponentially according to the equation x 0 = rx, where r>0is the per unit rate

More information

Controls Problems for Qualifying Exam - Spring 2014

Controls Problems for Qualifying Exam - Spring 2014 Controls Problems for Qualifying Exam - Spring 2014 Problem 1 Consider the system block diagram given in Figure 1. Find the overall transfer function T(s) = C(s)/R(s). Note that this transfer function

More information

Introduction to Nonlinear Control Lecture # 3 Time-Varying and Perturbed Systems

Introduction to Nonlinear Control Lecture # 3 Time-Varying and Perturbed Systems p. 1/5 Introduction to Nonlinear Control Lecture # 3 Time-Varying and Perturbed Systems p. 2/5 Time-varying Systems ẋ = f(t, x) f(t, x) is piecewise continuous in t and locally Lipschitz in x for all t

More information

State Regulator. Advanced Control. design of controllers using pole placement and LQ design rules

State Regulator. Advanced Control. design of controllers using pole placement and LQ design rules Advanced Control State Regulator Scope design of controllers using pole placement and LQ design rules Keywords pole placement, optimal control, LQ regulator, weighting matrixes Prerequisites Contact state

More information

MAE143a: Signals & Systems (& Control) Final Exam (2011) solutions

MAE143a: Signals & Systems (& Control) Final Exam (2011) solutions MAE143a: Signals & Systems (& Control) Final Exam (2011) solutions Question 1. SIGNALS: Design of a noise-cancelling headphone system. 1a. Based on the low-pass filter given, design a high-pass filter,

More information

Practice Problems for Final Exam

Practice Problems for Final Exam Math 1280 Spring 2016 Practice Problems for Final Exam Part 2 (Sections 6.6, 6.7, 6.8, and chapter 7) S o l u t i o n s 1. Show that the given system has a nonlinear center at the origin. ẋ = 9y 5y 5,

More information

Systems Analysis and Control

Systems Analysis and Control Systems Analysis and Control Matthew M. Peet Arizona State University Lecture 5: Calculating the Laplace Transform of a Signal Introduction In this Lecture, you will learn: Laplace Transform of Simple

More information

2.10 Saddles, Nodes, Foci and Centers

2.10 Saddles, Nodes, Foci and Centers 2.10 Saddles, Nodes, Foci and Centers In Section 1.5, a linear system (1 where x R 2 was said to have a saddle, node, focus or center at the origin if its phase portrait was linearly equivalent to one

More information

Subject: Optimal Control Assignment-1 (Related to Lecture notes 1-10)

Subject: Optimal Control Assignment-1 (Related to Lecture notes 1-10) Subject: Optimal Control Assignment- (Related to Lecture notes -). Design a oil mug, shown in fig., to hold as much oil possible. The height and radius of the mug should not be more than 6cm. The mug must

More information

Nonlinear Control Lecture 1: Introduction

Nonlinear Control Lecture 1: Introduction Nonlinear Control Lecture 1: Introduction Farzaneh Abdollahi Department of Electrical Engineering Amirkabir University of Technology Fall 2011 Farzaneh Abdollahi Nonlinear Control Lecture 1 1/15 Motivation

More information

Problem List MATH 5173 Spring, 2014

Problem List MATH 5173 Spring, 2014 Problem List MATH 5173 Spring, 2014 The notation p/n means the problem with number n on page p of Perko. 1. 5/3 [Due Wednesday, January 15] 2. 6/5 and describe the relationship of the phase portraits [Due

More information

Handout 2: Invariant Sets and Stability

Handout 2: Invariant Sets and Stability Engineering Tripos Part IIB Nonlinear Systems and Control Module 4F2 1 Invariant Sets Handout 2: Invariant Sets and Stability Consider again the autonomous dynamical system ẋ = f(x), x() = x (1) with state

More information

1 Assignment 1: Nonlinear dynamics (due September

1 Assignment 1: Nonlinear dynamics (due September Assignment : Nonlinear dynamics (due September 4, 28). Consider the ordinary differential equation du/dt = cos(u). Sketch the equilibria and indicate by arrows the increase or decrease of the solutions.

More information

Linear Differential Equations. Problems

Linear Differential Equations. Problems Chapter 1 Linear Differential Equations. Problems 1.1 Introduction 1.1.1 Show that the function ϕ : R R, given by the expression ϕ(t) = 2e 3t for all t R, is a solution of the Initial Value Problem x =

More information

DO NOT DO HOMEWORK UNTIL IT IS ASSIGNED. THE ASSIGNMENTS MAY CHANGE UNTIL ANNOUNCED.

DO NOT DO HOMEWORK UNTIL IT IS ASSIGNED. THE ASSIGNMENTS MAY CHANGE UNTIL ANNOUNCED. EE 533 Homeworks Spring 07 Updated: Saturday, April 08, 07 DO NOT DO HOMEWORK UNTIL IT IS ASSIGNED. THE ASSIGNMENTS MAY CHANGE UNTIL ANNOUNCED. Some homework assignments refer to the textbooks: Slotine

More information

Half of Final Exam Name: Practice Problems October 28, 2014

Half of Final Exam Name: Practice Problems October 28, 2014 Math 54. Treibergs Half of Final Exam Name: Practice Problems October 28, 24 Half of the final will be over material since the last midterm exam, such as the practice problems given here. The other half

More information

EE C128 / ME C134 Final Exam Fall 2014

EE C128 / ME C134 Final Exam Fall 2014 EE C128 / ME C134 Final Exam Fall 2014 December 19, 2014 Your PRINTED FULL NAME Your STUDENT ID NUMBER Number of additional sheets 1. No computers, no tablets, no connected device (phone etc.) 2. Pocket

More information

Nonlinear Control. Nonlinear Control Lecture # 2 Stability of Equilibrium Points

Nonlinear Control. Nonlinear Control Lecture # 2 Stability of Equilibrium Points Nonlinear Control Lecture # 2 Stability of Equilibrium Points Basic Concepts ẋ = f(x) f is locally Lipschitz over a domain D R n Suppose x D is an equilibrium point; that is, f( x) = 0 Characterize and

More information

Copyright (c) 2006 Warren Weckesser

Copyright (c) 2006 Warren Weckesser 2.2. PLANAR LINEAR SYSTEMS 3 2.2. Planar Linear Systems We consider the linear system of two first order differential equations or equivalently, = ax + by (2.7) dy = cx + dy [ d x x = A x, where x =, and

More information

Lecture 5: Linear Systems. Transfer functions. Frequency Domain Analysis. Basic Control Design.

Lecture 5: Linear Systems. Transfer functions. Frequency Domain Analysis. Basic Control Design. ISS0031 Modeling and Identification Lecture 5: Linear Systems. Transfer functions. Frequency Domain Analysis. Basic Control Design. Aleksei Tepljakov, Ph.D. September 30, 2015 Linear Dynamic Systems Definition

More information

Applied Math Qualifying Exam 11 October Instructions: Work 2 out of 3 problems in each of the 3 parts for a total of 6 problems.

Applied Math Qualifying Exam 11 October Instructions: Work 2 out of 3 problems in each of the 3 parts for a total of 6 problems. Printed Name: Signature: Applied Math Qualifying Exam 11 October 2014 Instructions: Work 2 out of 3 problems in each of the 3 parts for a total of 6 problems. 2 Part 1 (1) Let Ω be an open subset of R

More information

1. The Transition Matrix (Hint: Recall that the solution to the linear equation ẋ = Ax + Bu is

1. The Transition Matrix (Hint: Recall that the solution to the linear equation ẋ = Ax + Bu is ECE 55, Fall 2007 Problem Set #4 Solution The Transition Matrix (Hint: Recall that the solution to the linear equation ẋ Ax + Bu is x(t) e A(t ) x( ) + e A(t τ) Bu(τ)dτ () This formula is extremely important

More information

Old Math 330 Exams. David M. McClendon. Department of Mathematics Ferris State University

Old Math 330 Exams. David M. McClendon. Department of Mathematics Ferris State University Old Math 330 Exams David M. McClendon Department of Mathematics Ferris State University Last updated to include exams from Fall 07 Contents Contents General information about these exams 3 Exams from Fall

More information

Control System Design

Control System Design ELEC4410 Control System Design Lecture 19: Feedback from Estimated States and Discrete-Time Control Design Julio H. Braslavsky julio@ee.newcastle.edu.au School of Electrical Engineering and Computer Science

More information

EL2520 Control Theory and Practice

EL2520 Control Theory and Practice So far EL2520 Control Theory and Practice r Fr wu u G w z n Lecture 5: Multivariable systems -Fy Mikael Johansson School of Electrical Engineering KTH, Stockholm, Sweden SISO control revisited: Signal

More information

CDS 101/110a: Lecture 2.1 Dynamic Behavior

CDS 101/110a: Lecture 2.1 Dynamic Behavior CDS 11/11a: Lecture 2.1 Dynamic Behavior Richard M. Murray 6 October 28 Goals: Learn to use phase portraits to visualize behavior of dynamical systems Understand different types of stability for an equilibrium

More information

1. < 0: the eigenvalues are real and have opposite signs; the fixed point is a saddle point

1. < 0: the eigenvalues are real and have opposite signs; the fixed point is a saddle point Solving a Linear System τ = trace(a) = a + d = λ 1 + λ 2 λ 1,2 = τ± = det(a) = ad bc = λ 1 λ 2 Classification of Fixed Points τ 2 4 1. < 0: the eigenvalues are real and have opposite signs; the fixed point

More information

Frequency methods for the analysis of feedback systems. Lecture 6. Loop analysis of feedback systems. Nyquist approach to study stability

Frequency methods for the analysis of feedback systems. Lecture 6. Loop analysis of feedback systems. Nyquist approach to study stability Lecture 6. Loop analysis of feedback systems 1. Motivation 2. Graphical representation of frequency response: Bode and Nyquist curves 3. Nyquist stability theorem 4. Stability margins Frequency methods

More information

Balanced Truncation 1

Balanced Truncation 1 Massachusetts Institute of Technology Department of Electrical Engineering and Computer Science 6.242, Fall 2004: MODEL REDUCTION Balanced Truncation This lecture introduces balanced truncation for LTI

More information

Solutions to Math 53 Math 53 Practice Final

Solutions to Math 53 Math 53 Practice Final Solutions to Math 5 Math 5 Practice Final 20 points Consider the initial value problem y t 4yt = te t with y 0 = and y0 = 0 a 8 points Find the Laplace transform of the solution of this IVP b 8 points

More information

ẋ n = f n (x 1,...,x n,u 1,...,u m ) (5) y 1 = g 1 (x 1,...,x n,u 1,...,u m ) (6) y p = g p (x 1,...,x n,u 1,...,u m ) (7)

ẋ n = f n (x 1,...,x n,u 1,...,u m ) (5) y 1 = g 1 (x 1,...,x n,u 1,...,u m ) (6) y p = g p (x 1,...,x n,u 1,...,u m ) (7) EEE582 Topical Outline A.A. Rodriguez Fall 2007 GWC 352, 965-3712 The following represents a detailed topical outline of the course. It attempts to highlight most of the key concepts to be covered and

More information

CONTROL DESIGN FOR SET POINT TRACKING

CONTROL DESIGN FOR SET POINT TRACKING Chapter 5 CONTROL DESIGN FOR SET POINT TRACKING In this chapter, we extend the pole placement, observer-based output feedback design to solve tracking problems. By tracking we mean that the output is commanded

More information