Nonlinear and Adaptive Control
1 Nonlinear and Adaptive Control
Zhengtao Ding
Control Systems Centre, School of Electrical and Electronic Engineering
University of Manchester, P O Box 88, Manchester M60 1QD, United Kingdom
January 24,
2 Aims
To introduce techniques for analysis and design of nonlinear and adaptive control systems.
Learning Outcomes
On completion of the course, students should be able to:
Demonstrate a knowledge of the effects of nonlinearities on the operation of control systems.
Show an understanding of methods for reducing nonlinear effects in control systems.
Apply the phase-plane method to second-order nonlinear systems.
Obtain locally linearized models for nonlinear dynamic systems. 2
3 Use the describing function method for predicting the behaviour of nonlinear feedback systems.
Investigate the stability of nonlinear systems by using Lyapunov functions.
Use the backstepping technique for control design of a special class of nonlinear systems.
Design and analyse adaptive control of uncertain linear systems.
Understand robustness of adaptive schemes.
Design adaptive control for a special class of uncertain nonlinear systems. 3
4 1. Introduction to Nonlinear Systems
1.1 Nonlinear functions and nonlinearities
A function f : R → R is linear if, for any x, y, k ∈ R,
f(kx) = kf(x) (1)
f(x + y) = f(x) + f(y) (2)
A function that does not satisfy these conditions is a nonlinear function. Similar conditions can be imposed for linear maps between R^m and R^n for any positive integers m and n.
Linear dynamic systems
ẋ = Ax + Bu (3)
y = Cx + Du (4)
for state x ∈ R^n, input u ∈ R^m and output y ∈ R^p, with matrices of suitable dimensions. 4
5 Single-Input-Single-Output (SISO) systems with input saturation
ẋ = Ax + Bσ(u) (5)
y = Cx (6)
where σ : R → R is a saturation function defined as
σ(u) = −1 for u < −1, u for −1 ≤ u ≤ 1, 1 for u > 1 (7)
The saturation function σ is a nonlinear function, and this system is a nonlinear system. A general nonlinear system is often described by
ẋ = f(x, u) (8)
y = h(x, u) (9)
where the functions f and h are often assumed to be smooth enough (ie, having continuous derivatives to certain orders). 5
6 System with unknown parameters
ẋ = ax + u (10)
where a is an unknown parameter. How to design a control system to ensure the stability of the system? If a range a⁻ < a < a⁺ is known, then we can design a control law as
u = −cx − a⁺x (11)
with c > 0, which results in the closed-loop system
ẋ = −cx + (a − a⁺)x (12) 6
7 Adaptive control can be used in the case of a completely unknown a:
u = −cx − âx (13)
â̇ = x² (14)
If we let ã = a − â, the closed-loop system is described by
ẋ = −cx + ãx (15)
ã̇ = −x² (16)
This adaptive system is nonlinear, even though the original uncertain system is linear. This adaptive system is stable, but how to show it? 7
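The adaptive law above can be checked in simulation. The sketch below is a minimal Euler integration of u = −cx − âx with â̇ = x²; the true parameter a, the gain c, the initial state and the horizon are all illustrative assumptions, not values from the notes.

```python
import numpy as np

# Euler simulation of the adaptive system (13)-(14):
#   u = -c*x - ahat*x,  ahat_dot = x^2
# 'a' is unknown to the controller; all numerical values are illustrative.
a, c = 1.5, 1.0          # true (unknown) parameter and design gain
dt, T = 1e-3, 20.0
x, ahat = 2.0, 0.0       # initial state and initial parameter estimate
for _ in range(int(T / dt)):
    u = -c * x - ahat * x
    x += dt * (a * x + u)
    ahat += dt * x**2

print(abs(x))   # the state is driven close to zero
print(ahat)     # the estimate stays bounded (it need not converge to a)
```

The run illustrates the point made above: the state converges although â never needs to reach the true value a.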
8 1.2 Common nonlinearities in control systems
Nonlinear components, such as springs, resistors, etc.
Dead zone and saturation in actuators
Switching devices, relay and hysteresis relay
Backlash 8
9 1.3 Common nonlinear system behaviours
Dynamics (of linearized systems) depend on the operating point (hence the input).
There may be more than one equilibrium point.
Limit cycles (persistent oscillations) may occur, and also chaotic motion.
For a pure sinusoidal input at a fixed frequency, the output waveform is generally distorted, and the output amplitude is a nonlinear function of the input amplitude. 9
10 1.4 Compensating nonlinearities and nonlinear control design
Gain Scheduling: switching controller parameters in different operating regions.
Feedback Linearisation: cancelling nonlinearities by using nonlinear control laws.
Nonlinear Function Approximation: using fuzzy logic and neural networks.
Adaptive Control: such as self-tuning control, model reference adaptive control, and nonlinear adaptive control.
Bang-bang Control: switching between operating limits (time-optimal control).
Variable Structure Control: sliding mode control.
Backstepping Design: designing the control input in an iterative way by moving through the virtual controls until the actual control input is designed. 10
11 Forwarding Control: an iterative design working in a way opposite to backstepping.
Semiglobal Control Design: achieving stability in arbitrarily large regions, often by using high-gain control. 11
12 2. State Space Models
2.1 Nonlinear systems and linearisation around an operating point
Consider
ẋ = f(x, u) (17)
y = h(x, u) (18)
An operating point (x_e, u_e) is taken with x = x_e and u = u_e being constants such that f(x_e, u_e) = 0. A linearized model around the operating point can then be obtained. Let
x̄ = x − x_e (19)
ū = u − u_e (20)
ȳ = h(x, u) − h(x_e, u_e) (21)
then the linearized model is given by
x̄̇ = A x̄ + B ū (22) 12
13 ȳ = C x̄ + D ū (23)
where
a_ij = (∂f_i/∂x_j)(x_e, u_e) (24)
b_ij = (∂f_i/∂u_j)(x_e, u_e) (25)
c_ij = (∂h_i/∂x_j)(x_e, u_e) (26)
d_ij = (∂h_i/∂u_j)(x_e, u_e) (27) 13
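The Jacobian entries (24)-(27) can also be approximated numerically. The sketch below computes A and B by finite differences at an operating point; the pendulum-like dynamics used as the test system are an illustrative assumption, not an example from the notes.

```python
import numpy as np

def jacobians(f, xe, ue, eps=1e-6):
    """Finite-difference A = df/dx and B = df/du at an operating point."""
    n, m = len(xe), len(ue)
    fe = f(xe, ue)
    A = np.zeros((n, n)); B = np.zeros((n, m))
    for j in range(n):
        dx = np.zeros(n); dx[j] = eps
        A[:, j] = (f(xe + dx, ue) - fe) / eps
    for j in range(m):
        du = np.zeros(m); du[j] = eps
        B[:, j] = (f(xe, ue + du) - fe) / eps
    return A, B

# Illustrative nonlinear system: a damped pendulum with torque input u
f = lambda x, u: np.array([x[1], -np.sin(x[0]) - x[1] + u[0]])
A, B = jacobians(f, np.array([0.0, 0.0]), np.array([0.0]))
print(A)   # approximately [[0, 1], [-1, -1]]
print(B)   # approximately [[0], [1]]
```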
14 2.2 Some special forms Strict feedback form ẋ 1 = x 2 + φ 1 (x 1 ) ẋ 2 = x 3 + φ 2 (x 1, x 2 ). ẋ n 1 = x n + φ n 1 (x 1, x 2,..., x n 1 ) ẋ n = u + φ n (x 1, x 2,..., x n ) (28) 14
15 Output feedback form ẋ 1 = x 2 + φ 1 (y) ẋ 2 = x 3 + φ 2 (y). ẋ r = x r+1 + φ r (y) + b r u. ẋ n = φ n (y) + b n u y = x 1 (29) 15
16 Normal form, with z ∈ R^(n−r):
ż = f 0 (z, ξ 1,..., ξ r )
ξ̇ 1 = ξ 2
ξ̇ 2 = ξ 3
.
ξ̇ r = f r (z, ξ 1,..., ξ r ) + b r u (30)
Special normal form:
ż = f 0 (z, ξ 1 )
ξ̇ 1 = ξ 2
ξ̇ 2 = ξ 3
.
ξ̇ r = f r (z, ξ 1,..., ξ r ) + b r u (31) 16
17 2.3 Autonomous systems
An autonomous system is a dynamic system which does not explicitly depend on time. Often it can be expressed as
ẋ = f(x) (32)
For an autonomous system, the set of all trajectories provides a complete geometrical representation of the dynamic behaviour. This is often referred to as the phase portrait, especially for second order systems in the form
ẋ1 = x2
ẋ2 = φ(x1, x2) (33)
Singular point: a point x_e such that f(x_e) = 0. 17
18 2.4 Second order systems
Also referred to as the phase plane.
ẋ1 = f1(x1, x2)
ẋ2 = f2(x1, x2) (34)
Classification of singular points (based on eigenvalues):
λ1 < 0, λ2 < 0: Stable Node
λ1 > 0, λ2 > 0: Unstable Node
λ1 < 0, λ2 > 0: Saddle Point
λ1,2 = µ ± iν (µ < 0): Stable Focus
λ1,2 = µ ± iν (µ > 0): Unstable Focus
λ1,2 = ±iν (ν > 0): Centre 18
19 Slope of trajectory:
dx2/dx1 = f2(x1, x2)/f1(x1, x2) (35)
An isocline is a curve on which f2(x1, x2)/f1(x1, x2) is constant. 19
20 Example 2.1 The swing equation of a synchronous machine
H δ̈ = P_m − P_e sin δ (36)
where H is the inertia, δ the rotor angle, P_m the mechanical power, and P_e the maximum electrical power generated. The state space model is obtained by letting x1 = δ and x2 = δ̇:
ẋ1 = x2
ẋ2 = (u − P_e sin x1)/H
y = P_e sin x1 (37)
where u = P_m is the input. 20
21 Singular points:
x_1e = arcsin(u_e/P_e) or π − arcsin(u_e/P_e), x_2e = 0
Linearised model:
A = [0, 1; −(P_e cos x_1e)/H, 0], B = [0; 1/H], C = [P_e cos x_1e, 0] (38)
Stable or unstable? 21
22 2.5 Limit cycles
A limit cycle is a special trajectory which takes the form of a closed curve; it is intrinsic to the system, not imposed from outside. Limit cycles are periodic solutions. A limit cycle is stable if nearby trajectories converge to it asymptotically, and unstable if they move away from it. A general condition for the existence of limit cycles in the phase plane is the Poincare-Bendixson theorem: If a trajectory of a second-order autonomous system remains in a finite region, then one of the following is true: (a) the trajectory goes to an equilibrium point; (b) the trajectory tends to an asymptotically stable limit cycle; (c) the trajectory is itself a stable limit cycle. 22
23 Example 2.2 The Van der Pol oscillator
This is one of the best known models in nonlinear systems theory, originally developed to describe the operation of an electronic oscillator, which depends on the existence of a region with effective negative resistance.
ẋ1 = x2
ẋ2 = −x1 + ɛ(x2 − x2³) (39) 23
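The convergence to a limit cycle can be seen numerically. The sketch below integrates the oscillator, taking the state equations as ẋ1 = x2, ẋ2 = −x1 + ɛ(x2 − x2³); the value of ɛ, the initial condition and the horizon are illustrative assumptions.

```python
import numpy as np

# Simulate x1' = x2, x2' = -x1 + eps*(x2 - x2**3) by Euler integration.
# Starting near the (unstable) origin, the trajectory grows onto a
# bounded, non-vanishing closed orbit: the limit cycle.
eps, dt, T = 1.0, 1e-3, 60.0
x = np.array([0.1, 0.1])
radii = []
for k in range(int(T / dt)):
    x1, x2 = x
    x = x + dt * np.array([x2, -x1 + eps * (x2 - x2**3)])
    if k * dt > 40.0:                  # record radius after transients
        radii.append(np.hypot(*x))
print(min(radii), max(radii))   # bounded away from zero and from infinity
```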
24 2.6 Strange attractors and chaos
For higher-order nonlinear systems, there are more complicated features if the trajectories remain in a bounded region. The positive limit set of a trajectory is the set of all points to which the trajectory converges. Strange limit sets are limit sets which may or may not be asymptotically attractive to the neighbouring trajectories. The trajectories they contain may be locally divergent from each other within the attracting set. Such structures are associated with the quasi-random behaviour of solutions called chaos. 24
25 Example 2.3 The Lorenz attractor
This is one of the most widely studied examples of strange behaviour in ordinary differential equations, which originated from Lorenz's studies of turbulent convection. The equations take the form
ẋ1 = σ(x2 − x1)
ẋ2 = (1 + λ − x3)x1 − x2
ẋ3 = x1x2 − bx3 (40)
where σ, λ and b are positive constants. 25
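A short simulation illustrates both properties discussed above: the trajectories remain in a bounded region, yet never settle. The parameter values below (σ = 10, b = 8/3, and λ chosen so that 1 + λ = 28) are the classic chaotic regime, an illustrative assumption rather than values from the notes.

```python
import numpy as np

# Euler simulation of (40): x1' = sigma*(x2 - x1),
# x2' = (1 + lam - x3)*x1 - x2, x3' = x1*x2 - b*x3.
sigma, b, lam = 10.0, 8.0 / 3.0, 27.0
dt, T = 1e-3, 30.0
x = np.array([1.0, 1.0, 1.0])
peak = 0.0
for _ in range(int(T / dt)):
    x1, x2, x3 = x
    x = x + dt * np.array([sigma * (x2 - x1),
                           (1.0 + lam - x3) * x1 - x2,
                           x1 * x2 - b * x3])
    peak = max(peak, np.max(np.abs(x)))
print(peak)   # large excursions, but the trajectory stays in a bounded region
```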
26 3. Describing Functions
3.1 Describing function fundamentals
If a system contains only one nonlinear component and the other part is linear, then this kind of system may be analyzed using describing functions. Some genuinely nonlinear systems can also be arranged in this form, with only one nonlinear element. The basic assumptions for describing function analysis are:
there is only a single nonlinear component
the nonlinear component is not time-varying
corresponding to a sinusoidal input x = A sin(ωt), only the fundamental component of the output has to be considered
the nonlinearity is odd 26
27 Basic definition
For a nonlinear component described by a single-valued nonlinear function f(·), the output w(t) = f(A sin(ωt)) to a sinusoidal input A sin(ωt) is a periodic function, although it may not be sinusoidal in general. A periodic function can be expanded in a Fourier series
w(t) = a0/2 + Σ_{n=1}^∞ [a_n cos(nωt) + b_n sin(nωt)] (41)
where
a0 = (1/π) ∫_{−π}^{π} w(t) d(ωt)
a_n = (1/π) ∫_{−π}^{π} w(t) cos(nωt) d(ωt)
b_n = (1/π) ∫_{−π}^{π} w(t) sin(nωt) d(ωt)
Due to the assumption that f is an odd function, we have a0 = 0, and the 27
28 fundamental component is given by
w1 = a1 cos(ωt) + b1 sin(ωt) = M sin(ωt + φ) (42)
where
M(A, ω) = √(a1² + b1²), φ(A, ω) = arctan(a1/b1)
In complex notation, we have w1 = M e^{j(ωt+φ)} = (b1 + j a1) e^{jωt}. The describing function is defined, similarly to a frequency response, as the complex ratio of the fundamental component of the output of the nonlinear element to the input:
N(A, ω) = M e^{j(ωt+φ)} / (A e^{jωt}) = (b1 + j a1)/A (43) 28
29 Example 3.1 Describing function of the hardening spring
The characteristics of a hardening spring are given by
w = x + x³/2 (44)
Given the input A sin(ωt), the output is
w = A sin(ωt) + (A³/2) sin³(ωt) (45)
Since w is an odd function, we have a1 = 0. The coefficient b1 is given by
b1 = (1/π) ∫_{−π}^{π} [A sin(ωt) + (A³/2) sin³(ωt)] sin(ωt) d(ωt) = A + 3A³/8 (46)
Therefore the describing function is
N(A, ω) = N(A) = 1 + 3A²/8 (47) 29
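The fundamental coefficient in (46) is easy to verify numerically by evaluating the Fourier integral on a uniform grid. The amplitude A below is an illustrative choice.

```python
import numpy as np

# Numerical check for the hardening spring w = x + x^3/2:
# b1 = (1/pi) * integral of w(A sin t) * sin t over one period,
# which should equal A + 3A^3/8, so that N(A) = 1 + 3A^2/8.
A = 1.7                                      # illustrative input amplitude
Npts = 4096
t = 2 * np.pi * np.arange(Npts) / Npts - np.pi
x = A * np.sin(t)
w = x + x**3 / 2
b1 = np.sum(w * np.sin(t)) * (2 * np.pi / Npts) / np.pi
print(b1, A + 3 * A**3 / 8)         # the two values agree
print(b1 / A, 1 + 3 * A**2 / 8)     # describing function N(A)
```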
30 3.3 Describing functions for common nonlinear components
Saturation
A saturation function is described by
f(x) = kx for |x| < a, sign(x)ka otherwise (48)
The output to the input A sin(ωt), for A > a, is symmetric over quarters of a period, and in the first quarter
w1(t) = kA sin(ωt) for 0 ≤ ωt ≤ γ, ka for γ < ωt ≤ π/2 (49)
where γ = sin⁻¹(a/A). The function is odd, hence we have a1 = 0, and the symmetry of w1(t) implies that 30
31 b1 = (4/π) ∫_0^{π/2} w1 sin(ωt) d(ωt)
= (4/π) ∫_0^γ kA sin²(ωt) d(ωt) + (4/π) ∫_γ^{π/2} ka sin(ωt) d(ωt)
= (2kA/π) [γ + (a/A)√(1 − a²/A²)] (50)
Therefore the describing function is given by
N(A) = b1/A = (2k/π) [sin⁻¹(a/A) + (a/A)√(1 − a²/A²)] (51) 31
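The closed-form result (51) can be cross-checked against a direct numerical Fourier integral of the saturated sinusoid; the values of k, a and A below are illustrative assumptions.

```python
import numpy as np

# Numerical check of (51): for the saturation with slope k and limit a,
# N(A) = (2k/pi) [arcsin(a/A) + (a/A) sqrt(1 - a^2/A^2)] when A > a.
k, a, A = 2.0, 0.5, 2.0
Npts = 1 << 16
t = 2 * np.pi * np.arange(Npts) / Npts
w = np.clip(k * A * np.sin(t), -k * a, k * a)     # saturated output waveform
b1 = np.sum(w * np.sin(t)) * (2 * np.pi / Npts) / np.pi
r = a / A
N_num = b1 / A
N_ana = (2 * k / np.pi) * (np.arcsin(r) + r * np.sqrt(1 - r**2))
print(N_num, N_ana)   # numerical and analytic describing functions agree
```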
32 Ideal relay
The output from the ideal relay (sign function) is given by, with M > 0,
w1(t) = −M for −π ≤ ωt < 0, M for 0 ≤ ωt < π (52)
It is again an odd function, hence we have a1 = 0. The coefficient b1 is given by
b1 = (4/π) ∫_0^{π/2} M sin(ωt) d(ωt) = 4M/π (53)
and therefore the describing function is given by
N(A) = 4M/(πA) (54) 32
33 3.4 Describing function analysis of nonlinear systems
Extended Nyquist criterion: Consider a linear feedback system with forward transfer function G(s) and feedback gain H(s). The characteristic equation of this system is given by
G(s)H(s) + 1 = 0 or G(s)H(s) = −1 (55)
The Nyquist criterion can then be used to analyze the stability of the system based on the Nyquist plot of the open-loop transfer function G(s)H(s) and its encirclement of the point (−1, 0). As an extension to this criterion, for the case where the forward transfer function is KG(s), the characteristic equation is
KG(s)H(s) + 1 = 0 or G(s)H(s) = −1/K (56)
The same argument used in the derivation of the Nyquist criterion can be applied again, with (−1, 0) replaced by (−1/K, 0). Note that K can be a complex number. 33
34 Existence of limit cycles
Consider a unit negative feedback system with N(A, ω) and G(s) in the forward path. The equations describing this system can be put as
x = −y
w = N(A, ω)x
y = G(jω)w
with y as the output. From the above equations, it can be obtained that
y = G(jω)N(A, ω)(−y) (57)
and it can be arranged as
[G(jω)N(A, ω) + 1]y = 0 (58)
If there exists a limit cycle, then y ≠ 0, which implies that
G(jω)N(A, ω) + 1 = 0 (59) 34
35 or
G(jω) = −1/N(A, ω) (60)
Therefore the amplitude A and frequency ω of the limit cycle must satisfy the above equation. Equation (60) is difficult to solve in general. Graphical solutions can be found by plotting G(jω) and −1/N(A, ω) to see if they intersect each other. The intersection points are the solutions. It is easy to plot −1/N if it is independent of the frequency.
Stability of limit cycles: discussions based on the extended Nyquist criterion. 35
36 Example 3.1 Consider a linear transfer function G(s) = K/(s(s+1)(s+2)) and an ideal relay with M = 1 in a closed loop. Determine if there exists a limit cycle. If so, determine the amplitude and frequency of the limit cycle. (ω = √2, A = 2K/(3π)) 36
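This example can be solved numerically by searching for the frequency at which G(jω) crosses the negative real axis and then reading the amplitude off −1/N(A) = −πA/(4M). The numerical value K = 6 below is an illustrative assumption (the answer scales linearly with K).

```python
import numpy as np

# Solve G(jw) = -1/N(A) for G(s) = K/(s(s+1)(s+2)) and an ideal relay
# with N(A) = 4M/(pi*A). K and the frequency grid are illustrative.
K, M = 6.0, 1.0
w = np.linspace(0.5, 5.0, 200001)
G = K / (1j * w * (1j * w + 1.0) * (1j * w + 2.0))
i = np.argmin(np.abs(G.imag))            # where G(jw) crosses the real axis
w_lc = w[i]                              # predicted limit-cycle frequency
A_lc = -G.real[i] * 4.0 * M / np.pi      # from Re G(jw) = -pi*A/(4M)
print(w_lc)   # close to sqrt(2)
print(A_lc)   # close to 2*K/(3*pi)
```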
37 4. Stability Theory
4.1 Definitions
Consider the system
ẋ = f(x) (61)
with x = 0 as an equilibrium point.
Lyapunov Stability: The equilibrium point x = 0 is said to be stable if for any R > 0 there exists r > 0 such that if ‖x(0)‖ < r, then ‖x(t)‖ < R for all t ≥ 0. Otherwise the equilibrium is unstable. A critically stable linear system is Lyapunov stable. 37
38 Asymptotic Stability: An equilibrium point 0 is asymptotically stable if it is stable (in the sense of Lyapunov) and if in addition there exists some r > 0 such that ‖x(0)‖ < r implies that lim_{t→∞} x(t) = 0. Linear systems with poles in the open left half of the complex plane are asymptotically stable. Consider ẋ = −x³. 38
39 Exponential Stability: An equilibrium point 0 is exponentially stable if there exist two positive real numbers α and λ such that, for all t > 0,
‖x(t)‖ ≤ α ‖x(0)‖ e^{−λt} (62)
in some ball B_r around the origin. For linear systems, asymptotic stability implies exponential stability.
Global Stability: If asymptotic (exponential) stability holds for any initial state, the equilibrium point is said to be globally asymptotically (exponentially) stable. 39
40 4.2 Linearization and local stability
Lyapunov's linearisation method:
If the linearized system is strictly stable (ie, with poles in the open left-half complex plane), then the equilibrium point is asymptotically stable for the actual nonlinear system.
If the linearized system is unstable (ie, with at least one pole in the open right-half complex plane), then the equilibrium point is unstable.
If the linearized system is marginally stable (with poles on the imaginary axis), then the stability of the original system cannot be concluded from the linearised model.
Examples 40
41 4.3 Lyapunov's direct method
Positive definite functions: A scalar function V(x) is said to be locally positive definite if V(0) = 0 and, in a ball B_r, x ≠ 0 implies V(x) > 0. If the above properties hold for the entire space, then V(x) is said to be globally positive definite.
Lyapunov function: If, in a ball B_R, the function V(x) is positive definite and has continuous partial derivatives, and if its time derivative along any state trajectory of system (61) is negative semidefinite, ie,
V̇(x) ≤ 0 (63)
then V(x) is a Lyapunov function. 41
42 Lyapunov Theorem for Local Stability: If, in a ball B_R, there exists a scalar function V(x) with continuous first order derivatives such that
V(x) is positive definite (locally in B_R)
V̇(x) is negative semidefinite (locally in B_R)
then the equilibrium point 0 is stable. Furthermore, if V̇(x) is locally negative definite in B_R, then the stability is asymptotic. 42
43 Example 4.1 A simple pendulum is described by
θ̈ + θ̇ + sin θ = 0 (64)
Consider the scalar function
V(x) = (1 − cos θ) + θ̇²/2 (65) 43
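Along trajectories of (64), V̇ = θ̇ sin θ + θ̇(−θ̇ − sin θ) = −θ̇² ≤ 0, so V is nonincreasing. The sketch below checks this numerically; the initial condition, step size and sampling instants are illustrative assumptions.

```python
import numpy as np

# Euler integration of theta'' + theta' + sin(theta) = 0, sampling the
# Lyapunov function V = (1 - cos(theta)) + thetadot^2/2 at t = 0, 10, 20.
V = lambda th, om: (1 - np.cos(th)) + om**2 / 2
dt, T = 1e-4, 20.0
th, om = 2.0, 0.0                        # illustrative initial condition
samples = [V(th, om)]
for k in range(1, int(T / dt) + 1):
    th, om = th + dt * om, om + dt * (-om - np.sin(th))
    if k % int(10.0 / dt) == 0:
        samples.append(V(th, om))
print(samples)   # strictly decreasing toward zero
```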
44 Lyapunov Theorem for Global Stability: Assume that there exists a scalar function V(x) with continuous first order derivatives such that
V(x) is positive definite
V̇(x) is negative definite
V(x) → ∞ as ‖x‖ → ∞
then the equilibrium at the origin is globally asymptotically stable. 44
45 4.4 Lyapunov analysis of linear time-invariant systems
Consider the LTI system ẋ = Ax. If a Lyapunov function candidate is given by
V(x) = x^T P x (66)
then direct evaluation gives
V̇ = x^T A^T P x + x^T P A x := −x^T Q x (67)
where
A^T P + P A = −Q (68)
If Q is positive definite, then the system is asymptotically stable. 45
46 Theorem of stability of LTI systems: A necessary and sufficient condition for an LTI system ẋ = Ax to be strictly stable is that, for any positive definite matrix Q, the unique solution P of the Lyapunov equation (68) is positive definite.
Proof: Briefly, choose a positive definite Q, and define
P = ∫_0^∞ exp(A^T t) Q exp(At) dt (69)
and note
−Q = ∫_0^∞ d[exp(A^T t) Q exp(At)] (70) 46
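The Lyapunov equation (68) is linear in the entries of P and can be solved directly. The sketch below uses a Kronecker-product formulation in plain numpy (the stable matrix A and the choice Q = I are illustrative assumptions) and verifies that the solution is positive definite, as the theorem predicts.

```python
import numpy as np

# Solve A^T P + P A = -Q for an illustrative strictly stable A.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])      # poles at -1 and -2
Q = np.eye(2)
n = A.shape[0]
I = np.eye(n)
# With row-major flattening: vec(A^T P) = kron(A^T, I) vec(P)
#                            vec(P A)   = kron(I, A^T) vec(P)
M = np.kron(A.T, I) + np.kron(I, A.T)
P = np.linalg.solve(M, (-Q).flatten()).reshape(n, n)
print(np.allclose(A.T @ P + P @ A, -Q))       # the equation is satisfied
print(np.linalg.eigvalsh((P + P.T) / 2))      # both eigenvalues positive
```

With scipy available, `scipy.linalg.solve_continuous_lyapunov` solves the same equation directly; the explicit formulation above just makes the linear-algebra structure visible.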
47 5. Advanced Stability Theory
5.1 Positive Real Systems
Consider a SISO dynamic system
ẋ = Ax + bu
y = c^T x (71)
whose transfer function is given by
G(s) = c^T (sI − A)^{−1} b (72)
Positive Real Systems: A system described in (71) is said to be positive real if
Re[G(s)] ≥ 0 for all Re[s] ≥ 0 (73)
It is strictly positive real if G(s − ɛ) is positive real for some ɛ > 0. Example: G(s) = 1/(s + λ) with λ > 0. 47
48 Theorem 5.1 A transfer function G(s) is strictly positive real (SPR) if and only if
G(s) is a strictly stable transfer function
the real part of G(s) is strictly positive along the jω axis, ie,
∀ω ≥ 0, Re[G(jω)] > 0 (74)
Some necessary conditions for SPR:
G(s) is strictly stable
The Nyquist plot of G(jω) lies entirely in the right half of the complex plane
G(s) has a relative degree of 0 or 1
G(s) is strictly minimum phase
Examples: G1 = (s − 1)/(s² + as + b), G2 = (s + 1)/(s² − s + 1), G3 = 1/(s² + as + b), G4 = (s + 1)/(s² + s + 1) 48
49 Theorem 5.2 A transfer function G(s) is positive real (PR) if and only if
G(s) is a stable transfer function
The poles of G(s) on the jω axis are simple and the associated residues are real and non-negative
Re[G(jω)] ≥ 0 for any ω such that jω is not a pole of G(s) 49
50 Kalman-Yakubovich Lemma: Consider a controllable LTI system
ẋ = Ax + bu
y = c^T x (75)
Its transfer function
G(s) = c^T (sI − A)^{−1} b (76)
is strictly positive real if and only if there exist positive definite matrices P and Q such that
A^T P + P A = −Q (77)
P b = c (78) 50
51 5.2 Stability of feedback systems
Consider the closed-loop system
ẋ = Ax + bu
y = c^T x
u = −F(y)y (79)
If c^T (sI − A)^{−1} b is strictly positive real, then what is the condition on F(y) for the stability of the closed-loop system? 51
52 From the K-Y lemma, there exist positive definite P and Q. Define V = x^T P x; its derivative is given by
V̇ = −x^T Q x + 2x^T P b u
= −x^T Q x − 2x^T P b F(y) y
= −x^T Q x − 2x^T c F(y) y
= −x^T Q x − 2F(y) y² (80)
Therefore if F(y) > 0 then the system is exponentially stable. The condition is only a sufficient condition. 52
53 5.3 Circle Criterion
Consider the case α < F(y) < β. Under what condition on c^T (sI − A)^{−1} b is the closed-loop system (79) stable? Consider the transform
F̄ = (F − α)/(β − F) (81)
and obviously we have F̄ > 0. How to use this transform for the analysis of system stability? Consider the characteristic equation of (79). We can write
G(s)F + 1 = 0 (82)
which gives
G(s)(F − α) = −αG − 1 (83)
G(s)(β − F) = βG + 1 (84) 53
54 Dividing (83) by (84), we can obtain
[(1 + βG)/(1 + αG)] [(F − α)/(β − F)] + 1 = 0 (85)
Let
Ḡ := (1 + βG)/(1 + αG) (86)
Based on the analysis of feedback stability of SPR systems, we can conclude that the closed-loop system is stable if Ḡ is SPR. What is the condition on G if Ḡ is SPR? From (86), it can be obtained that
G = (Ḡ − 1)/(β − αḠ) (87)
For the case 0 < α < β, Ḡ being SPR is equivalent to the Nyquist plot of G not encircling the circle centred at (−(1/α + 1/β)/2, 0) with radius (1/α − 1/β)/2. That is why this is referred to as the circle criterion. 54
55 6. Feedback Linearization
6.1 Introduction
Consider the following nonlinear system
ẋ = f(x) + g(x)u
y = h(x) (88)
Two Examples:
ẋ1 = x2 + y³
ẋ2 = y² + u
y = x1 (89)
ẋ1 = x2 + y³ + u
ẋ2 = y² + u
y = x1 (90) 55
56 6.2 Input-Output Linearization
Lie Derivative: The Lie derivative of a function h(x) along the vector field f(x) is defined as
L_f h(x) = (∂h(x)/∂x) f(x) (91)
Notation:
L_f^k h(x) = L_f(L_f^{k−1} h(x)) = (∂L_f^{k−1} h(x)/∂x) f(x) (92)
Relative Degree: A system has relative degree ρ at a point x if the following conditions are satisfied:
L_g L_f^k h(x) = 0 for k = 0, ..., ρ − 2 (93)
L_g L_f^{ρ−1} h(x) ≠ 0 (94)
Examples: use (89) and linear systems to demonstrate the concepts. 56
57 It can be obtained that
y^(k) = L_f^k h(x) for k = 0, ..., ρ − 1 (95)
y^(ρ) = L_f^ρ h(x) + L_g L_f^{ρ−1} h(x) u (96)
If we define the input as
u = (1/(L_g L_f^{ρ−1} h(x))) [−L_f^ρ h(x) + v] (97)
this results in
y^(ρ) = v (98) 57
58 Therefore the system is input-output linearized by defining
ξ_i = L_f^{i−1} h(x) for i = 1, ..., ρ (99)
and the linearized part of the system is described by
ξ̇1 = ξ2 (100)
. (101)
ξ̇_{ρ−1} = ξ_ρ (102)
ξ̇_ρ = v (103)
The remaining part of the system dynamics is characterized by the zero dynamics.
Examples 58
59 6.3 Full State Linearization
Consider
ẋ = f(x) + g(x)u (104)
where x ∈ R^n and u ∈ R. Full state linearization can be achieved if there exists a function h(x) such that the relative degree with respect to h(x) as the output is n, the dimension of the system.
Lie Bracket: For two vector fields f(x) and g(x), their Lie bracket is defined as
[f, g](x) = (∂g/∂x) f(x) − (∂f/∂x) g(x) (105)
Notation:
ad_f^0 g(x) = g(x) (106)
ad_f^1 g(x) = [f, g](x) (107)
ad_f^k g(x) = [f, ad_f^{k−1} g](x) (108) 59
60 Example
f(x) = [x2, −sin x1 − x2]^T, g(x) = [0, x1]^T (109)
Example
f(x) = Ax, g(x) = b (110) 60
61 Distribution: the collection of vector spaces
Δ = span{f1(x), ..., f_n(x)} (111)
The dimension of the distribution is defined as
dim(Δ(x)) = rank[f1(x), ..., f_n(x)] (112)
Involutive: A distribution Δ is involutive if, for any g1 ∈ Δ and g2 ∈ Δ, [g1, g2] ∈ Δ. 61
62 Example: Let Δ = span{f1, f2} where
f1(x) = [2x2, 1, 0]^T, f2(x) = [1, 0, x2]^T (113) 62
63 Theorem 6.1 The system (104) is feedback linearizable if and only if the matrix [g(x), ad f g(x),..., ad n 1 f g(x)] has full rank; the distribution span{g(x), ad f g(x),..., ad n 2 f g(x)} is involutive. 63
64 Example
ẋ1 = a sin x2
ẋ2 = −x1² + u 64
65 7. Backstepping Design
7.1 Integrator Backstepping
Consider
ẋ = f(x) + g(x)ξ
ξ̇ = u (114)
with x ∈ R^n and ξ ∈ R. Suppose there exist a function φ(x) with φ(0) = 0 and a positive definite function V(x) such that
(∂V/∂x)[f(x) + g(x)φ(x)] ≤ −W(x) (115)
where W(x) is positive definite. Condition (115) implies that the system ẋ = f(x) + g(x)φ(x) is asymptotically stable. How to design the control input so that the whole system is stable? 65
66 Consider
ẋ = f(x) + g(x)φ(x) + g(x)(ξ − φ(x)) (116)
Let z = ξ − φ(x). It is easy to obtain that
ẋ = f(x) + g(x)φ(x) + g(x)z (117)
ż = u − φ̇ = u − (∂φ/∂x)(f(x) + g(x)ξ) (118)
Consider a Lyapunov function candidate
V_c(x, ξ) = V(x) + z²/2 (119)
Its derivative is given by
V̇_c = (∂V/∂x)[f(x) + g(x)φ(x)] + (∂V/∂x) g(x) z + z[u − (∂φ/∂x)(f(x) + g(x)ξ)] (120) 66
67 Let
u = −cz − (∂V/∂x) g(x) + (∂φ/∂x)(f(x) + g(x)ξ) (121)
with c > 0, which results in
V̇_c ≤ −W(x) − cz² (122)
Theorem 7.1 For a system described by (114), the control design given in (121) ensures the global asymptotic stability of the closed-loop system. 67
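The design above can be sketched in simulation for a concrete instance. The system ẋ = x² + ξ, ξ̇ = u below is an illustrative assumption (f = x², g = 1), with φ(x) = −x − x² and V = x²/2, so that (∂V/∂x)[f + gφ] = −x² and condition (115) holds with W = x².

```python
import numpy as np

# Integrator backstepping, control law (121) with dV/dx = x, dphi/dx = -1-2x:
#   u = -c*z - x + (dphi/dx)*(x^2 + xi),  z = xi - phi(x).
c, dt, T = 2.0, 1e-3, 15.0
x, xi = 0.8, 0.0                              # illustrative initial condition
for _ in range(int(T / dt)):
    z = xi - (-x - x**2)                      # z = xi - phi(x)
    dphi = -1.0 - 2.0 * x
    u = -c * z - x + dphi * (x**2 + xi)
    x, xi = x + dt * (x**2 + xi), xi + dt * u
print(abs(x), abs(xi))   # both states converge to zero
```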
68 Example 7.1
ẋ1 = x1x2
ẋ2 = u (123)
Example 7.2
ẋ1 = −x1 + x1²x2
ẋ2 = u (124)
Example 7.3
ẋ1 = x1² − x1x2
ẋ2 = u (125) 68
69 7.2 Iterative Backstepping
Consider a nonlinear system in the strict feedback form
ẋ1 = x2 + φ1(x1)
ẋ2 = x3 + φ2(x1, x2)
.
ẋ_{n−1} = x_n + φ_{n−1}(x1, x2, ..., x_{n−1})
ẋ_n = u + φ_n(x1, x2, ..., x_n) (126)
The backstepping technique introduced in the previous section can be applied iteratively. Define the notation
z1 = x1 (127)
z_i = x_i − α_{i−1}(x1, ..., x_{i−1}), for i = 2, ..., n (128)
where α_{i−1}, for i = 2, ..., n, are the stabilizing functions obtained in the iterative backstepping design. 69
70 Step 1: The dynamics of z1 is
ż1 = [x2 − α1] + α1 + φ1(x1) = z2 + α1 + φ1(x1) (129)
Let
α1 = −c1 z1 − φ1(x1) (130)
The resultant dynamics of z1 is
ż1 = −c1 z1 + z2 (131) 70
71 Step 2: The dynamics of z2 is
ż2 = ẋ2 − α̇1
= x3 + φ2(x1, x2) − (∂α1/∂x1)(x2 + φ1(x1))
= z3 + α2 + φ2(x1, x2) − (∂α1/∂x1)(x2 + φ1(x1)) (132)
Design α2 as
α2 = −z1 − c2 z2 − φ2(x1, x2) + (∂α1/∂x1)(x2 + φ1(x1)) (133)
The resultant dynamics of z2 is given by
ż2 = −z1 − c2 z2 + z3 (134) 71
72 Step i: For 2 < i < n, the dynamics of z_i is given by
ż_i = ẋ_i − α̇_{i−1}(x1, ..., x_{i−1})
= x_{i+1} + φ_i(x1, ..., x_i) − Σ_{j=1}^{i−1} (∂α_{i−1}/∂x_j)(x_{j+1} + φ_j(x1, ..., x_j))
= z_{i+1} + α_i + φ_i(x1, ..., x_i) − Σ_{j=1}^{i−1} (∂α_{i−1}/∂x_j)(x_{j+1} + φ_j(x1, ..., x_j)) (135)
Design α_i as
α_i = −z_{i−1} − c_i z_i − φ_i(x1, ..., x_i) + Σ_{j=1}^{i−1} (∂α_{i−1}/∂x_j)(x_{j+1} + φ_j(x1, ..., x_j)) (136)
The resultant dynamics of z_i is given by
ż_i = −z_{i−1} − c_i z_i + z_{i+1} (137) 72
73 Step n: At the final step, we have
ż_n = ẋ_n − α̇_{n−1}(x1, ..., x_{n−1})
= u + φ_n(x1, ..., x_n) − Σ_{j=1}^{n−1} (∂α_{n−1}/∂x_j)(x_{j+1} + φ_j(x1, ..., x_j)) (138)
Design the final control input as
u = −z_{n−1} − c_n z_n − φ_n(x1, ..., x_n) + Σ_{j=1}^{n−1} (∂α_{n−1}/∂x_j)(x_{j+1} + φ_j(x1, ..., x_j)) (139)
The resultant dynamics of z_n is given by
ż_n = −z_{n−1} − c_n z_n (140) 73
74 Stability Analysis: Define a Lyapunov function candidate
V = (1/2) Σ_{i=1}^n z_i² (141)
It is easy to obtain that
V̇ = z1[−c1 z1 + z2] + z2[−z1 − c2 z2 + z3] + ... + z_{n−1}[−z_{n−2} − c_{n−1} z_{n−1} + z_n] + z_n[−z_{n−1} − c_n z_n]
= −Σ_{i=1}^n c_i z_i² (142)
Therefore the closed-loop system under the proposed control is asymptotically stable.
Theorem 7.2 For a system in the strict feedback form (126), the control input (139) renders the closed-loop system asymptotically stable. 74
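The iterative design can be sketched in code for a third-order strict-feedback system with φ1 = x1², φ2 = x1x2, φ3 = x3² (the form of Example 7.4 below). To keep the sketch short, the partial derivatives of the stabilizing functions are approximated by finite differences rather than derived symbolically; the gains c_i, the initial condition and the step size are illustrative assumptions.

```python
import numpy as np

# Iterative backstepping for x1' = x2 + x1^2, x2' = x3 + x1*x2,
# x3' = u + x3^2, following steps (130), (133) and (139).
c1 = c2 = c3 = 2.0
phi1 = lambda x: x[0]**2
phi2 = lambda x: x[0] * x[1]
phi3 = lambda x: x[2]**2

def alpha1(x):
    return -c1 * x[0] - phi1(x)                       # (130)

def alpha2(x):
    z1, z2 = x[0], x[1] - alpha1(x)
    da = (alpha1(x + [1e-6, 0, 0]) - alpha1(x)) / 1e-6    # d alpha1 / d x1
    return -z1 - c2 * z2 - phi2(x) + da * (x[1] + phi1(x))  # (133)

def control(x):
    z2, z3 = x[1] - alpha1(x), x[2] - alpha2(x)
    g1 = (alpha2(x + [1e-6, 0, 0]) - alpha2(x)) / 1e-6    # d alpha2 / d x1
    g2 = (alpha2(x + [0, 1e-6, 0]) - alpha2(x)) / 1e-6    # d alpha2 / d x2
    return (-z2 - c3 * z3 - phi3(x)                       # (139)
            + g1 * (x[1] + phi1(x)) + g2 * (x[2] + phi2(x)))

dt, T = 1e-3, 12.0
x = np.array([0.2, 0.1, -0.1])
for _ in range(int(T / dt)):
    u = control(x)
    x = x + dt * np.array([x[1] + phi1(x), x[2] + phi2(x), u + phi3(x)])
print(np.linalg.norm(x))   # the state converges to the origin
```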
75 Example 7.4
ẋ1 = x2 + x1²
ẋ2 = x3 + x1x2
ẋ3 = u + x3² (143)
Example 7.5
ẋ1 = −x1 + x1²x2
ẋ2 = x3
ẋ3 = u (144) 75
76 8. Adaptive Control I
8.1 Introduction
The basic objective of adaptive control is to maintain consistent performance of a system in the presence of uncertainty or unknown variation in plant parameters.
MRAC: Model Reference Adaptive Control consists of a reference model which produces the desired output; the difference between the plant output and the reference output is then used to adjust the controller parameters and the control input directly. MRAC is often in the continuous-time domain, and for deterministic plants.
STC: Self-Tuning Control estimates system parameters and then computes the control input from the estimated parameters. STC is often in discrete time and for stochastic plants.
We will focus on MRAC in this course. 76
77 Design of adaptive controllers: Compared with conventional control design, adaptive control is more involved, with the need to design the adaptation law. Adaptive control design usually involves the following three steps:
choose a control law containing variable parameters
choose an adaptation law for adjusting those parameters
analyse the convergence properties of the resulting control system 77
78 8.2 Adaptive control of first-order systems
Consider a first order system
ẏ + a_p y = b_p u (145)
The output y is to follow the output of the reference model
ẏ_m + a_m y_m = b_m r (146)
The reference model is stable, ie, a_m > 0. The signal r is the reference input. The design objective is to make the tracking error e = y − y_m converge to 0. 78
79 Model Reference Control: Rearrange the system model as
ẏ + a_m y = b_p [u − ((a_p − a_m)/b_p) y] (147)
and therefore we obtain
ė + a_m e = b_p [u − ((a_p − a_m)/b_p) y − (b_m/b_p) r] := b_p [u − a_y y − a_r r] (148)
where a_y = (a_p − a_m)/b_p and a_r = b_m/b_p. If all the parameters are known, the control law is designed as
u = a_r r + a_y y (149) 79
80 Adaptive control law: With the parameters unknown, let â_r and â_y denote the estimates of a_r and a_y. The control law under the certainty equivalence principle is given by
u = â_r r + â_y y (150)
Adaptive laws: The estimates are updated by
â̇_r = −sign(b_p) γ_r e r (151)
â̇_y = −sign(b_p) γ_y e y (152)
where γ_r and γ_y are positive real design parameters, often referred to as adaptive gains. The closed-loop system under the adaptive control law is given by
ė + a_m e = −b_p [ã_y y + ã_r r] (153)
where ã_r = a_r − â_r and ã_y = a_y − â_y. 80
81 Stability analysis: Consider the Lyapunov function candidate
V = e²/2 + (|b_p|/(2γ_r)) ã_r² + (|b_p|/(2γ_y)) ã_y² (154)
Its derivative along the trajectory of (153) and the adaptive laws (151) and (152) is given by
V̇ = −a_m e² − b_p e (ã_y y + ã_r r) + (|b_p|/γ_r) ã_r ã̇_r + (|b_p|/γ_y) ã_y ã̇_y = −a_m e² (155)
using ã̇_r = −â̇_r = sign(b_p) γ_r e r and ã̇_y = −â̇_y = sign(b_p) γ_y e y. Therefore the system is Lyapunov stable, with the variables e, â_r and â_y all bounded. Furthermore, we can show that
∫_0^∞ e²(t) dt ≤ (V(0) − V(∞))/a_m < ∞ (156)
Therefore we have established that e ∈ L₂ ∩ L_∞ and ė ∈ L_∞. To conclude the stability analysis, we need Barbalat's Lemma. 81
82 Barbalat s Lemma If a function f(t) is uniformly continuous for t [0, ), and 0 f(t)dt exists, then lim t f(t) = 0. Since ė and e are bounded, e 2 is uniformly continuous. Therefore we can conclude from Barbalat s Lemma that lim t e 2 (t) = 0 and hence lim t e(t) = 0. 82
83 Simulation study of a first order system:
G_p = 2/(s − 1) (157)
G_m = 1/(s + 2) (158) 83
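A minimal Euler sketch of this simulation study is given below, using the control and adaptive laws (150)-(152). For G_p = 2/(s − 1) and G_m = 1/(s + 2) we have a_p = −1, b_p = 2, a_m = 2, b_m = 1, so the ideal gains are a_r = 0.5 and a_y = −1.5. The adaptive gains, the reference r = sin t and the horizon are illustrative assumptions.

```python
import numpy as np

# First-order MRAC: u = ahat_r*r + ahat_y*y,
# ahat_r' = -sign(b_p)*g_r*e*r, ahat_y' = -sign(b_p)*g_y*e*y, e = y - y_m.
ap, bp, am, bm = -1.0, 2.0, 2.0, 1.0
gr, gy = 5.0, 5.0
dt, T = 1e-3, 80.0
y = ym = ar = ay = 0.0
for k in range(int(T / dt)):
    t = k * dt
    r = np.sin(t)
    e = y - ym
    u = ar * r + ay * y
    y += dt * (-ap * y + bp * u)     # plant (145)
    ym += dt * (-am * ym + bm * r)   # reference model (146)
    ar += dt * (-np.sign(bp) * gr * e * r)
    ay += dt * (-np.sign(bp) * gy * e * y)
print(abs(y - ym))   # tracking error converges toward zero
print(ar, ay)        # estimates stay bounded (ideal values 0.5 and -1.5)
```

With the sinusoidal reference the regressor is persistently exciting, so the estimates also drift toward the ideal gains, although the analysis above only guarantees e → 0.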
84 9. Adaptive Control II
9.1 Model reference control of higher order systems
Consider an nth-order system
y(s) = k_p (Z_p(s)/R_p(s)) u(s) (159)
where k_p is the high frequency gain, and Z_p and R_p are monic polynomials of orders n − ρ and n respectively, with ρ the relative degree. The reference model is given by
y_m(s) = k_m (Z_m(s)/R_m(s)) r(s) (160)
where k_m > 0 and Z_m and R_m are monic Hurwitz polynomials. 84
85 MRC of systems with ρ = 1: Following a similar manipulation to the first order system, we have
y(s)R_m(s) = k_p Z_p(s)u(s) − (R_p(s) − R_m(s))y(s)
= k_p Z_m(s)[ (Z_p(s)/Z_m(s)) u(s) + ((R_m(s) − R_p(s))/(k_p Z_m(s))) y(s) ]
= k_p Z_m(s)[ u(s) − θ1^T (α(s)/Z_m(s)) u(s) − θ2^T (α(s)/Z_m(s)) y(s) − θ3 y(s) ] (161)
where θ1 ∈ R^{n−1}, θ2 ∈ R^{n−1}, θ3 ∈ R and
α(s) = [s^{n−2}, ..., s, 1]^T (162)
Hence, we have
y = k_p (Z_m(s)/R_m(s)) [u(s) − θ1^T (α(s)/Z_m(s)) u(s) − θ2^T (α(s)/Z_m(s)) y(s) − θ3 y(s)] (163) 85
86 and
e1 = k_p (Z_m(s)/R_m(s)) [u(s) − θ1^T (α(s)/Z_m(s)) u(s) − θ2^T (α(s)/Z_m(s)) y(s) − θ3 y(s) − θ4 r] (164)
where e1 = y − y_m and θ4 = k_m/k_p. The control input for model reference control is given by
u = θ1^T (α(s)/Z_m(s)) u + θ2^T (α(s)/Z_m(s)) y + θ3 y + θ4 r = θ^T ω (165)
where
θ = [θ1^T, θ2^T, θ3, θ4]^T
ω = [ω1^T, ω2^T, y, r]^T
ω1 = (α(s)/Z_m(s)) u; ω2 = (α(s)/Z_m(s)) y 86
87 Example 9.1 Design an MRC for the system
y(s) = ((s + 1)/(s² − 2s + 1)) u (166)
with the reference model
y_m(s) = ((s + 3)/(s² + 2s + 3)) r (167) 87
88 MRC of systems with ρ > 1: For a system with ρ > 1, an input in the same format as (165) can be obtained. The only difference is that Z_m is of order n − ρ < n − 1. In this case, we let P(s) be a monic polynomial of order ρ − 1, so that Z_m(s)P(s) is of order n − 1. We adopt a slightly different approach from the case of ρ = 1. Consider the identity
y = (Z_m/R_m)[ (R_m P/(Z_m P)) y ]
= (Z_m/R_m)[ ((Q R_p + Δ1)/(Z_m P)) y ]
= k_p (Z_m/R_m)[ (Q Z_p/(Z_m P)) u + k_p^{−1} (Δ1/(Z_m P)) y ]
= k_p (Z_m/R_m)[ u + (Δ2/(Z_m P)) u + k_p^{−1} (Δ1/(Z_m P)) y ] (168)
where Q is a monic polynomial of order ρ − 1, and Δ1 and Δ2 are polynomials 88
89 of orders n − 1 and n − 2 respectively. Therefore we can write
y = k_p (Z_m/R_m) [u − θ1^T (α(s)/(Z_m P)) u − θ2^T (α(s)/(Z_m P)) y − θ3 y] (169)
and
e1 = k_p (Z_m/R_m) [u − θ1^T (α(s)/(Z_m P)) u − θ2^T (α(s)/(Z_m P)) y − θ3 y − θ4 r] (170)
Hence the control input is designed as
u = θ1^T (α(s)/(Z_m(s)P(s))) u + θ2^T (α(s)/(Z_m(s)P(s))) y + θ3 y + θ4 r = θ^T ω (171)
with the same format as (165) but
ω1 = (α(s)/(Z_m(s)P(s))) u; ω2 = (α(s)/(Z_m(s)P(s))) y 89
90 Example 9.2 Design an MRC for the system
y(s) = (1/(s² − 2s + 1)) u (172)
with the reference model
y_m(s) = (1/(s² + 2s + 3)) r (173) 90
91 9.2 Adaptive Control (ρ = 1)

Basic Assumptions

Adaptive control design in this course assumes

- the known system order n,
- the known relative degree ρ,
- the minimum phase property of the plant,
- the known sign of the high-frequency gain, sign[k_p].
92 We choose the reference model to be SPR. If the parameters are unknown, the control law based on the certainty equivalence principle is given by

u = θ̂^T ω   (174)

where θ̂ is the estimate of θ. The adaptive law is designed as

˙θ̂ = −sign[k_p] Γ e_1 ω   (175)

where Γ ∈ R^{2n×2n}, the adaptive gain, is a positive definite matrix, and ω is generated in the same way as in the MRC case.
93 Stability Analysis

In this case we can write

e_1 = k_p (Z_m/R_m)(θ̂^T ω − θ^T ω) = k_m (Z_m/R_m)[ −(k_p/k_m) θ̃^T ω ]   (176)

where θ̃ = θ − θ̂. Furthermore, we can put this in the state-space form

ė = A_m e + b_m [ −(k_p/k_m) θ̃^T ω ]
e_1 = c_m^T e   (177)

where {A_m, b_m, c_m} is a minimal state-space realization of k_m Z_m(s)/R_m(s), ie,

c_m^T (sI − A_m)^{−1} b_m = k_m Z_m(s)/R_m(s)   (178)
94 Since k_m Z_m(s)/R_m(s) is SPR, there exist positive definite matrices P and Q such that

A_m^T P + P A_m = −Q   (179)
P b_m = c_m   (180)

Define a Lyapunov function candidate as

V = (1/2) e^T P e + (1/2) |k_p/k_m| θ̃^T Γ^{−1} θ̃   (181)

Its derivative is given by

V̇ = −(1/2) e^T Q e + e^T P b_m [ −(k_p/k_m) θ̃^T ω ] + |k_p/k_m| θ̃^T Γ^{−1} ˙θ̃
  = −(1/2) e^T Q e + e_1 [ −(k_p/k_m) θ̃^T ω ] + (k_p/k_m) e_1 θ̃^T ω
  = −(1/2) e^T Q e   (182)

using ˙θ̃ = −˙θ̂ = sign[k_p] Γ e_1 ω and k_m > 0.
95 We can now conclude the boundedness of e and θ̂. Furthermore, it can be shown that e_1 ∈ L_2 and ė_1 ∈ L_∞. Therefore, from Barbalat's lemma we have lim_{t→∞} e_1(t) = 0. The boundedness of the other system state variables can be established from the minimum phase property of the system.
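The whole ρ = 1 scheme can be exercised numerically on a scalar example. The sketch below is a hypothetical first-order case, not one from the notes: plant ẏ = ay + k_p u with a and k_p unknown but sign[k_p] known, reference model ẏ_m = −y_m + r, regressor ω = [y, r], and the adaptive law (175); the numbers (a, k_p, γ, step size, horizon) are illustrative choices:

```python
import math

# MRAC sketch for rho = 1 on a hypothetical first-order plant:
#   plant  dy/dt  = a*y + kp*u      (a, kp unknown; sign(kp) known)
#   model  dym/dt = -ym + r
# Control u = theta_hat^T w with w = [y, r]; adaptive law
#   d(theta_hat)/dt = -sign(kp)*gamma*e1*w,  e1 = y - ym.
a, kp = 1.0, 2.0
gamma = 2.0
dt, T = 1e-3, 50.0
y, ym, r = 0.0, 0.0, 1.0
th = [0.0, 0.0]                  # parameter estimates
for _ in range(int(T / dt)):
    e1 = y - ym
    w = [y, r]
    u = th[0] * w[0] + th[1] * w[1]
    th = [t_i - dt * math.copysign(1.0, kp) * gamma * e1 * w_i
          for t_i, w_i in zip(th, w)]
    y += dt * (a * y + kp * u)       # Euler integration
    ym += dt * (-ym + r)
e1 = y - ym
print(e1, th)   # tracking error small; estimates bounded
```

As the analysis predicts, e_1 converges to zero while θ̂ merely stays bounded; without persistent excitation the estimates need not reach the matching values.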
96 9.3 Adaptive Control with ρ > 1

For systems with higher relative degrees, the certainty equivalence principle can still be used to design the control, ie,

u = θ̂^T ω   (183)

with ω the same as in (171). However, the adaptive law is more involved, as the reference model is no longer SPR. If L(s) is a polynomial of order ρ − 1 which makes k_m Z_m(s)L(s)/R_m(s) SPR, then the error model is given by

e_1 = k_m (Z_m L/R_m)[ k(u_f − θ^T φ) ]   (184)

where k = k_p/k_m, u_f = L(s)^{−1} u and φ = (1/L(s)) ω. An auxiliary error is constructed as

ε = e_1 − k_m (Z_m L/R_m)[ k̂(u_f − θ̂^T φ) ] − k_m (Z_m L/R_m)[ ε n_s^2 ]   (185)
97 where k̂ is an estimate of k and n_s^2 = φ^T φ + u_f^2. The adaptive laws are designed as

˙θ̂ = −sign[k_p] Γ ε φ   (186)
˙k̂ = γ ε (u_f − θ̂^T φ)   (187)
98 10. Robust Issues in Adaptive Control

10.1 Introduction

Adaptive control design and its stability analysis have so far been carried out under the condition that there is only parameter uncertainty in the system. However, many types of non-parametric uncertainties do exist in practice. These include

- high-frequency unmodelled dynamics, such as actuator dynamics or structural vibrations,
- low-frequency unmodelled dynamics, such as Coulomb friction,
- measurement noise,
- computation round-off error and sampling delay.

Such non-parametric uncertainties will affect the performance of adaptive control systems when they are applied to practical systems, and they may cause instability. Let us consider a simple example.
99 Consider a system whose output is described by

y = θω   (188)

The adaptive law

˙θ̂ = γεω   (189)
ε = y − θ̂ω   (190)

will render the convergence of the estimate θ̂, by taking

V = (1/2γ) θ̃^2   (191)

as a Lyapunov function candidate and the analysis

V̇ = −θ̃(y − θ̂ω)ω = −θ̃^2 ω^2   (192)

Now, suppose the signal is corrupted by some unknown bounded disturbance d(t), ie,

y = θω + d(t)   (193)
100 The same adaptive law will have a problem. In this case,

V̇ = −θ̃(y − θ̂ω)ω = −θ̃^2 ω^2 − θ̃dω = −(1/2) θ̃^2 ω^2 − (1/2)(θ̃ω + d)^2 + d^2/2   (194)

From the above analysis, we cannot conclude the boundedness of θ̃ even when ω is bounded. In fact, if we take θ = 2, γ = 1 and ω = (1 + t)^{−1/2} ∈ L_∞, and let

d(t) = (1 + t)^{−1/4} ( 5/4 − 2(1 + t)^{−1/4} )   (195)

it can then be obtained that

y(t) = (5/4)(1 + t)^{−1/4} → 0 as t → ∞   (196)

˙θ̂ = (5/4)(1 + t)^{−3/4} − θ̂(1 + t)^{−1}   (197)
101 which has the solution

θ̂ = (1 + t)^{1/4} → ∞ as t → ∞   (198)

In this example, we have observed that an adaptive law designed for the disturbance-free system fails to remain bounded, even though the disturbance is bounded and converges to zero as t tends to infinity. In this session, we consider the simple model

y = θω + d(t)   (199)

with d a bounded disturbance. A number of robust adaptive laws will be introduced for this simple model. In the following we keep using ε = y − θ̂ω and V = θ̃^2/(2γ). Once the students understand the basic ideas, they can extend the robust adaptive laws to adaptive control.
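The drift example (195)–(198) is easy to verify numerically: integrating (197) from θ̂(0) = 1 should track the exact solution (1 + t)^{1/4}, which grows without bound. Step size and horizon in this sketch are arbitrary choices:

```python
# Numerical check of the parameter-drift example: integrate (197),
#   d(theta_hat)/dt = (5/4)(1 + t)^(-3/4) - theta_hat*(1 + t)^(-1),
# from theta_hat(0) = 1.  The exact solution is (1 + t)^(1/4).
dt, T = 1e-3, 1000.0
th, t = 1.0, 0.0
for _ in range(int(T / dt)):
    th += dt * (1.25 * (1.0 + t) ** -0.75 - th / (1.0 + t))
    t += dt
exact = (1.0 + t) ** 0.25
print(th, exact)   # both near (1001)^(1/4), about 5.6
```

Despite d(t) → 0 and a bounded ω, the estimate keeps growing like t^{1/4}; this is exactly the drift phenomenon the robust modifications below are designed to prevent.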
102 10.2 Dead-zone Modification

The adaptive law is modified as

˙θ̂ = γεω when |ε| > g, and ˙θ̂ = 0 when |ε| ≤ g   (200)

where g is a constant satisfying g > |d(t)| for all t. For |ε| > g, we have

V̇ = −θ̃εω = −(θω − θ̂ω)ε = −(y − d(t) − θ̂ω)ε = −(ε − d(t))ε < 0   (201)

Therefore we have

V̇ < 0 when |ε| > g, and V̇ = 0 when |ε| ≤ g   (202)

and we can conclude that V is bounded.
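A quick numerical sketch of the dead-zone law (200) on the scalar model y = θω + d(t). The values θ = 2, ω = 1, d(t) = 0.3 sin t and g = 0.4 > |d(t)| are illustrative assumptions:

```python
import math

# Dead-zone adaptive law on y = theta*omega + d(t): adaptation freezes
# whenever |eps| <= g, with g chosen above the disturbance bound, so the
# estimate ends up within about g + max|d| of the true parameter.
theta, gamma, g = 2.0, 1.0, 0.4
dt, T = 1e-3, 20.0
th, t = 0.0, 0.0
for _ in range(int(T / dt)):
    omega = 1.0
    d = 0.3 * math.sin(t)
    eps = theta * omega + d - th * omega   # eps = y - theta_hat*omega
    if abs(eps) > g:                       # dead zone (200)
        th += dt * gamma * eps * omega
    t += dt
print(th)   # bounded, close to theta
```

The price of the dead zone is that the estimate no longer converges exactly; it stops within a band around the true parameter determined by g and the disturbance level.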
103 10.3 σ-modification

The adaptive law is modified as

˙θ̂ = γεω − γσθ̂   (203)

where σ is a positive real constant. In this case, we have

V̇ = −(ε − d(t))ε + σθ̃θ̂
  = −ε^2 + d(t)ε − σθ̃^2 + σθ̃θ
  ≤ −ε^2/2 + d_0^2/2 − σθ̃^2/2 + σθ^2/2
  ≤ −σγV + d_0^2/2 + σθ^2/2   (204)

where d_0 ≥ |d(t)| for all t ≥ 0. To establish the boundedness of V, we need a standard result, which is a special case of the comparison lemma.
104 Comparison Lemma

Let f, V : [0, ∞) → R. Then

V̇ ≤ −αV + f,  ∀t ≥ t_0 ≥ 0   (205)

implies that

V(t) ≤ e^{−α(t−t_0)} V(t_0) + ∫_{t_0}^{t} e^{−α(t−τ)} f(τ) dτ,  ∀t ≥ t_0 ≥ 0   (206)

for any finite constant α.
105 Applying the comparison lemma to (204), we have

V(t) ≤ e^{−σγt} V(0) + ∫_0^t e^{−σγ(t−τ)} [ d_0^2/2 + σθ^2/2 ] dτ   (207)

V(∞) ≤ (1/σγ)[ d_0^2/2 + σθ^2/2 ]   (208)

Therefore we can conclude that V ∈ L_∞.
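The effect of the σ-modification can be seen on the drift example of Section 10.1, where ω = (1 + t)^{−1/2} and y = (5/4)(1 + t)^{−1/4} make the unmodified law drift to infinity. The values of γ, σ, the step and the horizon below are illustrative choices:

```python
# sigma-modification (203) on the drift example of Section 10.1, for
# which the unmodified law gives theta_hat = (1 + t)^(1/4) -> infinity.
gamma, sigma = 1.0, 0.1
dt, T = 1e-3, 1000.0
th, t = 1.0, 0.0
for _ in range(int(T / dt)):
    omega = (1.0 + t) ** -0.5
    eps = 1.25 * (1.0 + t) ** -0.25 - th * omega   # eps = y - theta_hat*omega
    th += dt * (gamma * eps * omega - gamma * sigma * th)
    t += dt
print(th)   # remains bounded (here it even decays), unlike the unmodified law
```

The leakage term −γσθ̂ trades exact convergence in the disturbance-free case for boundedness under disturbances, consistent with the bound (208).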
106 10.4 Robust Adaptive Control

The robust adaptive laws introduced can be applied to various adaptive control schemes. We demonstrate the application of a robust adaptive law to MRAC with ρ = 1. We start directly from the error model

ė = A_m e + b_m [ −k θ̃^T ω + d(t) ]
e_1 = c_m^T e   (209)

where k = k_p/k_m and d(t) is a bounded disturbance with bound d_0, which represents the non-parametric uncertainty in the system. As discussed earlier, we need a robust adaptive law to deal with the bounded disturbance. If we take the σ-modification, then the adaptive law is

˙θ̂ = −sign[k_p] Γ e_1 ω − σΓθ̂   (210)

This adaptive law will ensure the boundedness of the variables.
107 Stability Analysis

Let

V = (1/2) e^T P e + (1/2) |k| θ̃^T Γ^{−1} θ̃   (211)

Its derivative is given by

V̇ = −(1/2) e^T Q e + e_1 [ −k θ̃^T ω + d ] + |k| θ̃^T Γ^{−1} ˙θ̃
  ≤ −(1/2) λ_min(Q) ||e||^2 + e_1 d + |k| σ θ̃^T θ̂
  = −(1/2) λ_min(Q) ||e||^2 + e_1 d − |k| σ ||θ̃||^2 + |k| σ θ̃^T θ   (212)

Note that

e_1 d ≤ (1/4) λ_min(Q) ||e||^2 + d_0^2/λ_min(Q)
θ̃^T θ ≤ (1/2) ||θ̃||^2 + (1/2) ||θ||^2   (213)
108 Hence we have

V̇ ≤ −(1/4) λ_min(Q) ||e||^2 − (1/2)|k|σ ||θ̃||^2 + d_0^2/λ_min(Q) + (1/2)|k|σ ||θ||^2
  ≤ −αV + d_0^2/λ_min(Q) + (1/2)|k|σ ||θ||^2   (214)

where α is a positive real constant given by

α = min{ (1/2)λ_min(Q), |k|σ } / max{ λ_max(P), |k|/λ_min(Γ) }   (215)

Therefore, we can conclude the boundedness of V from the comparison lemma, which further implies the boundedness of the tracking error e_1 and the estimate θ̂.
109 11. Adaptive Control of Nonlinear Systems

11.1 First Order Nonlinear Systems

Consider a first order nonlinear system described by

ẏ = u + φ^T(y)θ   (216)

where φ : R → R^p is a smooth nonlinear function, and θ ∈ R^p is an unknown vector of constant parameters. For this system, an adaptive control law can be designed as

u = −cy − φ^T(y)θ̂   (217)
˙θ̂ = Γyφ(y)   (218)

where c is a positive real constant, and Γ is a positive definite gain matrix.
110 The closed-loop dynamics is given by

ẏ = −cy + φ^T(y)θ̃   (219)

with the usual notation θ̃ = θ − θ̂.

Stability Analysis

Let

V = (1/2) y^2 + (1/2) θ̃^T Γ^{−1} θ̃   (220)

and its derivative is obtained as

V̇ = −cy^2   (221)

which ensures the boundedness of y and θ̂. We can show lim_{t→∞} y(t) = 0 in the usual way.
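A simulation sketch of (216)–(218) with a hypothetical regressor φ(y) = [y^2, sin y]^T and true parameters θ = [1, −2]^T; the gain c, the diagonal Γ and the Euler step are illustrative choices:

```python
import math

# First-order adaptive controller (216)-(218) on the hypothetical
# system dy/dt = u + y^2*theta_1 + sin(y)*theta_2.
theta = [1.0, -2.0]       # unknown true parameters
c = 2.0
gam = [1.0, 1.0]          # diagonal entries of Gamma
dt, T = 1e-3, 20.0
y = 1.0
th = [0.0, 0.0]           # estimates
for _ in range(int(T / dt)):
    phi = [y * y, math.sin(y)]
    u = -c * y - sum(p * e for p, e in zip(phi, th))             # (217)
    th = [e + dt * g * y * p for e, g, p in zip(th, gam, phi)]   # (218)
    y += dt * (u + sum(p * q for p, q in zip(phi, theta)))
print(y, th)   # y -> 0 while the estimates stay bounded
```

As in the analysis, y converges to zero while θ̂ is only guaranteed bounded, since V̇ = −cy^2 says nothing about parameter convergence.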
111 11.2 Adaptive Backstepping

Consider a second order nonlinear system

ẋ_1 = x_2 + φ_1^T(x_1)θ
ẋ_2 = u + φ_2^T(x_1, x_2)θ   (222)

where φ_1 : R → R^p and φ_2 : R^2 → R^p are smooth nonlinear functions, and θ ∈ R^p is an unknown constant parameter vector. From the previous section, we know that if x_2 = −c_1x_1 − φ_1^T(x_1)θ̂, then the first part of the system is stable. Hence, if we set

α = −c_1x_1 − φ_1^T(x_1)θ̂   (223)

then the backstepping control design can be used to design the control input u in a similar way as for systems without unknown parameters.
112 For the adaptive backstepping design, we define

z_1 = x_1   (224)
z_2 = x_2 − α   (225)

We have the dynamics of z_1 as

ż_1 = z_2 + α + φ_1^T(x_1)θ = −c_1z_1 + z_2 + φ_1^T(x_1)θ̃   (226)

Consider the dynamics of z_2:

ż_2 = ẋ_2 − α̇(x_1, θ̂)
    = u + φ_2^T(x_1, x_2)θ − (∂α/∂x_1)ẋ_1 − (∂α/∂θ̂)˙θ̂
    = u + [ φ_2(x_1, x_2) − (∂α/∂x_1)φ_1(x_1) ]^T θ − (∂α/∂x_1)x_2 − (∂α/∂θ̂)˙θ̂   (227)

Notice that the adaptive law designed based on the dynamics of z_1 alone will not work, because the dynamics of z_2 involves the unknown parameter θ as well.
113 We leave the adaptive law to the Lyapunov-function-based analysis later. From the dynamics of z_2, we design the control input as

u = −z_1 − c_2z_2 − [ φ_2(x_1, x_2) − (∂α/∂x_1)φ_1(x_1) ]^T θ̂ + (∂α/∂x_1)x_2 + (∂α/∂θ̂)˙θ̂   (228)

The resultant dynamics of z_2 is given by

ż_2 = −z_1 − c_2z_2 + [ φ_2(x_1, x_2) − (∂α/∂x_1)φ_1(x_1) ]^T θ̃   (229)

Consider a Lyapunov function candidate

V = (1/2)[ z_1^2 + z_2^2 + θ̃^T Γ^{−1} θ̃ ]   (230)
114 We then have

V̇ = z_1[ −c_1z_1 + z_2 + φ_1^T θ̃ ] + z_2[ −z_1 − c_2z_2 + [φ_2 − (∂α/∂x_1)φ_1]^T θ̃ ] − θ̃^T Γ^{−1} ˙θ̂
  = −c_1z_1^2 − c_2z_2^2 + [ z_1φ_1 + z_2(φ_2 − (∂α/∂x_1)φ_1) − Γ^{−1}˙θ̂ ]^T θ̃   (231)

From the above analysis, we choose the adaptive law as

˙θ̂ = Γ[ z_1φ_1 + z_2(φ_2 − (∂α/∂x_1)φ_1) ]   (232)

and we have

V̇ = −c_1z_1^2 − c_2z_2^2   (233)

Therefore we have the boundedness of z_1, z_2 and θ̂, and lim_{t→∞} z_i = 0 for i = 1, 2.
115 11.3 Adaptive Control of Strict Feedback Systems

Consider a nonlinear system in the strict feedback form

ẋ_i = x_{i+1} + φ_i^T(x_1, x_2, ..., x_i)θ,  for i = 1, ..., n − 1
ẋ_n = u + φ_n^T(x_1, x_2, ..., x_n)θ   (234)

Adaptive control design can be carried out using backstepping iteratively in n steps. The main difficulty is in the design of the adaptive law, as the same unknown parameter vector appears in every step. The initial method used multiple estimates of θ. Later, the tuning function method was proposed to solve the multiple-estimate problem. For more details, please refer to (Krstic et al, 1995).
116 Example 11.1 Design an adaptive control law for

ẋ_1 = x_2 + (e^{x_1} − 1)θ
ẋ_2 = u   (235)

Example 11.2 Design an adaptive control law for

ẋ_1 = x_2 + x_1^3 θ + x_1^2
ẋ_2 = (1 + x_1^2)u + x_1^2 θ   (236)
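For Example 11.1 the backstepping formulas specialize to φ_1 = e^{x_1} − 1 and φ_2 = 0, with scalar θ. A simulation sketch following (223)–(232); the true value θ = 1 and the gains c_1 = c_2 = Γ = 1 are illustrative assumptions:

```python
import math

# Adaptive backstepping for Example 11.1: phi_1 = e^{x1} - 1, phi_2 = 0,
# following the virtual control (223), the adaptive law (232) and the
# control input (228) with scalar theta.
theta, c1, c2, Gam = 1.0, 1.0, 1.0, 1.0
dt, T = 1e-3, 30.0
x1, x2, th = 1.0, 0.0, 0.0
for _ in range(int(T / dt)):
    phi1 = math.exp(x1) - 1.0
    phi2 = 0.0
    alpha = -c1 * x1 - phi1 * th          # virtual control (223)
    z1, z2 = x1, x2 - alpha
    da_dx1 = -c1 - math.exp(x1) * th      # partial alpha / partial x1
    da_dth = -phi1                        # partial alpha / partial theta_hat
    thdot = Gam * (z1 * phi1 + z2 * (phi2 - da_dx1 * phi1))   # (232)
    u = (-z1 - c2 * z2 - (phi2 - da_dx1 * phi1) * th
         + da_dx1 * x2 + da_dth * thdot)                      # (228)
    x1 += dt * (x2 + phi1 * theta)        # Euler integration of (235)
    x2 += dt * u
    th += dt * thdot
print(x1, x2, th)   # x1, x2 -> 0; theta_hat stays bounded
```

Note that the control (228) compensates for the motion of α through both its arguments, which is why ∂α/∂θ̂ and the tuning term ˙θ̂ appear explicitly in u.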
Lecture 9 Nonlinear Control Design Course Outline Eact-linearization Lyapunov-based design Lab Adaptive control Sliding modes control Literature: [Khalil, ch.s 13, 14.1,14.] and [Glad-Ljung,ch.17] Lecture
More information10 Transfer Matrix Models
MIT EECS 6.241 (FALL 26) LECTURE NOTES BY A. MEGRETSKI 1 Transfer Matrix Models So far, transfer matrices were introduced for finite order state space LTI models, in which case they serve as an important
More informationNonlinear Control Systems
Nonlinear Control Systems António Pedro Aguiar pedro@isr.ist.utl.pt 5. Input-Output Stability DEEC PhD Course http://users.isr.ist.utl.pt/%7epedro/ncs2012/ 2012 1 Input-Output Stability y = Hu H denotes
More informationRichiami di Controlli Automatici
Richiami di Controlli Automatici Gianmaria De Tommasi 1 1 Università degli Studi di Napoli Federico II detommas@unina.it Ottobre 2012 Corsi AnsaldoBreda G. De Tommasi (UNINA) Richiami di Controlli Automatici
More informationProblem Set 4 Solution 1
Massachusetts Institute of Technology Department of Electrical Engineering and Computer Science 6.245: MULTIVARIABLE CONTROL SYSTEMS by A. Megretski Problem Set 4 Solution Problem 4. For the SISO feedback
More informationEngineering Tripos Part IIB Nonlinear Systems and Control. Handout 4: Circle and Popov Criteria
Engineering Tripos Part IIB Module 4F2 Nonlinear Systems and Control Handout 4: Circle and Popov Criteria 1 Introduction The stability criteria discussed in these notes are reminiscent of the Nyquist criterion
More informationChapter #4 EEE8086-EEE8115. Robust and Adaptive Control Systems
Chapter #4 Robust and Adaptive Control Systems Nonlinear Dynamics.... Linear Combination.... Equilibrium points... 3 3. Linearisation... 5 4. Limit cycles... 3 5. Bifurcations... 4 6. Stability... 6 7.
More informationB5.6 Nonlinear Systems
B5.6 Nonlinear Systems 4. Bifurcations Alain Goriely 2018 Mathematical Institute, University of Oxford Table of contents 1. Local bifurcations for vector fields 1.1 The problem 1.2 The extended centre
More informationEG4321/EG7040. Nonlinear Control. Dr. Matt Turner
EG4321/EG7040 Nonlinear Control Dr. Matt Turner EG4321/EG7040 [An introduction to] Nonlinear Control Dr. Matt Turner EG4321/EG7040 [An introduction to] Nonlinear [System Analysis] and Control Dr. Matt
More informationTTK4150 Nonlinear Control Systems Solution 6 Part 2
TTK4150 Nonlinear Control Systems Solution 6 Part 2 Department of Engineering Cybernetics Norwegian University of Science and Technology Fall 2003 Solution 1 Thesystemisgivenby φ = R (φ) ω and J 1 ω 1
More informationB5.6 Nonlinear Systems
B5.6 Nonlinear Systems 5. Global Bifurcations, Homoclinic chaos, Melnikov s method Alain Goriely 2018 Mathematical Institute, University of Oxford Table of contents 1. Motivation 1.1 The problem 1.2 A
More informationProblem Set 5 Solutions 1
Massachusetts Institute of Technology Department of Electrical Engineering and Computer Science 6.245: MULTIVARIABLE CONTROL SYSTEMS by A. Megretski Problem Set 5 Solutions The problem set deals with Hankel
More informationUDE-based Dynamic Surface Control for Strict-feedback Systems with Mismatched Disturbances
16 American Control Conference ACC) Boston Marriott Copley Place July 6-8, 16. Boston, MA, USA UDE-based Dynamic Surface Control for Strict-feedback Systems with Mismatched Disturbances Jiguo Dai, Beibei
More informationRaktim Bhattacharya. . AERO 632: Design of Advance Flight Control System. Norms for Signals and Systems
. AERO 632: Design of Advance Flight Control System Norms for Signals and. Raktim Bhattacharya Laboratory For Uncertainty Quantification Aerospace Engineering, Texas A&M University. Norms for Signals ...
More informationControls Problems for Qualifying Exam - Spring 2014
Controls Problems for Qualifying Exam - Spring 2014 Problem 1 Consider the system block diagram given in Figure 1. Find the overall transfer function T(s) = C(s)/R(s). Note that this transfer function
More information