REGULATION OF NONLINEAR SYSTEMS USING CONDITIONAL INTEGRATORS. Abhyudai Singh


REGULATION OF NONLINEAR SYSTEMS USING CONDITIONAL INTEGRATORS

By

Abhyudai Singh

A THESIS

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of

MASTER OF SCIENCE

Department of Electrical Engineering

2004

ABSTRACT

REGULATION OF NONLINEAR SYSTEMS USING CONDITIONAL INTEGRATORS

By Abhyudai Singh

Regulation of nonlinear systems using conditional integrators is studied. Previous work introduced the tool of conditional integrators, which provide integral action inside a boundary layer while acting as stable systems outside it, leading to improvement in the transient response while achieving asymptotic regulation in the presence of unknown constant disturbances or parameter uncertainties. That approach, however, is restricted to a sliding mode control framework. This thesis extends the tool to a fairly general class of state feedback control laws, with the stipulation that a Lyapunov function is known for the closed-loop system. Asymptotic regulation with improvement in the transient response is achieved by using the Lyapunov redesign technique to implement the state feedback control as a saturated high-gain feedback and introducing a conditional integrator to provide integral action inside a boundary layer. The improvement in transient response obtained with conditional integrators is demonstrated by an experimental application to the Pendubot.

To my family

ACKNOWLEDGMENTS

I would like to express my deepest appreciation to the following people: To my advisor Dr. Hassan K. Khalil, for his guidance and insight. It has been my pleasure, and indeed a privilege, to have worked with him. His pure, unselfish and honest passion for the subject will always serve as an example and inspiration for me. To my grandparents (V. K. S. Chaudhary, G. B. Singh and K. D. Singh), who instilled in me an appreciation for honesty and hard work. And to my parents (Yatindra and Neeta Singh), who have supported me, encouraged me, and believed in me.

TABLE OF CONTENTS

LIST OF FIGURES

1 Introduction

2 Regulation using Conditional Integrators
   2.1 Problem Statement and Controller Design
   2.2 Class of Systems
   2.3 Asymptotic Regulation
   2.4 Performance

3 Output Feedback of Minimum-Phase Systems
   3.1 Problem Statement
   3.2 Partial State Feedback Design
   3.3 Output Feedback Design
   3.4 Comparison of Controllers

4 Application to the Pendubot
   4.1 The Pendubot
   4.2 Mathematical Model
   4.3 The Equilibrium Manifold
   4.4 Controlling the Pendubot
   4.5 Hardware Description
   4.6 Observer Design
   4.7 Addition of Uncertainty
   4.8 Integral Action
   4.9 Conditional Integrators

5 Conclusion

APPENDICES

A Controller Design
   A.1 Linear Controller
   A.2 Traditional Integral Action
   A.3 Conditional Integrator

BIBLIOGRAPHY

LIST OF FIGURES

2.1 Two-link robot.

2.2 Plot of the error x1(t) − x1*(t) for systems (2.38) and (2.39) with µ = 1 (solid), µ = 0.1 (dashed), and plot of s = y + σ (dashed) for µ = 1, with initial conditions x1(0) = x1*(0) = 0, x2(0) = x2*(0) = 20 and σ(0) = 0.

2.3 Plot of the error x1(t) − x1*(t) for systems (2.40) and (2.41) with µ = 1 (solid), µ = 0.1 (dashed), and plot of s = y + σ (dashed) for µ = 1, with initial conditions x1(0) = x1*(0) = 0, x2(0) = x2*(0) = 15 and σ(0) = 0.

3.1 Plots of x1(t), x2(t) and x3(t) for the controller design (3.20) (dashed) and the controller design using sliding mode control (solid).

3.2 Plots of u1(t) and u2(t) for the controller design (3.20) (dashed) and the controller design using sliding mode control (solid).

4.1 Front and side view drawing of the Pendubot.

4.2 The Pendubot arm at a: controllable position, b: uncontrollable position.

4.3 Trajectories for angle q1 for different balancing controls: linear controller with no disturbance (solid), linear controller with constant disturbance of 1.6V in the control (dotted), linear state feedback integral controller with constant disturbance of 1.6V in the control (dashed).

4.4 Trajectories for angle q1 for different balancing controls with disturbance of 1.6V in the control: traditional integral action (solid), conditional integrator with µ = 0.25 (dashed), conditional integrator with µ = 0.15 (dotted).

4.5 Trajectories for angle q2 for different balancing controls with disturbance of 1.6V in the control: traditional integral action (solid), conditional integrator with µ = 0.25 (dashed), conditional integrator with µ = 0.15 (dotted).

CHAPTER 1

Introduction

When a system is subjected to unknown constant disturbances or parameter uncertainties that cause a shift in the equilibrium point, we need to use integral action to achieve robust asymptotic regulation. The traditional approach for introducing integral action is to augment integrators with the system and design a feedback control to stabilize the augmented system [4], [7], [8], [9], [10], [12]. The integrators are introduced in such a way that they create an equilibrium point at which the regulation error is zero. Hence, stabilizing the equilibrium point ensures asymptotic regulation. In this approach, achieving asymptotic regulation usually comes at the expense of degrading the transient response.

A new approach of conditional integrators has recently been introduced in [14]. The work [14] deals with continuous implementation of sliding mode control and introduces conditional integrators that provide integral action inside a boundary layer of the sliding surface, while acting like a stable system outside it. The striking feature of the results of [14] is that the state of the conditional integrator is always of the order of µ (the width of the boundary layer), and as µ → 0 the trajectories of the closed-loop system approach the trajectories obtained under ideal sliding mode control without integral action, which provides analytical confirmation of the property that integral action is introduced without degrading the transient response.

In this thesis we extend the tool of conditional integrators beyond the sliding mode control framework. Towards that end, we consider a locally Lipschitz state feedback control law that stabilizes the origin of a nonlinear system and assume that we know a Lyapunov function for the closed-loop system. Then, we perturb the system with a matched uncertainty that causes a shift in the equilibrium point. To recover asymptotic regulation of the state, we introduce integral action via a conditional integrator. The key idea is to use Lyapunov redesign to implement the state feedback control as a saturated high-gain feedback and then introduce the conditional integrator to provide integral action inside a boundary layer. We prove analytically that, for sufficiently small µ, conditional integrators can recover asymptotic regulation of the state to the origin in the presence of matched uncertainty and, under certain assumptions, global asymptotic regulation can be achieved. We also show that, for a certain compact set of initial states, as the width of the boundary layer approaches zero, the trajectories of the system with conditional integrator approach those of a system with no integral action. Hence, degradation in transient response due to integral action can be attenuated by tuning the width of the boundary layer without sacrificing asymptotic regulation.

In the aforementioned results the goal is to regulate the state of the system to zero despite the constant disturbance or parameter uncertainty. In many regulation problems, however, such a requirement could be restrictive and it might be sufficient to regulate the states to a disturbance-dependent equilibrium point at which the regulation error is zero. In Chapter 3, we consider such regulation for a class of minimum-phase, input-output linearizable, nonlinear systems, which is basically the same class studied in [14], aside from some differences in the technical assumptions. As in [14], we study output feedback control using high-gain observers, but, unlike [14], we do not use sliding mode control. Instead, we use a saturated high-gain feedback implementation of the linearizing feedback control. Finally, in Chapter 4, we demonstrate the improvement in transient response with conditional integrators by an application to the Pendubot.

CHAPTER 2

Regulation using Conditional Integrators

2.1 Problem Statement and Controller Design

Consider a nonlinear system represented by

$$\dot{x} = f(x) + G(x)u \tag{2.1}$$

where u ∈ R^m is the control input, x ∈ R^n is the state vector, and G(x), f(x) are, possibly unknown, sufficiently smooth functions in a domain X ⊂ R^n, with 0 ∈ X and f(0) = 0. Suppose there are a, possibly unknown, locally Lipschitz function ψ(x), with ψ(0) = 0, and a C² Lyapunov function V(x), with V(0) = 0, such that the following assumptions are satisfied.

Assumption 1:

$$\frac{\partial V}{\partial x}\,[f(x) + G(x)\psi(x)] \le -W(x), \quad \forall x \in X \tag{2.2}$$

for some continuous positive definite function W(x).

Assumption 2: (∂V/∂x)G(x) can be expressed as (∂V/∂x)G(x) = hᵀ(x)L(x), where h(x) is a known continuous function with h(0) = 0 and L(x) is a, possibly unknown, continuous function satisfying Lᵀ(x) + L(x) ≥ 2λI and ‖L(x)‖ ≤ k, ∀x ∈ X, where k ≥ λ > 0.

Assumption 3: The square matrix M(x) = (∂h/∂x)G(x) has the property that M(0) + Mᵀ(0) is positive definite.

Let Ω = {V(x) ≤ c₁} ⊂ X be a compact set and

$$\lambda_p = \lambda_{\min}\!\left\{\frac{M(0) + M^T(0)}{2}\right\}, \qquad k_p = \|M(0)\| \tag{2.3}$$

where λ_min{·} stands for the smallest eigenvalue of a positive definite matrix. Suppose system (2.1) is perturbed by a sufficiently smooth matched uncertainty δ(x) that causes a shift in the equilibrium point, i.e., δ(0) ≠ 0. The perturbed system can be written as

$$\dot{x} = f(x) + G(x)[u + \delta(x)] = f(x) + G(x)\psi(x) + G(x)u + G(x)[\delta(x) - \psi(x)]. \tag{2.4}$$

Let β(x) ≥ 0 be a known continuous function such that ‖δ(x) − ψ(x)‖ ≤ β(x), ∀x ∈ Ω. Choose a locally Lipschitz function α(x) such that, ∀x ∈ Ω,

$$\alpha(x) \ge k_\alpha \beta(x) + \alpha_0, \tag{2.5}$$

where k_α = max{k/λ, k_p/λ_p} ≥ 1 and α₀ > 0. We make the following local assumption on the functions G(x), f(x), δ(x), h(x), ψ(x) and α(x).

Assumption 4: There exist non-negative constants k₁ to k₆ such that

$$\|\delta(x) - \delta(0) - \psi(x)\| \le k_1\|h(x)\| + k_2\sqrt{W(x)}$$

$$\left\|\frac{\partial h}{\partial x}\,[f(x) + G(x)\psi(x)]\right\| \le k_3\|h(x)\| + k_4\sqrt{W(x)}$$

$$\|\alpha(x) - \alpha(0)\| \le k_5\|h(x)\| + k_6\sqrt{W(x)}$$

in some neighborhood of x = 0.

If W(x) ≥ k_w‖x‖², k_w > 0, near the origin, then any locally Lipschitz function that vanishes at the origin can be bounded by k_a√W(x) for some k_a > 0. Define the vector saturation function ϕ(y) by

$$\phi(y) = \begin{cases} \dfrac{y}{\|y\|} & \text{if } \|y\| \ge 1 \\[1ex] y & \text{if } \|y\| \le 1. \end{cases} \tag{2.6}$$

It is not hard to verify that the saturated high-gain feedback control

$$u = -\alpha(x)\,\phi\!\left(\frac{y}{\mu}\right), \quad \mu > 0, \tag{2.7}$$

where y = h(x), achieves practical stabilization in the sense that, within a finite time interval, x(t) reaches a neighborhood of the origin whose size is a class K function of µ. Hence, by reducing µ, we can reduce the ultimate bound on ‖x‖. In practice, however, we cannot make the ultimate bound arbitrarily small because that would require µ to be arbitrarily small, which induces chattering, excitation of unmodeled fast dynamics, and other well-known problems. To achieve asymptotic regulation without forcing µ to be arbitrarily small, we introduce a conditional integrator by modifying (2.7) to

$$u = -\alpha(x)\,\phi\!\left(\frac{y + \gamma\sigma}{\mu}\right), \tag{2.8}$$

where σ is the output of the conditional integrator

$$\dot{\sigma} = -\gamma\sigma + \mu\,\phi\!\left(\frac{y + \gamma\sigma}{\mu}\right), \quad \gamma > 0. \tag{2.9}$$
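To make the structure of the controller concrete, the following minimal Python sketch evaluates (2.6), (2.8) and (2.9) at a given state. It is an illustration only: the function names are hypothetical, the gain α(x) is assumed to be supplied by the caller, and the integration of σ over time is left to whatever solver is used.

```python
import numpy as np

def phi(y):
    """Vector saturation (2.6): identity inside the unit ball, unit vector outside."""
    n = np.linalg.norm(y)
    return y if n <= 1.0 else y / n

def conditional_integrator_control(y, sigma, alpha_x, mu, gamma):
    """Saturated high-gain feedback with conditional integrator, (2.8)-(2.9).

    y       : measured output h(x)
    sigma   : current state of the conditional integrator
    alpha_x : gain alpha(x) evaluated at the current state
    Returns the control u and the integrator derivative sigma_dot.
    """
    s = (y + gamma * sigma) / mu
    u = -alpha_x * phi(s)
    # Inside the boundary layer {||y + gamma*sigma|| <= mu} this reduces to sigma_dot = y.
    sigma_dot = -gamma * sigma + mu * phi(s)
    return u, sigma_dot
```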

Because ‖ϕ(y)‖ ≤ 1, ∀y ∈ R^m, it can be verified that ‖σ(t)‖ ≤ µ/γ for all t ≥ 0 provided ‖σ(0)‖ ≤ µ/γ, since from (2.9)

$$\sigma^T\dot{\sigma} = -\gamma\sigma^T\sigma + \mu\,\sigma^T\phi\!\left(\frac{y+\gamma\sigma}{\mu}\right) \le -\gamma\|\sigma\|^2 + \mu\|\sigma\| \le 0, \quad \forall\,\|\sigma\| \ge \frac{\mu}{\gamma}. \tag{2.10}$$

Inside the boundary layer {‖y + γσ‖ ≤ µ}, (2.9) reduces to

$$\dot{\sigma} = -\gamma\sigma + \mu\,\frac{y + \gamma\sigma}{\mu} = y, \tag{2.11}$$

which provides integral action.

2.2 Class of Systems

We will now show that Assumptions 1-4 are satisfied for a class of feedback linearizable systems. Consider the system

$$\begin{aligned}
\dot{\xi}^i_1 &= \xi^i_2\\
&\;\;\vdots\\
\dot{\xi}^i_{r_i-1} &= \xi^i_{r_i}\\
\dot{\xi}^i_{r_i} &= b_i(\xi,\theta) + \sum_{j=1}^{m} d_{ij}(\xi,\theta)\,u_j\\
y_i &= \xi^i_1, \qquad 1 \le i \le m
\end{aligned} \tag{2.12}$$

where u = [u₁, …, u_m]ᵀ ∈ U ⊂ R^m is the control input, ξ = [ξ¹₁, …, ξ¹_{r₁}, …, ξ^m₁, …, ξ^m_{r_m}]ᵀ ∈ X ⊂ R^n is the state vector, with 0 ∈ X, θ ∈ Θ ⊂ R^w is a vector of unknown constant parameters, and the functions b_i(ξ, θ) and d_ij(ξ, θ) are locally Lipschitz in their arguments over the domain of interest, with b_i(0, θ) = 0, ∀θ ∈ Θ. We assume that the decoupling matrix D(ξ, θ) = {d_ij(ξ, θ), 1 ≤ i ≤ m, 1 ≤ j ≤ m} is nonsingular ∀ξ ∈ X and θ ∈ Θ. System (2.12) can be written as

$$\dot{\xi} = A\xi + B[b(\xi,\theta) + D(\xi,\theta)u] = A\xi + B[b(\xi,\theta) + D(\xi,\hat{\theta})D^{-1}(\xi,\hat{\theta})D(\xi,\theta)u],$$

where

$$A = \mathrm{block\ diag}[A_1,\ldots,A_m], \qquad A_i = \begin{bmatrix} 0 & 1 & 0 & \cdots & 0\\ 0 & 0 & 1 & \cdots & 0\\ \vdots & & & \ddots & \vdots\\ 0 & 0 & 0 & \cdots & 1\\ 0 & 0 & 0 & \cdots & 0 \end{bmatrix}_{r_i \times r_i} \tag{2.13}$$

$$B = \mathrm{block\ diag}[B_1,\ldots,B_m], \qquad B_i = \begin{bmatrix} 0\\ \vdots\\ 0\\ 1 \end{bmatrix}_{r_i \times 1}, \qquad b(\xi,\theta) = \begin{bmatrix} b_1(\xi,\theta)\\ \vdots\\ b_m(\xi,\theta) \end{bmatrix}, \tag{2.14}$$

1 ≤ i ≤ m, and θ̂ is a nominal value of θ. It can be seen that Assumption 1 is satisfied with

$$\psi(\xi,\theta) = D^{-1}(\xi,\theta)[-b(\xi,\theta) - K\xi], \qquad V(\xi) = \xi^T P\xi, \qquad W(\xi) = \xi^T\xi,$$

where the matrix K is chosen such that A − BK is Hurwitz and P = Pᵀ > 0 is the solution of the Lyapunov equation P(A − BK) + (A − BK)ᵀP = −I. Since D⁻¹(ξ, θ̂)D(ξ, θ) = I for θ = θ̂, it must be true that for some perturbation of θ around θ̂,

$$\frac{[D^{-1}(\xi,\hat{\theta})D(\xi,\theta)] + [D^{-1}(\xi,\hat{\theta})D(\xi,\theta)]^T}{2} > 0. \tag{2.15}$$

Let (2.15) be satisfied for θ ∈ Θ₁ ⊂ Θ. For the single-input case, condition (2.15) implies that the scalars D(ξ, θ) and D(ξ, θ̂) have the same sign. With (2.15), Assumption 2 will be satisfied with hᵀ(ξ, θ̂) = (∂V/∂ξ)BD(ξ, θ̂) and L(ξ, θ) = D⁻¹(ξ, θ̂)D(ξ, θ).

Figure 2.1. Two-link robot.

Since V(ξ) is quadratic and rank[D(0, θ̂)] = m,

$$D^T(\xi,\hat{\theta})\,B^T\,\frac{\partial^2 V(\xi)}{\partial\xi^2}\,B\,D(\xi,\hat{\theta})\bigg|_{\xi=0} > 0.$$

Using arguments similar to those used for (2.15), it can be shown that there exists a set Θ₂ ⊂ Θ such that the matrix

$$D_1(\theta,\hat{\theta}) = D^T(\xi,\hat{\theta})\,B^T\,\frac{\partial^2 V(\xi)}{\partial\xi^2}\,B\,D(\xi,\hat{\theta})\,[D^{-1}(\xi,\hat{\theta})D(\xi,\theta)]\bigg|_{\xi=0}$$

satisfies

$$\frac{D_1(\theta,\hat{\theta}) + D_1^T(\theta,\hat{\theta})}{2} > 0 \tag{2.16}$$

for θ ∈ Θ₂. Inequality (2.16) satisfies Assumption 3. Finally, Assumption 4 is also satisfied since W(ξ) is quadratic. Hence, Assumptions 1-4 will be satisfied ∀θ ∈ Θ₁ ∩ Θ₂ ⊂ Θ.

Example: The two-link robot shown in Figure 2.1 can be modeled [13] by the equation

$$M(q)\ddot{q} + C(q,\dot{q})\dot{q} + D\dot{q} + g(q) = u,$$

where q is a two-dimensional vector of generalized coordinates representing the joint positions and u is a two-dimensional control (torque) input. M(q) is a symmetric inertia matrix, which is positive definite for all q ∈ R² and is given by

$$M(q) = \begin{bmatrix} a_1 + 2a_4\cos q_2 & a_2 + a_4\cos q_2\\ a_2 + a_4\cos q_2 & a_3 \end{bmatrix},$$

where a₁, …, a₄ are positive constants. The term C(q, q̇)q̇ accounts for centrifugal and Coriolis forces, with

$$C(q,\dot{q}) = -a_4\sin(q_2)\begin{bmatrix} \dot{q}_2 & \dot{q}_1 + \dot{q}_2\\ -\dot{q}_1 & 0 \end{bmatrix}.$$

The term Dq̇ accounts for viscous damping, where D is a positive semidefinite matrix. In this example, however, we will neglect damping and take D = 0. The term g(q), which accounts for the gravity forces, is given by

$$g(q) = \begin{bmatrix} b_1\cos q_1 + b_2\cos(q_1 + q_2)\\ b_2\cos(q_1 + q_2) \end{bmatrix},$$

where b₁ and b₂ are positive constants. The constants a₁, …, a₄, b₁ and b₂ depend on the masses, the moments of inertia, the lengths of the two links, the acceleration due to gravity

and the load carried by the robot. The nominal values corresponding to no load on the robot are â₁ = , â₂ = 23.5, â₃ = 122.5, â₄ = 25, b̂₁ = 784.8, b̂₂ = . The state space equation for the two-link robot is given by

$$\dot{x} = Ax + B\{\hat{M}^{-1}(q)\,\hat{M}(q)M^{-1}(q)\,[u - C(q,\dot{q})\dot{q} - g(q)]\}, \qquad x = \begin{bmatrix} q_1\\ \dot{q}_1\\ q_2\\ \dot{q}_2 \end{bmatrix},$$

where A and B are given by (2.13) and (2.14), respectively, with {r_i = 2, 1 ≤ i ≤ 2}, and

$$\hat{M}(q) = \begin{bmatrix} \hat{a}_1 + 2\hat{a}_4\cos q_2 & \hat{a}_2 + \hat{a}_4\cos q_2\\ \hat{a}_2 + \hat{a}_4\cos q_2 & \hat{a}_3 \end{bmatrix}.$$

It can be seen that Assumption 1 is satisfied with

$$\psi(x) = C(q,\dot{q})\dot{q} + g(q) - M(q)Kx, \qquad V(x) = x^T P x, \qquad W(x) = 10\,x^T x,$$

with P = , K = .

Due to an unknown load, the actual system parameters are perturbed to a₁ = 259.7, a₂ = 58.19, a₃ = , a₄ = 56.25, b₁ = , b₂ = . It can be verified that for this perturbation, ∀q ∈ R²,

$$[\hat{M}(q)M^{-1}(q)] + [\hat{M}(q)M^{-1}(q)]^T > 0$$

$$[M^{-T}(q)B^T P B\,\hat{M}^{-1}(q)] + [M^{-T}(q)B^T P B\,\hat{M}^{-1}(q)]^T > 0.$$

Hence, Assumptions 2 and 3 are satisfied with hᵀ(x) = 2xᵀPB M̂⁻¹(q) and L(x) = M̂(q)M⁻¹(q). Finally, Assumption 4 is also satisfied since W(x) is quadratic.

2.3 Asymptotic Regulation

In this section we prove that, for sufficiently small µ, the system (2.4), (2.8), (2.9) has an asymptotically stable equilibrium point at which x = 0.

Theorem 1: Suppose Assumptions 1-4 are satisfied. Then, there exists µ* > 0 such

that ∀µ ∈ (0, µ*]:

a) The system (2.4), (2.8), (2.9) has an asymptotically stable equilibrium point at

$$\left(x = 0,\ \bar{\sigma} = \frac{\mu\,\delta(0)}{\alpha(0)\gamma}\right)$$

and Σ = {V(x) ≤ c₁} × {‖σ‖ ≤ µ/γ} is a subset of the region of attraction.

b) If (2.5) and Assumptions 1, 2 hold globally, V(x) is radially unbounded, and

$$\alpha(x) \le k_g W(x) + c_g, \quad \forall x \in R^n \tag{2.17}$$

for some k_g, c_g > 0, then, ∀(x(0), σ(0)) ∈ R^n × {‖σ‖ ≤ µ/γ}, lim_{t→∞} x(t) = 0.

Proof: a) The proof is divided into three parts. First we show that all trajectories of the system (2.4), (2.8), (2.9) starting in the set Σ reach a positively invariant set Σ_µ = {V(x) ≤ ρ(µ)} × {‖σ‖ ≤ µ/γ} in finite time, where ρ is a class K function. Then we show that the trajectories in the set Σ_µ enter the boundary layer {‖y + γσ‖ ≤ µ} in finite time. Inside the boundary layer, we show that the system has an asymptotically stable equilibrium point at which x = 0.

To show that all trajectories starting in Σ enter Σ_µ in finite time, we calculate V̇ for (x, σ) ∈ Σ:

$$\begin{aligned}
\dot{V} &= \frac{\partial V}{\partial x}[f(x) + G(x)\psi(x)] + \frac{\partial V}{\partial x}G(x)u + \frac{\partial V}{\partial x}G(x)[\delta(x) - \psi(x)]\\
&\le -W(x) - \alpha(x)\,y^T L(x)\,\phi\!\left(\frac{y+\gamma\sigma}{\mu}\right) + y^T L(x)[\delta(x) - \psi(x)]\\
&= -W(x) - \alpha(x)(y + \gamma\sigma)^T L(x)\,\phi\!\left(\frac{y+\gamma\sigma}{\mu}\right) + \alpha(x)\gamma\sigma^T L(x)\,\phi\!\left(\frac{y+\gamma\sigma}{\mu}\right)\\
&\quad - \gamma\sigma^T L(x)[\delta(x) - \psi(x)] + (y + \gamma\sigma)^T L(x)[\delta(x) - \psi(x)].
\end{aligned}$$

If ‖y + γσ‖ ≥ µ, then ϕ((y+γσ)/µ) = (y+γσ)/‖y+γσ‖. Using γ‖σ‖ ≤ µ, Lᵀ(x) + L(x) ≥ 2λI, ‖L(x)‖ ≤ k and (2.5), we have

$$\dot{V} \le -W(x) - \lambda\alpha(x)\|y + \gamma\sigma\| + k\beta(x)\|y + \gamma\sigma\| + k[\alpha(x) + \beta(x)]\mu \le -W(x) + \alpha_1\mu, \qquad \alpha_1 = \max_{x\in\Omega} k[\alpha(x) + \beta(x)].$$

If ‖y + γσ‖ ≤ µ, then ϕ((y+γσ)/µ) = (y+γσ)/µ and

$$\dot{V} \le -W(x) + k[\alpha(x) + 2\beta(x)]\mu \tag{2.18}$$

$$\le -W(x) + \alpha_2\mu, \qquad \alpha_2 = \max_{x\in\Omega} k[\alpha(x) + 2\beta(x)] \ge \alpha_1.$$

Thus, ∀(x, σ) ∈ Σ, V̇ ≤ −W(x) + α₂µ. Hence, from ([11], Theorem 4.18), for sufficiently small µ, the set Σ is positively invariant and all trajectories starting in Σ enter a positively invariant set Σ_µ = {V(x) ≤ ρ(µ)} × {‖σ‖ ≤ µ/γ} in finite time, where ρ is a class K function.

Next, we show that the trajectories reach the boundary layer {‖y + γσ‖ ≤ µ} in finite time. For (x, σ) ∈ Σ_µ, we have

$$\dot{y} = \frac{\partial h}{\partial x}[f(x) + G(x)\psi(x)] + M(0)[u - \psi(x) + \delta(x)] + [M(x) - M(0)][u - \psi(x) + \delta(x)]. \tag{2.19}$$

From (2.8) and (2.9),

$$\dot{y} + \gamma\dot{\sigma} = -\alpha(x)M(0)\,\phi\!\left(\frac{y+\gamma\sigma}{\mu}\right) + N(x) + M(0)[\delta(x) - \psi(x)] - \gamma^2\sigma + \gamma\mu\,\phi\!\left(\frac{y+\gamma\sigma}{\mu}\right),$$

where

$$N(x) = \frac{\partial h}{\partial x}[f(x) + G(x)\psi(x)] + [M(x) - M(0)][u - \psi(x) + \delta(x)].$$

Since we are already inside the set Σ_µ and N(0) = 0, the term N(x) + γσ̇ can be bounded by a class K function ρ₁(µ), i.e.,

$$\left\|N(x) + \gamma\mu\,\phi\!\left(\frac{y+\gamma\sigma}{\mu}\right) - \gamma^2\sigma\right\| \le \rho_1(\mu).$$

Let V₁ = ½‖y + γσ‖². Then

$$\dot{V}_1 = (y + \gamma\sigma)^T(\dot{y} + \gamma\dot{\sigma}) \le -\alpha(x)(y + \gamma\sigma)^T M(0)\,\phi\!\left(\frac{y+\gamma\sigma}{\mu}\right) + (y + \gamma\sigma)^T M(0)[\delta(x) - \psi(x)] + \rho_1(\mu)\|y + \gamma\sigma\|.$$

If ‖y + γσ‖ ≥ µ, then ϕ((y+γσ)/µ) = (y+γσ)/‖y+γσ‖ and, using (2.3) and (2.5),

$$\dot{V}_1 \le -\lambda_p\left[\alpha(x) - \frac{k_p}{\lambda_p}\beta(x) - \frac{\rho_1(\mu)}{\lambda_p}\right]\|y + \gamma\sigma\| \le -\lambda_p\left[\alpha_0 - \frac{\rho_1(\mu)}{\lambda_p}\right]\|y + \gamma\sigma\|.$$

Thus, for sufficiently small µ, all trajectories inside Σ_µ reach the boundary layer {‖y + γσ‖ ≤ µ} in finite time. Inside the boundary layer, the system (2.4), (2.8), (2.9) is given by

$$\dot{x} = f(x) + G(x)\psi(x) - \alpha(x)G(x)\,\frac{y + \gamma\sigma}{\mu} + G(x)[\delta(x) - \psi(x)]$$

$$\dot{\sigma} = -\gamma\sigma + \mu\,\frac{y + \gamma\sigma}{\mu} = y.$$

We can see that (x = 0, σ̄ = µδ(0)/(α(0)γ)) is an equilibrium point. Defining σ̃ = σ − σ̄ and δ̃(x) = δ(x) − δ(0), the system inside the boundary layer can be written as

$$\dot{x} = f(x) + G(x)\psi(x) - G(x)\left\{\alpha(x)\,\frac{y + \gamma\tilde{\sigma}}{\mu} + \frac{\delta(0)}{\alpha(0)}[\alpha(x) - \alpha(0)]\right\} + G(x)[\tilde{\delta}(x) - \psi(x)] \tag{2.20}$$

$$\dot{\tilde{\sigma}} = y. \tag{2.21}$$

Define the Lyapunov function

$$V_2 = V(x) + \frac{b}{2}\tilde{\sigma}^T\tilde{\sigma} + \frac{c}{2}(y + \gamma\tilde{\sigma})^T(y + \gamma\tilde{\sigma}),$$

where b and c are positive constants to be chosen. Calculating V̇₂ along the trajectories of the system (2.20) and (2.21), we obtain

$$\dot{V}_2 = \dot{V} + b\,\tilde{\sigma}^T y + c\,(y + \gamma\tilde{\sigma})^T\!\left(\frac{\partial h}{\partial x}\dot{x} + \gamma y\right). \tag{2.22}$$

Using Assumptions 1, 2 and 4 yields

$$\begin{aligned}
\dot{V} &= \frac{\partial V}{\partial x}[f(x) + G(x)\psi(x)] + y^T L(x)\left\{-\alpha(x)\,\frac{y + \gamma\tilde{\sigma}}{\mu} - \frac{\delta(0)}{\alpha(0)}[\alpha(x) - \alpha(0)]\right\} + y^T L(x)[\tilde{\delta}(x) - \psi(x)]\\
&\le -W(x) - \frac{\alpha(x)\lambda}{\mu}\|y\|^2 - \frac{\alpha(x)\gamma}{\mu}\,y^T L(x)\tilde{\sigma} + k_7\|y\|^2 + k_8\|y\|\sqrt{W(x)}
\end{aligned}$$

for some non-negative constants k₇, k₈. We also have

$$\frac{\partial h}{\partial x}\dot{x} + \gamma y = \frac{\partial h}{\partial x}[f(x) + G(x)\psi(x)] - M(x)\left\{\alpha(x)\,\frac{y + \gamma\tilde{\sigma}}{\mu} + \frac{\delta(0)}{\alpha(0)}[\alpha(x) - \alpha(0)]\right\} + M(x)[\tilde{\delta}(x) - \psi(x)] + \gamma y.$$

Since x ∈ {V(x) ≤ ρ(µ)}, α(x) and ‖M(x)‖ can be bounded by constants. Moreover, since M(0) + Mᵀ(0) is positive definite, there is a positive constant c₂ ≤ c₁ such that M(x) + Mᵀ(x) is positive definite for all x ∈ Ω₁ = {V(x) ≤ c₂} ⊂ Ω. Let

$$\lambda_q = \frac{1}{2}\,\min_{x\in\Omega_1}\lambda_{\min}\{M(x) + M^T(x)\}. \tag{2.23}$$

For sufficiently small µ, {V(x) ≤ ρ(µ)} ⊂ Ω₁; hence

$$M(x) + M^T(x) \ge 2\lambda_q I, \quad \forall x \in \{V(x) \le \rho(\mu)\}. \tag{2.24}$$

Using (2.24) and Assumption 4 we obtain

$$(y + \gamma\tilde{\sigma})^T\!\left(\frac{\partial h}{\partial x}\dot{x} + \gamma y\right) \le -\frac{\alpha(x)\lambda_q}{\mu}\|y + \gamma\tilde{\sigma}\|^2 + k_{11}\|y\|\sqrt{W(x)} + k_{12}\|\tilde{\sigma}\|\sqrt{W(x)} + k_{12}\|y\|^2 + k_{10}\|\tilde{\sigma}\|\,\|y\|,$$

where k₁₀, k₁₁, k₁₂ ≥ 0. Therefore,

$$\begin{aligned}
\dot{V}_2 &\le -\frac{W(x)}{2} - \frac{\alpha(x)\lambda}{\mu}\|y\|^2 + (k_7 + ck_{12})\|y\|^2 + (k_8 + ck_{11})\|y\|\sqrt{W(x)} - \frac{W(x)}{2} + b\,\tilde{\sigma}^T y\\
&\quad - \frac{c\,\alpha(x)\lambda_q}{\mu}\|y + \gamma\tilde{\sigma}\|^2 + ck_{12}\|\tilde{\sigma}\|\sqrt{W(x)} - \frac{\alpha(x)\gamma}{\mu}\,y^T L(x)\tilde{\sigma} + ck_{10}\|\tilde{\sigma}\|\,\|y\|.
\end{aligned}$$

Taking b = 2cλ_qγα(0)/µ, we can express the upper bound on V̇₂ as

$$\dot{V}_2 \le -W_1 - W_2 - W_3,$$

where

$$W_1 = \frac{W(x)}{2} + \left(\frac{\alpha_{\min}\lambda}{\mu} - k_7 - ck_{12}\right)\|y\|^2 - (k_8 + ck_{11})\|y\|\sqrt{W(x)},$$

$$W_2 = \frac{W(x)}{2} + \frac{c\,\alpha_{\min}\gamma^2\lambda_q}{2\mu}\|\tilde{\sigma}\|^2 - ck_{12}\|\tilde{\sigma}\|\sqrt{W(x)}, \qquad \alpha_{\min} = \min_{x\in\Omega}\alpha(x),$$

$$W_3 = \frac{\alpha(x)}{\mu}\left\{c\lambda_q\|y\|^2 + \frac{c}{2}\gamma^2\lambda_q\|\tilde{\sigma}\|^2 - \left[\gamma k + \frac{2c\lambda_q\gamma\rho_2(\mu) + ck_{10}\mu}{\alpha_{\min}}\right]\|\tilde{\sigma}\|\,\|y\|\right\},$$

and ρ₂ is a class K function such that ‖α(x) − α(0)‖ ≤ ρ₂(µ), ∀x ∈ {V(x) ≤ ρ(µ)}. We can see that both W₁ and W₂ will be positive definite for sufficiently small µ. For W₃ to be positive definite, we need

$$\frac{c^2}{2}\gamma^2\lambda_q^2 > \frac{1}{4}\left[\gamma k + \frac{2c\lambda_q\gamma\rho_2(\mu) + ck_{10}\mu}{\alpha_{\min}}\right]^2.$$

Taking c = k/λ_q ensures that V̇₂ will be negative definite for sufficiently small µ; consequently, the equilibrium point (x = 0, σ̄ = µδ(0)/(α(0)γ)) will be asymptotically stable for sufficiently small µ.

b) Inequality (2.18) is valid ∀(x, σ) ∈ R^n × {‖σ‖ ≤ µ/γ}. Using (2.5) and (2.17) we obtain

$$\dot{V} \le -W(x) + 3k\alpha(x)\mu \le -[1 - 3\mu k_g k]\,W(x) + 3kc_g\mu.$$

Hence, as in part a), for sufficiently small µ, all trajectories starting in (x, σ) ∈ {V(x) ≤ c₁} × {‖σ‖ ≤ µ/γ} enter Σ_µ. Since V(x) is radially unbounded, c₁ can be chosen to include any x(0) ∈ R^n in the set {V(x) ≤ c₁}. The analysis inside the set Σ_µ remains the same as in part a). Thus, for sufficiently small µ, ∀x(0) ∈ R^n and ‖σ(0)‖ ≤ µ/γ, x(t) → 0 as t → ∞.

Remark: In Theorem 1 we assumed ‖σ(0)‖ ≤ µ/γ. However, we can relax this requirement. Given ‖σ(0)‖ ≤ k_σ/γ, k_σ ≥ µ, then, from (2.9),

$$\sigma^T\dot{\sigma} \le -\frac{\gamma}{2}\|\sigma\|^2, \quad \forall\,\|\sigma\| \ge \frac{2\mu}{\gamma}.$$

Hence, from ([11], Theorem 4.18), σ(t) reaches the set {‖σ‖ ≤ 2µ/γ} in finite time. For ‖σ(0)‖ ≤ k_σ/γ we can repeat the derivation preceding (2.18) to show that

$$\dot{V} \le -W(x) + \alpha_2 k_\sigma, \quad \forall x \in \Omega.$$

Thus, from ([11], Theorem 4.18), there exists a positively invariant set S = {V(x) ≤ ρ₃(k_σ)}, where ρ₃ is a class K function, such that trajectories starting outside S reach it in finite time. If k_σ is small enough that S ⊂ Ω, then x(0) ∈ Ω ⇒ x(t) ∈ Ω, ∀t ≥ 0. Once σ reaches the set {‖σ‖ ≤ 2µ/γ}, the proof of Theorem 1 can be repeated for the set Ω × {‖σ‖ ≤ 2µ/γ}.

2.4 Performance

In this section we show that the conditional integrator does not degrade the transient response of the system, in the sense that, as µ → 0, the trajectories of the closed-loop system under the saturated high-gain feedback with conditional integrator (2.8) approach those of the closed-loop system under the saturated high-gain feedback without integrator (2.7). The closed-loop system under (2.8) is represented by (2.4), (2.8), (2.9), while the closed-loop system under (2.7) is given by

$$\dot{x}^* = f(x^*) + G(x^*)\delta(x^*) - \alpha(x^*)G(x^*)\,\phi\!\left(\frac{y^*}{\mu}\right), \tag{2.25}$$

where y* = h(x*). In both cases, the trajectories eventually enter a boundary layer, which is {‖y + γσ‖ ≤ µ} for (2.4), (2.8), (2.9) and {‖y*‖ ≤ µ} for (2.25). Showing closeness of trajectories is relatively easy when the trajectories of both systems are either outside or inside their respective boundary layers. The tricky part is to keep track of the closeness of trajectories as they enter and leave the boundary layers, since the entry and exit times will be different for the two systems. Note that, while trajectories eventually settle inside the boundary layers, there could be a period of time when they go in and out a finite number of times. For convenience, we restrict our analysis to a case where, once the trajectories enter the boundary layers, they cannot leave. This requires us to limit the compact set of the analysis and to change the choice of the function α(x) in the feedback control law (2.7) or (2.8).

Recall from the proof of Theorem 1 that Ω₁ = {V(x) ≤ c₂} ⊂ Ω is a compact set such that Mᵀ(x) + M(x) is positive definite in Ω₁. Let λ_q be defined by (2.23) and k_q = max_{x∈Ω₁}‖M(x)‖. Let β₁(x) be a known continuous function such that

$$\left\|\frac{\partial h}{\partial x}[f(x) + G(x)\psi(x)]\right\| \le \beta_1(x), \quad \forall x \in \Omega_1.$$

The choice of α(x) is changed to

$$\alpha(x) \ge \max\left\{\frac{k}{\lambda}\beta(x),\ \frac{k_q}{\lambda_q}\beta(x) + \frac{1}{\lambda_q}\beta_1(x)\right\} + \alpha_0. \tag{2.26}$$

The choice (2.26) is more conservative than (2.5); hence, the conclusions of Theorem 1 hold for all x(0) ∈ Ω₁. For all (x, σ) in the positively invariant set Ω₁ × {‖σ‖ ≤ µ/γ}, the derivative of V₁ = ½‖y + γσ‖² satisfies

$$\dot{V}_1 = (y + \gamma\sigma)^T(\dot{y} + \gamma\dot{\sigma}) \le -\alpha(x)(y + \gamma\sigma)^T M(x)\,\phi\!\left(\frac{y+\gamma\sigma}{\mu}\right) + [\beta_1(x) + k_q\beta(x) + 2\gamma\mu]\,\|y + \gamma\sigma\|.$$

Outside the boundary layer {‖y + γσ‖ ≤ µ}, we have

$$\dot{V}_1 \le -\lambda_q\left[\alpha_0 - \frac{2\gamma\mu}{\lambda_q}\right]\|y + \gamma\sigma\|.$$

Hence, for sufficiently small µ, all trajectories starting inside Ω₁ × {‖σ‖ ≤ µ/γ} will reach the boundary layer {‖y + γσ‖ ≤ µ} in finite time and stay there for all future time. We can arrive at a similar conclusion for the system (2.25) using ½‖y*‖². To compare the equations of the two systems inside the boundary layers, it is convenient to recognize that high-gain feedback creates slow and fast dynamics, which can be represented in the singularly perturbed form. The following two assumptions are used in analyzing the closeness of trajectories inside the boundary layers.

Assumption 5: There exists a diffeomorphism T(x) such that

$$\frac{\partial T}{\partial x}\,G(x) = \begin{bmatrix} 0\\ I \end{bmatrix}, \quad \forall x \in \Omega_1.$$

The change of variables z = [z₁; z₂] = T(x) transforms the system (2.1) into

$$\dot{z}_1 = f_1(z_1, z_2)$$
$$\dot{z}_2 = f_2(z_1, z_2) + u$$

and y = h(x)|_{x=T⁻¹(z)} can be written as y = h(z₁, z₂).

Assumption 6: The equation 0 = h(z₁, z₂) has a unique solution z₂ = υ(z₁) for all x ∈ Ω₁, and the system ż₁ = f₁(z₁, υ(z₁)) has an exponentially stable equilibrium point at z₁ = 0.

Theorem 2: Suppose Assumptions 1-6 are satisfied and α(x) satisfies (2.26). Let x(t) and σ(t) be the state of (2.4), (2.8), (2.9) with x(0) ∈ Ω₁, ‖σ(0)‖ ≤ µ/γ, and let x*(t) be the state of (2.25) with x*(0) ∈ Ω₁. Suppose that x(0) − x*(0) = O(µ). Then, x(t) − x*(t) = O(µ), ∀t ≥ 0.

Proof: Let t₁ be the first time one of the two systems reaches its boundary layer. For all t ≤ t₁, the trajectories of both systems are outside their respective boundary layers and the systems are represented by

$$\dot{\sigma} = -\gamma\sigma + \mu\,\phi\!\left(\frac{y+\gamma\sigma}{\mu}\right), \qquad \dot{x} = f(x) + G(x)\delta(x) - \alpha(x)G(x)\,\frac{y+\gamma\sigma}{\|y+\gamma\sigma\|} \tag{2.27}$$

$$\dot{x}^* = f(x^*) + G(x^*)\delta(x^*) - \alpha(x^*)G(x^*)\,\frac{y^*}{\|y^*\|}. \tag{2.28}$$

Since σ = O(µ), the difference between the state equations for ẋ and ẋ* is O(µ) for all t ≤ t₁. From ([11], Theorem 3.4), x(t) − x*(t) = O(µ), ∀t ≤ t₁. From σ = O(µ) and x(t₁) − x*(t₁) = O(µ), it can be seen that if the trajectory of one system reaches its boundary layer, the trajectory of the other system will be in some O(µ) neighborhood of its boundary layer. Since the derivatives of both ½‖y + γσ‖² and ½‖y*‖² are strictly negative outside their boundary layers, uniformly in µ, the trajectory of the other system must also reach its boundary layer at a time t₂ = t₁ + O(µ). Hence, x(t) − x*(t) = O(µ), ∀t ∈ [t₁, t₂]. For all t > t₂, the trajectories of both systems (2.27) and (2.28) are inside their respective boundary layers and can be represented by

$$\dot{\sigma} = y, \qquad \dot{x} = f(x) + G(x)\delta(x) - \alpha(x)G(x)\,\frac{y+\gamma\sigma}{\mu} \tag{2.29}$$

$$\dot{x}^* = f(x^*) + G(x^*)\delta(x^*) - \alpha(x^*)G(x^*)\,\frac{y^*}{\mu}. \tag{2.30}$$

With the change of variables z = [z₁; z₂] = T(x) and z* = [z₁*; z₂*] = T(x*), (2.29) and (2.30) can be written in the singularly perturbed forms

$$\dot{\sigma} = h(z_1, z_2), \qquad \dot{z}_1 = f_1(z_1, z_2), \qquad \mu\dot{z}_2 = \mu[f_2(z_1, z_2) + \delta(z_1, z_2)] - \alpha(z_1, z_2)[h(z_1, z_2) + \gamma\sigma] \tag{2.31}$$

$$\dot{z}_1^* = f_1(z_1^*, z_2^*), \qquad \mu\dot{z}_2^* = \mu[f_2(z_1^*, z_2^*) + \delta(z_1^*, z_2^*)] - \alpha(z_1^*, z_2^*)\,h(z_1^*, z_2^*), \tag{2.32}$$

where δ(z₁, z₂) = δ(x)|_{x=T⁻¹(z)}, α(z₁, z₂) = α(x)|_{x=T⁻¹(z)} and h(z₁, z₂) = h(x)|_{x=T⁻¹(z)}. Let z₂ = υ₁(z₁, σ) be the unique solution of h(z₁, z₂) + γσ = 0. The slow and fast models of (2.31) and (2.32) are given by

$$\dot{\sigma} = -\gamma\sigma, \qquad \dot{z}_1 = f_1(z_1, \upsilon_1(z_1, \sigma)) \tag{2.33}$$

$$\frac{dz_2}{d\tau} = -\alpha(z_1, z_2)[h(z_1, z_2) + \gamma\sigma] \tag{2.34}$$

$$\dot{z}_1^* = f_1(z_1^*, \upsilon(z_1^*)) \tag{2.35}$$

$$\frac{dz_2^*}{d\tau} = -\alpha(z_1^*, z_2^*)\,h(z_1^*, z_2^*), \tag{2.36}$$

where τ = t/µ. The differences between the right-hand sides of the slow models (2.33), (2.35) and of the fast models (2.34), (2.36) are O(µ). Furthermore, from Assumptions 3 and 6 it can be verified that the systems (2.33), (2.34), (2.35) and (2.36) are exponentially stable. Hence, from ([11], Theorem 9.1) and the Tikhonov theorem ([11], Theorem 11.2), x(t) − x*(t) = O(µ), ∀t ≥ t₂. Thus, x(t) − x*(t) = O(µ), ∀t ≥ 0.

Example: Consider the second-order system

$$\dot{x}_1 = -x_1 - x_2, \qquad \dot{x}_2 = x_1 + u. \tag{2.37}$$

Assumption 1 is satisfied with ψ(x) = −x₂, V(x) = (x₁² + x₂²)/2, W(x) = x₁² + x₂², and Assumption 2 is satisfied with h(x) = x₂, L(x) = 1. It can be easily verified that Mᵀ(x) + M(x) is positive definite ∀x ∈ R² and that Assumptions 3-6 are also satisfied. A constant matched uncertainty δ(x) = 1 is added to the control u to shift the equilibrium point away from the origin. To recover asymptotic stability of the origin, system (2.37) is augmented with the conditional integrator (2.9), and from (2.8) and (2.26) the control u is given by

$$u = -(2|x_2| + |x_1| + 2)\,\mathrm{sat}\!\left(\frac{x_2 + \sigma}{\mu}\right).$$

The closed-loop system can be written as

$$\dot{\sigma} = -\gamma\sigma + \mu\,\mathrm{sat}\!\left(\frac{x_2+\sigma}{\mu}\right), \qquad \dot{x}_1 = -x_1 - x_2, \qquad \dot{x}_2 = x_1 + u + 1, \qquad u = -(2|x_2| + |x_1| + 2)\,\mathrm{sat}\!\left(\frac{x_2+\sigma}{\mu}\right). \tag{2.38}$$

We will now demonstrate through simulation that, as µ → 0, the trajectories of the system (2.38) approach the trajectories of the closed-loop system

$$\dot{x}_1^* = -x_1^* - x_2^*, \qquad \dot{x}_2^* = x_1^* + u^* + 1, \qquad u^* = -(2|x_2^*| + |x_1^*| + 2)\,\mathrm{sat}\!\left(\frac{x_2^*}{\mu}\right). \tag{2.39}$$

The simulation was run for the initial conditions x₁(0) = x₁*(0) = 0, x₂(0) = x₂*(0) = 20 and σ(0) = 0, for different values of µ.

Figure 2.2. Plot of the error x₁(t) − x₁*(t) for systems (2.38) and (2.39) with µ = 1 (solid), µ = 0.1 (dashed), and plot of s = y + σ (dashed) for µ = 1, with initial conditions x₁(0) = x₁*(0) = 0, x₂(0) = x₂*(0) = 20 and σ(0) = 0.

It can be seen from Figure 2.2 that the trajectories of (2.38) enter its boundary layer and stay there for all future time, and that they approach those of (2.39) as µ → 0. However, if α(x) is given by (2.5), as in Theorem 1, in place of (2.26), then the control u would be

$$u = -(|x_2| + 2)\,\mathrm{sat}\!\left(\frac{x_2 + \sigma}{\mu}\right).$$

The closed-loop systems corresponding to (2.38) and (2.39) would then be

$$\dot{\sigma} = -\gamma\sigma + \mu\,\mathrm{sat}\!\left(\frac{x_2+\sigma}{\mu}\right), \qquad \dot{x}_1 = -x_1 - x_2, \qquad \dot{x}_2 = x_1 + u + 1, \qquad u = -(|x_2| + 2)\,\mathrm{sat}\!\left(\frac{x_2+\sigma}{\mu}\right) \tag{2.40}$$

and

$$\dot{x}_1^* = -x_1^* - x_2^*, \qquad \dot{x}_2^* = x_1^* + u^* + 1, \qquad u^* = -(|x_2^*| + 2)\,\mathrm{sat}\!\left(\frac{x_2^*}{\mu}\right). \tag{2.41}$$

Figure 2.3. Plot of the error x₁(t) − x₁*(t) for systems (2.40) and (2.41) with µ = 1 (solid), µ = 0.1 (dashed), and plot of s = y + σ (dashed) for µ = 1, with initial conditions x₁(0) = x₁*(0) = 0, x₂(0) = x₂*(0) = 15 and σ(0) = 0.

For similar initial conditions x₁(0) = x₁*(0) = 0, x₂(0) = x₂*(0) = 20 and σ(0) = 0, it can be seen from Figure 2.3 that the trajectories of (2.40) enter and leave the boundary layer once before settling there; yet the trajectories of (2.40) still approach those of (2.41) as µ → 0, a property that is not guaranteed by Theorem 2.
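The comparison of Figure 2.2 can be reproduced with a short simulation. The Python sketch below integrates (2.38) and (2.39) with scipy; it uses the control gains as reconstructed above and assumes γ = 1, which the example does not fix, so both choices are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

sat = lambda v: np.clip(v, -1.0, 1.0)

def with_conditional_integrator(t, s, mu, gamma=1.0):
    """Closed-loop system (2.38): state [x1, x2, sigma]."""
    x1, x2, sig = s
    u = -(2*abs(x2) + abs(x1) + 2) * sat((x2 + sig) / mu)
    return [-x1 - x2, x1 + u + 1.0, -gamma*sig + mu*sat((x2 + sig)/mu)]

def without_integrator(t, s, mu):
    """Closed-loop system (2.39): state [x1, x2]."""
    x1, x2 = s
    u = -(2*abs(x2) + abs(x1) + 2) * sat(x2 / mu)
    return [-x1 - x2, x1 + u + 1.0]

mu = 0.1
sol_ci = solve_ivp(with_conditional_integrator, (0, 20), [0.0, 20.0, 0.0], args=(mu,), max_step=1e-3)
sol_no = solve_ivp(without_integrator, (0, 20), [0.0, 20.0], args=(mu,), max_step=1e-3)
# Per Theorem 2, the error x1(t) - x1*(t) should remain O(mu), while (2.38) also
# rejects the constant disturbance asymptotically.
```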

CHAPTER 3

Output Feedback of Minimum-Phase Systems

In this chapter we apply the conditional integrator design of Chapter 2 to output feedback regulation of a class of minimum-phase, input-output linearizable systems. Aside from some technical differences, this is the same problem treated in [14], but the controller design presented here is different from the continuous sliding mode controller of [14].

3.1 Problem Statement

Consider the MIMO nonlinear system

$$\dot{x} = f(x, w) + G(x, w)u \tag{3.1}$$

$$y = l(x, w) \tag{3.2}$$

where u ∈ R^m is the control input, x ∈ R^n is the state vector, y ∈ R^m is the measured output, and the disturbance input w belongs to a compact set W ⊂ R^l. The functions G(x, w), f(x, w), l(x, w) are sufficiently smooth in x on a domain X ⊂ R^n and continuous in w for w ∈ W. We want to regulate y(t) → 0 as t → ∞. We make the following assumptions about (3.1), (3.2).

Assumption 7: For each w ∈ W there is a unique pair (x_ss, u_ss) that depends continuously on w and satisfies

$$0 = f(x_{ss}, w) + G(x_{ss}, w)u_{ss} \tag{3.3}$$

$$0 = l(x_{ss}, w). \tag{3.4}$$

Subtracting (3.3) from (3.1), we have

$$\dot{x} = \bar{f}(x, w) + G(x, w)[u - u_{ss}], \tag{3.5}$$

where f̄(x, w) = f(x, w) − f(x_ss, w) + [G(x, w) − G(x_ss, w)]u_ss. Hence, the regulation problem reduces to the stabilization of the equilibrium point x = x_ss. The system (3.5) is in the form (2.4) with u_ss playing the role of the matched uncertainty δ.

Assumption 8: For all w ∈ W there exists a mapping [η; ζ] = T_w(x), which is a diffeomorphism of X onto its image X_η × X_ζ, that transforms (3.5), (3.2) into the normal form

$$\dot{\eta} = \phi(\eta, \zeta, w)$$
$$\dot{\zeta} = A\zeta + B\{b(\eta, \zeta, w) + D(\eta, \zeta, w)[u - u_{ss}]\}$$
$$y = C\zeta \tag{3.6}$$

and maps the equilibrium point x_ss into (η_ss, 0), with η and ζ belonging to the sets X_η ⊂ R^(n−r) and X_ζ ⊂ R^r, respectively. The m × m matrix D(η, ζ, w) is nonsingular for all (η, ζ, w) ∈ X_η × X_ζ × W. The r × r matrix A and the r × m matrix B are given by (2.13) and (2.14), respectively. The m × r matrix C is given by

$$C = \mathrm{block\ diag}[C_1,\ldots,C_m], \tag{3.7}$$

$$C_i = \begin{bmatrix} 1 & 0 & \cdots & 0 \end{bmatrix}_{1\times r_i}, \tag{3.8}$$

where 1 ≤ i ≤ m and r = r₁ + ⋯ + r_m. The triple (A, B, C) represents m chains of integrators. Conditions for the existence of such a change of variables that transforms (3.5), (3.2) into the normal form (3.6) are discussed in ([8], Chapters 5 and 9).

Assumption 9: For all (η, ζ, w) ∈ X_η × X_ζ × W,

$$D(\eta, \zeta, w) + D^T(\eta, \zeta, w) > 0 \tag{3.9}$$

$$B^T P B\,D(\eta_{ss}, 0, w) + D^T(\eta_{ss}, 0, w)\,B^T P B > 0 \tag{3.10}$$

where P = Pᵀ > 0 is the solution of the Lyapunov equation P(A − BK) + (A − BK)ᵀP = −Q, in which Q is a positive definite matrix and A − BK is Hurwitz.

In [14], Assumption 5, the decoupling matrix D(η, ζ, w) is required to be of the form

$$D(\eta, \zeta, w) = \Gamma(\eta, \zeta, w)\hat{A}(\zeta, w), \tag{3.11}$$

where Â is a known nonsingular matrix and Γ = diag[γ₁, …, γ_m] with γᵢ(·) ≥ γ₀ > 0, 1 ≤ i ≤ m, for all (η, ζ, w) ∈ X_η × X_ζ × W and some positive constant γ₀. In the case of (3.11), without loss of generality a new control input can be defined as v = Â(ζ, w)u, and with respect to the input v the decoupling matrix D(η, ζ, w) = Γ(η, ζ, w) will satisfy (3.9).

Assumption 10: There exist a Lyapunov function V_r(η̃, w), where η̃ = η − η_ss, and a compact set X_w ⊂ X_η × X_ζ × W such that ∀(η, ζ, w) ∈ X_w,

$$\frac{\partial V_r}{\partial\tilde{\eta}}\,\phi(\eta, \zeta, w) \le -k_{15}\|\tilde{\eta}\|^2 + k_{16}\|\tilde{\eta}\|\,\|\zeta\|$$

for some constants k₁₅ > 0, k₁₆ ≥ 0. With φ(η, ζ, w) locally Lipschitz in its arguments, if η = η_ss is an exponentially stable equilibrium point of η̇ = φ(η, 0, w), then the existence of such a Lyapunov function is ensured by the converse Lyapunov theorem ([11], Theorem 4.16). In [14], Assumption 4, exponential stability of η̇ = φ(η, 0, w) is required, but the upper bound on (∂V_r/∂η̃)φ(η, ζ, w) does not need to be quadratic in ‖ζ‖ and ‖η̃‖ over the compact set X_w.

3.2 Partial State Feedback Design

It can be seen that, ∀(η, ζ, w) ∈ X_w, Assumption 1 is satisfied with

$$\psi(\tilde{\eta}, \zeta, w) = D^{-1}(\tilde{\eta} + \eta_{ss}, \zeta, w)[-b(\tilde{\eta} + \eta_{ss}, \zeta, w) - K\zeta]$$

and V(η̃, ζ, w) = V_r(η̃, w) + k₁₇ζᵀPζ, where k₁₇ is chosen such that k₁₇ > k₁₆²/(4k₁₅), and that the function W(η̃, ζ, w) is quadratic in ‖ζ‖ and ‖η̃‖. From (3.9) it can be seen that Assumption 2 is satisfied with hᵀ(ζ) = 2k₁₇ζᵀPB and L(η̃, ζ, w) = D(η̃ + η_ss, ζ, w). Inequality (3.10) satisfies Assumption 3 and, since W(η̃, ζ, w) is quadratic in ‖ζ‖ and ‖η̃‖, Assumption 4 is also satisfied.

Let β(ζ) be a known function such that, ∀(η, ζ, w) ∈ X_w,

$$\|u_{ss} + \psi(\tilde{\eta}, \zeta, w)\| \le \beta(\zeta). \tag{3.12}$$

Then, from (2.8), a partial state feedback control is taken as

$$u = -\alpha(\zeta)\,\phi\!\left(\frac{2k_{17}B^T P\zeta + \gamma\sigma}{\mu}\right), \quad \mu > 0, \tag{3.13}$$

where α(ζ) and σ are given by (2.5) and (2.9), respectively. From Theorem 1 it can be seen that, for the closed-loop system (2.9), (3.6), (3.13) and sufficiently small µ, ζ → 0 as t → ∞. Thus, lim_{t→∞} y(t) = 0.

3.3 Output Feedback Design

The controller (3.13) cannot be implemented as a state feedback controller because the partial state ζ depends on the unknown vector w through the change of variables T_w(x). However, it can be implemented as an output feedback controller that uses the high-gain observer

$$\dot{\hat{\zeta}} = A\hat{\zeta} + H(y - C\hat{\zeta}) \tag{3.14}$$

to estimate ζ, where A is given by (2.13) and the observer gain H is chosen as

$$H = \mathrm{block\ diag}[H_1,\ldots,H_m], \qquad H_i = \begin{bmatrix} \alpha^i_1/\epsilon\\ \alpha^i_2/\epsilon^2\\ \vdots\\ \alpha^i_{r_i-1}/\epsilon^{r_i-1}\\ \alpha^i_{r_i}/\epsilon^{r_i} \end{bmatrix}_{r_i \times 1}$$

in which ɛ is a positive constant and the positive constants α^i_j are chosen such that the roots of

$$s^{r_i} + \alpha^i_1 s^{r_i-1} + \cdots + \alpha^i_{r_i} = 0$$

are in the open left-half plane, for all i = 1, …, m. The output feedback control is given by

$$\dot{\hat{\zeta}} = A\hat{\zeta} + H(y - C\hat{\zeta}), \qquad \dot{\sigma} = -\gamma\sigma + \mu\,\phi\!\left(\frac{2k_{17}B^T P\hat{\zeta} + \gamma\sigma}{\mu}\right), \qquad u = -\alpha(\hat{\zeta})\,\phi\!\left(\frac{2k_{17}B^T P\hat{\zeta} + \gamma\sigma}{\mu}\right). \tag{3.15}$$
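As an illustration of the observer structure, the Python sketch below builds the gain H of (3.14) from a chosen ɛ and a set of Hurwitz polynomial coefficients, and advances the observer with a forward-Euler step. The coefficient values, the step size, and the explicit Euler discretization are illustrative assumptions, not choices made in the thesis.

```python
import numpy as np
from scipy.linalg import block_diag

def high_gain_observer_gain(alphas, eps):
    """Gain H of (3.14) for m chains: alphas[i] = (alpha_1^i, ..., alpha_{r_i}^i),
    chosen so that s^{r_i} + alpha_1^i s^{r_i-1} + ... + alpha_{r_i}^i is Hurwitz."""
    H_blocks = []
    for a in alphas:
        r = len(a)
        H_blocks.append(np.array([a[j] / eps**(j + 1) for j in range(r)]).reshape(r, 1))
    return block_diag(*H_blocks)   # r x m block-diagonal gain

# Example: two chains with r_1 = r_2 = 2 and characteristic polynomial s^2 + 2s + 1.
H = high_gain_observer_gain([[2.0, 1.0], [2.0, 1.0]], eps=0.01)

def observer_step(zeta_hat, y, A, C, H, dt):
    """One forward-Euler step of the high-gain observer (3.14)."""
    return zeta_hat + dt * (A @ zeta_hat + H @ (y - C @ zeta_hat))
```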

If the closed-loop system under the state feedback control (3.13) has an exponentially stable equilibrium point at the origin, then, for sufficiently small ɛ, the closed-loop system under the output feedback control (3.15) will have an exponentially stable equilibrium point [2]. If the equilibrium point is asymptotically, but not exponentially, stable, we need to place an upper bound on the modeling error

$$\delta_0(\eta, \zeta, w) = b(\eta, \zeta, w) + D(\eta, \zeta, w)[u - u_{ss}].$$

In the latter case, for each w, the following properties hold. The closed-loop system under state feedback can be transformed into the form ([11], Theorem 4.7)

$$\dot{\tilde{\eta}} = A_1\tilde{\eta} + g_1(\tilde{\eta}, \zeta, w)$$
$$\dot{\zeta} = A_2\zeta + g_2(\tilde{\eta}, \zeta, w), \tag{3.16}$$

where A₁ has all its eigenvalues with zero real parts and A₂ is Hurwitz. Also, ([11], Theorem 8.1) guarantees the existence of a continuously differentiable center manifold ζ = h(η̃, w) for all ‖η̃‖ ≤ d_w, for some d_w > 0. From ([11], Corollary 8.2), the origin of the reduced system

$$\dot{\tilde{\eta}} = A_1\tilde{\eta} + g_1(\tilde{\eta}, h(\tilde{\eta}, w), w) = g_0(\tilde{\eta}, w) \tag{3.17}$$

is asymptotically stable, and the converse Lyapunov theorem ([11], Theorem 4.16) guarantees the existence of a continuously differentiable function V₃(η̃, w), defined on B_η(0, r), a ball of radius r around η̃ = 0, such that ∀η̃ ∈ B_η(0, r),

$$\frac{\partial V_3}{\partial\tilde{\eta}}\,g_0(\tilde{\eta}, w) \le -\alpha_0(\|\tilde{\eta}\|),$$

where α₀ is a class K function, possibly dependent on w. For the above properties to hold uniformly in w ∈ W, we make the following assumptions.

Assumption 11: The closed-loop system under state feedback can be transformed into the form (3.16), where A₁ has all its eigenvalues with zero real parts and A₂ is Hurwitz, uniformly in w ∈ W.

Assumption 12: There exists a continuously differentiable center manifold ζ = h(η̃, w) for all ‖η̃‖ ≤ d_w, for some d_w > 0, such that the origin of the system (3.17) is asymptotically stable, uniformly in w ∈ W.

Assumption 13: There exists a continuously differentiable function V₃(η̃, w), defined on B_η(0, r), a ball of radius r around η̃ = 0, such that ∀η̃ ∈ B_η(0, r), ∀w ∈ W,

$$\frac{\partial V_3}{\partial\tilde{\eta}}\,g_0(\tilde{\eta}, w) \le -\alpha_0(\|\tilde{\eta}\|),$$

where α₀ is a class K function, possibly dependent on w.

Let δ₁(η̃, w) be the projection of the modeling error onto the center manifold, i.e.,

$$\delta_0(\eta, \zeta, w) = \delta_1(\tilde{\eta}, w) + \delta_2(\tilde{\eta}, \zeta, w),$$

where δ₂(η̃, 0, w) = 0. We now make the following assumption on the functions V₃(η̃, w) and δ₁(η̃, w).

Assumption 14: ∀η̃ ∈ B_η(0, r), ∀w ∈ W,

$$\|\delta_1(\tilde{\eta}, w)\| \le c_0\,\alpha_0^a(\|\tilde{\eta}\|) \qquad \text{and} \qquad \left\|\frac{\partial V_3}{\partial\tilde{\eta}}\right\| \le c_1\,\alpha_0^b(\|\tilde{\eta}\|),$$

where a, b < 1 are positive constants such that a + b = 1, c₀ ≥ 0 and c₁ > 0.

Under Assumption 14, from [2], the closed-loop system under output feedback will have an asymptotically stable equilibrium point for sufficiently small ɛ. Furthermore, from [2], if R is the region of attraction under state feedback control, then for any compact set M in the interior of R and any compact set Q ⊂ R^r of initial estimates ζ̂(0), the set M × Q is included in the region of attraction under output feedback control for sufficiently small ɛ, and trajectories under output feedback approach those under state feedback as ɛ → 0.

3.4 Comparison of Controllers

In this section we compare the controller design presented in Section 3.2 to that of [14] through an example.

Example: Consider the two-input, third-order system

$$\dot{x}_1 = x_2, \qquad \dot{x}_2 = a_1 x_3^2 + u_1, \qquad \dot{x}_3 = a_2 x_1^2 + u_2, \qquad y = \begin{bmatrix} x_1\\ x_3 \end{bmatrix}, \tag{3.18}$$

where a₁ and a₂ are unknown constants with |a₁| ≤ 1 and |a₂| ≤ 1. It can be seen that Assumptions 1 and 2 are satisfied with

$$\psi(x) = \begin{bmatrix} -a_1 x_3^2 - x_1 - 2x_2\\ -a_2 x_1^2 - x_3 \end{bmatrix}, \qquad h(x) = \begin{bmatrix} x_1 + x_2\\ 500\,x_3 \end{bmatrix}, \tag{3.19}$$

$$V(x) = 1.5x_1^2 + 0.5x_2^2 + x_1 x_2 + 250\,x_3^2, \qquad W(x) = x_1^2 + x_2^2 + 500\,x_3^2, \qquad L(x) = I.$$

From (3.12) and (3.19) a function α(x) can be chosen as

$$\alpha(x) = \sqrt{(|x_1| + 2|x_2| + x_3^2)^2 + (x_1^2 + |x_3|)^2} + 1$$

and from (2.8) the control u can be taken as

$$u = \begin{bmatrix} u_1\\ u_2 \end{bmatrix} = -\alpha(x)\,\phi\!\left(\frac{h(x) + \sigma}{\mu}\right), \qquad \dot{\sigma} = -\sigma + \mu\,\phi\!\left(\frac{h(x) + \sigma}{\mu}\right), \tag{3.20}$$

where σ = [σ₁; σ₂].

We now design a continuous sliding mode controller as presented in [14]. The first step in a sliding mode design is to specify a sliding surface on which the sliding motion occurs. The sliding surfaces are taken as

$$s_1 = x_1 + x_2 + \sigma_1, \qquad s_2 = x_3 + \sigma_2,$$

where σᵢ is the output of

$$\dot{\sigma}_i = -\sigma_i + \mu_i\,\mathrm{sat}\!\left(\frac{s_i}{\mu_i}\right), \qquad \sigma_i(0) \in [-\mu_i, \mu_i], \qquad 1 \le i \le 2,$$

and the µᵢ are positive constants to be chosen. From ([14], (12)), a control u can be taken as

$$u = \begin{bmatrix} u_1\\ u_2 \end{bmatrix} = \begin{bmatrix} -(x_3^2 + |x_2| + 1)\,\mathrm{sat}(s_1/\mu_1)\\ -(x_1^2 + 1)\,\mathrm{sat}(s_2/\mu_2) \end{bmatrix}. \tag{3.21}$$

The simulation was run for the initial conditions x₁(0) = 3, x₂(0) = 15, x₃(0) = 1, σ₁(0) = 0 and σ₂(0) = 0, with µ = µ₁ = µ₂ = 0.1. The difference in performance between the controllers (3.20) and (3.21) can be seen in Figure 3.1. The controller (3.20) regulates the states x₁ and x₃ faster than the sliding mode controller (3.21). The control inputs u₁ and u₂ are shown in Figure 3.2. From this example it is clear that the controller design of Section 3.2 is a good alternative to the sliding mode controller design presented in [14] and can give better performance in some cases.
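For reference, the controller (3.20) for system (3.18) can be simulated with the short Python sketch below. It uses h(x) and α(x) as reconstructed above, so it should be read as a sketch under those assumptions; the parameter values a₁ = a₂ = 1 are an illustrative choice consistent with |aᵢ| ≤ 1.

```python
import numpy as np
from scipy.integrate import solve_ivp

def phi(v):
    n = np.linalg.norm(v)
    return v if n <= 1.0 else v / n

def closed_loop(t, s, a1, a2, mu):
    """System (3.18) under the conditional-integrator controller (3.20), with gamma = 1."""
    x1, x2, x3, s1, s2 = s
    sigma = np.array([s1, s2])
    h = np.array([x1 + x2, 500.0 * x3])
    alpha = np.sqrt((abs(x1) + 2*abs(x2) + x3**2)**2 + (x1**2 + abs(x3))**2) + 1.0
    u = -alpha * phi((h + sigma) / mu)
    sigma_dot = -sigma + mu * phi((h + sigma) / mu)
    return [x2, a1*x3**2 + u[0], a2*x1**2 + u[1], sigma_dot[0], sigma_dot[1]]

sol = solve_ivp(closed_loop, (0, 10), [3.0, 15.0, 1.0, 0.0, 0.0],
                args=(1.0, 1.0, 0.1), max_step=1e-3)
```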

Figure 3.1. Plots of x₁(t), x₂(t) and x₃(t) for the controller design (3.20) (dashed) and the controller design using sliding mode control (solid).

Figure 3.2. Plots of u₁(t) and u₂(t) for the controller design (3.20) (dashed) and the controller design using sliding mode control (solid).

CHAPTER 4

Application to the Pendubot

We illustrate the improvement in the transient response obtained with conditional integrators by an application to the Pendubot.

4.1 The Pendubot

The Pendubot is an electro-mechanical system consisting of two rigid aluminum links interconnected by revolute joints. The first joint is actuated by a DC motor while the second joint is unactuated, making the Pendubot an underactuated mechanism. The Pendubot is, in some ways, similar to the inverted pendulum on a cart, where the linear motion of the cart is used to balance the pendulum. The Pendubot instead uses the rotational motion of the first link to balance the second link. In this regard, the Pendubot is also similar to the more recent rotational inverted pendulum, invented by Professor Furuta of the Tokyo Institute of Technology [6].

Figure 4.1. Front and side view drawing of the Pendubot.

In the rotational inverted pendulum, the axis of rotation of the pendulum is perpendicular to the axis of rotation of the first link. The Pendubot has both joint axes parallel, which results in some additional rotational coupling between the degrees of freedom. This additional coupling, which is not found in either the linear inverted pendulum or the rotational inverted pendulum, makes the Pendubot more interesting and more challenging from both a kinematic and a dynamic standpoint. For example, in both the linear inverted pendulum and the rotational inverted pendulum, the Taylor series linearization around any operating point results in a controllable linear system. Moreover, the linearized model (A, B, C) is the same at all points. In the Pendubot, the linearization is operating-point-dependent; in other words, the linearization (A, B, C) changes at each configuration, and there are even special configurations where the linearization is uncontrollable.

The Pendubot possesses many attractive features for control research and education. It can be used to investigate system identification, linear control, nonlinear control, optimal control, learning control, robust and adaptive control, fuzzy logic control, intelligent control, hybrid and switching control, gain scheduling and other control paradigms. One can program the Pendubot for swing-up control, balancing, regulation, tracking, identification, gain scheduling, disturbance rejection and friction compensation, to name just a few of the applications. Some of these applications are described in [18], [15] and [16]. The maker of the Pendubot is Mechatronics Systems, Inc. Figure 4.1 shows the front and side views of the Pendubot.

4.2 Mathematical Model

The equations of motion of the Pendubot can be found using Lagrangian dynamics [17]. In matrix form the equation is

$$D(q)\ddot{q} + C(q,\dot{q})\dot{q} + g(q) = u \tag{4.1}$$

where

$$D(q) = \begin{bmatrix} \theta_1 + \theta_2 + 2\theta_3\cos(q_2) & \theta_2 + \theta_3\cos(q_2)\\ \theta_2 + \theta_3\cos(q_2) & \theta_2 \end{bmatrix},$$

$$C(q,\dot{q}) = \begin{bmatrix} -\theta_3\sin(q_2)\,\dot{q}_2 & -\theta_3\sin(q_2)(\dot{q}_1 + \dot{q}_2)\\ \theta_3\sin(q_2)\,\dot{q}_1 & 0 \end{bmatrix},$$

$$g(q) = \begin{bmatrix} \theta_4 g\cos(q_1) + \theta_5 g\cos(q_1 + q_2)\\ \theta_5 g\cos(q_1 + q_2) \end{bmatrix}, \qquad q = \begin{bmatrix} q_1\\ q_2 \end{bmatrix}, \qquad u = \begin{bmatrix} \tau\\ 0 \end{bmatrix},$$

and

$$\theta_1 = m_1 l_{c1}^2 + m_2 l_1^2 + I_1, \qquad \theta_2 = m_2 l_{c2}^2 + I_2, \qquad \theta_3 = m_2 l_1 l_{c2}, \qquad \theta_4 = m_1 l_{c1} + m_2 l_1, \qquad \theta_5 = m_2 l_{c2},$$

where
m₁ : the total mass of link one
l₁ : the length of link one
l_{c1} : the distance to the center of mass of link one
I₁ : the moment of inertia of link one about its centroid
m₂ : the total mass of link two
l_{c2} : the distance to the center of mass of link two
I₂ : the moment of inertia of link two about its centroid
g : the acceleration due to gravity
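The model (4.1) is easy to evaluate numerically. The Python sketch below (an illustrative helper, with hypothetical function names) builds D(q), C(q, q̇) and g(q) as written above and computes the joint accelerations; the parameters θ₁, …, θ₅ are passed in by the caller.

```python
import numpy as np

def pendubot_matrices(q, qdot, theta, g=9.81):
    """Inertia, Coriolis/centrifugal and gravity terms of (4.1), theta = (theta1..theta5)."""
    t1, t2, t3, t4, t5 = theta
    q1, q2 = q
    dq1, dq2 = qdot
    D = np.array([[t1 + t2 + 2*t3*np.cos(q2), t2 + t3*np.cos(q2)],
                  [t2 + t3*np.cos(q2),        t2]])
    C = np.array([[-t3*np.sin(q2)*dq2, -t3*np.sin(q2)*(dq1 + dq2)],
                  [ t3*np.sin(q2)*dq1,  0.0]])
    grav = np.array([t4*g*np.cos(q1) + t5*g*np.cos(q1 + q2),
                     t5*g*np.cos(q1 + q2)])
    return D, C, grav

def pendubot_accel(q, qdot, tau, theta):
    """q'' = D(q)^{-1} [u - C(q,q')q' - g(q)], with u = [tau, 0]^T (second joint unactuated)."""
    D, C, grav = pendubot_matrices(q, qdot, theta)
    u = np.array([tau, 0.0])
    return np.linalg.solve(D, u - C @ qdot - grav)
```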

The Pendubot parameters are θ₁ = , θ₂ = 0.016, θ₃ = , θ₄ = and θ₅ = 0.063. The matrix D(q) is invertible for all q ∈ R²; hence, the state equations can be written as

$$\ddot{q} = D^{-1}(q)\,[u - C(q,\dot{q})\dot{q} - g(q)]$$

$$x_1 = q_1, \quad x_2 = \dot{q}_1, \quad x_3 = q_2, \quad x_4 = \dot{q}_2$$

$$\dot{x}_1 = x_2, \quad \dot{x}_2 = \ddot{q}_1, \quad \dot{x}_3 = x_4, \quad \dot{x}_4 = \ddot{q}_2.$$

4.3 The Equilibrium Manifold

For each constant value of τ the Pendubot will have a continuum of equilibrium configurations. Since at equilibrium we have q̇₁ = q̈₁ = q̇₂ = q̈₂ = 0, it follows that

$$\theta_4\cos(q_1) + \theta_5\cos(q_1 + q_2) = \frac{\tau}{g}, \qquad \theta_5\cos(q_1 + q_2) = 0,$$

and the Pendubot will balance at

$$q_1 = \cos^{-1}\!\left(\frac{\tau}{\theta_4 g}\right), \qquad q_2 = n\frac{\pi}{2} - q_1, \quad n = 1, 3.$$
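The equilibrium manifold above maps directly into a small helper. The Python sketch below is illustrative; the value of θ₄ used in the example call is hypothetical, since the numerical Pendubot parameter does not appear explicitly here.

```python
import numpy as np

def pendubot_equilibrium(tau, theta4, g=9.81, n=1):
    """Equilibrium configuration of Section 4.3 for a constant torque tau:
    q1 = arccos(tau / (theta4 * g)), q2 = n*pi/2 - q1, with n = 1 or 3."""
    q1 = np.arccos(tau / (theta4 * g))
    q2 = n * np.pi / 2 - q1
    return q1, q2

theta4 = 0.21  # illustrative value only; not the thesis's numerical parameter
q1, q2 = pendubot_equilibrium(tau=0.5, theta4=theta4, n=1)
```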

4.4 Controlling the Pendubot

The control strategy developed for the Pendubot in [18], [5] is divided into two parts: a balancing control, which balances the Pendubot about the desired equilibrium point, and a swing-up control, which swings the Pendubot up from the downward configuration to the desired configuration.

Balancing Control

The balancing problem may be solved by linearizing the equation of motion about an operating point using a Taylor series expansion and designing a linear state feedback controller. As we saw before, the Pendubot has an equilibrium manifold which is a continuum of balancing positions. The linearized system becomes uncontrollable at q₁ = 0, π, as illustrated in Figure 4.2, which shows controllable and uncontrollable positions of the arm.

Figure 4.2. The Pendubot arm at a: controllable position, b: uncontrollable position.

Swing-Up Control

The problem of swinging the Pendubot up from the downward position to the inverted position is an interesting and challenging nonlinear control problem. From the equations of motion we have

$$d_{11}\ddot{q}_1 + d_{12}\ddot{q}_2 + h_1 + \phi_1 = \tau \tag{4.2}$$

$$d_{21}\ddot{q}_1 + d_{22}\ddot{q}_2 + h_2 + \phi_2 = 0, \tag{4.3}$$

where

$$D(q) = \begin{bmatrix} d_{11} & d_{12}\\ d_{21} & d_{22} \end{bmatrix}, \qquad C(q,\dot{q})\dot{q} = \begin{bmatrix} h_1\\ h_2 \end{bmatrix}, \qquad g(q) = \begin{bmatrix} \phi_1\\ \phi_2 \end{bmatrix}.$$

Solving for q̈₂ from (4.3) and substituting into (4.2), we obtain

$$\ddot{q}_1\left(d_{11} - \frac{d_{12}d_{21}}{d_{22}}\right) + \left(h_1 - \frac{d_{12}h_2}{d_{22}}\right) + \left(\phi_1 - \frac{d_{12}\phi_2}{d_{22}}\right) = \tau.$$

Taking the control variable τ as

$$\tau = \left(d_{11} - \frac{d_{12}d_{21}}{d_{22}}\right)\upsilon_1 + \left(h_1 - \frac{d_{12}h_2}{d_{22}}\right) + \left(\phi_1 - \frac{d_{12}\phi_2}{d_{22}}\right)$$

results in

$$\ddot{q}_1 = \upsilon_1, \qquad d_{22}\ddot{q}_2 + h_2 + \phi_2 = -d_{21}\upsilon_1. \tag{4.4}$$

The control υ₁ is taken as

$$\upsilon_1 = k_p(q_1^d - q_1) + k_d(\dot{q}_1^d - \dot{q}_1)$$

to track the reference position r = q₁ᵈ, ṙ = q̇₁ᵈ = 0, where the positive constants k_p and k_d are the control gains. Defining the tracking errors as

$$e_1 = q_1^d - q_1, \qquad e_2 = \dot{q}_1^d - \dot{q}_1, \qquad \eta_1 = q_2, \qquad \eta_2 = \dot{q}_2,$$

the system (4.4) can be rewritten as

$$\dot{e}_1 = e_2, \qquad \dot{e}_2 = -k_p e_1 - k_d e_2$$

$$\dot{\eta}_1 = \eta_2, \qquad \dot{\eta}_2 = -\frac{1}{d_{22}}(h_2 + \phi_2) - \frac{d_{21}}{d_{22}}\left[k_p(q_1^d - q_1) + k_d(\dot{q}_1^d - \dot{q}_1)\right], \qquad y = e_1.$$

The tracking-error part can be written as

$$\dot{e} = \begin{bmatrix} 0 & 1\\ -k_p & -k_d \end{bmatrix} e = Ee,$$

where e = [e₁, e₂]ᵀ and k_p, k_d are chosen to make E Hurwitz. On the sliding surface e = 0, the dynamics are given by

$$\dot{\eta}_1 = \eta_2 \tag{4.5}$$

$$\dot{\eta}_2 = -\frac{1}{d_{22}}(h_2 + \phi_2), \tag{4.6}$$

which represent the zero dynamics with respect to the output y = e₁. We see from (4.5) and (4.6) that the zero dynamics are just the dynamics of the unactuated arm, which has a periodic orbit. While the error e(t) converges to zero, the steady-state behavior of the first link converges exponentially to q₁ᵈ and the second link oscillates about (−π, 0). The job of the swing-up control is then to excite the zero dynamics sufficiently, through the motion of link 1, so that the pendulum swings close to its unstable equilibrium. When the pendulum is close to the desired equilibrium, the controller is switched from the partial feedback linearization controller to the linear balancing controller.
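The swing-up torque defined above can be computed directly from the dynamics. The Python sketch below is an illustration under the same assumptions as the earlier helpers; it reuses the hypothetical pendubot_matrices function sketched in Section 4.2.

```python
import numpy as np

def swing_up_torque(q, qdot, q1_des, kp, kd, theta, g=9.81):
    """Partial feedback linearization swing-up control of Section 4.4:
    tau = (d11 - d12*d21/d22)*v1 + (h1 - d12*h2/d22) + (phi1 - d12*phi2/d22),
    with v1 = kp*(q1_des - q1) + kd*(0 - q1dot)."""
    D, C, grav = pendubot_matrices(q, qdot, theta, g)
    h = C @ qdot                       # Coriolis/centrifugal vector [h1, h2]
    d11, d12 = D[0]
    d21, d22 = D[1]
    v1 = kp * (q1_des - q[0]) + kd * (0.0 - qdot[0])
    return (d11 - d12*d21/d22) * v1 + (h[0] - d12*h[1]/d22) + (grav[0] - d12*grav[1]/d22)
```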


More information

MCE693/793: Analysis and Control of Nonlinear Systems

MCE693/793: Analysis and Control of Nonlinear Systems MCE693/793: Analysis and Control of Nonlinear Systems Systems of Differential Equations Phase Plane Analysis Hanz Richter Mechanical Engineering Department Cleveland State University Systems of Nonlinear

More information

Stabilization and Passivity-Based Control

Stabilization and Passivity-Based Control DISC Systems and Control Theory of Nonlinear Systems, 2010 1 Stabilization and Passivity-Based Control Lecture 8 Nonlinear Dynamical Control Systems, Chapter 10, plus handout from R. Sepulchre, Constructive

More information

Nonlinear Control of Electrodynamic Tether in Equatorial or Somewhat Inclined Orbits

Nonlinear Control of Electrodynamic Tether in Equatorial or Somewhat Inclined Orbits Proceedings of the 15th Mediterranean Conference on Control & Automation, July 7-9, 7, Athens - Greece T-5 Nonlinear Control of Electrodynamic Tether in Equatorial or Somewhat Inclined Martin B. Larsen

More information

ANALYSIS OF THE USE OF LOW-PASS FILTERS WITH HIGH-GAIN OBSERVERS. Stephanie Priess

ANALYSIS OF THE USE OF LOW-PASS FILTERS WITH HIGH-GAIN OBSERVERS. Stephanie Priess ANALYSIS OF THE USE OF LOW-PASS FILTERS WITH HIGH-GAIN OBSERVERS By Stephanie Priess A THESIS Submitted to Michigan State University in partial fulfillment of the requirements for the degree of Electrical

More information

Nonlinear Control Lecture 4: Stability Analysis I

Nonlinear Control Lecture 4: Stability Analysis I Nonlinear Control Lecture 4: Stability Analysis I Farzaneh Abdollahi Department of Electrical Engineering Amirkabir University of Technology Fall 2010 Farzaneh Abdollahi Nonlinear Control Lecture 4 1/70

More information

Global stabilization of feedforward systems with exponentially unstable Jacobian linearization

Global stabilization of feedforward systems with exponentially unstable Jacobian linearization Global stabilization of feedforward systems with exponentially unstable Jacobian linearization F Grognard, R Sepulchre, G Bastin Center for Systems Engineering and Applied Mechanics Université catholique

More information

Nonlinear Control. Nonlinear Control Lecture # 2 Stability of Equilibrium Points

Nonlinear Control. Nonlinear Control Lecture # 2 Stability of Equilibrium Points Nonlinear Control Lecture # 2 Stability of Equilibrium Points Basic Concepts ẋ = f(x) f is locally Lipschitz over a domain D R n Suppose x D is an equilibrium point; that is, f( x) = 0 Characterize and

More information

Introduction to Nonlinear Control Lecture # 4 Passivity

Introduction to Nonlinear Control Lecture # 4 Passivity p. 1/6 Introduction to Nonlinear Control Lecture # 4 Passivity È p. 2/6 Memoryless Functions ¹ y È Ý Ù È È È È u (b) µ power inflow = uy Resistor is passive if uy 0 p. 3/6 y y y u u u (a) (b) (c) Passive

More information

Nonlinear Control Systems

Nonlinear Control Systems Nonlinear Control Systems António Pedro Aguiar pedro@isr.ist.utl.pt 5. Input-Output Stability DEEC PhD Course http://users.isr.ist.utl.pt/%7epedro/ncs2012/ 2012 1 Input-Output Stability y = Hu H denotes

More information

Adaptive Tracking and Parameter Estimation with Unknown High-Frequency Control Gains: A Case Study in Strictification

Adaptive Tracking and Parameter Estimation with Unknown High-Frequency Control Gains: A Case Study in Strictification Adaptive Tracking and Parameter Estimation with Unknown High-Frequency Control Gains: A Case Study in Strictification Michael Malisoff, Louisiana State University Joint with Frédéric Mazenc and Marcio

More information

Lecture 9 Nonlinear Control Design. Course Outline. Exact linearization: example [one-link robot] Exact Feedback Linearization

Lecture 9 Nonlinear Control Design. Course Outline. Exact linearization: example [one-link robot] Exact Feedback Linearization Lecture 9 Nonlinear Control Design Course Outline Eact-linearization Lyapunov-based design Lab Adaptive control Sliding modes control Literature: [Khalil, ch.s 13, 14.1,14.] and [Glad-Ljung,ch.17] Lecture

More information

MCE/EEC 647/747: Robot Dynamics and Control. Lecture 12: Multivariable Control of Robotic Manipulators Part II

MCE/EEC 647/747: Robot Dynamics and Control. Lecture 12: Multivariable Control of Robotic Manipulators Part II MCE/EEC 647/747: Robot Dynamics and Control Lecture 12: Multivariable Control of Robotic Manipulators Part II Reading: SHV Ch.8 Mechanical Engineering Hanz Richter, PhD MCE647 p.1/14 Robust vs. Adaptive

More information

Nonlinear Control. Nonlinear Control Lecture # 24 State Feedback Stabilization

Nonlinear Control. Nonlinear Control Lecture # 24 State Feedback Stabilization Nonlinear Control Lecture # 24 State Feedback Stabilization Feedback Lineaization What information do we need to implement the control u = γ 1 (x)[ ψ(x) KT(x)]? What is the effect of uncertainty in ψ,

More information

Solution of Additional Exercises for Chapter 4

Solution of Additional Exercises for Chapter 4 1 1. (1) Try V (x) = 1 (x 1 + x ). Solution of Additional Exercises for Chapter 4 V (x) = x 1 ( x 1 + x ) x = x 1 x + x 1 x In the neighborhood of the origin, the term (x 1 + x ) dominates. Hence, the

More information

MCE693/793: Analysis and Control of Nonlinear Systems

MCE693/793: Analysis and Control of Nonlinear Systems MCE693/793: Analysis and Control of Nonlinear Systems Input-Output and Input-State Linearization Zero Dynamics of Nonlinear Systems Hanz Richter Mechanical Engineering Department Cleveland State University

More information

Experimental Results for Almost Global Asymptotic and Locally Exponential Stabilization of the Natural Equilibria of a 3D Pendulum

Experimental Results for Almost Global Asymptotic and Locally Exponential Stabilization of the Natural Equilibria of a 3D Pendulum Proceedings of the 26 American Control Conference Minneapolis, Minnesota, USA, June 4-6, 26 WeC2. Experimental Results for Almost Global Asymptotic and Locally Exponential Stabilization of the Natural

More information

Output Regulation of Uncertain Nonlinear Systems with Nonlinear Exosystems

Output Regulation of Uncertain Nonlinear Systems with Nonlinear Exosystems Output Regulation of Uncertain Nonlinear Systems with Nonlinear Exosystems Zhengtao Ding Manchester School of Engineering, University of Manchester Oxford Road, Manchester M3 9PL, United Kingdom zhengtaoding@manacuk

More information

The Acrobot and Cart-Pole

The Acrobot and Cart-Pole C H A P T E R 3 The Acrobot and Cart-Pole 3.1 INTRODUCTION A great deal of work in the control of underactuated systems has been done in the context of low-dimensional model systems. These model systems

More information

ROTATIONAL DOUBLE INVERTED PENDULUM

ROTATIONAL DOUBLE INVERTED PENDULUM ROTATIONAL DOUBLE INVERTED PENDULUM Thesis Submitted to The School of Engineering of the UNIVERSITY OF DAYTON In Partial Fulfillment of the Requirements for The Degree of Master of Science in Electrical

More information

Robust Control of Robot Manipulator by Model Based Disturbance Attenuation

Robust Control of Robot Manipulator by Model Based Disturbance Attenuation IEEE/ASME Trans. Mechatronics, vol. 8, no. 4, pp. 511-513, Nov./Dec. 2003 obust Control of obot Manipulator by Model Based Disturbance Attenuation Keywords : obot manipulators, MBDA, position control,

More information

NMT EE 589 & UNM ME 482/582 ROBOT ENGINEERING. Dr. Stephen Bruder NMT EE 589 & UNM ME 482/582

NMT EE 589 & UNM ME 482/582 ROBOT ENGINEERING. Dr. Stephen Bruder NMT EE 589 & UNM ME 482/582 NMT EE 589 & UNM ME 482/582 ROBOT ENGINEERING NMT EE 589 & UNM ME 482/582 Simplified drive train model of a robot joint Inertia seen by the motor Link k 1 I I D ( q) k mk 2 kk Gk Torque amplification G

More information

Prof. Krstic Nonlinear Systems MAE281A Homework set 1 Linearization & phase portrait

Prof. Krstic Nonlinear Systems MAE281A Homework set 1 Linearization & phase portrait Prof. Krstic Nonlinear Systems MAE28A Homework set Linearization & phase portrait. For each of the following systems, find all equilibrium points and determine the type of each isolated equilibrium. Use

More information

Control of Mobile Robots

Control of Mobile Robots Control of Mobile Robots Regulation and trajectory tracking Prof. Luca Bascetta (luca.bascetta@polimi.it) Politecnico di Milano Dipartimento di Elettronica, Informazione e Bioingegneria Organization and

More information

Control of Electromechanical Systems

Control of Electromechanical Systems Control of Electromechanical Systems November 3, 27 Exercise Consider the feedback control scheme of the motor speed ω in Fig., where the torque actuation includes a time constant τ A =. s and a disturbance

More information

Linearization problem. The simplest example

Linearization problem. The simplest example Linear Systems Lecture 3 1 problem Consider a non-linear time-invariant system of the form ( ẋ(t f x(t u(t y(t g ( x(t u(t (1 such that x R n u R m y R p and Slide 1 A: f(xu f(xu g(xu and g(xu exist and

More information

ASTATISM IN NONLINEAR CONTROL SYSTEMS WITH APPLICATION TO ROBOTICS

ASTATISM IN NONLINEAR CONTROL SYSTEMS WITH APPLICATION TO ROBOTICS dx dt DIFFERENTIAL EQUATIONS AND CONTROL PROCESSES N 1, 1997 Electronic Journal, reg. N P23275 at 07.03.97 http://www.neva.ru/journal e-mail: diff@osipenko.stu.neva.ru Control problems in nonlinear systems

More information

State Regulator. Advanced Control. design of controllers using pole placement and LQ design rules

State Regulator. Advanced Control. design of controllers using pole placement and LQ design rules Advanced Control State Regulator Scope design of controllers using pole placement and LQ design rules Keywords pole placement, optimal control, LQ regulator, weighting matrixes Prerequisites Contact state

More information

Stabilization of a Specified Equilibrium in the Inverted Equilibrium Manifold of the 3D Pendulum

Stabilization of a Specified Equilibrium in the Inverted Equilibrium Manifold of the 3D Pendulum Proceedings of the 27 American Control Conference Marriott Marquis Hotel at Times Square New York City, USA, July 11-13, 27 ThA11.6 Stabilization of a Specified Equilibrium in the Inverted Equilibrium

More information

Nonlinear Control. Nonlinear Control Lecture # 25 State Feedback Stabilization

Nonlinear Control. Nonlinear Control Lecture # 25 State Feedback Stabilization Nonlinear Control Lecture # 25 State Feedback Stabilization Backstepping η = f a (η)+g a (η)ξ ξ = f b (η,ξ)+g b (η,ξ)u, g b 0, η R n, ξ, u R Stabilize the origin using state feedback View ξ as virtual

More information

Video 8.1 Vijay Kumar. Property of University of Pennsylvania, Vijay Kumar

Video 8.1 Vijay Kumar. Property of University of Pennsylvania, Vijay Kumar Video 8.1 Vijay Kumar 1 Definitions State State equations Equilibrium 2 Stability Stable Unstable Neutrally (Critically) Stable 3 Stability Translate the origin to x e x(t) =0 is stable (Lyapunov stable)

More information

Output Feedback and State Feedback. EL2620 Nonlinear Control. Nonlinear Observers. Nonlinear Controllers. ẋ = f(x,u), y = h(x)

Output Feedback and State Feedback. EL2620 Nonlinear Control. Nonlinear Observers. Nonlinear Controllers. ẋ = f(x,u), y = h(x) Output Feedback and State Feedback EL2620 Nonlinear Control Lecture 10 Exact feedback linearization Input-output linearization Lyapunov-based control design methods ẋ = f(x,u) y = h(x) Output feedback:

More information

Nonlinear System Analysis

Nonlinear System Analysis Nonlinear System Analysis Lyapunov Based Approach Lecture 4 Module 1 Dr. Laxmidhar Behera Department of Electrical Engineering, Indian Institute of Technology, Kanpur. January 4, 2003 Intelligent Control

More information

IN [1], an Approximate Dynamic Inversion (ADI) control

IN [1], an Approximate Dynamic Inversion (ADI) control 1 On Approximate Dynamic Inversion Justin Teo and Jonathan P How Technical Report ACL09 01 Aerospace Controls Laboratory Department of Aeronautics and Astronautics Massachusetts Institute of Technology

More information

ECEN 420 LINEAR CONTROL SYSTEMS. Lecture 6 Mathematical Representation of Physical Systems II 1/67

ECEN 420 LINEAR CONTROL SYSTEMS. Lecture 6 Mathematical Representation of Physical Systems II 1/67 1/67 ECEN 420 LINEAR CONTROL SYSTEMS Lecture 6 Mathematical Representation of Physical Systems II State Variable Models for Dynamic Systems u 1 u 2 u ṙ. Internal Variables x 1, x 2 x n y 1 y 2. y m Figure

More information

NONLINEAR CONTROLLER DESIGN FOR ACTIVE SUSPENSION SYSTEMS USING THE IMMERSION AND INVARIANCE METHOD

NONLINEAR CONTROLLER DESIGN FOR ACTIVE SUSPENSION SYSTEMS USING THE IMMERSION AND INVARIANCE METHOD NONLINEAR CONTROLLER DESIGN FOR ACTIVE SUSPENSION SYSTEMS USING THE IMMERSION AND INVARIANCE METHOD Ponesit Santhanapipatkul Watcharapong Khovidhungij Abstract: We present a controller design based on

More information

Nonlinear disturbance observers Design and applications to Euler-Lagrange systems

Nonlinear disturbance observers Design and applications to Euler-Lagrange systems This paper appears in IEEE Control Systems Magazine, 2017. DOI:.19/MCS.2017.2970 Nonlinear disturbance observers Design and applications to Euler-Lagrange systems Alireza Mohammadi, Horacio J. Marquez,

More information

Robotics. Dynamics. University of Stuttgart Winter 2018/19

Robotics. Dynamics. University of Stuttgart Winter 2018/19 Robotics Dynamics 1D point mass, damping & oscillation, PID, dynamics of mechanical systems, Euler-Lagrange equation, Newton-Euler, joint space control, reference trajectory following, optimal operational

More information

Adaptive fuzzy observer and robust controller for a 2-DOF robot arm

Adaptive fuzzy observer and robust controller for a 2-DOF robot arm Adaptive fuzzy observer and robust controller for a -DOF robot arm S. Bindiganavile Nagesh, Zs. Lendek, A.A. Khalate, R. Babuška Delft University of Technology, Mekelweg, 8 CD Delft, The Netherlands (email:

More information

LMI Methods in Optimal and Robust Control

LMI Methods in Optimal and Robust Control LMI Methods in Optimal and Robust Control Matthew M. Peet Arizona State University Lecture 15: Nonlinear Systems and Lyapunov Functions Overview Our next goal is to extend LMI s and optimization to nonlinear

More information

Lecture 8. Chapter 5: Input-Output Stability Chapter 6: Passivity Chapter 14: Passivity-Based Control. Eugenio Schuster.

Lecture 8. Chapter 5: Input-Output Stability Chapter 6: Passivity Chapter 14: Passivity-Based Control. Eugenio Schuster. Lecture 8 Chapter 5: Input-Output Stability Chapter 6: Passivity Chapter 14: Passivity-Based Control Eugenio Schuster schuster@lehigh.edu Mechanical Engineering and Mechanics Lehigh University Lecture

More information

Handout 2: Invariant Sets and Stability

Handout 2: Invariant Sets and Stability Engineering Tripos Part IIB Nonlinear Systems and Control Module 4F2 1 Invariant Sets Handout 2: Invariant Sets and Stability Consider again the autonomous dynamical system ẋ = f(x), x() = x (1) with state

More information

MCE/EEC 647/747: Robot Dynamics and Control. Lecture 8: Basic Lyapunov Stability Theory

MCE/EEC 647/747: Robot Dynamics and Control. Lecture 8: Basic Lyapunov Stability Theory MCE/EEC 647/747: Robot Dynamics and Control Lecture 8: Basic Lyapunov Stability Theory Reading: SHV Appendix Mechanical Engineering Hanz Richter, PhD MCE503 p.1/17 Stability in the sense of Lyapunov A

More information

Converse Lyapunov theorem and Input-to-State Stability

Converse Lyapunov theorem and Input-to-State Stability Converse Lyapunov theorem and Input-to-State Stability April 6, 2014 1 Converse Lyapunov theorem In the previous lecture, we have discussed few examples of nonlinear control systems and stability concepts

More information

A Normal Form for Energy Shaping: Application to the Furuta Pendulum

A Normal Form for Energy Shaping: Application to the Furuta Pendulum Proc 4st IEEE Conf Decision and Control, A Normal Form for Energy Shaping: Application to the Furuta Pendulum Sujit Nair and Naomi Ehrich Leonard Department of Mechanical and Aerospace Engineering Princeton

More information

System Control Engineering

System Control Engineering System Control Engineering 0 Koichi Hashimoto Graduate School of Information Sciences Text: Feedback Systems: An Introduction for Scientists and Engineers Author: Karl J. Astrom and Richard M. Murray Text:

More information

Adaptive Robust Tracking Control of Robot Manipulators in the Task-space under Uncertainties

Adaptive Robust Tracking Control of Robot Manipulators in the Task-space under Uncertainties Australian Journal of Basic and Applied Sciences, 3(1): 308-322, 2009 ISSN 1991-8178 Adaptive Robust Tracking Control of Robot Manipulators in the Task-space under Uncertainties M.R.Soltanpour, M.M.Fateh

More information

Minimum-Phase Property of Nonlinear Systems in Terms of a Dissipation Inequality

Minimum-Phase Property of Nonlinear Systems in Terms of a Dissipation Inequality Minimum-Phase Property of Nonlinear Systems in Terms of a Dissipation Inequality Christian Ebenbauer Institute for Systems Theory in Engineering, University of Stuttgart, 70550 Stuttgart, Germany ce@ist.uni-stuttgart.de

More information

Robust Semiglobal Nonlinear Output Regulation The case of systems in triangular form

Robust Semiglobal Nonlinear Output Regulation The case of systems in triangular form Robust Semiglobal Nonlinear Output Regulation The case of systems in triangular form Andrea Serrani Department of Electrical and Computer Engineering Collaborative Center for Control Sciences The Ohio

More information

Swinging-Up and Stabilization Control Based on Natural Frequency for Pendulum Systems

Swinging-Up and Stabilization Control Based on Natural Frequency for Pendulum Systems 9 American Control Conference Hyatt Regency Riverfront, St. Louis, MO, USA June -, 9 FrC. Swinging-Up and Stabilization Control Based on Natural Frequency for Pendulum Systems Noriko Matsuda, Masaki Izutsu,

More information

Control of industrial robots. Centralized control

Control of industrial robots. Centralized control Control of industrial robots Centralized control Prof. Paolo Rocco (paolo.rocco@polimi.it) Politecnico di Milano ipartimento di Elettronica, Informazione e Bioingegneria Introduction Centralized control

More information

LMI Methods in Optimal and Robust Control

LMI Methods in Optimal and Robust Control LMI Methods in Optimal and Robust Control Matthew M. Peet Arizona State University Lecture 20: LMI/SOS Tools for the Study of Hybrid Systems Stability Concepts There are several classes of problems for

More information

Global output regulation through singularities

Global output regulation through singularities Global output regulation through singularities Yuh Yamashita Nara Institute of Science and Techbology Graduate School of Information Science Takayama 8916-5, Ikoma, Nara 63-11, JAPAN yamas@isaist-naraacjp

More information

Multi-Robotic Systems

Multi-Robotic Systems CHAPTER 9 Multi-Robotic Systems The topic of multi-robotic systems is quite popular now. It is believed that such systems can have the following benefits: Improved performance ( winning by numbers ) Distributed

More information

Asymptotic Smooth Stabilization of the Inverted 3D Pendulum

Asymptotic Smooth Stabilization of the Inverted 3D Pendulum IEEE TRANSACTIONS ON AUTOMATIC CONTROL 1 Asymptotic Smooth Stabilization of the Inverted 3D Pendulum Nalin A. Chaturvedi, N. Harris McClamroch, Fellow, IEEE, and Dennis S. Bernstein, Fellow, IEEE Abstract

More information

Review: control, feedback, etc. Today s topic: state-space models of systems; linearization

Review: control, feedback, etc. Today s topic: state-space models of systems; linearization Plan of the Lecture Review: control, feedback, etc Today s topic: state-space models of systems; linearization Goal: a general framework that encompasses all examples of interest Once we have mastered

More information

SLIDING MODE FAULT TOLERANT CONTROL WITH PRESCRIBED PERFORMANCE. Jicheng Gao, Qikun Shen, Pengfei Yang and Jianye Gong

SLIDING MODE FAULT TOLERANT CONTROL WITH PRESCRIBED PERFORMANCE. Jicheng Gao, Qikun Shen, Pengfei Yang and Jianye Gong International Journal of Innovative Computing, Information and Control ICIC International c 27 ISSN 349-498 Volume 3, Number 2, April 27 pp. 687 694 SLIDING MODE FAULT TOLERANT CONTROL WITH PRESCRIBED

More information

Output Feedback Stabilization with Prescribed Performance for Uncertain Nonlinear Systems in Canonical Form

Output Feedback Stabilization with Prescribed Performance for Uncertain Nonlinear Systems in Canonical Form Output Feedback Stabilization with Prescribed Performance for Uncertain Nonlinear Systems in Canonical Form Charalampos P. Bechlioulis, Achilles Theodorakopoulos 2 and George A. Rovithakis 2 Abstract The

More information

Stability in the sense of Lyapunov

Stability in the sense of Lyapunov CHAPTER 5 Stability in the sense of Lyapunov Stability is one of the most important properties characterizing a system s qualitative behavior. There are a number of stability concepts used in the study

More information

Chapter III. Stability of Linear Systems

Chapter III. Stability of Linear Systems 1 Chapter III Stability of Linear Systems 1. Stability and state transition matrix 2. Time-varying (non-autonomous) systems 3. Time-invariant systems 1 STABILITY AND STATE TRANSITION MATRIX 2 In this chapter,

More information

MCE693/793: Analysis and Control of Nonlinear Systems

MCE693/793: Analysis and Control of Nonlinear Systems MCE693/793: Analysis and Control of Nonlinear Systems Lyapunov Stability - I Hanz Richter Mechanical Engineering Department Cleveland State University Definition of Stability - Lyapunov Sense Lyapunov

More information

Nonlinear Control Lecture 9: Feedback Linearization

Nonlinear Control Lecture 9: Feedback Linearization Nonlinear Control Lecture 9: Feedback Linearization Farzaneh Abdollahi Department of Electrical Engineering Amirkabir University of Technology Fall 2011 Farzaneh Abdollahi Nonlinear Control Lecture 9 1/75

More information

1 Relative degree and local normal forms

1 Relative degree and local normal forms THE ZERO DYNAMICS OF A NONLINEAR SYSTEM 1 Relative degree and local normal orms The purpose o this Section is to show how single-input single-output nonlinear systems can be locally given, by means o a

More information

DO NOT DO HOMEWORK UNTIL IT IS ASSIGNED. THE ASSIGNMENTS MAY CHANGE UNTIL ANNOUNCED.

DO NOT DO HOMEWORK UNTIL IT IS ASSIGNED. THE ASSIGNMENTS MAY CHANGE UNTIL ANNOUNCED. EE 533 Homeworks Spring 07 Updated: Saturday, April 08, 07 DO NOT DO HOMEWORK UNTIL IT IS ASSIGNED. THE ASSIGNMENTS MAY CHANGE UNTIL ANNOUNCED. Some homework assignments refer to the textbooks: Slotine

More information

Represent this system in terms of a block diagram consisting only of. g From Newton s law: 2 : θ sin θ 9 θ ` T

Represent this system in terms of a block diagram consisting only of. g From Newton s law: 2 : θ sin θ 9 θ ` T Exercise (Block diagram decomposition). Consider a system P that maps each input to the solutions of 9 4 ` 3 9 Represent this system in terms of a block diagram consisting only of integrator systems, represented

More information

MEM04: Rotary Inverted Pendulum

MEM04: Rotary Inverted Pendulum MEM4: Rotary Inverted Pendulum Interdisciplinary Automatic Controls Laboratory - ME/ECE/CHE 389 April 8, 7 Contents Overview. Configure ELVIS and DC Motor................................ Goals..............................................3

More information

Reverse Order Swing-up Control of Serial Double Inverted Pendulums

Reverse Order Swing-up Control of Serial Double Inverted Pendulums Reverse Order Swing-up Control of Serial Double Inverted Pendulums T.Henmi, M.Deng, A.Inoue, N.Ueki and Y.Hirashima Okayama University, 3-1-1, Tsushima-Naka, Okayama, Japan inoue@suri.sys.okayama-u.ac.jp

More information

CONTROL OF ROBOT CAMERA SYSTEM WITH ACTUATOR S DYNAMICS TO TRACK MOVING OBJECT

CONTROL OF ROBOT CAMERA SYSTEM WITH ACTUATOR S DYNAMICS TO TRACK MOVING OBJECT Journal of Computer Science and Cybernetics, V.31, N.3 (2015), 255 265 DOI: 10.15625/1813-9663/31/3/6127 CONTROL OF ROBOT CAMERA SYSTEM WITH ACTUATOR S DYNAMICS TO TRACK MOVING OBJECT NGUYEN TIEN KIEM

More information