Stability Theory

Stability theory is a fundamental topic in mathematics and engineering that includes every branch of control theory. For a control system, the least requirement is that the system be stable, since only a stable system can operate in the presence of unknown disturbances or noise. There are many kinds of stability concepts, such as input-output stability, absolute stability, Lyapunov stability, and stability of periodic solutions. These stability concepts have been studied extensively for almost one hundred years, and there is a rich literature on the topic. In analyzing and designing a nonlinear control system, Lyapunov stability theory plays a vital role for the following reasons. First, Lyapunov's direct method uses an energy-like function, the so-called Lyapunov function, to study the behavior of dynamical systems; in many cases this function reflects physical properties of the system under study. Second, Lyapunov's second method is applicable to nonlinear systems. Finally, many results in input-output stability can be obtained using Lyapunov stability theory. Lyapunov stability theory generally includes Lyapunov's first and second methods. Lyapunov's first method, and the center manifold theory developed later, are techniques based on lowest-order approximation around a given point or a nominal trajectory. The stability results achieved using these two methods are inherently local; the stability region may be hard to estimate and is often very small. Since the objective of this class is to study nonlinear systems directly, we choose to skip these two approximation techniques. Our primary interest is in stability theory based on Lyapunov's second method for systems described by ordinary differential equations.

Stability Concepts

The following definitions, together with the theorems presented in the coming sections, form the necessary foundation for the analysis in subsequent chapters. We begin with several standard definitions of stability in the sense of Lyapunov.

The equilibrium point x = 0 is said to be Lyapunov stable (LS) at time t₀ if, for each ε > 0, there exists a constant δ(t₀, ε) > 0 such that ‖x(t₀)‖ < δ(t₀, ε) implies ‖x(t)‖ ≤ ε for all t ≥ t₀. It is said to be uniformly Lyapunov stable (ULS) over [t₀, ∞) if, for each ε > 0, the constant δ(t₀, ε) = δ(ε) > 0 can be chosen independent of the initial time t₀.

The equilibrium point x = 0 is said to be attractive at time t₀ if, for some δ > 0 and each ε > 0, there exists a finite time interval T(t₀, δ, ε) such that ‖x(t₀)‖ < δ implies ‖x(t)‖ ≤ ε for all t ≥ t₀ + T(t₀, δ, ε). It is said to be uniformly attractive (UA) over [t₀, ∞) if, for all ε satisfying 0 < ε < δ, the finite time interval T(t₀, δ, ε) = T(δ, ε) is independent of the initial time t₀.

The equilibrium point x = 0 is asymptotically stable (AS) at time t₀ if it is Lyapunov stable at time t₀ and attractive; equivalently, there exists δ > 0 such that ‖x(t₀)‖ < δ implies x(t) → 0 as t → ∞. It is uniformly asymptotically stable (UAS) over [t₀, ∞) if it is uniformly Lyapunov stable over [t₀, ∞) and x = 0 is uniformly attractive.

The concepts of uniform stability, uniform attraction, and uniform asymptotic stability are motivated by the fact that the stability and performance properties of many systems, for example autonomous systems, are independent of the initial time t₀. Uniformity is important for establishing many results in Lyapunov stability theory, for instance converse theorems.
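The quantifier structure of the ε–δ definitions above can be made concrete numerically. The following sketch (an illustration added here, not part of the standard definitions; the test system ẋ = −x, the integrator, and the tolerances are assumptions of this example) checks that the choice δ(ε) = ε witnesses uniform Lyapunov stability for several initial times t₀.

```python
def simulate_norm(x0, t0, tf, dt=1e-3):
    # Forward-Euler integration of the scalar test system x' = -x,
    # returning the history of |x(t)| for t in [t0, tf].
    x, t = x0, t0
    history = [abs(x)]
    while t < tf:
        x += dt * (-x)
        t += dt
        history.append(abs(x))
    return history

# For x' = -x, |x(t)| = |x(t0)| e^{-(t - t0)} never grows, so the choice
# delta(eps) = eps satisfies the ULS definition at every initial time t0.
eps = 0.5
delta = eps
for t0 in (0.0, 5.0, 20.0):                 # uniformity: vary the initial time
    history = simulate_norm(0.9 * delta, t0, t0 + 10.0)
    assert max(history) <= eps              # ||x(t)|| <= eps for all t >= t0
```

Checking one δ for finitely many initial times of course proves nothing in general; the point is only to make the order of the quantifiers (for each ε, find δ independent of t₀) tangible.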

Comparisons between stability, attraction, and asymptotic stability can be made in the two-dimensional space, as shown in Figure 1. By definition, asymptotic stability implies both attraction and Lyapunov stability. The difference between attraction and Lyapunov stability is twofold. First, in the definition of stability, δ has to be chosen at least no larger, and often much smaller, than the given constant ε, that is, δ(ε) ≤ ε; in the definition of attraction, ε is independent of δ and can be chosen to be anything smaller than δ. This implies that stability does not mean attraction. Second, attraction does not imply stability either since, no matter how small δ is chosen, the outer bound ε on ‖x(t)‖, t ≥ t₀, may not necessarily become small.

Beyond asymptotic stability and uniformity, many control applications require a certain speed of convergence. In this case, the following definition provides a commonly used terminology. The equilibrium point x = 0 at time t₀ is exponentially attractive (EA) if, for some δ > 0, there exist constants α(δ) > 0 and β > 0 such that ‖x(t₀)‖ < δ implies ‖x(t)‖ ≤ α(δ) e^{−β(t − t₀)}. It is said to be exponentially stable (ES) if, for some δ > 0, there exist constants α > 0 and β > 0 such that ‖x(t₀)‖ < δ implies ‖x(t)‖ ≤ α ‖x(t₀)‖ e^{−β(t − t₀)}.

Exponential stability always implies uniform asymptotic stability. The converse is true for linear systems but not for nonlinear systems in general. As an example, the scalar nonlinear system ẋ = −x⁵ can easily be solved explicitly: x(t) = x₀ (1 + 4x₀⁴ t)^{−1/4}, which decays only polynomially and hence shows asymptotic but not exponential stability.

The above definitions are phrased for systems whose equilibrium point is at the origin. They extend easily to systems with any finite, known equilibrium state, since a given finite point (or a given trajectory, for tracking problems) can always be translated to the origin by a simple coordinate transformation. For uncertain systems in which some of the dynamics

are unknown, it is impossible to determine the equilibrium state. The lack of any information on the equilibrium point(s) causes problems in stability analysis. First, the above definitions, strictly speaking, become useless, since no coordinate translation can be performed. Second, even if we choose not to check the equilibrium point, the system is in general not Lyapunov stable or attractive about x = 0 or any other known, fixed point. The reason is that an uncertain system may never settle down, and even if it does, it will not converge to any given point. A simple example is the system ẋ₁ = x₂ + d(t), ẋ₂ = u, where the uncertainty d(t) has magnitude bounded by one. There is no control under which a fixed point is the equilibrium state of the system. As a result, stability or attraction with respect to a fixed point can never be achieved. This implies that, rather than requiring that the system trajectory stay in or converge to an arbitrarily small neighborhood of x = 0, stability concepts oriented to uncertain systems should be stated in terms of a certain measure of closeness between the solution and the origin. The following two definitions are along this line; they are somewhat less familiar than the definitions above but are crucial to a discussion of robust control of uncertain systems.

A solution x : R⁺ → Rⁿ, x(t₀) = x₀, is said to be uniformly bounded (UB) if, for some δ > 0, there is a positive constant d(δ) < ∞, possibly dependent on δ (or x₀) but not on t₀, such that ‖x(t₀)‖ < δ implies ‖x(t)‖ ≤ d(δ) for all t ≥ t₀. A solution x : R⁺ → Rⁿ, x(t₀) = x₀, is said to be uniformly ultimately bounded (UUB) with respect to a set W ⊂ Rⁿ containing the origin if there is a non-negative constant T(x₀, W) < ∞, possibly dependent on x₀ and W but not on t₀, such that ‖x(t₀)‖ < δ implies x(t) ∈ W for all t ≥ t₀ + T(x₀, W). The set W in the above definition, called the residue set, is usually characterized by a hyperball W = B(0, ε) centered at the origin with radius ε.
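For uncertain dynamics of this kind, UUB is the natural notion. As a minimal numerical sketch (the scalar system, disturbance, step size, and bounds below are assumptions of this illustration, not taken from the text), consider ẋ = −x + d(t) with |d(t)| ≤ 1: no fixed point is an equilibrium for all admissible d, yet every solution ultimately enters a ball around the origin.

```python
import math

def simulate(x0, d, tf, dt=1e-3):
    # Forward-Euler integration of the scalar system x' = -x + d(t),
    # a stand-in for an uncertain system with |d(t)| <= 1.
    x, t = x0, 0.0
    trajectory = [(t, x)]
    while t < tf:
        x += dt * (-x + d(t))
        t += dt
        trajectory.append((t, x))
    return trajectory

d = lambda t: math.sin(3.0 * t)      # one admissible disturbance, |d| <= 1
trajectory = simulate(x0=5.0, d=d, tf=20.0)

# UUB with residue set W = B(0, eps): after a finite transient T the state
# remains inside W, although it never converges to any fixed point.
eps, T = 1.1, 10.0
tail = [abs(x) for (t, x) in trajectory if t >= T]
assert max(tail) <= eps
```

Here the variation-of-constants bound |x(t)| ≤ e^{−(t − t₀)} |x₀| + 1 shows that any ε > 1 serves as a residue-ball radius, with T depending on |x₀| but not on t₀.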
If ε is chosen such that ε ≥ d(δ),

UUB stability reduces to UB stability. Although not explicitly stated in the definition, UUB stability is used mainly in the case where ε is small, which represents a stronger stability result than UB stability. The relations between the two kinds of boundedness and the previous stability definitions can be seen by comparing Figures 1 and 2, again in the two-dimensional space. Specifically, Lyapunov stability implies uniform boundedness; the converse is not true in general, since uniform boundedness does not imply Lyapunov stability unless lim_{δ→0} d(δ) = 0. Attraction implies ultimate boundedness; the converse is not true in general, since ultimate boundedness does not imply attraction unless the set W becomes arbitrarily small, and eventually a single point, the origin, in the limit as T(x₀, W) tends to infinity. Thus, the smaller we can make d(δ) (or W = B(0, ε)), the closer UB (or UUB) comes to Lyapunov stability (or attraction). If both d(δ) and W can be made arbitrarily small, UB and UUB approach uniform asymptotic stability in the limit. In some literature, UUB stability is called practical stability.

The desired outcome of robust control is to make the state or output of an uncertain system exponentially stable if possible, uniformly asymptotically stable if ES cannot be achieved, or UUB if UAS is not achievable. UUB stability is less restrictive than AS or ES, but, as will be shown later, it can in many cases be made arbitrarily close to UAS by designing the robust control so that the set W is small enough. Also, UUB stability gives a certain measure of convergence speed by offering the time interval T(x₀, W). Therefore, UUB stability is often the best result achievable in controlling uncertain systems, as will be reflected by the results in this book.

All the definitions above qualitatively state certain properties of the solutions of differential equations in a neighborhood of the equilibrium state.
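The claim that uniform boundedness does not imply Lyapunov stability can be seen concretely on the Van der Pol oscillator, whose origin is an unstable equilibrium while every trajectory is ultimately trapped near a limit cycle. The sketch below (the system, step size, and numerical bounds are assumptions of this illustration) starts arbitrarily close to the origin and watches the norm first grow and then saturate.

```python
def vdp_step(x, y, dt):
    # One forward-Euler step of the Van der Pol oscillator
    #   x' = y,   y' = -x + (1 - x**2) * y,
    # whose origin is unstable, yet all solutions are uniformly
    # bounded because they approach a limit cycle.
    return x + dt * y, y + dt * (-x + (1.0 - x * x) * y)

x, y = 1e-3, 0.0                 # start arbitrarily close to the origin
peak = 0.0
for _ in range(400_000):         # integrate to t = 40 with dt = 1e-4
    x, y = vdp_step(x, y, 1e-4)
    peak = max(peak, (x * x + y * y) ** 0.5)

assert peak > 1.0   # the norm leaves any small ball: no Lyapunov stability
assert peak < 10.0  # yet it stays uniformly bounded (UB holds)
```

This is exactly the situation where d(δ) cannot be made small as δ → 0: every trajectory is bounded, but the bound does not shrink with the initial condition.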
For a given system, there theoretically exists a maximum (or supremum) value for δ in the definitions. This supremum value is usually used to characterize the region of stability (or stability region), since it represents the radius of the n-dimensional hyperball, centered at the origin of Rⁿ, such that the system

is stable (in the sense of being Lyapunov stable, attractive, asymptotically stable, exponentially stable, uniformly bounded, or ultimately bounded) at every point inside the ball. If the supremum value is finite, the system is stable only inside a finite ball and is therefore said to be locally stable; otherwise it is globally stable, or stable in the large. The adjectives global, uniform, and asymptotic can be used together; if more than one of them is used, the intersection of the conditions in the corresponding definitions is implied.

The stability definitions may be introduced using norms different from the Euclidean norm used above. Since all norms on Rⁿ are equivalent, as summarized in the Appendix, the stability concepts are independent of the choice of norm. However, different norms represent different geometrical shapes for the stability region. It is therefore true that, for a locally stable system, the estimate of its stability region may be maximized by choosing a proper norm. Since most results in this book concern global stability, and since the Euclidean norm is popular in stability analysis and in defining Lyapunov functions, the details of choosing various norms are not pursued.

The various stability concepts defined above are based on properties of the solution to the differential equation of the system. For nonlinear systems, analyzing stability by finding an explicit solution is generally very difficult, and it becomes impossible for uncertain systems, whose solutions can never be found. The only general way of pursuing stability analysis and control design for uncertain systems is the Lyapunov direct method, which determines stability without explicitly solving for the solution. Therefore, the Lyapunov direct method provides the mathematical foundation for analysis and can be used as the means of designing robust control, and we choose it as the main approach taken in this book.
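To make the notion of a finite stability region concrete, consider the hypothetical scalar system ẋ = −x + x³ (an illustration added here, not from the text). Its equilibria are 0 and ±1, and the supremum value of δ for the origin is exactly 1: trajectories starting with |x₀| < 1 converge to the origin, while those with |x₀| > 1 escape, so the system is locally but not globally stable.

```python
def simulate(x0, tf, dt=1e-4, cap=1e6):
    # Forward-Euler integration of x' = -x + x**3; integration stops
    # early if |x| exceeds cap (the solution can blow up in finite time).
    x, t = x0, 0.0
    while t < tf and abs(x) < cap:
        x += dt * (-x + x**3)
        t += dt
    return x

# Inside the stability region (|x0| < 1): convergence to the origin.
assert abs(simulate(0.9, tf=20.0)) < 1e-3
# Outside (|x0| > 1): the trajectory escapes, so the supremum delta = 1 is
# finite and the origin is only locally asymptotically stable.
assert abs(simulate(1.1, tf=20.0)) > 1.0
```

In one dimension the boundary of the stability region is simply the nearest unstable equilibrium; in Rⁿ the region is the hyperball (or, with other norms, other shapes) described in the text.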