Stability Theory

Stability theory is a fundamental topic in mathematics and engineering that underpins every branch of control theory. For a control system, the least requirement is that the system be stable, since only a stable system can operate in the presence of unknown disturbances or noise. There are many kinds of stability concepts, such as input-output stability, absolute stability, Lyapunov stability, and stability of periodic solutions. These stability concepts have been studied extensively for almost one hundred years, and there is a rich literature on the topic. In analyzing and designing a nonlinear control system, Lyapunov stability theory plays a vital role for the following reasons. First, Lyapunov's direct method uses an energy-like function, the so-called Lyapunov function, to study the behavior of dynamical systems; in many cases this function reflects physical properties of the system under study. Second, Lyapunov's second method is applicable to nonlinear systems. Finally, many results in input-output stability can be obtained using Lyapunov stability theory.

Lyapunov stability theory generally includes Lyapunov's first and second methods. Lyapunov's first method, and the later-developed center manifold theory, are essentially techniques based on a lowest-order approximation around a given point or a nominal trajectory. The stability results achieved using these two methods are inherently local, and the stability region may be hard to estimate and is often very small. Since the objective of this class is to study nonlinear systems directly, we choose to skip these two approximation techniques. Our primary interest is in stability theory based on Lyapunov's second method for systems described by ordinary differential equations.
Stability Concepts

The following definitions, together with the theorems presented in the coming sections, form the necessary foundation for the analysis presented in subsequent chapters. We begin with several standard definitions of stability in the sense of Lyapunov.

The equilibrium point x = 0 is said to be Lyapunov stable (LS) at time t₀ if, for each ε > 0, there exists a constant δ(t₀, ε) > 0 such that

    ‖x(t₀)‖ < δ(t₀, ε)  ⟹  ‖x(t)‖ ≤ ε  for all t ≥ t₀.

It is said to be uniformly Lyapunov stable (ULS) over [t₀, ∞) if, for each ε > 0, the constant δ(t₀, ε) = δ(ε) > 0 is independent of the initial time t₀.

The equilibrium point x = 0 is said to be attractive at time t₀ if, for some δ > 0 and each ε > 0, there exists a finite time interval T(t₀, δ, ε) such that

    ‖x(t₀)‖ < δ  ⟹  ‖x(t)‖ ≤ ε  for all t ≥ t₀ + T(t₀, δ, ε).

It is said to be uniformly attractive (UA) over [t₀, ∞) if, for all ε satisfying 0 < ε < δ, the finite time interval T(t₀, δ, ε) = T(δ, ε) is independent of the initial time t₀.

The equilibrium point x = 0 is asymptotically stable (AS) at time t₀ if it is Lyapunov stable at time t₀ and attractive, or equivalently, if there exists δ > 0 such that

    ‖x(t₀)‖ < δ  ⟹  x(t) → 0 as t → ∞.

It is uniformly asymptotically stable (UAS) over [t₀, ∞) if it is uniformly Lyapunov stable over [t₀, ∞) and x = 0 is uniformly attractive.

The concepts of uniform stability, uniform attraction, and uniform asymptotic stability are motivated by the fact that the stability and performance properties of many systems, for example autonomous systems, are independent of the initial time t₀. Uniformity is important for establishing many results in Lyapunov stability theory, for instance converse theorems.
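The ε–δ requirement in these definitions can be made concrete numerically for simple systems. The following sketch is illustrative only: the scalar system ẋ = −x and the choice δ(ε) = ε are our own assumptions, not from the text. It checks that the same δ works for every initial time t₀, which is precisely the uniformity property.

```python
# Illustrative sketch only: the system xdot = -x and the choice
# delta(eps) = eps are assumptions used to make the epsilon-delta
# definition of uniform Lyapunov stability concrete.

def simulate(x0, t0, t_end, dt=1e-3):
    """Forward-Euler integration of xdot = -x; returns |x| along the way."""
    x, t = x0, t0
    traj = [abs(x)]
    while t < t_end:
        x += dt * (-x)          # |x| decays monotonically for dt < 1
        t += dt
        traj.append(abs(x))
    return traj

# The same delta(eps) = eps certifies the bound for every initial time t0,
# which is exactly the "uniform" part of the definition.
for eps in (1.0, 0.1, 0.01):
    delta = eps
    for t0 in (0.0, 5.0, 50.0):
        x0 = 0.9 * delta        # any |x(t0)| < delta
        assert max(simulate(x0, t0, t0 + 10.0)) <= eps
```

Since ‖x(t)‖ never exceeds ‖x(t₀)‖ for this system, δ(ε) = ε suffices and does not depend on t₀; for systems whose transients overshoot, δ(ε) must be chosen strictly smaller than ε.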
Comparisons between stability, attraction, and asymptotic stability can be made in the two-dimensional space, as shown by Figure 1. By definition, asymptotic stability implies both attraction and Lyapunov stability. The difference between attraction and Lyapunov stability is twofold. First, in the definition of stability, δ has to be chosen at least not larger than, and often much smaller than, the given constant ε, that is, δ(ε) ≤ ε; in the definition of attraction, ε is independent of δ and can be chosen to be anything smaller than δ. This implies that stability does not mean attraction. Second, attraction does not imply stability either since, no matter how small δ is chosen, the outer boundary ε for ‖x(t)‖, t ≥ t₀, may not necessarily become small.

Beyond asymptotic stability and uniformity, many control applications require a certain speed of convergence. In this case, the following definition provides commonly used terminology.

The equilibrium point x = 0 at time t₀ is exponentially attractive (EA) if, for some δ > 0, there exist constants α(δ) > 0 and β > 0 such that

    ‖x(t₀)‖ < δ  ⟹  ‖x(t)‖ ≤ α(δ) e^(−β(t − t₀)).

It is said to be exponentially stable (ES) if, for some δ > 0, there exist constants α > 0 and β > 0 such that

    ‖x(t₀)‖ < δ  ⟹  ‖x(t)‖ ≤ α ‖x(t₀)‖ e^(−β(t − t₀)).

Exponential stability always implies uniform asymptotic stability. The converse is true for linear systems but not for nonlinear systems in general. As an example, the solution of the scalar nonlinear system ẋ = −x⁵ can easily be found in closed form and shown to exhibit asymptotic but not exponential stability.

The above definitions are phrased for systems whose equilibrium point is at the origin. They can easily be extended to systems with any finite, known equilibrium state, since a given finite point (or a given trajectory, for a tracking problem) can always be translated to the origin by a simple coordinate transformation. For uncertain systems in which some of the dynamics
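To see why ẋ = −x⁵ is asymptotically but not exponentially stable, note that separating variables gives d(x⁻⁴)/dt = 4, hence x(t) = (x₀⁻⁴ + 4t)^(−1/4) for x₀ > 0 and t₀ = 0, a solution that decays only like t^(−1/4). The following numerical check is an illustrative sketch of this argument, not part of the formal development:

```python
import math

def x_poly(t, x0=1.0):
    """Closed-form solution of xdot = -x**5, x(0) = x0 > 0:
    d(x**-4)/dt = 4, hence x(t) = (x0**-4 + 4*t)**(-1/4)."""
    return (x0 ** -4 + 4.0 * t) ** -0.25

# Asymptotic stability: the solution does tend to zero ...
assert x_poly(1e6) < 0.1

# ... but not exponentially: for any beta > 0, x(t) * exp(beta * t)
# grows without bound, so no envelope alpha * exp(-beta * t) can work.
beta = 0.01
ratios = [x_poly(t) * math.exp(beta * t) for t in (10.0, 100.0, 1000.0)]
assert ratios[0] < ratios[1] < ratios[2]
```

Since t^(−1/4) eventually dominates e^(−βt) for every β > 0, the same conclusion holds no matter how small β is chosen.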
are unknown, it is impossible to determine the equilibrium state. The lack of any information on the equilibrium point(s) causes problems in stability analysis. First, the above definitions, strictly speaking, become useless, since no coordinate translation can be done. Second, even if we choose not to check the equilibrium point, the system is in general not Lyapunov stable or attractive about x = 0 or any known, fixed point. The reason is that an uncertain system may never settle down, and even if it does, it will not converge to any given point. A simple example is the system ẋ₁ = x₂ + d(t), ẋ₂ = u, where the uncertainty d(t) has magnitude bounded by one. There is no control under which a fixed point is the equilibrium state of the system. As a result, stability or attraction with respect to a fixed point can never be achieved. This implies that, rather than requiring that the system trajectory stay in or converge to an arbitrarily small neighborhood around x = 0, stability concepts oriented to uncertain systems should be stated in terms of a certain measure of closeness between the solution and the origin. The following are two definitions along this line; they are somewhat less familiar than the above definitions but are crucial to a discussion of robust control of uncertain systems.

A solution x : R₊ → Rⁿ, x(t₀) = x₀, is said to be uniformly bounded (UB) if, for some δ > 0, there is a positive constant d(δ) < ∞, possibly dependent on δ (or x₀) but not on t₀, such that

    ‖x(t₀)‖ < δ  ⟹  ‖x(t)‖ ≤ d(δ)  for all t ≥ t₀.

A solution x : R₊ → Rⁿ, x(t₀) = x₀, is said to be uniformly ultimately bounded (UUB) with respect to a set W ⊂ Rⁿ containing the origin if, for some δ > 0, there is a non-negative constant T(x₀, W) < ∞, possibly dependent on x₀ and W but not on t₀, such that

    ‖x(t₀)‖ < δ  ⟹  x(t) ∈ W  for all t ≥ t₀ + T(x₀, W).

The set W in the above definition, called the residue set, is usually characterized by a hyperball W = B(0, ε) centered at the origin and of radius ε.
If ε is chosen such that ε ≥ d(δ), UUB stability reduces to UB stability. Although not explicitly stated in the definition, UUB stability is used mainly for the case in which ε is small, which represents a stronger stability result than UB stability. The relations between the two kinds of boundedness and the previous stability definitions can be seen by comparing Figures 1 and 2, again in the two-dimensional space. Specifically, Lyapunov stability implies uniform boundedness; the converse is not true in general, since uniform boundedness does not imply Lyapunov stability unless d(δ) → 0 as δ → 0. Similarly, attraction implies ultimate boundedness; the converse is not true in general, since ultimate boundedness does not imply attraction unless the set W becomes arbitrarily small and eventually a single point, the origin, in the limit as T(x₀, W) tends to infinity. Thus, the smaller d(δ) (or W = B(0, ε)) can be made, the closer UB (or UUB) comes to Lyapunov stability (or attraction). If both d(δ) and W can be made arbitrarily small, UB and UUB approach uniform asymptotic stability in the limit. In some literature, UUB stability is called practical stability.

The desired outcome of robust control is to make the state or output of an uncertain system exponentially stable if possible, uniformly asymptotically stable if ES cannot be achieved, or UUB if UAS is not achievable. UUB stability is less restrictive than AS or ES but, as will be shown later, it can in many cases be made arbitrarily close to UAS by designing the robust control properly so that the set W is small enough. Also, UUB stability gives a certain measure of convergence speed through the time interval T(x₀, W). Therefore, UUB stability is often the best result achievable for controlling uncertain systems, as will be reflected by the results in this book.

All the definitions above qualitatively state certain properties of the solutions of differential equations in a neighborhood of the equilibrium state.
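The earlier example ẋ₁ = x₂ + d(t), ẋ₂ = u can be used to visualize UUB behavior numerically. The sketch below is illustrative only: the feedback gains in u = −2x₁ − 2x₂, the disturbance d(t) = sin t, and the initial condition are our own assumptions, not from the text. Under this assumed feedback the trajectory enters and remains in a ball around the origin, yet never settles at the origin itself, because the bounded disturbance keeps exciting the state.

```python
# Hypothetical feedback u = -2*x1 - 2*x2 (an assumption for illustration)
# applied to  x1dot = x2 + d(t),  x2dot = u,  with |d(t)| <= 1.
import math

def simulate_uub(x1, x2, t_end=60.0, dt=1e-3):
    """Forward-Euler simulation; returns ||x(t)|| samples after the transient."""
    t = 0.0
    norms = []
    while t < t_end:
        d = math.sin(t)                  # bounded disturbance, |d| <= 1
        u = -2.0 * x1 - 2.0 * x2         # assumed stabilizing feedback
        x1, x2 = x1 + dt * (x2 + d), x2 + dt * u
        t += dt
        if t > 40.0:                     # ignore the decaying transient
            norms.append(math.hypot(x1, x2))
    return norms

norms = simulate_uub(0.5, 0.5)
late_max, late_min = max(norms), min(norms)
assert late_max < 2.0    # trajectory stays in a ball: uniformly ultimately bounded
assert late_min > 0.05   # but it never converges to the origin
```

This matches the discussion above: with a bounded, persistent disturbance, convergence to a fixed point is impossible, and UUB with a small residue set W is the natural goal.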
For a given system, there theoretically exists a maximum (or supremum) value for δ in the above definitions. This supremum value is usually used to characterize the region of stability, or stability region, since it represents the radius of an n-dimensional hyperball, centered at the origin of Rⁿ, such that the system
is stable (in the sense of being Lyapunov stable, attractive, asymptotically stable, exponentially stable, uniformly bounded, or ultimately bounded) for every initial point inside the ball. If the supremum value is finite, the system is stable only inside a finite ball and is therefore said to be locally stable; otherwise it is globally stable, or stable in the large. The adjectives global, uniform, and asymptotic can be used together; if more than one of them is used, the intersection of the conditions in the corresponding definitions is implied.

The stability definitions may be introduced using norms different from the Euclidean norm used above. Since all norms are equivalent, as summarized in the Appendix, the stability concepts themselves are independent of the norm chosen. However, different norms represent different geometrical shapes for the stability region. It is therefore true that, for a locally stable system, the estimate of its stability region may be maximized by choosing a proper norm. Since most results in this book concern global stability and since the Euclidean norm is popular in stability analysis and in defining Lyapunov functions, the details of choosing various norms are not pursued.

The various stability concepts defined above are based on properties of the solution to the differential equation of the system. For nonlinear systems, analyzing stability by finding an explicit solution is generally very difficult, and it becomes impossible for uncertain systems, whose solutions can never be found. The only general way of pursuing stability analysis and control design for uncertain systems is the Lyapunov direct method, which determines stability without explicitly solving for the solution. Therefore, the Lyapunov direct method provides the mathematical foundation for analysis and can be used as the means of designing robust control, and we choose it as the main approach taken in this book.
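As an aside to the remark on norm equivalence above, the standard bounds in Rⁿ, such as ‖x‖∞ ≤ ‖x‖₂ ≤ √n ‖x‖∞, can be checked numerically. The sketch below is illustrative only; it verifies the bound on random samples, which is why a stability region defined by one norm always contains, and is contained in, scaled regions defined by another norm, even though the two regions have different shapes (a square for ‖·‖∞ in R², a disk for ‖·‖₂).

```python
# Illustrative numerical check of the norm-equivalence bounds
#   ||x||_inf <= ||x||_2 <= sqrt(n) * ||x||_inf   in R^n,
# which is why stability concepts are independent of the norm used.
import math
import random

random.seed(0)
n = 4
for _ in range(1000):
    x = [random.uniform(-10.0, 10.0) for _ in range(n)]
    norm_inf = max(abs(v) for v in x)
    norm_2 = math.sqrt(sum(v * v for v in x))
    assert norm_inf <= norm_2 <= math.sqrt(n) * norm_inf + 1e-12
```

The equivalence constants (here 1 and √n) are what make the stability concepts norm-independent, while the shape of the associated unit ball is what makes the stability-region estimate norm-dependent.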