
Feedback Systems: An Introduction for Scientists and Engineers, SECOND EDITION. Karl Johan Åström and Richard M. Murray. Version v3.i (8--6). This is the electronic edition of Feedback Systems. Hardcover editions may be purchased from Princeton University Press. This manuscript is for personal use only and may not be reproduced, in whole or in part, without written consent from the publisher. PRINCETON UNIVERSITY PRESS, PRINCETON AND OXFORD.

Chapter Five
Dynamic Behavior

It Don't Mean a Thing If It Ain't Got That Swing. Duke Ellington (1899-1974)

In this chapter we present a broad discussion of the behavior of dynamical systems, focused on systems modeled by nonlinear differential equations. This allows us to consider equilibrium points, stability, limit cycles, and other key concepts in understanding dynamic behavior. We also introduce some methods for analyzing the global behavior of solutions.

5.1 Solving Differential Equations

In the previous two chapters we saw that one of the methods of modeling dynamical systems is through the use of ordinary differential equations (ODEs). A state space, input/output system has the form

dx/dt = f(x, u),   y = h(x, u),   (5.1)

where x = (x1, ..., xn) ∈ R^n is the state, u ∈ R^p is the input, and y ∈ R^q is the output. The smooth maps f : R^n × R^p → R^n and h : R^n × R^p → R^q represent the dynamics and measurements for the system. In general, they can be nonlinear functions of their arguments. Systems with many inputs and many outputs are called multi-input, multi-output (MIMO) systems. We will usually focus on single-input, single-output (SISO) systems, for which p = q = 1.

We begin by investigating systems in which the input has been set to a function of the state, u = α(x). This is one of the simplest types of feedback, in which the system regulates its own behavior. The differential equations in this case become

dx/dt = f(x, α(x)) =: F(x).   (5.2)

To understand the dynamic behavior of this system, we need to analyze the features of the solutions of equation (5.2). While in some simple situations we can write down the solutions in analytical form, often we must rely on computational approaches. We begin by describing the class of solutions for this problem.

We say that x(t) is a solution of the differential equation (5.2) on the time interval t0 ∈ R to tf ∈ R if

dx(t)/dt = F(x(t)) for all t0 < t < tf.
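Such solutions can also be approximated numerically with a fixed-step integrator. The sketch below is a minimal classical fourth-order Runge-Kutta routine; the scalar system dx/dt = -x is an illustrative choice (not from the text), with exact solution x(0)e^(-t) against which the result can be checked.

```python
import math

def rk4_solve(F, x0, t0, tf, n):
    """Approximate the solution of dx/dt = F(x) on [t0, tf] with n RK4 steps.

    Returns the states x(t0), x(t0 + h), ..., x(tf) for h = (tf - t0)/n.
    The state is a tuple of floats, so the same routine works for vector fields.
    """
    h = (tf - t0) / n
    x = tuple(x0)
    traj = [x]
    for _ in range(n):
        k1 = F(x)
        k2 = F(tuple(xi + 0.5 * h * ki for xi, ki in zip(x, k1)))
        k3 = F(tuple(xi + 0.5 * h * ki for xi, ki in zip(x, k2)))
        k4 = F(tuple(xi + h * ki for xi, ki in zip(x, k3)))
        x = tuple(xi + h / 6.0 * (a + 2 * b + 2 * c + d)
                  for xi, a, b, c, d in zip(x, k1, k2, k3, k4))
        traj.append(x)
    return traj

# Illustrative scalar system dx/dt = -x with x(0) = 1, whose solution is e^{-t}.
traj = rk4_solve(lambda x: (-x[0],), (1.0,), 0.0, 1.0, 1000)
x_end = traj[-1][0]
```

With 1000 steps the endpoint agrees with e^(-1) to well below 1e-9, which is typical of the fourth-order accuracy of this scheme.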

A given differential equation may have many solutions. We will most often be interested in the initial value problem, where x(t) is prescribed at a given time t0 ∈ R and we wish to find a solution valid for all future time t > t0. We say that x(t) is a solution of the differential equation (5.2) with initial value x0 ∈ R^n at t0 ∈ R if

x(t0) = x0 and dx(t)/dt = F(x(t)) for all t0 < t < tf.

For most differential equations we will encounter, there is a unique solution that is defined for t0 < t < tf. The solution may be defined for all time t > t0, in which case we take tf = ∞. Because we will primarily be interested in solutions of the initial value problem for differential equations, we will usually refer to this simply as the solution of a differential equation. We will typically assume that t0 is equal to 0. In the case when F is independent of time (as in equation (5.2)), we can do so without loss of generality by choosing a new independent (time) variable, τ = t - t0 (Exercise 5.1).

Example 5.1 Damped oscillator
Consider a damped linear oscillator with dynamics of the form

q̈ + 2ζω0 q̇ + ω0² q = 0,

where q is the displacement of the oscillator from its rest position. These dynamics are equivalent to those of a spring-mass system, as shown in Exercise 3.6. We assume that ζ < 1, corresponding to a lightly damped system (the reason for this particular choice will become clear later). We can rewrite this in state space form by setting x1 = q and x2 = q̇/ω0, giving

dx1/dt = ω0 x2,   dx2/dt = -ω0 x1 - 2ζω0 x2.

In vector form, the right-hand side can be written as

F(x) = ( ω0 x2, -ω0 x1 - 2ζω0 x2 ).

The solution to the initial value problem can be written in a number of different ways and will be explored in more detail in Chapter 6. Here we simply assert that the solution can be written as

x1(t) = e^(-ζω0 t) ( x10 cos ωd t + (1/ωd)(ω0 ζ x10 + ω0 x20) sin ωd t ),
x2(t) = e^(-ζω0 t) ( x20 cos ωd t - (1/ωd)(ω0 x10 + ω0 ζ x20) sin ωd t ),

where x0 = (x10, x20) is the initial condition and ωd = ω0 √(1 - ζ²). This solution can be verified by substituting it into the differential equation.
We see that the solution is explicitly dependent on the initial condition, and it can be shown that this solution is unique. A plot of the initial condition response is shown in Figure 5.1.
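The asserted solution can also be checked numerically: the sketch below integrates the state space equations with a fixed-step RK4 scheme and compares the result with the formula above. The values ω0 = 1, ζ = 0.2, and x0 = (1, 0) are illustrative choices, not taken from the text.

```python
import math

w0, z = 1.0, 0.2          # illustrative omega_0 and zeta (zeta < 1)
wd = w0 * math.sqrt(1 - z * z)
x10, x20 = 1.0, 0.0       # illustrative initial condition

def analytic(t):
    """Closed-form solution quoted in Example 5.1."""
    e = math.exp(-z * w0 * t)
    x1 = e * (x10 * math.cos(wd * t)
              + (w0 * z * x10 + w0 * x20) / wd * math.sin(wd * t))
    x2 = e * (x20 * math.cos(wd * t)
              - (w0 * x10 + w0 * z * x20) / wd * math.sin(wd * t))
    return x1, x2

def F(x):
    """State space form: dx1/dt = w0*x2, dx2/dt = -w0*x1 - 2*z*w0*x2."""
    return (w0 * x[1], -w0 * x[0] - 2 * z * w0 * x[1])

# Fixed-step RK4 integration to t = 5.
x, h, n = (x10, x20), 0.001, 5000
for _ in range(n):
    k1 = F(x)
    k2 = F((x[0] + 0.5*h*k1[0], x[1] + 0.5*h*k1[1]))
    k3 = F((x[0] + 0.5*h*k2[0], x[1] + 0.5*h*k2[1]))
    k4 = F((x[0] + h*k3[0], x[1] + h*k3[1]))
    x = (x[0] + h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
         x[1] + h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))

err = max(abs(x[0] - analytic(5.0)[0]), abs(x[1] - analytic(5.0)[1]))
```

The numerical and analytic trajectories agree to many decimal places, which is a useful sanity check on both the formula and the state space conversion.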

Figure 5.1: Response of the damped oscillator to the initial condition x0 = (1, 0). The solution is unique for the given initial conditions and consists of an oscillatory solution for each state, with an exponentially decaying magnitude.

We note that this form of the solution holds only for 0 < ζ < 1, corresponding to an underdamped oscillator.

Without imposing some mathematical conditions on the function F, the differential equation (5.2) may not have a solution for all t, and there is no guarantee that the solution is unique. We illustrate these possibilities with two examples.

Example 5.2 Finite escape time
Let x ∈ R and consider the differential equation

dx/dt = x²   (5.3)

with the initial condition x(0) = 1. By differentiation we can verify that the function

x(t) = 1/(1 - t)

satisfies the differential equation and that it also satisfies the initial condition. A graph of the solution is given in Figure 5.2a; notice that the solution goes to infinity as t goes to 1. We say that this system has finite escape time. Thus the solution exists only in the time interval 0 ≤ t < 1.

Example 5.3 Nonunique solution
Let x ∈ R and consider the differential equation

dx/dt = 2√x   (5.4)

with initial condition x(0) = 0. We can show that the function

x(t) = 0 if 0 ≤ t ≤ a,   x(t) = (t - a)² if t > a

satisfies the differential equation for all values of the parameter a ≥ 0. To see this,

we differentiate x(t) to obtain

dx/dt = 0 if 0 ≤ t ≤ a,   dx/dt = 2(t - a) if t > a,

and hence ẋ = 2√x for all t ≥ 0 with x(0) = 0. A graph of some of the possible solutions is given in Figure 5.2b. Notice that in this case there are many solutions to the differential equation.

Figure 5.2: Existence and uniqueness of solutions. Equation (5.3) has a solution only for time t < 1, at which point the solution goes to infinity, as shown in (a). Equation (5.4) is an example of a system with many solutions, as shown in (b). For each value of a, we get a different solution starting from the same initial condition.

These simple examples show that there may be difficulties even with simple differential equations. Existence and uniqueness can be guaranteed by requiring that the function F have the property that for some fixed c ∈ R,

‖F(x) - F(y)‖ < c‖x - y‖ for all x, y,

which is called Lipschitz continuity. A sufficient condition for a function to be Lipschitz is that the Jacobian ∂F/∂x is uniformly bounded for all x. The difficulty in Example 5.2 is that the derivative ∂F/∂x becomes large for large x, and the difficulty in Example 5.3 is that the derivative ∂F/∂x is infinite at the origin.

5.2 Qualitative Analysis

The qualitative behavior of nonlinear systems is important in understanding some of the key concepts of stability in nonlinear dynamics. We will focus on an important class of systems known as planar dynamical systems. These systems have two state variables x ∈ R², allowing their solutions to be plotted in the (x1, x2) plane. The basic concepts that we describe hold more generally and can be used to understand dynamical behavior in higher dimensions.

Phase Portraits

A convenient way to understand the behavior of dynamical systems with state x ∈ R² is to plot the phase portrait of the system, briefly introduced in Chapter 3.

Figure 5.3: Phase portraits. (a) This plot shows the vector field for a planar dynamical system. Each arrow shows the velocity at that point in the state space. (b) This plot includes the solutions (sometimes called streamlines) from different initial conditions, with the vector field superimposed.

We start by introducing the concept of a vector field. For a system of ordinary differential equations

dx/dt = F(x),

the right-hand side of the differential equation defines at every x ∈ R^n a velocity F(x) ∈ R^n. This velocity tells us how x changes and can be represented as a vector F(x) ∈ R^n. For planar dynamical systems, each state corresponds to a point in the plane and F(x) is a vector representing the velocity of that state. We can plot these vectors on a grid of points in the plane and obtain a visual image of the dynamics of the system, as shown in Figure 5.3a. The points where the velocities are zero are of particular interest since they define stationary points of the flow: if we start at such a state, we stay at that state.

A phase portrait is constructed by plotting the flow of the vector field corresponding to the planar dynamical system. That is, for a set of initial conditions, we plot the solution of the differential equation in the plane R². This corresponds to following the arrows at each point in the phase plane and drawing the resulting trajectory. By plotting the solutions for several different initial conditions, we obtain a phase portrait, as shown in Figure 5.3b. Phase portraits are also sometimes called phase plane diagrams.

Phase portraits give insight into the dynamics of the system by showing the solutions plotted in the (two-dimensional) state space of the system. For example, we can see whether all trajectories tend to a single point as time increases or whether there are more complicated behaviors.
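Phase portrait data are straightforward to generate numerically: sample F on a grid for the arrows, then integrate from several initial conditions for the streamlines. The sketch below does this for a damped oscillator field F(x) = (x2, -x1 - x2), an illustrative parameterization consistent with the system of Figure 5.3; with matplotlib, the grid data would feed quiver and the trajectories would feed plot.

```python
import math

def F(x):
    """Illustrative planar damped oscillator: dx1/dt = x2, dx2/dt = -x1 - x2."""
    return (x[1], -x[0] - x[1])

# Vector field sampled on a 5x5 grid (pairs of point and velocity; with
# matplotlib these would be passed to quiver to draw the arrows).
grid = [(i * 0.5, j * 0.5) for i in range(-2, 3) for j in range(-2, 3)]
arrows = [(p, F(p)) for p in grid]

def flow(x, T=20.0, h=0.01):
    """Follow the arrows from x for time T using RK4; returns the trajectory."""
    traj = [x]
    for _ in range(int(T / h)):
        k1 = F(x)
        k2 = F((x[0] + 0.5*h*k1[0], x[1] + 0.5*h*k1[1]))
        k3 = F((x[0] + 0.5*h*k2[0], x[1] + 0.5*h*k2[1]))
        k4 = F((x[0] + h*k3[0], x[1] + h*k3[1]))
        x = (x[0] + h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
             x[1] + h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))
        traj.append(x)
    return traj

# Streamlines started from several points on the unit circle.
starts = [(math.cos(a), math.sin(a)) for a in (0.0, 1.5, 3.0, 4.5)]
finals = [flow(s)[-1] for s in starts]
```

For this field every streamline spirals into the stationary point at the origin, matching the qualitative picture described for Figure 5.3b.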
In the example in Figure 5.3, corresponding to a damped oscillator, the solutions approach the origin for all initial conditions. This is consistent with our simulation in Figure 5.1, but it allows us to infer the behavior for all initial conditions rather than a single initial condition. However, the phase portrait does not readily tell us the rate of change of the states (although

this can be inferred from the lengths of the arrows in the vector field plot).

Figure 5.4: Equilibrium points for an inverted pendulum. An inverted pendulum is a model for a class of balance systems in which we wish to keep a system upright, such as a rocket (a). Using a simplified model of an inverted pendulum (b), we can develop a phase portrait that shows the dynamics of the system (c). The system has multiple equilibrium points, marked by the solid dots along the x2 = 0 line.

Equilibrium Points and Limit Cycles

An equilibrium point of a dynamical system represents a stationary condition for the dynamics. We say that a state xe is an equilibrium point for a dynamical system

dx/dt = F(x)

if F(xe) = 0. If a dynamical system has an initial condition x(0) = xe, then it will stay at the equilibrium point: x(t) = xe for all t ≥ 0, where we have taken t0 = 0. Equilibrium points are one of the most important features of a dynamical system since they define the states corresponding to constant operating conditions. A dynamical system can have zero, one, or more equilibrium points.

Example 5.4 Inverted pendulum
Consider the inverted pendulum in Figure 5.4, which is a part of the balance system we considered in Chapter 3. The inverted pendulum is a simplified version of the problem of stabilizing a rocket: by applying forces at the base of the rocket, we seek to keep the rocket stabilized in the upright position. The state variables are the angle θ = x1 and the angular velocity dθ/dt = x2, the control variable is the acceleration u of the pivot, and the output is the angle θ. For simplicity we assume that mgl/Jt = 1 and l/Jt = 1, and set c = γ/Jt, so that the balance system dynamics from Chapter 3 become

dx/dt = ( x2, sin x1 - c x2 + u cos x1 ).   (5.5)

This is a nonlinear time-invariant system of second order. This same set of equations can also be obtained by appropriate normalization of the system dynamics, as illustrated in Chapter 3.
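Equilibrium points can also be found numerically by solving F(xe) = 0. The sketch below applies Newton's method to the pendulum dynamics (5.5) with the input set to u = 0; the damping value c = 0.1 is an illustrative choice, and the iteration recovers equilibria with x1 at multiples of π and x2 = 0.

```python
import math

c = 0.1  # illustrative damping coefficient

def F(x):
    """Pendulum dynamics (5.5) with u = 0."""
    return (x[1], math.sin(x[0]) - c * x[1])

def jacobian(x):
    """Jacobian dF/dx, used by Newton's method."""
    return ((0.0, 1.0), (math.cos(x[0]), -c))

def newton(x, steps=50):
    """Solve F(xe) = 0 by the Newton iteration x <- x - J(x)^{-1} F(x)."""
    for _ in range(steps):
        (a, b), (d, e) = jacobian(x)
        f1, f2 = F(x)
        det = a * e - b * d
        # Solve the 2x2 linear system J * dx = F(x) by Cramer's rule.
        dx1 = (e * f1 - b * f2) / det
        dx2 = (a * f2 - d * f1) / det
        x = (x[0] - dx1, x[1] - dx2)
    return x

# Different initial guesses converge to different equilibria xe = (n*pi, 0).
equilibria = [newton(g) for g in [(0.2, -0.3), (3.0, 0.5), (-2.9, 0.1)]]
```

Each guess converges to the nearest equilibrium, illustrating that this system has infinitely many equilibrium points, one for each multiple of π.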

We consider the open loop dynamics by setting u = 0. The equilibrium points for the system are given by

xe = (±nπ, 0),

where n = 0, 1, 2, .... The equilibrium points for n even correspond to the pendulum pointing up and those for n odd correspond to the pendulum hanging down. A phase portrait for this system (without corrective inputs) is shown in Figure 5.4c. The phase portrait shows -2π ≤ x1 ≤ 2π, so five of the equilibrium points are shown.

Figure 5.5: Phase portrait and time domain simulation for a system with a limit cycle. The phase portrait (a) shows the states of the solution plotted for different initial conditions. The limit cycle corresponds to a closed loop trajectory. The simulation (b) shows a single solution plotted as a function of time, with the limit cycle corresponding to a steady oscillation of fixed amplitude.

Nonlinear systems can exhibit rich behavior. Apart from equilibria they can also exhibit stationary periodic solutions. This is of great practical value in generating sinusoidally varying voltages in power systems or in generating periodic signals for animal locomotion. A simple example is given in Exercise 5.3, which shows the circuit diagram for an electronic oscillator. A normalized model of the oscillator is given by the equation

dx1/dt = x2 + x1 (1 - x1² - x2²),   dx2/dt = -x1 + x2 (1 - x1² - x2²).   (5.6)

The phase portrait and time domain solutions are given in Figure 5.5. The figure shows that the solutions in the phase plane converge to a circular trajectory. In the time domain this corresponds to an oscillatory solution. Mathematically the circle is called a limit cycle. More formally, we call a nonconstant solution xp(t) a limit cycle of period T > 0 if xp(t + T) = xp(t) for all t ∈ R and nearby trajectories converge to xp(·) as t → ∞ (stable limit cycle) or t → -∞ (unstable limit cycle).

There are methods for determining limit cycles for second-order systems, but for general higher-order systems we have to resort to computational analysis.
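Convergence to the limit cycle of equation (5.6) can be checked by simulation: integrating from one initial condition inside the unit circle and one outside, the radius should approach 1 in both cases. A minimal sketch:

```python
import math

def F(x):
    """Oscillator dynamics (5.6)."""
    s = 1.0 - x[0]**2 - x[1]**2
    return (x[1] + x[0] * s, -x[0] + x[1] * s)

def simulate(x, T=20.0, h=0.01):
    """Fixed-step RK4 integration of dx/dt = F(x); returns the final state."""
    for _ in range(int(T / h)):
        k1 = F(x)
        k2 = F((x[0] + 0.5*h*k1[0], x[1] + 0.5*h*k1[1]))
        k3 = F((x[0] + 0.5*h*k2[0], x[1] + 0.5*h*k2[1]))
        k4 = F((x[0] + h*k3[0], x[1] + h*k3[1]))
        x = (x[0] + h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
             x[1] + h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))
    return x

# One initial condition inside the unit circle, one outside.
inner = simulate((0.1, 0.0))
outer = simulate((1.5, 1.5))
r_inner = math.hypot(*inner)
r_outer = math.hypot(*outer)
```

Both trajectories end up on the unit circle, which is exactly the "search by simulation" strategy for finding stable limit cycles described in the text.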
Computer algorithms find limit cycles by searching for periodic trajectories in state

space that satisfy the dynamics of the system. In many situations, stable limit cycles can be found by simulating the system with different initial conditions.

Figure 5.6: Illustration of Lyapunov's concept of a stable solution. The solution represented by the solid line is stable if we can guarantee that all solutions remain within a tube of diameter ε by choosing initial conditions sufficiently close to the solution.

5.3 Stability

The stability of a solution determines whether or not solutions nearby the solution remain close, get closer, or move further away. We now give a formal definition of stability and describe tests for determining whether a solution is stable.

Definitions

Let x(t; a) be a solution to the differential equation with initial condition a. A solution is stable if other solutions that start near a stay close to x(t; a). Formally, we say that the solution x(t; a) is stable if for all ε > 0, there exists a δ > 0 such that

‖b - a‖ < δ  ⟹  ‖x(t; b) - x(t; a)‖ < ε for all t > 0.

Note that this definition does not imply that x(t; b) approaches x(t; a) as time increases but just that it stays nearby. Furthermore, the value of δ may depend on ε, so that if we wish to stay very close to the solution, we may have to start very, very close (δ ≪ ε). This type of stability, which is illustrated in Figure 5.6, is also called stability in the sense of Lyapunov. If a solution is stable in this sense and the trajectories do not converge, we say that the solution is neutrally stable.

An important special case is when the solution x(t; a) = xe is an equilibrium solution. In this case the condition for stability becomes

‖x(0) - xe‖ < δ  ⟹  ‖x(t) - xe‖ < ε for all t > 0.   (5.7)

Instead of saying that the solution is stable, we simply say that the equilibrium point is stable. An example of a neutrally stable equilibrium point is shown in Figure 5.7. From the phase portrait, we see that if we start near the equilibrium point, then we stay near the equilibrium point.
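This ε and δ definition can be illustrated numerically with the system shown in Figure 5.7 (ẋ1 = x2, ẋ2 = -x1): along its solutions the radius √(x1² + x2²) is constant, so any trajectory starting within distance δ of the equilibrium stays within ε = δ for all time, without converging. A sketch:

```python
import math

def F(x):
    """Neutrally stable system of Figure 5.7: dx1/dt = x2, dx2/dt = -x1."""
    return (x[1], -x[0])

def max_radius(x, T=20.0, h=0.01):
    """RK4 integration, tracking the largest distance from the equilibrium."""
    r_max = math.hypot(*x)
    for _ in range(int(T / h)):
        k1 = F(x)
        k2 = F((x[0] + 0.5*h*k1[0], x[1] + 0.5*h*k1[1]))
        k3 = F((x[0] + 0.5*h*k2[0], x[1] + 0.5*h*k2[1]))
        k4 = F((x[0] + h*k3[0], x[1] + h*k3[1]))
        x = (x[0] + h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
             x[1] + h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))
        r_max = max(r_max, math.hypot(*x))
    return x, r_max

delta = 0.1  # illustrative radius for the initial condition
x_end, r_max = max_radius((delta, 0.0))
```

The trajectory never leaves the circle of radius δ, demonstrating stability, but its distance from the origin does not decay either, so the equilibrium is neutrally stable rather than asymptotically stable.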
Furthermore, if we choose an initial condition from within the inner dashed circle (of radius δ), then all trajectories will remain inside the region defined by the outer dashed circle (of radius ε). Note,

however, that trajectories may not remain confined inside the individual circles (and hence we must choose δ < ε).

Figure 5.7: Phase portrait and time domain simulation for a system with a single stable equilibrium point: ẋ1 = x2, ẋ2 = -x1. The equilibrium point xe at the origin is stable since all trajectories that start near xe stay near xe.

A solution x(t; a) is asymptotically stable if it is stable in the sense of Lyapunov and, in addition, x(t; b) approaches x(t; a) as t approaches infinity for b sufficiently close to a. Hence, the solution x(t; a) is asymptotically stable if for every ε > 0 there exists a δ > 0 such that

‖b - a‖ < δ  ⟹  ‖x(t; b) - x(t; a)‖ < ε and lim(t→∞) ‖x(t; b) - x(t; a)‖ = 0.

This corresponds to the case where all nearby trajectories converge to the stable solution for large time. In the case of an equilibrium solution xe, we can write this condition as

‖x(0) - xe‖ < δ  ⟹  ‖x(t) - xe‖ < ε and lim(t→∞) x(t) = xe.   (5.8)

Figure 5.8 shows an example of an asymptotically stable equilibrium point. Indeed, as seen in the phase portrait, not only do all trajectories stay near the equilibrium point at the origin, but they also all approach the origin as t gets large (the directions of the arrows on the phase portrait show the direction in which the trajectories move).

A solution x(t; a) is unstable if it is not stable. More specifically, we say that a solution x(t; a) is unstable if given some ε > 0, there does not exist a δ > 0 such that if ‖b - a‖ < δ, then ‖x(t; b) - x(t; a)‖ < ε for all t. An example of an unstable equilibrium point xe is shown in Figure 5.9. Note that no matter how small we make δ, there is always an initial condition with ‖x(0) - xe‖ < δ that flows away from xe.

The definitions above are given without careful description of their domain of applicability. More formally, we define a solution to be locally stable (or locally asymptotically stable) if it is stable for all initial conditions x ∈ Br(a), where

Br(a) = {x : ‖x - a‖ < r}

is a ball of radius r around a and r > 0. A solution is globally asymptotically stable if it is asymptotically stable for all r > 0.

Figure 5.8: Phase portrait and time domain simulation for a system with a single asymptotically stable equilibrium point: ẋ1 = x2, ẋ2 = -x1 - x2. The equilibrium point xe at the origin is asymptotically stable since the trajectories converge to this point as t → ∞.

For planar dynamical systems, equilibrium points have been assigned names based on their stability type. An asymptotically stable equilibrium point is called a sink or sometimes an attractor. An unstable equilibrium point can be either a source, if all trajectories lead away from the equilibrium point, or a saddle, if some trajectories lead to the equilibrium point and others move away (this is the situation pictured in Figure 5.9). Finally, an equilibrium point that is stable but not asymptotically stable (i.e., neutrally stable, such as the one in Figure 5.7) is called a center.

Example 5.5 Congestion control
The TCP protocol is used to adjust the rate of packet transmission on the Internet. Stability of this system is important to ensure smooth and efficient flow of information across the network. The model for congestion control in a network consisting of N identical computers connected to a single router, described in more detail in Section 4.4, is given by

dw/dt = c/b - ρc (1 + w²/2),   db/dt = N wc/b - c,

where w is the window size and b is the buffer size of the router. The equilibrium points are given by

b = Nw,  where  w(1 + w²/2) = 1/(ρN).

Since w(1 + w²/2) is monotone, there is only one equilibrium point. Phase portraits are shown in Figure 5.10 for two different sets of parameter values. In each case we see that the system converges to an equilibrium point in which the buffer is below its full capacity. The equilibrium size of the buffer represents a

balance between the transmission rates for the sources and the capacity of the link. We see from the phase portraits that the equilibrium points are asymptotically stable since all initial conditions result in trajectories that converge to these points.

Figure 5.9: Phase portrait and time domain simulation for a system with a single unstable equilibrium point. The equilibrium point xe at the origin is unstable since not all trajectories that start near xe stay near xe. The sample trajectory on the right shows that the trajectories very quickly depart from zero.

Stability of Linear Systems

A linear dynamical system has the form

dx/dt = Ax,   x(0) = x0,   (5.9)

where A ∈ R^(n×n) is a square matrix, corresponding to the dynamics matrix of a linear control system (3.6). For a linear system, the stability of the equilibrium at the origin can be determined from the eigenvalues of the matrix A:

λ(A) := {s ∈ C : det(sI - A) = 0}.

The polynomial det(sI - A) is the characteristic polynomial and the eigenvalues are its roots. We use the notation λj for the jth eigenvalue of A, so that λj ∈ λ(A). In general λ can be complex-valued, although if A is real-valued, then for any eigenvalue λ, its complex conjugate λ* will also be an eigenvalue. The origin is always an equilibrium for a linear system. Since the stability of a linear system depends only on the matrix A, we find that stability is a property of the system. For a linear system we can therefore talk about the stability of the system rather than the stability of a particular solution or equilibrium point.

The easiest class of linear systems to analyze are those whose system matrices

are in diagonal form.

Figure 5.10: Phase portraits for a congestion control protocol running with N = 6 identical source computers. The equilibrium values correspond to a fixed window at the source, which results in a steady-state buffer size and corresponding transmission rate. A faster link (b) uses a smaller buffer size since it can handle packets at a higher rate.

In this case, the dynamics have the form

dx/dt = diag(λ1, λ2, ..., λn) x.   (5.10)

It is easy to see that the state trajectories for this system are independent of each other, so that we can write the solution in terms of n individual systems ẋj = λj xj. Each of these scalar solutions is of the form

xj(t) = e^(λj t) xj(0).

We see that the equilibrium point xe = 0 is stable if λj ≤ 0 and asymptotically stable if λj < 0.

Another simple case is when the dynamics are in the block diagonal form, with 2×2 diagonal blocks

[ σj  ωj ; -ωj  σj ],   j = 1, ..., m.

In this case, the eigenvalues can be shown to be λj = σj ± iωj. We once again can separate the state trajectories into independent solutions for each pair of states, and the solutions are of the form

x(2j-1)(t) = e^(σj t) ( x(2j-1)(0) cos ωj t + x(2j)(0) sin ωj t ),
x(2j)(t) = e^(σj t) ( -x(2j-1)(0) sin ωj t + x(2j)(0) cos ωj t ),

where j = 1, ..., m. We see that this system is asymptotically stable if and only if σj = Re λj < 0. It is also possible to combine real and complex eigenvalues in (block) diagonal form, resulting in a mixture of solutions of the two types.

Very few systems are in one of the diagonal forms above, but many systems can be transformed into these forms via coordinate transformations. One such class of systems is those for which the dynamics matrix has distinct (nonrepeating) eigenvalues. In this case there is a matrix T ∈ R^(n×n) such that the matrix TAT⁻¹ is in (block) diagonal form, with the block diagonal elements corresponding to the eigenvalues of the original matrix A (see Exercise 5.5). If we choose new coordinates z = Tx, then

dz/dt = T ẋ = TAx = TAT⁻¹ z

and the linear system has a (block) diagonal dynamics matrix. Furthermore, the eigenvalues of the transformed system are the same as those of the original system, since if v is an eigenvector of A, then w = Tv can be shown to be an eigenvector of TAT⁻¹. We can reason about the stability of the original system by noting that x(t) = T⁻¹ z(t), and so if the transformed system is stable (or asymptotically stable), then the original system has the same type of stability.

This analysis shows that for linear systems with distinct eigenvalues, the stability of the system can be completely determined by examining the real part of the eigenvalues of the dynamics matrix. For more general systems, we make use of the following theorem, proved in the next chapter:

Theorem 5.1 (Stability of a linear system). The system dx/dt = Ax is asymptotically stable if and only if all eigenvalues of A have a strictly negative real part and is unstable if any eigenvalue of A has a strictly positive real part.

Note that it is not enough to have eigenvalues with Re(λ) ≤ 0. As a simple example, consider the system q̈ = 0, which can be written in state space form as

d/dt (x1, x2) = [ 0 1 ; 0 0 ] (x1, x2).

The system has eigenvalues λ = 0, but the solutions are not bounded, since we have

x1(t) = x1,0 + x2,0 t,   x2(t) = x2,0.
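Theorem 5.1 and the caveat about Re(λ) ≤ 0 are easy to check numerically. Below, the eigenvalues of a stable matrix and of the double integrator are computed with NumPy; for the double integrator both eigenvalues are zero, yet the explicit solution x1(t) = x1(0) + x2(0)t grows without bound. The particular matrices are illustrative choices.

```python
import numpy as np

def asymptotically_stable(A):
    """Theorem 5.1: asymptotically stable iff every eigenvalue has Re < 0."""
    return bool(np.all(np.linalg.eigvals(A).real < 0))

A_osc = np.array([[0.0, 1.0], [-1.0, -1.0]])   # eigenvalues -0.5 +/- 0.866j
A_int = np.array([[0.0, 1.0], [0.0, 0.0]])     # double integrator q'' = 0

# Since A_int is nilpotent (A_int @ A_int = 0), exp(A_int * t) = I + A_int * t
# exactly, so x1(t) = x1(0) + x2(0) * t grows linearly even though both
# eigenvalues satisfy Re(lambda) <= 0.
t = 100.0
x0 = np.array([1.0, 1.0])
x_t = (np.eye(2) + A_int * t) @ x0
```

The eigenvalue test classifies the first system as asymptotically stable, while the double integrator fails the test and its solution is indeed unbounded, matching the discussion above.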
Example 5.6 Compartment model
Consider the two-compartment model for drug delivery described in Section 4.6. Using concentrations as state variables and denoting the state vector by x, the system dynamics are given by

dx/dt = [ -k0 - k1  k1 ; k2  -k2 ] x + [ b0 ; 0 ] u,   y = [ 0  1 ] x,

where the input u is the rate of injection of a drug into compartment 1 and the concentration of the drug in compartment 2 is the measured output y. We wish to design a feedback control law that maintains a constant output given by y = yd. We choose an output feedback control law of the form

u = -k(y - yd) + ud,

where ud is the rate of injection required to maintain the desired concentration y = yd, and k is a feedback gain that should be chosen such that the closed loop system is stable. Substituting the control law into the system, we obtain

dx/dt = [ -k0 - k1  k1 - b0 k ; k2  -k2 ] x + [ b0 ; 0 ] (ud + k yd) =: Ax + Bue,
y = [ 0  1 ] x =: Cx.

The equilibrium concentration xe ∈ R² can be obtained by solving the equation Axe + Bue = 0, and some simple algebra yields

x1,e = x2,e = yd,   ue = ud = (k0/b0) yd.

To analyze the system around the equilibrium point, we choose new coordinates z = x - xe. In these coordinates the equilibrium point is at the origin and the dynamics become

dz/dt = [ -k0 - k1  k1 - b0 k ; k2  -k2 ] z.

We can now apply the results of Theorem 5.1 to determine the stability of the system. The eigenvalues of the system are given by the roots of the characteristic polynomial

λ(s) = s² + (k0 + k1 + k2) s + (k0 k2 + b0 k2 k).

While the specific form of the roots is messy, it can be shown using the Routh-Hurwitz criterion that the roots have negative real part as long as the linear term and the constant term are both positive (see Chapter 2). Hence the system is stable for any k > 0.

Stability Analysis via Linear Approximation

An important feature of differential equations is that it is often possible to determine the local stability of an equilibrium point by approximating the system by a linear system. The following example illustrates the basic idea.

Example 5.7 Inverted pendulum
Consider again an inverted pendulum whose open loop dynamics are given by

dx/dt = ( x2, sin x1 - c x2 ),

where we have defined the state as x = (θ, θ̇). We first consider the equilibrium point at x = (0, 0), corresponding to the straight-up position. If we assume that the angle θ = x1 remains small, then we can replace sin x1 with x1 and cos x1 with 1, which gives the approximate system

dx/dt = ( x2, x1 - c x2 ).   (5.11)

Intuitively, this system should behave similarly to the more complicated model as long as x1 is small. In particular, it can be verified that the equilibrium point (0, 0) is unstable by plotting the phase portrait or computing the eigenvalues of the dynamics matrix in equation (5.11).

We can also approximate the system around the stable equilibrium point at x = (π, 0). In this case we have to expand sin x1 and cos x1 around x1 = π, according to the expansions

sin(π + θ) = -sin θ ≈ -θ,   cos(π + θ) = -cos θ ≈ -1.

If we define z1 = x1 - π and z2 = x2, the resulting approximate dynamics are given by

dz/dt = ( z2, -z1 - c z2 ).   (5.12)

Figure 5.11: Comparison between the phase portraits for the full nonlinear system (a) and its linear approximation around the origin (b). Notice that near the equilibrium point at the center of the plots, the phase portraits (and hence the dynamics) are almost identical.

It can be shown that the eigenvalues of the dynamics matrix have negative real parts, confirming that the downward equilibrium point is asymptotically stable. Figure 5.11 shows the phase portraits for the original system and the approximate system around the stable equilibrium point. Note that z = (0, 0) is the equilibrium point for this system and that it has the same basic form as the dynamics shown in Figure 5.8. The solutions for the original system and the approximate system are very similar, although not exactly the same. It can be shown that if a linear approximation has either asymptotically stable or unstable equilibrium points, then

the local stability of the original system must be the same (see Theorem 5.3 later in this chapter for the case of asymptotic stability).

More generally, suppose that we have a nonlinear system

dx/dt = F(x)

that has an equilibrium point at xe. Computing the Taylor series expansion of the vector field, we can write

dx/dt = F(xe) + (∂F/∂x)|xe (x - xe) + higher-order terms in (x - xe).

Since F(xe) = 0, we can approximate the system by choosing a new state variable z = x - xe and writing

dz/dt = Az,  where  A = (∂F/∂x)|xe.   (5.13)

We call the system (5.13) the linear approximation of the original nonlinear system or the linearization at xe. The following example illustrates the idea.

Example 5.8 Stability of a tanker
The normalized steering dynamics of a large ship can be modeled by the following equations:

dv/dt = a1 v + a2 r + α v|v| + b1 δ,   dr/dt = a3 v + a4 r + b2 δ,

where v is the component of the velocity vector that is orthogonal to the ship direction, r is the turning rate, and δ is the rudder angle. The variables are normalized by using the ship length l as the length unit and the time to travel one ship length as the time unit. The mass is normalized by ρl³/2, where ρ is the density of water. The normalized parameters are a1 = .6, a2 = .3, a3 = 5, a4 = , α = , b1 = ., and b2 = .8.

Setting the rudder angle δ = 0, we find that the equilibria are given by the equations

a1 v + a2 r + α v|v| = 0,   a3 v + a4 r = 0.

Elimination of the variable r in these equations gives

(a1 a4 - a2 a3) v + α a4 v|v| = 0.

There are three equilibrium solutions: ve = 0 and ve = ±.75. Linearizing the equation gives a second order system, with one dynamics matrix A1 for the equilibria ve = ±.75 and another, A0, for the equilibrium ve = 0. The linearized matrix A1, for the equilibria ve = ±.75, has the characteristic polynomial s² + 3.4s + .3, which has all roots in the left half-plane. The equilibria are

18 5.3. STABILITY 5-7 y 3 4 x (a) Trajectories Normalized turning rate r Rudder angle δ (b) Rudder curve Figure 5.: Equilibria for tanker. The trajectories are shown in (a) and the rudder characteristics in (b), where equilibria are marked by circles. thus stable. The matrix A,fortheequilibriumv e =, has the characteristic polynomial s +.3s.9, which has one root in the right half-plane. This equilibrium is unstable. Summarizing, we find that the equilibrium v e = r e =, which corresponds to the ship moving forward at constant speed, is unstable. The other equilibria v e =.75, r e =.875, and v e =.75, r e =.875 are stable (see Figure 5.b). These equilibria correspond to the tanker moving in a circle to theleftoftothe right. Hence if the rudder is set to δ = anheshipismovingforwarditwillthus eitherturn to therightorto theleft andapproachoneofthestable equilibria, which way it goes depends on the exact value of the initial condition. The trajectories are shown in Figure 5.a. The fact that a linear model can be used to study the behavior of anonlinear system near an equilibrium point is a powerful one. Indeed, we can take this even further and use a local linear approximation of a nonlinear system to design afeedbacklawthatkeepsthesystemnearitsequilibriumpoint (design of dynamics). Thus, feedback can be used to make sure that solutions remain close to the equilibrium point, which in turn ensures that the linear approximation used to stabilize it is valid. Stability of Limit Cycles Stability of nonequilibrium solutions can also be investigated as illustrated by the following example. Example 5.9 Stability of an oscillation Consider the system given by equation (5.6), = x + x ( x x ), = x + x ( x x ), whose phase portrait is shown in Figure 5.5. The differential equation has a peri-

odic solution

x_p(t) = [ x1(0) cos t + x2(0) sin t ]
         [ x2(0) cos t − x1(0) sin t ],    (5.4)

with x1²(0) + x2²(0) = 1. Notice that the nonlinear terms disappear on the periodic solution. To explore the stability of this solution, we introduce polar coordinates r and ϕ, which are related to the state variables x1 and x2 by

x1 = r cos ϕ,    x2 = r sin ϕ.

Differentiation gives the following linear equations for ṙ and ϕ̇:

ẋ1 = ṙ cos ϕ − r ϕ̇ sin ϕ,    ẋ2 = ṙ sin ϕ + r ϕ̇ cos ϕ.

Solving this linear system for ṙ and ϕ̇ gives, after some calculation,

dr/dt = r (1 − r²),    dϕ/dt = −1.    (5.5)

Notice that the equations are decoupled; hence we can analyze the stability of each state separately.

The equation for r has two equilibria: r = 0 and r = 1 (notice that r is assumed to be non-negative). The derivative dr/dt is positive for 0 < r < 1 and negative for r > 1. The variable r will therefore increase if 0 < r < 1 and decrease if r > 1, and we find that the equilibrium r = 0 is unstable and the equilibrium r = 1 is stable. Solutions with initial conditions different from 0 will thus all converge to the stable equilibrium r = 1 as time increases.

To study the stability of the full system, we must also investigate the behavior of the angle ϕ. The equation for ϕ can be integrated analytically to give ϕ(t) = −t + ϕ(0), which shows that the angle grows linearly with time and that solutions starting at different initial angles ϕ(0) remain separated by a constant amount. The solution r = 1, ϕ = −t is thus stable but not asymptotically stable. The unit circle in the phase plane is attracting, in the sense that all solutions with r(0) > 0 converge to the unit circle, as illustrated in the simulation in Figure 5.3. Notice that the solutions approach the circle rapidly, but that there is a constant phase shift between the solutions.

5.4 Lyapunov Stability Analysis

We now return to the study of the full nonlinear system

dx/dt = F(x),    x ∈ Rⁿ.    (5.6)

Having defined when a solution for a nonlinear dynamical system is stable, we can now ask how to prove that a given solution is stable, asymptotically stable, or unstable.
For physical systems, one can often argue about stability based on

dissipation of energy. The generalization of that technique to arbitrary dynamical systems is based on the use of Lyapunov functions in place of energy.

Figure 5.3: Solution curves for a stable limit cycle. The phase portrait on the left shows that the trajectory for the system rapidly converges to the stable limit cycle. The starting points for the trajectories are marked by circles in the phase portrait. The time domain plots on the right show that the states do not converge to the solution but instead maintain a constant phase error.

In this section we will describe techniques for determining the stability of solutions for a nonlinear system (5.6). We will generally be interested in stability of equilibrium points, and it will be convenient to assume that x_e = 0 is the equilibrium point of interest. (If not, rewrite the equations in a new set of coordinates z = x − x_e.)

Lyapunov Functions

A Lyapunov function V: Rⁿ → R is an energy-like function that can be used to determine the stability of a system. Roughly speaking, if we can find a nonnegative function that always decreases along trajectories of the system, we can conclude that the minimum of the function is a stable equilibrium point (locally).

To describe this more formally, we start with a few definitions. Let B_r = B_r(0) be a ball of radius r around the origin. We say that a continuous function V is positive definite on B_r if V(x) > 0 for all x ∈ B_r, x ≠ 0, and V(0) = 0. Similarly, a function is negative definite on B_r if V(x) < 0 for all x ∈ B_r, x ≠ 0, and V(0) = 0. We say that a function V is positive semidefinite if V(x) ≥ 0 for all x ∈ B_r, but V(x) can be zero at points other than just x = 0.

To illustrate the difference between a positive definite function and a positive semidefinite function, suppose that x ∈ R² and let

V1(x) = x1²,    V2(x) = x1² + x2².

Both V1 and V2 are always nonnegative. However, it is possible for V1 to be zero even if x ≠ 0.
Specifically, if we set x = (0, c), where c ∈ R is any nonzero number, then V1(x) = 0. On the other hand, V2(x) = 0 if and only if x = (0, 0). Thus V1 is positive semidefinite and V2 is positive definite.
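This distinction is easy to see numerically. The sketch below evaluates both functions at a point on the x2-axis (the sample point (0, 3) is an arbitrary illustrative choice, not from the text):

```python
import numpy as np

def V1(x):
    # V1(x) = x1^2: nonnegative, but vanishes on the whole x2-axis
    return x[0]**2

def V2(x):
    # V2(x) = x1^2 + x2^2: vanishes only at the origin
    return x[0]**2 + x[1]**2

x = np.array([0.0, 3.0])  # a nonzero point with x1 = 0
print(V1(x))  # 0.0, even though x != 0 -> only positive semidefinite
print(V2(x))  # 9.0, positive for every x != 0 -> positive definite
```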

Figure 5.4: Geometric illustration of Lyapunov's stability theorem. The closed contours represent the level sets of the Lyapunov function V(x) = c. If dx/dt points inward to these sets at all points along the contour, then the trajectories of the system will always cause V(x) to decrease along the trajectory.

We can now characterize the stability of an equilibrium point x_e = 0 for the system (5.6).

Theorem 5.2 (Lyapunov stability theorem). Let V be a function on Rⁿ and let V̇ represent the time derivative of V along trajectories of the system dynamics (5.6):

V̇ = (∂V/∂x)(dx/dt) = (∂V/∂x) F(x).

If there exists r > 0 such that V is positive definite and V̇ is negative semidefinite on B_r, then x = 0 is (locally) stable in the sense of Lyapunov. If V is positive definite and V̇ is negative definite on B_r, then x = 0 is (locally) asymptotically stable.

If V satisfies one of the conditions above, we say that V is a (local) Lyapunov function for the system. These results have a nice geometric interpretation. The level curves for a positive definite function are the curves defined by V(x) = c, c > 0, and for each c this gives a closed contour, as shown in Figure 5.4. The condition that V̇(x) is negative simply means that the vector field points toward lower-level contours. This means that the trajectories move to smaller and smaller values of V, and if V̇ is negative definite then x must approach 0.

Finding Lyapunov functions is not always easy. For example, consider the linear system

dx1/dt = x2,    dx2/dt = −x1 − αx2,    α > 0.

Since the system is linear, it can easily be verified that the eigenvalues of the corresponding dynamics matrix are given by λ = (−α ± √(α² − 4))/2. These eigenvalues always have negative real part for α > 0, and hence the system is asymptotically stable. It follows that x(t) → 0 as t → ∞, and so a natural Lyapunov

function candidate would be the squared magnitude of the state: V(x) = x1² + x2². Taking the time derivative of this function and evaluating along the trajectories of the system we find that V̇(x) = −2αx2². But this function is only negative semidefinite, not negative definite, as can be seen by evaluating V̇ at the point x = (1, 0), which gives V̇(x) = 0. Hence even though the system is asymptotically stable, a Lyapunov function that proves asymptotic stability is not as simple as the squared magnitude of the state. We now consider some additional examples.

Example 5.10 Scalar nonlinear system
Consider the scalar nonlinear system

dx/dt = 2/(1 + x) − x.

This system has equilibrium points at x = 1 and x = −2. We consider the equilibrium point at x = 1 and rewrite the dynamics using z = x − 1:

dz/dt = 2/(2 + z) − z − 1,

which has an equilibrium point at z = 0. Now consider the candidate Lyapunov function V(z) = ½z², which is globally positive definite. The derivative of V along trajectories of the system is given by

V̇(z) = zż = 2z/(2 + z) − z² − z.

If we restrict our analysis to an interval B_r, where r < 2, then 2 + z > 0 and we can multiply through by 2 + z to obtain

2z − (z² + z)(2 + z) = −z³ − 3z² = −z²(z + 3) < 0,    z ∈ B_r, r < 2, z ≠ 0.

It follows that V̇(z) < 0 for all z ∈ B_r, z ≠ 0, and hence the equilibrium point x = 1 is locally asymptotically stable.

A slightly more complicated situation occurs if V̇ is negative semidefinite. In this case it is possible that V̇(x) = 0 when x ≠ 0, and hence x could stop decreasing in value. The following example illustrates this case.

Example 5.11 Hanging pendulum
A normalized model for a hanging pendulum is

dx1/dt = x2,    dx2/dt = −sin x1,

where x1 is the angle between the pendulum and the vertical, with positive x1 corresponding to counterclockwise rotation. The equation has an equilibrium x1 = x2 = 0, which corresponds to the pendulum hanging straight down. To explore the stability of this equilibrium we choose the total energy as a Lyapunov function:

V(x) = 1 − cos x1 + ½x2² ≈ ½x1² + ½x2².

The Taylor series approximation shows that the function is positive definite for small x. The time derivative of V(x) is

V̇ = ẋ1 sin x1 + ẋ2 x2 = x2 sin x1 − x2 sin x1 = 0.

Since this function is negative semidefinite, it follows from Lyapunov's theorem that the equilibrium is stable but not necessarily asymptotically stable. When perturbed, the pendulum actually moves in a trajectory that corresponds to constant energy.

As demonstrated already, Lyapunov functions are not always easy to find, and they are also not unique. In many cases energy functions can be used as a starting point, as was done in Example 5.11. It turns out that Lyapunov functions can always be found for any stable system (under certain conditions), and hence one knows that if a system is stable, a Lyapunov function exists (and vice versa). Recent results using sum-of-squares methods have provided systematic approaches for finding Lyapunov functions [PPP]. Sum-of-squares techniques can be applied to a broad variety of systems, including systems whose dynamics are described by polynomial equations, as well as hybrid systems, which can have different models for different regions of state space.

For a linear dynamical system of the form

dx/dt = Ax,

it is possible to construct Lyapunov functions in a systematic manner. To do so, we consider quadratic functions of the form

V(x) = xᵀPx,

where P ∈ Rⁿˣⁿ is a symmetric matrix (P = Pᵀ). The condition that V be positive definite on B_r for some r > 0 is equivalent to the condition that P be a positive definite matrix:

xᵀPx > 0 for all x ≠ 0,

which we write as P ≻ 0.
Since P is symmetric, its eigenvalues are all real, and it can be shown that P is positive definite if and only if all of its eigenvalues are positive. Given a candidate Lyapunov function V(x) = xᵀPx, we can now compute its derivative along flows of the system:

V̇ = (∂V/∂x)(dx/dt) = xᵀ(AᵀP + PA)x =: −xᵀQx.

The requirement that V̇ be negative definite on B_r (for asymptotic stability) becomes a condition that the matrix Q be positive definite. Thus, to find a Lyapunov function for a linear system it is sufficient to choose a Q ≻ 0 and solve the Lyapunov equation:

AᵀP + PA = −Q.    (5.7)

This is a linear equation in the entries of P, and hence it can be solved using linear algebra. It can be shown that the equation always has a solution if all of the eigenvalues of the matrix A are in the left half-plane. Moreover, the solution P is positive definite if Q is positive definite. It is thus always possible to find a quadratic Lyapunov function for a stable linear system. We will defer a proof of this until Chapter 6, where more tools for analysis of linear systems will be developed.

Knowing that we have a direct method to find Lyapunov functions for linear systems, we can now investigate the stability of nonlinear systems. Consider the system

dx/dt = F(x) =: Ax + F̃(x),    (5.8)

where F̃(0) = 0 and F̃(x) contains terms that are second order and higher in the elements of x. The function Ax is an approximation of F(x) near the origin, and we can determine the Lyapunov function for the linear approximation and investigate if it is also a Lyapunov function for the full nonlinear system. The following examples illustrate the approach.

Example 5.12 Spring–mass system
Consider a simple spring–mass system, whose state space dynamics are given by

dx1/dt = x2,    dx2/dt = −(k/m) x1 − (b/m) x2,    m, b, k > 0.

Note that this is equivalent to the linear example considered earlier if k = m and b/m = α. To find a Lyapunov function for the system, we choose Q = I, and equation (5.7) becomes

[ 0    −k/m ] [ p11  p12 ]   [ p11  p12 ] [ 0      1    ]   [ −1   0 ]
[ 1    −b/m ] [ p12  p22 ] + [ p12  p22 ] [ −k/m  −b/m ] = [  0  −1 ].

By evaluating each element of this matrix equation, we obtain a set of linear equations for the p_ij:

−(2k/m) p12 = −1,    p11 − (b/m) p12 − (k/m) p22 = 0,    2p12 − (2b/m) p22 = −1.

These equations can be solved for p11, p12, and p22 to obtain

P = [ (b² + k(k + m))/(2bk)    m/(2k)         ]
    [ m/(2k)                   m(k + m)/(2bk) ].
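This computation is easy to cross-check numerically. The sketch below solves the Lyapunov equation with scipy and compares the result against the closed-form P above (the values m = 1, b = 0.5, k = 2 are arbitrary illustrative choices; any positive values work):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

m, b, k = 1.0, 0.5, 2.0  # illustrative values, m, b, k > 0
A = np.array([[0.0, 1.0], [-k / m, -b / m]])

# solve_continuous_lyapunov(a, q) solves a @ X + X @ a.T = q, so passing
# a = A.T and q = -I solves the Lyapunov equation A.T @ P + P @ A = -I.
P = solve_continuous_lyapunov(A.T, -np.eye(2))

# Closed-form solution derived in the example
P_exact = np.array([
    [(b**2 + k * (k + m)) / (2 * b * k), m / (2 * k)],
    [m / (2 * k), m * (k + m) / (2 * b * k)],
])

print(P)
assert np.allclose(P, P_exact)                  # matches the formula
assert np.all(np.linalg.eigvalsh(P) > 0)        # P is positive definite
assert np.allclose(A.T @ P + P @ A, -np.eye(2)) # satisfies (5.7) with Q = I
```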

Figure 5.5: Stability of a genetic switch. The circuit diagram in (a) represents two proteins that are each repressing the production of the other. The inputs u1 and u2 interfere with this repression, allowing the circuit dynamics to be modified. The equilibrium points for this circuit can be determined by the intersection of the two curves shown in (b).

Finally, it follows that

V(x) = ((b² + k(k + m))/(2bk)) x1² + (m/k) x1 x2 + (m(k + m)/(2bk)) x2².

Notice that while it can be verified that this function is positive definite, its level sets are rotated ellipses.

Example 5.13 Genetic switch
Consider the dynamics of a set of repressors connected together in a cycle, as shown in Figure 5.5a. The normalized dynamics for this system were given in Exercise 3.9:

dz1/dτ = μ/(1 + z2ⁿ) − z1,    dz2/dτ = μ/(1 + z1ⁿ) − z2,    (5.9)

where z1 and z2 are scaled versions of the protein concentrations, n > 0 and μ > 0 are parameters that describe the interconnection between the genes, and we have set the external inputs u1 and u2 to zero.

The equilibrium points for the system are found by equating the time derivatives to zero. We define

f(u) = μ/(1 + uⁿ),    f′(u) = df/du = −μ n u^(n−1)/(1 + uⁿ)²,

so that our dynamics become

dz1/dτ = f(z2) − z1,    dz2/dτ = f(z1) − z2,

and the equilibrium points are defined as the solutions of the equations

z1 = f(z2),    z2 = f(z1).

If we plot the curves (z1, f(z1)) and (f(z2), z2) on a graph, then these equations

will have a solution when the curves intersect, as shown in Figure 5.5b. Because of the shape of the curves, it can be shown that there will always be three solutions: one at z1e = z2e, one with z1e < z2e, and one with z1e > z2e. If μ ≫ 1, then we can show that the solutions are given approximately by

z1e ≈ μ, z2e ≈ μ^(1−n);    z1e = z2e;    z1e ≈ μ^(1−n), z2e ≈ μ.    (5.)

To check the stability of the system, we write f(u) in terms of its Taylor series expansion about u_e:

f(u) = f(u_e) + f′(u_e)(u − u_e) + ½ f″(u_e)(u − u_e)² + higher-order terms,

where f′ represents the first derivative of the function and f″ the second. Using these approximations, the dynamics can then be written as

dw/dτ = [ −1         f′(z2e) ] w + F̃(w),
        [ f′(z1e)    −1      ]

where w = z − z_e is the shifted state and F̃(w) represents quadratic and higher-order terms.

We now use equation (5.7) to search for a Lyapunov function. Choosing Q = I and letting P ∈ R²ˣ² have elements p_ij, we search for a solution of the equation

[ −1   f1 ] [ p11  p12 ]   [ p11  p12 ] [ −1   f2 ]   [ −1   0 ]
[ f2   −1 ] [ p12  p22 ] + [ p12  p22 ] [ f1   −1 ] = [  0  −1 ],

where f1 = f′(z1e) and f2 = f′(z2e). Note that we have set p21 = p12 to force P to be symmetric. Multiplying out the matrices, we obtain

[ −2p11 + 2f1 p12              f2 p11 − 2p12 + f1 p22 ]   [ −1   0 ]
[ f2 p11 − 2p12 + f1 p22       2f2 p12 − 2p22         ] = [  0  −1 ],

which is a set of linear equations for the unknowns p_ij. We can solve these linear equations to obtain

p11 = (f1² − f1 f2 + 2)/(4(1 − f1 f2)),    p12 = (f1 + f2)/(4(1 − f1 f2)),    p22 = (f2² − f1 f2 + 2)/(4(1 − f1 f2)).

To check that V(w) = wᵀPw is a Lyapunov function, we must verify that V(w) is a positive definite function, or equivalently that P ≻ 0. Since P is a symmetric matrix, it has two real eigenvalues λ1 and λ2 that satisfy

λ1 + λ2 = trace(P),    λ1 · λ2 = det(P).

In order for P to be positive definite, λ1 and λ2 must be positive, and we thus require that

trace(P) = ((f1 − f2)² + 4)/(4(1 − f1 f2)) > 0,    det(P) = ((f1 − f2)² + 4)/(16(1 − f1 f2)) > 0.

We see that trace(P) = 4 det(P) and that the numerator of both expressions is just (f1 − f2)² + 4 > 0, so it suffices to check the sign of 1 − f1 f2. In particular, for P to be positive definite, we require that

f′(z1e) f′(z2e) < 1.

We can now make use of the expressions for f′ defined earlier and evaluate at the approximate locations of the equilibrium points derived in equation (5.). For the equilibrium points where z1e ≠ z2e, we can show that

f′(z1e) f′(z2e) ≈ f′(μ) f′(μ^(1−n)) = (μ n μ^(n−1) · μ n μ^((1−n)(n−1)))/((1 + μⁿ)² (1 + μ^(n(1−n)))²) ≈ n² μ^(−n²+n).

Using n = 2 and the value of μ from Exercise 3.9, we see that f′(z1e) f′(z2e) ≪ 1 and hence P is positive definite. This implies that V is a positive definite function and hence a potential Lyapunov function for the system.

To determine if the equilibria z1e ≠ z2e are stable for the system (5.9), we now compute V̇ at the equilibrium points. By construction,

V̇ = wᵀ(PA + AᵀP)w + F̃ᵀ(w)Pw + wᵀP F̃(w) = −wᵀw + F̃ᵀ(w)Pw + wᵀP F̃(w).

Since all terms in F̃ are quadratic or higher order in w, it follows that F̃ᵀ(w)Pw and wᵀP F̃(w) consist of terms that are at least third order in w. Therefore if w is sufficiently close to zero, then the cubic and higher-order terms will be smaller than the quadratic terms. Hence, sufficiently close to w = 0, V̇ is negative definite, allowing us to conclude that these equilibrium points are both stable.

Figure 5.6 shows the phase portrait and time traces for a system with μ = 4, illustrating the bistable nature of the system. When the initial condition starts with a concentration of protein B greater than that of A, the solution converges to the equilibrium point at (approximately) (μ^(1−n), μ). If A is greater than B, then it goes to (μ, μ^(1−n)). The equilibrium point with z1e = z2e is unstable.

More generally, we can investigate what the linear approximation tells us about the stability of a solution to a nonlinear equation. The following theorem gives a partial answer for the case of stability of an equilibrium point.

Theorem 5.3.
Consider the dynamical system (5.8) with F̃(0) = 0 and F̃ such that lim ‖F̃(x)‖/‖x‖ = 0 as x → 0. If the real parts of all eigenvalues of A are strictly less than zero, then x_e = 0 is a locally asymptotically stable equilibrium point of equation (5.8).

This theorem implies that asymptotic stability of the linear approximation implies local asymptotic stability of the original nonlinear system. The theorem is very important for control because it implies that stabilization of a linear approximation of a nonlinear system results in a stable equilibrium for the nonlinear system. The proof of this theorem follows the technique used in Example 5.13. A formal proof can be found in [Kha].
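The bistability of the genetic switch analyzed above is easy to reproduce in simulation. The sketch below integrates equation (5.9) with the illustrative choices n = 2, μ = 4 and an arbitrary initial condition with z1 > z2 (these values are assumptions for the demonstration, not fixed by the text), and checks that the state settles at an equilibrium with unequal concentrations:

```python
import numpy as np
from scipy.integrate import solve_ivp

mu, n = 4.0, 2.0  # illustrative parameter values

def switch(t, z):
    # Normalized genetic switch dynamics, equation (5.9), with u1 = u2 = 0
    z1, z2 = z
    return [mu / (1 + z2**n) - z1, mu / (1 + z1**n) - z2]

# Start with protein A (z1) above protein B (z2)
sol = solve_ivp(switch, [0, 50], [2.0, 0.1], rtol=1e-8)
z1, z2 = sol.y[:, -1]
print(z1, z2)

# Converges to the stable branch with z1e near mu and z2e near mu^(1-n)
assert z1 > z2
assert abs(mu / (1 + z2**n) - z1) < 1e-4  # equilibrium condition z1 = f(z2)
assert abs(mu / (1 + z1**n) - z2) < 1e-4  # equilibrium condition z2 = f(z1)
```

Starting instead with z2 > z1 drives the state to the mirror-image equilibrium, which is the bistable behavior shown in Figure 5.6.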

Figure 5.6: Dynamics of a genetic switch. The phase portrait on the left shows that the switch has three equilibrium points, corresponding to protein A having a concentration greater than, equal to, or less than protein B. The equilibrium point with equal protein concentrations is unstable, but the other equilibrium points are stable. The simulation on the right shows the time response of the system starting from two different initial conditions. The initial portion of the curve corresponds to initial concentrations z(0) = (1, 5) and converges to the equilibrium where z1e < z2e. Partway through the simulation, the concentrations are perturbed (increasing z1 and decreasing z2), moving the state into the region of the state space whose solutions converge to the equilibrium point where z2e < z1e.

It can also be shown that if A has one or more eigenvalues with strictly positive real part, then x_e = 0 is an unstable equilibrium for the nonlinear system.

Krasovski–Lasalle Invariance Principle

For general nonlinear systems, especially those in symbolic form, it can be difficult to find a positive definite function V whose derivative is strictly negative definite. The Krasovski–Lasalle theorem enables us to conclude the asymptotic stability of an equilibrium point under less restrictive conditions, namely, in the case where V̇ is negative semidefinite, which is often easier to construct. It applies only to time-invariant or periodic systems, which are the cases we consider here. This section makes use of some additional concepts from dynamical systems; see Hahn [Hah67] or Khalil [Kha] for a more detailed description.

We will deal with the time-invariant case and begin by introducing a few more definitions. We denote the solution trajectories of the time-invariant system

dx/dt = F(x)    (5.)
as x(t; a), which is the solution of equation (5.) at time t starting from a at t = 0. The ω-limit set of a trajectory x(t; a) is the set of all points z ∈ Rⁿ such that there exists a strictly increasing sequence of times t_n such that x(t_n; a) → z as n → ∞. A set M ⊂ Rⁿ is said to be an invariant set if for all b ∈ M, we have x(t; b) ∈ M for all t ≥ 0. It can be proved that the ω-limit set of every trajectory is closed and

invariant. We may now state the Krasovski–Lasalle principle.

Figure 5.7: Stabilized inverted pendulum. A control law applies a force u at the bottom of the pendulum to stabilize the inverted position (a). The phase portrait (b) shows that the equilibrium point corresponding to the vertical position is stabilized. The shaded region indicates the set of initial conditions that converge to the origin. The ellipse corresponds to a level set of a Lyapunov function V(x) for which V(x) > 0 and V̇(x) < 0 for all points inside the ellipse. This can be used as an estimate of the region of attraction of the equilibrium point. The actual dynamics of the system evolve on a manifold (c).

Theorem 5.4 (Krasovski–Lasalle principle). Let V: Rⁿ → R be a locally positive definite function such that on the compact set Ω_r = {x ∈ Rⁿ : V(x) ≤ r} we have V̇(x) ≤ 0. Define

S = {x ∈ Ω_r : V̇(x) = 0}.

As t → ∞, the trajectory tends to the largest invariant set inside S; i.e., its ω-limit set is contained inside the largest invariant set in S. In particular, if S contains no invariant sets other than x = 0, then 0 is asymptotically stable.

Proofs are given in [Kra63] and [LaS6].

Lyapunov functions can often be used to design stabilizing controllers, as is illustrated by the following example, which also illustrates how the Krasovski–Lasalle principle can be applied.

Example 5.14 Inverted pendulum
Following the analysis in Chapter 3, an inverted pendulum can be described by the following normalized model:

dx1/dt = x2,    dx2/dt = sin x1 + u cos x1,    (5.)

where x1 is the angular deviation from the upright position and u is the (scaled) acceleration of the pivot, as shown in Figure 5.7a. The system has an equilibrium point at x1 = x2 = 0, which corresponds to the pendulum standing upright. This equilibrium point is unstable.
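The instability can be confirmed from the linearization at the upright equilibrium: with u = 0, approximating sin x1 ≈ x1 and cos x1 ≈ 1 in equation (5.) gives dx/dt = [[0, 1], [1, 0]]x, which has eigenvalues ±1. A minimal numerical check:

```python
import numpy as np

# Linearization of dx1/dt = x2, dx2/dt = sin(x1) + u*cos(x1)
# about the upright equilibrium x = 0, with u = 0
A = np.array([[0.0, 1.0],
              [1.0, 0.0]])

eigvals = np.linalg.eigvals(A)
print(sorted(eigvals.real))  # [-1.0, 1.0]

# One eigenvalue has positive real part, so the upright equilibrium
# of the open-loop system is unstable (a saddle point)
assert max(eigvals.real) > 0
assert min(eigvals.real) < 0
```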

To find a stabilizing controller we consider the following candidate for a Lyapunov function:

V(x) = (cos x1 − 1) + a (1 − cos² x1) + ½x2².

The Taylor series expansion shows that the function is positive definite near the origin if a > 0.5. The time derivative of V(x) is

V̇ = −ẋ1 sin x1 + 2a ẋ1 sin x1 cos x1 + ẋ2 x2 = x2 (u + 2a sin x1) cos x1.

Choosing the feedback law

u = −2a sin x1 − x2 cos x1

gives

V̇ = −x2² cos² x1.

It follows from Lyapunov's theorem that the equilibrium is (locally) stable. However, since the function is only negative semidefinite, we cannot conclude asymptotic stability using Theorem 5.2. However, note that V̇ = 0 implies that x2 = 0 or x1 = ±π/2 + nπ.

If we restrict our analysis to a small neighborhood of the origin Ω_r, r < π/2, then we can define

S = {(x1, x2) ∈ Ω_r : x2 = 0}

and we can compute the largest invariant set inside S. For a trajectory to remain in this set we must have x2 = 0 for all t and hence ẋ2(t) = 0 as well. Using the dynamics of the system (5.), we see that x2(t) = 0 and ẋ2(t) = 0 implies x1(t) = 0 as well. Hence the largest invariant set inside S is (x1, x2) = 0, and we can use the Krasovski–Lasalle principle to conclude that the origin is locally asymptotically stable. A phase portrait of the closed loop system is shown in Figure 5.7b.

In the analysis and the phase portrait, we have treated the angle of the pendulum θ = x1 as a real number. In fact, θ is an angle with θ = 2π equivalent to θ = 0. Hence the dynamics of the system actually evolve on a manifold (smooth surface) as shown in Figure 5.7c. Analysis of nonlinear dynamical systems on manifolds is more complicated, but uses many of the same basic ideas presented here.

5.5 Parametric and Nonlocal Behavior

Most of the tools that we have explored are focused on the local behavior of a fixed system near an equilibrium point. In this section we briefly introduce some concepts regarding the global behavior of nonlinear systems and the dependence of a system's behavior on parameters in the system model.
Regions of Attraction

To get some insight into the behavior of a nonlinear system we can start by finding the equilibrium points. We can then proceed to analyze the local behavior around

the equilibria. The behavior of a system near an equilibrium point is called the local behavior of the system.

The solutions of the system can be very different far away from an equilibrium point. This is seen, for example, in the stabilized pendulum in Example 5.14. The inverted equilibrium point is stable, with small oscillations that eventually converge to the origin. But far away from this equilibrium point there are trajectories that converge to other equilibrium points or even cases in which the pendulum swings around the top multiple times, giving very long oscillations that are topologically different from those near the origin.

To better understand the dynamics of the system, we can examine the set of all initial conditions that converge to a given asymptotically stable equilibrium point. This set is called the region of attraction for the equilibrium point. An example is shown by the shaded region of the phase portrait in Figure 5.7b. In general, computing regions of attraction is difficult. However, even if we cannot determine the region of attraction, we can often obtain patches around the stable equilibria that are attracting. This gives partial information about the behavior of the system.

One method for approximating the region of attraction is through the use of Lyapunov functions. Suppose that V is a local Lyapunov function for a system around an equilibrium point x0. Let Ω_r be the set on which V(x) has a value less than r,

Ω_r = {x ∈ Rⁿ : V(x) ≤ r},

and suppose that V̇(x) ≤ 0 for all x ∈ Ω_r, with equality only at the equilibrium point x0. Then Ω_r is inside the region of attraction of the equilibrium point. Since this approximation depends on the Lyapunov function and the choice of Lyapunov function is not unique, it can sometimes be a very conservative estimate.
It is sometimes the case that we can find a Lyapunov function V such that V is positive definite and V̇ is negative (semi-) definite for all x ∈ Rⁿ. In many instances it can then be shown that the region of attraction for the equilibrium point is the entire state space, and the equilibrium point is globally asymptotically stable. More detailed conditions for global stability can be found in [Kha].

Example 5.15 Stabilized inverted pendulum
Consider again the stabilized inverted pendulum from Example 5.14. The Lyapunov function for the system was

V(x) = (cos x1 − 1) + a (1 − cos² x1) + ½x2².

With a > 0.5, V̇ was negative semidefinite for all x, and nonzero when x2 ≠ 0 and x1 ≠ ±π/2. Hence any x such that |x1| < π/2 and V(x) > 0 will be inside the invariant set defined by the level curves of V(x). One of these level sets is shown in Figure 5.7b.
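The conclusion of Examples 5.14 and 5.15 can be checked in simulation. The sketch below applies the feedback law u = −2a sin x1 − x2 cos x1 with the illustrative choice a = 2 > 0.5 and an arbitrary initial condition with |x1| < π/2 (both are assumptions for the demonstration) and verifies convergence to the upright equilibrium:

```python
import numpy as np
from scipy.integrate import solve_ivp

a = 2.0  # any a > 0.5 makes V positive definite near the origin

def pendulum_cl(t, x):
    x1, x2 = x
    u = -2 * a * np.sin(x1) - x2 * np.cos(x1)  # stabilizing feedback law
    return [x2, np.sin(x1) + u * np.cos(x1)]   # closed-loop dynamics

# Initial condition with |x1| < pi/2, inside the estimated region of attraction
sol = solve_ivp(pendulum_cl, [0, 40], [1.0, 0.0], rtol=1e-8)
x_final = sol.y[:, -1]
print(x_final)

# The trajectory converges to the origin, as the Krasovski-Lasalle
# argument predicts
assert np.linalg.norm(x_final) < 1e-3
```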

Bifurcations

Another important property of nonlinear systems is how their behavior changes as the parameters governing the dynamics change. We can study this in the context of models by exploring how the location of equilibrium points, their stability, their regions of attraction, and other dynamic phenomena, such as limit cycles, vary based on the values of the parameters in the model. Consider a differential equation of the form

dx/dt = F(x, μ),    x ∈ Rⁿ, μ ∈ Rᵏ,    (5.3)

where x is the state and μ is a set of parameters that describe the family of equations. The equilibrium solutions satisfy

F(x, μ) = 0,

and as μ is varied, the corresponding solutions x_e(μ) can also vary. We say that the system (5.3) has a bifurcation at μ = μ* if the behavior of the system changes qualitatively at μ*. This can occur either because of a change in stability type or because of a change in the number of solutions at a given value of μ.

Example 5.16 Predator–prey
Consider the predator–prey system described in Example 3.4 and modeled as a continuous-time system as described in Section 4.7. The dynamics of the system are given by

dH/dt = rH (1 − H/k) − aHL/(c + H),    dL/dt = b·aHL/(c + H) − dL,    (5.4)

where H and L are the numbers of hares (prey) and lynxes (predators) and a, b, c, d, k, and r are parameters that model a given predator–prey system (described in more detail in Section 4.7). The system has an equilibrium point at H_e > 0 and L_e > 0 that can be found numerically.

To explore how the parameters of the model affect the behavior of the system, we choose to focus on two specific parameters of interest: a, the interaction coefficient between the populations, and c, a parameter affecting the prey consumption rate. Figure 5.8a is a numerically computed parametric stability diagram showing the regions in the chosen parameter space for which the equilibrium point is stable (leaving the other parameters at their nominal values).
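The interior equilibrium can in fact be written down directly: setting dL/dt = 0 (with L ≠ 0) gives H_e = dc/(ab − d), and substituting into dH/dt = 0 gives L_e. A minimal sketch, using the nominal parameter values quoted in the Figure 5.8 caption (treat these as illustrative; the transcription of the caption is unreliable):

```python
import numpy as np

# Nominal parameter values as quoted in the Figure 5.8 caption (illustrative)
a, b, c, d, k, r = 3.2, 0.6, 50.0, 0.56, 125.0, 1.6

def f(x):
    # Predator-prey vector field, equation (5.4)
    H, L = x
    dH = r * H * (1 - H / k) - a * H * L / (c + H)
    dL = b * a * H * L / (c + H) - d * L
    return np.array([dH, dL])

# dL/dt = 0 with L != 0 gives b*a*H/(c+H) = d, i.e. H_e = d*c/(a*b - d);
# dH/dt = 0 then gives L_e = r*(1 - H_e/k)*(c + H_e)/a.
He = d * c / (a * b - d)
Le = r * (1 - He / k) * (c + He) / a
print(He, Le)  # both positive

assert He > 0 and Le > 0
assert np.allclose(f([He, Le]), 0)  # verifies (He, Le) is an equilibrium
```

Note that a positive equilibrium requires ab > d, which is one reason the stability diagram only covers part of the (a, c) plane.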
We see from this figure that for certain combinations of a and c we get a stable equilibrium point, while at other values this equilibrium point is unstable. Figure 5.8b is a numerically computed bifurcation diagram for the system. In this plot, we choose one parameter to vary (a) and then plot the equilibrium value of one of the states (H) on the vertical axis. The remaining parameters are set to their nominal values. A solid line indicates that the equilibrium point is stable; a dashed line indicates that the equilibrium point is unstable. Note that the stability in the bifurcation diagram matches that in the parametric stability diagram for c = 50 (the nominal value) and a varying from 1.35 to 4. For the predator–prey

system, when the equilibrium point is unstable, the solution converges to a stable limit cycle. The amplitude of this limit cycle is shown by the dash-dotted line in Figure 5.8b.

Figure 5.8: Bifurcation analysis of the predator–prey system. (a) Parametric stability diagram showing the regions in parameter space for which the system is stable. (b) Bifurcation diagram showing the location and stability of the equilibrium point as a function of a. The solid line represents a stable equilibrium point, and the dashed line represents an unstable equilibrium point. The dash-dotted lines indicate the upper and lower bounds for the limit cycle at that parameter value (computed via simulation). The nominal values of the parameters in the model are a = 3.2, b = 0.6, c = 50, d = 0.56, k = 125, and r = 1.6.

A particular form of bifurcation that is very common when controlling linear systems is that the equilibrium remains fixed but the stability of the equilibrium changes as the parameters are varied. In such a case it is revealing to plot the eigenvalues of the system as a function of the parameters. Such plots are called root locus diagrams because they give the locus of the eigenvalues when parameters change. Bifurcations occur when parameter values are such that there are eigenvalues with zero real part. Computing environments such as LabVIEW, MATLAB, and Mathematica have tools for plotting root loci. A more detailed discussion of the root locus is given later in the text.

Example 5.17 Root locus diagram for a bicycle model
Consider the linear bicycle model given by equation (4.8) in Section 4.2. Introducing the state variables x1 = ϕ, x2 = δ, x3 = dϕ/dt, x4 = dδ/dt and setting the steering torque T = 0, the equations can be written as

dx/dt = [ 0                   I       ] x =: Ax,
        [ −M⁻¹(K0 + K2 v²)   −M⁻¹Cv  ]

where I is a 2×2 identity matrix and v is the velocity of the bicycle.
Figure 5.9a shows the real parts of the eigenvalues as a function of velocity. Figure 5.9b shows the dependence of the eigenvalues of A on the velocity v. The figures show that the bicycle is unstable for low velocities because two eigenvalues are in the right half-plane. As the velocity increases, these eigenvalues move into the left

half-plane, indicating that the bicycle becomes self-stabilizing. As the velocity is increased further, there is an eigenvalue close to the origin that moves into the right half-plane, making the bicycle unstable again. However, this eigenvalue is small and so it can easily be stabilized by a rider. Figure 5.9a shows that the bicycle is self-stabilizing for an intermediate range of velocities, beginning at about 6 m/s.

Figure 5.9: Stability plots for a bicycle moving at constant velocity. The plot in (a) shows the real part of the system eigenvalues as a function of the bicycle velocity v. The system is stable when all eigenvalues have negative real part (shaded region). The plot in (b) shows the locus of eigenvalues on the complex plane as the velocity v is varied and gives a different view of the stability of the system. This type of plot is called a root locus diagram.

Parametric stability diagrams and bifurcation diagrams can provide valuable insights into the dynamics of a nonlinear system. It is usually necessary to carefully choose the parameters that one plots, including combining the natural parameters of the system to eliminate extra parameters when possible. Computer programs such as AUTO, LOCBIF, and XPPAUT provide numerical algorithms for producing stability and bifurcation diagrams.

Design of Nonlinear Dynamics Using Feedback

In most of the text we will rely on linear approximations to design feedback laws that stabilize an equilibrium point and provide a desired level of performance. However, for some classes of problems the feedback controller must be nonlinear to accomplish its function. By making use of Lyapunov functions we can often design a nonlinear control law that provides stable behavior, as we saw in Example 5.14.
One way to systematically design a nonlinear controller is to begin with a candidate Lyapunov function V(x) and a control system ẋ = f(x, u). We say that V(x) is a control Lyapunov function if for every x there exists a u such that

    V̇(x) = ∂V/∂x · f(x, u) < 0.

In this case, it may be possible to find a function α(x) such that u = α(x) stabilizes the system. The following example illustrates the approach.

Example 5.8 Noise cancellation
Noise cancellation is used in consumer electronics and in industrial systems to reduce the effects of noise and vibrations. The idea is to locally reduce the effect of noise by generating opposing signals. A pair of headphones with noise cancellation such as those shown in Figure 5.10a is a typical example. A schematic diagram of the system is shown in Figure 5.10b.

Figure 5.10: Headphones with noise cancellation. Noise is sensed by the exterior microphone (a) and sent to a filter in such a way that it cancels the noise that penetrates the headphone (b). The filter parameters a and b are adjusted by the controller. S represents the input signal to the headphones.

The system has two microphones, one outside the headphones that picks up exterior noise n and another inside the headphones that picks up the signal e, which is a combination of the desired signal S and the external noise that penetrates the headphone. The signal from the exterior microphone is filtered and sent to the headphones in such a way that it cancels the external noise that penetrates into the headphones. The parameters of the filter are adjusted by a feedback mechanism to make the noise signal in the internal microphone as small as possible. The feedback is inherently nonlinear because it acts by changing the parameters of the filter.

To analyze the system we assume for simplicity that the propagation of external noise into the headphones is modeled by the first-order dynamical system

    dz/dt = a₀z + b₀n,    (5.5)

where n is the external noise signal, z is the sound level inside the headphones, and the parameters a₀ < 0 and b₀ are not known. Assume that the filter is a dynamical system of the same type:

    dw/dt = aw + bn,

where the parameters a and b are adjustable.

We wish to find a controller that updates a and b so that they converge to the (unknown) parameters a₀ and b₀. If a = a₀ and b = b₀ we have e = S and the effect of the noise is eliminated. Assuming for simplicity that S = 0, introduce x₁ = e = z − w, x₂ = a₀ − a, and
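The text is cut off before the derivation of the update law, but a Lyapunov argument of the kind described above suggests gradient-like updates of the form da/dt = αwe and db/dt = αne for some gain α > 0. The simulation below is only an illustration of that idea: the gain, the noise signal, and all numerical values are assumptions chosen for the sketch, not values from the example.

```python
import numpy as np

# Illustrative simulation of adaptive noise cancellation with S = 0.
# Plant (parameters unknown to the controller): dz/dt = a0*z + b0*n, a0 < 0.
# Adjustable filter:                             dw/dt = a*w + b*n.
# Assumed update laws (one choice suggested by a Lyapunov argument):
#   da/dt = alpha*w*e,  db/dt = alpha*n*e,  with e = z - w.
a0, b0 = -2.0, 1.0          # "true" parameters, hidden from the filter
alpha, dt, T = 5.0, 1e-3, 50.0

z, w = 0.0, 0.0
a, b = -1.0, 0.0            # initial guesses for the filter parameters
for k in range(int(T / dt)):
    t = k * dt
    n = np.sin(3.0 * t) + np.sin(7.1 * t)  # rich (persistently exciting) noise
    e = z - w                               # internal-microphone signal
    z += dt * (a0 * z + b0 * n)             # forward-Euler integration
    w += dt * (a * w + b * n)
    a += dt * alpha * w * e
    b += dt * alpha * n * e

print(f"a = {a:.2f} (target {a0}), b = {b:.2f} (target {b0}), e = {e:.3f}")
```

With a sufficiently rich noise signal the parameters drift toward a₀ and b₀ and the cancellation error e shrinks; making this precise requires the Lyapunov function and the rest of the derivation that the example goes on to develop.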


More information

x 1. x n i + x 2 j (x 1, x 2, x 3 ) = x 1 j + x 3

x 1. x n i + x 2 j (x 1, x 2, x 3 ) = x 1 j + x 3 Version: 4/1/06. Note: These notes are mostly from my 5B course, with the addition of the part on components and projections. Look them over to make sure that we are on the same page as regards inner-products,

More information

1.1 Single Variable Calculus versus Multivariable Calculus Rectangular Coordinate Systems... 4

1.1 Single Variable Calculus versus Multivariable Calculus Rectangular Coordinate Systems... 4 MATH2202 Notebook 1 Fall 2015/2016 prepared by Professor Jenny Baglivo Contents 1 MATH2202 Notebook 1 3 1.1 Single Variable Calculus versus Multivariable Calculus................... 3 1.2 Rectangular Coordinate

More information

1 The pendulum equation

1 The pendulum equation Math 270 Honors ODE I Fall, 2008 Class notes # 5 A longer than usual homework assignment is at the end. The pendulum equation We now come to a particularly important example, the equation for an oscillating

More information

Springs: Part I Modeling the Action The Mass/Spring System

Springs: Part I Modeling the Action The Mass/Spring System 17 Springs: Part I Second-order differential equations arise in a number of applications We saw one involving a falling object at the beginning of this text (the falling frozen duck example in section

More information

State Regulator. Advanced Control. design of controllers using pole placement and LQ design rules

State Regulator. Advanced Control. design of controllers using pole placement and LQ design rules Advanced Control State Regulator Scope design of controllers using pole placement and LQ design rules Keywords pole placement, optimal control, LQ regulator, weighting matrixes Prerequisites Contact state

More information

Modeling and Analysis of Dynamic Systems

Modeling and Analysis of Dynamic Systems Modeling and Analysis of Dynamic Systems Dr. Guillaume Ducard Fall 2017 Institute for Dynamic Systems and Control ETH Zurich, Switzerland G. Ducard c 1 / 57 Outline 1 Lecture 13: Linear System - Stability

More information

Hybrid Systems - Lecture n. 3 Lyapunov stability

Hybrid Systems - Lecture n. 3 Lyapunov stability OUTLINE Focus: stability of equilibrium point Hybrid Systems - Lecture n. 3 Lyapunov stability Maria Prandini DEI - Politecnico di Milano E-mail: prandini@elet.polimi.it continuous systems decribed by

More information

Definition of Reachability We begin by disregarding the output measurements of the system and focusing on the evolution of the state, given by

Definition of Reachability We begin by disregarding the output measurements of the system and focusing on the evolution of the state, given by Chapter Six State Feedback Intuitively, the state may be regarded as a kind of information storage or memory or accumulation of past causes. We must, of course, demand that the set of internal states be

More information