Polynomial level-set methods for nonlinear dynamical systems analysis


Proceedings of the Allerton Conference on Communication, Control and Computing, pages 640–649, 28–30 September 2005.

Polynomial level-set methods for nonlinear dynamical systems analysis

Ta-Chung Wang (1,4), Sanjay Lall (2,4), Matthew West (3,4)

(1) Email: tachung@stanford.edu  (2) Email: lall@stanford.edu  (3) Email: westm@stanford.edu
(4) Department of Aeronautics and Astronautics 4035, Stanford University, Stanford, CA 94305-4035, U.S.A.
The first two authors were partially supported by the Stanford URI "Architectures for Secure and Robust Distributed Infrastructures", AFOSR DoD award number 49620-01-1-0365.

Submitted to the Allerton Conference on Communication, Control, and Computing 2005

Abstract

In this paper, we present a method for computing the domain of attraction for non-linear dynamical systems. We propose a level-set method where sets are represented as sublevel sets of polynomials. The problem of flowing these sets under the advection map of a dynamical system is converted to a semidefinite program, which we use to compute the coefficients of the polynomials. We further address the related problems of constraining the degree of the polynomials and the connectedness of the associated sets.

Keywords: Domain of attraction, semidefinite programming, level-set methods.

1 Introduction

For nonlinear control systems, one would often like to know the region of attraction of an equilibrium point. Often, this region is difficult both to find and to represent computationally. In this paper, we present an approach using polynomials to represent the domain of attraction, and semidefinite programming to perform the computation. The algorithm is iterative, and proceeds by advecting the sublevel set of the polynomial under the inverse flow map.

The usual mathematical tool used for analysis of the region of attraction is Lyapunov's method. This gives a sufficient condition for local stability, although it is often difficult to find a Lyapunov function that can be used as a certificate for the whole domain of attraction. Several prior approaches have used quadratic functions, for example [1, 7, 2]. In particular, the approach of [2] makes use of semidefinite programming to find a quadratic function whose sublevel set is a good inner approximation to the region of attraction. For systems in which the region is complicated, an ellipsoid may not provide a good approximation, and the above methods leave a large unexplored region within the domain of attraction.

With recent developments in algebra and sum-of-squares techniques, it is now possible to solve for a Lyapunov function with a more general polynomial form [10, 11]. Positive definiteness properties are replaced by sum-of-squares constraints, which can be handled efficiently using convex optimization. The SOSTOOLS [13] toolbox for MATLAB has been developed as a convenient computational tool for problems that use sum-of-squares techniques. This approach has also allowed finding a Lyapunov function within some specified semi-algebraic region [10, 3]. However, while this provides a method to certify a given inner approximation to the region of attraction, it does not immediately provide a way to find it.

In this paper, we make use of backward advection of a small initial neighborhood of the equilibrium in order to give an algorithm that in many cases converges to the true domain of attraction. The approach is similar in spirit to the level-set methods that have been used for computation of reachable sets [8, 9]. The key distinction is that most level-set methods represent the function on a mesh; we represent the function as a polynomial. The consequence of this is that the computational requirements may grow more slowly with dimension, if one can fix a priori the required degrees of the polynomials. By contrast, a mesh-based method has computational costs which grow exponentially with dimension.

In this paper we give numerical examples in two and three dimensions, computed using the SOSCODE [5] toolbox. We also provide numerical comparisons of our results with several other techniques.

1.1 Preliminaries

Throughout this paper, we will use R+ to represent the set of nonnegative real numbers. For z ∈ Rⁿ and ε > 0 we denote the open ball by B_ε(z), given by

B_ε(z) = { x ∈ Rⁿ : |x − z| < ε }

Suppose f : Rⁿ → Rⁿ. For S ⊆ Rⁿ, we say f is Lipschitz on S if there exists c > 0 such that

|f(x) − f(y)| ≤ c |x − y| for all x, y ∈ S

We say f is locally Lipschitz on S if for all z ∈ S there exists ε > 0 such that f is Lipschitz on B_ε(z). Note that if f is locally Lipschitz on S then it is Lipschitz on any compact subset of S.

For vector spaces X and Y, let C(X, Y) be the set of functions mapping X to Y. For a function g : Rⁿ → R, the derivative Dg(x) is a linear map Dg(x) : Rⁿ → R at each point x.

We use R[x] to represent the ring of polynomials in x with real coefficients. A polynomial f ∈ R[x] is called positive semidefinite (PSD) if f(x) ≥ 0 for all x ∈ Rⁿ. A polynomial f is called a sum-of-squares (SOS) if there exist polynomials g_1, ..., g_s ∈ R[x] such that f = g_1² + g_2² + ... + g_s². Clearly if f is SOS then f is PSD. It is also well known that the converse is not true. We use Σ to denote the set of all SOS polynomials in R[x].

Suppose g : Rⁿ → R is C¹. Define the 0-sublevel set of g to be C(g) ⊆ Rⁿ, given by

C(g) = { x ∈ Rⁿ : g(x) ≤ 0 }

Further define the variety of g to be V(g) ⊆ Rⁿ, given by

V(g) = { x ∈ Rⁿ : g(x) = 0 }

Then we have V(g) ⊇ ∂C(g), and both V(g) and C(g) are closed when g is a continuous function.

2 Set advection and the region of attraction

Suppose f : Rⁿ → Rⁿ. In this paper, we will consider the following autonomous system

ẋ(t) = f(x)    (1)

We make the following assumptions about f, to which we will refer later.

Assumption 1. The assumptions are as follows.
(i) f(0) = 0, i.e., the origin is an equilibrium point.
(ii) f is locally Lipschitz on Rⁿ.
(iii) For any z ∈ Rⁿ there exists a unique differentiable function x : R → Rⁿ such that x(0) = z and ẋ(t) = f(x(t)) for all t ∈ R.

Notice that we assume existence of solutions for both positive and negative time. We make the following standard definitions. For f satisfying Assumption 1, the flow map φ : Rⁿ × R → Rⁿ is defined to be the unique solution of

(d/dt) φ_t(z) = f(φ_t(z)) for all t ∈ R and z ∈ Rⁿ
φ_0(z) = z

We say the origin is stable if for all ε > 0 there exists δ > 0 such that |φ_t(z)| < ε for all z ∈ B_δ(0) and t ≥ 0.

We say the origin is asymptotically stable if the origin is stable and there exists δ > 0 such that

lim_{t→∞} φ_t(z) = 0 for all z ∈ B_δ(0)

A set S ⊆ Rⁿ is called positively invariant if φ_t(S) ⊆ S for all t ≥ 0, and it is called invariant if φ_t(S) ⊆ S for all t ∈ R.

Definition 2. Suppose f : Rⁿ → Rⁿ satisfies Assumption 1. Define the domain of attraction of f to be R ⊆ Rⁿ, given by

R = { x ∈ Rⁿ : lim_{t→∞} φ_t(x) = 0 }

Lemma 3. The domain of attraction is invariant; that is,

φ_t(R) ⊆ R for all t ∈ R

Proof. Suppose x ∈ φ_t(R). Then there exists y ∈ R such that x = φ_t(y). Hence

lim_{s→∞} φ_s(x) = lim_{s→∞} φ_{s+t}(y) = 0

and so x ∈ R as desired.

Lemma 4. For any t ∈ R, the map φ_t : Rⁿ → Rⁿ is continuous, invertible and has a continuous inverse; that is, it is a topological homeomorphism on Rⁿ.

Proof. This is standard; see for example Theorem 3.5 in [4].

Lemma 5. Suppose f satisfies Assumption 1, and 0 ∈ R. Suppose also S_0 ⊆ R and 0 ∈ S_0, and S_0 is a connected, closed, positively invariant set. Let h > 0 be a positive constant, and define the backwards advection of S_0 to be S_1, given by

S_1 = φ_{-h}(S_0)

Then S_0 ⊆ S_1 ⊆ R, and S_1 is also connected, closed and positively invariant. Further ∂S_1 = φ_{-h}(∂S_0).

Proof. Firstly, we will show S_0 ⊆ S_1. Since S_0 is positively invariant, we have φ_h(S_0) ⊆ S_0, and since φ_{-h} is a topological homeomorphism we have S_0 ⊆ φ_{-h}(S_0), that is, S_0 ⊆ S_1 as desired.

To show positive invariance of S_1, notice that for any t ≥ 0

φ_h(φ_t(S_1)) = φ_t(φ_h(S_1)) = φ_t(S_0) ⊆ S_0

since S_0 is positively invariant. Taking φ_{-h} of both sides, we have φ_t(S_1) ⊆ S_1 as desired.

To show that S_1 ⊆ R, suppose z ∈ S_1. Then

lim_{t→∞} φ_t(z) = lim_{t→∞} φ_{t−h}(φ_h(z)) = 0

since φ_h(z) ∈ S_0 ⊆ R. Finally, connectedness, closedness and preservation of the boundary follow because φ_{-h} is a topological homeomorphism on Rⁿ.
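The backward advection of Lemma 5 can also be visualized numerically by integrating the system in reverse time. The following is a minimal sketch (ours, not part of the paper's semidefinite-programming machinery): it approximates the flow map φ_t with scipy's ODE solver and maps sample boundary points of a small disc about the origin through φ_{-h}. The vector field used is the one from Example 1 in Section 5, and all helper names are illustrative only.

```python
# Minimal numerical sketch of phi_t and of backward advection phi_{-h}(S0)
# by direct integration; for illustration only.
import numpy as np
from scipy.integrate import solve_ivp

def f(t, w):
    # Vector field of Example 1 (Section 5).
    x, y = w
    q = 1.0 - x**2 - 0.5 * y**2
    return [-0.5 * y - x * q, x - 0.5 * y * q]

def flow(z, t):
    """Approximate phi_t(z); t may be negative (backward-time flow)."""
    sol = solve_ivp(f, (0.0, t), z, rtol=1e-9, atol=1e-12)
    return sol.y[:, -1]

# Backward advection of a small disc S0 about the origin: map sampled
# boundary points of S0 through phi_{-h} to obtain the boundary of S1.
h = 0.5
theta = np.linspace(0.0, 2 * np.pi, 200)
boundary_S0 = 0.3 * np.stack([np.cos(theta), np.sin(theta)], axis=1)
boundary_S1 = np.array([flow(z, -h) for z in boundary_S0])
print(boundary_S1[:3])
```

The paper replaces this pointwise, mesh-like computation by a polynomial representation of the set, which is what makes the approach scale to higher dimensions.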

2.1 Convergence of advection

Theorem 6. Suppose f satisfies Assumption 1, and h > 0. Also suppose S_0 ⊆ R is a closed, connected, positively invariant set, such that there exists ε > 0 with B_ε(0) ⊆ S_0. Define the sequence of sets S_0, S_1, S_2, ... by

S_{k+1} = φ_{-h}(S_k) for k = 0, 1, 2, ...

Then this sequence converges to R in the following sense:
(i) S_k ⊆ R for all k ∈ N.
(ii) S_k ⊆ S_{k+1} for all k ∈ N.
(iii) For every x ∈ R, there exists n such that x ∈ S_n.

Proof. Parts (i) and (ii) follow from Lemma 5. To show part (iii), notice that given x ∈ R, there exists T such that φ_t(x) ∈ B_ε(0) for all t ≥ T. Pick n to be any integer such that n > T/h. Then φ_{nh}(x) ∈ S_0 and hence x ∈ S_n.

3 Level-set representations

We represent a subset of Rⁿ as the sublevel set of a continuous function. Given h > 0, we define the time-h advection operator A_h : C(Rⁿ, R) → C(Rⁿ, R) by

p = A_h q   if   p(x) = q(φ_{-h}(x)) for all x ∈ Rⁿ

The map A_h is also called the Liouville operator associated with f. A very important property is that it is linear. In this paper our objective is to use convex optimization to construct the advection A_h q given q. To do this we need to specify a class of such functions with additional properties. We now turn to the first such property.

Definition 7. A set S ⊆ Rⁿ is called star-shaped if for all x ∈ S,

λx ∈ S for all λ ∈ [0, 1]

The set S is called strictly star-shaped if for all x ∈ S,

λx ∈ int(S) for all λ ∈ [0, 1)

Note that a star-shaped set S is connected. We now give a simple sufficient condition that ensures that a sublevel set is star-shaped. We make the following definition.

Definition 8. Suppose g : Rⁿ → R. We call g strictly star-shaped if g is C¹ and further satisfies

g(0) < 0 and Dg(x)x > 0 for all x ≠ 0

Lemma 9. Suppose g : Rⁿ → R is strictly star-shaped. Then C(g) is star-shaped.

Proof. Suppose x ∈ C(g). Let y : R+ → Rⁿ be the function

y(t) = e^{−t} x

We would like to show that y(t) ∈ C(g) for all t ≥ 0. We have

(d/dt) g(y(t)) = −Dg(y(t)) y(t) < 0

whenever y(t) ≠ 0. Also

g(y(t)) − g(y(0)) = ∫_0^t (d/ds) g(y(s)) ds ≤ 0

and since y(0) = x we have g(y(t)) ≤ 0 for all t ≥ 0, as desired.

For the purposes of this paper, we would like to construct a convex set of functions whose sublevel sets are connected. Although the convex set of all convex functions on Rⁿ would suffice, using it would unnecessarily restrict the class of describable sets to convex ones. One cannot simply use the set of all functions whose 0-sublevel set is connected, since this set of functions is not convex. We therefore choose the set of strictly star-shaped functions, which is a convex set. We will use strictly star-shaped polynomials to represent sets. This is significantly more general than previous approaches using quadratic functions.

Theorem 10. Suppose g is strictly star-shaped. Then C(g) is strictly star-shaped.

Proof. We know C(g) is star-shaped. We need to show that if y ∈ C(g) then there does not exist λ ∈ [0, 1) such that λy ∈ ∂C(g). Suppose for the sake of a contradiction that there does exist such a y and λ. We know that λ > 0, since g(0) < 0. Define the function h : [0, 1] → R by

h(θ) = g(θy) for θ ∈ [0, 1]

Then the derivative of h is

h'(θ) = (1/θ) Dg(θy)(θy) > 0

for all θ ∈ (0, 1). From the assumptions we know h(λ) = 0 and h(1) ≤ 0. Since h is C¹ on [λ, 1], by the mean-value theorem there must exist θ ∈ (λ, 1) such that h'(θ) = (h(1) − h(λ))/(1 − λ) ≤ 0, which is a contradiction.

Lemma 11. Suppose S ⊆ Rⁿ is bounded and 0 ∈ S. Suppose x ∈ Rⁿ and x ≠ 0, and define the half-line L(x) by

L(x) = { λx : λ ≥ 0 }

Then L(x) ∩ ∂S ≠ ∅.

Proof. One may construct a point in the intersection as follows. Let

θ_0 = sup { λ ∈ R : λx ∈ S }

Then θ_0 is finite since x ≠ 0 and S is bounded. Also θ_0 x ∈ L(x) ∩ ∂S as required.

Theorem 12. Suppose g is strictly star-shaped and C(g) is bounded. Then V(g) = ∂C(g).

Proof. We know that ∂C(g) ⊆ V(g), so all we need to show is that V(g) ⊆ ∂C(g). Suppose for the sake of a contradiction that this is not so, that is, there exists x ∈ V(g) such that x ∉ ∂C(g). Then x ≠ 0, since g is strictly star-shaped, and by Lemma 11 there exists q ∈ L(x) ∩ ∂C(g). Also q ≠ x since x ∉ ∂C(g). Now define the function h : [0, 1] → R by

h(θ) = g(θq) for all θ ∈ [0, 1]

There exists λ ∈ (0, 1) such that x = λq, since q ∈ L(x). As in the proof of Theorem 10, we know h'(θ) > 0 for all θ ∈ (λ, 1), but h(λ) = 0 and h(1) = 0, and so the mean-value theorem implies that there exists θ ∈ (λ, 1) such that h'(θ) = 0, a contradiction.

The following simple lemma relates the advection operator to the advection of sets.

Lemma 13. Suppose g_0, g_1 are functions mapping Rⁿ to R. If g_1 = A_h g_0 then C(g_1) = φ_h(C(g_0)).

Proof. The proof follows from the definitions above.
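Definition 8 is exactly the condition that the algorithm in Section 4 imposes as a constraint on the computed polynomials. As a quick illustration (ours; it is only a spot check at sample points, not the sum-of-squares certificate the paper actually uses), one can evaluate g(0) and Dg(x)x for a candidate polynomial symbolically and numerically:

```python
# Spot-check of strict star-shapedness (Definition 8): g(0) < 0 and
# Dg(x) x > 0 for x != 0.  Sampling is an illustration, not a proof.
import numpy as np
import sympy as sp

x, y = sp.symbols("x y")
g = 4 * x**2 + 4 * y**2 - 1          # initial polynomial used for Example 2 in Section 5

radial = sp.expand(sp.diff(g, x) * x + sp.diff(g, y) * y)   # Dg(x) x
print("Dg(x) x =", radial)           # here: 8*x**2 + 8*y**2, positive for x != 0

g_num = sp.lambdify((x, y), g, "numpy")
radial_num = sp.lambdify((x, y), radial, "numpy")
pts = np.random.default_rng(0).uniform(-2, 2, size=(1000, 2))
pts = pts[np.linalg.norm(pts, axis=1) > 1e-6]
assert g_num(0.0, 0.0) < 0
assert np.all(radial_num(pts[:, 0], pts[:, 1]) > 0)
```

In the paper, the positivity of Dg(x)x is imposed as a constraint of the semidefinite program, so that strict star-shapedness (and hence connectedness of the sublevel set, by Theorem 10) is guaranteed rather than sampled.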

4 Computation

We use the following approach [11, 6] to computation with semialgebraic sets. In particular, we would like to computationally find a polynomial p, satisfying additional linear constraints (such as p = A_h q), such that C(q) ⊆ C(p). The following result shows that this may be achieved using semidefinite programming.

Lemma 14. Given p, q ∈ R[x], suppose there exist s_1, s_2 ∈ Σ such that

s_1 − s_2 q + p = 0 for all x ∈ Rⁿ    (2)

Then C(q) ⊆ C(p). Further, given q, the set of coefficients of p, s_1, and s_2 satisfying (2) is the feasible set of a semidefinite program.

Proof. See, for example, [11] or [12].

4.1 Time-stepping

Since we are performing advection, we must use an approximation to the flow map φ_{-h}. Many such approximations are possible, and are provided by the theory of numerical integration. Here we present two simple approximations. The first-order Taylor approximation to p = A_h q is the map B_h : C(Rⁿ, R) → C(Rⁿ, R) given by

p = B_h q   if   p(x) = q(x) − h Dq(x) f(x)

An alternative approximation is C_h, given by

p = C_h q   if   p(x) = q(x − h f(x))

Both of these have the following properties: they are both linear maps, and if q and f are polynomials then so is p. In this paper, we concentrate on B_h, since typically B_h q is a polynomial whose degree is less than that of C_h q.

4.2 An algorithm for backward advection

Given a strictly star-shaped polynomial g_k such that C(g_k) ⊆ R, and C(g_k) is bounded and positively invariant, we compute an approximation g_{k+1} ≈ A_{-h} g_k as follows. Pick α > 0 and a positive integer d. Solve, using semidefinite programming, the following feasibility problem for g_{k+1} ∈ R[x] and s_1, s_2, s_3, s_4 ∈ Σ:

g_{k+1}(0) = −1
Dg_{k+1}(x)x > 0 for all x ≠ 0
s_3 − s_4 g_k + B_h g_{k+1} = 0
s_1 + s_2 g_k − B_{h+α} g_{k+1} = 0
deg(g_{k+1}) ≤ d

This algorithm is an implicit method; it is approximately solving B_h g_{k+1} = g_k. Alternatively, one could form an explicit version by constructing g_{k+1} = B_{-h} g_k. One parameter that must be chosen is d, the degree of the desired approximations; how large a d is necessary depends on the complexity of the system and its advected sets. The cost of computation grows polynomially with d, and so typically d is chosen as small as possible. Once the degree has been chosen, the map B_h is a matrix, operating on the coefficients of the polynomials. The other important parameter is α, which we think of as follows. The above algorithm finds a degree-d polynomial g_{k+1} such that g_{k+1} is strictly star-shaped, φ_{h+α}(C(g_{k+1})) ⊆ C(g_k), and φ_h(C(g_{k+1})) ⊇ C(g_k). Hence the parameter α may be thought of as a tolerance that allows for the constraint that g_{k+1} is required to have degree d or less.
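To make the time-stepping operator concrete, the short sketch below (ours) computes B_h q = q − h · Dq · f symbolically for the Van der Pol field of Example 2, applied to the quadratic initial polynomial used there. It also illustrates why B_h q remains a polynomial of modest degree, so that once d is fixed, B_h can be represented as a matrix acting on coefficient vectors.

```python
# Symbolic evaluation of the first-order time-stepping operator B_h of
# Section 4.1 for one concrete q and f; illustration only.
import sympy as sp

x, y, h = sp.symbols("x y h")
f = sp.Matrix([-y, x - y * (1 - x**2)])           # vector field of Example 2
q = 4 * x**2 + 4 * y**2 - 1                       # initial polynomial g_0

Dq = sp.Matrix([[sp.diff(q, x), sp.diff(q, y)]])  # row vector Dq(x)
Bh_q = sp.expand(q - h * (Dq * f)[0, 0])          # B_h q = q - h Dq f
print(Bh_q)
# -> -8*h*x**2*y**2 + 8*h*y**2 + 4*x**2 + 4*y**2 - 1   (degree 4 in (x, y))
```

The degree of B_h q is at most deg(q) + deg(f) − 1, so with a degree cap d the coefficient map is a fixed finite matrix, which is what the SOS feasibility problem above manipulates.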

4.3 Finding the initial semi-algebraic set

We find a local Lyapunov function in order to construct an initial star-shaped positively invariant set. The following result is standard.

Proposition 15. Suppose f : Rⁿ → Rⁿ satisfies Assumption 1, V : Rⁿ → R is a C¹ function, γ > 0, and the set

D = { x ∈ Rⁿ : V(x) ≤ γ }

is compact. Further suppose

V(x) > 0 for all x ≠ 0
V(0) = 0
DV(x)f(x) < 0 for all x ≠ 0, x ∈ D

Let g_0(x) = V(x) − γ. Then C(g_0) is positively invariant, and C(g_0) ⊆ R.

One simple approach to finding an initial sublevel set is to find a quadratic Lyapunov function for the linearization of the system, and use a small sublevel set of this quadratic as the polynomial g_0. An alternative method which often gives a much larger initial set is as follows. Choose a polynomial p ∈ R[x] such that C(p) ⊆ R. We then solve the following convex feasibility problem. Find V ∈ R[x] and s_1, s_2 ∈ Σ such that

DV(x)x > 0 for all x ≠ 0
V(x) > 0 for all x ≠ 0
V(0) = 0
DV(x)f(x) + s_1 − s_2 p = 0 for all x

Similar methods for finding local Lyapunov functions may be found in [3, 10], along with details on the construction of the associated semidefinite program. Here we have added the first constraint to ensure that if γ > 0 then V − γ is strictly star-shaped. Note that these constraints imply that all sublevel sets of V are compact. Given V, we then solve the convex program

maximize γ
subject to V − γ − s_1 − s_2 p − ε = 0 for all x
           s_1, s_2 ∈ Σ

where ε > 0 is small. The optimal γ satisfies C(V − γ) ⊆ C(p). Then V and γ satisfy the assumptions of Proposition 15, and so we may use g_0 = V − γ as the function defining our initial level set.

4.4 Stopping conditions

By using the proposed level-set method, we can successfully propagate the system states backward in time. However, we still need a stopping criterion for this iteration. We use the following method to measure the closeness of two semi-algebraic sets.

Theorem 16. Suppose g_1 and g_2 are strictly star-shaped functions, C(g_1) ⊆ C(g_2), and C(g_2) is bounded. Suppose x_1, x_2 ∈ Rⁿ are two points such that x_1 ∈ V(g_1) and x_2 ∈ V(g_2) and x_1 = α x_2 for some α ≥ 0. Define the function q : Rⁿ → R by

q(x) = g_2(λx) for all x ∈ Rⁿ

where λ ≥ 1. Then if C(q) ⊆ C(g_1), we have

|x_1 − x_2| ≤ (1 − λ⁻¹) |x_2|

Proof. We know q is strictly star-shaped since g_2 is, and therefore by Lemma 11 there is a unique y ∈ L(x_1) ∩ V(q). We can construct this explicitly; it is y = λ⁻¹ x_2. Since C(q) ⊆ C(g_1) ⊆ C(g_2), we have λ⁻¹ ≤ α. Therefore we have

|x_1 − x_2| = (1 − α) |x_2|

and hence

|x_1 − x_2| ≤ (1 − λ⁻¹) |x_2|

as desired.

Hence, to determine when the algorithm should terminate, one uses Lemma 14 to determine the smallest λ ≥ 1 such that C(q) ⊆ C(g_k), where q(x) = g_{k+1}(λx). Again, this may be evaluated using semidefinite programming. Note that if one has a known bound r on the size of |x_2|, for example given by an outer bound on the region of attraction R, then this stopping criterion gives an absolute bound of (1 − λ⁻¹) r. In practice, one picks a λ > 1 in advance, and checks this condition after each iteration. Figure 1 shows an example; here curve 1 is C(g_1), curve 2 is C(g_2) and curve 3 is C(q). The largest radial deviation between curves 1 and 2 is less than 0.3.

Figure 1: Example of stopping conditions.

5 Numerical examples

Example 1. Consider the following dynamical system

ẋ = −0.5y − x(1 − x² − 0.5y²)
ẏ = x − 0.5y(1 − x² − 0.5y²)

The origin is a locally stable equilibrium point. Here we start with an initial polynomial g_0 of the form x² + y² − ε_0, whose 0-sublevel set is a small disc about the origin. The results of the level-set method, using a fixed time step h, are shown in Figure 2. It can be seen that the successive iterates approach the true domain of attraction.

Figure 2: Successive iterates of the level-set algorithm.

Figure 3: The Van der Pol oscillator. The left figure shows the sequence of iterations. The right figure shows the final iterate, and some other approximations.

Example 2. Consider the Van der Pol oscillator

ẋ = −y
ẏ = x − y(1 − x²)
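Section 4.3 noted that a small sublevel set of a quadratic Lyapunov function for the linearization can serve as g_0. A minimal sketch of that route for this Van der Pol system (ours; the choice Q = I is an arbitrary assumption made only for illustration) solves the Lyapunov equation AᵀP + PA = −Q for the Jacobian A at the origin:

```python
# Quadratic Lyapunov function for the linearization at the origin,
# as one way to obtain an initial set g_0 = V - gamma (Section 4.3).
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, -1.0],
              [1.0, -1.0]])                 # Jacobian of Example 2 at the origin
Q = np.eye(2)
P = solve_continuous_lyapunov(A.T, -Q)      # solves A^T P + P A = -Q
assert np.all(np.linalg.eigvalsh(P) > 0)    # V(x) = x^T P x is positive definite
print(P)
# A small sublevel set {x : x^T P x <= gamma} then gives g_0 = x^T P x - gamma,
# with gamma chosen small enough that V-dot < 0 on the set for the nonlinear system.
```

This is the simpler of the two initialization methods in Section 4.3; the SOS-based alternative described there typically produces a much larger starting set.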

Again the system is locally stable around the origin. Here we use an initial sublevel set given by the quadratic polynomial g_0 = 4x² + 4y² − 1, which can be verified to be positively invariant. A fixed time step h is used. The even-numbered iterates g_0, g_2, g_4, ... are shown in Figure 3. Some of the iterates are given below, normalized to allow integer coefficients.

p_2 = + 5 y 88 y 4 + y 6 97 xy 56 xy 3 4 xy 5 + 3883 x + 36 x y 57 x y 4 + 66 x 3 y x 3 y 3 47 x 4 + x 4 y + 8 x 5 y + 6 x 6

p_4 = + 64 y 37 y 4 + 6 y 6 654 xy 7 xy 3 + 4 xy 5 + 36 x + 48 x y 43 x y 4 + 94 x 3 y 35 x 3 y 3 + 44 x 4 x 4 y + 9 x 5 y + 335 x 6

p_8 = + 5 y 56 y 4 + y 6 436 xy + 4 xy 3 + 4 xy 5 + 499 x + 5 x y + x y 4 + 3 x 3 y 7 x 3 y 3 687 x 4 x 4 y + x 5 y + 84 x 6

It can also be seen that the iterates gradually approach the exact boundary of the domain of attraction. After a number of iterations, the solution covers most of the stable region; a few iterations later, the stopping criterion on the absolute radial change has been met. The final result is shown in Figure 3. Curve 1 is the boundary of C(g_0), where g_0 is obtained through the SDP-based procedure in Section 4.3. For comparison, curve 2 is the result of [7] and curve 3 is the result of [2].

Example 3. The following dynamical system is Example S4 from [2]:

ẋ = −x + y + x³ + y⁵
ẏ = −x − y + xy³

After a number of iterations, the estimated boundary of the DOA has reached the pre-specified bound. The final iterate, and the true boundary, are shown in Figure 4.

Figure 4: The sublevel sets of g_0 and the final iterate.

Example 4. The proposed level-set method also works in higher dimensions. In the derivation of the algorithm, we do not make any assumptions about the system dimension. Consider the following 3D dynamical system.

ẋ = −x + x³ + x²y + 0.5xy² + xyz + xz²
ẏ = −y + x²y + xy² + 0.5y³ + y²z + yz²
ż = −z + x²z + xyz + 0.5y²z + yz² + z³

This system has a known domain of attraction given by C(p), where

p = x² + xy + 0.5y² + yz + z² − 1
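The comparison with the known domain of attraction in the next paragraph uses the same radial-deviation idea as the stopping criterion of Section 4.4. The paper evaluates it with semidefinite programming via Lemma 14; the sketch below (ours, shown with stand-in quadratics rather than the actual computed iterates) is only a cheap numerical analogue that compares boundary radii of two star-shaped sublevel sets along sampled ray directions.

```python
# Numerical analogue of the radial-deviation measure: along each ray from the
# origin, locate the boundary of each 0-sublevel set by root finding and
# compare the radii.  Assumes each g is strictly star-shaped with g(0) < 0.
import numpy as np
from scipy.optimize import brentq

def boundary_radius(g, direction, r_max=10.0):
    """Radius r with g(r * direction) = 0 along a ray from the origin."""
    return brentq(lambda r: g(r * direction), 1e-9, r_max)

g_inner = lambda z: z[0]**2 + 0.5 * z[1]**2 - 0.9      # stand-in inner set
g_outer = lambda z: z[0]**2 + 0.5 * z[1]**2 - 1.0      # stand-in outer set

angles = np.linspace(0.0, 2 * np.pi, 360, endpoint=False)
deviations = []
for a in angles:
    d = np.array([np.cos(a), np.sin(a)])
    deviations.append(boundary_radius(g_outer, d) - boundary_radius(g_inner, d))
print("max radial deviation:", max(deviations))
```

Unlike this sampling check, the SDP-based criterion of Theorem 16 certifies the deviation bound for all directions simultaneously.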

After a number of iterations with a fixed time step, the solution has reached the specified stopping condition (an absolute increment below the prescribed tolerance). After removing the insignificant coefficients, the final iterate is

g_3 = .4z .84z 4 + .37z 6 + .5yz .5yz 3 + .739yz 5 + .67y .4657y z + .383y z 4 .37y 3 z + .65y 3 z 3 .878y 4 + .689y 4 z + .5496y 5 z + .839y 6 .39xz + .394xz 3 .57xz 5 + .4xy .576xyz + .745xyz 4 .587xy z .7966xy z 3 .46xy 3 + .397xy 3 z + .4539xy 4 z + .649xy 5 + .9x .33x z + .6x z 4 .7976x yz + .774x yz 3 .75x y + .3683x y z + .667x y 3 z + .68x y 4 + .8x 3 z .69x 3 z 3 .x 3 y + .949x 3 yz .646x 3 y z + .883x 3 y 3 .8753x 4 + .3x 4 z + .69x 4 yz + .76x 4 y .7735x 5 z + .335x 5 y + .96x 6

We may compare this result with the known domain of attraction, using similar techniques to those for the stopping criterion. The maximum radial deviation from the exact bound is less than 0.4% of the longest radial distance of the true domain of attraction.

6 Conclusions

In this paper, we developed a level-set algorithm to advect subsets of the state space using semidefinite programming. The sets are represented as semialgebraic sets, i.e., sublevel sets of polynomials. We presented an implicit time-stepping algorithm, whose solution converges to the domain of attraction when it is star-shaped. This level-set algorithm works not only for two-dimensional systems but also for higher-dimensional systems. We also provide a stopping criterion, and several numerical examples.

References

[1] E. J. Davison and E. M. Kurak. A computational method for determining quadratic Lyapunov functions for non-linear systems. Automatica, 7:627–636, 1971.
[2] G. Chesi, A. Tesi, and A. Vicino. On optimal quadratic Lyapunov functions for polynomial systems. In 15th Int. Symp. on Mathematical Theory of Networks and Systems, 2002.
[3] Z. W. Jarvis-Wloszek. Lyapunov based analysis and controller synthesis for polynomial systems using sum-of-squares optimization. PhD thesis, University of California, Berkeley, 2003.
[4] H. Khalil. Nonlinear Systems. Prentice-Hall Inc., third edition, 2002.
[5] S. Lall, M. Peet, and T. Wang. SOSCODE: Sum of squares optimization toolbox for MATLAB, 2005. Available from http://www.stanford.edu/~lall/.
[6] J. B. Lasserre. Global optimization with polynomials and the problem of moments. SIAM Journal on Optimization, 11(3):796–817, 2001.
[7] A. Levin. An analytical method of estimating the domain of attraction for polynomial differential equations. IEEE Transactions on Automatic Control, 39(12):2471–2475, 1994.
[8] I. M. Mitchell and C. J. Tomlin. Level set methods for computation in hybrid systems. In Hybrid Systems: Computation and Control, LNCS 1790, pages 310–323. Springer Verlag, 2000.
[9] S. Osher and R. Fedkiw. Level Set Methods and Dynamic Implicit Surfaces. Springer Verlag, 2002.
[10] A. Papachristodoulou and S. Prajna. On the construction of Lyapunov functions using the sum of squares decomposition. In Proceedings of the 41st IEEE Conference on Decision and Control, 2002.
[11] P. Parrilo. Structured Semidefinite Programs and Semialgebraic Geometry Methods in Robustness and Optimization. PhD thesis, California Institute of Technology, 2000.
[12] P. A. Parrilo and S. Lall. Semidefinite programming relaxations and algebraic optimization in control. European Journal of Control, 9(2–3):307–321, April 2003.
[13] S. Prajna, A. Papachristodoulou, and P. A. Parrilo. Introducing SOSTOOLS: A general purpose sum of squares programming solver. In Proceedings of the 41st IEEE Conference on Decision and Control, December 2002.