Nonlinear Systems Theory


Nonlinear Systems Theory
Matthew M. Peet, Arizona State University
Lecture 2: Nonlinear Systems Theory

Overview

Our next goal is to extend LMIs and optimization to nonlinear systems analysis. Today we will discuss:
1. Nonlinear Systems Theory
1.1 Existence and Uniqueness
1.2 Contractions and Iterations
1.3 Gronwall-Bellman Inequality
2. Stability Theory
2.1 Lyapunov Stability
2.2 Lyapunov's Direct Method
2.3 A Collection of Converse Lyapunov Results

The purpose of this lecture is to show that the Lyapunov stability problem can be solved exactly via optimization of polynomials.

Ordinary Nonlinear Differential Equations
Computing Stability and Domain of Attraction

Consider a system of nonlinear ordinary differential equations:
ẋ(t) = f(x(t))

Problem (Stability): Given a specific polynomial f : Rⁿ → Rⁿ, find the largest X ⊂ Rⁿ such that for any x(0) ∈ X, lim_{t→∞} x(t) = 0.

Nonlinear Dynamical Systems
Long-Range Weather Forecasting and the Lorenz Attractor

A model of atmospheric convection analyzed by E.N. Lorenz, Journal of Atmospheric Sciences, 1963:
ẋ = σ(y − x)
ẏ = rx − y − xz
ż = xy − bz
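The chaotic behavior Lorenz observed is easy to reproduce numerically. A minimal sketch, assuming the classic chaotic parameter values σ = 10, r = 28, b = 8/3 (the slide leaves them symbolic):

```python
# Simulate the Lorenz system with a fixed-step RK4 integrator.
# Parameter values (sigma=10, r=28, b=8/3) are Lorenz's classic chaotic
# choice -- an assumption here, since the lecture leaves them symbolic.

def lorenz(state, sigma=10.0, r=28.0, b=8.0 / 3.0):
    x, y, z = state
    return (sigma * (y - x), r * x - y - x * z, x * y - b * z)

def rk4_step(f, state, h):
    k1 = f(state)
    k2 = f(tuple(s + 0.5 * h * k for s, k in zip(state, k1)))
    k3 = f(tuple(s + 0.5 * h * k for s, k in zip(state, k2)))
    k4 = f(tuple(s + h * k for s, k in zip(state, k3)))
    return tuple(s + (h / 6.0) * (a + 2 * b_ + 2 * c + d)
                 for s, a, b_, c, d in zip(state, k1, k2, k3, k4))

def simulate(x0, h=0.01, steps=5000):
    traj = [x0]
    for _ in range(steps):
        traj.append(rk4_step(lorenz, traj[-1], h))
    return traj

traj = simulate((1.0, 1.0, 1.0))
# Trajectories never settle, but they stay on a bounded attractor.
assert all(abs(c) < 100.0 for state in traj for c in state)
```

Trajectories started from nearby initial conditions separate rapidly on the attractor, which is exactly why long-range forecasting with this model is hopeless.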

Stability and Periodic Orbits
The Poincaré-Bendixson Theorem and the van der Pol Oscillator

An oscillating circuit model (the van der Pol oscillator in reverse):
ẋ = −y
ẏ = x + (x² − 1)y
The limit cycle bounds the domain of attraction of the origin.

Figure: The van der Pol oscillator in reverse

Theorem 1 (Poincaré-Bendixson). Nonempty compact invariant sets in R² always contain a limit cycle or a fixed point.

Stability of Ordinary Differential Equations

Consider ẋ(t) = f(x(t)) with x(0) ∈ Rⁿ.

Theorem 2 (Lyapunov Stability). Suppose there exist a continuous V and α, β, γ > 0 where
β‖x‖² ≤ V(x) ≤ α‖x‖²
∇V(x)ᵀ f(x) ≤ −γ‖x‖²
for all x ∈ X. Then any sub-level set of V in X is a Domain of Attraction.

Mathematical Preliminaries
The Cauchy Problem

The first question people ask is the Cauchy problem.

Autonomous Systems:
Definition 3. The system ẋ(t) = f(x(t)) is said to satisfy the Cauchy problem if there exists a continuous function x : [0, t_f] → Rⁿ such that ẋ is defined and ẋ(t) = f(x(t)) for all t ∈ [0, t_f].

If f is continuous, the solution must be continuously differentiable.

Controlled Systems: For a controlled system, we have ẋ(t) = f(x(t), u(t)). At this point u is undefined, so for the Cauchy problem we take ẋ(t) = f(t, x(t)).

In this lecture, we consider the autonomous system. Including t complicates the analysis; however, the results are almost all the same.

Ordinary Differential Equations
Existence of Solutions

There exist many systems for which no solution exists, or for which a solution only exists over a finite time interval.

Finite Escape Time: Even for something as simple as
ẋ(t) = x(t)², x(0) = x₀
the solution is
x(t) = x₀ / (1 − x₀ t), t < 1/x₀,
which clearly has escape time t_e = 1/x₀.

Figure: Simulation of ẋ = x² for several x(0). The previous problem is like the water-tank problem in backward time.
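The escape time can be checked numerically. A small sketch with my own illustrative choice x₀ = 1 (so t_e = 1):

```python
# Check the closed-form solution x(t) = x0/(1 - x0*t) of dx/dt = x^2,
# then watch a forward-Euler simulation blow up near the escape time 1/x0.

def x_exact(t, x0):
    return x0 / (1.0 - x0 * t)

x0 = 1.0
h = 1e-6
# Finite-difference check that the formula satisfies dx/dt = x^2.
for t in (0.0, 0.3, 0.6, 0.9):
    dxdt = (x_exact(t + h, x0) - x_exact(t - h, x0)) / (2 * h)
    assert abs(dxdt - x_exact(t, x0) ** 2) < 1e-3

# Forward Euler: the state grows without bound as t approaches 1/x0 = 1.
x, t, dt = x0, 0.0, 1e-4
while t < 0.999:
    x += dt * x * x
    t += dt
assert x > 100.0  # the state is already enormous just before t_e = 1
```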

Ordinary Differential Equations
Non-Uniqueness

A classical example of a system without a unique solution is
ẋ(t) = x(t)^{1/3}, x(0) = 0.
For the given initial condition, it is easy to verify that both
x(t) = (2t/3)^{3/2} and x(t) = 0
satisfy the differential equation.

Figure: Matlab simulation of ẋ(t) = x(t)^{1/3} with x(0) = 0 and with x(0) = 0.1
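Both candidate solutions can be verified directly; a short sketch:

```python
# Verify that two distinct functions solve dx/dt = x^(1/3) with x(0) = 0:
# the zero solution and x(t) = (2t/3)^(3/2).

def f(x):
    return x ** (1.0 / 3.0)

def x_nontrivial(t):
    return (2.0 * t / 3.0) ** 1.5

h = 1e-6
for t in (0.5, 1.0, 1.5):
    # central-difference derivative matches f(x(t))
    dxdt = (x_nontrivial(t + h) - x_nontrivial(t - h)) / (2 * h)
    assert abs(dxdt - f(x_nontrivial(t))) < 1e-6

# The zero function trivially satisfies dx/dt = 0 = f(0), so two distinct
# solutions share the same initial condition x(0) = 0.
assert f(0.0) == 0.0 and x_nontrivial(0.0) == 0.0
```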

Ordinary Differential Equations
Non-Uniqueness

Another example of a system with several solutions is given by
ẋ(t) = √x(t), x(0) = 0.
For the given initial condition, it is easy to verify that for any C ≥ 0,
x(t) = (t − C)²/4 for t > C, x(t) = 0 for t ≤ C
satisfies the differential equation. Compare with the water-tank example.

Figure: Several solutions of ẋ = √x

Ordinary Differential Equations
Customary Notions of Continuity

Definition 4. For normed linear spaces X, Y, a function f : X → Y is said to be continuous at the point x₀ ∈ X if for any ε > 0, there exists a δ > 0 such that ‖x − x₀‖ < δ implies ‖f(x) − f(x₀)‖ < ε.

Ordinary Differential Equations
Customary Notions of Continuity

Definition 5. For normed linear spaces X, Y, a function f : A ⊂ X → Y is said to be continuous on B ⊂ A if it is continuous at any point x ∈ B. A function is said to be simply continuous if B = A.

Definition 6. For normed linear spaces X, Y, a function f : A ⊂ X → Y is said to be uniformly continuous on B ⊂ A if for any ε > 0, there exists a δ > 0 such that for any points x, y ∈ B, ‖x − y‖ < δ implies ‖f(x) − f(y)‖ < ε. A function is said to be simply uniformly continuous if B = A.

Lipschitz Continuity
A Quantitative Notion of Continuity

Definition 7. We say the function f is Lipschitz continuous on X if there exists some L > 0 such that
‖f(x) − f(y)‖ ≤ L‖x − y‖ for any x, y ∈ X.
The constant L is referred to as the Lipschitz constant for f on X.

Definition 8. We say the function f is Locally Lipschitz continuous on X if for every x ∈ X, there exists a neighborhood B of x such that f is Lipschitz continuous on B.

Definition 9. We say the function f is globally Lipschitz if it is Lipschitz continuous on its entire domain.

It turns out that smoothness of the vector field is the critical factor (it is not a necessary condition, however). The Lipschitz constant L allows us to quantify the roughness of the vector field.

Ordinary Differential Equations
Existence and Uniqueness

Theorem 10 (Simple). Suppose x₀ ∈ Rⁿ, f : Rⁿ → Rⁿ, and there exist L, c, r such that for any x, y ∈ B(x₀, r), ‖f(x) − f(y)‖ ≤ L‖x − y‖ and ‖f(x)‖ ≤ c. Let b < min{1/L, r/c}. Then there exists a unique differentiable map x ∈ C[0, b] such that x(0) = x₀, x(t) ∈ B(x₀, r), and ẋ(t) = f(x(t)).

Because the approach to its proof is so powerful, it is worth presenting the proof of the existence theorem.

Ordinary Differential Equations
Contraction Mapping Theorem

Theorem 11 (Contraction Mapping Principle). Let (X, ‖·‖) be a complete normed space and let P : X → X. Suppose there exists a ρ < 1 such that ‖Px − Py‖ ≤ ρ‖x − y‖ for all x, y ∈ X. Then there is a unique x* ∈ X such that Px* = x*. Furthermore, for y ∈ X, define the sequence {xᵢ} as x₁ = y and xᵢ = Pxᵢ₋₁ for i ≥ 2. Then lim_{i→∞} xᵢ = x*.

Some Observations:
Proof: Show that P^k y is a Cauchy sequence for any y ∈ X.
For a differentiable function P, P is a contraction if and only if ‖P′‖ < 1.
In our case, X is the space of solutions. The contraction is
(Px)(t) = x₀ + ∫₀ᵗ f(x(s)) ds
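The iteration in the theorem is easy to demonstrate on a scalar example of my own choosing: P(x) = cos(x) is a contraction on [0, 1] (there |P′(x)| = |sin x| ≤ sin 1 < 1), so iterating P from any starting point converges to its unique fixed point.

```python
import math

# Fixed-point iteration for a contraction, as in the Contraction Mapping
# Principle.  P(x) = cos(x) maps [0, 1] into itself with |P'| < 1 there,
# so the iterates x_{i+1} = P(x_i) converge to the unique x* = cos(x*).

def P(x):
    return math.cos(x)

x = 0.5                  # any starting point in [0, 1] works
for _ in range(100):
    x = P(x)

assert abs(P(x) - x) < 1e-12           # x is (numerically) a fixed point
assert abs(x - 0.7390851332151607) < 1e-9  # the known fixed point of cos
```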

Ordinary Differential Equations
Contraction Mapping Theorem

This contraction derives from the fundamental theorem of calculus.

Theorem 12 (Fundamental Theorem of Calculus). Suppose x₀ ∈ Rⁿ and f : Rⁿ → Rⁿ is continuous, where M = [0, t_f] ⊂ R. Then the following are equivalent:
(1) x is differentiable at any t ∈ M with ẋ(t) = f(x(t)) for all t ∈ M, and x(0) = x₀;
(2) x(t) = x₀ + ∫₀ᵗ f(x(s)) ds for all t ∈ M.

Ordinary Differential Equations
Existence and Uniqueness

First recall what we are trying to prove:

Theorem 13 (Simple). Suppose x₀ ∈ Rⁿ, f : Rⁿ → Rⁿ, and there exist L, c, r such that for any x, y ∈ B(x₀, r), ‖f(x) − f(y)‖ ≤ L‖x − y‖ and ‖f(x)‖ ≤ c. Let b < min{1/L, r/c}. Then there exists a unique differentiable map x ∈ C[0, b] such that x(0) = x₀, x(t) ∈ B(x₀, r), and ẋ(t) = f(x(t)).

We will show that
(Px)(t) = x₀ + ∫₀ᵗ f(x(s)) ds
is a contraction.

Proof of Existence Theorem

Proof. For given x₀, define the space B = {x ∈ C[0, b] : x(0) = x₀, x(t) ∈ B(x₀, r)} with norm ‖x‖ = sup_{t∈[0,b]} ‖x(t)‖, which is complete. Define the map P as
(Px)(t) = x₀ + ∫₀ᵗ f(x(s)) ds.
We first show that P maps B to B. Suppose x ∈ B. To show Px ∈ B, we first show that Px is a continuous function of t:
‖(Px)(t₂) − (Px)(t₁)‖ = ‖∫_{t₁}^{t₂} f(x(s)) ds‖ ≤ ∫_{t₁}^{t₂} ‖f(x(s))‖ ds ≤ c(t₂ − t₁).
Thus Px is continuous. Clearly (Px)(0) = x₀. Now we show (Px)(t) ∈ B(x₀, r):
sup_{t∈[0,b]} ‖(Px)(t) − x₀‖ = sup_{t∈[0,b]} ‖∫₀ᵗ f(x(s)) ds‖ ≤ ∫₀ᵇ ‖f(x(s))‖ ds ≤ bc < r.

Proof of Existence Theorem

Proof (continued). Now we have shown that P : B → B. To prove existence and uniqueness, we show that P is a contraction:
‖Px − Py‖ = sup_{t∈[0,b]} ‖∫₀ᵗ (f(x(s)) − f(y(s))) ds‖
≤ sup_{t∈[0,b]} ∫₀ᵗ ‖f(x(s)) − f(y(s))‖ ds ≤ ∫₀ᵇ ‖f(x(s)) − f(y(s))‖ ds
≤ L ∫₀ᵇ ‖x(s) − y(s)‖ ds ≤ Lb ‖x − y‖.
Thus, since Lb < 1, the map is a contraction with a unique fixed point x* ∈ B such that
x*(t) = x₀ + ∫₀ᵗ f(x*(s)) ds.
By the fundamental theorem of calculus, this means that x* is a differentiable function such that for t ∈ [0, b],
ẋ*(t) = f(x*(t)).

Picard Iteration
Make it so

This proof is particularly important because it provides a way of actually constructing the solution.

Picard-Lindelöf Iteration: From the proof, the unique fixed point x* of Px = x is the solution of ẋ* = f(x*), where
(Px)(t) = x₀ + ∫₀ᵗ f(x(s)) ds.
From the contraction mapping theorem, the solution of Px = x can be found as
x* = lim_{k→∞} P^k z for any z ∈ B.
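A discretized version of this construction, for ẋ = −x³ (the Picard-iteration example used later in the lecture); the grid size, interval, and iteration count are my own choices:

```python
# Numerical Picard iteration (Px)(t) = x0 + ∫_0^t f(x(s)) ds for
# dx/dt = -x^3, x(0) = 1, on a grid over [0, 0.3], using the trapezoid
# rule for the integral.  The iterates converge to the exact solution
# x(t) = 1/sqrt(1 + 2t).

N, T = 300, 0.3
h = T / N
ts = [i * h for i in range(N + 1)]

def f(x):
    return -x ** 3

def picard_step(xs, x0=1.0):
    out = [x0]
    integral = 0.0
    for i in range(1, len(xs)):
        integral += 0.5 * h * (f(xs[i - 1]) + f(xs[i]))
        out.append(x0 + integral)
    return out

xs = [1.0] * (N + 1)        # start from the constant function z(t) = x0
for _ in range(30):
    xs = picard_step(xs)

exact = [1.0 / (1.0 + 2.0 * t) ** 0.5 for t in ts]
err = max(abs(a - b) for a, b in zip(xs, exact))
assert err < 1e-4           # the iterates have converged to the solution
```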

Extension of Existence Theorem

Note that this existence theorem only guarantees existence on the interval [0, 1/L] or [0, r/c], where
r is the size of the neighborhood around x₀,
L is a Lipschitz constant for f in the neighborhood of x₀,
c is a bound for ‖f‖ in the neighborhood of x₀.

Note further that this theorem only gives a solution for a particular initial condition x₀. It does not imply existence of the Solution Map. However, convergence of the solution map can also be proven.

Illustration of Picard Iteration

This is a plot of Picard iterations for the solution map of ẋ = −x³:
z(t, x) = 0
(Pz)(t, x) = x
(P²z)(t, x) = x − tx³
(P³z)(t, x) = x − tx³ + (3/2)t²x⁵ − t³x⁷ + (1/4)t⁴x⁹

Figure: The solution for x₀ = 1

Convergence is only guaranteed on the interval t ∈ [0, 0.333].

Extension of Existence Theorem

Theorem 14 (Extension Theorem). For a given set W and r, define the set W_r := {x : ‖x − y‖ ≤ r for some y ∈ W}. Suppose that there exist a domain D and K > 0 such that
‖f(t, x) − f(t, y)‖ ≤ K‖x − y‖ for all x, y ∈ D ⊂ Rⁿ and t > 0.
Suppose there exist a compact set W and r > 0 such that W_r ⊂ D. Furthermore, suppose that it has been proven that for x₀ ∈ W, any solution to ẋ(t) = f(t, x), x(0) = x₀ must lie entirely in W. Then, for x₀ ∈ W, there exists a unique solution x with x(0) = x₀ such that x lies entirely in W.

Illustration of Extended Picard Iteration

Picard iteration can also be used with the extension theorem: the final time of the previous Picard iterate is used to seed the next Picard iterate.

Definition 15. Suppose that the solution map φ exists on t ∈ [0, ∞) and ‖φ(t, x)‖ ≤ K‖x‖ for any x ∈ B_r. Suppose that f has Lipschitz factor L on B_{4Kr} and is bounded on B_{4Kr} with bound Q. Given T < min{2Kr/Q, 1/L}, let z = 0 and define G^k_0(t, x) := (P^k z)(t, x); for i ≥ 0, define the functions G^k_i recursively as G^k_{i+1}(t, x) := (P^k z)(t, G^k_i(T, x)). Define the concatenation of the G^k_i as
G^k(t, x) := G^k_i(t − iT, x) for t ∈ [iT, iT + T], i = 0, 1, ….

Illustration of Extended Picard Iteration

We take the previous approximation to the solution map and extend it.

Figure: The solution map φ and the functions G^k_i for k = 1, 2, 3, 4, 5 and i = 1, 2, 3 for the system ẋ(t) = −x(t)³. The interval of convergence of the Picard iteration is T = 1/3.

Stability Definitions

Whenever you are trying to prove stability, please define your notion of stability!

We denote the set of bounded continuous functions by C := {x continuous : ‖x(t)‖ ≤ r for all t, for some r ≥ 0} with norm ‖x‖ = sup_{t≥0} ‖x(t)‖.

Definition 16. The system is locally Lyapunov stable on D, where D contains an open neighborhood of the origin, if it defines a unique map Φ : D → C which is continuous at the origin.
Equivalently, the system is locally Lyapunov stable on D if for any ε > 0, there exists a δ(ε) > 0 such that for ‖x(0)‖ ≤ δ(ε), x(0) ∈ D, we have ‖x(t)‖ ≤ ε for all t ≥ 0.

Stability Definitions

Definition 17. The system is globally Lyapunov stable if it defines a unique map Φ : Rⁿ → C which is continuous at the origin.

We define the subspace of bounded continuous functions which tend to the origin by G := {x ∈ C : lim_{t→∞} x(t) = 0} with norm ‖x‖ = sup_{t≥0} ‖x(t)‖.

Definition 18. The system is locally asymptotically stable on D, where D contains an open neighborhood of the origin, if it defines a map Φ : D → G which is continuous at the origin.

Stability Definitions

Definition 19. The system is globally asymptotically stable if it defines a map Φ : Rⁿ → G which is continuous at the origin.

Definition 20. The system is locally exponentially stable on D if it defines a map Φ : D → G where
‖(Φx)(t)‖ ≤ K e^{−γt} ‖x‖
for some positive constants K, γ > 0 and any x ∈ D.

Definition 21. The system is globally exponentially stable if it defines a map Φ : Rⁿ → G where
‖(Φx)(t)‖ ≤ K e^{−γt} ‖x‖
for some positive constants K, γ > 0 and any x ∈ Rⁿ.

Lyapunov Theorem

ẋ = f(x), f(0) = 0

Theorem 22. Let V : D → R be a continuously differentiable function such that
V(0) = 0
V(x) > 0 for x ∈ D, x ≠ 0
∇V(x)ᵀ f(x) ≤ 0 for x ∈ D.
Then ẋ = f(x) is well-posed and locally Lyapunov stable on the largest sublevel set of V contained in D. Furthermore, if ∇V(x)ᵀ f(x) < 0 for x ∈ D, x ≠ 0, then ẋ = f(x) is locally asymptotically stable on the largest sublevel set of V contained in D.
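A scalar sketch of the sublevel-set conclusion, on a system of my own choosing (not from the slides): for ẋ = −x + x³, the function V(x) = x²/2 satisfies V′(x)f(x) = −x²(1 − x²) < 0 on D = (−1, 1), x ≠ 0, so the largest sublevel set of V inside D, namely {V < 1/2} = (−1, 1), is an estimate of the domain of attraction.

```python
# Lyapunov's theorem on dx/dt = f(x) = -x + x^3 with V(x) = x^2/2:
# V'(x) f(x) = -x^2 + x^4 < 0 on (-1, 1) \ {0}, so trajectories starting
# inside the sublevel set {V < 1/2} = (-1, 1) converge to the origin,
# while trajectories starting outside it need not.

def f(x):
    return -x + x ** 3

def simulate(x0, dt=1e-3, T=20.0, blowup=10.0):
    x, t = x0, 0.0
    while t < T and abs(x) < blowup:
        x += dt * f(x)      # forward Euler
        t += dt
    return x

# Inside the sublevel set: the trajectory converges to the origin.
assert abs(simulate(0.9)) < 1e-3
# Outside it: the trajectory escapes (the origin does not attract x0 = 1.1).
assert abs(simulate(1.1)) >= 10.0
```

This illustrates why the theorem only certifies the largest sublevel set contained in D, not all of D's complement: outside {V < 1/2}, nothing is guaranteed.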

Lyapunov Theorem

Sublevel Set: For a given Lyapunov function V and positive constant γ, we denote the set V_γ = {x : V(x) ≤ γ}.

Proof.
Existence: Denote the largest bounded sublevel set of V contained in the interior of D by V_γ. Because V̇(x(t)) = ∇V(x(t))ᵀ f(x(t)) ≤ 0 is continuous, if x(0) ∈ V_γ, then x(t) ∈ V_γ for all t. Therefore, since f is continuous and V_γ is compact, by the extension theorem there is a unique solution for any initial condition x(0) ∈ V_γ.
Lyapunov Stability: Given any ε > 0, choose ε′ ≤ ε with B(ε′) ⊂ V_γ, and choose γ₁ such that V_{γ₁} ⊂ B(ε′). Now choose δ > 0 such that B(δ) ⊂ V_{γ₁}. Then B(δ) ⊂ V_{γ₁} ⊂ B(ε′), and hence if x(0) ∈ B(δ), we have x(t) ∈ V_{γ₁} ⊂ B(ε′) ⊂ B(ε) for all t.
Asymptotic Stability: V(x(t)) monotone decreasing implies lim_{t→∞} V(x(t)) = 0, and V(x) = 0 implies x = 0. Proof omitted.

Lyapunov Theorem
Exponential Stability

Theorem 23. Suppose there exist a continuously differentiable function V, constants c₁, c₂, c₃ > 0, and a radius r > 0 such that the following holds for all x ∈ B(r):
c₁‖x‖^p ≤ V(x) ≤ c₂‖x‖^p
∇V(x)ᵀ f(x) ≤ −c₃‖x‖^p
Then ẋ = f(x) is exponentially stable on any ball contained in the largest sublevel set contained in B(r).

Exponential stability allows a quantitative prediction of system behavior.

The Gronwall-Bellman Inequality

Lemma 24 (Gronwall-Bellman). Let λ be continuous and µ be continuous and nonnegative. Let y be continuous and satisfy, for a ≤ t ≤ b,
y(t) ≤ λ(t) + ∫ₐᵗ µ(s) y(s) ds.
Then
y(t) ≤ λ(t) + ∫ₐᵗ λ(s) µ(s) exp(∫ₛᵗ µ(τ) dτ) ds.
If λ and µ are constants, then
y(t) ≤ λ e^{µ(t−a)}.

For λ(t) = y(0), the condition can be differentiated to obtain ẏ(t) ≤ µ(t) y(t).

Lyapunov Theorem
Exponential Stability

Proof. We begin by noting that we already satisfy the conditions for existence, uniqueness, and asymptotic stability, and that x(t) ∈ B(r). For simplicity, we take p = 2. Now observe that
V̇(x(t)) ≤ −c₃‖x(t)‖² ≤ −(c₃/c₂) V(x(t)),
which implies by the Gronwall-Bellman inequality (µ = −c₃/c₂, λ = V(x(0))) that
V(x(t)) ≤ V(x(0)) e^{−(c₃/c₂) t}.
Hence
‖x(t)‖² ≤ (1/c₁) V(x(t)) ≤ (1/c₁) e^{−(c₃/c₂) t} V(x(0)) ≤ (c₂/c₁) e^{−(c₃/c₂) t} ‖x(0)‖².
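The bound from the proof can be checked numerically on an illustrative system of my own (not from the slides): f(x) = −2x − x³ with V(x) = x² gives c₁ = c₂ = 1 and ∇V(x)f(x) = −4x² − 2x⁴ ≤ −4x², so c₃ = 4 and the proof yields ‖x(t)‖² ≤ e^{−4t}‖x(0)‖².

```python
import math

# Check the Gronwall-Bellman exponential bound V(x(t)) <= V(x(0)) e^{-4t}
# along a simulated trajectory of dx/dt = -2x - x^3 (c1 = c2 = 1, c3 = 4
# for V(x) = x^2).  A small slack factor absorbs Euler discretization error.

def f(x):
    return -2.0 * x - x ** 3

x0, dt, T = 1.0, 1e-4, 3.0
x, t = x0, 0.0
while t < T:
    x += dt * f(x)
    t += dt
    assert x * x <= math.exp(-4.0 * t) * x0 * x0 * 1.01
```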

Lyapunov Theorem
Invariance

Sometimes, we want to prove convergence to a set. Recall V_γ = {x : V(x) ≤ γ}.

Definition 25. A set X is Positively Invariant if x(0) ∈ X implies x(t) ∈ X for all t ≥ 0.

Theorem 26. Suppose that there exists some continuously differentiable function V such that
V(x) > 0 for x ∈ D, x ≠ 0
∇V(x)ᵀ f(x) ≤ 0 for x ∈ D.
Then for any γ such that the level set X = {x : V(x) = γ} ⊂ D, we have that V_γ is positively invariant. Furthermore, if ∇V(x)ᵀ f(x) < 0 for x ∈ D, x ≠ 0, then for any δ such that V_γ ⊂ V_δ ⊂ D, any trajectory starting in V_δ will approach the sublevel set V_γ.

Converse Lyapunov Theory

In fact, stable systems always have Lyapunov functions. Suppose that there exists a continuously differentiable function g(t, x), called the solution map, such that
∂g(t, x)/∂t = f(g(t, x)) and g(0, x) = x
is satisfied.

Converse Form 1:
V(x) = ∫₀^δ g(s, x)ᵀ g(s, x) ds

Converse Lyapunov Theory

Converse Form 1:
V(x) = ∫₀^δ g(s, x)ᵀ g(s, x) ds
For a linear system, g(s, x) = e^{As} x. This recalls the proof of feasibility of the Lyapunov inequality
AᵀP + PA < 0.
The solution was given by
xᵀPx = ∫₀^∞ xᵀ e^{Aᵀs} e^{As} x ds = ∫₀^∞ g(s, x)ᵀ g(s, x) ds.
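A minimal sketch of the linear case in one dimension (my example, A = −2): the flow is g(s, x) = e^{As}x, and P = ∫₀^∞ e^{2As} ds = 1/(2a) satisfies the Lyapunov equation AP + PA = −1.

```python
import math

# Converse Form 1 for the scalar linear system dx/dt = A x, A = -a < 0:
# P = ∫_0^∞ e^{2As} ds equals 1/(2a) and solves A^T P + P A = -1.

a = 2.0
A = -a

# Numerically integrate P = ∫ e^{2As} ds with the trapezoid rule; the
# integrand decays like e^{-4s}, so truncating at S = 10 is harmless.
h, S = 1e-4, 10.0
n = int(S / h)
P = sum(0.5 * h * (math.exp(2 * A * i * h) + math.exp(2 * A * (i + 1) * h))
        for i in range(n))

assert abs(P - 1.0 / (2 * a)) < 1e-6   # P = 1/(2a) = 0.25
assert abs(2 * A * P - (-1.0)) < 1e-5  # AP + PA = -1
```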

Converse Lyapunov Theory

Theorem 27. Suppose that there exist K and λ > 0 such that g satisfies
‖g(t, x)‖ ≤ K ‖g(0, x)‖ e^{−λt}.
Then there exists a function V and constants c₁, c₂, c₃ > 0 such that V satisfies
c₁‖x‖² ≤ V(x) ≤ c₂‖x‖²
∇V(x)ᵀ f(x) ≤ −c₃‖x‖²

Converse Lyapunov Theory

Proof. There are 3 parts to the proof, of which 2 are relatively minor, but part 3 is tricky. The main hurdle is to choose δ > 0 sufficiently large.

Part 1: Show that V(x) ≤ c₂‖x‖². We have
V(x) = ∫₀^δ ‖g(s, x)‖² ds ≤ ∫₀^δ K² ‖g(0, x)‖² e^{−2λs} ds = ‖x‖² (K²/(2λ))(1 − e^{−2λδ}) = c₂‖x‖²
where c₂ = (K²/(2λ))(1 − e^{−2λδ}). This part holds for any δ > 0.

Converse Lyapunov Theory

Proof. Part 2: Show that V(x) ≥ c₁‖x‖². Lipschitz continuity of f implies ‖f(x)‖ ≤ L‖x‖. By the fundamental identity,
‖x(t)‖ ≤ ‖x(0)‖ + ∫₀ᵗ ‖f(x(s))‖ ds ≤ ‖x(0)‖ + ∫₀ᵗ L‖x(s)‖ ds.
Hence by the Gronwall-Bellman inequality (applying the same argument in reverse time for the lower bound),
‖x(0)‖ e^{−Lt} ≤ ‖x(t)‖ ≤ ‖x(0)‖ e^{Lt}.
Thus we have that ‖g(t, x)‖² ≥ ‖x‖² e^{−2Lt}. This implies
V(x) = ∫₀^δ ‖g(s, x)‖² ds ≥ ‖x‖² ∫₀^δ e^{−2Ls} ds = ‖x‖² (1/(2L))(1 − e^{−2Lδ}) = c₁‖x‖²
where c₁ = (1/(2L))(1 − e^{−2Lδ}). This part also holds for any δ > 0.

Converse Lyapunov Theory

Proof, Part 3: Show that ∇V(x)ᵀ f(x) ≤ −c₃‖x‖². This requires differentiating the solution map with respect to the initial conditions. We first prove the identity
g_t(t, x) = g_x(t, x) f(x).
We start with a modified version of the fundamental identity:
g(t, x) = g(0, x) + ∫₀ᵗ f(g(s, x)) ds = g(0, x) + ∫_{−t}^{0} f(g(s + t, x)) ds.
By the Leibniz rule for the differentiation of integrals, we find
g_t(t, x) = f(g(0, x)) + ∫_{−t}^{0} ∇f(g(s + t, x))ᵀ g_s(s + t, x) ds = f(x) + ∫₀ᵗ ∇f(g(s, x))ᵀ g_s(s, x) ds.
Also, we have
g_x(t, x) = I + ∫₀ᵗ ∇f(g(s, x))ᵀ g_x(s, x) ds.

Converse Lyapunov Theory

Proof, Part 3 (continued). Now
g_t(t, x) − g_x(t, x) f(x) = f(x) + ∫₀ᵗ ∇f(g(s, x))ᵀ g_s(s, x) ds − f(x) − ∫₀ᵗ ∇f(g(s, x))ᵀ g_x(s, x) f(x) ds
= ∫₀ᵗ ∇f(g(s, x))ᵀ (g_s(s, x) − g_x(s, x) f(x)) ds.
By, e.g., Gronwall-Bellman, this implies g_t(t, x) − g_x(t, x) f(x) = 0. We conclude that
g_t(t, x) = g_x(t, x) f(x),
which is interesting.

Converse Lyapunov Theory

Proof, Part 3 (continued). With this identity in hand, we proceed:
∇V(x)ᵀ f(x) = (∇_x ∫₀^δ g(s, x)ᵀ g(s, x) ds)ᵀ f(x)
= 2 ∫₀^δ g(s, x)ᵀ g_x(s, x) f(x) ds
= 2 ∫₀^δ g(s, x)ᵀ g_s(s, x) ds
= ∫₀^δ (d/ds) ‖g(s, x)‖² ds
= ‖g(δ, x)‖² − ‖g(0, x)‖²
≤ K² ‖x‖² e^{−2λδ} − ‖x‖² = −(1 − K² e^{−2λδ}) ‖x‖².
Thus the third inequality is satisfied with c₃ = 1 − K² e^{−2λδ}. However, this constant is only positive if δ > ln(K)/λ.

Massera's Converse Lyapunov Theory

The Lyapunov function inherits many properties of the solution map, and hence of the vector field:
V(x) = ∫₀^δ g(s, x)ᵀ g(s, x) ds, g(t, x) = g(0, x) + ∫₀ᵗ f(g(s, x)) ds.

Massera: Let D^α = ∂^{‖α‖₁}/(∂x₁^{α₁} ⋯ ∂xₙ^{αₙ}). Then D^α V(x) is continuous if D^α f(x) is continuous.

Massera's Converse Lyapunov Theory

Formally, this means:

Theorem 28 (Massera). Consider the system defined by ẋ = f(x), where D^α f ∈ C(Rⁿ) for any ‖α‖₁ ≤ s. Suppose that there exist constants µ, δ, r > 0 such that
‖(Ax₀)(t)‖₂ ≤ µ ‖x₀‖₂ e^{−δt}
for all t ≥ 0 and ‖x₀‖₂ ≤ r. Then there exist a function V : Rⁿ → R and constants α, β, γ > 0 such that
α‖x‖₂² ≤ V(x) ≤ β‖x‖₂²
∇V(x)ᵀ f(x) ≤ −γ‖x‖₂²
for all ‖x‖₂ ≤ r. Furthermore, D^α V ∈ C(Rⁿ) for any α with ‖α‖₁ ≤ s.

Finding L = sup_x ‖D^α V‖

Given a Lipschitz bound for f, let's find a Lipschitz constant for V:
V(x) = ∫₀^δ g(s, x)ᵀ g(s, x) ds, g(t, x) = x + ∫₀ᵗ f(g(s, x)) ds.
We first need a Lipschitz bound for the solution map:
L_g = sup_x ‖g_x(s, x)‖.
From the identity
g_x(t, x) = I + ∫₀ᵗ ∇f(g(s, x))ᵀ g_x(s, x) ds
we get
‖g_x(t, x)‖ ≤ 1 + ∫₀ᵗ L ‖g_x(s, x)‖ ds,
which implies by Gronwall-Bellman that ‖g_x(t, x)‖ ≤ e^{Lt}.
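The sensitivity bound ‖g_x(t, x)‖ ≤ e^{Lt} can be spot-checked by finite differences. A sketch, reusing ẋ = −x³ (on |x| ≤ 1 the Lipschitz constant is L = sup|3x²| = 3; for this contracting system the true sensitivity is below 1, so the bound holds with lots of room):

```python
import math

# Finite-difference check of the solution-map sensitivity bound
# |g_x(t, x)| <= e^{Lt} for dx/dt = -x^3, with L = 3 on |x| <= 1.

def flow(x0, t, dt=1e-4):
    x, s = x0, 0.0
    while s < t:
        x += dt * (-x ** 3)   # forward Euler
        s += dt
    return x

L = 3.0
eps = 1e-6
for t in (0.5, 1.0, 2.0):
    # finite-difference estimate of g_x(t, x0) at x0 = 0.8
    gx = (flow(0.8 + eps, t) - flow(0.8 - eps, t)) / (2 * eps)
    assert 0.0 < gx <= math.exp(L * t)
```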

Finding L = sup_x ‖D^α V‖
Faà di Bruno's Formula

What about a bound for D^α V(x)? We need
D^α g(t, x) = ∫₀ᵗ D^α [f(g(s, x))] ds.

Faà di Bruno's formula: For scalar functions,
d^n/dy^n f(g(y)) = Σ_{π ∈ Π} f^{(|π|)}(g(y)) ∏_{B ∈ π} g^{(|B|)}(y),
where Π is the set of partitions of {1, …, n} and |·| denotes cardinality.

We can generalize Faà di Bruno's formula to functions f : Rⁿ → Rⁿ and g : Rⁿ → Rⁿ. The combinatorial notation allows us to keep track of terms.

A Generalized Chain Rule

Definition 29. Let Ω^i_r denote the set of partitions of (1, …, r) into i non-empty subsets.

Lemma 30 (Generalized Chain Rule). Suppose f : Rⁿ → R and z : Rⁿ → Rⁿ are r-times continuously differentiable. Let α ∈ Nⁿ with ‖α‖₁ = r, and let {aᵢ} be any decomposition of α so that α = Σ_{i=1}^r aᵢ. Then
D^α_x f(z(x)) = Σ_{i=1}^r Σ_{j₁=1}^n ⋯ Σ_{jᵢ=1}^n (∂^i f / ∂x_{j₁} ⋯ ∂x_{jᵢ})(z(x)) Σ_{β ∈ Ω^i_r} ∏_{k=1}^i D^{Σ_{l ∈ βₖ} aₗ} z_{jₖ}(x).

A Quantitative Massera-style Converse Lyapunov Result

We can use the generalized chain rule to get the following.

Theorem 31. Suppose that ‖D^β f‖ ≤ L for ‖β‖₁ ≤ r, and that ẋ = f(x) satisfies ‖x(t)‖ ≤ k‖x(0)‖ e^{−λt} for ‖x(0)‖ ≤ 1. Then there exists a function V such that
V is exponentially decreasing on ‖x‖ ≤ 1;
D^α V is continuous on ‖x‖ ≤ 1 for ‖α‖₁ ≤ r, with upper bound
max_α ‖D^α V(x)‖ ≤ c₁ 2^r B(r) e r! (L^{1∨r}/λ) e^{c₂ L/λ}
for some c₁(k, n) and c₂(k, n).

There is also a bound on the continuity of the solution map. Here B(r) is the Bell number (the number of partitions of a set of size r).

Approximation Theory for Lyapunov Functions

Theorem 32 (Approximation). Suppose f is bounded on a compact set X, and suppose that D^α V is continuous for ‖α‖ ≤ 3. Then for any δ > 0, there exists a polynomial p such that for x ∈ X,
‖V(x) − p(x)‖ ≤ δ‖x‖² and ∇(V(x) − p(x))ᵀ f(x) ≤ δ‖x‖².

Polynomials can approximate differentiable functions arbitrarily well in Sobolev norms, with a quadratic upper bound on the error.

Polynomial Lyapunov Functions
A Converse Lyapunov Result

Consider the system ẋ(t) = f(x(t)).

Theorem 33. Suppose ẋ(t) = f(x(t)) is exponentially stable for ‖x(0)‖ ≤ r, and suppose D^α f is continuous for ‖α‖ ≤ 3. Then there exists a Lyapunov function V : Rⁿ → R such that
V is exponentially decreasing on ‖x‖ ≤ r;
V is a polynomial.

Implications: Using polynomials is not conservative.
Question: What is the degree of the Lyapunov function? How many coefficients do we need to optimize?

Degree Bounds in Approximation Theory

This result uses the Bernstein polynomials to give a degree bound as a function of the error bound.

Theorem 34. Suppose V : Rⁿ → R has Lipschitz constant L on the unit ball:
‖V(x) − V(y)‖₂ < L‖x − y‖₂.
Then for any ε > 0, there exists a polynomial p which satisfies
sup_{‖x‖≤1} ‖p(x) − V(x)‖₂ < ε,
where
degree(p) ≤ 2n⁴ (L/ε)².

To find a bound on L, we can use a bound on D^α V.

A Bound on the Complexity of Lyapunov Functions

Theorem 35. Suppose ‖x(t)‖ ≤ K‖x(0)‖ e^{−λt} for ‖x(0)‖ ≤ r. Suppose f is polynomial and ‖f(x)‖ ≤ L on ‖x‖ ≤ r. Then there exists a polynomial V ∈ Σ_s such that V is exponentially decreasing on ‖x‖ ≤ r, and the degree of V is less than
degree(V) ≤ 2q^{2(Nk−1)} = 2q^{2c₁L/λ},
where q is the degree of the vector field f. Here
V(x) = ∫₀^δ G^k(s, x)ᵀ G^k(s, x) ds,
where G^k is an extended Picard iteration, k is the number of Picard iterations, and N is the number of extensions.

Note that the Lyapunov function is a square of polynomials.

Figure: Degree bound vs. convergence rate for K = 1.2, r = L = 1, and q = 5

Returning to the Lyapunov Stability Conditions

Consider ẋ(t) = f(x(t)) with x(0) ∈ Rⁿ.

Theorem 36 (Lyapunov Stability). Suppose there exist a continuous V and α, β, γ > 0 where
β‖x‖² ≤ V(x) ≤ α‖x‖²
∇V(x)ᵀ f(x) ≤ −γ‖x‖²
for all x ∈ X. Then any sub-level set of V in X is a Domain of Attraction.

The Stability Problem is Convex

Convex Optimization of Functions: Variables V ∈ C[Rⁿ] and γ ∈ R:
max_{V,γ} γ
subject to: V(x) ≥ xᵀx for all x
∇V(x)ᵀ f(x) + γ xᵀx ≤ 0 for all x

Moreover, since we can assume V is polynomial with bounded degree, the problem is finite-dimensional.

Convex Optimization of Polynomials: Variables c ∈ Rⁿ and γ ∈ R:
max_{c,γ} γ
subject to: cᵀZ(x) ≥ xᵀx for all x
∇(cᵀZ(x))ᵀ f(x) + γ xᵀx ≤ 0 for all x

Z(x) is a fixed vector of monomial bases.

Can we solve optimization of polynomials?

Problem:
max_x bᵀx
subject to: A₀(y) + Σ_{i=1}^n xᵢ Aᵢ(y) ⪰ 0 for all y
The Aᵢ are matrices of polynomials in y; e.g., using multi-index notation,
Aᵢ(y) = Σ_α A_{i,α} y^α.

Computationally Intractable: The problem "Is p(x) ≥ 0 for all x ∈ Rⁿ?" (i.e., is p ∈ R⁺[x]?) is NP-hard.

Conclusions

Nonlinear systems are relatively well-understood.

Well-Posedness: Existence and uniqueness are guaranteed if the vector field and its gradient are bounded.
- Contraction Mapping Principle
- The dependence of the solution map on the initial conditions
- Properties are inherited from the vector field via Gronwall-Bellman

Lyapunov Stability: Lyapunov's conditions are necessary and sufficient for stability.
- The problem is to find a Lyapunov function.
- Converse forms provide insight: they capture the inherent energy stored in an initial condition.
- We can assume the Lyapunov function is polynomial of bounded degree, although the degree may be very large.

We need to be able to optimize over the cone of positive polynomial functions.