Lecture 10: Singular Perturbations and Averaging

Massachusetts Institute of Technology
Department of Electrical Engineering and Computer Science
6.243j (Fall 2003): DYNAMICS OF NONLINEAR SYSTEMS
by A. Megretski

Lecture 10: Singular Perturbations and Averaging¹

This lecture presents results which describe the local behavior of parameter-dependent ODE models in cases when the dependence on a parameter is not continuous in the usual sense.

10.1 Singularly perturbed ODE

In this section we consider parameter-dependent systems of equations

    ẋ(t) = f(x(t), y(t), t),
    ɛẏ(t) = g(x(t), y(t), t),                                   (10.1)

where ɛ ∈ [0, ɛ₀] is a small parameter. When ɛ > 0, (10.1) is an ODE model. For ɛ = 0, (10.1) is a combination of algebraic and differential equations. Models such as (10.1), where y represents a set of less relevant, fast-changing parameters, are frequently studied in physics and mechanics. One can say that singular perturbation is the classical approach to dealing with uncertainty, complexity, and nonlinearity.

10.1.1 Tikhonov's Theorem

A typical question asked about the singularly perturbed system (10.1) is whether its solutions with ɛ > 0 converge to the solutions of (10.1) with ɛ = 0 as ɛ → 0. A sufficient condition for such convergence is that the Jacobian of g with respect to its second argument should be a Hurwitz matrix in the region of interest.

Theorem 10.1 Let x₀ : [t₀, t₁] → Rⁿ and y₀ : [t₀, t₁] → Rᵐ be continuous functions satisfying the equations

    ẋ₀(t) = f(x₀(t), y₀(t), t),    0 = g(x₀(t), y₀(t), t),

¹ Version of October 15, 2003
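The convergence asserted by Theorem 10.1 can be observed numerically. The following sketch uses an illustrative two-dimensional system of my own choosing (it does not appear in the lecture): ẋ = −x + y, ɛẏ = x − 2y, for which ∂g/∂y = −2 is Hurwitz, the reduced (ɛ = 0) solution is y₀ = x₀/2 with ẋ₀ = −x₀/2, and the error between the perturbed and reduced slow variables should shrink roughly in proportion to ɛ.

```python
import numpy as np

# Illustrative singularly perturbed system (not from the lecture):
#   x' = -x + y,   eps * y' = x - 2*y.
# Setting eps = 0 gives y0 = x0/2 and the reduced ODE x0' = -x0/2,
# so x0(t) = exp(-t/2) for x0(0) = 1.  Here dg/dy = -2 is Hurwitz.

def max_slow_error(eps, T=1.0, dt=1e-4):
    """Forward-Euler simulation; dt must be much smaller than eps."""
    x, y = 1.0, 0.5            # start on the slow manifold y = x/2
    errs = []
    for k in range(int(T / dt)):
        t = k * dt
        errs.append(abs(x - np.exp(-t / 2.0)))   # vs. reduced solution
        dx = -x + y
        dy = (x - 2.0 * y) / eps
        x, y = x + dt * dx, y + dt * dy
    return max(errs)

e1 = max_slow_error(eps=0.02)
e2 = max_slow_error(eps=0.01)
print(e1, e2)   # error shrinks roughly linearly with eps
```

Halving ɛ roughly halves the worst-case deviation of x from the reduced solution, consistent with the Cɛ bound in the theorem.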

where f : Rⁿ × Rᵐ × R → Rⁿ and g : Rⁿ × Rᵐ × R → Rᵐ are continuous functions. Assume that f and g are continuously differentiable with respect to their first two arguments in a neighborhood of the trajectory x₀(t), y₀(t), and that the derivative

    A(t) = (∂g/∂y)(x₀(t), y₀(t), t)

is a Hurwitz matrix for all t ∈ [t₀, t₁]. Then for every t₂ ∈ (t₀, t₁) there exist d > 0 and C > 0 such that the inequalities

    |x₀(t) − x(t)| ≤ Cɛ   for all t ∈ [t₀, t₁],
    |y₀(t) − y(t)| ≤ Cɛ   for all t ∈ [t₂, t₁]

hold for all solutions of (10.1) with |x(t₀) − x₀(t₀)| ≤ ɛ, |y(t₀) − y₀(t₀)| ≤ d, and ɛ ∈ (0, d).

The theorem was originally proven by A. Tikhonov in the 1930s. It expresses a simple principle, which suggests that, for small ɛ > 0, x = x(t) can be considered a constant when predicting the behavior of y. From this viewpoint, for a given t* ∈ (t₀, t₁), one can expect that y(t* + ɛτ) ≈ y₁(τ), where y₁ : [0, ∞) → Rᵐ is the solution of the fast motion ODE

    ẏ₁(τ) = g(x₀(t*), y₁(τ), t*),    y₁(0) = y(t*).

Since y₀(t*) is an equilibrium of this ODE, and the standard linearization around the equilibrium yields

    δ̇(τ) ≈ A(t*)δ(τ),   where δ(τ) = y₁(τ) − y₀(t*),

one can expect that y₁(τ) → y₀(t*) exponentially as τ → ∞ whenever A(t*) is a Hurwitz matrix and |y(t*) − y₀(t*)| is small enough. Hence, when ɛ > 0 is small enough, one can expect that y(t) ≈ y₀(t).

10.1.2 Proof of Theorem 10.1

First, let us show that the interval [t₀, t₁] can be subdivided into subintervals Δₖ = [τₖ₋₁, τₖ], where k ∈ {1, 2, ..., N} and t₀ = τ₀ < τ₁ < ··· < τ_N = t₁, in such a way that for every k there exists a symmetric matrix Pₖ = Pₖ′ > 0 for which

    Pₖ A(t) + A(t)′ Pₖ < −I   for all t ∈ [τₖ₋₁, τₖ].

Indeed, since A(t) is a Hurwitz matrix for every t ∈ [t₀, t₁], there exists P(t) = P(t)′ > 0 such that P(t)A(t) + A(t)′P(t) < −I. Since A depends continuously on t, there exists an open interval Δ(t) such that t ∈ Δ(t) and

    P(t)A(τ) + A(τ)′P(t) < −I   for all τ ∈ Δ(t).
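The matrix P(t) used in this first step of the proof can be produced constructively by solving a Lyapunov equation. A small sketch (the particular Hurwitz matrix A below is my own illustrative choice, standing in for A(t) at one frozen time):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# A Hurwitz matrix playing the role of A(t) at one frozen time t.
A = np.array([[-1.0, 2.0],
              [0.0, -3.0]])

# Solve A'P + PA = -2I.  Since A is Hurwitz, P = P' > 0, and then
# PA + A'P = -2I < -I, the inequality required for each P_k.
P = solve_continuous_lyapunov(A.T, -2.0 * np.eye(2))

M = P @ A + A.T @ P + np.eye(2)       # must be negative definite
print(np.linalg.eigvalsh(P))          # positive eigenvalues
print(np.linalg.eigvalsh(M))          # negative eigenvalues
```

By continuity of A(t), the same P keeps the inequality strict on a whole open interval of times around the frozen t, which is exactly how the finite subdivision is obtained.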

Now the open intervals Δ(t), t ∈ [t₀, t₁], cover the whole closed bounded interval [t₀, t₁]; taking a finite number of points tₖ, k = 1, ..., m, such that [t₀, t₁] is completely covered by the Δ(tₖ) yields the desired subdivision of [t₀, t₁].

Second, note that, due to the continuous differentiability of f and g, for every µ > 0 there exist C, r > 0 such that

    |f(x₀(t) + δx, y₀(t) + δy, t) − f(x₀(t), y₀(t), t)| ≤ C(|δx| + |δy|)

and

    |g(x₀(t) + δx, y₀(t) + δy, t) − A(t)δy| ≤ C|δx| + µ|δy|

for all t ∈ [t₀, t₁], δx ∈ Rⁿ, δy ∈ Rᵐ with |δx| ≤ r, |δy| ≤ r.

For t ∈ Δₖ let |δy|ₖ = (δy′ Pₖ δy)^{1/2}. Then, for δx(t) = x(t) − x₀(t) and δy(t) = y(t) − y₀(t), we have

    (d/dt)|δx| ≤ C₁(|δx| + |δy|ₖ),
    ɛ (d/dt)|δy|ₖ ≤ −q|δy|ₖ + C₁|δx| + ɛC₁                      (10.2)

as long as δx, δy are sufficiently small, where C₁, q are positive constants which do not depend on k. Combining these two derivative bounds yields

    (d/dt)(|δx| + (ɛC₁/q)|δy|ₖ) ≤ C₂|δx| + ɛC₂

for some constant C₂ independent of k. Hence

    |δx(τₖ₋₁ + τ)| ≤ e^{C₃τ}(|δx(τₖ₋₁)| + (ɛC₁/q)|δy(τₖ₋₁)|ₖ) + C₃ɛ

for τ ∈ [0, τₖ − τₖ₋₁]. With the aid of this bound on the growth of |δx|, inequality (10.2) yields a bound for |δy|ₖ:

    |δy(τₖ₋₁ + τ)|ₖ ≤ e^{−qτ/ɛ}|δy(τₖ₋₁)|ₖ + C₄(|δx(τₖ₋₁)| + (ɛC₁/q)|δy(τₖ₋₁)|ₖ) + C₄ɛ,

which in turn yields the result of Theorem 10.1.
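The exponential factor exp(−qτ/ɛ) in the last bound is easy to see numerically: in the fast dynamics the deviation δy contracts at a rate proportional to 1/ɛ. A minimal sketch, with an illustrative scalar example of my own (q = 2 and the time horizon are arbitrary choices):

```python
import numpy as np

# Scalar model of the fast deviation: eps * d(dy)/dt = -q * dy, q = 2.
# Over a horizon T the deviation should contract like exp(-q*T/eps).
def decay_factor(eps, q=2.0, T=0.1, dt=1e-5):
    y = 1.0
    for _ in range(int(T / dt)):
        y += dt * (-q * y / eps)      # forward Euler on the fast ODE
    return y

f1 = decay_factor(eps=0.05)
print(f1, np.exp(-2.0 * 0.1 / 0.05))  # both near exp(-4) ~ 0.018
```

Halving ɛ squares this contraction factor over the same horizon, which is why the y-estimate in Theorem 10.1 holds on [t₂, t₁] after an initial boundary layer rather than on all of [t₀, t₁].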

10.2 Averaging

Another case of potentially discontinuous dependence on parameters is covered by the following averaging result.

Theorem 10.2 Let f : Rⁿ × R × R → Rⁿ be a continuous function which is τ-periodic with respect to its second argument t and continuously differentiable with respect to its first argument. Let x̄₀ ∈ Rⁿ be such that f(x̄₀, t, ɛ) = 0 for all t, ɛ. For x̄ ∈ Rⁿ define

    f̄(x̄, ɛ) = ∫₀^τ f(x̄, t, ɛ) dt.

If (df̄/dx̄)(x̄₀, 0) is a Hurwitz matrix, then, for sufficiently small ɛ > 0, the equilibrium x̄₀ of the system

    ẋ(t) = ɛ f(x(t), t, ɛ)                                      (10.3)

is exponentially stable.

Though the parameter dependence in Theorem 10.2 is continuous, the question asked concerns the behavior at t = ∞, which makes the system behavior for ɛ = 0 an invalid indicator of what will occur for ɛ > 0 sufficiently small. (Indeed, for ɛ = 0 the equilibrium x̄₀ is not asymptotically stable.)

To prove Theorem 10.2, consider the function S : Rⁿ × R → Rⁿ which maps (x(0), ɛ) to x(τ) = S(x(0), ɛ), where x(·) is the corresponding solution of (10.3). It is sufficient to show that the derivative (Jacobian) Ṡ(x̄, ɛ) of S with respect to its first argument, evaluated at x̄ = x̄₀ with ɛ > 0 sufficiently small, is a Schur matrix.

Note first that, according to the rules for differentiating with respect to initial conditions, Ṡ(x̄₀, ɛ) = Φ(τ, ɛ), where

    (dΦ/dt)(t, ɛ) = ɛ (df/dx)(x̄₀, t, ɛ) Φ(t, ɛ),    Φ(0, ɛ) = I.

Consider also D(t, ɛ) defined by

    (dD/dt)(t, ɛ) = ɛ (df/dx)(x̄₀, t, 0) D(t, ɛ),    D(0, ɛ) = I.

Let δ(t) be the derivative of D(t, ɛ) with respect to ɛ at ɛ = 0. According to the rule for differentiating solutions of an ODE with respect to a parameter,

    δ(t) = ∫₀ᵗ (df/dx)(x̄₀, t₁, 0) dt₁.

Hence

    δ(τ) = (df̄/dx̄)(x̄₀, 0)
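Theorem 10.2 can be illustrated numerically on a scalar example of my own choosing (not from the lecture): ẋ = ɛ(−1 + 2cos t)x. Here f(0, t, ɛ) = 0, the period is τ = 2π, f̄(x̄, ɛ) = −2πx̄, and df̄/dx̄ = −2π is Hurwitz, so the origin should be exponentially stable for small ɛ > 0 even though the frozen ɛ = 0 system ẋ = 0 is not asymptotically stable. For a scalar linear system the Poincaré-map multiplier over one period is exactly exp(−2πɛ):

```python
import numpy as np

# Illustrative averaging example: dx/dt = eps * (-1 + 2*cos(t)) * x.
# The average of (-1 + 2*cos t) over one period 2*pi is -1 (Hurwitz),
# so Theorem 10.2 predicts exponential stability for small eps > 0.

def monodromy(eps, dt=1e-3):
    """Multiplier of the Poincare map over one period [0, 2*pi]."""
    phi = 1.0
    for k in range(int(2 * np.pi / dt)):
        t = k * dt
        phi += dt * eps * (-1.0 + 2.0 * np.cos(t)) * phi
    return phi

eps = 0.05
m = monodromy(eps)
# Exact multiplier: exp(eps * integral of (-1 + 2*cos t)) = exp(-2*pi*eps)
print(m, np.exp(-2 * np.pi * eps))
```

A multiplier strictly inside the unit circle is precisely the Schur property of the Poincaré map that the proof establishes in general.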

is by assumption a Hurwitz matrix. On the other hand,

    Φ(τ, ɛ) − D(τ, ɛ) = o(ɛ).

Combining this with

    D(τ, ɛ) = I + δ(τ)ɛ + o(ɛ)

yields

    Φ(τ, ɛ) = I + δ(τ)ɛ + o(ɛ).

Since δ(τ) is a Hurwitz matrix, this implies that all eigenvalues of Φ(τ, ɛ) have absolute value strictly less than one for all sufficiently small ɛ > 0.
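The conclusion, that the monodromy matrix equals I + ɛδ(τ) + o(ɛ) with δ(τ) Hurwitz and is therefore Schur, can also be checked on a matrix-valued example. The periodic Jacobian below is an illustrative choice of mine with zero-mean oscillatory terms, so its average over the period is the Hurwitz matrix [[-1, 1], [0, -2]]:

```python
import numpy as np

# Illustrative: x' = eps * A(t) x with A(t) 2*pi-periodic and its
# time average Hurwitz.  The monodromy matrix over one period should
# then have spectral radius < 1 for small eps > 0.

def A(t):
    return np.array([[-1.0 + np.cos(t), 1.0],
                     [np.sin(t), -2.0]])

def monodromy(eps, dt=1e-3):
    Phi = np.eye(2)
    for k in range(int(2 * np.pi / dt)):
        Phi = Phi + dt * eps * (A(k * dt) @ Phi)   # forward Euler
    return Phi

eps = 0.05
Phi = monodromy(eps)
rho = max(abs(np.linalg.eigvals(Phi)))
print(rho)   # spectral radius below 1
```

To first order in ɛ the eigenvalues sit near 1 − 2πɛ and 1 − 4πɛ, i.e. near 0.69 and 0.37 for ɛ = 0.05, matching the expansion I + ɛδ(τ) + o(ɛ).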