Nonlinear Systems and Elements of Control


Rafal Goebel

April 2013

1 Brief review of basic differential equations

A differential equation is an equation involving a function and one or more of the function's derivatives. An initial value problem consists of a differential equation and initial conditions specifying the value or values of the function and/or of the function's derivatives at a certain point.

Example 1.1 The height h, as a function of time t, of a body falling due to the force of gravity and without friction is described by the differential equation
d²h/dt² = −g,
where g is the constant gravitational acceleration; in the metric system, g = 9.8 m/s². In different notation, h''(t) = −g. Integration yields h'(t) = −gt + c and then h(t) = −½gt² + ct + d, where c and d are real constants. Hence h(t) = −½gt² + ct + d is a solution to this differential equation. The height h, as above, of a body which at time t = 0 is at height 4 and has velocity 5 is described by the initial value problem
d²h/dt² = −g, h(0) = 4, h'(0) = 5.
Since the general solution of the differential equation is h(t) = −½gt² + ct + d, one just needs to determine the constants c and d that cause the solution to satisfy the initial conditions h(0) = 4 and h'(0) = 5:
h(0) = d = 4, h'(0) = c = 5,
and so d = 4, c = 5, and the unique solution to the initial value problem is h(t) = −½gt² + 5t + 4. In more generality, the unique solution to the initial value problem
h''(t) = −g, h(0) = h₀, h'(0) = v₀
is h(t) = −½gt² + v₀t + h₀.
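A quick way to check such a closed-form solution is to integrate the initial value problem numerically and compare. The sketch below (assuming NumPy and SciPy are available) does this for h'' = −g, h(0) = 4, h'(0) = 5.

```python
import numpy as np
from scipy.integrate import solve_ivp

g = 9.8

def falling(t, y):
    # state y = (h, h'); the equation is h'' = -g
    return [y[1], -g]

t_eval = np.linspace(0.0, 2.0, 50)
sol = solve_ivp(falling, (0.0, 2.0), [4.0, 5.0], t_eval=t_eval, rtol=1e-9)
exact = -0.5*g*t_eval**2 + 5.0*t_eval + 4.0
print(np.max(np.abs(sol.y[0] - exact)))   # close to 0
```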

If the values of h and h' are prescribed at a time t₀ which is not necessarily 0, one can still rely on the general solution h(t) = −½gt² + ct + d and use the initial conditions h(t₀) = h₀ and h'(t₀) = v₀ to determine c and d:
h(t₀) = −½gt₀² + ct₀ + d = h₀, h'(t₀) = −gt₀ + c = v₀,
which imply that c = v₀ + gt₀, d = h₀ + ½gt₀² − (v₀ + gt₀)t₀, and so
h(t) = −½gt² + (v₀ + gt₀)t + h₀ + ½gt₀² − (v₀ + gt₀)t₀ = −½g(t − t₀)² + v₀(t − t₀) + h₀,
where the last expression is obtained from the previous one through some algebra. Note that the last expression can also be deduced from the formula h(t) = −½gt² + v₀t + h₀ by considering t − t₀, the amount of time after t₀.

Example 1.2 Consider the differential equation
y'' + 3y' − 10y = 0.
To confirm that y₁(t) = e^{2t} is a solution, take the derivatives y₁'(t) = 2e^{2t}, y₁''(t) = 4e^{2t} and plug them into the equation to get 4e^{2t} + 3·2e^{2t} − 10e^{2t} = 0, which is true because 4 + 6 − 10 = 0. Similarly, one confirms that y₂(t) = e^{−5t} is a solution. This differential equation is a linear differential equation, because it has the form
Aₙ(t)y⁽ⁿ⁾(t) + Aₙ₋₁(t)y⁽ⁿ⁻¹⁾(t) + ⋯ + A₁(t)y'(t) + A₀(t)y(t) = Q(t),
where y⁽ⁿ⁾ denotes the n-th derivative of y, and Aₙ, Aₙ₋₁, ..., A₁, A₀, Q are some functions of t. Here A₂ = 1, A₁ = 3, A₀ = −10, while Q and all other Aₙ are 0. Because Aₙ = 0 for n > 2 and A₂ ≠ 0, it is a second order equation. Because A₂, A₁, A₀ are constant, it is a second order equation with constant coefficients. Finally, because the right-hand side is 0, it is a homogeneous second order equation with constant coefficients. The practical consequence of linearity and homogeneity is that any linear combination of solutions is again a solution. Two previously guessed solutions are y₁(t) = e^{2t}, y₂(t) = e^{−5t}, but the reasoning below works for any other two solutions y₁, y₂. For some fixed α, β ∈ R, let y(t) = αy₁(t) + βy₂(t). Then
y'' + 3y' − 10y = (αy₁ + βy₂)'' + 3(αy₁ + βy₂)' − 10(αy₁ + βy₂) = α(y₁'' + 3y₁' − 10y₁) + β(y₂'' + 3y₂' − 10y₂) = α·0 + β·0 = 0.
Hence y is a solution. With the two solutions y₁(t) = e^{2t}, y₂(t) = e^{−5t}, let's solve the initial value problem
y'' + 3y' − 10y = 0, y(0) = 4, y'(0) = 1.
Neither y₁(t) = e^{2t} nor y₂(t) = e^{−5t} satisfies the initial conditions y(0) = 4, y'(0) = 1. However, the general solution y(t) = αy₁(t) + βy₂(t) might, with an appropriate choice of

constants α and β. The condition y(0) = 4 becomes α + β = 4; the condition y'(0) = 1 becomes 2α − 5β = 1. Solving this system of two equations for α and β yields α = 3, β = 1. Hence y(t) = 3y₁(t) + y₂(t) = 3e^{2t} + e^{−5t} is the solution to the initial value problem. Finally, note that the solutions y₁(t) = e^{2t} and y₂(t) = e^{−5t} can be found by first solving the quadratic characteristic equation of the differential equation:
r² + 3r − 10 = 0,
which gives r = 2 or r = −5. Other homogeneous second order linear differential equations with constant coefficients, for which the characteristic equations have a repeated real root or complex roots, are solved similarly. See [1, Chapter 3] or [3, Chapter 4] for details.

Example 1.3 Consider the differential equation
dx/dt = x².
It is a separable equation: all occurrences of x can be moved to one side of the equation while all occurrences of t can be moved to the other side. One obtains
dx/x² = dt, ∫ dx/x² = ∫ dt, −1/x = t + c₀,
and ultimately x(t) = 1/(c − t), where c is a constant. Consider the function y(t) = 7x(t) = 7/(c − t). Note that
dy/dt = 7/(c − t)² ≠ 49/(c − t)² = y²,
and thus y is not a solution. Consequently, this is not a linear differential equation, which could have been concluded by noticing the x² term anyway.

Example 1.4 Given sufficient room and feed, the population of bunnies, the size of which at time t is b(t), grows exponentially according to
b'(t) = αb(t),
where α > 0 is some constant. This is a homogeneous first order linear differential equation, and its solution is b(t) = b(0)e^{αt}. Note that this function is well-defined for all t ∈ R. Similarly, given sufficient room and no bunnies to eat, the population of wolves, the size of which at time t is w(t), decays exponentially according to
w'(t) = −βw(t),
where β > 0 is some constant. When the populations of bunnies and wolves interact, bunnies get eaten by wolves, which has a negative effect on the former but a positive effect on the latter. This can be modeled by the system of differential equations
b'(t) = αb(t) − γb(t)w(t), w'(t) = −βw(t) + δb(t)w(t),
where γ, δ > 0 are some constants. The terms −γb(t)w(t) and δb(t)w(t) appear because the number of interactions between bunnies and wolves is proportional to the product b(t)w(t).
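The predator-prey system above is easy to explore numerically. The sketch below integrates it with SciPy and prints the population sizes at a few times; the parameter values and initial populations are illustrative choices, not values from the notes.

```python
import numpy as np
from scipy.integrate import solve_ivp

alpha, beta, gamma, delta = 1.0, 0.5, 0.1, 0.02   # illustrative constants

def predator_prey(t, y):
    b, w = y                      # b = bunnies, w = wolves
    return [alpha*b - gamma*b*w, -beta*w + delta*b*w]

sol = solve_ivp(predator_prey, (0.0, 30.0), [40.0, 9.0], dense_output=True)
for t in (0.0, 10.0, 20.0, 30.0):
    b, w = sol.sol(t)
    print(f"t={t:5.1f}  bunnies={b:7.2f}  wolves={w:6.2f}")
```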

Example 1.5 Given insufficient room and feed, the population of bunnies b(t) does not grow exponentially. Rather, it can be modeled by
b'(t) = αb(t)(m − b(t)),
where α, m > 0 are some constants. Note that when b(t) > m then b'(t) < 0, when 0 < b(t) < m then b'(t) > 0, and when b(t) < 0 (which has no practical interpretation in terms of bunnies) b'(t) < 0. Furthermore, b(t) ≡ 0 and b(t) ≡ m are two constant (equilibrium) solutions. The solution to the initial value problem consisting of this equation and b(0) = b₀ decreases from b₀ to m if b₀ > m and increases from b₀ to m if 0 < b₀ < m. This can be verified explicitly, because the equation is separable and can be solved. As long as b(t) ≠ 0 and b(t) ≠ m, one obtains
db/dt = αb(m − b), db/(b(m − b)) = α dt, (1/m)(1/b + 1/(m − b)) db = α dt, (1/b − 1/(b − m)) db = αm dt
and thus
ln|b| − ln|b − m| = αmt + c, ln|b/(b − m)| = αmt + c, |b/(b − m)| = e^c e^{αmt}.
Because b/(b − m) = ±|b/(b − m)|, one then obtains, with a constant d which can be positive or negative, b/(b − m) = d e^{αmt}, and solving this for b yields
b(t) = dm/(d − e^{−αmt}).
Consider, for example, the initial condition b(0) = 3m/2. Then 3m/2 = dm/(d − 1) yields d = 3 and then b(t) = 3m/(3 − e^{−αmt}), which is a decreasing function with lim_{t→∞} b(t) = m. If b(0) = m/2 < m then d = −1 and b(t) = m/(1 + e^{−αmt}), which is an increasing function with lim_{t→∞} b(t) = m.

2 Examples of simple control problems

Example 2.1 Consider a cart moving along an infinite track (the x-axis) with no friction. The cart has two rocket engines attached to it: one firing to the right, one firing to the left. Let x(t) be the position of the cart along the track at time t, so that x'(t) is the velocity, and x''(t) is the acceleration. For simplicity, suppose that the mass of the cart and the force applied to the cart by either of the rocket engines is 1. The differential equations describing the motion of the cart are as follows:
x''(t) = 0 if both engines are off; x''(t) = −1 if the right engine is on; x''(t) = 1 if the left engine is on.

Suppose that, at time 0, x(0) = 12 and x'(0) = −3; in other words, the cart is at x = 12 and is moving to the left with speed 3. How should one switch the engine or engines on and off in order to make the rocket cart stop at the origin at some time T > 0 (i.e., x(T) = 0, x'(T) = 0)? One idea is to do nothing for a while, let the cart roll to the left, then, at some time T₁, fire the left engine and let it run until the velocity is 0. T₁ must be picked carefully, so that the cart stops at 0, not elsewhere. The velocity of the cart after T₁, with the left engine running, solves the initial value problem x'(T₁) = −3, x''(t) = 1, and thus, for t ≥ T₁ and as long as the engine is running, x'(t) = −3 + (t − T₁). We want the cart to stop at some (unknown for now) time T, so we want x'(T) = 0, and thus 0 = −3 + (T − T₁). Consequently, T = T₁ + 3; in common words, the cart stops after 3 seconds of the left engine running. The question now is what this T₁ should be for the cart to stop at 0, i.e., with x(T) = 0. For t ∈ [0, T₁], the position of the cart solves the initial value problem
x(0) = 12, x'(0) = −3, x''(t) = 0,
and thus, for t ∈ [0, T₁], x(t) = 12 − 3t; in particular, x(T₁) = 12 − 3T₁ and x'(T₁) = −3. For t ∈ [T₁, T], the position of the cart solves the initial value problem
x(T₁) = 12 − 3T₁, x'(T₁) = −3, x''(t) = 1,
and thus, for t ∈ [T₁, T],
x(t) = 12 − 3T₁ − 3(t − T₁) + ½(t − T₁)²,
and in particular, x(T) = x(T₁ + 3) = 12 − 3T₁ − 3·3 + ½·3². We need x(T) = 0, so 0 = 12 − 3T₁ − 4.5 and consequently T₁ = 2.5. Hence, a winning strategy is: do nothing for 2.5 seconds; then start the left engine and run it for 3 seconds; then switch the engine off and keep both engines off forever. Some questions arise about the strategy above; they are formulated in the problem below.

Problem 2.2 The questions below pertain to Example 2.1. First, two questions regarding the strategy.
(a) What is the result of the strategy if the initial condition x(0) = 12 was not accurate? For example, what if x(0) = 11.8 and the same strategy is used?
(b) What is the result of the strategy if the initial condition x'(0) = −3 was not accurate? For example, what if x'(0) = −3.1 and the same strategy is used?

More interesting questions arise for other initial conditions.
(c) Suppose that x(0) = 12, x'(0) = 0. What is a strategy that makes the rocket cart stop at the origin at some time T > 0?
(d) Suppose that x(0) = 12, x'(0) = 8. What is a strategy that makes the rocket cart stop at the origin at some time T > 0?
(e) If one obtained a particular strategy in (d), can one find another strategy that achieves the same goal with less fuel consumption (less time of fuel burning)?
(f) Suppose that x(0) = 12, x'(0) = 8. What is the infimum of the fuel burning time over all strategies that stop the cart at the origin at some time T > 0?

Problem 2.3 This problem is about driving the rocket cart to the origin in the shortest possible time. Note: this is very different from using as little fuel as possible, i.e., from firing the engines for as little time as possible. For the shortest-time purpose, common sense suggests that coasting, i.e., having both engines off, is never an appropriate thing to do unless the cart is already at the origin and stopped. Hence, the rocket cart moves along the x-axis according to the differential equations
x''(t) = −1 if the right engine is on; x''(t) = 1 if the left engine is on.
Answer the following questions:
(a) Suppose that x(0) = 12, x'(0) = 0. What is a strategy that makes the rocket cart stop at the origin in the shortest possible time?
(b) Suppose that x(0) = 12, x'(0) = 8. What is a strategy that makes the rocket cart stop at the origin in the shortest possible time?

Example 2.4 Let z(t) denote the temperature in a room with a heater which can be on or off. Let z_off be the natural temperature of the room with the heater off and z_on the natural temperature of the room with the heater on. If the heater is off, the temperature z(t) evolves according to
z'(t) = −α(z(t) − z_off),
where α > 0 is some constant. If the heater is on, the temperature z(t) evolves according to
z'(t) = −β(z(t) − z_on).
For simplicity, suppose that z_off = 40, z_on = 80, α = ln 2, and β = ln 1.5, and furthermore, say that z(0) = 50. How should one switch the heater on and off in order to raise the temperature to be between 60 and 65 and later keep the temperature in that range?
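One natural answer for Example 2.4 is a hysteresis rule: run the heater until the temperature reaches 65, leave it off until the temperature drops to 60, and repeat. The sketch below simulates this rule with a simple Euler loop; the constants follow the example as stated above, and the switching rule itself is just one plausible strategy, not the only one.

```python
import math

z_off, z_on = 40.0, 80.0
alpha, beta = math.log(2), math.log(1.5)

def simulate(z0=50.0, t_end=20.0, dt=1e-3):
    z, heater, t = z0, True, 0.0          # start with the heater on to raise z
    while t < t_end:
        if heater and z >= 65.0:
            heater = False                # warm enough: switch off
        elif not heater and z <= 60.0:
            heater = True                 # too cool: switch back on
        dz = -beta*(z - z_on) if heater else -alpha*(z - z_off)
        z += dt*dz
        t += dt
    return z

print(simulate())   # stays between 60 and 65 once that band is reached
```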

3 Standard form

The standard form of an autonomous differential equation is
x' = f(x),
where x ∈ Rⁿ and f : Rⁿ → Rⁿ is a function. Similarly, the standard form of a nonautonomous differential equation is
x' = f(x, t),
where x ∈ Rⁿ, t ∈ R, and f : Rⁿ⁺¹ → Rⁿ is a function. The standard form of an autonomous control system is
x' = f(x, u),
where x ∈ Rⁿ is the state of the system, u ∈ Rᵏ is the control input, and f : Rⁿ⁺ᵏ → Rⁿ is a function.

Example 3.1 To write the differential equation y'' + 3y' − 10y = 0 of Example 1.2 in standard form, let x₁ = y, x₂ = y' and note that x₁' = y' = x₂, which comes from the choice of x₁, x₂, while x₂' = y'' = 10y − 3y' = 10x₁ − 3x₂, which comes from the differential equation. Then, with x = (x₁, x₂) ∈ R², one obtains
x' = (x₁', x₂') = (x₂, 10x₁ − 3x₂).
Thus x' = Ax with A = [[0, 1],[10, −3]].

Example 3.2 The system of differential equations from the predator-prey Example 1.4 becomes
x₁' = αx₁ − γx₁x₂, x₂' = −βx₂ + δx₁x₂
after the change of variables x₁ = b, x₂ = w. Here x' = f(x) with f(x) = (αx₁ − γx₁x₂, −βx₂ + δx₁x₂).

Example 3.3 The rocket cart in Example 2.1, with x₁ denoting the position and x₂ = x₁' denoting the velocity of the cart, and the control u being either −1, 0, or 1, takes the form
x₁' = x₂, x₂' = u, i.e., x' = [[0, 1],[0, 0]]x + (0, 1)u.

4 Linear systems

This section considers autonomous and homogeneous linear differential equations of first order, or linear systems in short, of the form
x' = Ax,
where x ∈ Rⁿ and A is a real n×n matrix, i.e., A ∈ Rⁿˣⁿ. Recall first that linear differential equations in the general form of Example 1.2, when homogeneous and brought to standard form, take this shape. Solving linear equations of second order, for example y'' + 3y' − 10y = 0 from Example 1.2, can be done without matrix methods; this is discussed in [1, Chapter 3] or [3, Chapter 4].

Note that if λ ∈ R is an eigenvalue of A with an associated eigenvector v ∈ Rⁿ, that is, when Av = λv, then e^{λt}v solves x' = Ax. Indeed,
(e^{λt}v)' = e^{λt}λv = e^{λt}Av = A(e^{λt}v).
Furthermore, given several such eigenvalue-eigenvector pairs (λᵢ, vᵢ) and constants cᵢ, i = 1, 2, ..., m, one can easily check that Σᵢ cᵢe^{λᵢt}vᵢ is a solution. Hence, solving linear systems with matrix analysis tools works particularly well when A has n linearly independent eigenvectors, which is in particular true when A has n distinct real eigenvalues. This is illustrated in the example below.

Example 4.1 Let A = [[1, 1],[4, 1]]. To find eigenvalues, set the determinant of [[1 − λ, 1],[4, 1 − λ]] to 0 and solve for λ. (1 − λ)² − 4 = 0 yields two eigenvalues λ₁ = 3, λ₂ = −1. To find an eigenvector v₁, solve Av₁ = λ₁v₁, that is, (A − 3I)v₁ = 0, that is, [[−2, 1],[4, −2]]v₁ = 0. Get v₁ = (1, 2), and similarly, get v₂ = (1, −2). Note that v₁ and v₂ are linearly independent (this always happens for distinct real eigenvalues; why?) and so any initial value problem
x' = Ax, x(0) = x₀
can be solved by picking constants c₁, c₂ in the general solution
x(t) = c₁e^{3t}(1, 2) + c₂e^{−t}(1, −2)
to match the initial condition x(0) = x₀.

Exercise 4.2 Find the solution to x' = Ax, x(0) = x₀, with A as in Example 4.1 and x₀ = (3, 4).
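The eigenvalue computation in Example 4.1 is easy to reproduce numerically. The sketch below (using NumPy) finds the eigenpairs of A = [[1,1],[4,1]] and builds the solution of x' = Ax for a sample initial condition by matching the constants c₁, c₂.

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [4.0, 1.0]])
evals, evecs = np.linalg.eig(A)        # columns of evecs are eigenvectors
print(evals)                           # approximately [ 3., -1.]

x0 = np.array([3.0, 4.0])              # sample initial condition
c = np.linalg.solve(evecs, x0)         # constants so that sum_i c_i v_i = x0

def x(t):
    # general solution: sum_i c_i * exp(lambda_i * t) * v_i
    return evecs @ (c * np.exp(evals * t))

print(x(0.0))                                                  # returns x0
print(np.allclose((x(1e-6) - x0) / 1e-6, A @ x0, atol=1e-3))   # dx/dt ~ A x at t=0
```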

The situation is less simple when there are not n linearly independent eigenvectors for A. The following fact helps with the analysis of x' = Ax.

Fact 4.3 For every A ∈ Rⁿˣⁿ there exist a nonsingular matrix M ∈ Rⁿˣⁿ and a matrix J ∈ Rⁿˣⁿ in real Jordan form so that A = MJM⁻¹, J = M⁻¹AM.

Consider the change of coordinates z = M⁻¹x, x = Mz. Then x' = Ax turns into M⁻¹x' = M⁻¹AMM⁻¹x, so (M⁻¹x)' = J(M⁻¹x), and hence
z' = Jz.
Advantages of dealing with z' = Jz rather than with x' = Ax are due to the form of J.

4.1 Linear systems in 2 dimensions

For 2×2 matrices, the real Jordan form can take the following forms:
[[λ₁, 0],[0, λ₂]], [[λ, 0],[0, λ]], [[λ, 1],[0, λ]], [[α, β],[−β, α]],
and these forms occur, respectively, when A has two distinct real eigenvalues λ₁, λ₂; when A has a single repeated real eigenvalue λ and two linearly independent eigenvectors; when A has a single repeated real eigenvalue λ and, up to scaling, only one eigenvector; and when A has complex eigenvalues α + iβ, α − iβ. The examples below follow the general discussion in [2].

The case of A having two distinct real eigenvalues λ₁, λ₂ is illustrated first.

Example 4.4 Let A = MJM⁻¹ with J = diag(λ₁, λ₂), λ₁ ≠ λ₂, and an invertible M with columns v₁, v₂. Note that A = MJM⁻¹ means AM = MJ, and reading this identity column by column shows that Av₁ = λ₁v₁ and Av₂ = λ₂v₂. Thus λ₁ is an eigenvalue of A with eigenvector v₁ and λ₂ is an eigenvalue of A with eigenvector v₂. Note also that M⁻¹v₁ = (1, 0) and M⁻¹v₂ = (0, 1).

Now, let φ(t) be a solution to x' = Ax. Let ψ(t) = M⁻¹φ(t) and note that ψ(t) solves z' = Jz. Write ψ(t) as (ψ₁(t), ψ₂(t)). Then z' = Jz turns into
ψ₁'(t) = λ₁ψ₁(t), ψ₂'(t) = λ₂ψ₂(t).
Consequently, ψ₁(t) = ψ₁(0)e^{λ₁t}, ψ₂(t) = ψ₂(0)e^{λ₂t}, and so
ψ(t) = (ψ₁(0)e^{λ₁t}, ψ₂(0)e^{λ₂t}) = ψ₁(0)e^{λ₁t}(1, 0) + ψ₂(0)e^{λ₂t}(0, 1).
Such solutions ψ(t) can be easily sketched in the z coordinate system, and later translated to the x coordinate system. Explicitly, the solution φ(t) to x' = Ax is
φ(t) = Mψ(t) = ψ₁(0)e^{λ₁t}v₁ + ψ₂(0)e^{λ₂t}v₂.
Given this general form c₁e^{λ₁t}v₁ + c₂e^{λ₂t}v₂ of a solution to x' = Ax, the solution with a prescribed initial condition x(0) = x₀ is found by picking the constants c₁ and c₂ to satisfy the initial condition, that is, by solving c₁v₁ + c₂v₂ = x₀.
Similar analysis can be done when A has two distinct nonzero real eigenvalues of different signs, and when A has two distinct negative eigenvalues. The situation when A has a single repeated nonzero real eigenvalue λ with two linearly independent eigenvectors is also similar but not all that interesting: in such a case, A = [[λ, 0],[0, λ]]. The situation is more interesting when to such a single eigenvalue there corresponds, up to scaling, only one eigenvector. The following example illustrates this.

Example 4.5 Let A = MJM⁻¹ with J = [[λ, 1],[0, λ]] for a repeated real eigenvalue λ < 0 and an invertible M with columns v₁, v₂.

Here λ is the unique repeated eigenvalue of A, with eigenvector v₁. Solving z' = [[λ, 1],[0, λ]]z, that is, z₁' = λz₁ + z₂, z₂' = λz₂, yields
z₂(t) = z₂(0)e^{λt}, z₁(t) = (z₁(0) + z₂(0)t)e^{λt}.
Correspondingly,
x(t) = Mz(t) = (z₁(0) + z₂(0)t)e^{λt}v₁ + z₂(0)e^{λt}v₂,
so x(t) is a combination of e^{λt} terms and te^{λt} terms. Because λ < 0, both z₁ and z₂ tend to 0 as t → ∞, and the same holds for x₁ and x₂.

Now, to practice working in more generality, suppose that
A = M[[α, β],[−β, α]]M⁻¹,
which corresponds to A having complex eigenvalues α ± iβ. Then z = M⁻¹x evolves according to
z' = [[α, β],[−β, α]]z.
It is illustrative to look at r = √(z₁² + z₂²), θ = arctan(z₂/z₁):
r' = (z₁z₁' + z₂z₂')/√(z₁² + z₂²) = (z₁(αz₁ + βz₂) + z₂(−βz₁ + αz₂))/r = α(z₁² + z₂²)/r = αr,
θ' = (z₂'z₁ − z₂z₁')/(z₁² + z₂²) = ((−βz₁ + αz₂)z₁ − z₂(αz₁ + βz₂))/(z₁² + z₂²) = −β.
That is, r' = αr, so r grows or decays exponentially (or stays constant) depending on α, and θ' = −β, so θ grows or decays linearly (or stays constant) depending on β. The actual solutions (see [3, Section 9.6]) are
z₁(t) = e^{αt}(cos(βt)z₁(0) + sin(βt)z₂(0)), z₂(t) = e^{αt}(−sin(βt)z₁(0) + cos(βt)z₂(0)),
and so
z(t) = e^{αt}[[cos βt, sin βt],[−sin βt, cos βt]]z(0).
It is fun to verify that this is a solution using matrix notation.

Exercise 4.6 Let A = MJM⁻¹ with J = [[α, β],[−β, α]] and an invertible M (pick a concrete numerical example). Find a solution to x' = Ax with x(0) = (4, 6) by finding the corresponding z(0) = M⁻¹x(0), using this initial condition to solve z' = Jz, and then finding the solution x = Mz.

Exercise 4.7 Find a general solution to x' = Ax for a 3×3 matrix A given together with its real Jordan decomposition A = MJM⁻¹.

It remains to discuss the cases of one or both eigenvalues being 0.

4.2 Matrix exponential

Given an n×n matrix A, the matrix exponential of A, written e^A, is defined as
e^A = I + A + (1/2!)A² + (1/3!)A³ + ⋯.
Ignoring the issues of convergence of the infinite series involved, one can show directly from the definition of e^A that:
e^{λI} = e^λI for any λ ∈ R;
e^{A+B} = e^Ae^B for any A, B ∈ Rⁿˣⁿ that commute (AB = BA);
(d/dt)e^{At} = Ae^{At} for any A ∈ Rⁿˣⁿ, t ∈ R;
e^A = Me^JM⁻¹ and e^{At} = Me^{Jt}M⁻¹ if A = MJM⁻¹.
For example, if A = MJM⁻¹ then A² = MJM⁻¹MJM⁻¹ = MJ²M⁻¹ and, similarly, Aⁱ = MJⁱM⁻¹ for i = 3, 4, ..., and consequently
e^{At} = I + At + (1/2!)A²t² + (1/3!)A³t³ + ⋯ = MIM⁻¹ + MJtM⁻¹ + M(1/2!)J²t²M⁻¹ + M(1/3!)J³t³M⁻¹ + ⋯ = M(I + Jt + (1/2!)J²t² + (1/3!)J³t³ + ⋯)M⁻¹ = Me^{Jt}M⁻¹.
These properties then imply that the solution (assuming there is only one) to the initial value problem x' = Ax, x(0) = x₀ is provided by x(t) = e^{At}x₀. Indeed, then (d/dt)x(t) = (d/dt)(e^{At}x₀) = Ae^{At}x₀ = Ax(t). It is instructive to verify this for A in one of the forms [[λ₁, 0],[0, λ₂]], [[λ, 1],[0, λ]], or [[α, β],[−β, α]], with the last case handled by looking at the case α = 0 first.
The real Jordan form representation of A may help with computing e^{At} because e^{At} = e^{MJM⁻¹t} = Me^{Jt}M⁻¹. The special structure of J makes it easier to find e^{Jt} in comparison to a general matrix A. If J = diag{λᵢ} then e^{Jt} = diag{e^{λᵢt}}. If λ ∈ R is such that (J − λI)ᵖ = 0 for some p ∈ N, then
e^{Jt} = e^{λtI}e^{(J−λI)t} = e^{λt}(I + (J − λI)t + (1/2!)(J − λI)²t² + ⋯ + (1/(p−1)!)(J − λI)ᵖ⁻¹tᵖ⁻¹).
The advantage of such an expression is that it is not an infinite series; rather, it has finitely many terms.
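The matrix exponential is available in SciPy as scipy.linalg.expm. The sketch below checks numerically, for the matrix of Example 4.1, that x(t) = e^{At}x₀ solves the initial value problem and that e^{At} = M e^{Jt} M⁻¹ for the eigendecomposition A = MJM⁻¹.

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[1.0, 1.0],
              [4.0, 1.0]])
x0 = np.array([3.0, 4.0])
t = 0.7

# x(t) = e^{At} x0 satisfies dx/dt = A x; check with a finite difference
h = 1e-6
x_t = expm(A*t) @ x0
dxdt = (expm(A*(t + h)) @ x0 - x_t) / h
print(np.allclose(dxdt, A @ x_t, atol=1e-3))    # True

# e^{At} = M e^{Jt} M^{-1}, with J the diagonal of eigenvalues
evals, M = np.linalg.eig(A)
eJt = np.diag(np.exp(evals*t))
print(np.allclose(expm(A*t), M @ eJt @ np.linalg.inv(M)))   # True
```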

Example 4.8 Let J = [[3, 1],[0, 3]]. Then
J − 3I = [[0, 1],[0, 0]], (J − 3I)² = [[0, 0],[0, 0]], ..., (J − 3I)^q = 0 for every q ≥ 2,
and hence
e^{Jt} = e^{3t}(I + (J − 3I)t) = e^{3t}[[1, t],[0, 1]] = [[e^{3t}, te^{3t}],[0, e^{3t}]].

Exercise 4.9 Find e^{Jt} for a 3×3 Jordan block J, that is, for J with a repeated eigenvalue λ on the diagonal, 1's directly above the diagonal, and 0's elsewhere.

4.3 Large matrices

Knowing how to solve differential equations z' = Jz with a 2×2 matrix J in one of the forms
[[λ₁, 0],[0, λ₂]], [[λ, 0],[0, λ]], [[λ, 1],[0, λ]], [[α, β],[−β, α]]
lets one deal with larger matrices J that may show up as the real Jordan form of a large matrix A. For example, suppose that A = MJM⁻¹ with the 6×6 block-diagonal matrix
J = diag(λ₁, λ₂, [[λ₃, 1],[0, λ₃]], [[α, β],[−β, α]]).
Then z' = Jz breaks down into
z₁' = λ₁z₁, z₂' = λ₂z₂, (z₃', z₄') = [[λ₃, 1],[0, λ₃]](z₃, z₄), (z₅', z₆') = [[α, β],[−β, α]](z₅, z₆).
Solutions to these four separate differential equations are obtained separately, following Section 4.1. With given initial conditions, one obtains
z₁(t) = e^{λ₁t}z₁(0), z₂(t) = e^{λ₂t}z₂(0),
(z₃(t), z₄(t)) = e^{λ₃t}[[1, t],[0, 1]](z₃(0), z₄(0)),
(z₅(t), z₆(t)) = e^{αt}[[cos βt, sin βt],[−sin βt, cos βt]](z₅(0), z₆(0)),

and consequently,
z(t) = (e^{λ₁t}z₁(0), e^{λ₂t}z₂(0), e^{λ₃t}(z₃(0) + tz₄(0)), e^{λ₃t}z₄(0), e^{αt}(cos(βt)z₅(0) + sin(βt)z₆(0)), e^{αt}(−sin(βt)z₅(0) + cos(βt)z₆(0))).
Unfortunately, the discussion above does not cover all the smaller blocks that may show up in a real Jordan form of a matrix. One example of another kind of block is in Exercise 4.9. Another is
J = [[α, β, 1, 0],[−β, α, 0, 1],[0, 0, α, β],[0, 0, −β, α]].

Exercise 4.10 Find e^{Jt} if J is given by the 4×4 matrix above.

4.4 Non-autonomous linear systems

Exercise 4.10 can be nicely handled by relying on the general framework for solving
x' = Ax + b(t),
where, as before, x ∈ Rⁿ, A is a real n×n matrix, and b : [0, ∞) → Rⁿ is a vector-valued function. Equipped with the notion of the matrix exponential, the solution goes as follows: x'(t) − Ax(t) = b(t), and consequently
(d/dt)(e^{−At}x(t)) = e^{−At}x'(t) − e^{−At}Ax(t) = e^{−At}b(t),
so e^{−At}x(t) = ∫₀ᵗ e^{−As}b(s) ds + c. With an initial condition x(0) = x₀, one gets c = x₀ and the solution becomes
x(t) = e^{At}x₀ + ∫₀ᵗ e^{A(t−s)}b(s) ds.

4.5 Qualitative properties of solutions to linear systems

Several properties of solutions to the homogeneous system x' = Ax and the non-homogeneous system x' = Ax + b(t) are summarized below.

Given solutions x₁, x₂, ..., x_k to x' = Ax on an interval [0, T] and constants c₁, c₂, ..., c_k, the function x defined by x(t) = c₁x₁(t) + c₂x₂(t) + ⋯ + c_kx_k(t) is a solution to x' = Ax. This also holds for x' = Ax + b(t) if c₁ + c₂ + ⋯ + c_k = 1. In particular, if x : [0, T] → Rⁿ is a solution to x' = Ax, then so is cx for any constant c. This homogeneity property means that local behavior and global behavior of solutions to x' = Ax are the same: for example, the saddle-point behavior of solutions for a matrix A with one negative and one positive real eigenvalue is visible no matter how close one zooms in on the origin or how far one zooms out.
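The variation-of-constants formula of Section 4.4 can be checked numerically. The sketch below evaluates x(T) = e^{AT}x₀ + ∫₀ᵀ e^{A(T−s)}b(s) ds by quadrature for a sample A and b (both illustrative) and compares with a direct numerical integration of x' = Ax + b(t).

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp, quad_vec

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
b = lambda t: np.array([0.0, np.sin(t)])     # illustrative forcing term
x0 = np.array([1.0, 0.0])
T = 5.0

# variation of constants: x(T) = e^{AT} x0 + int_0^T e^{A(T-s)} b(s) ds
integral, _ = quad_vec(lambda s: expm(A*(T - s)) @ b(s), 0.0, T)
x_voc = expm(A*T) @ x0 + integral

# direct numerical solution of x' = A x + b(t)
sol = solve_ivp(lambda t, x: A @ x + b(t), (0.0, T), x0, rtol=1e-9, atol=1e-12)
print(x_voc, sol.y[:, -1])                   # the two agree to high accuracy
```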

The result below, which will then lead to several properties of solutions to x' = Ax and x' = Ax + b(t), relies on the fact that for every n×n matrix A there exists a constant k such that for every vector x ∈ Rⁿ, ‖Ax‖ ≤ k‖x‖. In more technical language, this constant is the induced norm of A, usually denoted by ‖A‖ and defined by ‖A‖ = max{‖Av‖ : ‖v‖ = 1}.

Lemma 4.11 Let k be such that for every vector x ∈ Rⁿ, ‖Ax‖ ≤ k‖x‖. If y : [0, T] → Rⁿ is a solution to x' = Ax then, for every t ∈ [0, T], ‖y(t)‖ ≤ e^{kt}‖y(0)‖.

Proof. Consider the function α : [0, T] → [0, ∞) given by α(t) = ‖y(t)‖². Then α is differentiable, because y and the norm squared are, and
(d/dt)α(t) = 2y(t)·y'(t) = 2y(t)·Ay(t) ≤ 2‖y(t)‖‖Ay(t)‖ ≤ 2‖y(t)‖k‖y(t)‖ = 2k‖y(t)‖² = 2kα(t),
where the first inequality just says that the dot product of two vectors u and v is bounded above by ‖u‖‖v‖. Hence (d/dt)α(t) ≤ 2kα(t) and consequently α(t) ≤ e^{2kt}α(0). Then ‖y(t)‖ ≤ e^{kt}‖y(0)‖.

The first consequence of Lemma 4.11 is that the size of a solution x to x' = Ax is bounded above, for any t in the domain of x, by an exponential function of t. Thus, there are no solutions x : [0, T) → Rⁿ such that lim_{t→T} ‖x(t)‖ = ∞. In other words, solutions to autonomous and homogeneous linear systems do not experience finite-time blow-up. This is in contrast to simple nonlinear differential equations, for example dx/dt = x² from Example 1.3: for x(0) > 0, solutions have the form x(t) = 1/(1/x(0) − t) and so lim_{t→1/x(0)} x(t) = ∞.

Let x₁, x₂ : [0, T] → Rⁿ be two solutions to x' = Ax + b(t) with x₁(0) = x₂(0). Consider y : [0, T] → Rⁿ defined by y(t) = x₁(t) − x₂(t), which is a solution to x' = Ax. Then, for some k, ‖y(t)‖ ≤ e^{kt}‖y(0)‖ = 0, and consequently x₁(t) = x₂(t) for all t ∈ [0, T]. That is, solutions to x' = Ax + b(t) are unique. This is in contrast to simple nonlinear differential equations; see Example 6.1.

Let x₁, x₂ : [0, T] → Rⁿ be two solutions to x' = Ax + b(t). Consider y : [0, T] → Rⁿ defined by y(t) = x₁(t) − x₂(t), which is a solution to x' = Ax. Then, for some k, ‖y(t)‖ ≤ e^{kt}‖y(0)‖, and consequently, for every t ∈ [0, T],
‖x₁(t) − x₂(t)‖ ≤ e^{kt}‖x₁(0) − x₂(0)‖ ≤ e^{kT}‖x₁(0) − x₂(0)‖,
where the constant e^{kT} depends on the matrix A and the length T of the interval [0, T], but not on the particular solutions x₁, x₂. One can conclude that, subject to their existence, the solutions to x' = Ax + b(t) depend uniformly continuously on initial conditions: for every ε > 0 and every T > 0, there exists δ > 0 (which can be taken to be e^{−kT}ε) such that for every initial point x₀ and every initial point x₀' with ‖x₀ − x₀'‖ < δ, one has ‖x'(t) − x(t)‖ < ε for all t ∈ [0, T], where x, x' : [0, T] → Rⁿ are the solutions with initial conditions x₀, x₀'.

Problem 4.12 Let x(t) = (x₁(t), x₂(t)) and y(t) = (y₁(t), y₂(t))

be two solutions to the differential equation x' = Ax on the interval [0, T], with A = [[a₁₁, a₁₂],[a₂₁, a₂₂]]. Let W(t) be the determinant of
[[x₁(t), y₁(t)],[x₂(t), y₂(t)]];
this function is called the Wronskian of x(t) and y(t).
(a) Show that dW/dt = det[[x₁'(t), y₁'(t)],[x₂(t), y₂(t)]] + det[[x₁(t), y₁(t)],[x₂'(t), y₂'(t)]].
(b) Show that dW/dt = (a₁₁ + a₂₂)W.
(c) Prove that either x(t) and y(t) are linearly independent for every t ∈ [0, T] or x(t) and y(t) are linearly dependent for every t ∈ [0, T].

5 Introduction to feedback control

The rocket cart described in Example 2.1 moves according to x''(t) = 0 if both engines are off, x''(t) = −1 if the right engine is on, and x''(t) = 1 if the left engine is on. In the example, it was established that to make the cart stop at 0 given the initial conditions x(0) = 12 and x'(0) = −3, one applies the strategy which results in the position x(t) of the cart solving the differential equation x''(t) = u(t), where the function u : [0, ∞) → {−1, 0, 1} is given by
u(t) = 0 if t ∈ [0, 2.5), u(t) = 1 if t ∈ [2.5, 5.5), u(t) = 0 if t ∈ [5.5, ∞).
The solution x : [0, ∞) → R is then x(t) = 12 − 3t for t ∈ [0, 2.5); x(t) = 4.5 − 3(t − 2.5) + ½(t − 2.5)² for t ∈ [2.5, 5.5); and x(t) = 0 for t ∈ [5.5, ∞).
Questions (a) and (b) in Problem 2.2 showed that the desired outcome, the cart resting at 0 for all large enough t, is sensitive to small changes in initial conditions. Using the same u(t) with an initial position x(0) close but not equal to 12 results in the cart resting, but not at 0; using this u(t) with an initial velocity x'(0) close but not equal to −3 results in the cart moving with constant speed for all t ≥ 5.5. In other words, different initial conditions require different strategies; in fact significantly different strategies, as underlined by question (d) in Problem 2.2. Would it not be lovely to have one formula that determines when to fire which engine, independently of the initial conditions? In other words, is there a function k : R² → {−1, 0, 1} so that all solutions to the differential equation x'' = k(x, x') get to 0 and stop there? The problem below attempts to find such a formula, with the additional goal of minimizing the time it takes the cart to get to 0.

Problem 5.1 This problem revisits the issue of driving the rocket cart to the origin in the shortest possible time, treated before in Problem 2.3. Do the following:
(a) Knowing that x(t) = x(0) + v(0)t ± 0.5t², v(t) = v(0) ± t, depending on whether the left engine or the right engine is on, sketch two pictures, one for the case of the left engine being on, one for the case of the right engine being on, showing the trajectories of the rocket cart (its position and velocity) in the xv-plane.

(b) Superimpose the two pictures and sketch the results of the strategies from Problem 2.3 (a) and (b) in the superimposed picture.
(c) Design a feedback control law for driving the cart to the origin in the shortest possible time. That is, come up with a rule that determines whether the left or the right engine should be running based on the state of the rocket cart, i.e., based on the current position and velocity of the cart.

The answer to Problem 5.1 (c) is this: if the current state (x, v) of the cart is (0, 0), that is, if the cart is resting at the origin, have both engines off. If not, use a = −1 (that is, fire the right engine) if v ≥ 0 and x ≥ −½v², or if v < 0 and x > ½v²; use a = 1 (that is, fire the left engine) if v ≤ 0 and x ≤ ½v², or if v > 0 and x < −½v².
A different way to formulate this is to consider the function
α(x) = √(−2x) if x < 0, α(x) = −√(2x) if x ≥ 0,
and say that one should have a = −1 if v > α(x); one should have a = 1 if v < α(x); and if v = α(x), then have a = −1 if x < 0 and a = 1 if x > 0. Consider the function
k(x, v) = −1 if v > α(x) or (v = α(x), x < 0); k(x, v) = 1 if v < α(x) or (v = α(x), x > 0); k(x, v) = 0 if x = 0, v = 0.
The task of controlling the rocket cart in a way that makes it stop at the origin in the shortest possible time results in the state of the cart, consisting of the position x and the velocity v = x', solving the differential equations x' = v, v' = k(x, v). Note that the right-hand side of this system is given by a discontinuous function.
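The switching rule above is easy to test in simulation. The sketch below implements the feedback k(x, v) and integrates the closed loop with a simple Euler scheme from a few sample initial conditions; in each case the cart ends up at rest near the origin, up to the numerical tolerances used here.

```python
import math

def alpha(x):
    # switching curve v = alpha(x), i.e. x = -0.5*v*|v|
    return math.sqrt(-2.0*x) if x < 0 else -math.sqrt(2.0*x)

def k(x, v, tol=1e-9):
    if abs(x) < tol and abs(v) < tol:
        return 0.0
    a = alpha(x)
    if v > a or (abs(v - a) < tol and x < 0):
        return -1.0          # fire the right engine
    return 1.0               # fire the left engine

def simulate(x, v, dt=1e-4, t_end=30.0):
    t = 0.0
    while t < t_end and (abs(x) > 1e-3 or abs(v) > 1e-3):
        u = k(x, v)
        x, v, t = x + dt*v, v + dt*u, t + dt
    return t, x, v

for x0, v0 in [(12.0, -3.0), (12.0, 0.0), (0.0, 8.0)]:
    print((x0, v0), "->", simulate(x0, v0))
```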

5.1 Linear feedback control problems

Problem 5.1 and the discussion surrounding it considered a rocket cart whose engines can cause acceleration of −1, 0, or 1, and the goal of stopping the cart at the origin in the shortest possible time. The resulting strategy is a discontinuous function of the cart's current position and velocity. Now, let us insist on finding a continuous function of the cart's current position and velocity that achieves the goal of stopping the cart at the origin, but not necessarily in the shortest possible time. In fact, let's weaken the goal further: we wish to have the cart's position and velocity converge to 0, but strengthen the condition on the strategy: we want it to be a linear function of the current position and velocity. Of course, this is impossible with the old rocket engines; so let's suppose that upgraded engines are available, which can cause the cart to have any acceleration, positive or negative, large or small. We arrive at the following problem:

Example 5.2 Find a function k(x, v) = ax + bv such that setting u = k(x, v) in the differential equation x'' = u results in a differential equation all of whose solutions, together with their derivatives, converge to 0. When written in standard form, the differential equation turns into
x₁' = x₂, x₂' = u = ax₁ + bx₂, that is, x' = [[0, 1],[a, b]]x.
From Section 4.1, it follows that a linear differential equation x' = Ax has all of its solutions converging to 0 in these cases:
(i) A has two real eigenvalues λ₁, λ₂, not necessarily distinct, and both are negative;
(ii) A has complex eigenvalues α ± iβ and α < 0.
The eigenvalues of the matrix [[0, 1],[a, b]] solve the quadratic equation λ² − bλ − a = 0. Case (i) occurs if b² + 4a ≥ 0 and both (b − √(b² + 4a))/2 and (b + √(b² + 4a))/2 are negative, which occurs when b + √(b² + 4a) is negative. Now, b + √(b² + 4a) < 0 if b² + 4a ≥ 0, a < 0, and b < 0. Case (ii) occurs if b² + 4a < 0, which implies that a < 0, and b < 0; the second condition comes from seeing that when b² + 4a < 0, the complex solutions to the quadratic equation are (b ± i√(−b² − 4a))/2. Thus, all solutions to the differential equation x' = Ax converge to 0 if a < 0 and b < 0.
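A quick numerical sanity check of this conclusion: the sketch below picks sample gains a, b < 0 (illustrative values), confirms that the eigenvalues of [[0,1],[a,b]] have negative real parts, and integrates the closed loop x'' = ax + bx' from a sample initial condition.

```python
import numpy as np
from scipy.integrate import solve_ivp

a, b = -2.0, -3.0                      # illustrative gains, both negative
A = np.array([[0.0, 1.0],
              [a,   b]])
print(np.linalg.eigvals(A))            # real parts negative: [-1., -2.]

# closed loop x'' = a*x + b*x' from x(0) = 12, x'(0) = -3
sol = solve_ivp(lambda t, z: A @ z, (0.0, 15.0), [12.0, -3.0], rtol=1e-9)
print(sol.y[:, -1])                    # position and velocity both near 0
```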

6 Introduction to nonlinear differential equations and nonlinear systems

Several properties of linear systems were collected in Section 4.5. For autonomous and homogeneous linear systems x' = Ax, solutions are unique, have at most exponential growth, and depend continuously on initial conditions. Several of these properties can fail for even simple nonlinear differential equations but, as is proven later, still do hold for many nonlinear differential equations. Of course, there is little hope for a linear combination of solutions to a nonlinear differential equation to be a solution to that differential equation, just as one should not expect a linear combination of two solutions to a quadratic algebraic equation to be a solution to it. The example below illustrates that an initial value problem with a nonlinear differential equation can have many, in fact infinitely many, solutions.

Example 6.1 Consider the differential equation x' = 2√x, to be solved with nonnegative solutions. One solution can be guessed: x(t) = 0 for all t. Separating the variables, under the condition that x ≠ 0, yields x(t) = (t + c)². For the initial condition x(0) = 0, one obtains a solution x(t) = t², and this is different from the previously obtained x(t) ≡ 0. One can obtain other solutions with x(0) = 0 by combining the two solutions found so far. Note that, for any T ≥ 0, the function
x(t) = 0 if t ≤ T, x(t) = (t − T)² if t > T
is a solution to x' = 2√x. Consequently, the initial value problem x' = 2√x, x(0) = 0 has infinitely many solutions.

Exercise 6.2 Consider the initial value problem x' = 3x^{2/3}, x(0) = 0.
(a) Find a solution x(t) on [0, ∞) such that x(4) = 8.
(b) Find a solution x(t) on [0, ∞) such that x(2) = 8.
(c) Find two different solutions x₁(t) and x₂(t) on [0, ∞) such that x₁(t) = x₂(t) for all t ∈ [0, 5].
(d) Is there a solution x(t) such that x(3) = −8? Prove your answer.

Example 1.3 illustrated the case where solutions to the differential equation x' = x² blow up in finite time: solutions with x(0) > 0 satisfy lim_{t→1/x(0)} x(t) = ∞. The problem below asks the reader to find a differential equation with a faster blow-up time.

Problem 6.3 Find a function f(x) such that the solution to the initial value problem x' = f(x), x(0) = 1 satisfies lim_{t→0.1} x(t) = ∞.

Another feature of nonlinear differential equations is that they can have many isolated equilibria, that is, isolated points where the dynamics are 0. In contrast, a linear system x' = Ax has an isolated equilibrium at 0 if A is an invertible matrix, or a subspace consisting of equilibria if A is not invertible.

Example 6.4 Consider the differential equation
x' = −x(x − 1)(x − 2)²(x − 3)³,
which has equilibria at 0, 1, 2, and 3. Inspecting the sign of f(x) = −x(x − 1)(x − 2)²(x − 3)³ lets one determine the behavior of solutions to the differential equation. For example, because f(x) > 0 for x < 0, solutions with initial points x(0) < 0 increase and converge to 0. Because f(x) > 0 for 1 < x < 2, solutions with initial points 1 < x(0) < 2 increase and converge to 2. Furthermore, the equilibrium point 0 is locally asymptotically stable because f(x) < 0 for 0 < x < 1 and solutions with 0 < x(0) < 1 decrease and converge to 0. Similar analysis can be done about the equilibria 1, 2, and 3. Note that the behavior of solutions with initial points −1 < x(0) < 1 can be verified through the use of V(x) = x², which measures the distance squared of x from 0. If x(t) is a solution to the differential equation, then
(d/dt)V(x) = (d/dt)x² = 2xx' = 2x·(−x(x − 1)(x − 2)²(x − 3)³) = −2x²(x − 1)(x − 2)²(x − 3)³.
One can now check that the function on the right end of the equation, −2x²(x − 1)(x − 2)²(x − 3)³, is negative on (−1, 1), with the exception of the point x = 0, where it is 0. Consequently, any solution in the interval (−1, 0) or (0, 1) decreases its distance from 0 as time goes by.
A big reason to study linear systems carefully is that they can be used to approximate nonlinear systems. This is done in detail for systems in 2 dimensions in the next section. The idea is illustrated below for the one-dimensional equation from the example above. Consider the equilibrium x̄ = 0. Then f(x̄) = 0, f'(x̄) = −108, and the tangent line approximation says that for x near x̄, f(x) ≈ f(x̄) + f'(x̄)(x − x̄), in this case f(x) ≈ −108x. If x(t) is a solution to the differential equation with x(t) near x̄, then considering y(t) = x(t) − x̄, in this case y(t) = x(t), gives a differential equation for y: y' ≈ f'(x̄)y, in this case y' ≈ −108y. It can be expected that the behavior of y resembles the behavior of solutions to y' = −108y. For this differential equation, y = 0 is an asymptotically stable equilibrium, and this suggests that for x' = −x(x − 1)(x − 2)²(x − 3)³, the equilibrium x̄ = 0 is asymptotically stable. A similar linear approximation can be done at x̄ = 1, to discover that x̄ = 1 is an unstable equilibrium. At x̄ = 2 and x̄ = 3, f'(x̄) = 0 and no useful information is obtained.

6.1 Nonlinear systems in 2 dimensions

This section suggests how one can try to sketch the behavior of solutions to nonlinear differential equations in two dimensions. This is desirable because, frequently, solving such equations is hard or just impossible. Throughout the section, f : R² → R² is a function given as
f(x₁, x₂) = (f₁(x₁, x₂), f₂(x₁, x₂)).
First, one may want to at least roughly determine the directions in which the solutions flow. This can be done by first determining the nullclines, that is, curves in the plane along which x₁' = 0 (curves given by f₁(x₁, x₂) = 0) or along which x₂' = 0 (curves given by f₂(x₁, x₂) = 0). Then one determines the regions in the plane where the solutions move up and right (x₁' > 0, x₂' > 0), up and left (x₁' < 0, x₂' > 0), down and right (x₁' > 0, x₂' < 0), or down and left (x₁' < 0, x₂' < 0).

Example 6.5 Consider the differential equation x' = f(x) with
f(x₁, x₂) = (x₂ − x₁², −6x₁ + x₂ + x₁³).
Then x₁' = x₂ − x₁², and so x₁' = 0 if x₂ = x₁², x₁' > 0 if x₂ > x₁², and x₁' < 0 if x₂ < x₁². Furthermore, x₂' = −6x₁ + x₂ + x₁³, and so x₂' = 0 if x₂ = 6x₁ − x₁³, x₂' > 0 if x₂ > 6x₁ − x₁³, and x₂' < 0 if x₂ < 6x₁ − x₁³. Consequently, for example, in the region where x₁ > 0 and x₁² < x₂ < 6x₁ − x₁³, one has x₁' > 0, x₂' < 0, and so the solutions move down and right.

The points at which the nullclines intersect, that is, points where x₁' = 0 and x₂' = 0, are equilibria. More precisely, an equilibrium of x' = f(x) is a point x̄ such that f(x̄) = 0. Each equilibrium point x̄ gives rise to a constant solution x(t) = x̄ for all t ∈ [0, ∞). Equilibria can be categorized into isolated equilibria and non-isolated equilibria. Isolated equilibria are equilibria x̄ such that a sufficiently small neighborhood of x̄ does not contain any other equilibria. Non-isolated equilibria are equilibria which are not isolated, and so any neighborhood of a non-isolated equilibrium x̄ contains an equilibrium different from x̄.

Example 6.6 Consider
f(x₁, x₂) = ((x₁ − x₂)(1 − x₁² − x₂²), (x₁ + x₂)(1 − x₁² − x₂²)).
Equilibria occur at any x with 1 − x₁² − x₂² = 0, and so at every point of the unit circle. Each such point is a non-isolated equilibrium. Another nullcline where x₁' = 0 is the line x₁ − x₂ = 0 and another nullcline where x₂' = 0 is the line x₁ + x₂ = 0. These two nullclines intersect at (0, 0), which is an isolated equilibrium.

Further information about the behavior of solutions near isolated equilibria can be obtained through linearization. Let the functions f₁ and f₂ defining f be continuously differentiable. Near a point x̄ = (x̄₁, x̄₂), the function f can be approximated by its tangent (linear) approximation:
f₁(x₁, x₂) ≈ f₁(x̄₁, x̄₂) + (∂f₁/∂x₁)(x̄)(x₁ − x̄₁) + (∂f₁/∂x₂)(x̄)(x₂ − x̄₂),
f₂(x₁, x₂) ≈ f₂(x̄₁, x̄₂) + (∂f₂/∂x₁)(x̄)(x₁ − x̄₁) + (∂f₂/∂x₂)(x̄)(x₂ − x̄₂).

Consider now the differential equation x' = f(x) and suppose that x̄ = (x̄₁, x̄₂) is an equilibrium, that is, f(x̄) = 0. If x(t) = (x₁(t), x₂(t)) is a solution to this differential equation, then the function
y(t) = (x₁(t) − x̄₁, x₂(t) − x̄₂)
satisfies y' = x' and, if x(t) is near x̄, it also approximately satisfies
y'(t) ≈ ∇f(x̄)y(t),
where ∇f(x̄) is the Jacobian of f at x̄,
∇f(x̄) = [[(∂f₁/∂x₁)(x̄), (∂f₁/∂x₂)(x̄)],[(∂f₂/∂x₁)(x̄), (∂f₂/∂x₂)(x̄)]].
Thus, one can expect the behavior of y(t), when y(t) is small, to resemble the behavior of solutions to the linear system y' = ∇f(x̄)y. Consequently, one can expect the behavior of solutions x(t) to x' = f(x), when x(t) is near an equilibrium x̄, to resemble the behavior of solutions y(t) to y' = ∇f(x̄)y. This lets one classify the equilibria x̄ of x' = f(x) based on the properties of the linearization y' = ∇f(x̄)y, and so based on the eigenvalues of ∇f(x̄) at the equilibria. Let x̄ be such an equilibrium, i.e., let f(x̄) = 0. Then:
x̄ is a stable node for x' = f(x) if the origin is a stable node for the linearization, i.e., if ∇f(x̄) has two distinct real negative eigenvalues;
x̄ is an unstable node if ∇f(x̄) has two distinct real positive eigenvalues;
x̄ is a saddle point if ∇f(x̄) has two real eigenvalues with opposite signs;
x̄ is a stable focus if ∇f(x̄) has complex eigenvalues α ± iβ with α < 0;
x̄ is an unstable focus if ∇f(x̄) has complex eigenvalues α ± iβ with α > 0.

Example 6.7 Consider the differential equation from Example 6.5. To find equilibria, set f(x) = 0 and get x₂ − x₁² = 0, −6x₁ + x₂ + x₁³ = 0; the first equation gives x₂ = x₁², and using this in the second equation gives −6x₁ + x₁² + x₁³ = x₁(x₁ − 2)(x₁ + 3) = 0. Hence the equilibria are (0, 0), (2, 4), and (−3, 9). The Jacobian is
∇f(x) = [[−2x₁, 1],[−6 + 3x₁², 1]].

At the equilibrium x̄ = (0, 0), the Jacobian is ∇f(x̄) = [[0, 1],[−6, 1]], the characteristic polynomial is λ² − λ + 6, and the eigenvalues are (1 ± i√23)/2. Consequently, this equilibrium is an unstable focus. At x̄ = (2, 4), ∇f(x̄) = [[−4, 1],[6, 1]], the characteristic polynomial is λ² + 3λ − 10, the eigenvalues are 2 and −5, and so this equilibrium is a saddle point. Similarly, x̄ = (−3, 9) turns out to be a saddle point.

Exercise 6.8 For a planar system x' = f(x) of your choice, similar to the one in Example 6.5, find the nullclines and equilibria, classify the equilibria, and sketch the behavior of solutions as well as you can.

Example 6.9 Consider the differential equation from Example 6.6. The only isolated equilibrium occurs at x̄ = (0, 0). The linearization at x̄ = (0, 0) can be found quickly by just ignoring all terms in f that are not linear (this does not work at equilibria which are not at (0, 0)). One has
f(x₁, x₂) = (x₁ − x₂ − x₁³ + x₁²x₂ − x₁x₂² + x₂³, x₁ + x₂ − x₁³ − x₁²x₂ − x₁x₂² − x₂³),
and so the linearized differential equation is
x₁' = x₁ − x₂, x₂' = x₁ + x₂.
Of course, one can calculate the Jacobian directly and obtain ∇f(0, 0) = [[1, −1],[1, 1]]. The eigenvalues are λ = 1 ± i, and so x̄ is an unstable focus.
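The classification in Example 6.7 can be automated. The sketch below computes the Jacobian of f(x₁, x₂) = (x₂ − x₁², −6x₁ + x₂ + x₁³) at each equilibrium and reports the eigenvalues.

```python
import numpy as np

def jacobian(x1, x2):
    # Jacobian of f(x1, x2) = (x2 - x1**2, -6*x1 + x2 + x1**3)
    return np.array([[-2.0*x1,           1.0],
                     [-6.0 + 3.0*x1**2,  1.0]])

for eq in [(0.0, 0.0), (2.0, 4.0), (-3.0, 9.0)]:
    lam = np.linalg.eigvals(jacobian(*eq))
    print(eq, lam)
# (0, 0):  0.5 +/- 2.40i          -> unstable focus
# (2, 4):  2 and -5               -> saddle point
# (-3, 9): real, opposite signs   -> saddle point
```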

Further information about the behavior of solutions can sometimes be obtained by considering polar coordinates. If x(t) = (x₁(t), x₂(t)) is a solution to x' = f(x) and r(t) = √(x₁(t)² + x₂(t)²), θ(t) = arctan(x₂(t)/x₁(t)), then r' < 0 indicates solutions moving towards the origin and r' > 0 indicates solutions moving away from the origin, while θ' < 0 indicates clockwise rotation and θ' > 0 indicates counterclockwise rotation. Trying to get information through polar coordinates is natural when some circular motion is expected, or even when a circle plays a special role for f.

Example 6.10 Consider the differential equation from Example 6.6. Every point on the unit circle is an equilibrium. Consider then solutions in polar coordinates. One obtains
(d/dt)r² = (d/dt)(x₁² + x₂²) = 2x₁x₁' + 2x₂x₂' = 2x₁(x₁ − x₂)(1 − r²) + 2x₂(x₁ + x₂)(1 − r²) = 2r²(1 − r²),
and because (d/dt)r² = 2rr', one gets r' = r(1 − r²) = r − r³. Hence r' > 0 if 0 < r < 1 and r' < 0 if r > 1. It is already known, from Example 6.6, that every point with r = 1 is an equilibrium, so not surprisingly r' = 0 if r = 1. The same comment applies to the unique point with r = 0. Similarly, one can calculate
θ' = (d/dt)arctan(x₂/x₁) = (x₂'x₁ − x₂x₁')/(x₁² + x₂²) = 1 − r².
Consequently, inside the unit circle solutions move counterclockwise and outside the unit circle the solutions move clockwise.

7 Existence, uniqueness of solutions, etc.

7.1 Existence and uniqueness of solutions: Lipschitz f case

Given a continuous function f : Rⁿ → Rⁿ and x₀ ∈ Rⁿ, suppose that a function x : [0, T] → Rⁿ satisfies the following integral equation:
x(t) = x₀ + ∫₀ᵗ f(x(s)) ds.
Then the function x is a solution to the initial value problem
x' = f(x), x(0) = x₀.
Indeed, x(0) = x₀ + ∫₀⁰ f(x(s)) ds = x₀, and the fundamental theorem of calculus suggests that
x'(t) = (d/dt)(x₀ + ∫₀ᵗ f(x(s)) ds) = (d/dt)∫₀ᵗ f(x(s)) ds = f(x(t)).
On the other hand, any differentiable function x : [0, T] → Rⁿ is an integral of its derivative: x(t) − x(0) = ∫₀ᵗ x'(s) ds, and so if x is a solution to the initial value problem, which ensures that x(0) = x₀ and x'(s) = f(x(s)), then x satisfies the integral equation. Note that given any function x : [0, T] → Rⁿ, the integral formula leads to a new function, say x̃(t), defined by x̃(t) = x₀ + ∫₀ᵗ f(x(s)) ds. As noted above, if it turns out that x̃(t) = x(t) for all t ∈ [0, T], then x is a solution to the initial value problem. The examples below illustrate what happens if one applies this integral formula repeatedly to functions that are not solutions.

Example 7.1 Consider the initial value problem x' = cx, x(0) = x₀ in R and the following iterative procedure: let x₀(t) = x₀ for all t ∈ [0, ∞) and, for n = 0, 1, ..., given xₙ : [0, ∞) → R, define xₙ₊₁ : [0, ∞) → R by
xₙ₊₁(t) = x₀ + ∫₀ᵗ cxₙ(s) ds.

Then
x₁(t) = x₀ + ∫₀ᵗ cx₀ ds = x₀(1 + ct), x₂(t) = x₀ + ∫₀ᵗ cx₀(1 + cs) ds = x₀(1 + ct + (ct)²/2),
and, in general,
xₙ(t) = x₀(1 + ct + (ct)²/2! + ⋯ + (ct)ⁿ/n!).
Note that as n → ∞, xₙ(t) → x₀e^{ct}, and x₀e^{ct} happens to be the solution to the initial value problem.

Example 7.2 Consider the initial value problem x' = Ax with A = [[0, −1],[1, 0]], x(0) = (1, 0), in R², and the following iterative procedure: let x₀(t) = (1, 0) for all t ∈ [0, ∞) and, for n = 0, 1, ..., given xₙ : [0, ∞) → R², define xₙ₊₁ : [0, ∞) → R² by
xₙ₊₁(t) = (1, 0) + ∫₀ᵗ Axₙ(s) ds.
Then
x₁(t) = (1, 0) + ∫₀ᵗ (0, 1) ds = (1, t),
x₂(t) = (1, 0) + ∫₀ᵗ (−s, 1) ds = (1 − t²/2, t),
x₃(t) = (1, 0) + ∫₀ᵗ (−s, 1 − s²/2) ds = (1 − t²/2, t − t³/6).
In the limit, one obtains
(1 − t²/2 + t⁴/4! − t⁶/6! + ⋯, t − t³/3! + t⁵/5! − ⋯) = (cos t, sin t).

Exercise 7.3 Repeat the iterative procedure of Example 7.1 starting with x₀(t) = x₀ + t.

Exercise 7.4 Repeat the iterative procedure of Example 7.2 for another initial value problem x' = f(x), x(0) = x₀. Pick your own x₀(t).

In the examples and exercises above, which involve differential equations that one can solve by hand, the iterative procedure
xₙ₊₁(t) = x₀ + ∫₀ᵗ f(xₙ(s)) ds
yields a sequence of functions which converge to a solution of the initial value problem. This suggests that maybe the iterative procedure can be used to show the existence of solutions to a general initial value problem. Roughly, if the iterations produce a sequence of functions which converge in an appropriate sense, and the limit (in an appropriate sense) of this sequence satisfies the integral equation, then the limit is a solution to the initial value problem. To make this precise and the proof somewhat rigorous, some background material is needed.
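The iteration of Examples 7.1 and 7.2 (Picard iteration) can be carried out symbolically. The sketch below uses SymPy to generate the first few iterates for x' = cx, x(0) = 1, reproducing the partial sums of the exponential series.

```python
import sympy as sp

t, s, c = sp.symbols('t s c')

def picard_step(x_prev):
    # x_{n+1}(t) = x0 + integral_0^t c*x_n(s) ds, with x0 = 1
    return 1 + sp.integrate(c*x_prev.subs(t, s), (s, 0, t))

x = sp.Integer(1)            # x_0(t) = 1
for n in range(4):
    x = picard_step(x)
    print(sp.expand(x))
# 1 + c*t
# 1 + c*t + c**2*t**2/2
# 1 + c*t + c**2*t**2/2 + c**3*t**3/6
# ...
```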

7.1.1 Lipschitz continuity and contractions

A function f : Rⁿ → Rⁿ is Lipschitz continuous if there exists a constant L > 0 such that for every x, y ∈ Rⁿ,
‖f(x) − f(y)‖ ≤ L‖x − y‖.
Any constant L for which this holds is a Lipschitz constant for f. The basic example of a Lipschitz function is a linear function f(x) = Ax for some matrix A ∈ Rⁿˣⁿ: because for every matrix A there exists a constant L > 0 such that ‖Av‖ ≤ L‖v‖ for all v ∈ Rⁿ, for every x, y ∈ Rⁿ,
‖f(x) − f(y)‖ = ‖Ax − Ay‖ = ‖A(x − y)‖ ≤ L‖x − y‖.
Another basic example comes from functions f : R → R with bounded derivative. Suppose that, for some L > 0, |f'(x)| ≤ L for all x ∈ R. The Mean Value Theorem says that for all x, y ∈ R, x ≠ y, there exists z between x and y such that (f(x) − f(y))/(x − y) = f'(z). Then
|f(x) − f(y)| = |f'(z)(x − y)| = |f'(z)||x − y| ≤ L|x − y|.
Similarly, a continuously differentiable f : Rⁿ → Rⁿ is Lipschitz continuous with constant L if the Jacobian matrix ∇f is bounded in the sense that, for every x, v ∈ Rⁿ, ‖∇f(x)v‖ ≤ L‖v‖.

Example 7.5 The function f : R → R given by f(x) = x/(1 + x²) is Lipschitz continuous. Indeed, it is differentiable with f'(x) = (1 − x²)/(1 + x²)², and the function f'(x) is continuous and approaches 0 when x → ±∞; hence f'(x) is bounded. In fact, because |1 − x²| ≤ 1 + x², |f'(x)| ≤ 1/(1 + x²) ≤ 1, so f is Lipschitz continuous with constant 1.

Exercise 7.6 Show that the following functions f : R → R are Lipschitz continuous: f(x) = |x|, f(x) = arctan(x).

Exercise 7.7 Show that if f, g : Rⁿ → Rⁿ are Lipschitz continuous then f + g is Lipschitz continuous. Find an example where f and g are Lipschitz continuous with constant L > 0 and f + g is Lipschitz continuous with a constant smaller than L.

A function f : Rⁿ → Rⁿ which is Lipschitz continuous with a constant L < 1 is called a contraction. The name illustrates the fact that the distance between f(x) and f(y) is less than the distance between x and y.

Theorem 7.8 Let f : Rⁿ → Rⁿ be a contraction. Then
(a) There exists exactly one fixed point for f, that is, a point x̄ such that f(x̄) = x̄.
(b) For every initial point x₀ ∈ Rⁿ, the sequence of points xₙ defined by xₙ₊₁ = f(xₙ) is convergent and such that limₙ→∞ xₙ = x̄.

Proof. Let L ∈ (0, 1) be a Lipschitz constant for f. Suppose that there are two fixed points for f, x ≠ y. Then
‖x − y‖ = ‖f(x) − f(y)‖ ≤ L‖x − y‖,
where the equality holds because x, y are fixed points and the inequality comes from the definition of a contraction. Because x ≠ y, one obtains 1 ≤ L, which contradicts L < 1. Consequently, there is at most one fixed point for f. Showing that it exists entails proving (b). Take any x₀ ∈ Rⁿ and consider the sequence as defined in (b). Then, for n = 1, 2, ...,
‖xₙ₊₁ − xₙ‖ = ‖f(xₙ) − f(xₙ₋₁)‖ ≤ L‖xₙ − xₙ₋₁‖ ≤ Lⁿ‖x₁ − x₀‖,
where the last inequality comes from repeating the argument n times. Furthermore, for any m > n,
‖xₘ − xₙ‖ = ‖xₘ − xₘ₋₁ + xₘ₋₁ − xₘ₋₂ + ⋯ + xₙ₊₁ − xₙ‖ ≤ ‖xₘ − xₘ₋₁‖ + ‖xₘ₋₁ − xₘ₋₂‖ + ⋯ + ‖xₙ₊₁ − xₙ‖
≤ (L^{m−1} + L^{m−2} + ⋯ + Lⁿ)‖x₁ − x₀‖ = Lⁿ(L^{m−1−n} + L^{m−2−n} + ⋯ + 1)‖x₁ − x₀‖ ≤ (Lⁿ/(1 − L))‖x₁ − x₀‖.
Consequently, (xₙ) is a Cauchy sequence, meaning that for every ε > 0 there exists N > 0 such that for all m > n > N, ‖xₘ − xₙ‖ < ε. In a complete space, and Rⁿ is a complete space, Cauchy sequences have limits. Thus the limit of (xₙ) exists; let's call it x̄. Then
x̄ = limₙ→∞ xₙ₊₁ = limₙ→∞ f(xₙ) = f(limₙ→∞ xₙ) = f(x̄),
and so x̄ is a fixed point for f. The proof is finished.

7.1.2 Contractions on spaces of functions

This section discusses Lipschitz functions and contractions not on Rⁿ but rather on the space of continuous functions on an interval. To avoid confusion, the term "mapping" rather than "function" will be used for the association, to each continuous function x on an interval, of another continuous function G(x) on that interval.

Example 7.9 For every function x : [0, 1] → R let G(x) : [0, 1] → R be the function defined by G(x)(t) = x(1 − t). So, if x(t) = 7 for all t ∈ [0, 1] then G(x)(t) = 7 for all t ∈ [0, 1]; if x(t) = t² then G(x)(t) = (1 − t)² = 1 − 2t + t²; etc. Note that if x is continuous then so is G(x), and that for every function x, G(G(x)) = x.

A mapping G assigning to each continuous function x : [0, T] → Rⁿ another continuous function G(x) : [0, T] → Rⁿ is Lipschitz continuous with Lipschitz constant L > 0 if, for every pair of continuous functions x₁, x₂ : [0, T] → Rⁿ,
max_{t∈[0,T]} ‖G(x₁)(t) − G(x₂)(t)‖ ≤ L max_{t∈[0,T]} ‖x₁(t) − x₂(t)‖,
and it is a contraction if it is Lipschitz continuous with a constant L < 1.
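Theorem 7.8(b) is also a practical algorithm. The sketch below iterates a simple contraction on R (here f(x) = cos(x)/2, whose derivative is bounded by 1/2) and watches the iterates converge to the unique fixed point.

```python
import math

def f(x):
    # a contraction on R: |f'(x)| = |sin(x)|/2 <= 1/2 < 1
    return math.cos(x)/2.0

x = 3.0                      # arbitrary starting point
for n in range(25):
    x = f(x)
print(x, abs(x - f(x)))      # x ~ 0.45, residual ~ 0: the unique fixed point
```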

Example 7.10 The mapping G from Example 7.9 is Lipschitz continuous with constant 1. In fact, because for every continuous function x : [0, 1] → R, max_{t∈[0,1]} |x(1 − t)| = max_{t∈[0,1]} |x(t)|, one has
max_{t∈[0,1]} |G(x₁)(t) − G(x₂)(t)| = max_{t∈[0,1]} |x₁(t) − x₂(t)|.

Theorem 7.11 Let the function f : Rⁿ → Rⁿ be Lipschitz continuous with constant L > 0 and let x₀ ∈ Rⁿ. Define a mapping G from the space of continuous functions x : [0, T] → Rⁿ to the space of continuous functions x : [0, T] → Rⁿ by
G(x)(t) = x₀ + ∫₀ᵗ f(x(s)) ds.
Then G is Lipschitz continuous with constant LT. In particular, if LT < 1, then G is a contraction.

Proof. Consider any two continuous functions x₁, x₂ : [0, T] → Rⁿ. Then
G(x₁)(t) − G(x₂)(t) = (x₀ + ∫₀ᵗ f(x₁(s)) ds) − (x₀ + ∫₀ᵗ f(x₂(s)) ds) = ∫₀ᵗ (f(x₁(s)) − f(x₂(s))) ds,
and consequently
‖G(x₁)(t) − G(x₂)(t)‖ ≤ ∫₀ᵗ ‖f(x₁(s)) − f(x₂(s))‖ ds ≤ ∫₀ᵗ L‖x₁(s) − x₂(s)‖ ds ≤ L∫₀ᵀ max_{s∈[0,T]} ‖x₁(s) − x₂(s)‖ ds = LT max_{s∈[0,T]} ‖x₁(s) − x₂(s)‖,
so
max_{t∈[0,T]} ‖G(x₁)(t) − G(x₂)(t)‖ ≤ LT max_{t∈[0,T]} ‖x₁(t) − x₂(t)‖.

Exercise 7.12 Verify directly that the sequence of functions xₙ(t) from Example 7.1, when considered on an interval [0, T], satisfies the inequality
max_{t∈[0,T]} |xₙ₊₁(t) − xₙ(t)| ≤ cT max_{t∈[0,T]} |xₙ(t) − xₙ₋₁(t)|
for every n = 1, 2, ....

Problem 7.13 The mapping G from the space of real-valued continuous functions on [0, 1] to the space of real-valued continuous functions on [0, 1] is given by
G(x)(t) = 3x(t) − ∫₀ᵗ sin(x(s)) ds.
Show that G is Lipschitz continuous and find its Lipschitz constant.

Problem 7.14 Verify that the mapping G on the space of continuous functions from [0, T] to R, defined by
G(x)(t) = ∫₀ᵗ x(s)² ds,
is not Lipschitz continuous, no matter how small T > 0 is.

7.1.3 Proof of existence and uniqueness

Fact 7.15 If G is a contraction on the space of continuous functions x : [0, T] → Rⁿ then there exists exactly one fixed point for G, that is, a function x : [0, T] → Rⁿ such that G(x)(t) = x(t) for every t ∈ [0, T].

Theorem 7.16 Let f : Rⁿ → Rⁿ be a Lipschitz continuous function. Then, for every x₀ ∈ Rⁿ there exists a solution x : [0, ∞) → Rⁿ to the initial value problem x' = f(x), x(0) = x₀, and this solution is unique on every interval [0, T].

Proof. Let L > 0 be a Lipschitz constant for f and let T₀ > 0 be such that LT₀ < 1. Consider the mapping G defined in Theorem 7.11, with T₀ in place of T. According to that theorem, G is a contraction on the space of continuous functions x : [0, T₀] → Rⁿ. Fact 7.15 implies that G has a unique fixed point; denote it by x⁰. The discussion at the beginning of Section 7.1 implies that x⁰ : [0, T₀] → Rⁿ is then the unique solution to the initial value problem on [0, T₀]. To obtain the solution on [0, ∞), proceed recursively: given xⁿ : [0, T₀] → Rⁿ, repeat the argument above for the initial value problem x' = f(x), x(0) = xⁿ(T₀) to obtain a unique solution xⁿ⁺¹ : [0, T₀] → Rⁿ. Then the desired solution on [0, ∞) is obtained by considering x(t) = xⁿ(t − nT₀) for t ∈ [nT₀, (n+1)T₀].

The uniqueness claimed in Theorem 7.16 can be dealt with in another way. Suppose that, for some τ > 0, there are two solutions x₁, x₂ : [0, τ] → Rⁿ to the initial value problem. Consider the differentiable function v(t) = ‖x₁(t) − x₂(t)‖² and note that v(0) = 0 and
(d/dt)v(t) = 2(x₁(t) − x₂(t))·(x₁'(t) − x₂'(t)) ≤ 2‖x₁(t) − x₂(t)‖‖x₁'(t) − x₂'(t)‖ = 2‖x₁(t) − x₂(t)‖‖f(x₁(t)) − f(x₂(t))‖ ≤ 2L‖x₁(t) − x₂(t)‖² = 2Lv(t).
Consequently, v(t) ≤ v(0)e^{2Lt} = 0, and so x₁(t) − x₂(t) = 0.

7.2 Related results and more general cases

The argument carried out after Theorem 7.16, showing another way of verifying uniqueness of solutions to the initial value problem, generalizes to the following: if f is Lipschitz continuous with constant L, then, for every two solutions x₁, x₂ on [0, ∞) to x' = f(x) and for every t > 0,
‖x₁(t) − x₂(t)‖ ≤ e^{Lt}‖x₁(0) − x₂(0)‖.
As a consequence, one obtains a result about continuous dependence of solutions to x' = f(x) on initial conditions.