Math 634 Course Notes

Todd Fisher
374 TMCB, Provo, UT 84602
E-mail address: tfisher@math.byu.edu

Key words and phrases: ordinary differential equations.

Contents

Chapter 1. Introduction and basic concepts
  1.1. Introduction
    1.1.1. Flows
  1.2. Preliminaries on $\mathbb{R}^n$
  1.3. First-order systems of equations
  1.4. Existence
    1.4.1. Another approach for successive approximations
  1.5. Uniqueness
  1.6. Continuation
  1.7. Continuity in initial conditions
  1.8. Numerical approximations
    1.8.1. Euler's method
    1.8.2. Midpoint method
    1.8.3. Runge-Kutta
  1.9. Exercises

Chapter 2. Linear Differential Equations
  2.1. Basic properties
  2.2. Fundamental matrices
  2.3. Higher order linear equations
  2.4. Complex linear equations and variation of parameters
    2.4.1. Variation of parameters
  2.5. Exercises

Chapter 3. Constant Coefficients
  3.1. Exponential of a matrix
  3.2. Generalized eigenspaces
  3.3. Canonical forms
    3.3.1. Real canonical form
  3.4. Higher order equations
  3.5. Integrals
  3.6. Exercises

Chapter 4. Qualitative theory
  4.1. Qualitative approach
  4.2. Stability
    4.2.1. Fixed points
  4.3. Lyapunov's Theorem
  4.4. Damped and undamped equations
  4.5. Proofs of Lyapunov's Theorems
  4.6. Invariant sets and stability
  4.7. Stability and constant coefficients
    4.7.1. Almost linear systems
  4.8. Stability and general systems
  4.9. Floquet Theory
  4.10. Exercises

Chapter 5. Hyperbolicity
  5.1. Hyperbolic linear differential equations
    5.1.1. Topological conjugacy
  5.2. Linearizations
  5.3. Hartman-Grobman Theorem
  5.4. Hamiltonian Equations
  5.5. Exercises

Chapter 6. Poincaré Sections and Planar Dynamics
  6.1. Transversal Sections
  6.2. Planar Dynamics
  6.3. Periodic Solutions
    6.3.1. Periodic solutions for Van der Pol Equations
  6.4. Recurrence
  6.5. Exercises

Chapter 7. Bifurcations

CHAPTER 1

Introduction and basic concepts

In this chapter we introduce the fundamental concepts of ordinary differential equations. After a brief introduction we review material from analysis on $\mathbb{R}^n$ that will be useful. Then we explain the existence and uniqueness of solutions. After this we explain the continuation of solutions. Lastly, we address continuity in initial conditions and briefly mention numerical approximations.

1.1. Introduction

Ordinary differential equations are equations involving functions and their derivatives. These arise naturally in many applications. As an example we look at a mass-spring system. Suppose we have a single spring and a weight hanging vertically from the spring. For simplicity assume the spring is massless, that there is no air resistance, and that the weight can only move vertically. We let $y$ be the displacement of the system. In this case Hooke's law states that the restoring force is proportional to the amount the spring is stretched, where the constant of proportionality $K$ is positive and depends on the spring. The system has two forces: $F_1 = mg$, the force of gravity, and $F_2 = -Ky$, the restoring force. We have an equilibrium when $F_1 + F_2 = 0$, that is, when the displacement is $y = mg/K$. From Newton's second law we know that the overall force is $my''$, so we have $-Ky + mg = my''$, or
\[ y'' + \frac{K}{m}y = g. \]
This is an example of a second order equation. It is called a second order equation since the second derivative of $y$ appears in the expression, but no third or higher order derivative does. A solution to the equation is a function $y$ that satisfies the expression. Sometimes initial conditions are given, such as the initial position and velocity, and a solution is then a function that solves the equation and has the prescribed position and velocity at $t = 0$.

Another classic example is a pendulum. Suppose we have a pendulum of length $L$ with a mass $m$ at the end. For simplicity we assume the rod is massless and there is no air resistance. Now we let $\theta$ be the angle measured from the vertical. Also, we let $s$ be the distance moved along the arc, so $s = L\theta$. Then $v = s' = L\theta'$. From Newton's second law we have $F = mv' = -mg\sin\theta$. Combining the two expressions we have
\[ L\theta'' = -g\sin\theta. \]
This is again a second order equation. In mechanics second order equations arise very naturally.

More generally, let $f: D \to \mathbb{R}^n$ be continuous where $D \subset \mathbb{R} \times \mathbb{R}^n$ is open. We consider
(1.1) $x' = f(t, x)$.
A function $x: (a, b) \to \mathbb{R}^n$, where $-\infty \le a < b \le \infty$, is a solution to (1.1) if $x$ is $C^1$, $(t, x(t)) \in D$ for all $t \in (a, b)$, and $x'(t) = f(t, x(t))$ for all $t \in (a, b)$.

Example: Let $x' = -x + t$. If $x(t)$ is a solution, then $(e^t x)' = e^t x + e^t x' = e^t(x + x') = e^t t$. So $e^t x = e^t(t - 1) + c$ and $x = t - 1 + ce^{-t}$.

If we let $(t_0, x_0) \in D$ then the initial value problem is the equation
\[ x' = f(t, x), \quad x(t_0) = x_0, \]
and a solution is a function $x(t)$ such that $x'(t) = f(t, x(t))$ and $x(t_0) = x_0$.

For the previous example, if we specify that $x(0) = 0$, then $x(0) = 0 - 1 + ce^0$, so $c = 1$ and $x(t) = t - 1 + e^{-t}$.

One approach we will use in working with ordinary differential equations is to transform an equation involving derivatives into an equation involving integrals. This can be done due to the next result.

Proposition 1.1. Let $f: D \to \mathbb{R}^n$ be continuous on an open set $D \subset \mathbb{R} \times \mathbb{R}^n$. Given $(t_0, x_0) \in D$, a $C^1$ function $x: (a, b) \to \mathbb{R}^n$ is a solution to the initial value problem
\[ x' = f(t, x), \quad x(t_0) = x_0 \]
for an interval $(a, b)$ containing $t_0$ if and only if
\[ x(t) = x_0 + \int_{t_0}^t f(s, x(s))\,ds \quad \text{for all } t \in (a, b). \]

Proof. Suppose that $x(t)$ is a solution to the initial value problem. Notice that the function $t \mapsto f(t, x(t))$ is continuous since the composition of continuous functions is continuous. So the function is integrable on bounded intervals. For all $t \in (a, b)$ we know that
\[ x(t) - x_0 = x(t) - x(t_0) = \int_{t_0}^t x'(s)\,ds = \int_{t_0}^t f(s, x(s))\,ds. \]
Now suppose that $x(t) = x_0 + \int_{t_0}^t f(s, x(s))\,ds$ for all $t \in (a, b)$. Then $x(t_0) = x_0$ and $x'(t) = f(t, x(t))$ by the Fundamental Theorem of Calculus. This follows since $f$ is continuous and $x$ is $C^1$. □

1.1.1. Flows. Suppose we have $x' = f(x)$ where $f: D \to \mathbb{R}^n$ is continuous and $D \subset \mathbb{R}^n$ is open. Then the ordinary differential equation does not depend on $t$ and the equation is called autonomous. A family of maps $\varphi_t: D \to \mathbb{R}^n$, where $D \subset \mathbb{R}^n$ is open and $t \in \mathbb{R}$, is a flow if $\varphi_0 = \mathrm{Id}$ and $\varphi_{t+s} = \varphi_t \circ \varphi_s$ for all $t, s \in \mathbb{R}$.

Example: Let $y \in \mathbb{R}^n$ and $\varphi_t(x) = x + ty$. Then $\varphi_0(x) = x$ and $\varphi_{t+s}(x) = x + (t + s)y = (x + sy) + ty = \varphi_t(\varphi_s(x))$.

Proposition 1.2. Let $f: \mathbb{R}^n \to \mathbb{R}^n$ be continuous such that
\[ x' = f(x), \quad x(0) = x_0 \]
has a unique solution $x(t, x_0)$ defined for all $t \in \mathbb{R}$. Then $\varphi_t(x_0) = x(t, x_0)$ defines a flow.

Proof. Given $s \in \mathbb{R}$, define $y: \mathbb{R} \to \mathbb{R}^n$ by $y(t) = x(t + s, x_0)$. We have $y(0) = x(s, x_0)$ and $y'(t) = x'(t + s, x_0) = f(x(t + s, x_0)) = f(y(t))$. So $y$ is a solution to $x' = f(x)$. Since solutions are unique, we have $y(t) = x(t, y(0))$, so $x(t, x(s, x_0)) = x(t + s, x_0)$. Hence $\varphi_{t+s}(x_0) = (\varphi_t \circ \varphi_s)(x_0)$. Also, $\varphi_0(x_0) = x(0, x_0) = x_0$. □

1.2. Preliminaries on $\mathbb{R}^n$

Let $\mathbb{R}^n$ be an $n$-dimensional vector space over $\mathbb{R}$. A point will be represented by $x = (x_1, \dots, x_n)$. Note that we will assume we are using the usual topology on $\mathbb{R}^n$ unless stated otherwise.

Definition 1.3. A function $\|\cdot\|: \mathbb{R}^n \to \mathbb{R}$ is a norm if
i. $\|x\| > 0$ for all $x \in \mathbb{R}^n$ with $x \ne 0$, and $\|0\| = 0$,
ii. $\|cx\| = |c|\,\|x\|$ for all $c \in \mathbb{R}$ and all $x \in \mathbb{R}^n$, and
iii. $\|x + y\| \le \|x\| + \|y\|$ for all $x, y \in \mathbb{R}^n$.

Example: The Euclidean norm is $\|x\|_2 = \sqrt{x_1^2 + x_2^2 + \cdots + x_n^2}$. The one-norm is $\|x\|_1 = |x_1| + \cdots + |x_n|$, and the uniform or infinity norm is $\|x\|_\infty = \max\{|x_1|, \dots, |x_n|\}$.

Definition 1.4. Two norms $\|\cdot\|_\alpha$ and $\|\cdot\|_\beta$ are equivalent if there exist constants $A, B > 0$ such that
\[ A\|x\|_\alpha \le \|x\|_\beta \le B\|x\|_\alpha. \]

A very useful fact in analysis on $\mathbb{R}^n$ is the following:

Theorem 1.5. Any two norms on $\mathbb{R}^n$ are equivalent.

Proof. It is sufficient to show that any norm $\|\cdot\|$ is equivalent to $\|\cdot\|_2$. Let $e_i = (0, \dots, 0, 1, 0, \dots, 0)$ where the 1 occurs in the $i$th spot, and let $c = \max\{\|e_i\| : 1 \le i \le n\}$. Then $x = \sum_{i=1}^n x_i e_i$ and
\[ \|x\| \le \sum_{i=1}^n |x_i|\,\|e_i\| \le c\sum_{i=1}^n |x_i| \le nc\|x\|_2 \]
since $|x_i| \le \|x\|_2$ for all $1 \le i \le n$. Notice that
\[ \big|\,\|x\| - \|y\|\,\big| \le \|x - y\| \le nc\|x - y\|_2. \]
We know that $\|\cdot\|_2$ is a continuous function on $\mathbb{R}^n$, so the above implies that $\|\cdot\|$ is continuous on $\mathbb{R}^n$. The set $\{x \in \mathbb{R}^n : \|x\|_2 = 1\}$ is compact. So we know there exists some $A > 0$ such that $\|x\| \ge A > 0$ whenever $\|x\|_2 = 1$. So for all $x \ne 0$ we have $\big\|\frac{x}{\|x\|_2}\big\|_2 = 1$, which implies that
\[ A \le \Big\|\frac{x}{\|x\|_2}\Big\| = \frac{1}{\|x\|_2}\,\|x\|. \]
Hence, $A\|x\|_2 \le \|x\|$ and $\|x\|_2 \le \frac{1}{A}\|x\|$; combined with $\|x\| \le nc\|x\|_2$ above, the two norms are equivalent. □

Remark 1.6. The statement above is not true in general for infinite dimensional spaces.

Definition 1.7. A sequence of functions $f_k: I \to \mathbb{R}^n$ is equicontinuous on an interval $I \subset \mathbb{R}$ if given $\varepsilon > 0$ there exists some $\delta > 0$ such that for all $m \ge 1$ we have $\|f_m(s) - f_m(t)\| < \varepsilon$ whenever $|s - t| < \delta$.

Often the sequences of functions we will look at in this class are equicontinuous. The next theorem will be very useful.

Theorem 1.8 (Ascoli). Let $f_m: I \to \mathbb{R}^n$ be a sequence of functions defined over a bounded interval $I \subset \mathbb{R}$. If $\{f_m\}$ is equicontinuous and for each $t \in I$ the sequence $f_m(t)$ is bounded, then there exists a subsequence of $f_m$ converging uniformly on $I$.

Proof. We will use Cantor's diagonalization method. Let $\{r_1, r_2, \dots\}$ be an enumeration of the rationals in $I$. Let $f_{(p,1)}$ be a subsequence of $\{f_m\}$ such that $f_{(p,1)}(r_1)$ converges as $p \to \infty$. Now choose a subsequence $f_{(p,2)}$ of $f_{(p,1)}$ such that $f_{(p,2)}(r_2)$ converges as $p \to \infty$. For each $k > 1$ we can choose a subsequence $f_{(p,k)}$ of $f_{(p,k-1)}$ such that $f_{(p,k)}(r_k)$ converges. So we also have that $f_{(p,k)}(r_j)$ converges for each $1 \le j \le k$. Let $g_p = f_{(p,p)}$. For any $k \in \mathbb{N}$ we know that $g_p(r_k)$ converges as $p \to \infty$.

Fix $\varepsilon > 0$. By equicontinuity there exists some $\delta > 0$ such that $\|g_p(s) - g_p(t)\| < \varepsilon/3$ for all $|s - t| < \delta$ with $s, t \in I$. Since the rationals are dense in $I$, we know there exists some $K \in \mathbb{N}$ such that for all $t \in I$ there is some $r_i$ with $|t - r_i| < \delta$ and $i \le K$. (So $K$ is sufficiently large that $r_1, \dots, r_K$ is $\delta/2$-dense in $I$.) Since $g_p(r_i)$ is Cauchy for each $1 \le i \le K$, there exists some $N \in \mathbb{N}$ such that $\|g_p(r_i) - g_q(r_i)\| < \varepsilon/3$ for $1 \le i \le K$ and $p, q > N$. Let $t \in I$ and $p, q > N$. Select $r_i$ with $1 \le i \le K$ such that $|t - r_i| < \delta$. Then
\[ \|g_p(t) - g_q(t)\| \le \|g_p(t) - g_p(r_i)\| + \|g_p(r_i) - g_q(r_i)\| + \|g_q(r_i) - g_q(t)\| < \varepsilon. \]
So $g_p(t)$ is Cauchy and converges to some $g(t)$ for each $t \in I$. Now let $p \to \infty$ in the estimate above. Then $\|g(t) - g_q(t)\| \le \varepsilon$ for all $t \in I$ and $q > N$. So $g_p$ converges uniformly to $g$ on $I$. □

The next theorem is one of the most useful theorems in analysis.

Theorem 1.9 (Contraction Mapping Theorem). Let $X$ be a complete metric space and $f: X \to X$ a continuous map such that $d(f(x), f(y)) \le \lambda d(x, y)$ for some $\lambda \in (0, 1)$ and all $x, y \in X$. Then there exists a unique $x^* \in X$ such that $f(x^*) = x^*$, and for all $x \in X$ the sequence $\{x, f(x), f^2(x), \dots, f^n(x), \dots\}$ converges to $x^*$.

The above theorem will be used, for instance, to find a solution to a differential equation. In that case the space $X$ will be a Banach space of functions and the map $f$ will be an operator on the Banach space.
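As a quick illustration of the Contraction Mapping Theorem, the following is a minimal sketch (not part of the original notes) that finds the fixed point of $f(x) = \cos x$ by iteration. Note that $\cos$ maps $[-1, 1]$ into itself and $|\cos'| = |\sin| \le \sin 1 < 1$ there, so the theorem applies on that interval with $\lambda = \sin 1$.

```python
import math

def fixed_point(f, x0, tol=1e-12, max_iter=1000):
    """Iterate x, f(x), f(f(x)), ... until successive iterates agree to tol."""
    x = x0
    for _ in range(max_iter):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("iteration did not converge")

# cos maps [-1, 1] into itself and is a contraction there with lambda = sin(1)
x_star = fixed_point(math.cos, x0=1.0)
print(x_star, math.cos(x_star))  # x* = 0.7390851..., and cos(x*) = x*
```

Starting from any other $x_0$ gives the same limit, as the theorem predicts.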

1.3. First-order systems of equations

Many equations involve the rate of change of multiple variables, such as predator-prey equations. Due to this we will often examine systems of the form
(1.2)
\[ x_1' = f_1(t, x_1, x_2, \dots, x_n), \quad x_2' = f_2(t, x_1, x_2, \dots, x_n), \quad \dots, \quad x_n' = f_n(t, x_1, x_2, \dots, x_n). \]
Also, suppose we have a higher order equation of the form
\[ x^{(n)} + f_1 x^{(n-1)} + \cdots + f_{n-1}x' + f_n x = g(x). \]
Then the equation can be rewritten in the form of (1.2) by introducing dummy variables. Let $y_1 = x$, $y_2 = x'$, $y_3 = x''$, and so on through $y_n = x^{(n-1)}$. Then we have
\[ y_1' = y_2, \quad \dots, \quad y_{n-1}' = y_n, \quad y_n' = -f_1 y_n - f_2 y_{n-1} - \cdots - f_n y_1 + g(y_1). \]

Example: For the equation of a spring of the form $x'' + \frac{k}{m}x = 0$ we then have
\[ y_1' = y_2, \quad y_2' = -\frac{k}{m}y_1. \]

Throughout this course we will be studying systems of the form (1.2).

Example: Suppose we have linked springs hanging vertically from a ceiling, with mass one attached to spring one and mass two attached to spring two. We let $y_1$ be the displacement of spring one and $y_2$ the displacement of spring two. Then we have the equations
\[ m_1 y_1'' = -k_1 y_1 + k_2(y_2 - y_1), \quad m_2 y_2'' = -k_2(y_2 - y_1). \]
If we let $x_1 = y_1$, $x_2 = y_1'$, $x_3 = y_2$, and $x_4 = y_2'$, this can be rewritten in the form
\[ x_1' = x_2, \quad x_2' = -\frac{k_1}{m_1}x_1 + \frac{k_2}{m_1}x_3 - \frac{k_2}{m_1}x_1, \quad x_3' = x_4, \quad x_4' = -\frac{k_2}{m_2}(x_3 - x_1). \]
If we let
\[ A = \begin{pmatrix} 0 & 1 & 0 & 0 \\ -\frac{k_1 + k_2}{m_1} & 0 & \frac{k_2}{m_1} & 0 \\ 0 & 0 & 0 & 1 \\ \frac{k_2}{m_2} & 0 & -\frac{k_2}{m_2} & 0 \end{pmatrix}, \]
then we can also write the system as $x' = Ax$ where $x = (x_1, x_2, x_3, x_4)^T$.

More generally, the system (1.2) can be rewritten as $x' = f(t, x)$ where $f(t, x) = (f_1(t, x), \dots, f_n(t, x))$. Working with systems of equations can be difficult. One of the main techniques is simply to adapt the techniques learned for first order equations in the undergraduate course to this setting.
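To make the reduction concrete, here is a minimal sketch (not part of the original notes) that encodes the linked-spring system as $x' = Ax$ and integrates it with SciPy. The parameter values are arbitrary choices for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

# arbitrary illustrative parameters
k1, k2, m1, m2 = 1.0, 2.0, 1.0, 3.0

# x = (y1, y1', y2, y2') reduces the two second-order equations to x' = A x
A = np.array([
    [0.0,             1.0, 0.0,      0.0],
    [-(k1 + k2) / m1, 0.0, k2 / m1,  0.0],
    [0.0,             0.0, 0.0,      1.0],
    [k2 / m2,         0.0, -k2 / m2, 0.0],
])

sol = solve_ivp(lambda t, x: A @ x, t_span=(0.0, 10.0),
                y0=[1.0, 0.0, 0.0, 0.0], dense_output=True)
print(sol.y[0, -1], sol.y[2, -1])  # displacements y1, y2 at t = 10
```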

Example: Suppose we start with the system
\[ y_1' = y_1^2, \quad y_2' = y_1 + y_2, \quad y_1(t_0) = \eta_1 \text{ where } \eta_1 > 0, \quad y_2(t_0) = \eta_2. \]
To solve this we use separation of variables on the first equation and obtain $\int \frac{1}{y_1^2}\,dy_1 = \int dt$. So we have $-1/y_1 = t + c$, or
\[ y_1(t) = \frac{-1}{t + c}. \]
From the initial condition we then solve for $c$ and obtain
\[ y_1(t) = \frac{\eta_1}{1 - \eta_1(t - t_0)}. \]
Substituting this into the second equation we have
\[ y_2' - y_2 = \frac{\eta_1}{1 - \eta_1(t - t_0)}. \]
This is a first order equation in one variable, and we can solve it using techniques from undergraduate ODEs to obtain
\[ y_2(t) = \eta_2 e^{t - t_0} + \int_{t_0}^t e^{t - s}\,\frac{\eta_1}{1 - \eta_1(s - t_0)}\,ds. \]

1.4. Existence

In this section we address when there is a solution to (1.2). Our standing assumption is that the function $f(t, x)$ is continuous.

Theorem 1.10 (Peano). If $f(t, x)$ is continuous in $D$, then for each $(\tau, \xi) \in D$ there exists at least one solution to the initial value problem $x' = f(t, x)$, $x(\tau) = \xi$.

Proof. The approach uses successive approximations and Ascoli's theorem. We know $D$ is open, so there exists a rectangle $R \subset D$ where
\[ R = \{(t, x) \in D : |t - \tau| \le b \text{ and } \|x - \xi\|_1 \le b\} \]
for some $b > 0$. Since $f$ is continuous and $R$ is compact, we know there exists
\[ M = \sup\{\|f(t, x)\|_1 : (t, x) \in R\} < \infty. \]
Let $\alpha = \min\{b, b/M\}$. The idea is that the solutions are bounded by a cone of slope $\pm M$ in $R$, so we know the solutions exist for at least time $\alpha$. Let $I = (\tau - \alpha, \tau + \alpha)$.

We know that $f$ is uniformly continuous on $R$, so for each $M \in \mathbb{N}$ there exists some $\delta(M) > 0$ such that $\|f(t, x) - f(s, y)\|_1 < 1/M$ when $|t - s| < \delta(M)$ and $\|x - y\|_1 < \delta(M)$ for $(t, x), (s, y) \in R$. We want to construct a sequence of functions that satisfies Ascoli's theorem. Let $\Delta(M) = \min\{\delta(M), \delta(M)/M, 1/M\}$ and fix a partition of $I$ such that the distance between any two adjacent points in the partition is less than $\Delta(M)$ and
\[ \tau - \alpha = t_{-p} < t_{-p+1} < \cdots < t_{-1} < t_0 = \tau < t_1 < \cdots < t_p = \tau + \alpha. \]
Now we define $\varphi_M$ for each $M \in \mathbb{N}$. For $t \in [t_{-1}, t_1]$ let
\[ \varphi_M(t) = \xi + (t - t_0)f(t_0, \xi) = \xi + (t - \tau)f(\tau, \xi). \]
For $t \in [t_1, t_2]$ define $\varphi_M(t) = \varphi_M(t_1) + (t - t_1)f(t_1, \varphi_M(t_1))$. Continue in this way to define $\varphi_M$ over $I$. We obtain a collection of $2p - 1$ line segments joined at the vertices. So $\varphi_M$ is continuous for each $M$. We now show $(t, \varphi_M(t))$ stays in $R$ for $t \in I$. To do this we show that $\|\varphi_M(t) - \varphi_M(s)\|_1 \le |t - s|M$ for $t, s \in I$.

Assume that $\tau \le s < t$. Then there exist $j, k \in \{0, \dots, p\}$ such that $t_j \le s < t_{j+1}$ and $t_k < t \le t_{k+1}$. If $j = k$, then
\[ \|\varphi_M(t) - \varphi_M(s)\|_1 = |t - s|\,\|f(t_j, \varphi_M(t_j))\|_1 \le |t - s|M. \]
If $j < k$, then
\[ \|\varphi_M(t) - \varphi_M(s)\|_1 \le \|\varphi_M(t) - \varphi_M(t_k)\|_1 + \cdots + \|\varphi_M(t_{j+1}) - \varphi_M(s)\|_1 \le (|t - t_k| + |t_k - t_{k-1}| + \cdots + |t_{j+1} - s|)M = |t - s|M. \]
For $s < \tau < t$, and for $\tau - \alpha < s < t < \tau$, we proceed similarly. Also,
\[ \|\varphi_M(t)\|_1 \le \|\varphi_M(t) - \varphi_M(\tau)\|_1 + \|\varphi_M(\tau)\|_1 \le |t - \tau|M + \|\xi\|_1 \le \alpha M + \|\xi\|_1. \]
This implies that $\varphi_M$ is a uniformly bounded sequence that is equicontinuous over a bounded interval, so we can apply Ascoli's theorem.

Notice that if $t \in I$ with $t \ne t_i$ for all $i$, then $\varphi_M'(t)$ exists and equals $f(t_k, \varphi_M(t_k))$ for some unique $t_k$ with $|t - t_k| < \Delta(M)$. So
\[ \|\varphi_M'(t) - f(t, \varphi_M(t))\|_1 = \|f(t_k, \varphi_M(t_k)) - f(t, \varphi_M(t))\|_1 < \frac{1}{M}. \]
(So $\varphi_M$ is almost a solution to the ODE.)

Now let $\varphi_{M_i}$ be a subsequence converging uniformly on $I$ to a function $\varphi: I \to \mathbb{R}^n$. We now show that $\varphi$ satisfies the integral equation. Notice that
\[ \Big\|\varphi(t) - \varphi(\tau) - \int_\tau^t f(s, \varphi(s))\,ds\Big\|_1 \le \|\varphi(t) - \varphi_{M_i}(t)\|_1 + \Big\|\varphi_{M_i}(t) - \varphi(\tau) - \int_\tau^t f(s, \varphi_{M_i}(s))\,ds\Big\|_1 + \Big\|\int_\tau^t f(s, \varphi_{M_i}(s)) - f(s, \varphi(s))\,ds\Big\|_1. \]
The first term goes to zero by convergence. The third term goes to zero since the sequence converges uniformly, so we can pass the limit inside the integral. For the second term we know that
\[ \varphi_M(t) = \varphi_M(\tau) + \int_\tau^t \varphi_M'(s)\,ds \]
by the definition of $\varphi_M$. Then
\[ \Big\|\int_\tau^t \varphi_{M_i}'(s) - f(s, \varphi_{M_i}(s))\,ds\Big\|_1 \le \int_\tau^t \|\varphi_{M_i}'(s) - f(s, \varphi_{M_i}(s))\|_1\,ds \le \int_\tau^t \frac{1}{M_i}\,ds \le \frac{\alpha}{M_i}. \]
So $\|\varphi(t) - \varphi(\tau) - \int_\tau^t f(s, \varphi(s))\,ds\|_1 = 0$ and $\varphi(t) = \varphi(\tau) + \int_\tau^t f(s, \varphi(s))\,ds$. □

Corollary 1.11. If $f$ is continuous on an open set $D$, then every point in $D$ has at least one solution passing through it.

Corollary 1.12. If $f$ is continuous on $D$ and $C \subset D$ is compact, then there exists some $\alpha > 0$ such that if $(\tau, \xi) \in C$, then the initial value problem $x' = f(t, x)$, $x(\tau) = \xi$ has a solution defined on the interval $(\tau - \alpha, \tau + \alpha)$.

1.4.1. Another approach for successive approximations. In other resources another method is used to construct a sequence of functions that converges to a solution. The idea is to use the integral equation to obtain a solution. Let $\varphi_0(t) = \xi$ and $\varphi_1(t) = \xi + \int_\tau^t f(s, \varphi_0(s))\,ds$. In general, let
\[ \varphi_j(t) = \xi + \int_\tau^t f(s, \varphi_{j-1}(s))\,ds \]
for $j > 1$. As in the last section one can show that $\varphi_j$ converges to a solution of the initial value problem.

As an example let $x' = -x$ and $x(0) = 1$. Then $\varphi_0(t) = 1$ and $\varphi_1(t) = 1 + \int_0^t (-1)\,ds = 1 - t$. Next, $\varphi_2(t) = 1 + \int_0^t -(1 - s)\,ds = 1 - t + \frac{t^2}{2}$. In general, we have
\[ \varphi_k(t) = \sum_{i=0}^k \frac{(-1)^i t^i}{i!}. \]
So, letting $k \to \infty$, we have
\[ \varphi(t) = \sum_{i=0}^\infty \frac{(-1)^i t^i}{i!} = e^{-t}. \]
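The successive approximations are easy to generate symbolically. The following is a small sketch (not part of the original notes) using SymPy to produce the Picard iterates for $x' = -x$, $x(0) = 1$; the iterates are exactly the partial sums computed above.

```python
import sympy as sp

t, s = sp.symbols("t s")
f = lambda x: -x          # right-hand side of x' = -x
xi = sp.Integer(1)        # initial condition x(0) = 1

phi = xi
for _ in range(5):
    # phi_j(t) = xi + \int_0^t f(phi_{j-1}(s)) ds
    phi = xi + sp.integrate(f(phi.subs(t, s)), (s, 0, t))

print(sp.expand(phi))  # 1 - t + t**2/2 - t**3/6 + t**4/24 - t**5/120
```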

1.5. Uniqueness

Now that we know solutions exist, we are often concerned about uniqueness. Not every ODE will have a unique solution. Think of an ODE that represents the evaporation of a rain drop on the ground. After the rain drop has evaporated, the zero solution will also be valid. So we need extra conditions on the ODE, not just continuity of the function $f$.

As another example of an ODE with non-unique solutions, let $x' = \sqrt{|x|}$ and $x(0) = 0$. Then $x(t) \equiv 0$ is a solution. However, for any $c_1 \le 0 \le c_2$ the following is also a solution:
\[ x(t) = \begin{cases} -\frac{(c_1 - t)^2}{4} & t < c_1, \\ 0 & c_1 \le t \le c_2, \\ \frac{(t - c_2)^2}{4} & t > c_2. \end{cases} \]
So there are an infinite number of solutions even though $f(x) = \sqrt{|x|}$ is a continuous function on $\mathbb{R}$.

Remark 1.13. There exist continuous functions on $\mathbb{R}^2$ such that at each initial condition there exist an infinite number of solutions.

We will see that a sufficient condition is that $f$ is Lipschitz. In fact, since the set $D$ is open, we only need Lipschitz as a local condition. A function $f: D \to \mathbb{R}^n$, where $D \subset \mathbb{R} \times \mathbb{R}^n$ is open, is locally Lipschitz if for any compact set $C \subset D$ there exists a $K = K(C) \ge 0$ such that
\[ \|f(t, x) - f(t, y)\| \le K\|x - y\| \]
whenever $(t, x), (t, y) \in C$. As an example, $f(t, x) = x^2$ is not Lipschitz on $\mathbb{R}^2$, but it is locally Lipschitz. Notice that if $f$ has continuous partial derivatives, then it is locally Lipschitz.

We now prove a very useful inequality for this course. It will be helpful, together with the integral equation, in obtaining bounds and proving uniqueness of solutions.

Theorem 1.14 (Gronwall Inequality). Let $K \ge 0$ and let $f$ and $g$ be continuous non-negative functions on $\alpha \le t \le \beta$ such that
\[ f(t) \le K + \int_\alpha^t f(s)g(s)\,ds \]
for all $\alpha \le t \le \beta$. Then
\[ f(t) \le Ke^{\int_\alpha^t g(s)\,ds} \]
for all $\alpha \le t \le \beta$.

Proof. Let $h(t) = K + \int_\alpha^t f(s)g(s)\,ds$. Then $h(\alpha) = K$ and $f(t) \le h(t)$ for all $t \in (\alpha, \beta)$. From the Fundamental Theorem of Calculus we have $h'(t) = f(t)g(t) \le h(t)g(t)$. Now
\[ h'(t)e^{-\int_\alpha^t g(s)\,ds} - e^{-\int_\alpha^t g(s)\,ds}h(t)g(t) \le 0 \]
and
\[ \frac{d}{dt}\Big(h(t)e^{-\int_\alpha^t g(s)\,ds}\Big) \le 0. \]
Integrating with respect to $t$ we obtain
\[ h(t)e^{-\int_\alpha^t g(s)\,ds} \le h(\alpha)e^{-\int_\alpha^\alpha g(s)\,ds} = K. \]
So
\[ f(t) \le h(t) \le Ke^{\int_\alpha^t g(s)\,ds}. \]
□

Theorem 1.15. Let $f$ be locally Lipschitz continuous on $D$. If $(\tau, \xi) \in D$, then there exists a unique solution to the initial value problem.

Proof. Suppose there exist two solutions $\varphi_1$ and $\varphi_2$ lying in some compact set $C \subset D$ containing $(\tau, \xi)$, and let $L$ be a Lipschitz constant for $f$ in $C$. Then
\[ \varphi_i(t) = \varphi_i(\tau) + \int_\tau^t f(s, \varphi_i(s))\,ds \]

for $i = 1, 2$. Hence,
\[ \|\varphi_1(t) - \varphi_2(t)\|_1 = \Big\|\int_\tau^t f(s, \varphi_1(s)) - f(s, \varphi_2(s))\,ds\Big\|_1 \le \int_\tau^t \|f(s, \varphi_1(s)) - f(s, \varphi_2(s))\|_1\,ds \le \int_\tau^t L\|\varphi_1(s) - \varphi_2(s)\|_1\,ds. \]
Now we apply Gronwall's inequality with $g = L$ and $K = 0$. So $\|\varphi_1(t) - \varphi_2(t)\| \le 0$ for all $t \in I$. □

1.6. Continuation

We now know that if $f$ is locally Lipschitz, then solutions exist and are unique. The problem with the previous results is that the time for which the solution is known to exist is very short. How can we show the solution exists for a longer period of time? The idea is to combine the local solutions and continue the solutions as long as possible.

Let $x(t)$ be a solution for $a < t < b$. A continuation to the right is a solution $x_1(t)$ such that $x_1(t) = x(t)$ for $a < t < b$ and $x_1(t)$ exists on $a < t < b_1$ where $b < b_1$. Similarly, there is a continuation to the left.

Proposition 1.16. Let $x(t)$ be a solution of $x' = f(t, x)$ defined on $I = (a, b)$ where $b < \infty$. Then there exists a continuation of $x(t)$ to the right if and only if $\lim_{t \to b^-} x(t) = \xi$ exists and $(b, \xi) \in D$.

Proof. Suppose that $\lim_{t \to b^-} x(t) = \xi$ exists and $(b, \xi) \in D$. Then there exists a solution $\varphi(t)$ defined on $t \in (b - \alpha, b + \alpha)$ with $\varphi(b) = \xi$. Let
\[ x_1(t) = \begin{cases} x(t) & a < t < b, \\ \varphi(t) & b \le t < b + \alpha. \end{cases} \]
We now show that $x_1(t)$ is a solution for $a < t < b + \alpha$. For $a < t < b$ we know $x_1(t) = x_1(\tau) + \int_\tau^t f(s, x_1(s))\,ds$ by definition. For $t \in [b, b + \alpha)$ we know
\[ x_1(\tau) + \int_\tau^t f(s, x_1(s))\,ds = x_1(\tau) + \int_\tau^b f(s, x_1(s))\,ds + \int_b^t f(s, x_1(s))\,ds = x_1(b) + \int_b^t f(s, x_1(s))\,ds = \varphi(b) + \int_b^t f(s, \varphi(s))\,ds = \varphi(t) = x_1(t). \]
For the other direction the result is trivial since solutions are continuous. □

Proposition 1.17. Let $x(t)$ be a solution to $x' = f(t, x)$ defined on an interval $I = (a, b)$ where $b < \infty$. If there exist a compact set $C \subset D$ and $\tau \in I$ such that $(t, x(t)) \in C$ for all $t \ge \tau$, then there exists a continuation to the right.

Proof. We need to show that $\lim_{t \to b^-} x(t)$ exists; the limit point will then be in $C$ since $C$ is compact. Since $f$ is continuous and $C$ compact, there exists some $M > 0$ such that $\|f(t, x)\| \le M$ for all $(t, x) \in C$. If $t_1, t_2 \ge \tau$, then
\[ \|x(t_1) - x(t_2)\| = \Big\|\int_{t_1}^{t_2} f(s, x(s))\,ds\Big\| \le M|t_2 - t_1|. \]
Take an increasing convergent sequence $\{s_n\}$ in $I$ such that $s_n \to b$ as $n \to \infty$ and such that $\lim_{n \to \infty} x(s_n) = \xi$. This is possible since $C$ is compact. For $\tau < t < b$ we have $\|x(t) - x(s_n)\| \le M|s_n - t|$. Taking the limit as $n \to \infty$ we have $\|x(t) - \xi\| \le M|b - t|$. Therefore, as $t \to b^-$ we have $x(t) \to \xi$. □

Theorem 1.18. Let $x(t)$ be a solution to $x' = f(t, x)$ defined on an interval $I = (a, b)$ where $b < \infty$. If there exist a compact set $C \subset D$ and $\tau \in I$ such that $(t, x(t)) \in C$ for all $t \ge \tau$, then $x(t)$ has a continuation to the right $x_1(t)$ such that $(t, x_1(t)) \notin C$ for some $t \ge \tau$.

Proof. From Peano's theorem there exists some $\alpha > 0$ such that if $(\tau, \xi) \in C$, then $x' = f(t, x)$, $x(\tau) = \xi$ has a solution for $t \in (\tau - \alpha, \tau + \alpha)$. Since $C$ is compact there exists some $B > 0$ such that $|t| \le B$ for all $(t, x) \in C$. Now if $x(t)$ is a solution whose graph stays in $C$, it can be extended to a solution $x_1(t)$ for $t \in (a, b + \alpha)$, and $x_1(t)$ can be extended to a solution $x_2(t)$ for $t \in (a, b + 2\alpha)$, and so on. However, for $n \ge (B - b)/\alpha$ we know the continuation must leave $C$. □

A maximal solution is one that cannot be extended to the left or right.

Theorem 1.19. Let $x(t)$ be a solution of $x' = f(t, x)$ defined on an interval $I = (a, b)$ where $b < \infty$. Then there exists a maximal continuation to the right of $x(t)$.

Proof. Let $\{V_m\}$ be a collection of open sets such that $\overline{V}_m \subset V_{m+1}$, each $\overline{V}_m$ is compact, and $D = \bigcup_{m=1}^\infty V_m$. Assume that $(\tau, x(\tau)) \in V_1$. Applying the previous theorem to $V_1$, and in general to $V_2, V_3, \dots$, produces a sequence of continuations $x_m$ of $x(t)$ defined on intervals $(a, b_m)$. If $b_m \to \infty$ as $m \to \infty$, then we are done.

Suppose that $\lim_{m \to \infty} b_m = b_\infty < \infty$. Let $\lim_{t \to b_m^-} x_m(t) = \xi_m$ for each $m \in \mathbb{N}$, so long as the limit exists. If there exists some $M$ such that $\xi_M$ is the first limit not to exist, then $x_M$ is the maximal solution. On the other hand, if the limit exists for each $m$, then we have two possibilities. If $\lim_{m \to \infty} \xi_m$ does not exist, then we have a maximal solution. If the limit does exist and equals $\xi$, then $(b_\infty, \xi) \in D$ lies in some $V_m$; but the continuations leave $V_m$ infinitely often, so the solution cannot limit on $\xi$. So there is a maximal solution. □

Corollary 1.20. Every solution can be extended to a maximal solution.

Theorem 1.21. If $x(t)$ is a maximal solution, then $(t, x(t))$ approaches the boundary of $D$ as $t$ increases.

Proof. If the solution is defined on an interval $I = (a, b)$ where $b = \infty$, then we are done. Now suppose $b < \infty$ and that there exists a compact set $K \subset D$ for which there is no $\tau \in I$ such that $(t, x(t)) \notin K$ for all $t > \tau$; that is, the solution returns to $K$ for $t$ arbitrarily close to $b$. Let $N$ be sufficiently large that
\[ B = \{(t, x) : \|(t, x) - (s, y)\|_2 \le 1/N \text{ for some } (s, y) \in K\} \subset D. \]
From a previous proposition and the corollary to Peano's theorem, there exists some $s_1$ such that $(s_1, x(s_1)) \notin B$, and there exists some $t_2 > s_1$ such that $(t_2, x(t_2)) \in K$. Similarly, there exists some $s_2 > t_2$ such that $(s_2, x(s_2)) \notin B$, and there exists some $t_3 > s_2$ such that $(t_3, x(t_3)) \in K$. Continuing, there exist sequences $\{t_m\}$ and $\{s_m\}$ with $t_m < s_m < t_{m+1} < b$ where $(t_m, x(t_m)) \in K$ and $(s_m, x(s_m)) \notin B$. This implies that
\[ \|(s_m, x(s_m)) - (t_m, x(t_m))\|_2 \ge 1/N. \]
Let $M = \sup\{\|f(t, x)\|_2 : (t, x) \in B\}$. Then
\[ \frac{1}{N} \le \|(s_m, x(s_m)) - (t_m, x(t_m))\|_2 \le \int_{t_m}^{s_m} \sqrt{1 + \|x'\|_2^2}\,dt = \int_{t_m}^{s_m} \sqrt{1 + \|f(t, x(t))\|_2^2}\,dt \le |s_m - t_m|\sqrt{1 + M^2}. \]
However, $|s_m - t_m|\sqrt{1 + M^2} \to 0$ as $m \to \infty$, a contradiction. □

1.7. Continuity in initial conditions

For an initial value problem
\[ x' = f(t, x), \quad x(\tau) = \xi \]
with $f$ locally Lipschitz, what happens to the unique solutions as we vary the initial condition? Throughout this section we assume that $f$ is locally Lipschitz. We will denote the unique solution by $x(t, \tau, \xi)$. We want to investigate the behavior of solutions as we vary the initial data. We will assume the solutions are maximal solutions.

Theorem 1.22. Fix $(\sigma, \zeta) \in D$. If the domain of $x(t, \sigma, \zeta)$ contains $[a, b]$, then there exists some $\delta > 0$ such that for all $(\tau, \xi)$ in
\[ U = \{(\tau, \xi) : a < \tau < b \text{ and } \|\xi - x(\tau, \sigma, \zeta)\| < \delta\} \]
the domain of $x(t, \tau, \xi)$ contains $(a, b)$, and $x(t, \tau, \xi)$ depends continuously on the points in
\[ W = \{(t, \tau, \xi) : t \in (a, b) \text{ and } (\tau, \xi) \in U\}. \]

Proof. Let $\psi(t) = x(t, \sigma, \zeta)$. Choose $\delta_1 > 0$ such that
\[ C = \{(t, x) : a \le t \le b \text{ and } \|x - \psi(t)\| \le \delta_1\} \subset D. \]
Since $C$ is compact we know $f$ is Lipschitz on $C$ with some constant $L > 0$. Fix $\delta \in (0, \min\{\delta_1, e^{-L(b-a)}\delta_1\})$. We will use this $\delta$ to define $U$ and $W$. We now want to show they have the desired properties.

For $(t, \tau, \xi) \in W$ set $\varphi_0(t, \tau, \xi) = \psi(t) + \xi - \psi(\tau)$. So $\varphi_0$ is $\psi(t)$ plus a small constant. More specifically,
\[ \|\varphi_0(t, \tau, \xi) - \psi(t)\| = \|\xi - x(\tau, \sigma, \zeta)\| < \delta < \delta_1. \]
Hence, we have $(t, \varphi_0(t, \tau, \xi)) \in C$ for all $(t, \tau, \xi) \in W$. Set
\[ \varphi_1(t, \tau, \xi) = \xi + \int_\tau^t f(s, \varphi_0(s, \tau, \xi))\,ds. \]
Then
\[ \|\varphi_1(t, \tau, \xi) - \varphi_0(t, \tau, \xi)\| = \Big\|\int_\tau^t f(s, \varphi_0(s, \tau, \xi)) - f(s, \psi(s))\,ds\Big\| \le L\delta|t - \tau| < L\delta(b - a). \]
Therefore, we know that
\[ \|\varphi_1(t, \tau, \xi) - \psi(t)\| \le \|\varphi_1 - \varphi_0\| + \|\varphi_0 - \psi(t)\| < L\delta(b - a) + \delta = \delta(1 + L(b - a)) < \delta e^{L(b-a)} < \delta_1. \]
Inductively, we let
\[ \varphi_j(t, \tau, \xi) = \xi + \int_\tau^t f(s, \varphi_{j-1}(s, \tau, \xi))\,ds \]
for $j \ge 2$, and we have
\[ \|\varphi_j(t, \tau, \xi) - \varphi_{j-1}(t, \tau, \xi)\| = \Big\|\int_\tau^t f(s, \varphi_{j-1}) - f(s, \varphi_{j-2})\,ds\Big\| \le \int_\tau^t L\|\varphi_{j-1} - \varphi_{j-2}\|\,ds \le \int_\tau^t L\delta\frac{L^{j-1}|s - \tau|^{j-1}}{(j-1)!}\,ds = \delta\frac{L^j|t - \tau|^j}{j!}. \]
From the triangle inequality we have
\[ \|\varphi_j(t, \tau, \xi) - \psi(t)\| \le \delta\sum_{i=0}^j \frac{L^i|t - \tau|^i}{i!} \le \delta e^{L(b-a)} < \delta_1. \]
So $(t, \varphi_j(t, \tau, \xi)) \in C$ for all $(t, \tau, \xi) \in W$ and all $j \in \mathbb{N}$. Furthermore, each $\varphi_j$ is continuous and defined on $a \le t \le b$.

We now show that $\varphi_j$ converges to $x(t, \tau, \xi)$. Note that
\[ \|\varphi_n(t, \tau, \xi) - \varphi_m(t, \tau, \xi)\| \le \delta\sum_{i=m+1}^n \frac{L^i(b - a)^i}{i!}, \]
so the sequence is Cauchy and converges to some function $\varphi(t, \tau, \xi)$. Furthermore,
\[ \|\varphi(t, \tau, \xi) - \varphi_m(t, \tau, \xi)\| \le \delta\sum_{i=m+1}^\infty \frac{L^i(b - a)^i}{i!}. \]
So the convergence is uniform and $\varphi$ is continuous. To show it is a solution we show it satisfies the integral equation. Fix $\varepsilon > 0$ and take $j$ sufficiently large that $\|\varphi - \varphi_j\|$ and $\|\varphi - \varphi_{j-1}\|$ are less than $\varepsilon$ on $W$. Since $\varphi_j = \xi + \int_\tau^t f(s, \varphi_{j-1})\,ds$, we have
\[ \Big\|\varphi(t, \tau, \xi) - \xi - \int_\tau^t f(s, \varphi(s, \tau, \xi))\,ds\Big\| \le \|\varphi - \varphi_j\| + \Big\|\int_\tau^t f(s, \varphi_{j-1}(s, \tau, \xi)) - f(s, \varphi(s, \tau, \xi))\,ds\Big\| < \varepsilon + \int_\tau^t L\|\varphi_{j-1} - \varphi\|\,ds < \varepsilon + L\varepsilon(b - a). \]
Since $\varepsilon$ was arbitrary,
\[ \varphi(t, \tau, \xi) - \xi - \int_\tau^t f(s, \varphi(s, \tau, \xi))\,ds = 0 \]

and $\varphi(t) = \xi + \int_\tau^t f(s, \varphi(s, \tau, \xi))\,ds$. Also, we have
\[ \varphi(\tau, \tau, \xi) = \lim_{n \to \infty} \varphi_n(\tau, \tau, \xi) = \xi. \]
Therefore, $\varphi(t, \tau, \xi) = x(t, \tau, \xi)$ for $a \le t \le b$. □

We now know that $x(t, \tau, \xi)$ is well-defined on some region $D_f \subset \mathbb{R}^{d+2}$ with values in $\mathbb{R}^d$. Furthermore, $(t, \tau, \xi) \in D_f$ if and only if $(\tau, \xi) \in D$ and $t$ is in the domain of $x(t, \tau, \xi)$.

Theorem 1.23. The set $D_f$ is open in $\mathbb{R}^{d+2}$ and $x(t, \tau, \xi)$ is continuous on $D_f$ with values in $\mathbb{R}^d$.

Proof. Let $(s, \sigma, \zeta) \in D_f$. Then $s$ is in the domain of $x(t, \sigma, \zeta)$, and there exist $a < s < b$ as in the previous theorem. So there exists an open set $W$ containing $(s, \sigma, \zeta)$ with $W \subset D_f$. Denote this set by $W_{(s,\sigma,\zeta)}$; then $D_f = \bigcup W_{(s,\sigma,\zeta)}$. Hence, $D_f$ is open. The continuity follows from the previous theorem. □

1.8. Numerical approximations

Often it is difficult or impossible to write a solution to an ODE in terms of elementary functions. As in Peano's theorem it is possible to approximate solutions. For numerical work this is often sufficient.

1.8.1. Euler's method. This is the method used in Peano's theorem. We start with an interval $(\tau - \alpha, \tau + \alpha)$ and partition it using $2k + 1$ points, so
\[ \tau - \alpha = t_{-k} < \cdots < t_0 = \tau < \cdots < t_k = \tau + \alpha. \]
Define
\[ \psi_k(t) = \psi_k(t_i) + (t - t_i)f(t_i, \psi_k(t_i)) \]
for $t \in [t_i, t_{i+1})$ when $i \ge 0$, and for $t \in (t_{i-1}, t_i]$ when $i < 0$. From this we obtain line segments that estimate the true solution.

This is the oldest known method for numerical approximation and, as we saw previously, it converges to the solution; however, the convergence is very slow and the method is not frequently used in practice. If we look at the Taylor expansion we find the error is proportional to the step size. Hence, to increase the accuracy by a factor of ten we need ten times the number of steps. If the step size is not sufficiently small we can also see that the computed solutions are numerically unstable, and it can be challenging to find the step size one needs.

1.8.2. Midpoint method. The midpoint method is an improved version of Euler's method. Let $h$ be the step size for the partition, and consider $x' = f(t, x)$ with $x(t_0) = x_0$. Let
\[ x_{n+1} = x_n + hf\Big(t_n + \frac{h}{2},\ x_n + \frac{h}{2}f(t_n, x_n)\Big) \]
for $n = 0, 1, 2, \dots$. It can be shown that this method converges faster, but still not dramatically faster.

1.8.3. Runge-Kutta. This method was developed around 1900. It is significantly faster than the previous two, and variants of it are still used in computation. In this case we let
\[ k_1 = f(t_n, x_n), \quad k_2 = f\Big(t_n + \frac{h}{2}, x_n + \frac{h}{2}k_1\Big), \quad k_3 = f\Big(t_n + \frac{h}{2}, x_n + \frac{h}{2}k_2\Big), \quad k_4 = f(t_n + h, x_n + hk_3), \]
and let
\[ x_{n+1} = x_n + \frac{h}{6}(k_1 + 2k_2 + 2k_3 + k_4). \]
This should look similar to Simpson's rule. In this case one can show the error is roughly of order $h^4$. So the accuracy increases significantly with more steps.
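To make the three schemes concrete, here is a short sketch (not part of the original notes) implementing a single Euler, midpoint, and Runge-Kutta step for a scalar equation, applied to $x' = -x$, $x(0) = 1$, whose exact solution is $e^{-t}$.

```python
import math

def euler_step(f, t, x, h):
    return x + h * f(t, x)

def midpoint_step(f, t, x, h):
    return x + h * f(t + h / 2, x + (h / 2) * f(t, x))

def rk4_step(f, t, x, h):
    k1 = f(t, x)
    k2 = f(t + h / 2, x + (h / 2) * k1)
    k3 = f(t + h / 2, x + (h / 2) * k2)
    k4 = f(t + h, x + h * k3)
    return x + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate(step, f, t0, x0, T, h):
    t, x = t0, x0
    while t < T - 1e-12:       # march from t0 to T in steps of h
        x = step(f, t, x, h)
        t += h
    return x

f = lambda t, x: -x            # x' = -x, exact solution e^{-t}
for step in (euler_step, midpoint_step, rk4_step):
    x1 = integrate(step, f, 0.0, 1.0, 1.0, h=0.1)
    print(step.__name__, abs(x1 - math.exp(-1.0)))  # global error at t = 1
```

Halving $h$ should cut the Euler error roughly in half, the midpoint error by about a factor of four, and the Runge-Kutta error by about a factor of sixteen, matching the orders $h$, $h^2$, and $h^4$.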

1.9. Exercises

1.1. Prove the Cauchy-Schwarz inequality $|v \cdot w| \le \|v\|_2\|w\|_2$ where $v, w \in \mathbb{R}^n$ and $v \cdot w$ is the dot product.

1.2. Use the Cauchy-Schwarz inequality to show that $\|x + y\|_2 \le \|x\|_2 + \|y\|_2$.

1.3. Find constants $A$ and $B$ such that $A\|x\|_1 \le \|x\|_\infty \le B\|x\|_1$. For $n = 2$ graph the sets $\{x \in \mathbb{R}^2 : \|x\|_1 \le 1\}$ and $\{x \in \mathbb{R}^2 : \|x\|_\infty \le 1\}$.

1.4. Prove the Contraction Mapping Theorem.

1.5. Prove that the relation of equivalence of norms is an equivalence relation.

1.6. Show by example that Ascoli's Theorem is false when each of the hypotheses is removed individually from the statement. So remove, one at a time, the following:
a. The interval $I$ is bounded.
b. The sequence of functions $\{f_m\}$ is equicontinuous.
c. The sequence $\{f_m(t)\}$ is bounded for every $t \in I$.

1.7. Consider the initial value problem $x' = t^2 + x^2$ and $x(0) = 0$ on $\mathbb{R}^2$. Find the longest interval on which the proof of Peano's theorem guarantees a solution.

1.8. Let $D = \mathbb{R}^2$ and solve the initial value problem $x' = 1 + x^2$, $x(0) = 0$.

1.9. Suppose that $f(t, x)$ is continuous and bounded on $\mathbb{R} \times \mathbb{R}^d$. Show that the initial value problem has a solution defined on an arbitrarily long interval for any initial condition.

1.10. Show that if $f$ is locally Lipschitz on an open set $D$, then it is continuous on $D$. (Prove this from the definition.)

1.11. Suppose $f(t, x)$ is locally Lipschitz with respect to $x$ on $\mathbb{R}^2$ and there exist some $a < b$ such that $f(t, a) = f(t, b) = 0$ for all $t$. Show that if $a < \xi < b$, then the maximal solution of $x' = f(t, x)$, $x(\tau) = \xi$ is defined over all of $\mathbb{R}$.

1.12. Consider $x' = f(t, x)$ on $\mathbb{R}^{d+1}$. Suppose for every compact interval $I$ of $\mathbb{R}$ there exists a positive real number $M_I$ such that $f(t, x) \cdot x < 0$ whenever $\|x\| \ge M_I$ and $t \in I$. Prove that every maximal solution is defined on an interval of the form $a < t < \infty$.

1.13. For the $\alpha$ defined in Peano's theorem prove the following:

Theorem 1.24 (Picard-Lindelöf). Let $f$ be locally Lipschitz on $D$ and consider $x' = f(t, x)$ with $x(\tau) = \xi$ for $(\tau, \xi) \in D$. Then the sequence
\[ \varphi_0(t) = \xi, \quad \varphi_m(t) = \xi + \int_\tau^t f(s, \varphi_{m-1}(s))\,ds \]
is defined on $I = (\tau - \alpha, \tau + \alpha)$, and the sequence $\varphi_m$ converges uniformly on $I$ to the solution of the initial value problem.

1.14. Find the function $x(t, \tau, \xi)$ and its domain for each of the following:
(1) $x' = t^2/x^2$ on $D = \{(t, x) : x > 0\}$.
(2) $x' = x/t^2$ on $D = \{(t, x) : t > 0\}$.
(3) $x' = t^2/[x^2(1 - t^3)]$ on $D = \{(t, x) : t < 1 \text{ and } x > 0\}$.

1.15. Find $x(t, \tau, \xi, \mu)$ for $x' = x\cos(\mu t)$.

1.16. Consider the simple scalar differential equation $x' = -3x^2$ with the condition $x(0) = 1$.
(1) Use separation of variables to find the solution of this initial value problem.
(2) Use Euler's method to approximate the solution for $0 \le t \le 1$ with $h = 0.1$, $0.05$, $0.01$, and $0.005$.
(3) Use Runge-Kutta to approximate the solution for $0 \le t \le 1$ with $h = 0.1$, $0.05$, $0.01$, and $0.005$.
(4) Compare the approximate solutions with the error bounds given in Section 1.8.

CHAPTER 2

Linear Differential Equations

In this chapter we study properties of linear ODEs. Although many ODEs are nonlinear, they often have local linearizations. These linearizations have been used historically to study nonlinear behavior. The simplest case will be linear constant coefficient systems. We will see that in this case the solutions can be described quite easily.

2.1. Basic properties

As a reminder, a function $T: \mathbb{R}^n \to \mathbb{R}^m$ is linear if $T(\alpha x + \beta y) = \alpha T(x) + \beta T(y)$ for all $\alpha, \beta \in \mathbb{R}$ and $x, y \in \mathbb{R}^n$. From linear algebra we know that any linear function can be represented as $T(x) = Ax$ once a basis has been chosen.

Now let $x' = A(t)x$ where $A(t)$ is a matrix for each $t$ and the matrix varies continuously in time (so each entry is a continuous function). Let $\varphi_1$ and $\varphi_2$ be solutions and $a, b \in \mathbb{R}$. Then
\[ (a\varphi_1 + b\varphi_2)' = a\varphi_1' + b\varphi_2' = aA(t)\varphi_1 + bA(t)\varphi_2 = A(t)(a\varphi_1 + b\varphi_2). \]
So $a\varphi_1 + b\varphi_2$ is a solution. We will show that any linear ODE is of this form.

We now review some facts on norms of matrices.

Proposition 2.1. If $\|\cdot\|_a$ and $\|\cdot\|_b$ are norms on $\mathbb{R}^m$ and $\mathbb{R}^n$ respectively, then
\[ \|A\| = \sup\{\|Ax\|_a : \|x\|_b = 1\} \]
defines a norm on the space of $m \times n$ matrices, and $\|Ax\|_a \le \|A\|\,\|x\|_b$.

Proof. First, suppose that $\|A\| = 0$. By definition this is equivalent to $\|Ax\|_a = 0$ for all $\|x\|_b = 1$. This implies that $A$ is the zero matrix (to see this use the basis vectors of $\mathbb{R}^n$). For $\alpha \in \mathbb{R}$ we have
\[ \|\alpha A\| = \sup\{\|\alpha Ax\|_a : \|x\|_b = 1\} = \sup\{|\alpha|\,\|Ax\|_a : \|x\|_b = 1\} = |\alpha|\,\|A\|. \]
Let $B$ be an $m \times n$ matrix. Then
\[ \|A + B\| = \sup\{\|(A + B)x\|_a : \|x\|_b = 1\} \le \sup\{\|Ax\|_a + \|Bx\|_a : \|x\|_b = 1\} \le \sup\{\|Ax\|_a : \|x\|_b = 1\} + \sup\{\|Bx\|_a : \|x\|_b = 1\} = \|A\| + \|B\|. \]
Therefore, this defines a norm on matrices. Let $x \ne 0$. Then
\[ \|Ax\|_a = \|x\|_b\,\Big\|A\Big(\frac{x}{\|x\|_b}\Big)\Big\|_a \le \|x\|_b\,\|A\|. \]
□

We will look at $n \times n$ matrices and assume that $\|\cdot\|_a = \|\cdot\|_b$, so we only have one norm on the Euclidean space.
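The operator norm just defined is easy to spot-check numerically. The following is a minimal sketch (not part of the original notes) using NumPy, where `np.linalg.norm(A, 2)` is the operator norm induced by the Euclidean norm:

```python
import numpy as np

rng = np.random.default_rng(1)
for _ in range(1000):
    A = rng.normal(size=(4, 4))
    B = rng.normal(size=(4, 4))
    x = rng.normal(size=4)
    opA = np.linalg.norm(A, 2)   # operator norm induced by ||.||_2
    opB = np.linalg.norm(B, 2)
    # ||Ax|| <= ||A|| ||x||  and  ||AB|| <= ||A|| ||B||  (Propositions 2.1, 2.2)
    assert np.linalg.norm(A @ x) <= opA * np.linalg.norm(x) + 1e-9
    assert np.linalg.norm(A @ B, 2) <= opA * opB + 1e-9
```

The small additive tolerance only guards against floating-point rounding; the inequalities themselves are exact.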

Proposition 2.2. Let $\|\cdot\|_a$ be a norm on $\mathbb{R}^n$ and let $A$ and $B$ be $n \times n$ matrices. Then $\|AB\| \le \|A\|\,\|B\|$.

Proof. Let $x \in \mathbb{R}^n$. From the previous proposition we know that $\|Ax\|_a \le \|A\|\,\|x\|_a$. So
\[ \|ABx\|_a \le \|A\|\,\|B\|\,\|x\|_a. \]
If $\|x\|_a = 1$, then we have $\|ABx\|_a \le \|A\|\,\|B\|$. Hence, $\|AB\| \le \|A\|\,\|B\|$. □

Theorem 2.3. Let $A: I \to M_n(\mathbb{R})$ and $h: I \to \mathbb{R}^n$ be continuous on $I$. Then for all $(\tau, \xi) \in I \times \mathbb{R}^n$ the initial value problem
\[ x' = A(t)x + h(t), \quad x(\tau) = \xi \]
has a unique solution defined for all $t \in I$.

Proof. Let $K$ be a compact subset of $I \times \mathbb{R}^n$. Then $\|A(t)\|$ is bounded on $K$ by some $M > 0$, and
\[ \|A(t)p + h(t) - (A(t)q + h(t))\| \le \|A(t)\|\,\|p - q\| \le M\|p - q\| \]
for all $p, q \in \mathbb{R}^n$. Hence, $f(t, x) = A(t)x + h(t)$ is locally Lipschitz, and so there exists a unique solution for each initial condition.

To show the solution exists for all $t \in I$ we revisit Gronwall's inequality. Let $I = (c, d)$ and let $\varphi(t)$ be a maximal solution with $\varphi(\tau) = \xi$, defined on some interval $(a, b)$ containing $\tau$. Notice
\[ \varphi(t) = \varphi(\tau) + \int_\tau^t A(s)\varphi(s) + h(s)\,ds. \]
Suppose that $b < d$. Then there exist $C_1, C_2 > 0$ such that $\|A(t)\| \le C_1$ and $\|h(t)\| \le C_2$ for all $t \in [\tau, b]$. Hence,
\[ \|\varphi(t)\| \le \|\xi\| + C_2(b - \tau) + \int_\tau^t C_1\|\varphi(s)\|\,ds \]
for $t \in (\tau, b)$. Using Gronwall's inequality we then have
\[ \|\varphi(t)\| \le (\|\xi\| + C_2(b - \tau))e^{C_1(b - \tau)}, \]
and $(t, \varphi(t))$ is contained in
\[ \{(t, x) : \tau \le t \le b \text{ and } \|x\| \le (\|\xi\| + C_2(b - \tau))e^{C_1(b - \tau)}\}. \]
So the solution can be continued, since it does not approach the boundary of the domain of the function, a contradiction. Hence, the solution exists over the entire interval $I$. □

2.2. Fundamental matrices

We know that the solutions to $x' = A(t)x$ form a vector space (it is left to the reader to verify the remaining vector space properties) and that for each initial condition there is a unique solution. What can we say about the vector space of solutions?

Let $V$ be the set of solutions to $x' = A(t)x$ and $\tau \in I$ (where $I$ is the interval on which $A(t)$ varies continuously). For $e_1, \dots, e_n$ the standard basis vectors of $\mathbb{R}^n$, let $\varphi_i$ be the solution such that $\varphi_i(\tau) = e_i$. Then the solutions $\varphi_1, \dots, \varphi_n$ are unique and linearly independent at $\tau$. Let $\psi(t)$ be another solution. Then $\psi(\tau) = \xi = \sum_{i=1}^n a_i e_i$ for some $a_1, \dots, a_n \in \mathbb{R}$. So $\psi(\tau) = \sum_{i=1}^n a_i\varphi_i(\tau)$, and since the solutions form a vector space and are unique, we have $\psi(t) = \sum_{i=1}^n a_i\varphi_i(t)$ for all $t \in I$. Thus, $\varphi_1, \dots, \varphi_n$ form a basis for $V$, and $V$ is $n$-dimensional.

Now let
\[ X(t) = \big[\varphi_1(t)\ \cdots\ \varphi_n(t)\big]. \]
For each $t \in I$ we know that $X(t) \in M_n(\mathbb{R})$. This is called a fundamental matrix solution. Notice that
\[ X'(t) = \big[\varphi_1'(t)\ \cdots\ \varphi_n'(t)\big] = \big[A(t)\varphi_1\ \cdots\ A(t)\varphi_n\big] = A(t)X(t). \]
So we will often look at the matrix differential equation
(2.1) $X' = A(t)X$,
where $X: I \to M_n(\mathbb{R})$.

Claim 2.4. The following hold for (2.1).
a. $X(t)$ is a solution if and only if every column is a solution of $x' = A(t)x$.
b. Maximal solutions are defined over $I$.
c. If $X(t)$ is a solution, then $X(t)v$ is a solution of $x' = A(t)x$ for all $v \in \mathbb{R}^n$.
d. If $X(t)$ is a solution, then $X(t)B$ is a solution of $X' = A(t)X$ for all $B \in M_n(\mathbb{R})$.

Proof. Let $X = [x_1\ \cdots\ x_n]$. Then $A(t)X = [A(t)x_1\ \cdots\ A(t)x_n]$ and $X' = [x_1'\ \cdots\ x_n']$, so the first result follows. The second holds since each column solution is defined on all of $I$. For the third result, notice that $X(t)v = \sum_{i=1}^n v_ix_i(t)$ and each $v_ix_i(t)$ solves $x' = A(t)x$. Lastly, we see that $X(t)B = [X(t)b_1\ \cdots\ X(t)b_n]$ where $B = [b_1\ \cdots\ b_n]$, and the result follows from the first and third results. □

Proposition 2.5. Let $X(t)$ be a solution to (2.1). Then the determinant of $X(t)$ either vanishes for all $t \in I$ or is never zero on $I$.

Proof. Suppose there exists some $\tau \in I$ such that $\det X(\tau) = 0$. Then there exists some $v \in \mathbb{R}^n \setminus \{0\}$ such that $X(\tau)v = 0$. Let $\psi(t) = X(t)v$. Then $\psi(t)$ is a solution to $x' = A(t)x$ with $\psi(\tau) = 0$. We know that $\varphi(t) \equiv 0$ is a solution to $x' = A(t)x$ with $\varphi(\tau) = 0$. By uniqueness this implies that $\psi(t) = \varphi(t)$, so $X(t)v \equiv 0$ for all $t \in I$ and $\det X(t) = 0$ for all $t \in I$. □

Definition 2.6. A fundamental matrix $X(t)$ is a solution to (2.1) such that $\det X(t) \ne 0$ for some $t \in I$.

Theorem 2.7. If $A(t)$ is continuous on $I$, then a fundamental matrix solution to (2.1) exists. If $X(t)$ is a fundamental matrix solution, then $X(t)[X(\tau)]^{-1}\xi$ is the unique solution to $x' = A(t)x$ with $x(\tau) = \xi$. Furthermore, $x(t, \tau, \xi) = X(t)[X(\tau)]^{-1}\xi$ and $D_f = I \times I \times \mathbb{R}^n$.

Proof. The first part of the theorem follows from our previous results. Let $X(t)$ be a fundamental matrix solution and $\varphi(t) = X(t)[X(\tau)]^{-1}\xi$. Then $\varphi(t)$ is a solution to $x' = A(t)x$ since $[X(\tau)]^{-1}\xi \in \mathbb{R}^n$. Also $\varphi(\tau) = \xi$, so $\varphi(t) = X(t)[X(\tau)]^{-1}\xi = x(t, \tau, \xi)$ and $D_f = I \times I \times \mathbb{R}^n$. □

Remark 2.8. The solutions $\varphi_1, \dots, \varphi_n$ in $V$ form a basis if and only if for some $\tau \in I$ the vectors $\varphi_1(\tau), \dots, \varphi_n(\tau)$ form a basis of $\mathbb{R}^n$.

Definition 2.9. If $X(t)$ is a fundamental matrix and $s \in I$, then
\[ X(t, s) = X(t)[X(s)]^{-1} \]
is the principal fundamental matrix at $s \in I$.

So there are infinitely many fundamental matrices, but $X(s, s) = \mathrm{Id}$ and the principal fundamental matrix is unique.

Theorem 2.10. Let $A: I \to M_n(\mathbb{R})$ and $h: I \to \mathbb{R}^n$ be continuous on $I$. Then the solution to the initial value problem $x' = A(t)x + h(t)$ with $x(\tau) = \xi$ is given by
\[ x(t, \tau, \xi) = X(t, \tau)\xi + \int_\tau^t X(t, s)h(s)\,ds \]
where $X(t, s)$ is the principal matrix solution.

Proof. Let $X(t)$ be a fundamental matrix for (2.1). Suppose we want a solution to the initial value problem of the form $\varphi(t) = X(t)v(t)$ for some function $v: I \to \mathbb{R}^n$. Then
\[ \varphi'(t) = X'(t)v(t) + X(t)v'(t) = A(t)X(t)v(t) + h(t). \]
So we want $X(t)v'(t) = h(t)$, that is, $v'(t) = [X(t)]^{-1}h(t)$. Hence,
\[ v(t) = C + \int_\tau^t [X(s)]^{-1}h(s)\,ds \quad \text{and} \quad \varphi(t) = X(t)C + \int_\tau^t X(t)[X(s)]^{-1}h(s)\,ds. \]
The initial condition implies that $C = [X(\tau)]^{-1}\xi$. □

This last result is useful in obtaining solutions.
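For a constant coefficient matrix the principal fundamental matrix is $X(t, s) = e^{A(t-s)}$, so Theorem 2.10 can be checked directly. Here is a small sketch (not from the notes; the constant $A$ and forcing term are arbitrary illustrative choices) comparing the formula against direct numerical integration:

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp, quad_vec

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
h = lambda t: np.array([np.sin(t), np.cos(t)])
xi, tau, t1 = np.array([1.0, -1.0]), 0.0, 2.0

# x(t) = X(t, tau) xi + int_tau^t X(t, s) h(s) ds, with X(t, s) = expm(A (t - s))
formula = expm(A * (t1 - tau)) @ xi \
    + quad_vec(lambda s: expm(A * (t1 - s)) @ h(s), tau, t1)[0]

numeric = solve_ivp(lambda t, x: A @ x + h(t), (tau, t1), xi,
                    rtol=1e-10, atol=1e-12).y[:, -1]
print(formula, numeric)  # the two agree to integration tolerance
```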

Theorem 2.11 (Abel's Formula). Let $X(t)$ be a fundamental matrix. Then for all $\tau \in I$ we have
\[ \det X(t) = [\det X(\tau)]\,e^{\int_\tau^t \mathrm{Tr}\,A(s)\,ds}. \]

Proof. Notice that it is sufficient to prove that $\det X(t)$ is a solution to $y' = [\mathrm{Tr}\,A(t)]y$, since this implies that $y = ce^{\int_\tau^t \mathrm{Tr}\,A(s)\,ds}$ where $c = \det X(\tau)$. A straightforward computation shows that
\[ \frac{d}{dt}\det X(t) = \mathrm{Tr}\,A(t)\,\det X(t). \]
□

2.3. Higher order linear equations

As mentioned before, higher order equations of the form $x^{(n)} + a_n(t)x^{(n-1)} + \cdots + a_2(t)x' + a_1(t)x = g(t)$ can be formed into a first order system with
\[ A(t) = \begin{pmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & & & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \\ -a_1(t) & -a_2(t) & -a_3(t) & \cdots & -a_n(t) \end{pmatrix} \quad \text{and} \quad h(t) = \begin{pmatrix} 0 \\ \vdots \\ 0 \\ g(t) \end{pmatrix}. \]
We then have (as we did before)
\[ x_1 = x, \quad x_1' = x_2, \quad x_2' = x_3, \quad \dots, \quad x_n' = -a_1(t)x_1 - a_2(t)x_2 - \cdots - a_n(t)x_n + g(t). \]

Remark 2.12. A function $\varphi(t)$ is a solution to $x^{(n)} + a_n(t)x^{(n-1)} + \cdots + a_1(t)x = g(t)$ if and only if
\[ \big(\varphi(t), \varphi'(t), \dots, \varphi^{(n-1)}(t)\big)^T \]
is a solution to the associated first order system.

Theorem 2.13. Let $a_i(t)$ for $1 \le i \le n$ and $g(t)$ be continuous on $I$. Given $\tau \in I$ and $x_0, \dots, x_{n-1} \in \mathbb{R}$, there exists a unique solution to $x^{(n)} + a_n(t)x^{(n-1)} + \cdots + a_1(t)x = g(t)$ defined on $I$ such that $x(\tau) = x_0$ and $x^{(i)}(\tau) = x_i$ for $1 \le i \le n - 1$.

This theorem follows directly from the previous theorems, as it is just a special case, so it could be classified as a corollary. If $g(t) = 0$, then the equation is homogeneous and the solutions form a vector space.

Definition 2.14. Let $\varphi_1, \dots, \varphi_n$ be $(n-1)$-times differentiable solutions of $x^{(n)} + a_n(t)x^{(n-1)} + \cdots + a_1(t)x = g(t)$. The Wronskian is
\[ W(\varphi_1, \dots, \varphi_n) = \det\begin{pmatrix} \varphi_1 & \cdots & \varphi_n \\ \varphi_1' & \cdots & \varphi_n' \\ \vdots & & \vdots \\ \varphi_1^{(n-1)} & \cdots & \varphi_n^{(n-1)} \end{pmatrix}. \]
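As a quick check of the definition, the following sketch (not from the notes) computes the Wronskian of the solutions $\cos t$ and $\sin t$ of $y'' + y = 0$ with SymPy; it is identically 1, hence nonzero, so the two solutions are linearly independent.

```python
import sympy as sp

t = sp.Symbol("t")
phis = [sp.cos(t), sp.sin(t)]     # two solutions of y'' + y = 0
n = len(phis)

# Wronskian matrix: row i holds the i-th derivatives of the solutions
Wmat = sp.Matrix([[sp.diff(phi, t, i) for phi in phis] for i in range(n)])
print(sp.simplify(Wmat.det()))    # 1, nonzero for all t
```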

We will show that the Wronskian relates to the fundamental matrix.

Proposition 2.15. If there exists some $\tau \in I$ such that $W(\varphi_1, \dots, \varphi_n)(\tau) \ne 0$, then $\varphi_1, \dots, \varphi_n$ are linearly independent in the vector space of real-valued functions on $I$.

Proof. Suppose $c_1\varphi_1 + \cdots + c_n\varphi_n = 0$ on $I$ where $c_1, \dots, c_n \in \mathbb{R}$. Then for $\tau \in I$ we know
\[ \begin{pmatrix} \varphi_1(\tau) & \cdots & \varphi_n(\tau) \\ \varphi_1'(\tau) & \cdots & \varphi_n'(\tau) \\ \vdots & & \vdots \\ \varphi_1^{(n-1)}(\tau) & \cdots & \varphi_n^{(n-1)}(\tau) \end{pmatrix}\begin{pmatrix} c_1 \\ c_2 \\ \vdots \\ c_n \end{pmatrix} = 0. \]
Since $W(\tau) \ne 0$ we know that $c_1 = \cdots = c_n = 0$, so $\varphi_1, \dots, \varphi_n$ are linearly independent. □

The converse is in general false.

Theorem 2.16. Solutions of $x^{(n)} + a_n(t)x^{(n-1)} + \cdots + a_1(t)x = 0$ form a vector space of dimension $n$. Moreover, if $\{\varphi_1, \dots, \varphi_n\}$ is a set of $n$ solutions of $x^{(n)} + a_n(t)x^{(n-1)} + \cdots + a_1(t)x = 0$, then the following are equivalent:
a. $W(\varphi_1, \dots, \varphi_n)(\tau) \ne 0$ for some $\tau \in I$.
b. $\{\varphi_1, \dots, \varphi_n\}$ is a basis for the vector space of solutions to $x^{(n)} + a_n(t)x^{(n-1)} + \cdots + a_1(t)x = 0$.
c. $W(\varphi_1, \dots, \varphi_n)(t) \ne 0$ for all $t \in I$.

Proof. If
\[ X(t) = \begin{pmatrix} \varphi_1 & \cdots & \varphi_n \\ \vdots & & \vdots \\ \varphi_1^{(n-1)} & \cdots & \varphi_n^{(n-1)} \end{pmatrix} \]
is a fundamental matrix, then $W(\varphi_1, \dots, \varphi_n)(t) = \det X(t) \ne 0$, so the solutions form a vector space of dimension $n$.

For a. implies b., notice from the previous result that the solutions are linearly independent; since the solution space has dimension $n$, they form a basis.

For b. implies c., let $\varphi_1, \dots, \varphi_n$ be a basis and $X(t)$ as above. If $X(t)$ is not a fundamental matrix, then there exists some $v \ne 0$ such that $X(t)v \equiv 0$ for all $t \in I$, so $\varphi_1, \dots, \varphi_n$ are not linearly independent, a contradiction. Hence, $X(t)$ is a fundamental matrix and $\det X(t) = W(\varphi_1, \dots, \varphi_n)(t) \ne 0$ for all $t \in I$.

c. implies a. is trivial. □

2.4. Complex linear equations and variation of parameters

Even when we start with a real valued matrix we may have complex eigenvalues. So it can be useful to examine
(2.2) $z' = A(t)z + h(t)$
where $z \in \mathbb{C}^n$, $A(t) \in M_n(\mathbb{C})$, and $h(t) \in \mathbb{C}^n$ for all $t \in I$. The main idea in this section is that all of the results we have proven apply in this setting.

Notice each entry of $A(t)$ can be written as $a_{jk}(t) = \alpha_{jk}(t) + i\beta_{jk}(t)$ where $\alpha_{jk}, \beta_{jk}: I \to \mathbb{R}$. Similarly, $h_j(t) = \gamma_j(t) + i\delta_j(t)$ where $\gamma_j, \delta_j: I \to \mathbb{R}$ for all $1 \le j \le n$. Let
\[ B(t) = \begin{pmatrix} \alpha_{11} & -\beta_{11} & \cdots & \alpha_{1n} & -\beta_{1n} \\ \beta_{11} & \alpha_{11} & \cdots & \beta_{1n} & \alpha_{1n} \\ \vdots & & & & \vdots \\ \alpha_{n1} & -\beta_{n1} & \cdots & \alpha_{nn} & -\beta_{nn} \\ \beta_{n1} & \alpha_{n1} & \cdots & \beta_{nn} & \alpha_{nn} \end{pmatrix} \]

and
\[ g(t) = \begin{pmatrix} \gamma_1 \\ \delta_1 \\ \vdots \\ \gamma_n \\ \delta_n \end{pmatrix}. \]
Then
\[ \varphi(t) = (u_1(t) + iv_1(t), \dots, u_n(t) + iv_n(t)) \]
is a solution to (2.2) if and only if
\[ \psi(t) = (u_1, v_1, \dots, u_n, v_n) \]
is a solution to $x' = B(t)x + g(t)$. We then have the following result.

Theorem 2.17. Let $A: I \to M_n(\mathbb{C})$ and $h: I \to \mathbb{C}^n$ be continuous functions on an interval $I$ of $\mathbb{R}$. For any $(\tau, \xi) \in I \times \mathbb{C}^n$ the initial value problem $z' = A(t)z + h(t)$ with $z(\tau) = \xi$ has a unique solution that can be continued to all of $I$.

We also know that the solutions to $z' = A(t)z$ form a vector space, and if $\psi(t)$ is a particular solution to (2.2), then the set of solutions is $\{\psi + \varphi\}$ where $\varphi$ ranges over the solutions of $z' = A(t)z$.

Proposition 2.18. Let $A: I \to M_n(\mathbb{R})$ and $h: I \to \mathbb{R}^n$ be continuous on an interval $I \subset \mathbb{R}$. Let $\varphi$ be a solution to the complex equation $z' = A(t)z + h(t)$. Then $\varphi$ is real valued if and only if for some $\tau \in I$ the imaginary parts of $\varphi(\tau)$ are all zero.

Proof. Let $x(t)$ be the unique solution to the real equation $x' = A(t)x + h(t)$ with $x(\tau) = \xi \in \mathbb{R}^n$. Then $x(t)$ is also a solution to $z' = A(t)z + h(t)$. Now suppose that there exist some $\tau \in I$ and $\xi \in \mathbb{R}^n$ such that $\varphi(\tau) = \xi + i0$, where $\varphi(t)$ is a solution to $z' = A(t)z + h(t)$. Then from the previous discussion, if we look at the equation $x' = B(t)x + g(t)$, we see that the components of the solution corresponding to the imaginary parts are constantly zero. So $\varphi(t)$ is real valued. □

2.4.1. Variation of parameters. We now state a general method to find solutions. We will write the result for real valued equations, but as we just saw, the formula also holds for complex valued equations.

Theorem 2.19. Let $A(t)$ and $h(t)$ be continuous and let $(t_0, x_0) \in \mathbb{R} \times \mathbb{R}^n$. Then the solution to
\[ x' = A(t)x + h(t), \quad x(t_0) = x_0 \]
is given by
\[ x(t) = X(t)[X(t_0)]^{-1}x_0 + \int_{t_0}^t X(t)[X(s)]^{-1}h(s)\,ds \]
where $X(t)$ is a fundamental matrix solution of $x' = A(t)x$ and the integral is computed component by component.

Proof. We know solutions will exist and be unique. Also,
\[ x'(t) = X'(t)[X(t_0)]^{-1}x_0 + \int_{t_0}^t X'(t)[X(s)]^{-1}h(s)\,ds + X(t)[X(t)]^{-1}h(t) = A(t)X(t)[X(t_0)]^{-1}x_0 + A(t)\int_{t_0}^t X(t)[X(s)]^{-1}h(s)\,ds + h(t) = A(t)x(t) + h(t). \]
□
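The correspondence between the complex system (2.2) and its real form $x' = B(t)x + g(t)$ can also be checked numerically. Here is a small sketch (not from the notes; a constant complex $A$ is assumed for simplicity): build $B$ from $A$, integrate both homogeneous systems, and compare. SciPy's `solve_ivp` accepts a complex initial state for the complex system.

```python
import numpy as np
from scipy.integrate import solve_ivp

n = 2
rng = np.random.default_rng(2)
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))  # constant complex matrix

# real 2n x 2n form: entry a = alpha + i beta becomes the block [[alpha, -beta], [beta, alpha]]
B = np.zeros((2 * n, 2 * n))
for j in range(n):
    for k in range(n):
        B[2*j, 2*k], B[2*j, 2*k+1] = A[j, k].real, -A[j, k].imag
        B[2*j+1, 2*k], B[2*j+1, 2*k+1] = A[j, k].imag, A[j, k].real

z0 = rng.normal(size=n) + 1j * rng.normal(size=n)
x0 = np.column_stack([z0.real, z0.imag]).ravel()  # psi = (u1, v1, ..., un, vn)

z = solve_ivp(lambda t, z: A @ z, (0, 1), z0, rtol=1e-10).y[:, -1]
x = solve_ivp(lambda t, x: B @ x, (0, 1), x0, rtol=1e-10).y[:, -1]
print(np.allclose(x, np.column_stack([z.real, z.imag]).ravel(), atol=1e-7))  # True
```

Starting from a purely real $z_0$ and real $A$, the interleaved imaginary components stay zero, which is the content of Proposition 2.18.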

2.5. Exercises

2.1. Let $x(t)$ be a solution to $x' = A(t)x + h(t)$ where $A(t)$ and $h(t)$ are continuous on the open interval $I = (0, \infty)$. Prove that $x(t)$ is bounded for $t \ge 1$ if both $\int_1^\infty \|A(t)\|\,dt < \infty$ and $\int_1^\infty \|h(t)\|\,dt < \infty$.

2.2. Let $A_m$ be a sequence of invertible real $n \times n$ matrices such that $\{A_m\}$ is a bounded sequence.
(1) Show that if one of the rows of $A_m$ goes to 0 as $m$ goes to infinity, then $\det A_m$ goes to 0 as $m$ goes to infinity.
(2) Show that there exists a positive real number $\alpha$ such that $\|A_m^{-1}\| \ge \alpha$ for all $m$.

2.3. Show that the principal matrix solution defined by the equation $X(t, s) = X(t)[X(s)]^{-1}$ is independent of the fundamental matrix $X(t)$, and that
\[ X(t, s)X(s, \tau) = X(t, \tau). \]

2.4. Consider $x' = Ax + g(t)$ where
\[ A = \begin{pmatrix} 3 & 1 \\ 0 & 3 \end{pmatrix} \quad \text{and} \quad g(t) = \begin{pmatrix} \sin t \\ \cos t \end{pmatrix}. \]
Verify that
\[ X(t) = \begin{pmatrix} e^{3t} & te^{3t} \\ 0 & e^{3t} \end{pmatrix} \]
is a fundamental matrix solution of $x' = Ax$. Find a solution of the initial value problem $x' = Ax + g(t)$ with $x(0) = \begin{pmatrix} 1 \\ 1 \end{pmatrix}$.

2.5. Suppose $A: (0, \infty) \to M_n(\mathbb{R})$ is continuous. Prove the following: if $\int_1^\infty \mathrm{Tr}\,A(t)\,dt = \infty$, then there exists a solution $x(t)$ of $x' = A(t)x$ such that $x(t)$ is unbounded for $t \ge 1$.

2.6. Consider $X' = A(t)X$ on the interval $0 < t < \infty$ where
\[ A(t) = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 6t^{-3} & -6t^{-2} & 3t^{-1} \end{pmatrix}. \]
(1) Show that
\[ X(t) = \begin{pmatrix} t^3 & t^2 & t \\ 3t^2 & 2t & 1 \\ 6t & 2 & 0 \end{pmatrix} \]
is a fundamental matrix solution of $X' = A(t)X$.
(2) Calculate $X(t, s)$.
(3) Use $X(t, s)$ to solve the third-order initial value problem
\[ \frac{d^3y}{dt^3} - \frac{3}{t}\frac{d^2y}{dt^2} + \frac{6}{t^2}\frac{dy}{dt} - \frac{6}{t^3}y = 0, \quad y(1) = 1,\ y'(1) = 2,\ y''(1) = 3. \]

2.7. Let $p(t)$, $q(t)$, and $g(t)$ be continuous real-valued functions on an open interval $I$, and suppose that $\varphi_1(t)$ and $\varphi_2(t)$ are linearly independent solutions of the second-order scalar equation $y'' + p(t)y' + q(t)y = 0$. Derive an explicit formula for a particular solution of $y'' + p(t)y' + q(t)y = g(t)$.

2.8. Let $A: I \to M_n(\mathbb{R})$ be a continuous function on the open interval $I$. Show that $z(t) = (u_1(t) + iv_1(t), \dots, u_n(t) + iv_n(t))$, where $u_j(t)$ and $v_j(t)$ are differentiable real-valued functions on $I$ for $1 \le j \le n$, is a solution of the complex differential equation $z' = A(t)z$ if and only if $u(t) = (u_1(t), \dots, u_n(t))$ and $v(t) = (v_1(t), \dots, v_n(t))$ are solutions of the real differential equation $x' = A(t)x$.