Math 634 Course Notes

Todd Fisher
374 TMCB, Provo, UT



2000 Mathematics Subject Classification. Primary.
Key words and phrases. Ordinary differential equations.

Contents

Chapter 1. Introduction and basic concepts
  Introduction. Flows. Preliminaries on R^n. First-order systems of equations. Existence. Another approach for successive approximations. Uniqueness. Continuation. Continuity in initial conditions. Numerical approximations. Euler's method. Midpoint method. Runge-Kutta. Exercises.

Chapter 2. Linear Differential Equations
  Basic properties. Fundamental matrices. Higher order linear equations. Complex linear equations and variation of parameters. Variation of Parameters. Exercises.

Chapter 3. Constant Coefficients
  Exponential of a matrix. Generalized eigenspaces. Canonical forms. Real canonical form. Higher order equations. Integrals. Exercises.

Chapter 4. Qualitative theory
  Qualitative approach. Stability. Fixed Points. Lyapunov's Theorem. Damped and undamped equations. Proofs of Lyapunov's Theorems. Invariant sets and stability. Stability and Constant Coefficients. Almost linear systems. Stability and general systems. Floquet Theory. Exercises.

Chapter 5. Hyperbolicity
  Hyperbolic linear differential equations. Topological conjugacy. Linearizations. Hartman-Grobman Theorem. Hamiltonian Equations. Exercises.

Chapter 6. Poincaré Sections and Planar Dynamics
  Transversal Sections. Planar Dynamics. Periodic Solutions. Periodic solutions for Van der Pol Equations. Recurrence. Exercises.

Chapter 7. Bifurcations

CHAPTER 1

Introduction and basic concepts

In this chapter we introduce the fundamental concepts of ordinary differential equations. After a brief introduction we review material from analysis on R^n that will be useful. Then we explain the existence and uniqueness of solutions. After this we explain the continuation of solutions. Lastly, we address continuity in initial conditions and briefly mention numerical approximations.

1.1. Introduction

Ordinary differential equations are equations involving functions and their derivatives. These arise naturally in many applications. As an example we look at a mass-spring system. Suppose we have a single spring and a weight hanging vertically from the spring. For simplicity assume the spring is massless, that there is no air resistance, and that the weight can only move vertically. We let y be the displacement of the system. In this case Hooke's law states that the restoring force is proportional to the amount the spring is stretched, where the constant of proportionality K is positive and depends on the spring. The system then has two forces, F_1 = mg, the force of gravity, and F_2 = -Ky, the restoring force. We have an equilibrium when F_1 + F_2 = 0, that is, when the displacement is y = mg/K. From Newton's second law the overall force is my'', so we have -Ky + mg = my'', or y'' + (K/m)y = g. This is an example of a second order equation. It is called a second order equation since the second derivative of y appears in the expression, but no third or higher derivative does. A solution to the equation is a function y that satisfies the expression. Sometimes initial conditions are given, such as the initial position and velocity, and a solution is then a function that solves the equation and has the prescribed position and velocity at t = 0.

Another classic example is a pendulum. Suppose we have a pendulum of length L with a mass m at the end. For simplicity we assume the rod is massless and there is no air resistance. Let θ be the angle measured from the vertical and let s be the distance moved along the arc, so s = Lθ. Then v = s' = Lθ'. From Newton's second law we have F = mv' = -mg sin θ. Combining the two expressions we have Lθ'' = -g sin θ. This is again a second order equation. In mechanics second order equations arise very naturally.

More generally, let f : D -> R^n be continuous where D ⊂ R x R^n is open. We let

(1.1)  x' = f(t, x).

A function x : (a, b) -> R^n, where -∞ ≤ a < b ≤ ∞, is a solution to (1.1) if x is C^1, (t, x(t)) ∈ D for all t ∈ (a, b), and x'(t) = f(t, x(t)) for all t ∈ (a, b).

Example: Let x' = -x + t. If x(t) is a solution, then (e^t x)' = e^t x + e^t x' = e^t (x + x') = e^t t. So e^t x = e^t (t - 1) + c and x = t - 1 + c e^{-t}.

If we let (t_0, x_0) ∈ D, then the initial value problem is the equation

x' = f(t, x),  x(t_0) = x_0,

and a solution is a function x(t) such that x'(t) = f(t, x(t)) and x(t_0) = x_0.
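A quick numerical check of the worked example, not part of the original notes: the sketch below integrates x' = -x + t with x(0) = 0 (which gives c = 1) using scipy and compares the result with the formula t - 1 + e^{-t} derived above.

import numpy as np
from scipy.integrate import solve_ivp

# Integrate x' = -x + t with x(0) = 0 and compare with the exact solution
# x(t) = t - 1 + e^{-t} obtained above.
sol = solve_ivp(lambda t, x: -x + t, (0.0, 5.0), [0.0],
                t_eval=np.linspace(0.0, 5.0, 11), rtol=1e-10, atol=1e-12)
exact = sol.t - 1.0 + np.exp(-sol.t)
print(np.max(np.abs(sol.y[0] - exact)))   # should be very small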

6 2 1. INTRODUCTION AND BASIC CONCEPTS For the previous example if we specify that x(0) = 0, then x(0) = ce 0 or c = 1 and x(t) = t 1 + e t. One approach we will use in working with ordinary differential equations is to transform an equation involving derivatives into an equation involving integrals. This can be done due to the next result. Proposition 1.1. Let f : D R n be continuous on an open set D R R n. Given (t 0, x 0 ) D a C 1 function x : (a, b) R n is a solution to the initial value problem { x = f(t, x) x(t 0 ) = x 0 for an interval (a, b) containing t 0 if and only if x(t) = x 0 + t t 0 f(s, x(s)) ds for all t (a, b). Proof. Suppose that x(t) is a solution to the initial value problem. Notice that the function t f(t, x(t)) is continuous since the composition of continuous functions is continuous. So the function is integrable on bounded intervals. For all t (a, b) we know that x(t) x 0 = x(t) x(t 0 ) = t t 0 x (s) ds = t t 0 f(s, x(s)) ds. Now suppose that x(t) = x 0 + t t 0 f(s, x(s)) ds for all t (a, b). Then x(t 0 ) = x 0 and x (t) = f(t, x(t)) by the Fundamental Theorem of Calculus. This follows since f is continuous and x is C Flows. Suppose we have x = f(x) where f : D R n is continuous where D R n is open. Then the ordinary differential equation does not depend on t and the equation is autonomous. A family of maps ϕ t : D R n where D R n is open and t R is a flow if ϕ o = Id and ϕ t+s = ϕ t ϕ s for t, s R. Example: Let y R n and ϕ t (x) = x + ty. Then ϕ 0 (x) = x and ϕ t+s (x) = x + (t + s)y = (x + sy) + ty = ϕ t (ϕ s (x)). Proposition 1.2. Let f : R n R n be continuous such that { x = f(x) x(0) = x 0 has a unique solution x(t, x 0 ) defined for all t R, then ϕ t = x(t, x 0 ) is a flow. Proof Given s R and y : R R n defined by y(t) = x(t + s, x 0 ) we have y(0) = x(s, x 0 ) and y (t) = x (t + s, x 0 ) = f(x(t + s, x 0 )) = f(y(t)). So y is a solution to x = f(x). Since a solution is unique we have y(t) = x(t, y(0)) = x(t, x(s, x 0 )) = x(t + s, x 0 ). So ϕ t+s (x 0 ) = (ϕ t ϕ s )(x 0 ). Also, ϕ 0 (x 0 ) = x(0, x 0 ) = x Preliminaries on R n Let R n be an n-dimensional vector space over R. A point will be represented by x = (x 1,..., x n ). Note we will assume we are using the typical topology on R n unless stated otherwise. Definition 1.3. A function : R n R is a norm if i. x > 0 for all x R n, where x 0, and 0 = 0, ii. cx = c x for all c R and all x R n, and iii. x + y x + y for all x, y R n. Example: The Euclidean norm is x 2 = x x x2 n. The one-norm is x 1 = x 1 + x n and the uniform or infinity norm is x = max{ x 1,..., x n }. Definition 1.4. Two norms α and β are equivalent if there exist constants A, B > 0 such that A x α x β B x α. A very useful fact in analysis on R n is the following: Theorem 1.5. Any two norms on R n are equivalent.

7 1.2. PRELIMINARIES ON R n 3 Proof. It is sufficient to show that any norm is equivalent to 2. Let e i = (0,..., 0, 1, 0,..., 0) where the 1 occurs in the i th spot. Let c = max{ e j : 1 j m}. Then we have x = n i=1 x ie j and n n x x i e i c x i nc x 2 i=1 since x i x 2 for all 1 i n. Notice that x y x y mc x y 2. We know that 2 is a continuous function on R n so the above implies that is continuous on R n. The set {x R n : x 2 = 1} is a compact set. So we know there exists some A > 0 such that x A > 0 for x 2 = 1. So for all x 0 we have 1 x x 2 = 1 2 which implies that Hence, i=1 A 1 x x 2 = 1 x. x 2 A x 2 x and x 1 A x. Remark 1.6. The statement above is not true in general for infinite dimensional spaces. Definition 1.7. A sequence of functions f k : I R n is equicontinuous on and interval I R if given ɛ > 0 there exists some δ > 0 such that for all m 1 we have f m (s) = f m (t) < ɛ whenever s t < δ. Often the sequences of functions we will look at in this class are equicontinuous. The next theorem will be very useful. Theorem 1.8. (Ascoli) Let f m : I R n be a sequence of functions defined over a bounded interval I R. If {f m } is equicontinuous and for all t I the sequence f m (t) is bounded, then there exists a subsequence of f m converging uniformly on I. Proof. We will use Cantor s diagonalization method. Let {r 1,..., } be an enumeration of the rationals in I. Let f (p,1) be a subsequence of functions from {f M } such that f (p,1) (r 1 ) converges as p. Now choose a subsequence f (p,2) of f (p,1) such that f (p,2) (r 2 ) converges as p. For each k > 1 we can choose a subsequence of f (p,k 1) such that f (p,k) (r k ) converges. So we also have f (p,k) (r j ) converges for each 1 j k. Let g p = f (p,p). For any k N we know that g p (r k ) converges as p. Fix ɛ > 0. By equicontinuity there exists some δ > 0 such that g p (s) g p (t) < ɛ/3 for all s t < δ and s, t I. Since the rationals are dense in I we know there exists some K N such that for all t I there exists some r i where t r i < δ and i < K. (So K is sufficiently large that r 1,..., r K is δ/2-dense in I.) Since g k (r i ) for 1 i K is Cauchy there exists some N N where g p (r i ) g q (r i ) < ɛ/3 for 1 i K and p, g > N. Let t I and p, q N. Select r i for 1 i K such that t r i < δ. Then g p (t) g q (t) g p (t) g p (r i ) + g p (r i ) g q (r i ) + g q (r i ) g q (t) < ɛ. So g p is Cauchy and converges to g(t) for all t I. Now let p. Then g(t) g q (t) < ɛ for all t I and q N. So g p (t) converges uniformly to g(t) on I. The next theorem is one of the most useful theorems in analysis. Theorem 1.9. (Contraction Mapping Theorem) Let X be a complete metric space and f : X X be a continuous map such that d(f(x), f(y)) < λd(x, y) for some λ (0, 1) and all x, y X. Then there exists a unique x X such that f(x ) = x and for all x X the sequence {x, f(x), f 2 (x),..., f n (x),...} converges to x. The above theorem will be used for instance to find a solution to a differential equation. In this case the space X will be a Banach space of function and the map f will be an operator on the Banach space.
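The Contraction Mapping Theorem will later be applied to an integral operator on a Banach space of functions, but the mechanism is already visible in one dimension. The following sketch (an illustration, not from the notes) iterates f(x) = cos x, which is a contraction on [0, 1] since |f'(x)| = |sin x| <= sin 1 < 1 there.

import math

# Iterate the contraction f(x) = cos(x) on [0, 1]; by the Contraction Mapping
# Theorem the orbit x, f(x), f(f(x)), ... converges to the unique fixed point.
x = 1.0
for _ in range(60):
    x = math.cos(x)
print(x, abs(math.cos(x) - x))   # fixed point near 0.739085, residual near 0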

8 4 1. INTRODUCTION AND BASIC CONCEPTS 1.3. First-order systems of equations Many equations involve the rate of change for multiple variables such as predator-prey equations. Due to this we will often examine equations of the form (1.2) x 1 = f 1 (t, x 1, x 2,..., x n ) x 2 = f 2 (t, x 1, x 2,..., x n ). x n = f n (t, x 1, x 2,..., x n ) Also suppose we have a higher order equation of the form x n + f 1 x n f n 1 x + f n x = g(x). Then the equation can be rewritten in the form of (1.2) by introducing dummy variables. Let y 1 = x, y 1 = y 2 = x, y 2 = y 3 = x,..., y n = x n. Then we have y 1 = y 2. y n 1 = y n y n = f 1 y n 1 f 2 y n 2 f n y 2 + g(y 1 ) Example: For the equation of a spring of the form x + k mx = 0 we then have { y 1 = y 2 y 2 = k m y 1 Throughout this course we will be studying systems of the form (1.2). Example: Suppose we have linked springs hanging vertically from a ceiling with mass one attached to spring one and mass two attached to spring two. We let y 1 be the displacement of spring one and y 2 the displacement for spring two. Then we have the equations This can be rewritten in the form m 1 y 1 = k 1 y 1 + k 2 (y 2 y 1 ) m 2 y 2 = k 2 (y 2 y 1 ) If we let x 1 = x 2 x 2 = k1 m 1 x 1 + k2 m 1 x 3 k2 m 1 x 1 x 3 = x 4 x 4 = k2 m 2 (x 3 x 1 ) A = m 1 ) 0 k2 m k 2 m 2 0 k2 m 2 0 ( k1+k2 Then we can also write the expression as x = Ax where x = More generally, the expression (1.2) can be rewritten as x = f(t, x) where f(t, x) = (f 1 (t, x),..., f n (t, x)). Working with systems of equations can be difficult. One of the main techniques is simply to adapt the techniques learned for first order equations in the undergraduate course to this setting. x 1 x 2 x 3 x 4.
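The 4 x 4 matrix A displayed above is garbled in this transcription. Reading it off the first-order system written just before it (with the signs dictated by Newton's law, as in the derivation), a plausible reconstruction is sketched below; the sample constants are illustrative only.

import numpy as np

# Coupled-spring system rewritten as x' = Ax, using
#   x1' = x2,  x2' = -((k1 + k2)/m1) x1 + (k2/m1) x3,
#   x3' = x4,  x4' =  (k2/m2) x1 - (k2/m2) x3.
k1, k2, m1, m2 = 3.0, 2.0, 1.0, 1.5       # illustrative constants
A = np.array([
    [0.0,              1.0, 0.0,       0.0],
    [-(k1 + k2) / m1,  0.0, k2 / m1,   0.0],
    [0.0,              0.0, 0.0,       1.0],
    [k2 / m2,          0.0, -k2 / m2,  0.0],
])
# For an undamped spring system the eigenvalues should be purely imaginary.
print(np.round(np.linalg.eigvals(A), 6))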

9 1.4. EXISTENCE 5 Example: Suppose we start with the expression y 1 = y1 2 y 2 = y 1 + y 2 y 1 (t 0 ) = η 1 where η 1 > 0 y 2 (t 0 ) = η 2 To solve this we use separation of variables on the first equations and obtain 1 dy 1 = dt. So we have 1/y 1 = t + c or y 2 1 y 1 (t) = From the initial condition we then solve and obtain y 1 (t) = Substituting this into the second equation we have y 2 + y 2 = 1 t + c. η η 1 (t t 0 ). η η 1 (t t 0 ). This is first order equation of one variable and we can solve using techniques from undergraduate ODEs to obtain t e t s y 2 (t) = η 2 e t t0 + η 1 1 η 1 (s t 0 ) ds. t Existence In this section we address when there is a solution to (1.2). Our standing assumption is that the function function f(t, x) is continuous. Theorem (Peano) If f(t, x) is continuous in D, then for each (τ, ξ) D there exists at least one solution to the initial value problem x = f(t, x) x(τ) = ξ. Proof. The approach uses successive approximations and Ascoli s theorem. We know D is open so there exists a rectangle R D where R = {(t, x) D : t τ b and x ξ 1 b} for some b > 0. Since f is continuous and R is compact we know there exists M = sup{ f(t, x) 1 : (t, x) R} <. Let α = min{b, b M }. The idea is that the solutions are bounded by a cone of slope ±M in R so we know the solutions exist for at least time α. Let I = (τ α, τ + α). We know that f is uniformly continuous on R so there exists some δ(m) > 0 such that f(t, x) f(s, y) 1/M when t s < δ and x y 1 < δ for (t, x), (s, y) R. We want to construct a sequence of functions that satisfy Ascoli s Theorem. Let (M) = min{δ(m), δ(m)/m, 1/M} and fix a partition of I such that the distance between any two elements in the partition is less than (M) and τ α = t p < t p+1 < < t 1 < t 0 = τ < t 1 < < t p = τ + α. Now we define ϕ M for each M N. For t [t 1, t 1 ] let ϕ M (t) = ξ + (t t 0 )f(t 0, ξ) = ξ + (t τ)f(t 0, ξ). For t [t 1, t 2 ] define ϕ M (t) = ϕ M (t 1 ) + (t t 1 )f(t 1, ϕ M (t 1 )). Continue in this way to define ϕ M over I. We obtain a collection of (2p 1) line segments joined at the vertices. So ϕ M is continuous for each M. We now show (t, ϕ M (t)) stays in R for t I. To do this we show that ϕ M (t) ϕ M (s) 1 < (t s)m for t, s I.

10 6 1. INTRODUCTION AND BASIC CONCEPTS Assume that τ s < t. Then there exists some j, k {0,..., p} such that t j s < t j+1 and t k < t t k+1. If j = k then ϕ M (t) ϕ M (s) = t s f(t j, ϕ M (t j ) 1 t s M. If j < k, then ϕ M (t) ϕ M (s) ϕ M (t) ϕ M (t k ) ϕ M (t j+1 ) ϕ M (s) 1 ( t t k + t k t k t j+1 s )M = t s M. For s < τ < t we proceed similar to the above and for τ α < s < t < τ. Also, ϕ M (t) 1 ϕ M (t) ϕ M (τ) 1 + ϕ M (τ) 1 t τ M + ξ 1 αm + ξ 1. This implies that ϕ M is a uniformly bounded sequence that is equicontinuous over a bounded interval so we can apply Ascoli s Theorem. Notice that if t I such that t t i, then ϕ m(t) exists and equals f(t k, ϕ M (t k )) for some unique t k where t t k < (M). So ϕ m(t) f(t, ϕ M (t)) 1 = f(t k, ϕ M (t k )) f(t, ϕ M (t)) 1 < 1 M. (So ϕ M is almost a solution to the ODE.) Now let ϕ Mi be a uniformly convergent subsequence on I to a function ϕ : I R n. We now show that ϕ satisfies the integral equation. Notice that ϕ(t) ϕ(τ) t τ f(s, ϕ(s))ds 1 ϕ(t) ϕ Mi (t) 1 + ϕ Mi (t) ϕ(τ) t τ f(s, ϕ M i (s))ds 1 + t τ f(s, ϕ M i (s)) f(s, ϕ(s))ds 1. The first term goes to zero by convergence. The third term goes to zero since the sequence converges uniformly so we can pass the limit inside the integral. For the second term we know that by the definition of ϕ M. Then ϕ M (t) = ϕ M (τ) + t τ ϕ M i (s) f(s, ϕ Mi (s))ds 1 t τ ϕ M (s)ds t τ ϕ M i (s) f(s, ϕ Mi (s)) 1 ds t 1 τ M i ds α M i. So ϕ(t) ϕ(τ) t τ f(s, ϕ(s))ds 1 = 0 and ϕ(t) = ϕ(τ) + t f(s, ϕ(s))ds. Corollary If f is continuous on an open set D, then every point in D has at least one solution passing through it. Corollary If f is continuous on D and C D is compact, then there exists some α > 0 such that if (t, ξ) C then the initial value problem x = f(t, x), x(τ) = ξ has a solution defined for the interval (τ α, τ + α) Another approach for successive approximations. In other resources another method is used to construct a sequence of functions that converges to a solution. The idea is to use the integral equation to obtain a solution. Let ϕ 0 (t) = ξ and ϕ 1 (t) = ξ + t τ f(s, ϕ 0(s))ds. In general, let ϕ j (t) = ξ + t τ f(s, ϕ j 1(s))ds for j > 1. As in the last section one can show that ϕ j converges to a solution to the initial value problem. As an example let x = x and x(0) = 1. Then ϕ 0 (t) = 1 and ϕ 1 (t) = 1 + t 0 1ds = 1 t. For ϕ 2(t) = 1 + t 0 (1 s)ds = 1 t + t2 2. In general, we have k ( 1) i )t i ϕ k (t) =. i! i=0 τ
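From ϕ_1(t) = 1 - t and the limit e^{-t}, the example above appears to be x' = -x with x(0) = 1 (the sign is lost in the transcription). Under that reading, the successive approximations can be reproduced symbolically; a sketch with sympy:

import sympy as sp

t, s = sp.symbols("t s")
f = lambda s_, x: -x                     # assumed right-hand side x' = -x
phi = sp.Integer(1)                      # phi_0(t) = xi = 1
for _ in range(5):
    # phi_j(t) = xi + integral from 0 to t of f(s, phi_{j-1}(s)) ds
    phi = 1 + sp.integrate(f(s, phi.subs(t, s)), (s, 0, t))
print(sp.expand(phi))                    # 1 - t + t**2/2 - ..., a partial sum of e^{-t}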

So we have

ϕ(t) = Σ_{i=0}^∞ (-1)^i t^i / i! = e^{-t}.

1.5. Uniqueness

Now that we know solutions exist we are often concerned about uniqueness. Not every ODE will have a unique solution. Think of an ODE that represents the evaporation of a rain drop on the ground. After the rain drop has evaporated the zero solution will also be valid. So we need extra conditions on the ODE, not just continuity of the function f.

As another example of an ODE with non-unique solutions, let x' = √|x| with x(0) = 0. Then x(t) ≡ 0 is a solution. However, for any c_1 ≤ 0 ≤ c_2 the following is also a solution:

x(t) = -(c_1 - t)^2/4   for t < c_1,
x(t) = 0                for c_1 ≤ t ≤ c_2,
x(t) = (t - c_2)^2/4    for t > c_2.

So there are an infinite number of solutions, even though the right-hand side is a continuous function on R.

Remark. There exist continuous functions on R^2 such that at each initial condition there are an infinite number of solutions.

We will see that a sufficient condition for uniqueness is that f is Lipschitz. In fact, since the set D is open we only need Lipschitz as a local condition. A function f : D -> R^n, where D ⊂ R x R^n is open, is locally Lipschitz if for any compact set C ⊂ D there exists a K = K(C) ≥ 0 such that

‖f(t, x) - f(t, y)‖ ≤ K ‖x - y‖  whenever (t, x), (t, y) ∈ C.

As an example, f(t, x) = x^2 is not Lipschitz on R^2, but it is locally Lipschitz. Notice that if f has continuous partial derivatives, then it is locally Lipschitz.

We now prove a very useful inequality for this course. It will be used with the integral equation to obtain bounds and to prove uniqueness of solutions.

Theorem (Gronwall Inequality). Let K ≥ 0 and let f and g be continuous non-negative functions on α ≤ t ≤ β such that

f(t) ≤ K + ∫_α^t f(s)g(s) ds  for all α ≤ t ≤ β.

Then

f(t) ≤ K e^{∫_α^t g(s) ds}  for all α ≤ t ≤ β.

Proof. Let h(t) = K + ∫_α^t f(s)g(s) ds. Then h(α) = K and f(t) ≤ h(t) for all t ∈ (α, β). From the Fundamental Theorem of Calculus we have h'(t) = f(t)g(t) ≤ h(t)g(t). Now

h'(t) e^{-∫_α^t g(s) ds} - e^{-∫_α^t g(s) ds} h(t)g(t) ≤ 0

and

d/dt ( h(t) e^{-∫_α^t g(s) ds} ) ≤ 0.

Integrating with respect to t we obtain

h(t) e^{-∫_α^t g(s) ds} ≤ h(α) e^{-∫_α^α g(s) ds} = K.

So f(t) ≤ h(t) ≤ K e^{∫_α^t g(s) ds}.

Theorem. Let f be locally Lipschitz continuous on D. If (τ, ξ) ∈ D, then there exists a unique solution to the initial value problem.

Proof. Suppose there exist two solutions ϕ_1 and ϕ_2. Let C ⊂ D be a compact set containing (τ, ξ) and let L be a Lipschitz constant for f on C. Then

ϕ_i(t) = ϕ_i(τ) + ∫_τ^t f(s, ϕ_i(s)) ds

12 8 1. INTRODUCTION AND BASIC CONCEPTS for i = 1, 2. Hence, ϕ 1 (t) ϕ 2 (t) 1 = t τ f(s, ϕ 1(s)) f(s, ϕ 2 (s))ds 1 t τ f(s, ϕ 1(s)) f(s, ϕ 2 (s)) 1 ds t τ L ϕ 1(s) ϕ 2 (s) 1 ds. Now we apply Gronwall s inequality where g = L and K = 0. So ϕ 1 (t) ϕ 2 (t) 0 for all t I Continuation We now know that if f is locally Lipschitz that solutions exist and are unique. The problem with the previous results is that the time for which the solution is known to exist is very short. How can we show the solution exists for a longer period of time? The idea is to combine the local solutions and continue the solutions as long as possible. Let x(t) be a solution for a < t < b a continuation to the right is a solution x 1 (t) such that x 1 (t) = x(t) for a < t < b and x 1 (t) exists an a < t < b 1 where b < b 1. Similarly, there is a continuation to the left. Proposition Let x(t) be a solution of x = f(t, x) defined on I = (a, b) where b <. Then there exists a continuation of x(t) to the right if and only if lim t b x(t) = ξ exists and (b, ξ) D. Proof. Suppose that lim t b x(t) = ξ exists and (b, ξ) D. Then there exists a solution ϕ(t) defined on t (b α, b + α) where ϕ(b) = ξ. Let { x(t) a < t < b x 1 (t) = ϕ(t) b t < b + α. We now show that x(t) is a solution for a < t < b+α. For a < t < b we know x 1 (t) = x 1 (τ)+ t τ f(s, x 1(s))ds by definition. For t [b, b + α) we know x 1 (τ) + t τ f(s, x 1(s))ds = x 1 (τ) + b τ f(s, x 1(s))ds + t b f(s, x 1(s))ds = x 1 (b) + t b f(s, x 1(s))ds = ϕ(b) + t f(s, ϕ(s))ds b = ϕ(t) = x 1 (t). For the other direction the result is trivial since solutions are continuous. Proposition Let x(t) be a solution to x = f(t, x) defined on an interval I = (a, b) where b <. If there exists a compact set C D and τ I such that (t, x(t)) C for all t τ, then there exists a continuation to the right. Proof. We need to show that lim t b x(t) exists. Then this will be in C since C is compact. Since f is continuous and C compact there exists some M > 0 such that f(t, x) M for all (t, x) C. If t 1, t 2 τ, then t2 x(t 1 ) x(t 2 ) = f(s, x(s))ds M t 2 t 1. t 1 Take an increasing convergent sequence {s n } in I such that s n b as n and such that lim n x(s n ) = ξ. This is possible since C is compact. For τ < t < b we have x(t) x(s n ) M s n t. Taking the limit as n we have x(t) ξ M b t. Therefore, as t b we have x(t) ξ. Theorem Let x(t) be a solution to x = f(t, x) defined on an interval I = (a, b) where b <. If there exists a compact set C D and τ I such that (t, x(t)) C for all t τ, then x(t) has a continuation to the right such that (t, x 1 (t)) / C for some t τ. Proof. From Peano s theorem there exists some α > 0 such that if (τ, ξ) C, then x = f(t, x), x(τ) = ξ has a solution for t (τ α, τ + α). Since C is compact there exists some B > 0 such that t B for all (t, x) C. Now if x(t) is a solution it can be extended to a solution x 1 (t) for t (a, b+α) and x 1 (t) can be expended to a solution x 2 (t) for t (a, b + 2α). However, for n (B b)/α we know the continuation will leave C. A maximal solution is one that cannot be extended to the left or right. Theorem Let x(t) be a solution of x = f(t, x) defined on an interval I = (a, b) where b <. Then there exists a maximal continuation to the right of x(t).

13 1.7. CONTINUITY IN INITIAL CONDITIONS 9 Proof. Let V m be a collection of open sets such that V m V m+1, each V m is compact, and D = m=1 V m. Assume that (τ, x(τ)) V 1. Applying the previous theorem to V 1 and in general V 2, V 3,... we see that this produces a sequence of continuations x m for x(t) defined on an interval (a, b m ). If b m as m, then we are done. Suppose that lim m b m = b <. Let lim t b m x m (t) = ξ m for each m N so long as the limit exists. If there exists some M such that ξ M is the first limit not to exist, then this is the maximal solution. On the other hand, if the limit exists for each m then we have two possibilities. Suppose that lim m ξ m does not exist, then we have the maximal solution. If the limit does exist and equals ξ, then (b, ξ) D is in some V m, but the solution leaves V m infinitely often so cannot limit on ξ. So there is a maximal solution. Corollary Every solution can be extended to a maximal solution. Theorem If x(t) is a maximal solution, then (t, x(t)) approaches the boundary of D as t increases. Proof. If the solution is defined over an interval I = (a, b) where b =, then we are done. Now suppose b < and there exists a compact set K D where there does not exists some τ I such that if t > τ, then (t, x(t)) / K. Let N be sufficiently large such that B = {(t, x) : (t, x) (s, y) 2 1 N for some (s, y) K} D. From a previous proposition and corollary to Peano s Theorem there exists some s 1 such that s 1 such that (s 1, x(s 1 )) / B and there exists some t 2 > s 1 such that (t 2, x(t 2 )) K. Similarly, there exists some s 2 > t 2 such that (s 2, x(s 2 )) / B and there exists some t 3 > s 2 such that (t 3, x(t 3 )) K. Continuing there exist sequences {t m } and {s m } such that t m < s m < t m+1 < b where (t m, x(t m )) K and (s m, x(s m )) / B. This implies that (s m, x(s m )) (t m, x(t m )) 2 1/N. Let M = sup{ f(t, x) 2 : (t, x) B}. Then 1 N (s m, x(s m )) (t m, x(t m )) s m t m 1 + x 2 2 dt = s m t m 1 + f(t, x(t)) 2 2 dt s m t m 1 + M 2. However, s m t m 1 + M 2 0) as m, a contradiction. For an initial value problem 1.7. Continuity in initial conditions { x = f(t, x) x(τ) = ξ such that f is locally Lipschitz what happens to the unique solutions as we vary the initial condition? Throughout this section we assume that f is locally Lipschitz. We will denote the unique solution by x(t, τ, ξ). We want to investigate the behavior of solutions as we vary the initial data. We will assume the solutions are maximal solutions. Theorem Fix (σ, ζ) D. If the domain of x(t, σ, ζ) contains a t b, then there exists some δ > 0 such that for all (τ, ξ) in U = {(τ, ξ) : a < τ < b and ξ x(τ, σ, ζ) < δ the domain of x(t, τ, ξ) contains (a, b) and x(t, τ, ξ) depends continuously on the points in W = {(t, τ, ξ) : t (a, b) and (τ, ξ) U}. Proof. Let ψ(t) = x(t, σ, ζ). Choose δ 1 > 0 such that C = {(t, x) : a t b and x ψ(t) δ 1 } D. Since C is compact we know f is Lipschitz on C with constant L > 0. Fix δ (0, min{δ 1, e L(b a) δ 1 }). We will use the δ to define U and W. We now want to show they have the desired properties.

14 10 1. INTRODUCTION AND BASIC CONCEPTS For (t, τ, ξ) W set ϕ 0 (t, τ, ξ) = ψ(t) + ξ ψ(τ). So ϕ 0 (t, τ, ξ) = ψ(t) + C where C is small. More specifically, ϕ 0 (t, τ, ξ) ψ(t) = ξ x(τ, σ, ζ) < δ < δ 1. Hence, we have (t, ϕ 0 (t, τ, ξ)) C for all (t, τ, ξ) W. Set Then Therefore, we know that Inductively, we let and we have ϕ 1 (t, τ, ξ) = ξ + ϕ 1 (t, τ, ξ) ϕ 0 (t, τ, ξ) ϕ 1 (t, τ, ξ) ψ(t) t τ f(s, ϕ 0 (s, τ, ξ))ds. = t τ f(s, ϕ 0(s, τ, ξ)) f(s, ψ(s))ds Lδ t τ < Lδ(b a). ϕ 1 (t, τ, ξ) ϕ 0 (t, τ, ξ) + ϕ 0 (t, τ, ξ) ψ(t) < Lδ(b a) + δ(b a) = δ(1 + L(b a)) < δe L(b a) < δ 1. ϕ j (t, τ, ξ) = ξ + ϕ j (t, τ, ξ) ϕ j 1 (t, τ, ξ) for j 3. From the triangle inequality we have t τ f(s, ϕ j 1 (s, τ, ξ))ds = t τ f(s, ϕ j 1(s, τ, ξ)) f(s, ϕ j 2 (s, τ, ξ))ds t τ L ϕ j 1(s, τ, ξ) ϕ j 2 (s, τ, ξ))ds ( ) t τ Lδ L j 1 s τ j 1 (j 1)! ds = δ Lj t τ j j! ϕ j (t, τ, ξ) ψ(t) δ j L i t τ i i=0 i! δe L(b a) < δ 1. So (t, ϕ j (t, τ, ξ)) C for all (t, τ, ξ) W for all j N. Furthermore, each ϕ j is continuous and defined on a t b. We now show that ϕ j converges to x(t, τ, ξ). Note that ϕ n (t, τ, ξ) ϕ m (t, τ, ξ) δ i=m+1 n Li (b a) i i! so the sequence is Cauchy and converges to some function ϕ(t, τ, ξ). Furthermore, L i (b a) i ϕ(t, τ, ξ) ϕ m (t, τ, ξ) δ. i! So convergence is uniform and ϕ is continuous. To show it is a solution we show it satisfies the integral equation. Fix ɛ > 0 and j sufficiently large, then ϕ(t, τ, ξ) ξ t f(s, ϕ(s, τ, ξ))ds τ = Hence, m+1 ϕ(t, τ, ξ) ϕ j(t, τ, ξ) t τ f(s, ϕ(s, τ, ξ))ds + t τ f(s, ϕ j 1(s, τ, ξ))ds ϕ(t, τ, ξ) ϕ j (t, τ, ξ) + t τ f(s, ϕ(s, τ, ξ)) f(s, ϕ j(s, τ, ξ))ds < ɛ + t τ L ϕ j 1(s, τ, ξ) ϕ(s, τ, ξ) ds < ɛ + Lɛ(b a). ϕ(t, τ, ξ) ξ t τ f(s, ϕ(s, τ, ξ))ds = 0

and ϕ(t) = ξ + ∫_τ^t f(s, ϕ(s, τ, ξ)) ds. Also, we have

ϕ(τ, τ, ξ) = lim_{n→∞} ϕ_n(τ, τ, ξ) = ξ.

Therefore, ϕ(t, τ, ξ) = x(t, τ, ξ) for a ≤ t ≤ b.

We now know that x(t, τ, ξ) is well-defined on some region D_f ⊂ R^{d+2} with values in R^d. Furthermore, (t, τ, ξ) ∈ D_f if and only if (τ, ξ) ∈ D and t is in the domain of x(t, τ, ξ).

Theorem. The set D_f is open in R^{d+2} and x(t, τ, ξ) is continuous on D_f with values in R^d.

Proof. Let (s, σ, ζ) ∈ D_f. Then s is in the domain of x(t, σ, ζ) and there exist a < s < b as in the previous theorem. So there exists an open set W containing (s, σ, ζ) where W ⊂ D_f. Denote this set by W_{(s,σ,ζ)}; then D_f = ⋃ W_{(s,σ,ζ)}. Hence, D_f is open. The continuity follows from the previous theorem.

1.8. Numerical approximations

Often it is difficult or impossible to write a solution to an ODE in terms of elementary functions. As in Peano's theorem it is possible to approximate solutions, and for numerical work this is often sufficient.

1.8.1. Euler's method. This is the method used in Peano's Theorem. We start with an interval (τ - α, τ + α) and partition it using 2k + 1 points, so τ - α = t_{-k} < ... < t_0 = τ < ... < t_k = τ + α. Define

ψ_k(t) = ψ_k(t_i) + (t - t_i) f(t_i, ψ_k(t_i))

for t ∈ [t_i, t_{i+1}) when i ≥ 0, and for t ∈ (t_{i-1}, t_i] when i < 0. From this we obtain line segments that estimate the true solution. This is the oldest known method for numerical approximation and, as we saw previously, it converges to the solution; however, the convergence is very slow and the method is not frequently used in practice. Looking at the Taylor expansion one finds the error is proportional to the step size. Hence, to increase the accuracy by a factor of ten we need ten times the number of steps. If the step size is not sufficiently small the computed solutions can also be numerically unstable, and it can be challenging to find the step size one needs.

1.8.2. Midpoint method. The Midpoint method is an improved version of Euler's method. Let h be the step size for the partition, and consider x' = f(t, x) with x(t_0) = x_0. Let

x_{n+1} = x_n + h f(t_n + h/2, x_n + (h/2) f(t_n, x_n))  for n = 0, 1, 2, ...

It can be shown that this method is faster, but still not dramatically so.

1.8.3. Runge-Kutta. This method was developed around 1900; it is significantly faster than the previous two, and variants of it are still used in computation. In this case we let

k_1 = f(t_n, x_n)
k_2 = f(t_n + h/2, x_n + (h/2) k_1)
k_3 = f(t_n + h/2, x_n + (h/2) k_2)
k_4 = f(t_n + h, x_n + h k_3)

and let

x_{n+1} = x_n + (h/6)(k_1 + 2k_2 + 2k_3 + k_4).

This should look similar to Simpson's rule. In this case one can show the error is roughly of order h^4, so the accuracy increases significantly as the step size decreases.
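A short side-by-side sketch (not part of the original notes) of the three methods of this section on the test problem x' = -x, x(0) = 1. The error at t = 1 against the exact value e^{-1} shrinks roughly like h, h^2, and h^4 as the step size decreases, matching the orders discussed above.

import numpy as np

def step_euler(f, t, x, h):
    return x + h * f(t, x)

def step_midpoint(f, t, x, h):
    return x + h * f(t + h / 2, x + (h / 2) * f(t, x))

def step_rk4(f, t, x, h):
    k1 = f(t, x)
    k2 = f(t + h / 2, x + (h / 2) * k1)
    k3 = f(t + h / 2, x + (h / 2) * k2)
    k4 = f(t + h, x + h * k3)
    return x + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate(step, f, x0, h, T=1.0):
    # Step from t = 0 to t = T with fixed step size h.
    x, t = x0, 0.0
    for _ in range(int(round(T / h))):
        x, t = step(f, t, x, h), t + h
    return x

f = lambda t, x: -x
for h in (0.1, 0.05, 0.01):
    errs = [abs(integrate(s, f, 1.0, h) - np.exp(-1.0))
            for s in (step_euler, step_midpoint, step_rk4)]
    print(h, errs)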

16 12 1. INTRODUCTION AND BASIC CONCEPTS 1.9. Exercises 1.1. Prove the Cauchy-Schwarz inequality v w v 2 w 2 where v, w R n and v w is the dot product Use the Cauchy-Schwarz inequality to show that x + y 2 x 2 + y Find constants A and B such that A x 1 x B x 1. For n = 2 graph the set {x R 2 : x 1 1} and {x R 2 : x 1} Prove the Contraction Mapping Theorem Prove that the relation of equivalent norms is an equivalence relation Show by example that Ascoli s Theorem is false when each of the hypothesis are removed individually from the statement. So remove one at a time the following: a. The interval I is bounded. b. The sequence of functions {f m } is equicontinuous. c. The sequence {f m (t)} is bounded for every t I Consider the initial value problem x = t 2 + x 2 and x(0) = 0 on R 2. Find the longest interval on which the proof of Peano s theorem guarantees a solution Let D = R 2 and solve the initial value problem x = 1 + x 2, x(0) = Suppose that f(t, x) is continuous and bounded on R d. Show that the initial value problem has a solution defined on an arbitrarily long interval for any initial condition Show that if f is locally Lipschitz on an open set D, then it is continuous on D. (Prove this from the definition.) Suppose f(t, x) is locally Lipschitz with respect to x on R 2 and there exists some a < b such that f(t, a) = f(t, b) = 0 for all t. Show that if a < ξ < b, then the solution of x = f(t, x), x(τ) = ξ has a maximal solution defined over all of R Consider x = f(t, x) on R d+1. Suppose for every compact interval I of R there exists a positive real number M I such that f(t, x) x < 0 whenever x M I and t I. Prove that every maximal solution is defined on an interval of the form a < t < For the α defined in Peano s theorem prove the following: Theorem (Picard-Lindelöf) Let f be locally Lipschitz on D and consider x = f(t, x) such that x(τ) = ξ for (τ, ξ) D. Then the sequence ϕ 0 (t) = ξ ϕ m (t) = ξ + t τ f(s, ϕ m 1(s))ds is defined on I = (τ α, τ + α) and the sequence ϕ m converges uniformly on I to the solution of the initial value problem Find the function x(t, τ, ξ) and its domain for each of the following: (1) x = t 2 /x 2 on D = {(t, x) : x > 0}. (2) x = x/t 2 on D = {(t, x) : t > 0}. (3) x = t 2 /[x 2 (1 t 3 )] on D = {(t, x) : t < 1 and x > 0} Find x(t, τ, ξ, µ) for x = x cos(µt) Consider the simple scalar differential equation x = 3x 2 with the condition x(0) = 1. (1) Use separation of variables to find the solution of this initial value problem. (2) Use Euler s method to approximate the solution to this initial value problem for 0 t 1 with h = 0.1, 0.05, 0.01, and (3) Use Runga-Kutta to approximate the solution to this initial value problem for 0 t 1 with h = 0.1, 0.05, 0.01, and (4) Compare the three solutions with the error bounds given in Section 1.8.

17 CHAPTER 2 Linear Differential Equations In this chapter we study properties of linear ODEs. Although many ODEs are nonlinear they often have local linearizations. These linearizations have been used historically to study nonlinear behavior. The simplest case will be linear constant coefficient systems. We will see that in this case the solutions can be described quite easily Basic properties As a reminder a function T : R n R m is linear if T (αx + βy) = αt (x) + βt (y) for all α, β R and x, y R n. From linear algebra we know that any linear function can be represented as T (x) = Ax once a basis of vectors have been chosen. Now let x = A(t)x where A(t) is a matrix for each t and the matrix varies continuously in time (so each entry is a continuous function). Let ϕ 1 and ϕ 2 be solutions and a, b R. Then (aϕ 1 + bϕ 2 ) = aϕ 1 + bϕ 2 = aa(t)ϕ 1 + ba(t)ϕ 2 = A(t)(aϕ 1 + bϕ 2 ). So aϕ 1 + bϕ 2 is a solution. We will show that any linear ODE is of this form. We now review some facts on norms of matrices. Proposition 2.1. If a and b are norms of R n and R m respectively, then A = sup{ A a : x b = 1} defines a norm on the space of m n matrices and Ax a A x b. Proof. First, suppose that A = 0. By definition this is equivalent to Ax = 0 for all x b = 1. This implies that A is the zero matrix (to see this use the basis vectors for R m ). Now suppose the norm is nonzero. For α R we have Let B be an m n matrix. Then αa = sup{ αax a : x b = 1} = sup{ A(αx) a : αx b = 1} = sup{ α Ax a : x b = 1} = α A. A + B = sup{ (A + B)x a : x b = 1} sup{ Ax a + Bx a : x b = 1} sup{ Ax a : x b = 1} + sup{ Bx a : x b = 1} = A + B. Therefore, this defines a norm on matrices. Let x 0. Then x Ax a = x b A( ) a x b A. x b We will look at n n matrices and assume that a = b so we only have one norm on the Euclidean space. Proposition 2.2. Let a be a norm on R n and A and B be n n matrices. Then AB A B. 13

18 14 2. LINEAR DIFFERENTIAL EQUATIONS Proof. Let x R n. From the previous proposition we know that Ax a A x a. So If x a = 1, then we have Hence, ABx a A B x a. ABx a A B. AB A B. Theorem 2.3. Let A : I M n (R) and h : I R n be continuous on I. Then for all (τ, ξ) I R n the initial value problem { x = A(t)x + h(t) has a unique solution defined for all t I. x(τ) = ξ Proof. Let K be a compact subset of I R n. Then A(t) is bounded on K by some M > 0. Then A(t)p + h(t) (A(t)q + h(t)) A(t) p q M p q for all p, q R n. Hence, f(t, x) = A(t)x + h(t) is locally Lipschitz and so there exists a unique solution for each initial condition. To show the solution exists for all t I we revisit Gronwall s Inquality. Let I = (c, d) and ϕ(t) be a maximal solution such that ϕ(τ) = ξ where ϕ is defined for some interval I = (a, b) where τ I. Notice ϕ(t) = ϕ(τ) + t τ A(s)ϕ(s) + h(s)ds. Suppose that b < d, then there exists some C 1, C 2 > 0 such that A(t) C 1 and h(t) < C 2 for all t [τ, b]. Hence, ϕ(t) ξ + C 2 (d c) + for t (τ, b). Using Gronwall s inequality we then have and ϕ(t) is contained in t τ C 1 ϕ(s) ds ϕ(t) ( ξ + C 2 (d c))e C1(b a) {(t, x) : a t b and x ( ξ + C 2 (d c))e C1(b a) }. So the solution can be continued since it doesn t approach the boundary of the domain for the function, a contradiction. Hence, the solution exists over the entire interval I Fundamental matrices We know that the solutions to x = A(t)x form a vector space (it is left to the reader to verify the other properties for a vector space) and for each initial condition there is a unique solution. What can we say about the vector space of solutions? Let V be the set of solutions to x = A(t)x and τ I (where I is the interval for which A(t) varies continuously). For e 1,..., e n the standard basis vectors to R n and ϕ i a solution such that ϕ i (τ) = e i. Then the solutions ϕ 1,..., ϕ n are unique at τ and linearly independent at τ. let ψ(t) be another solution. Then ψ(τ) = ξ = n i=1 a 1e i for some a 1,..., a n R. So ψ(τ) = n i=1 a 1ϕ i (τ) and since the solutions form a vector space we have ψ(t) = n i=1 a iϕ i (t) for all t I. Thus, ϕ 1,..., ϕ n form a basis for V and V is n-dimensional. Now let X(t) = [ ϕ 1 (t) ϕ n (t) ]. For each t I we know that X(t) M n (R). This is called a fundamental matrix solution. Notice that X (t) = [ ϕ 1(t) ϕ n(t) ] = [ ] A(t)ϕ 1 A(t)ϕ n = A(t)X(t). So we will often look at the matrix differential equation (2.1) X = A(t)X

19 2.2. FUNDAMENTAL MATRICES 15 where X : I M n (R). Claim 2.4. The following hold for (2.1). a. X(t) is a solution if and only if every column is a solution of x = A(t)x. b. Maximal solutions are defined over I. c. If X(t) is a solution, then X(t)v is a solutions of x = A(t)x for all v R n. d. If X(t) is a solution, then X(t)B is a solution of X = A(t)X for all B M n (R). Proof. Let X = [ ] x 1 x n. Then A(t)X = [ ] [ A(t)x 1 A(t)x n = x 1 x n]. So the first result follows. Now the second holds since the solutions x 1,..., x n exist for all i. For the third result notice that X(t)v = n i=1 v ix i (t) and v i x i (t) solves x i = A(t)x i for each i. Lastly, we see that X(t)B = [ ] [ ] X(t)b 1 X(t)b n where B = b1 b n and the result follows from the first and third results. Proposition 2.5. Let X(t) be a solutions to (2.1). Then the determinant of X(t) either vanishes for all t I or is never zero on I. Proof. Suppose there exists some τ I such that det X(τ) = 0. So there exists some v R n {0} such that X(τ)v = 0. Let ψ(t) = X(t)v. Then ψ(t) is a solution to x = A(t)x and ψ(τ) = 0. We know that ϕ(t) 0 is a solution to x = A(t)x where ϕ(τ) = 0. By uniqueness this implies that ψ(t) = ϕ(t) and X(t)v 0 for all t I and det X(t) = 0 for all t I. Definition 2.6. A fundamental matrix X(t) is a solution to (2.1) such that det X(t) 0 for some t I. Theorem 2.7. If A(t) is continuous on I, then a fundamental matrix solution to (2.1) exists. If X(t) is a fundamental matrix solution, then X(t)[X(τ) 1 ]ξ is the unique solution to x = A(t)x where x(τ) = ξ. Furthermore, x(t, τ, ξ) = X(t)[X(τ)] 1 ξ and D f = I I R n. Proof. The first part of the theorem follows from our previous results. Let X(t) be a fundamental matrix solution and ϕ(t) = X(t)[X(τ)] 1 ξ. Then ϕ(t) is a solution to x = A(t)x since [X(τ)] 1 ξ R n. Also ϕ(τ) = ξ so ϕ(t) = X(t)[X(τ)] 1 ξ = x(t, τ, ξ) and D f = I I R n. Remark 2.8. The solutions ϕ 1,..., ϕ n in V form a basis if and only if for some τ I the vectors ϕ 1 (τ),..., ϕ n (τ) form a basis in R n. Definition 2.9. If X(t) is a fundamental matrix and s I, then X(t, s) = X(t)(X(s)) 1 is the principle fundamental matrix at s I. So there are infinitely many fundamental matrices. Now we have X(s, s) = Id and the principle fundamental matrix is unique. Theorem Let A : I M n (R) and h : I R n be continuous on I. Then the solution to the initial value problem x = A(t)x + h(t) where x(τ) = ξ is given by x(t, τ, ξ) = X(t, τ)ξ + where X(t, s) is the principle matrix solution. t τ X(t, s)h(s)ds Proof. Let X(t) be a fundamental matrix for (2.1). Suppose we want a solution satisfying the initial value problem of the form ϕ(t) = X(t)v(t) for some function v(t) : I R n. Then ϕ (t) = X (t)v(t) + X(t)v (t) = A(t)X(t)v(t) + h(t). So we want X(t)v (t) = h(t) and v (t) = X(t) 1 h(t). Hence, v(t) = C + t τ X(s) 1 h(s)ds and ϕ(t) = X(t)C + The initial conditions imply that C = X(τ) 1 ξ. This last result is useful in obtaining solutions. t τ X(t)[X(s)] 1 h(s)ds.
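For constant A the principal matrix solution is X(t, s) = e^{A(t-s)} (the matrix exponential is developed in Chapter 3), so the formula above can be checked numerically. The sketch below is an illustration with an arbitrary constant matrix and forcing term, comparing the formula (evaluated with a trapezoid quadrature) against a direct numerical integration.

import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

A = np.array([[0.0, 1.0], [-2.0, -3.0]])              # illustrative constant matrix
h = lambda t: np.array([np.sin(t), 0.0])              # illustrative forcing term
xi, tau, T = np.array([1.0, 0.0]), 0.0, 2.0

def x_formula(t, n_quad=2001):
    # x(t) = X(t, tau) xi + integral_tau^t X(t, s) h(s) ds, with X(t, s) = expm(A (t - s)).
    s = np.linspace(tau, t, n_quad)
    vals = np.array([expm(A * (t - si)) @ h(si) for si in s])
    return expm(A * (t - tau)) @ xi + np.trapz(vals, s, axis=0)

sol = solve_ivp(lambda t, x: A @ x + h(t), (tau, T), xi, rtol=1e-10, atol=1e-12)
print(np.max(np.abs(x_formula(T) - sol.y[:, -1])))    # agreement up to quadrature error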

20 16 2. LINEAR DIFFERENTIAL EQUATIONS Theorem (Abel s Formula) Let X(t) be a fundamental matrix. Then for all τ I we have det X(t) = [det X(τ)]e t τ TrA(s)ds. Proof. Notice that it is sufficient to prove that det X(t) is a solution to y = [TrA(t)]y. Since this implies that y = ce t τ TrA(s)ds where c is det X(τ). A straight forward computation shows that d det X(t) = TrA(t) det X(t). dt 2.3. Higher order linear equations As mentioned before higher order equations of the form x n +a n (t)x n 1 + +a 1 (t) = g(t) can be formed into a first order equation where A(t) = a 1 (t) a 2 (t) a 3 (t) a 4 (t) a n (t) and We then have (as we did before) x 1 = x x 1 = x 2 x 2 = x 3 0 h(t) =. 0. g(t). x n = a 1 (t)x 1 a 2 (t)x 2 a n (t)x n + g(t) Remark A function ϕ(t) is a solution to x n + a n (t)x n a 1 (t) = g(t) if and only if ϕ(t) is a solution to the higher order linear equation for ϕ(t) ϕ (t)... ϕ n 1 (t) Theorem Let a i (t) for 1 i n and g(t) be continuous on I. Given τ I and x 0,..., x n 1 R there exists a unique solution to x n + a n (t)x n a 1 (t)x = g(t) defined on I such that x(τ) = x 0 and x i (τ) = x i for 1 i n 1. This theorem follows directly from the previous theorems as it is just a special case so could be classified as a corollary. If g(t) = 0, then the system is homogeneous and solutions form a vector space. Definition Let ϕ 1,..., ϕ n be (n 1) times differentiable solutions of x n +a n (t)x n 1 + +a 1 (t)x = g(t). The Wronskian is ϕ 1 ϕ n ϕ 1 ϕ n W (ϕ 1,..., ϕ n ) = det... ϕ n 1 1 ϕ n 1 n
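A quick check of Abel's formula in the second-order case (an illustration, not from the notes): writing y'' + p y' + q y = 0 as a first-order system gives Tr A = -p, so the Wronskian of two solutions satisfies W(t) = W(0) e^{-pt}. Below p = 3 and q = 2, with independent solutions e^{-t} and e^{-2t}.

import sympy as sp

t = sp.symbols("t")
phi1, phi2 = sp.exp(-t), sp.exp(-2 * t)               # solutions of y'' + 3y' + 2y = 0
W = sp.Matrix([[phi1, phi2],
               [sp.diff(phi1, t), sp.diff(phi2, t)]]).det()
# Abel's formula predicts W(t) = W(0) * exp(-3 t), since Tr A = -p = -3.
print(sp.simplify(W - W.subs(t, 0) * sp.exp(-3 * t)))  # 0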

21 2.4. COMPLEX LINEAR EQUATIONS AND VARIATION OF PARAMETERS 17 We will show that the Wronskian relates to the fundamental matrix. Proposition If there exists some τ I such that W (ϕ 1,..., ϕ n )(τ) 0, then ϕ 1,..., ϕ n are linearly independent in the vector space of real-valued functions on I. Proof. Let c 1 ϕ 1 + c n ϕ n = 0 on I where c 1,..., c n R. Then for τ I we know ϕ 1 ϕ n c 1 ϕ 1 ϕ n c 2... = 0. ϕ1 n 1 ϕ n 1 n c n Since W (τ) 0 we know that c 1 = = c n = 0 and ϕ 1,..., ϕ n are linearly independent. The converse is in general false. Theorem Solutions of x n + a n (t)x n a 1 (t)x = 0 form a vector space of dimension n. Moreover, if {ϕ 1,..., ϕ n } is a set of n solutions of x n + a n (t)x n a 1 (t)x = 0, then the following are equivalent: a. W (ϕ 1,..., ϕ n )(τ) 0 for some τ I. b. {ϕ 1,..., ϕ n } is a basis for the vector space of solutions to x n + a n (t)x n a 1 (t)x = 0. c. W (ϕ 1,..., ϕ n )(t) 0 for all t I. Proof. If X(t) = ϕ 1 ϕ n.. ϕ n 1 1 ϕ n 1 n is a fundamental matrix, then W (ϕ 1,.., ϕ n )(t) = det X(t) 0 so the solutions form a vector space of dimension n. For a. implies b. notice from the previous result that we know the solutions are linearly independent and of dimension n so form a basis. For b. implies c. Let ϕ 1,..., ϕ n be a basis and X(t) = ϕ 1 ϕ n.. ϕ n 1 1 ϕ n 1 n If this is not a fundamental matrix there exists some v 0 such that X(t)v 0 for all t I. So these are not linear independent. Hence, X(t) is not a fundamental matrix and det X(t) = W (ϕ 1,..., ϕ n ) 0 for all t I. c. implies a. is trivial Complex linear equations and variation of parameters Even when we start with a real valued matrix we may have complex eigenvalues. So it can be useful to examine (2.2) z = A(t)z + h(t) where z C n, A(t) M n (C) and h(t) C n for all t I. The main idea in this section is that all of the results we have proven apply in this setting. Notice each entry of A(t) can be rewritten as a jk (t) = α jk (t) + iβ jk (t) where α jk, β jk : I R. Similarly, h j (t) = γ j (t) + iδ j (t) where γ j, δ j : I R for all 1 j n. Let α 11 β 11 α 1n β in β 11 α 11 β 1n α 1n B(t) =.... α n1 β n1 α nn β nn β n1 α n1 β nn α nn

22 18 2. LINEAR DIFFERENTIAL EQUATIONS and Then is a solution to (2.2) if and only if is a solution to We then have the following result. γ 1 δ 1 g(t) =.. γ n δ n ϕ(t) = (u 1 (t) + iv 1 (t),..., u n (t) + iv n (t)) ψ(t) = (u 1, v 1,..., u n, v n ) x = B(t)x + g(t). Theorem Let A : I M n (C) and h : I C n be continuous functions on an interval I of R. For any (τ, ξ) I C n the initial value problem z = A(t)z + h(t) where z(τ) = ξ has a unique solution that can be continued on all of I. We also know that the solutions to z = A(t)z form a vector space and if ψ(t) is a particular solution to (2.2) then the set of solutions is ψ + ϕ where ϕ is a solution to z = A(t)z. Proposition Let A : I M n (R) and h : I R n be continuous on an interval I R. Let ϕ be a solution to the complex equation z = A(t)z + h(t). Then ϕ is real valued if and only if for some τ I the imaginary parts of ϕ(τ) are all zero. Proof. Let x(t) be a unique solution to x = A(t)x where x(τ) = ξ R. Then x(t) is a solution to z = A(t)z + h(t). Now suppose that there exists some τ I and ξ R n such that ϕ(τ) = ξ + i0 where ϕ(t) is a solution to z = A(t)z + h(t). Then from the previous sections if we look at the equation x = B(t)x + g(t) we see that the solution corresponding to the imaginary part is constantly zero. So ϕ(t) is real valued Variation of Paramters. We now state a general method to find solutions. We will write the result for real valued equations, but as we just saw the formula can also hold for complex valued equations. Theorem Let b : R n R n be continuous for (t 0, x 0 ) R n R n. Then solutions to { x = A(t)x + h(t) are given by x(t 0 ) = x 0 t x(t) = X(t)[X(t 0 )] 1 x 0 + X(t)[X(s)] 1 h(s)ds t 0 where X(t) is a fundamental solution of equations to x = A(t)x and the integral is computed component by component. Proof. We know solutions will exist and be unique. Also, x (t) = x (t)[x(t 0 )] 1 x 0 + t t 0 X (t)[x(s)] 1 h(s)ds + X(t)[X(t)] 1 h(t) = A(t)X(t)[X(t 0 )] 1 x 0 + A(t) t t 0 X(t)[X(s)] 1 h(s)ds + h(t) = A(t)x(t) + h(t).
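Returning to the realification at the start of this section: the block matrix B(t) displayed there is garbled in this transcription, but the standard construction replaces each complex entry a_jk = α_jk + iβ_jk by the 2 x 2 block [[α_jk, -β_jk], [β_jk, α_jk]] and interleaves the real and imaginary parts of z. A numerical sketch under that reading, for a constant complex matrix:

import numpy as np
from scipy.linalg import expm

A = np.array([[1.0 + 2.0j, 0.5j], [-0.3, 0.2 - 1.0j]])   # illustrative complex matrix
n = A.shape[0]
B = np.zeros((2 * n, 2 * n))
for j in range(n):
    for k in range(n):
        a, b = A[j, k].real, A[j, k].imag
        B[2 * j:2 * j + 2, 2 * k:2 * k + 2] = [[a, -b], [b, a]]

zeta = np.array([1.0 - 1.0j, 0.25 + 0.5j])
x0 = np.column_stack([zeta.real, zeta.imag]).ravel()     # ordering (u1, v1, u2, v2)
t = 0.7
z = expm(A * t) @ zeta                                   # solution of z' = Az
x = expm(B * t) @ x0                                     # solution of x' = Bx
print(np.max(np.abs(x[0::2] + 1j * x[1::2] - z)))        # should be ~0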

23 2.5. EXERCISES Exercises 2.1. Let x(t) be a solution to x = A(t)x + h(t) where A(t) and h(t) are continuous on an open interval I = (0, ). Prove that x(t) is bounded for t 1 if both A(t) dt < and h(t) dt < Let A m be a sequence of invertible real n n matrices such that A m is a bounded sequence. (1) Show that if one of the row of A m goes to 0 as m goes to infinity, then det A m goes to 0 as m goes to infinity. (2) Show that there exists a positive real number α such that A 1 m α for all m Show that the definition of the principal matrix solution defined by the equation X(t, s) = X(t)[X(s)] 1 is independent of the fundamental matrix X(t) and that 2.4. Consider x = A(t)x + g(t) where Verify that A = X(t, s)x(s, τ) = X(t, τ). [ ] X(t) = and g(t) = [ ] e 3t te 3t 0 e 3t [ ] sin t. cos t is a fundamental matrix solution of x = Ax. Find a solution of the initial value problem [ ] x 1 = A(t)x + g(t) and x(0) = Suppose A : (0, ) M n (R) is continuous. Prove the following: If 1 TrA(t)dt = then there exists a solution x(t) of x = A(t)x such that x(t) is unbounded for t Consider X = A(t)X on the interval 0 < t < where A(t) = t 3 6t 2 3t 1 (1) Show that t3 t 2 t 3t 2 2t 1 6t 2 0 is a fundamental matrix solution of X = A(t)X. (2) Calculate X(t, s). (3) Use X(t, s) to solve the third-order initial value problem d 3 y dt 3 3 d 2 y t dt dy t 2 dt 6 t 3 y = 0 where y(1) = 1, y (1) = 2, y (1) = Let p(t), q(t), and g(t) be continuous real-valued functions on an open interval I and suppose that ϕ 1 (t) and ϕ 2 (t) are linearly independent solutions of the second-order scalar equation y + p(t)y + q(t)y = 0. Derive and explicit formula for a particular solution of y + p(t)y + q(t)y = g(t) Let A : I M n (R) be a continuous function on the open interval I. Show that z(t) = (u 1 (t) + iv 1 (t),..., u n (t) + iv n (t)) where u j (t) and v j (t) are differentiable real-valued functions on I for 1 j n is a solution of the complex differential equation z = A(t)z if and only if u(t) = (u 1 (t),..., u n (t) and v(t) = (v 1 (t),..., v n (t)) are solutions of the real differential equation x = A(t)x.


Real Analysis Problems

Real Analysis Problems Real Analysis Problems Cristian E. Gutiérrez September 14, 29 1 1 CONTINUITY 1 Continuity Problem 1.1 Let r n be the sequence of rational numbers and Prove that f(x) = 1. f is continuous on the irrationals.

More information

Applied Differential Equation. November 30, 2012

Applied Differential Equation. November 30, 2012 Applied Differential Equation November 3, Contents 5 System of First Order Linear Equations 5 Introduction and Review of matrices 5 Systems of Linear Algebraic Equations, Linear Independence, Eigenvalues,

More information

Summary of topics relevant for the final. p. 1

Summary of topics relevant for the final. p. 1 Summary of topics relevant for the final p. 1 Outline Scalar difference equations General theory of ODEs Linear ODEs Linear maps Analysis near fixed points (linearization) Bifurcations How to analyze a

More information

Real Analysis Math 131AH Rudin, Chapter #1. Dominique Abdi

Real Analysis Math 131AH Rudin, Chapter #1. Dominique Abdi Real Analysis Math 3AH Rudin, Chapter # Dominique Abdi.. If r is rational (r 0) and x is irrational, prove that r + x and rx are irrational. Solution. Assume the contrary, that r+x and rx are rational.

More information

Old Math 330 Exams. David M. McClendon. Department of Mathematics Ferris State University

Old Math 330 Exams. David M. McClendon. Department of Mathematics Ferris State University Old Math 330 Exams David M. McClendon Department of Mathematics Ferris State University Last updated to include exams from Fall 07 Contents Contents General information about these exams 3 Exams from Fall

More information

DYNAMICAL SYSTEMS AND NONLINEAR PARTIAL DIFFERENTIAL EQUATIONS

DYNAMICAL SYSTEMS AND NONLINEAR PARTIAL DIFFERENTIAL EQUATIONS DYNAMICAL SYSTEMS AND NONLINEAR PARTIAL DIFFERENTIAL EQUATIONS J. Banasiak School of Mathematical Sciences University of KwaZulu-Natal, Durban, South Africa 2 Contents 1 Solvability of ordinary differential

More information

Metric Spaces and Topology

Metric Spaces and Topology Chapter 2 Metric Spaces and Topology From an engineering perspective, the most important way to construct a topology on a set is to define the topology in terms of a metric on the set. This approach underlies

More information

NATIONAL BOARD FOR HIGHER MATHEMATICS. Research Scholarships Screening Test. Saturday, February 2, Time Allowed: Two Hours Maximum Marks: 40

NATIONAL BOARD FOR HIGHER MATHEMATICS. Research Scholarships Screening Test. Saturday, February 2, Time Allowed: Two Hours Maximum Marks: 40 NATIONAL BOARD FOR HIGHER MATHEMATICS Research Scholarships Screening Test Saturday, February 2, 2008 Time Allowed: Two Hours Maximum Marks: 40 Please read, carefully, the instructions on the following

More information

Second Order Linear Equations

Second Order Linear Equations October 13, 2016 1 Second And Higher Order Linear Equations In first part of this chapter, we consider second order linear ordinary linear equations, i.e., a differential equation of the form L[y] = d

More information

Ordinary Differential Equations. Raz Kupferman Institute of Mathematics The Hebrew University

Ordinary Differential Equations. Raz Kupferman Institute of Mathematics The Hebrew University Ordinary Differential Equations Raz Kupferman Institute of Mathematics The Hebrew University June 26, 2012 2 Contents 1 Existence and uniqueness 3 1.1 Introduction................................ 3 1.2

More information

On the Ψ - Exponential Asymptotic Stability of Nonlinear Lyapunov Matrix Differential Equations

On the Ψ - Exponential Asymptotic Stability of Nonlinear Lyapunov Matrix Differential Equations DOI: 1.2478/awutm-213-12 Analele Universităţii de Vest, Timişoara Seria Matematică Informatică LI, 2, (213), 7 28 On the Ψ - Exponential Asymptotic Stability of Nonlinear Lyapunov Matrix Differential Equations

More information

1+t 2 (l) y = 2xy 3 (m) x = 2tx + 1 (n) x = 2tx + t (o) y = 1 + y (p) y = ty (q) y =

1+t 2 (l) y = 2xy 3 (m) x = 2tx + 1 (n) x = 2tx + t (o) y = 1 + y (p) y = ty (q) y = DIFFERENTIAL EQUATIONS. Solved exercises.. Find the set of all solutions of the following first order differential equations: (a) x = t (b) y = xy (c) x = x (d) x = (e) x = t (f) x = x t (g) x = x log

More information

FIXED POINT METHODS IN NONLINEAR ANALYSIS

FIXED POINT METHODS IN NONLINEAR ANALYSIS FIXED POINT METHODS IN NONLINEAR ANALYSIS ZACHARY SMITH Abstract. In this paper we present a selection of fixed point theorems with applications in nonlinear analysis. We begin with the Banach fixed point

More information

The first order quasi-linear PDEs

The first order quasi-linear PDEs Chapter 2 The first order quasi-linear PDEs The first order quasi-linear PDEs have the following general form: F (x, u, Du) = 0, (2.1) where x = (x 1, x 2,, x 3 ) R n, u = u(x), Du is the gradient of u.

More information

Newtonian Mechanics. Chapter Classical space-time

Newtonian Mechanics. Chapter Classical space-time Chapter 1 Newtonian Mechanics In these notes classical mechanics will be viewed as a mathematical model for the description of physical systems consisting of a certain (generally finite) number of particles

More information

WELL-POSEDNESS FOR HYPERBOLIC PROBLEMS (0.2)

WELL-POSEDNESS FOR HYPERBOLIC PROBLEMS (0.2) WELL-POSEDNESS FOR HYPERBOLIC PROBLEMS We will use the familiar Hilbert spaces H = L 2 (Ω) and V = H 1 (Ω). We consider the Cauchy problem (.1) c u = ( 2 t c )u = f L 2 ((, T ) Ω) on [, T ] Ω u() = u H

More information

2. Metric Spaces. 2.1 Definitions etc.

2. Metric Spaces. 2.1 Definitions etc. 2. Metric Spaces 2.1 Definitions etc. The procedure in Section for regarding R as a topological space may be generalized to many other sets in which there is some kind of distance (formally, sets with

More information

1. Let A R be a nonempty set that is bounded from above, and let a be the least upper bound of A. Show that there exists a sequence {a n } n N

1. Let A R be a nonempty set that is bounded from above, and let a be the least upper bound of A. Show that there exists a sequence {a n } n N Applied Analysis prelim July 15, 216, with solutions Solve 4 of the problems 1-5 and 2 of the problems 6-8. We will only grade the first 4 problems attempted from1-5 and the first 2 attempted from problems

More information

NOTES ON LINEAR ODES

NOTES ON LINEAR ODES NOTES ON LINEAR ODES JONATHAN LUK We can now use all the discussions we had on linear algebra to study linear ODEs Most of this material appears in the textbook in 21, 22, 23, 26 As always, this is a preliminary

More information

SPRING 2006 PRELIMINARY EXAMINATION SOLUTIONS

SPRING 2006 PRELIMINARY EXAMINATION SOLUTIONS SPRING 006 PRELIMINARY EXAMINATION SOLUTIONS 1A. Let G be the subgroup of the free abelian group Z 4 consisting of all integer vectors (x, y, z, w) such that x + 3y + 5z + 7w = 0. (a) Determine a linearly

More information

Functional Analysis. Franck Sueur Metric spaces Definitions Completeness Compactness Separability...

Functional Analysis. Franck Sueur Metric spaces Definitions Completeness Compactness Separability... Functional Analysis Franck Sueur 2018-2019 Contents 1 Metric spaces 1 1.1 Definitions........................................ 1 1.2 Completeness...................................... 3 1.3 Compactness......................................

More information

Math 140A - Fall Final Exam

Math 140A - Fall Final Exam Math 140A - Fall 2014 - Final Exam Problem 1. Let {a n } n 1 be an increasing sequence of real numbers. (i) If {a n } has a bounded subsequence, show that {a n } is itself bounded. (ii) If {a n } has a

More information

Additional Homework Problems

Additional Homework Problems Additional Homework Problems These problems supplement the ones assigned from the text. Use complete sentences whenever appropriate. Use mathematical terms appropriately. 1. What is the order of a differential

More information

I forgot to mention last time: in the Ito formula for two standard processes, putting

I forgot to mention last time: in the Ito formula for two standard processes, putting I forgot to mention last time: in the Ito formula for two standard processes, putting dx t = a t dt + b t db t dy t = α t dt + β t db t, and taking f(x, y = xy, one has f x = y, f y = x, and f xx = f yy

More information

THE INVERSE FUNCTION THEOREM

THE INVERSE FUNCTION THEOREM THE INVERSE FUNCTION THEOREM W. PATRICK HOOPER The implicit function theorem is the following result: Theorem 1. Let f be a C 1 function from a neighborhood of a point a R n into R n. Suppose A = Df(a)

More information

The Arzelà-Ascoli Theorem

The Arzelà-Ascoli Theorem John Nachbar Washington University March 27, 2016 The Arzelà-Ascoli Theorem The Arzelà-Ascoli Theorem gives sufficient conditions for compactness in certain function spaces. Among other things, it helps

More information

Lecture 4: Numerical solution of ordinary differential equations

Lecture 4: Numerical solution of ordinary differential equations Lecture 4: Numerical solution of ordinary differential equations Department of Mathematics, ETH Zürich General explicit one-step method: Consistency; Stability; Convergence. High-order methods: Taylor

More information

The goal of this chapter is to study linear systems of ordinary differential equations: dt,..., dx ) T

The goal of this chapter is to study linear systems of ordinary differential equations: dt,..., dx ) T 1 1 Linear Systems The goal of this chapter is to study linear systems of ordinary differential equations: ẋ = Ax, x(0) = x 0, (1) where x R n, A is an n n matrix and ẋ = dx ( dt = dx1 dt,..., dx ) T n.

More information

APPM 2360: Final Exam 10:30am 1:00pm, May 6, 2015.

APPM 2360: Final Exam 10:30am 1:00pm, May 6, 2015. APPM 23: Final Exam :3am :pm, May, 25. ON THE FRONT OF YOUR BLUEBOOK write: ) your name, 2) your student ID number, 3) lecture section, 4) your instructor s name, and 5) a grading table for eight questions.

More information

MIDTERM REVIEW AND SAMPLE EXAM. Contents

MIDTERM REVIEW AND SAMPLE EXAM. Contents MIDTERM REVIEW AND SAMPLE EXAM Abstract These notes outline the material for the upcoming exam Note that the review is divided into the two main topics we have covered thus far, namely, ordinary differential

More information

Solution of Linear State-space Systems

Solution of Linear State-space Systems Solution of Linear State-space Systems Homogeneous (u=0) LTV systems first Theorem (Peano-Baker series) The unique solution to x(t) = (t, )x 0 where The matrix function is given by is called the state

More information

Prof. M. Saha Professor of Mathematics The University of Burdwan West Bengal, India

Prof. M. Saha Professor of Mathematics The University of Burdwan West Bengal, India CHAPTER 9 BY Prof. M. Saha Professor of Mathematics The University of Burdwan West Bengal, India E-mail : mantusaha.bu@gmail.com Introduction and Objectives In the preceding chapters, we discussed normed

More information

Applied Dynamical Systems

Applied Dynamical Systems Applied Dynamical Systems Recommended Reading: (1) Morris W. Hirsch, Stephen Smale, and Robert L. Devaney. Differential equations, dynamical systems, and an introduction to chaos. Elsevier/Academic Press,

More information

Linear Differential Equations. Problems

Linear Differential Equations. Problems Chapter 1 Linear Differential Equations. Problems 1.1 Introduction 1.1.1 Show that the function ϕ : R R, given by the expression ϕ(t) = 2e 3t for all t R, is a solution of the Initial Value Problem x =

More information

The Contraction Mapping Principle

The Contraction Mapping Principle Chapter 3 The Contraction Mapping Principle The notion of a complete space is introduced in Section 1 where it is shown that every metric space can be enlarged to a complete one without further condition.

More information

SPACES ENDOWED WITH A GRAPH AND APPLICATIONS. Mina Dinarvand. 1. Introduction

SPACES ENDOWED WITH A GRAPH AND APPLICATIONS. Mina Dinarvand. 1. Introduction MATEMATIČKI VESNIK MATEMATIQKI VESNIK 69, 1 (2017), 23 38 March 2017 research paper originalni nauqni rad FIXED POINT RESULTS FOR (ϕ, ψ)-contractions IN METRIC SPACES ENDOWED WITH A GRAPH AND APPLICATIONS

More information

Work sheet / Things to know. Chapter 3

Work sheet / Things to know. Chapter 3 MATH 251 Work sheet / Things to know 1. Second order linear differential equation Standard form: Chapter 3 What makes it homogeneous? We will, for the most part, work with equations with constant coefficients

More information

Non-linear wave equations. Hans Ringström. Department of Mathematics, KTH, Stockholm, Sweden

Non-linear wave equations. Hans Ringström. Department of Mathematics, KTH, Stockholm, Sweden Non-linear wave equations Hans Ringström Department of Mathematics, KTH, 144 Stockholm, Sweden Contents Chapter 1. Introduction 5 Chapter 2. Local existence and uniqueness for ODE:s 9 1. Background material

More information

Mathematical Analysis Outline. William G. Faris

Mathematical Analysis Outline. William G. Faris Mathematical Analysis Outline William G. Faris January 8, 2007 2 Chapter 1 Metric spaces and continuous maps 1.1 Metric spaces A metric space is a set X together with a real distance function (x, x ) d(x,

More information

On graph differential equations and its associated matrix differential equations

On graph differential equations and its associated matrix differential equations Malaya Journal of Matematik 1(1)(2013) 1 9 On graph differential equations and its associated matrix differential equations J. Vasundhara Devi, a, R.V.G. Ravi Kumar b and N. Giribabu c a,b,c GVP-Prof.V.Lakshmikantham

More information

Examples include: (a) the Lorenz system for climate and weather modeling (b) the Hodgkin-Huxley system for neuron modeling

Examples include: (a) the Lorenz system for climate and weather modeling (b) the Hodgkin-Huxley system for neuron modeling 1 Introduction Many natural processes can be viewed as dynamical systems, where the system is represented by a set of state variables and its evolution governed by a set of differential equations. Examples

More information

Higher Order Averaging : periodic solutions, linear systems and an application

Higher Order Averaging : periodic solutions, linear systems and an application Higher Order Averaging : periodic solutions, linear systems and an application Hartono and A.H.P. van der Burgh Faculty of Information Technology and Systems, Department of Applied Mathematical Analysis,

More information

M.S.N. Murty* and G. Suresh Kumar**

M.S.N. Murty* and G. Suresh Kumar** JOURNAL OF THE CHUNGCHEONG MATHEMATICAL SOCIETY Volume 19, No.1, March 6 THREE POINT BOUNDARY VALUE PROBLEMS FOR THIRD ORDER FUZZY DIFFERENTIAL EQUATIONS M.S.N. Murty* and G. Suresh Kumar** Abstract. In

More information

Honours Analysis III

Honours Analysis III Honours Analysis III Math 354 Prof. Dmitry Jacobson Notes Taken By: R. Gibson Fall 2010 1 Contents 1 Overview 3 1.1 p-adic Distance............................................ 4 2 Introduction 5 2.1 Normed

More information

Linear ODE s with periodic coefficients

Linear ODE s with periodic coefficients Linear ODE s with periodic coefficients 1 Examples y = sin(t)y, solutions Ce cos t. Periodic, go to 0 as t +. y = 2 sin 2 (t)y, solutions Ce t sin(2t)/2. Not periodic, go to to 0 as t +. y = (1 + sin(t))y,

More information

1. Bounded linear maps. A linear map T : E F of real Banach

1. Bounded linear maps. A linear map T : E F of real Banach DIFFERENTIABLE MAPS 1. Bounded linear maps. A linear map T : E F of real Banach spaces E, F is bounded if M > 0 so that for all v E: T v M v. If v r T v C for some positive constants r, C, then T is bounded:

More information

g 2 (x) (1/3)M 1 = (1/3)(2/3)M.

g 2 (x) (1/3)M 1 = (1/3)(2/3)M. COMPACTNESS If C R n is closed and bounded, then by B-W it is sequentially compact: any sequence of points in C has a subsequence converging to a point in C Conversely, any sequentially compact C R n is

More information

Math 209B Homework 2

Math 209B Homework 2 Math 29B Homework 2 Edward Burkard Note: All vector spaces are over the field F = R or C 4.6. Two Compactness Theorems. 4. Point Set Topology Exercise 6 The product of countably many sequentally compact

More information

FUNCTIONAL ANALYSIS-NORMED SPACE

FUNCTIONAL ANALYSIS-NORMED SPACE MAT641- MSC Mathematics, MNIT Jaipur FUNCTIONAL ANALYSIS-NORMED SPACE DR. RITU AGARWAL MALAVIYA NATIONAL INSTITUTE OF TECHNOLOGY JAIPUR 1. Normed space Norm generalizes the concept of length in an arbitrary

More information

Fundamentals of Linear Algebra. Marcel B. Finan Arkansas Tech University c All Rights Reserved

Fundamentals of Linear Algebra. Marcel B. Finan Arkansas Tech University c All Rights Reserved Fundamentals of Linear Algebra Marcel B. Finan Arkansas Tech University c All Rights Reserved 2 PREFACE Linear algebra has evolved as a branch of mathematics with wide range of applications to the natural

More information

Dynamical Systems as Solutions of Ordinary Differential Equations

Dynamical Systems as Solutions of Ordinary Differential Equations CHAPTER 3 Dynamical Systems as Solutions of Ordinary Differential Equations Chapter 1 defined a dynamical system as a type of mathematical system, S =(X, G, U, ), where X is a normed linear space, G is

More information

This is a closed book exam. No notes or calculators are permitted. We will drop your lowest scoring question for you.

This is a closed book exam. No notes or calculators are permitted. We will drop your lowest scoring question for you. Math 54 Fall 2017 Practice Final Exam Exam date: 12/14/17 Time Limit: 170 Minutes Name: Student ID: GSI or Section: This exam contains 9 pages (including this cover page) and 10 problems. Problems are

More information

Formal Groups. Niki Myrto Mavraki

Formal Groups. Niki Myrto Mavraki Formal Groups Niki Myrto Mavraki Contents 1. Introduction 1 2. Some preliminaries 2 3. Formal Groups (1 dimensional) 2 4. Groups associated to formal groups 9 5. The Invariant Differential 11 6. The Formal

More information

An Undergraduate s Guide to the Hartman-Grobman and Poincaré-Bendixon Theorems

An Undergraduate s Guide to the Hartman-Grobman and Poincaré-Bendixon Theorems An Undergraduate s Guide to the Hartman-Grobman and Poincaré-Bendixon Theorems Scott Zimmerman MATH181HM: Dynamical Systems Spring 2008 1 Introduction The Hartman-Grobman and Poincaré-Bendixon Theorems

More information

21 Linear State-Space Representations

21 Linear State-Space Representations ME 132, Spring 25, UC Berkeley, A Packard 187 21 Linear State-Space Representations First, let s describe the most general type of dynamic system that we will consider/encounter in this class Systems may

More information

Lyapunov Stability Theory

Lyapunov Stability Theory Lyapunov Stability Theory Peter Al Hokayem and Eduardo Gallestey March 16, 2015 1 Introduction In this lecture we consider the stability of equilibrium points of autonomous nonlinear systems, both in continuous

More information

STABILITY. Phase portraits and local stability

STABILITY. Phase portraits and local stability MAS271 Methods for differential equations Dr. R. Jain STABILITY Phase portraits and local stability We are interested in system of ordinary differential equations of the form ẋ = f(x, y), ẏ = g(x, y),

More information

ORDINARY DIFFERENTIAL EQUATIONS

ORDINARY DIFFERENTIAL EQUATIONS ORDINARY DIFFERENTIAL EQUATIONS GABRIEL NAGY Mathematics Department, Michigan State University, East Lansing, MI, 4884 NOVEMBER 9, 7 Summary This is an introduction to ordinary differential equations We

More information

Xinfu Chen. Topics in Differential Equations. Department of mathematics, University of Pittsburgh Pittsburgh, PA 15260, USA

Xinfu Chen. Topics in Differential Equations. Department of mathematics, University of Pittsburgh Pittsburgh, PA 15260, USA Xinfu Chen Topics in Differential Equations Department of mathematics, University of Pittsburgh Pittsburgh, PA 1526, USA June 26, 26 ii Preface This notes is for math 3926, topics in differential equations,

More information

Section 2.8: The Existence and Uniqueness Theorem

Section 2.8: The Existence and Uniqueness Theorem Section 2.8: The Existence and Uniqueness Theorem MATH 351 California State University, Northridge March 10, 2014 MATH 351 (Differetial Equations) Sec. 2.8 March 10, 2014 1 / 26 Theorem 2.4.2 in Section

More information

1 The Existence and Uniqueness Theorem for First-Order Differential Equations

1 The Existence and Uniqueness Theorem for First-Order Differential Equations 1 The Existence and Uniqueness Theorem for First-Order Differential Equations Let I R be an open interval and G R n, n 1, be a domain. Definition 1.1 Let us consider a function f : I G R n. The general

More information

MATH 722, COMPLEX ANALYSIS, SPRING 2009 PART 5

MATH 722, COMPLEX ANALYSIS, SPRING 2009 PART 5 MATH 722, COMPLEX ANALYSIS, SPRING 2009 PART 5.. The Arzela-Ascoli Theorem.. The Riemann mapping theorem Let X be a metric space, and let F be a family of continuous complex-valued functions on X. We have

More information