Introduction to the Numerical Solution of IVP for ODE


Consider the IVP:

$$\text{(DE)}\quad x' = f(t, x), \qquad \text{(IC)}\quad x(a) = x_a.$$

For simplicity, we will assume here that $x(t) \in \mathbb{R}^n$ (so $F = \mathbb{R}$), and that $f(t, x)$ is continuous in $t, x$ and uniformly Lipschitz in $x$ (with Lipschitz constant $L$) on $[a, b] \times \mathbb{R}^n$. So we have global existence and uniqueness for the IVP on $[a, b]$. Moreover, the solution of the IVP $x' = f(t, x)$, $x(a) = x_a$ depends continuously on the initial values $x_a \in \mathbb{R}^n$. This IVP is an example of a well-posed problem: for each choice of the data (here, the initial values $x_a$), we have:

(1) Existence. There exists a solution of the IVP on $[a, b]$.

(2) Uniqueness. The solution, for each given $x_a$, is unique.

(3) Continuous Dependence. The solution depends continuously on the data. Here, e.g., the map $x_a \mapsto x(\cdot, x_a)$ is continuous from $\mathbb{R}^n$ into $(C([a, b]), \|\cdot\|_\infty)$.

A well-posed problem is a reasonable problem to approximate numerically.

Grid Functions

Choose a mesh width $h$ (with $0 < h \le b - a$), and let $N = \lfloor (b-a)/h \rfloor$ (greatest integer $\le (b-a)/h$). Let $t_i = a + ih$ ($i = 0, 1, \ldots, N$) be the grid points in $t$ (note: $t_0 = a$), and let $x_i$ denote the approximation to $x(t_i)$. Note that $t_i$ and $x_i$ depend on $h$, but we will usually suppress this dependence in our notation.

[Figure: the grid $a = t_0, t_1, t_2, \ldots, t_N \le b$, with uniform spacing $h$ between consecutive points.]

Explicit One-Step Methods

Form of method: start with $x_0$ (presumably $x_0 \approx x_a$). Recursively compute $x_1, \ldots, x_N$ by

$$x_{i+1} = x_i + h\,\psi(h, t_i, x_i), \qquad i = 0, \ldots, N-1.$$

Here, $\psi(h, t, x)$ is a function defined for $0 \le h \le b - a$, $a \le t \le b$, $x \in \mathbb{R}^n$, and $\psi$ is associated with the given function $f(t, x)$.

Examples. Euler's Method. Here, $\psi(h, t, x) = f(t, x)$:

$$x_{i+1} = x_i + h f(t_i, x_i).$$

[Figure: one Euler step follows the tangent line to the solution through $(t_i, x_i)$ from $t_i$ to $t_{i+1}$.]
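As a concrete illustration (not part of the original notes), here is a minimal Python sketch of Euler's method on a uniform grid; the test problem $x' = -x$, $x(0) = 1$ and the mesh width are illustrative choices, and we assume $h$ divides $b - a$ evenly.

```python
# Minimal sketch of Euler's method x_{i+1} = x_i + h f(t_i, x_i) on the grid
# t_i = a + i h.  The test problem (x' = -x, x(0) = 1, exact solution
# x(t) = exp(-t)) is an illustrative choice, not one from the notes.
import numpy as np

def euler(f, a, b, x_a, h):
    N = int(round((b - a) / h))      # number of steps; assumes h divides b - a
    t = a + h * np.arange(N + 1)
    x = np.empty(N + 1)
    x[0] = x_a                       # x_0 = x_a
    for i in range(N):
        x[i + 1] = x[i] + h * f(t[i], x[i])
    return t, x

t, x = euler(lambda t, x: -x, 0.0, 1.0, 1.0, 0.1)
print(abs(x[-1] - np.exp(-1.0)))     # error at t = b is roughly O(h)
```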

Taylor Methods. Let $p$ be an integer $\ge 1$. To see how the Taylor Method of order $p$ is constructed, consider the Taylor expansion of a $C^{p+1}$ solution $x(t)$ of $x' = f(t, x)$:

$$x(t+h) = x(t) + h x'(t) + \cdots + \frac{h^p}{p!} x^{(p)}(t) + O(h^{p+1}),$$

where the remainder term is $O(h^{p+1})$ by Taylor's Theorem with remainder. In the approximation, we will neglect the remainder term, and use the DE $x' = f(t, x)$ to replace $x'(t)$, $x''(t)$, ... by expressions involving $f$ and its derivatives:

$$x'(t) = f(t, x(t)),$$
$$x''(t) = \frac{d}{dt}\,f(t, x(t)) = D_t f\Big|_{(t, x(t))} + D_x f\Big|_{(t, x(t))}\,\frac{dx}{dt} = \big(D_t f + (D_x f) f\big)\Big|_{(t, x(t))}$$

(for $n = 1$, this is $f_t + f_x f$). For higher derivatives, inductively differentiate the expression for the previous derivative, and replace any occurrence of $\frac{dx}{dt}$ by $f(t, x(t))$. These expansions lead us to define the Taylor methods of order $p$:

$$p = 1:\quad x_{i+1} = x_i + h f(t_i, x_i) \qquad \text{(Euler's method, } \psi(h,t,x) = f(t,x)\text{)}$$
$$p = 2:\quad x_{i+1} = x_i + h f(t_i, x_i) + \frac{h^2}{2}\big(D_t f + (D_x f) f\big)\Big|_{(t_i, x_i)}$$

For the case $p = 2$, we have

$$\psi(h, t, x) = T_2(h, t, x) \equiv \Big(f + \frac{h}{2}\big(D_t f + (D_x f) f\big)\Big)\Big|_{(t, x)}.$$

We will use the notation $T_p(h, t, x)$ to denote the $\psi(h, t, x)$ function for the Taylor method of order $p$.

Remark. Taylor methods of order $\ge 2$ are rarely used computationally. They require derivatives of $f$ to be programmed and evaluated. They are, however, of theoretical interest in determining the order of a method.
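To make the construction concrete, here is a hedged sketch of the Taylor method of order $p = 2$ for one scalar example; the derivatives $D_t f$ and $D_x f$ must be coded by hand, which is exactly the inconvenience noted in the remark above. The test problem $x' = t + x$ is an illustrative choice.

```python
# Sketch of the order-2 Taylor method
#   x_{i+1} = x_i + h f + (h^2/2)(D_t f + (D_x f) f)   at (t_i, x_i),
# for the illustrative scalar problem x' = t + x (so D_t f = 1, D_x f = 1).
import numpy as np

def taylor2(f, ft, fx, a, b, x_a, h):
    N = int(round((b - a) / h))
    t = a + h * np.arange(N + 1)
    x = np.empty(N + 1)
    x[0] = x_a
    for i in range(N):
        fi = f(t[i], x[i])
        x[i + 1] = x[i] + h * fi + 0.5 * h**2 * (ft(t[i], x[i]) + fx(t[i], x[i]) * fi)
    return t, x

t, x = taylor2(lambda t, x: t + x, lambda t, x: 1.0, lambda t, x: 1.0,
               0.0, 1.0, 1.0, 0.1)
exact = 2.0 * np.exp(t) - t - 1.0    # exact solution of x' = t + x, x(0) = 1
print(np.max(np.abs(x - exact)))     # max grid error is O(h^2)
```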

Remark. A one-step method is actually an association of a function $\psi(h, t, x)$ (defined for $0 \le h \le b-a$, $a \le t \le b$, $x \in \mathbb{R}^n$) to each function $f(t, x)$ (which is continuous in $t, x$ and Lipschitz in $x$ on $[a, b] \times \mathbb{R}^n$). We study methods looking at one function $f$ at a time. Many methods (e.g., Taylor methods of order $p \ge 2$) require more smoothness of $f$, either for their definition, or to guarantee that the solution $x(t)$ is sufficiently smooth. Recall that if $f \in C^p$ (in $t$ and $x$), then the solution $x(t)$ of the IVP $x' = f(t,x)$, $x(a) = x_a$ is in $C^{p+1}([a,b])$. For higher-order methods, this smoothness is essential in getting the error to be higher order in $h$. We will assume from here on (usually tacitly) that $f$ is sufficiently smooth when needed.

Examples.

Modified Euler's Method.

$$x_{i+1} = x_i + h f\Big(t_i + \frac{h}{2},\ x_i + \frac{h}{2} f(t_i, x_i)\Big)$$

(so $\psi(h,t,x) = f\big(t + \frac{h}{2},\, x + \frac{h}{2} f(t,x)\big)$). Here $\psi(h,t,x)$ tries to approximate

$$x'\Big(t + \frac{h}{2}\Big) = f\Big(t + \frac{h}{2},\ x\Big(t + \frac{h}{2}\Big)\Big),$$

using the Euler approximation $x(t) + \frac{h}{2} f(t, x(t))$ to $x\big(t + \frac{h}{2}\big)$.

Improved Euler's Method (or Heun's Method).

$$x_{i+1} = x_i + \frac{h}{2}\Big(f(t_i, x_i) + f\big(t_{i+1},\ x_i + h f(t_i, x_i)\big)\Big)$$

(so $\psi(h,t,x) = \frac{1}{2}\big(f(t,x) + f(t+h,\, x + h f(t,x))\big)$). Here again $\psi(h,t,x)$ tries to approximate $x'\big(t+\frac{h}{2}\big) \approx \frac{1}{2}\big(x'(t) + x'(t+h)\big)$. Or $\psi(h,t,x)$ can be viewed as an approximation to the trapezoid rule applied to

$$\frac{1}{h}\big(x(t+h) - x(t)\big) = \frac{1}{h}\int_t^{t+h} x'(s)\,ds \approx \frac{1}{2}\,x'(t) + \frac{1}{2}\,x'(t+h).$$

Modified Euler and Improved Euler are examples of 2nd order two-stage Runge-Kutta methods. Notice that no derivatives of $f$ need be evaluated, but $f$ needs to be evaluated twice in each step (from $x_i$ to $x_{i+1}$).

Before stating the convergence theorem, we introduce the concept of accuracy.
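Both two-stage methods are easy to state directly through their increment functions $\psi$; the sketch below (with an illustrative test problem, not one from the notes) is one way to organize that.

```python
# Sketch of Modified Euler and Improved Euler (Heun) via their psi functions.
import numpy as np

def psi_modified_euler(f, h, t, x):
    return f(t + h / 2, x + (h / 2) * f(t, x))

def psi_improved_euler(f, h, t, x):              # Heun's method
    return 0.5 * (f(t, x) + f(t + h, x + h * f(t, x)))

def one_step(psi, f, a, b, x_a, h):
    N = int(round((b - a) / h))
    t = a + h * np.arange(N + 1)
    x = np.empty(N + 1)
    x[0] = x_a
    for i in range(N):
        x[i + 1] = x[i] + h * psi(f, h, t[i], x[i])
    return t, x

f = lambda t, x: -x                              # illustrative test problem
for psi in (psi_modified_euler, psi_improved_euler):
    t, x = one_step(psi, f, 0.0, 1.0, 1.0, 0.1)
    print(np.max(np.abs(x - np.exp(-t))))        # both errors are O(h^2)
```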

Local Truncation Error

Let $x_{i+1} = x_i + h\psi(h, t_i, x_i)$ be a one-step method, and let $x(t)$ be a solution of the DE $x' = f(t,x)$. The local truncation error (LTE) for $x(t)$ is defined to be

$$l(h, t) \equiv x(t+h) - \big(x(t) + h\psi(h, t, x(t))\big),$$

that is, the local truncation error is the amount by which the true solution of the DE fails to satisfy the numerical scheme. $l(h,t)$ is defined for those $(h,t)$ for which $0 < h \le b-a$ and $a \le t \le b-h$. Define

$$\tau(h, t) = \frac{l(h, t)}{h}$$

and set $\tau_i(h) = \tau(h, t_i)$. Also, set

$$\tau(h) = \max_{a \le t \le b-h} \|\tau(h, t)\| \qquad \text{for } 0 < h \le b-a.$$

Note that

$$l(h, t_i) = x(t_{i+1}) - \big(x(t_i) + h\psi(h, t_i, x(t_i))\big),$$

explicitly showing the dependence of $l$ on $h$, $t_i$, and $x(t)$.

Definition. A one-step method is called [formally] accurate of order $p$ (for a positive integer $p$) if for any solution $x(t)$ of the DE $x' = f(t,x)$ which is $C^{p+1}$, we have $l(h,t) = O(h^{p+1})$.

Definition. A one-step method is called consistent if $\psi(0, t, x) = f(t, x)$.

Consistency is essentially minimal accuracy:

Proposition. A one-step method $x_{i+1} = x_i + h\psi(h, t_i, x_i)$, where $\psi(h,t,x)$ is continuous for $0 \le h \le h_0$, $a \le t \le b$, $x \in \mathbb{R}^n$ for some $h_0 \in (0, b-a]$, is consistent with the DE $x' = f(t,x)$ if and only if $\tau(h) \to 0$ as $h \to 0^+$.

Proof. Suppose the method is consistent. Fix a solution $x(t)$. For $0 < h \le h_0$, let

$$Z(h) = \max_{a \le s,\,t \le b,\ |s - t| \le h} \|\psi(0, s, x(s)) - \psi(h, t, x(t))\|.$$

By uniform continuity, $Z(h) \to 0$ as $h \to 0^+$. Now

$$l(h,t) = x(t+h) - x(t) - h\psi(h,t,x(t)) = \int_t^{t+h} \big[x'(s) - \psi(h,t,x(t))\big]\,ds = \int_t^{t+h} \big[f(s,x(s)) - \psi(h,t,x(t))\big]\,ds = \int_t^{t+h} \big[\psi(0,s,x(s)) - \psi(h,t,x(t))\big]\,ds,$$

so $\|l(h,t)\| \le h Z(h)$. Therefore $\tau(h) \le Z(h) \to 0$.

Conversely, suppose $\tau(h) \to 0$. For any $t \in [a, b)$ and any $h \in (0, b-t]$,

$$\frac{x(t+h) - x(t)}{h} = \psi(h, t, x(t)) + \tau(h, t).$$

Taking the limit as $h \to 0^+$ gives $f(t, x(t)) = x'(t) = \psi(0, t, x(t))$.
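The definition of the LTE can be checked numerically when an exact solution is known. The sketch below is an illustrative check under assumed data (Heun's method, the problem $x' = -x$ with solution $e^{-t}$): it evaluates $l(h, t)$ at a fixed $t$ and halves $h$; for a method accurate of order $p = 2$ the values should shrink by a factor of about $8 = 2^3$.

```python
# Numerically evaluate l(h, t) = x(t+h) - (x(t) + h psi(h, t, x(t))) for
# Heun's method on x' = -x, whose exact solution is x(t) = exp(-t).
import numpy as np

f = lambda t, x: -x
x_exact = lambda t: np.exp(-t)
psi = lambda h, t, x: 0.5 * (f(t, x) + f(t + h, x + h * f(t, x)))

t = 0.3                               # any fixed t with t + h <= b
for h in (0.1, 0.05, 0.025):
    lte = x_exact(t + h) - (x_exact(t) + h * psi(h, t, x_exact(t)))
    print(h, lte)                     # successive values shrink ~8x: O(h^3)
```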

Convergence Theorem for One-Step Methods

Theorem. Suppose $f(t,x)$ is continuous in $t, x$ and uniformly Lipschitz in $x$ on $[a,b] \times \mathbb{R}^n$. Let $x(t)$ be the solution of the IVP $x' = f(t,x)$, $x(a) = x_a$ on $[a,b]$. Suppose that the function $\psi(h,t,x)$ in the one-step method satisfies the following two conditions:

1. (Stability) $\psi(h,t,x)$ is continuous in $h, t, x$ and uniformly Lipschitz in $x$ (with Lipschitz constant $K$) on $0 \le h \le h_0$, $a \le t \le b$, $x \in \mathbb{R}^n$ for some $h_0 > 0$ with $h_0 \le b-a$, and

2. (Consistency) $\psi(0,t,x) = f(t,x)$.

Let $e_i(h) = x(t_i(h)) - x_i(h)$, where $x_i$ is obtained from the one-step method $x_{i+1} = x_i + h\psi(h, t_i, x_i)$. (Note that $e_0(h) = x_a - x_0(h)$ is the error in the initial value $x_0(h)$.) Then

$$\|e_i(h)\| \le e^{K(t_i(h) - a)}\,\|e_0(h)\| + \frac{e^{K(t_i(h)-a)} - 1}{K}\,\tau(h),$$

so

$$\|e_i(h)\| \le e^{K(b-a)}\,\|e_0(h)\| + \frac{e^{K(b-a)} - 1}{K}\,\tau(h).$$

Moreover, $\tau(h) \to 0$ as $h \to 0$. Therefore, if $e_0(h) \to 0$ as $h \to 0$, then

$$\max_{0 \le i \le (b-a)/h} \|e_i(h)\| \to 0 \quad \text{as } h \to 0,$$

that is, the approximations converge uniformly on the grid to the solution.

Proof. Hold $h > 0$ fixed, and ignore rounding error. Subtracting

$$x_{i+1} = x_i + h\psi(h, t_i, x_i)$$

from

$$x(t_{i+1}) = x(t_i) + h\psi(h, t_i, x(t_i)) + h\tau_i$$

gives

$$e_{i+1} = e_i + h\big(\psi(h, t_i, x(t_i)) - \psi(h, t_i, x_i)\big) + h\tau_i.$$

So

$$\|e_{i+1}\| \le \|e_i\| + h\,\|\psi(h, t_i, x(t_i)) - \psi(h, t_i, x_i)\| + h\|\tau_i\| \le \|e_i\| + hK\|e_i\| + h\tau(h) = (1 + hK)\|e_i\| + h\tau(h).$$

By induction,

$$\|e_1\| \le (1+hK)\|e_0\| + h\tau(h),$$
$$\|e_2\| \le (1+hK)\|e_1\| + h\tau(h) \le (1+hK)^2\|e_0\| + h\tau(h)\big(1 + (1+hK)\big),$$

and in general

$$\|e_i\| \le (1+hK)^i\|e_0\| + h\tau(h)\big(1 + (1+hK) + (1+hK)^2 + \cdots + (1+hK)^{i-1}\big) = (1+hK)^i\|e_0\| + h\tau(h)\,\frac{(1+hK)^i - 1}{(1+hK) - 1} = (1+hK)^i\|e_0\| + \tau(h)\,\frac{(1+hK)^i - 1}{K}.$$

Since $(1+hK)^{1/h} \uparrow e^K$ as $h \to 0^+$ (for $K > 0$), and $i = \frac{t_i - a}{h}$, we have

$$(1+hK)^i = (1+hK)^{\frac{t_i - a}{h}} \le e^{K(t_i - a)}.$$

Thus

$$\|e_i\| \le e^{K(t_i - a)}\|e_0\| + \tau(h)\,\frac{e^{K(t_i - a)} - 1}{K}.$$

The preceding proposition shows $\tau(h) \to 0$, and the theorem follows.

If $f$ is sufficiently smooth, then we know that $x(t) \in C^{p+1}$. The theorem thus implies that if a one-step method is accurate of order $p$ and stable [i.e. $\psi$ is Lipschitz in $x$], then for sufficiently smooth $f$,

$$l(h,t) = O(h^{p+1}) \quad \text{and thus} \quad \tau(h) = O(h^p).$$

If, in addition, $e_0(h) = O(h^p)$, then

$$\max_i \|e_i(h)\| = O(h^p),$$

i.e. we have $p$-th order convergence of the numerical approximations to the solution.

Example. The Taylor method of order $p$ is accurate of order $p$. If $f \in C^p$, then $x \in C^{p+1}$, and

$$l(h,t) = x(t+h) - \Big(x(t) + h x'(t) + \cdots + \frac{h^p}{p!}\,x^{(p)}(t)\Big) = \frac{1}{p!}\int_t^{t+h} (t + h - s)^p\, x^{(p+1)}(s)\,ds.$$

So

$$\|l(h,t)\| \le \frac{M_{p+1}\, h^{p+1}}{(p+1)!}, \qquad \text{where } M_{p+1} = \max_{a \le t \le b} \|x^{(p+1)}(t)\|.$$

Fact. A one-step method $x_{i+1} = x_i + h\psi(h,t_i,x_i)$ is accurate of order $p$ if and only if $\psi(h,t,x) = T_p(h,t,x) + O(h^p)$, where $T_p$ is the $\psi$ for the Taylor method of order $p$.

Proof. Since $x(t+h) - x(t) = h\,T_p(h,t,x(t)) + O(h^{p+1})$, we have for any given one-step method that

$$l(h,t) = x(t+h) - x(t) - h\psi(h,t,x(t)) = h\,T_p(h,t,x(t)) + O(h^{p+1}) - h\psi(h,t,x(t)) = h\big(T_p(h,t,x(t)) - \psi(h,t,x(t))\big) + O(h^{p+1}).$$

So $l(h,t) = O(h^{p+1})$ iff $h(T_p - \psi) = O(h^{p+1})$ iff $\psi = T_p + O(h^p)$.

Remark. The controlled growth of the effect of the local truncation error (LTE) from previous steps in the proof of the convergence theorem (a consequence of the Lipschitz continuity of $\psi$ in $x$) is called stability. The theorem states: Stability + Consistency (minimal accuracy) $\Rightarrow$ Convergence. In fact, here, the converse is also true.
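The convergence theorem predicts $\max_i \|e_i(h)\| = O(h^p)$ for a stable method accurate of order $p$. The illustrative refinement study below (the test problem $x' = -x$ is an assumption) estimates the observed order for Euler ($p = 1$) and Heun ($p = 2$) from errors at successive halvings of $h$.

```python
# Grid-refinement study: observed convergence rates for Euler and Heun.
import numpy as np

def run(psi, a, b, x_a, h):
    N = int(round((b - a) / h))
    t, x = a + h * np.arange(N + 1), np.empty(N + 1)
    x[0] = x_a
    for i in range(N):
        x[i + 1] = x[i] + h * psi(h, t[i], x[i])
    return t, x

f = lambda t, x: -x
methods = {"Euler": lambda h, t, x: f(t, x),
           "Heun":  lambda h, t, x: 0.5 * (f(t, x) + f(t + h, x + h * f(t, x)))}

for name, psi in methods.items():
    errs = []
    for h in (0.1, 0.05, 0.025):
        t, x = run(psi, 0.0, 1.0, 1.0, h)
        errs.append(np.max(np.abs(x - np.exp(-t))))
    rates = np.log2(np.array(errs[:-1]) / np.array(errs[1:]))
    print(name, rates)               # ~1 for Euler, ~2 for Heun
```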

Explicit Runge-Kutta Methods

One of the problems with Taylor methods is the need to evaluate higher derivatives of $f$. Runge-Kutta (RK) methods replace this with the much more reasonable need to evaluate $f$ more than once to go from $x_i$ to $x_{i+1}$. An $m$-stage (explicit) RK method is of the form $x_{i+1} = x_i + h\psi(h, t_i, x_i)$, with

$$\psi(h,t,x) = \sum_{j=1}^{m} a_j\, k_j(h,t,x),$$

where $a_1, \ldots, a_m$ are given constants,

$$k_1(h,t,x) = f(t,x),$$

and for $2 \le j \le m$,

$$k_j(h,t,x) = f\Big(t + \alpha_j h,\ x + h\sum_{r=1}^{j-1} \beta_{jr}\, k_r(h,t,x)\Big),$$

with $\alpha_2, \ldots, \alpha_m$ and $\beta_{jr}$ ($1 \le r < j \le m$) given constants. We usually choose $0 < \alpha_j \le 1$, and for accuracy reasons we choose

$$(*)\qquad \alpha_j = \sum_{r=1}^{j-1} \beta_{jr} \qquad (2 \le j \le m).$$

Example. $m = 2$:

$$x_{i+1} = x_i + h\big(a_1 k_1(h, t_i, x_i) + a_2 k_2(h, t_i, x_i)\big),$$

where

$$k_1(h, t_i, x_i) = f(t_i, x_i), \qquad k_2(h, t_i, x_i) = f\big(t_i + \alpha_2 h,\ x_i + h\beta_{21} k_1(h, t_i, x_i)\big).$$

For simplicity, write $\alpha$ for $\alpha_2$ and $\beta$ for $\beta_{21}$. Expanding in $h$,

$$k_2(h,t,x) = f(t + \alpha h,\ x + h\beta f(t,x)) = f(t,x) + \alpha h\, D_t f(t,x) + (D_x f(t,x))(h\beta f(t,x)) + O(h^2) = \big[f + h(\alpha D_t f + \beta (D_x f) f)\big](t,x) + O(h^2).$$

So

$$\psi(h,t,x) = (a_1 + a_2) f + h\big(a_2 \alpha\, D_t f + a_2 \beta\, (D_x f) f\big) + O(h^2).$$

Recalling that

$$T_2 = f + \frac{h}{2}\big(D_t f + (D_x f) f\big),$$

and that the method is accurate of order two if and only if $\psi = T_2 + O(h^2)$, we obtain the following necessary and sufficient conditions on a two-stage (explicit) RK method to be accurate of order two:

$$a_1 + a_2 = 1, \qquad a_2\alpha = \tfrac{1}{2}, \qquad a_2\beta = \tfrac{1}{2}.$$

We require $\alpha = \beta$ as in $(*)$ (we now see why this condition needs to be imposed), whereupon these conditions become:

$$a_1 + a_2 = 1, \qquad a_2\alpha = \tfrac{1}{2}.$$

Therefore, there is a one-parameter family (e.g., parameterized by $\alpha$) of 2nd order, two-stage ($m = 2$) explicit RK methods.

Examples. (1) Setting $\alpha = \frac{1}{2}$ gives $a_2 = 1$, $a_1 = 0$, which is the Modified Euler method. (2) Choosing $\alpha = 1$ gives $a_2 = \frac{1}{2}$, $a_1 = \frac{1}{2}$, which is the Improved Euler method, or Heun's method.

Remark. Note that an $m$-stage explicit RK method requires $m$ function evaluations (i.e., evaluations of $f$) in each step ($x_i$ to $x_{i+1}$).

Attainable Orders of Accuracy for Explicit RK Methods

  # of stages (m)   highest order attainable
  1                 1 (Euler's method)
  2                 2
  3                 3
  4                 4
  5                 4
  6                 5
  7                 6
  8                 6

Explicit RK methods are always stable: $\psi$ inherits its Lipschitz continuity from $f$.

Example. Modified Euler. Let $L$ be the Lipschitz constant for $f$, and suppose $h \le h_0$ (for some $h_0 \le b-a$).

$$x_{i+1} = x_i + h f\Big(t_i + \frac{h}{2},\ x_i + \frac{h}{2} f(t_i, x_i)\Big), \qquad \psi(h,t,x) = f\Big(t + \frac{h}{2},\ x + \frac{h}{2} f(t,x)\Big).$$

So

$$\|\psi(h,t,x) - \psi(h,t,y)\| \le L\,\Big\|\Big(x + \frac{h}{2} f(t,x)\Big) - \Big(y + \frac{h}{2} f(t,y)\Big)\Big\| \le L\|x - y\| + \frac{h}{2}\, L\, \|f(t,x) - f(t,y)\| \le L\|x-y\| + \frac{h}{2}\, L^2 \|x-y\| \le K\|x-y\|,$$

where $K = L + \frac{h_0}{2} L^2$ is thus the Lipschitz constant for $\psi$.
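The one-parameter family can be coded directly: given $\alpha$, set $\beta = \alpha$, $a_2 = \frac{1}{2\alpha}$, $a_1 = 1 - a_2$. The sketch below is illustrative (the test problem is an assumption); $\alpha = \frac{1}{2}$ reproduces Modified Euler and $\alpha = 1$ reproduces Improved Euler.

```python
# Sketch of the one-parameter family of 2nd order two-stage explicit RK methods.
import numpy as np

def rk2_psi(f, alpha):
    a2 = 1.0 / (2.0 * alpha)
    a1 = 1.0 - a2
    def psi(h, t, x):
        k1 = f(t, x)
        k2 = f(t + alpha * h, x + h * alpha * k1)   # beta = alpha, as in (*)
        return a1 * k1 + a2 * k2
    return psi

f = lambda t, x: -x                      # illustrative test problem
h = 0.05
for alpha in (0.5, 1.0, 0.75):           # 0.5: Modified Euler, 1.0: Heun
    psi, t, x = rk2_psi(f, alpha), 0.0, 1.0
    for _ in range(int(round(1.0 / h))):
        x, t = x + h * psi(h, t, x), t + h
    print(alpha, abs(x - np.exp(-1.0)))  # all errors are O(h^2)
```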

Example. A popular 4th order four-stage RK method is

$$x_{i+1} = x_i + \frac{h}{6}\,(k_1 + 2k_2 + 2k_3 + k_4),$$

where

$$k_1 = f(t_i, x_i),$$
$$k_2 = f\Big(t_i + \frac{h}{2},\ x_i + \frac{h}{2}\,k_1\Big),$$
$$k_3 = f\Big(t_i + \frac{h}{2},\ x_i + \frac{h}{2}\,k_2\Big),$$
$$k_4 = f(t_i + h,\ x_i + h k_3).$$

The same argument as above shows this method is stable.
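A sketch of the classical four-stage method above, for scalar problems; the test problem is an illustrative assumption.

```python
# Sketch of the classical 4th order RK method (RK4).
import numpy as np

def rk4(f, a, b, x_a, h):
    N = int(round((b - a) / h))
    t = a + h * np.arange(N + 1)
    x = np.empty(N + 1)
    x[0] = x_a
    for i in range(N):
        k1 = f(t[i], x[i])
        k2 = f(t[i] + h / 2, x[i] + (h / 2) * k1)
        k3 = f(t[i] + h / 2, x[i] + (h / 2) * k2)
        k4 = f(t[i] + h, x[i] + h * k3)
        x[i + 1] = x[i] + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
    return t, x

t, x = rk4(lambda t, x: -x, 0.0, 1.0, 1.0, 0.1)
print(np.max(np.abs(x - np.exp(-t))))   # error is O(h^4)
```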

Remark. RK methods require multiple function evaluations per step (going from $x_i$ to $x_{i+1}$). One-step methods discard information from previous steps (e.g., $x_{i-1}$ is not used to get $x_{i+1}$ except in its influence on $x_i$). We will next study a class of multi-step methods. But first, we consider linear difference equations.

Linear Difference Equations (Constant Coefficients)

In this discussion, $x_i$ will be a (scalar) sequence defined for $i \ge 0$. Consider the ($k$-step) linear difference equation

$$\text{(LDE)}\qquad x_{i+k} + \alpha_{k-1} x_{i+k-1} + \cdots + \alpha_0 x_i = b_i \qquad (i \ge 0).$$

If $b_i \equiv 0$, the linear difference equation (LDE) is said to be homogeneous, in which case we will refer to it as (lh). If $b_i \ne 0$ for some $i \ge 0$, the linear difference equation (LDE) is said to be inhomogeneous, in which case we refer to it as (li).

Initial Value Problem (IVP): Given $x_i$ for $i = 0, \ldots, k-1$, determine $x_i$ satisfying (LDE) for $i \ge 0$.

Theorem. There exists a unique solution of (IVP) for (lh) or (li).

Proof. An obvious induction on $i$. The equation for $i = 0$ determines $x_k$, etc.

Theorem. The solution set of (lh) is a $k$-dimensional vector space (a subspace of the set of all sequences $\{x_i\}_{i \ge 0}$).

Proof Sketch. Choosing $[x_0, x_1, \ldots, x_{k-1}]^T = e_j \in \mathbb{R}^k$ for $j = 1, 2, \ldots, k$ and then solving (lh) gives a basis of the solution space of (lh).

Define the characteristic polynomial of (lh) to be

$$p(r) = r^k + \alpha_{k-1} r^{k-1} + \cdots + \alpha_0.$$

Let us assume that $\alpha_0 \ne 0$. (If $\alpha_0 = 0$, (LDE) isn't really a $k$-step difference equation, since we can shift indices and treat it as a $\tilde{k}$-step difference equation for a $\tilde{k} < k$, namely $\tilde{k} = k - \nu$, where $\nu$ is the smallest index with $\alpha_\nu \ne 0$.) Let $r_1, \ldots, r_s$ be the distinct zeroes of $p$, with multiplicities $m_1, \ldots, m_s$. Note that each $r_j \ne 0$ since $\alpha_0 \ne 0$, and $m_1 + \cdots + m_s = k$. Then a basis of solutions of (lh) is:

$$\big\{\, \{i^l r_j^i\}_{i=0}^{\infty} \ :\ 1 \le j \le s,\ 0 \le l \le m_j - 1 \,\big\}.$$

Example. Fibonacci Sequence: $F_{i+2} - F_{i+1} - F_i = 0$, $F_0 = 0$, $F_1 = 1$. The associated characteristic polynomial $r^2 - r - 1 = 0$ has roots $r_\pm = \frac{1 \pm \sqrt{5}}{2}$ ($r_+ \approx 1.6$, $r_- \approx -0.6$). The general solution of (lh) is

$$F_i = C_+ \Big(\frac{1 + \sqrt{5}}{2}\Big)^i + C_- \Big(\frac{1 - \sqrt{5}}{2}\Big)^i.$$

The initial conditions $F_0 = 0$ and $F_1 = 1$ imply that $C_+ = \frac{1}{\sqrt{5}}$ and $C_- = -\frac{1}{\sqrt{5}}$. Hence

$$F_i = \frac{1}{\sqrt{5}}\bigg[\Big(\frac{1+\sqrt{5}}{2}\Big)^i - \Big(\frac{1-\sqrt{5}}{2}\Big)^i\bigg].$$

Since $|r_-| < 1$, we have $\big(\frac{1-\sqrt{5}}{2}\big)^i \to 0$ as $i \to \infty$. Hence, the Fibonacci sequence behaves asymptotically like the sequence $\frac{1}{\sqrt{5}}\big(\frac{1+\sqrt{5}}{2}\big)^i$.
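A quick illustrative check of the closed form against the recurrence (floating-point arithmetic only; the closed form matches the integers up to rounding):

```python
# Fibonacci recurrence vs. the closed form F_i = (r_+^i - r_-^i)/sqrt(5).
import math

def fib(n):                          # F_0, ..., F_n via the recurrence
    F = [0, 1]
    for _ in range(n - 1):
        F.append(F[-1] + F[-2])
    return F

s5 = math.sqrt(5.0)
rp, rm = (1 + s5) / 2, (1 - s5) / 2
for i, Fi in enumerate(fib(10)):
    print(i, Fi, (rp**i - rm**i) / s5)
```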

Remark. If $\alpha_0 = \alpha_1 = \cdots = \alpha_{\nu-1} = 0$ and $\alpha_\nu \ne 0$ (i.e., $0$ is a root of multiplicity $\nu$), then $x_0, x_1, \ldots, x_{\nu-1}$ are completely independent of $x_i$ for $i \ge \nu$. So

$$x_{i+k} + \cdots + \alpha_\nu x_{i+\nu} = b_i \quad \text{for } i \ge 0,$$

with $x_i$ given for $\nu \le i \le k-1$, behaves like a $(k-\nu)$-step difference equation.

Remark. Define

$$\vec{x}_i = \begin{bmatrix} x_i \\ x_{i+1} \\ \vdots \\ x_{i+k-1} \end{bmatrix}, \qquad \vec{x}_0 = \begin{bmatrix} x_0 \\ x_1 \\ \vdots \\ x_{k-1} \end{bmatrix} \text{ given by the I.C.}$$

Then $\vec{x}_{i+1} = A\vec{x}_i$ for $i \ge 0$, where

$$A = \begin{bmatrix} 0 & 1 & & \\ & \ddots & \ddots & \\ & & 0 & 1 \\ -\alpha_0 & \cdots & & -\alpha_{k-1} \end{bmatrix}.$$

So (lh) is equivalent to the one-step vector difference equation $\vec{x}_{i+1} = A\vec{x}_i$, $i \ge 0$, whose solution is $\vec{x}_i = A^i \vec{x}_0$. The characteristic polynomial of (lh) is the characteristic polynomial of $A$. Because $A$ is a companion matrix, each distinct eigenvalue has only one Jordan block. If $A = PJP^{-1}$ is the Jordan decomposition of $A$ ($J$ in Jordan form, $P$ invertible), then $\vec{x}_i = PJ^iP^{-1}\vec{x}_0$. Let $J_j$ be the $m_j \times m_j$ block corresponding to $r_j$ (for $1 \le j \le s$), so $J_j = r_j I + Z_j$, where $Z_j$ denotes the $m_j \times m_j$ shift matrix:

$$Z_j = \begin{bmatrix} 0 & 1 & & \\ & \ddots & \ddots & \\ & & 0 & 1 \\ & & & 0 \end{bmatrix}.$$

Then

$$J_j^i = (r_j I + Z_j)^i = \sum_{l=0}^{i} \binom{i}{l}\, r_j^{i-l} Z_j^l.$$

Since $\binom{i}{l}$ is a polynomial in $i$ of degree $l$ and $Z_j^{m_j} = 0$, we see entries in $\vec{x}_i$ of the form $(\text{constant})\, i^l r_j^i$ for $0 \le l \le m_j - 1$.
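The power-boundedness criterion is easy to watch numerically. The sketch below is illustrative (the two polynomials are assumptions chosen to contrast the two cases): it builds the companion matrix and prints $\|A^i\|_\infty$; for $p(r) = r^2 - 1$ (simple roots on the unit circle) the powers stay bounded, while for $p(r) = r^2 - 2r + 1$ (double root $r = 1$) they grow linearly in $i$.

```python
# Companion matrix powers: bounded iff the root condition (a), (b) holds.
import numpy as np

def companion(alpha):                # alpha = [alpha_0, ..., alpha_{k-1}]
    k = len(alpha)
    A = np.zeros((k, k))
    A[:-1, 1:] = np.eye(k - 1)       # shifted identity above the last row
    A[-1, :] = -np.asarray(alpha)    # last row: -alpha_0, ..., -alpha_{k-1}
    return A

# p(r) = r^2 - 1:       alpha = [-1, 0]  (roots +1 and -1, both simple)
# p(r) = r^2 - 2r + 1:  alpha = [1, -2]  (double root r = 1)
for alpha in ([-1.0, 0.0], [1.0, -2.0]):
    A = companion(alpha)
    norms = [np.linalg.norm(np.linalg.matrix_power(A, i), np.inf)
             for i in (1, 10, 100)]
    print(alpha, norms)
```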

Remark. (li) becomes

$$\vec{x}_{i+1} = A\vec{x}_i + \vec{b}_i, \qquad i \ge 0,$$

where $\vec{b}_i = [0, \ldots, 0, b_i]^T$. This leads to a discrete version of Duhamel's principle (exercise).

Remark. All solutions $\{x_i\}_{i\ge0}$ of (lh) stay bounded (i.e. are elements of $\ell^\infty$)
$\iff$ the matrix $A$ is power bounded (i.e., $\exists\, M$ so that $\|A^i\| \le M$ for all $i \ge 0$)
$\iff$ the Jordan blocks $J_1, \ldots, J_s$ are all power bounded
$\iff$ (a) each $|r_j| \le 1$ (for $1 \le j \le s$), and (b) for any $j$ with $m_j > 1$ (multiple roots), $|r_j| < 1$.

If (a) and (b) are satisfied, then the map $\vec{x}_0 \mapsto \{x_i\}_{i \ge 0}$ is a bounded linear operator from $\mathbb{R}^k$ (or $\mathbb{C}^k$) into $\ell^\infty$ (exercise).

Linear Multistep Methods (LMM)

A LMM is a method of the form

$$\sum_{j=0}^{k} \alpha_j x_{i+j} = h \sum_{j=0}^{k} \beta_j f_{i+j}, \qquad i \ge 0,$$

for the approximate solution of an ODE IVP $x' = f(t, x)$, $x(a) = x_a$. Here we want to approximate the solution $x(t)$ of this IVP for $a \le t \le b$ at the points $t_i = a + ih$ (where $h$ is the time step), $0 \le i \le \frac{b-a}{h}$. The term $x_i$ denotes the approximation to $x(t_i)$. We have set $f_{i+j} = f(t_{i+j}, x_{i+j})$. We normalize the coefficients so that $\alpha_k = 1$. The above is called a $k$-step LMM (if at least one of the coefficients $\alpha_0$ and $\beta_0$ is non-zero). The above equation is similar to a difference equation in that one is solving for $x_{i+k}$ from $x_i, x_{i+1}, \ldots, x_{i+k-1}$. We assume as usual that $f$ is continuous in $(t,x)$ and uniformly Lipschitz in $x$. For simplicity of notation, we will assume that $x(t)$ is real and scalar; the analysis that follows applies to $x(t) \in \mathbb{R}^n$ or $x(t) \in \mathbb{C}^n$ (viewed as $\mathbb{R}^{2n}$ for differentiability) with minor adjustments.

Example. (Midpoint rule) (explicit)

$$x(t_{i+2}) - x(t_i) = \int_{t_i}^{t_{i+2}} x'(s)\,ds \approx 2h\,x'(t_{i+1}) = 2h f(t_{i+1}, x(t_{i+1})).$$

This approximate relationship suggests the LMM

$$\text{Midpoint rule:}\qquad x_{i+2} - x_i = 2h f_{i+1}.$$

Example. (Trapezoid rule) (implicit) The approximation

$$x(t_{i+1}) - x(t_i) = \int_{t_i}^{t_{i+1}} x'(s)\,ds \approx \frac{h}{2}\big(x'(t_{i+1}) + x'(t_i)\big)$$

suggests the LMM

$$\text{Trapezoid rule:}\qquad x_{i+1} - x_i = \frac{h}{2}\,(f_{i+1} + f_i).$$
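A sketch of the explicit midpoint rule above as a 2-step LMM; the missing starting value $x_1$ is supplied by one Euler step, whose $O(h^2)$ one-step error is good enough for a 2nd order method (cf. the remark on starting values below). The test problem is an illustrative assumption.

```python
# Sketch of the explicit midpoint rule x_{i+2} = x_i + 2 h f_{i+1}.
import numpy as np

def midpoint_lmm(f, a, b, x_a, h):
    N = int(round((b - a) / h))
    t = a + h * np.arange(N + 1)
    x = np.empty(N + 1)
    x[0] = x_a
    x[1] = x[0] + h * f(t[0], x[0])          # one Euler step supplies x_1
    for i in range(N - 1):
        x[i + 2] = x[i] + 2 * h * f(t[i + 1], x[i + 1])
    return t, x

t, x = midpoint_lmm(lambda t, x: -x, 0.0, 1.0, 1.0, 0.05)
print(np.max(np.abs(x - np.exp(-t))))        # 2nd order accurate
```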

Explicit vs Implicit. If $\beta_k = 0$, the LMM is called explicit: once we know $x_i, x_{i+1}, \ldots, x_{i+k-1}$, we compute immediately

$$x_{i+k} = \sum_{j=0}^{k-1} \big(h\beta_j f_{i+j} - \alpha_j x_{i+j}\big).$$

On the other hand, if $\beta_k \ne 0$, the LMM is called implicit: knowing $x_i, x_{i+1}, \ldots, x_{i+k-1}$, we need to solve

$$x_{i+k} = h\beta_k f(t_{i+k}, x_{i+k}) - \sum_{j=0}^{k-1} \big(\alpha_j x_{i+j} - h\beta_j f_{i+j}\big)$$

for $x_{i+k}$.

Remark. If $h$ is sufficiently small, implicit LMMs also have unique solutions given $h$ and $x_0, x_1, \ldots, x_{k-1}$. To see this, let $L$ be the Lipschitz constant for $f$. Given $x_i, \ldots, x_{i+k-1}$, the value for $x_{i+k}$ is obtained by solving the equation

$$x_{i+k} = h\beta_k f(t_{i+k}, x_{i+k}) + g_i,$$

where

$$g_i = \sum_{j=0}^{k-1} \big(h\beta_j f_{i+j} - \alpha_j x_{i+j}\big)$$

is a constant as far as $x_{i+k}$ is concerned. That is, we are looking for a fixed point of

$$\Phi(x) = h\beta_k f(t_{i+k}, x) + g_i.$$

Note that if $h|\beta_k| L < 1$, then $\Phi$ is a contraction:

$$\|\Phi(x) - \Phi(y)\| \le h|\beta_k|\,\|f(t_{i+k}, x) - f(t_{i+k}, y)\| \le h|\beta_k| L\, \|x - y\|.$$

So by the Contraction Mapping Fixed Point Theorem, $\Phi$ has a unique fixed point. Any initial guess for $x_{i+k}$ leads to a sequence converging to the fixed point using the functional iteration

$$x_{i+k}^{(l+1)} = h\beta_k f\big(t_{i+k},\, x_{i+k}^{(l)}\big) + g_i,$$

which is initiated at some initial point $x_{i+k}^{(0)}$. In practice, one chooses either (1) iterate to convergence, or (2) a fixed number of iterations, using an explicit method to get the initial guess $x_{i+k}^{(0)}$. This pairing is often called a Predictor-Corrector Method.
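A sketch of one such pairing for the trapezoid rule: an Euler predictor followed by a fixed number of corrector (functional) iterations. With one correction this costs 2 FEs per step and reproduces Improved Euler; the test problem and iteration count are illustrative assumptions.

```python
# Predictor-corrector sketch for the implicit trapezoid rule.
import numpy as np

def trapezoid_pc(f, a, b, x_a, h, corrections=1):
    N = int(round((b - a) / h))
    t = a + h * np.arange(N + 1)
    x = np.empty(N + 1)
    x[0] = x_a
    for i in range(N):
        fi = f(t[i], x[i])
        y = x[i] + h * fi                    # predictor: one Euler step
        for _ in range(corrections):         # corrector: functional iteration
            y = x[i] + (h / 2) * (fi + f(t[i + 1], y))
        x[i + 1] = y
    return t, x

t, x = trapezoid_pc(lambda t, x: -x, 0.0, 1.0, 1.0, 0.05)
print(np.max(np.abs(x - np.exp(-t))))        # 2nd order accurate
```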

Function Evaluations. One FE means evaluating $f$ once.

Explicit LMM: 1 FE per step (after initial start).
Implicit LMM: ? FEs per step if iterating to convergence; usually 2 FEs per step for a Predictor-Corrector Method.

Initial Values. To start a $k$-step LMM, we need $x_0, x_1, \ldots, x_{k-1}$. We usually take $x_0 = x_a$, and approximate $x_1, \ldots, x_{k-1}$ using a one-step method (e.g., a Runge-Kutta method).

Local Truncation Error. For a true solution $x(t)$ to $x' = f(t,x)$, define the LTE to be

$$l(h, t) = \sum_{j=0}^{k} \alpha_j\, x(t + jh) - h \sum_{j=0}^{k} \beta_j\, x'(t + jh).$$

If $x \in C^{p+1}$, then

$$x(t+jh) = x(t) + jh\,x'(t) + \cdots + \frac{(jh)^p}{p!}\, x^{(p)}(t) + O(h^{p+1})$$

and

$$h\, x'(t+jh) = h\, x'(t) + jh^2\, x''(t) + \cdots + \frac{j^{p-1} h^p}{(p-1)!}\, x^{(p)}(t) + O(h^{p+1}),$$

and so

$$l(h,t) = C_0\, x(t) + C_1\, h\, x'(t) + \cdots + C_p\, h^p\, x^{(p)}(t) + O(h^{p+1}),$$

where

$$C_0 = \alpha_0 + \cdots + \alpha_k,$$
$$C_1 = (\alpha_1 + 2\alpha_2 + \cdots + k\alpha_k) - (\beta_0 + \cdots + \beta_k),$$
$$\vdots$$
$$C_q = \frac{1}{q!}\,(\alpha_1 + 2^q \alpha_2 + \cdots + k^q \alpha_k) - \frac{1}{(q-1)!}\,(\beta_1 + 2^{q-1}\beta_2 + \cdots + k^{q-1}\beta_k).$$

Definition. A LMM is called accurate of order $p$ if $l(h,t) = O(h^{p+1})$ for any solution of $x' = f(t,x)$ which is $C^{p+1}$.

Fact. A LMM is accurate of order at least $p$ iff $C_0 = C_1 = \cdots = C_p = 0$. (It is called accurate of order exactly $p$ if also $C_{p+1} \ne 0$.)

Remarks. (i) For the LTE of a method to be $o(h)$ for all $f$'s, we must have $C_0 = C_1 = 0$. To see this, note that for any $f$ which is $C^1$, all solutions $x(t)$ are $C^2$, so $l(h,t) = C_0 x(t) + C_1 h x'(t) + O(h^2)$ is $o(h)$ iff $C_0 = C_1 = 0$.

(ii) Note that $C_0, C_1, \ldots$ depend only on $\alpha_0, \ldots, \alpha_k,\ \beta_0, \ldots, \beta_k$ and not on $f$. So here, minimal accuracy is first order.
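The coefficients $C_0, C_1, \ldots$ are straightforward to compute from the $\alpha_j$, $\beta_j$ alone. The sketch below uses the formulas above (with the convention $j^0 = 1$, including $j = 0$) and confirms that the midpoint and trapezoid rules are both accurate of order exactly 2.

```python
# Compute the LTE coefficients C_0, ..., C_qmax of a k-step LMM.
from math import factorial

def lte_coeffs(alpha, beta, qmax):
    k = len(alpha) - 1
    C = [sum(alpha)]                                    # C_0
    for q in range(1, qmax + 1):
        C.append(sum(j**q * alpha[j] for j in range(k + 1)) / factorial(q)
                 - sum(j**(q - 1) * beta[j] for j in range(k + 1)) / factorial(q - 1))
    return C

# Midpoint rule  x_{i+2} - x_i = 2 h f_{i+1}:        C = [0, 0, 0, 1/3]
print(lte_coeffs([-1, 0, 1], [0, 2, 0], 3))
# Trapezoid rule x_{i+1} - x_i = (h/2)(f_{i+1}+f_i): C = [0, 0, 0, -1/12]
print(lte_coeffs([-1, 1], [0.5, 0.5], 3))
```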

Definition. A LMM is called consistent if $C_0 = C_1 = 0$ (i.e., at least first-order accurate).

Remark. If a LMM is consistent, then any solution $x(t)$ for any $f$ (continuous in $(t,x)$, Lipschitz in $x$) has $l(h,t) = o(h)$. To see this, note that since $x \in C^1$,

$$x(t+jh) = x(t) + jh\,x'(t) + o(h) \qquad \text{and} \qquad h\,x'(t+jh) = h\,x'(t) + o(h),$$

so $l(h,t) = C_0 x(t) + C_1 h x'(t) + o(h) = o(h)$. Exercise: Verify that the $o(h)$ terms converge to $0$ uniformly in $t$ (after dividing by $h$) as $h \to 0$; use the uniform continuity of $x'(t)$ on $[a,b]$.

Definition. A $k$-step LMM $\sum \alpha_j x_{i+j} = h \sum \beta_j f_{i+j}$ is called convergent if for each IVP $x' = f(t,x)$, $x(a) = x_a$ on $[a,b]$ ($f$ continuous and Lipschitz) and for any choice of $x_0(h), \ldots, x_{k-1}(h)$ for which

$$\lim_{h \to 0}\, \big(x(t_i(h)) - x_i(h)\big) = 0 \qquad \text{for } i = 0, \ldots, k-1,$$

we have

$$\lim_{h \to 0}\ \max_{\{i\,:\,a \le t_i(h) \le b\}} \big|x(t_i(h)) - x_i(h)\big| = 0.$$

Remarks. (i) This asks for uniform decrease of the error on the grid as $h \to 0$. (ii) By continuity of $x(t)$, the condition on the initial values is equivalent to $x_i(h) \to x_a$ for $i = 0, 1, \ldots, k-1$.

Fact. If a LMM is convergent, then the zeroes of the (first) characteristic polynomial of the method

$$p(r) = \alpha_k r^k + \cdots + \alpha_0$$

satisfy the Dahlquist root condition:

(a) all zeroes $r$ satisfy $|r| \le 1$, and
(b) multiple zeroes $r$ satisfy $|r| < 1$.
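The root condition is mechanical to test numerically. The sketch below is illustrative only: it finds the zeroes of $p(r)$ with numpy and uses an ad hoc tolerance to flag multiple roots and the boundary $|r| = 1$, which is inherently fragile in floating point.

```python
# Heuristic check of the Dahlquist root condition for alpha_0, ..., alpha_k.
import numpy as np

def root_condition(alpha, tol=1e-8):
    r = np.roots(alpha[::-1])            # zeroes of p(r) = sum_j alpha_j r^j
    for i, ri in enumerate(r):
        multiple = any(abs(ri - rj) < tol for j, rj in enumerate(r) if j != i)
        if abs(ri) > 1 + tol:                      # violates (a)
            return False
        if multiple and abs(ri) >= 1 - tol:        # violates (b)
            return False
    return True

print(root_condition([-1, 0, 1]))   # midpoint rule: roots +/-1, simple -> True
print(root_condition([2, -3, 1]))   # p(r) = (r-1)(r-2): root 2 -> False
```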

Examples. Consider the IVP $x' = 0$, $a \le t \le b$, $x(a) = 0$, so $f \equiv 0$, and consider the LMM $\sum \alpha_j x_{i+j} = 0$.

(1) Let $r$ be any zero of $p(r)$. Then the solution with initial conditions $x_i = hr^i$ for $0 \le i \le k-1$ is

$$x_i = hr^i \qquad \text{for } 0 \le i \le \frac{b-a}{h}.$$

Suppose $h = \frac{b-a}{m}$ for some $m \in \mathbb{Z}$. If the LMM is convergent, then $x_m(h) \to x(b) = 0$ as $m \to \infty$. But

$$x_m(h) = hr^m = \frac{b-a}{m}\, r^m.$$

So

$$x_m(h) - x(b) = \frac{b-a}{m}\, r^m \to 0 \ \text{ as } m \to \infty \qquad \text{iff} \qquad |r| \le 1.$$

(2) Similarly, if $r$ is a multiple zero of $p(r)$, taking $x_i(h) = h\,i\,r^i$ for $0 \le i \le k-1$ gives

$$x_i(h) = h\,i\,r^i, \qquad 0 \le i \le \frac{b-a}{h}.$$

So if $h = \frac{b-a}{m}$, then

$$x_m(h) = \frac{b-a}{m}\, m\, r^m = (b-a)\, r^m,$$

so $x_m(h) \to 0$ as $h \to 0$ iff $|r| < 1$.

Definition. A LMM is called zero-stable if it satisfies the Dahlquist root condition.

Recall from our discussion of linear difference equations that zero-stability is equivalent to requiring that all solutions of (lh) $\sum_{j=0}^{k} \alpha_j x_{i+j} = 0$ for $i \ge 0$ stay bounded as $i \to \infty$.

Remark. A consistent one-step LMM (i.e., $k = 1$) is always zero-stable. Indeed, consistency implies that $C_0 = C_1 = 0$, which in turn implies that $p(1) = \alpha_0 + \alpha_1 = C_0 = 0$, and so $r = 1$ is the zero of $p(r)$. Therefore $\alpha_1 = 1$, $\alpha_0 = -1$, so the characteristic polynomial is $p(r) = r - 1$, and the LMM is zero-stable.

Exercise: Show that if an LMM is convergent, then it is consistent.

Key Theorem. [LMM Convergence] A LMM is convergent if and only if it is zero-stable and consistent. Moreover, for zero-stable methods, we get an error estimate of the form

$$\max_{a \le t_i(h) \le b} \big|x(t_i(h)) - x_i(h)\big| \ \le\ K_1 \underbrace{\max_{0 \le i \le k-1} \big|x(t_i(h)) - x_i(h)\big|}_{\text{initial error}} \ +\ K_2 \underbrace{\max_i \frac{\big|l(h, t_i(h))\big|}{h}}_{\text{growth of error controlled by zero-stability}}.$$
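The necessity of zero-stability is easy to see in practice. The illustrative sketch below uses the 2-step method $x_{i+2} - 3x_{i+1} + 2x_i = -h f_i$ (an assumed example, not one from the notes: it satisfies $C_0 = C_1 = 0$, so it is consistent, but $p(r) = (r-1)(r-2)$ has the root $2$); even with essentially exact starting values the error explodes like $2^{(b-a)/h}$ as $h$ decreases.

```python
# A consistent but non-zero-stable 2-step LMM diverges on x' = -x.
import numpy as np

def bad_lmm(f, a, b, x0, x1, h):
    N = int(round((b - a) / h))
    t = a + h * np.arange(N + 1)
    x = np.empty(N + 1)
    x[0], x[1] = x0, x1                    # exact starting values supplied
    for i in range(N - 1):
        x[i + 2] = 3 * x[i + 1] - 2 * x[i] - h * f(t[i], x[i])
    return t, x

for h in (0.1, 0.05, 0.025):
    t, x = bad_lmm(lambda t, x: -x, 0.0, 1.0, 1.0, np.exp(-h), h)
    print(h, np.max(np.abs(x - np.exp(-t))))   # error GROWS as h shrinks
```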

Remark. If $x \in C^{p+1}$ and the LMM is accurate of order $p$, then $|\text{LTE}|/h = O(h^p)$. To get $p$-th order convergence (i.e., LHS $= O(h^p)$), we need $x_i(h) = x(t_i(h)) + O(h^p)$ for $i = 0, \ldots, k-1$. This can be done using $k-1$ steps of a RK method of order $p-1$.

Lemma. Consider (li) $\sum_{j=0}^{k} \alpha_j x_{i+j} = b_i$ for $i \ge 0$ (where $\alpha_k = 1$), with the initial values $x_0, \ldots, x_{k-1}$ given, and suppose that the characteristic polynomial $p(r) = \sum_{j=0}^{k} \alpha_j r^j$ satisfies the Dahlquist root condition. Then there is an $M > 0$ such that for $i \ge 0$,

$$|x_{i+k}| \le M\Big(\max\{|x_0|, \ldots, |x_{k-1}|\} + \sum_{\nu=0}^{i} |b_\nu|\Big).$$

Remark. Recall that the Dahlquist root condition implies that there is an $M > 0$ for which $\|A^i\| \le M$ for all $i \ge 0$, where

$$A = \begin{bmatrix} 0 & 1 & & \\ & \ddots & \ddots & \\ & & 0 & 1 \\ -\alpha_0 & \cdots & & -\alpha_{k-1} \end{bmatrix}$$

is the companion matrix for $p(r)$, and $\|\cdot\|$ is the operator norm induced by the vector norm $\|\cdot\|_\infty$. The $M$ in the Lemma can be taken to be the same as this $M$ bounding $\|A^i\|$.

Proof. Let $\vec{x}_i = [x_i, x_{i+1}, \ldots, x_{i+k-1}]^T$ and $\vec{b}_i = [0, \ldots, 0, b_i]^T$. Then $\vec{x}_{i+1} = A\vec{x}_i + \vec{b}_i$, so by induction

$$\vec{x}_{i+1} = A^{i+1}\vec{x}_0 + \sum_{\nu=0}^{i} A^{i-\nu}\,\vec{b}_\nu.$$

Thus

$$|x_{i+k}| \le \|\vec{x}_{i+1}\|_\infty \le \|A^{i+1}\|\,\|\vec{x}_0\|_\infty + \sum_{\nu=0}^{i} \|A^{i-\nu}\|\,\|\vec{b}_\nu\|_\infty \le M\Big(\|\vec{x}_0\|_\infty + \sum_{\nu=0}^{i} |b_\nu|\Big).$$

Proof of the LMM Convergence Theorem. The fact that convergence implies zero-stability and consistency has already been discussed. Suppose a LMM is zero-stable and consistent. Let $x(t)$ be the true solution of the IVP $x' = f(t,x)$, $x(a) = x_a$ on $[a,b]$, let $L$ be the Lipschitz constant for $f$, and set $\beta = \sum_{j=0}^{k} |\beta_j|$. Hold $h$ fixed, and set

$$e_i(h) = x(t_i(h)) - x_i(h), \qquad E = \max\{|e_0|, \ldots, |e_{k-1}|\},$$
$$l_i(h) = l(h, t_i(h)), \qquad \lambda(h) = \max_{i \in I} |l_i(h)|,$$

where $I = \{i \ge 0 : i + k \le \frac{b-a}{h}\}$.

Step 1. The first step is to derive a difference inequality for $e_i$. This difference inequality is a discrete form of the integral inequality leading to Gronwall's inequality. For $i \in I$, we have

$$\sum_{j=0}^{k} \alpha_j\, x(t_{i+j}) = h \sum_{j=0}^{k} \beta_j\, f(t_{i+j}, x(t_{i+j})) + l_i$$

and

$$\sum_{j=0}^{k} \alpha_j\, x_{i+j} = h \sum_{j=0}^{k} \beta_j\, f_{i+j}.$$

Subtraction gives

$$\sum_{j=0}^{k} \alpha_j\, e_{i+j} = b_i,$$

where

$$b_i = h \sum_{j=0}^{k} \beta_j \big(f(t_{i+j}, x(t_{i+j})) - f(t_{i+j}, x_{i+j})\big) + l_i.$$

Then

$$|b_i| \le h \sum_{j=0}^{k} |\beta_j|\, L\, |e_{i+j}| + |l_i|.$$

So, by the preceding Lemma with $x_{i+k}$ replaced by $e_{i+k}$, we obtain for $i \in I$

$$|e_{i+k}| \le M\Big[E + \sum_{\nu=0}^{i} |b_\nu|\Big] \le M\Big[E + hL \sum_{\nu=0}^{i} \sum_{j=0}^{k} |\beta_j|\,|e_{\nu+j}| + \sum_{\nu=0}^{i} |l_\nu|\Big] \le M\Big[E + hL|\beta_k|\,|e_{i+k}| + hL\beta \sum_{\nu=0}^{i+k-1} |e_\nu| + \sum_{\nu=0}^{i} |l_\nu|\Big].$$

From here on, assume $h$ is small enough that $MhL|\beta_k| \le \frac{1}{2}$. (Since $\{h \le b-a : MhL|\beta_k| \ge \frac{1}{2}\}$ is a compact subset of $(0, b-a]$, the estimate in the Key Theorem is clearly true for those values of $h$.) Moving $MhL|\beta_k|\,|e_{i+k}|$ to the LHS gives

$$|e_{i+k}| \le hM_1 \sum_{\nu=0}^{i+k-1} |e_\nu| + M_2 E + M_3 \lambda/h \qquad \text{for } i \in I,$$

where $M_1 = 2ML\beta$, $M_2 = 2M$, and $M_3 = 2M(b-a)$. (Note: For explicit methods, $\beta_k = 0$, so the restriction $MhL|\beta_k| \le \frac{1}{2}$ is unnecessary, and the factors of 2 in $M_1$, $M_2$, $M_3$ can be dropped.)

Step 2. We now use a discrete comparison argument to bound $e_i$. Let $y_i$ be the solution of

$$(\ast)\qquad y_{i+k} = hM_1 \sum_{\nu=0}^{i+k-1} y_\nu + \big(M_2 E + M_3 \lambda/h\big) \qquad \text{for } i \in I,$$

with initial values $y_j = |e_j|$ for $0 \le j \le k-1$. Then clearly by induction $|e_{i+k}| \le y_{i+k}$ for $i \in I$. Now

$$y_k \le hM_1 k E + \big(M_2 E + M_3 \lambda/h\big) \le M_4 E + M_3 \lambda/h,$$

where $M_4 = (b-a)M_1 k + M_2$. Subtracting $(\ast)$ for $i$ from $(\ast)$ for $i+1$ gives

$$y_{i+k+1} - y_{i+k} = hM_1\, y_{i+k},$$

and so $y_{i+k+1} = (1 + hM_1)\,y_{i+k}$. Therefore, by induction we obtain for $i \in I$:

$$y_{i+k} = (1 + hM_1)^i\, y_k \le (1 + hM_1)^{(b-a)/h}\, y_k \le e^{M_1(b-a)}\, y_k \le K_1 E + K_2 \lambda/h,$$

where $K_1 = e^{M_1(b-a)} M_4$ and $K_2 = e^{M_1(b-a)} M_3$. Thus, for $i \in I$, $|e_{i+k}| \le K_1 E + K_2 \lambda/h$; since $K_1 \ge M_4 \ge M_2 \ge M \ge 1$, also $|e_j| \le E \le K_1 E + K_2 \lambda/h$ for $0 \le j \le k-1$. Since consistency implies $\lambda = o(h)$, we are done.

Remarks. (1) Note that $K_1$ and $K_2$ depend only on $a$, $b$, $L$, $k$, the $\alpha_j$'s and $\beta_j$'s, and $M$. (2) The estimate can be refined; we did not try to get the best constants $K_1$, $K_2$. For example, $e^{M_1(b-a)}$ could be replaced by $e^{M_1(t_i-a)}$ in both $K_1$ and $K_2$, yielding more precise estimates depending on $i$, similar to the estimate for one-step methods.