Lecture 4: Numerical solution of ordinary differential equations


Department of Mathematics, ETH Zürich

General explicit one-step method: Consistency; Stability; Convergence. High-order methods: Taylor methods; Integral equation method; Runge-Kutta methods. Multi-step methods.

Stiff equations and systems. Perturbation theories for differential equations: Regular perturbation theory; Singular perturbation theory.

Consistency, stability and convergence. Consider
$$\frac{dx}{dt} = f(t, x), \quad t \in [0, T], \qquad x(0) = x_0, \quad x_0 \in \mathbb{R},$$
where $f \in C^0([0, T] \times \mathbb{R})$ satisfies a Lipschitz condition. Start at the initial time $t = 0$; introduce successive discretization points $t_0 = 0 < t_1 < t_2 < \ldots$, continuing until we reach the final time $T$. Uniform step size: $\Delta t := t_{k+1} - t_k > 0$, independent of $k$ and assumed to be relatively small, with $t_k = k\,\Delta t$. Suppose that $K = T/\Delta t$ is an integer.

General explicit one-step method:
$$x_{k+1} = x_k + \Delta t\,\Phi(t_k, x_k, \Delta t),$$
for some continuous function $\Phi(t, x, h)$. Taking in succession $k = 0, 1, \ldots, K-1$, one step at a time, the approximate values $x_k$ of $x$ at $t_k$ are obtained. Explicit scheme: $x_{k+1}$ is obtained from $x_k$; $x_{k+1}$ appears only on the left-hand side.

Truncation error of the numerical scheme:
$$T_k(\Delta t) = \frac{x(t_{k+1}) - x(t_k)}{\Delta t} - \Phi(t_k, x(t_k), \Delta t).$$
As $\Delta t \to 0$, $k \to +\infty$, $k\,\Delta t = t$,
$$T_k(\Delta t) \to \frac{dx}{dt} - \Phi(t, x, 0).$$
DEFINITION: Consistency. The numerical scheme is consistent with the ODE if $\Phi(t, x, 0) = f(t, x)$ for all $t \in [0, T]$ and $x \in \mathbb{R}$.

DEFINITION: Stability. The numerical scheme is stable if $\Phi$ is Lipschitz continuous in $x$, i.e., there exist positive constants $C_\Phi$ and $h_0$ s.t.
$$|\Phi(t, x, h) - \Phi(t, y, h)| \le C_\Phi |x - y|, \quad t \in [0, T],\ h \in [0, h_0],\ x, y \in \mathbb{R}.$$
Global error of the numerical scheme: $e_k = x_k - x(t_k)$.
DEFINITION: Convergence. The numerical scheme is convergent if $e_k \to 0$ as $\Delta t \to 0$, $k \to +\infty$, $k\,\Delta t = t \in [0, T]$.

THEOREM: Dahlquist-Lax equivalence theorem Numerical scheme: convergent iff consistent and stable.

PROOF:
$$x(t_{k+1}) - x(t_k) = \int_{t_k}^{t_{k+1}} f(s, x(s))\,ds;$$
$$x(t_{k+1}) - x(t_k) = \Delta t\, f(t_k, x(t_k)) + \int_{t_k}^{t_{k+1}} \big[f(s, x(s)) - f(t_k, x(t_k))\big]\,ds.$$
$$\big|x(t_{k+1}) - x(t_k) - \Delta t\, f(t_k, x(t_k))\big| = \Big|\int_{t_k}^{t_{k+1}} \big[f(s, x(s)) - f(t_k, x(t_k))\big]\,ds\Big| \le \Delta t\,\omega_1(\Delta t).$$

Here
$$\omega_1(\Delta t) := \sup\big\{|f(t, x(t)) - f(s, x(s))| : 0 \le s, t \le T,\ |t - s| \le \Delta t\big\},$$
and $\omega_1(\Delta t) \to 0$ as $\Delta t \to 0$. If $f$ is Lipschitz in $t$, then $\omega_1(\Delta t) = O(\Delta t)$.

From $e_{k+1} - e_k = x_{k+1} - x_k - (x(t_{k+1}) - x(t_k))$,
$$e_{k+1} - e_k = \Delta t\,\Phi(t_k, x_k, \Delta t) - (x(t_{k+1}) - x(t_k)).$$
Or equivalently,
$$e_{k+1} - e_k = \Delta t\big[\Phi(t_k, x_k, \Delta t) - f(t_k, x(t_k))\big] - \big[x(t_{k+1}) - x(t_k) - \Delta t\, f(t_k, x(t_k))\big].$$
Write
$$e_{k+1} - e_k = \Delta t\big[\Phi(t_k, x_k, \Delta t) - \Phi(t_k, x(t_k), \Delta t) + \Phi(t_k, x(t_k), \Delta t) - f(t_k, x(t_k))\big] - \big[x(t_{k+1}) - x(t_k) - \Delta t\, f(t_k, x(t_k))\big].$$

Let
$$\omega_2(\Delta t) := \sup\big\{|\Phi(t, x, h) - f(t, x)| : t \in [0, T],\ x \in \mathbb{R},\ 0 < h \le \Delta t\big\}.$$
Consistency: $|\Phi(t_k, x(t_k), \Delta t) - f(t_k, x(t_k))| \le \omega_2(\Delta t) \to 0$ as $\Delta t \to 0$.
Stability condition: $|\Phi(t_k, x_k, \Delta t) - \Phi(t_k, x(t_k), \Delta t)| \le C_\Phi |e_k|$.

$$|e_{k+1}| \le (1 + C_\Phi\,\Delta t)\,|e_k| + \Delta t\,\omega_3(\Delta t), \quad 0 \le k \le K - 1;$$
$K = T/\Delta t$ and $\omega_3(\Delta t) := \omega_1(\Delta t) + \omega_2(\Delta t) \to 0$ as $\Delta t \to 0$.

By induction,
$$|e_{k+1}| \le (1 + C_\Phi\,\Delta t)^k |e_0| + \Delta t\,\omega_3(\Delta t) \sum_{l=0}^{k-1} (1 + C_\Phi\,\Delta t)^l, \quad 0 \le k \le K,$$
and
$$\sum_{l=0}^{k-1} (1 + C_\Phi\,\Delta t)^l = \frac{(1 + C_\Phi\,\Delta t)^k - 1}{C_\Phi\,\Delta t}, \qquad (1 + C_\Phi\,\Delta t)^K \le \Big(1 + \frac{C_\Phi T}{K}\Big)^{K} \le e^{C_\Phi T}.$$
$$|e_k| \le e^{C_\Phi T} |e_0| + \frac{e^{C_\Phi T} - 1}{C_\Phi}\,\omega_3(\Delta t).$$
If $e_0 = 0$, then as $\Delta t \to 0$, $k \to +\infty$ s.t. $k\,\Delta t = t \in [0, T]$, $\lim_{k \to +\infty} e_k = 0$.

DEFINITION: An explicit one-step method is of order $p$ if there exist positive constants $h_0$ and $C$ s.t.
$$|T_k(\Delta t)| \le C (\Delta t)^p, \quad 0 < \Delta t \le h_0,\ k = 0, \ldots, K - 1;$$
$T_k(\Delta t)$: truncation error.

If the explicit one-step method is stable, the global error is bounded by the truncation error.
PROPOSITION: Consider the explicit one-step scheme with $\Phi$ satisfying the stability condition. Suppose that $e_0 = 0$. Then
$$|e_{k+1}| \le \frac{e^{C_\Phi T} - 1}{C_\Phi} \max_{0 \le l \le k} |T_l(\Delta t)| \quad \text{for } k = 0, \ldots, K - 1;$$
$T_l$: truncation error; $e_k$: global error.

PROOF:
$$e_{k+1} - e_k = -\Delta t\,T_k(\Delta t) + \Delta t\big[\Phi(t_k, x_k, \Delta t) - \Phi(t_k, x(t_k), \Delta t)\big].$$
$$|e_{k+1}| \le (1 + C_\Phi\,\Delta t)\,|e_k| + \Delta t\,|T_k(\Delta t)| \le (1 + C_\Phi\,\Delta t)\,|e_k| + \Delta t \max_{0 \le l \le k} |T_l(\Delta t)|.$$

Explicit Euler's method: $\Phi(t, x, h) = f(t, x)$. Explicit Euler scheme:
$$x_{k+1} = x_k + \Delta t\, f(t_k, x_k).$$

THEOREM: Suppose that $f$ satisfies the Lipschitz condition in $x$ and is Lipschitz with respect to $t$. Then the explicit Euler scheme is convergent and the global error $e_k$ is of order $\Delta t$. If $f \in C^1$, then the scheme is of order one.

PROOF: $f$ satisfies the Lipschitz condition $\Rightarrow$ the numerical scheme with $\Phi(t, x, h) = f(t, x)$ is stable. $\Phi(t, x, 0) = f(t, x)$ for all $t \in [0, T]$ and $x \in \mathbb{R}$ $\Rightarrow$ the numerical scheme is consistent $\Rightarrow$ convergence. $f$ Lipschitz in $t$ $\Rightarrow$ $\omega_1(\Delta t) = O(\Delta t)$. $\omega_2(\Delta t) = 0$ $\Rightarrow$ $\omega_3(\Delta t) = O(\Delta t)$ $\Rightarrow$ $e_k = O(\Delta t)$ for $1 \le k \le K$.

$f \in C^1 \Rightarrow x \in C^2$. Mean-value theorem:
$$T_k(\Delta t) = \frac{1}{\Delta t}\big(x(t_{k+1}) - x(t_k)\big) - f(t_k, x(t_k)) = \frac{1}{\Delta t}\Big(x(t_k) + \Delta t\,\frac{dx}{dt}(t_k) + \frac{(\Delta t)^2}{2}\frac{d^2x}{dt^2}(\tau) - x(t_k)\Big) - f(t_k, x(t_k)) = \frac{\Delta t}{2}\frac{d^2x}{dt^2}(\tau),$$
for some $\tau \in [t_k, t_{k+1}]$. Scheme: first order.
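The first-order convergence of the explicit Euler scheme can be checked numerically. A minimal sketch, using a hypothetical test problem $dx/dt = -x$, $x(0) = 1$ (not from the lecture), whose exact solution $e^{-t}$ is known; halving $\Delta t$ should roughly halve the global error.

```python
import math

def euler(f, x0, T, K):
    """Explicit Euler: x_{k+1} = x_k + dt * f(t_k, x_k), K steps on [0, T]."""
    dt = T / K
    x, t = x0, 0.0
    for _ in range(K):
        x = x + dt * f(t, x)
        t += dt
    return x

# Hypothetical test problem dx/dt = -x, x(0) = 1; exact solution exp(-t).
f = lambda t, x: -x
errs = [abs(euler(f, 1.0, 1.0, K) - math.exp(-1.0)) for K in (100, 200)]
ratio = errs[0] / errs[1]  # ~2 when dt is halved: first order
```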

High-order methods: In general, the order of a numerical solution method governs both the accuracy of its approximations and the speed of convergence to the true solution as the step size t 0. Explicit Euler method: only a first order scheme; Devise simple numerical methods that enjoy a higher order of accuracy. The higher the order, the more accurate the numerical scheme, and hence the larger the step size that can be used to produce the solution to a desired accuracy. However, this should be balanced with the fact that higher order methods inevitably require more computational effort at each step.

High-order methods: Taylor methods; Integral equation method; Runge-Kutta methods.

Taylor methods. Explicit Euler scheme: based on a first-order Taylor approximation to the solution. Taylor expansion of the solution $x(t)$ at the discretization point $t_{k+1}$:
$$x(t_{k+1}) = x(t_k) + \Delta t\,\frac{dx}{dt}(t_k) + \frac{(\Delta t)^2}{2}\frac{d^2x}{dt^2}(t_k) + \frac{(\Delta t)^3}{6}\frac{d^3x}{dt^3}(t_k) + \ldots$$
Evaluate the first derivative term by using the differential equation $\frac{dx}{dt} = f(t, x)$.

The second derivative can be found by differentiating the equation with respect to $t$:
$$\frac{d^2x}{dt^2} = \frac{d}{dt} f(t, x) = \frac{\partial f}{\partial t}(t, x) + \frac{\partial f}{\partial x}(t, x)\,\frac{dx}{dt}.$$
Second-order Taylor method:
$$x_{k+1} = x_k + \Delta t\, f(t_k, x_k) + \frac{(\Delta t)^2}{2}\Big(\frac{\partial f}{\partial t}(t_k, x_k) + \frac{\partial f}{\partial x}(t_k, x_k)\, f(t_k, x_k)\Big).$$

PROPOSITION: Suppose that $f \in C^2$. Then the second-order Taylor method is of second order.

PROOF: $f \in C^2 \Rightarrow x \in C^3$. The truncation error $T_k$ is given by
$$T_k(\Delta t) = \frac{(\Delta t)^2}{6}\frac{d^3x}{dt^3}(\tau),$$
for some $\tau \in [t_k, t_{k+1}]$, and so the method is of second order.
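A minimal sketch of the second-order Taylor method, on the hypothetical test problem $dx/dt = t + x$, $x(0) = 1$ (chosen so that $\partial f/\partial t = \partial f/\partial x = 1$ are trivial to supply); the exact solution is $x(t) = 2e^t - t - 1$, and halving $\Delta t$ should divide the error by about four.

```python
import math

def taylor2(x0, T, K):
    """Second-order Taylor method for dx/dt = f(t, x) = t + x,
    using the exact partials f_t = 1 and f_x = 1 (hypothetical test problem)."""
    dt = T / K
    x, t = x0, 0.0
    for _ in range(K):
        fv = t + x
        x = x + dt * fv + 0.5 * dt**2 * (1.0 + 1.0 * fv)  # f_t + f_x * f
        t += dt
    return x

exact = 2.0 * math.exp(1.0) - 2.0  # x(t) = 2e^t - t - 1 at t = 1
errs = [abs(taylor2(1.0, 1.0, K) - exact) for K in (50, 100)]
ratio = errs[0] / errs[1]  # ~4 when dt is halved: second order
```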

Drawbacks of higher-order Taylor methods: (i) owing to their dependence upon the partial derivatives of $f$, $f$ needs to be smooth; (ii) efficient evaluation of the terms in the Taylor approximation and avoidance of round-off errors.

Integral equation method. Avoid the complications inherent in a direct Taylor expansion: $x(t)$ coincides with the solution to the integral equation
$$x(t) = x_0 + \int_0^t f(s, x(s))\,ds, \quad t \in [0, T].$$
Starting at the discretization point $t_k$ instead of $0$, and integrating until time $t = t_{k+1}$, gives
$$x(t_{k+1}) = x(t_k) + \int_{t_k}^{t_{k+1}} f(s, x(s))\,ds,$$
which implicitly computes the value of the solution at the subsequent discretization point.

Compare this formula with the explicit Euler method $x_{k+1} = x_k + \Delta t\, f(t_k, x_k)$: approximation of the integral by
$$\int_{t_k}^{t_{k+1}} f(s, x(s))\,ds \approx \Delta t\, f(t_k, x(t_k)),$$
the left endpoint rule for numerical integration.

The left endpoint rule is not an especially accurate method of numerical integration. Better methods include the trapezoidal rule:

Numerical integration formulas for continuous functions:
(i) Trapezoidal rule:
$$\int_{t_k}^{t_{k+1}} g(s)\,ds \approx \frac{\Delta t}{2}\big(g(t_k) + g(t_{k+1})\big);$$
(ii) Simpson's rule:
$$\int_{t_k}^{t_{k+1}} g(s)\,ds \approx \frac{\Delta t}{6}\Big(g(t_k) + 4\,g\big(\tfrac{t_k + t_{k+1}}{2}\big) + g(t_{k+1})\Big);$$
(iii) the trapezoidal rule is exact for polynomials of order one; Simpson's rule is exact for polynomials of second order.
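The exactness claims can be checked on a single step; a sketch over $[0, 1]$ (so $\Delta t = 1$), using $g(t) = t^2$: the trapezoidal rule misses its integral by $1/6$, while Simpson's rule reproduces it exactly.

```python
def trap(g):
    """Trapezoidal rule on [0, 1]."""
    return 0.5 * (g(0.0) + g(1.0))

def simpson(g):
    """Simpson's rule on [0, 1]."""
    return (g(0.0) + 4.0 * g(0.5) + g(1.0)) / 6.0

exact = 1.0 / 3.0                                 # integral of t^2 over [0, 1]
trap_err = abs(trap(lambda t: t * t) - exact)     # = 1/6: degree 2 is not exact
simp_err = abs(simpson(lambda t: t * t) - exact)  # = 0: exact for degree 2
```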

Use the more accurate trapezoidal approximation
$$\int_{t_k}^{t_{k+1}} f(s, x(s))\,ds \approx \frac{\Delta t}{2}\big[f(t_k, x(t_k)) + f(t_{k+1}, x(t_{k+1}))\big].$$
Trapezoidal scheme:
$$x_{k+1} = x_k + \frac{\Delta t}{2}\big[f(t_k, x_k) + f(t_{k+1}, x_{k+1})\big].$$
Trapezoidal scheme: implicit numerical method.

PROPOSITION: Suppose that $f \in C^2$ and
$$\frac{\Delta t\, C_f}{2} < 1,$$
$C_f$ being the Lipschitz constant for $f$ in $x$. Then the trapezoidal scheme is convergent and of second order.

PROOF: Consistency:
$$\Phi(t, x, \Delta t) := \frac{1}{2}\big[f(t, x) + f\big(t + \Delta t,\ x + \Delta t\,\Phi(t, x, \Delta t)\big)\big];$$
at $\Delta t = 0$, $\Phi(t, x, 0) = f(t, x)$.

Stability:
$$|\Phi(t, x, \Delta t) - \Phi(t, y, \Delta t)| \le C_f |x - y| + \frac{\Delta t}{2}\, C_f\, |\Phi(t, x, \Delta t) - \Phi(t, y, \Delta t)|,$$
$$\Big(1 - \frac{\Delta t\, C_f}{2}\Big)\, |\Phi(t, x, \Delta t) - \Phi(t, y, \Delta t)| \le C_f |x - y|.$$
Stability holds with
$$C_\Phi = \frac{C_f}{1 - \frac{\Delta t\, C_f}{2}},$$
provided that $\frac{\Delta t\, C_f}{2} < 1$.

Second-order scheme: by the mean-value theorem,
$$T_k(\Delta t) = \frac{x(t_{k+1}) - x(t_k)}{\Delta t} - \frac{1}{2}\big[f(t_k, x(t_k)) + f(t_{k+1}, x(t_{k+1}))\big] = -\frac{1}{12}(\Delta t)^2\,\frac{d^3x}{dt^3}(\tau),$$
for some $\tau \in [t_k, t_{k+1}]$ $\Rightarrow$ second-order scheme, provided that $f \in C^2$ (and consequently $x \in C^3$).

An alternative scheme: replace $x_{k+1}$ by $x_k + \Delta t\, f(t_k, x_k)$. Improved Euler scheme:
$$x_{k+1} = x_k + \frac{\Delta t}{2}\big[f(t_k, x_k) + f\big(t_{k+1},\ x_k + \Delta t\, f(t_k, x_k)\big)\big].$$
PROPOSITION: The improved Euler scheme is convergent and of second order. The improved Euler scheme performs comparably to the trapezoidal scheme, and significantly better than the Euler scheme. Alternative numerical approximations to the integral equation $\Rightarrow$ a range of numerical solution schemes.
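A sketch of the improved Euler scheme on a hypothetical nonautonomous test problem $dx/dt = \cos t - x$, $x(0) = 0$ (not from the lecture), with exact solution $x(t) = (\sin t + \cos t - e^{-t})/2$; halving $\Delta t$ should divide the error by about four.

```python
import math

def improved_euler(f, x0, T, K):
    """Improved Euler: average the slope at t_k and at the Euler-predicted point."""
    dt = T / K
    x, t = x0, 0.0
    for _ in range(K):
        k1 = f(t, x)
        k2 = f(t + dt, x + dt * k1)  # slope at the predicted endpoint
        x = x + dt / 2 * (k1 + k2)
        t += dt
    return x

# Hypothetical test problem dx/dt = cos(t) - x, x(0) = 0,
# with exact solution x(t) = (sin t + cos t - e^{-t}) / 2.
f = lambda t, x: math.cos(t) - x
exact = (math.sin(1.0) + math.cos(1.0) - math.exp(-1.0)) / 2.0
errs = [abs(improved_euler(f, 0.0, 1.0, K) - exact) for K in (50, 100)]
ratio = errs[0] / errs[1]  # ~4: second order
```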

Midpoint rule:
$$\int_{t_k}^{t_{k+1}} f(s, x(s))\,ds \approx \Delta t\, f\Big(t_k + \frac{\Delta t}{2},\ x\big(t_k + \tfrac{\Delta t}{2}\big)\Big).$$
Midpoint rule: same order of accuracy as the trapezoidal rule. Midpoint scheme: approximate $x(t_k + \frac{\Delta t}{2})$ by $x_k + \frac{\Delta t}{2} f(t_k, x_k)$:
$$x_{k+1} = x_k + \Delta t\, f\Big(t_k + \frac{\Delta t}{2},\ x_k + \frac{\Delta t}{2} f(t_k, x_k)\Big).$$
Midpoint scheme: of second order.

Example of linear systems. Consider the linear system of ODEs
$$\frac{dx}{dt} = A\, x(t), \quad t \in [0, +\infty[, \qquad x(0) = x_0 \in \mathbb{R}^d,$$
where $A \in M_d(\mathbb{C})$ is independent of $t$.
DEFINITION: A one-step numerical scheme for solving the linear system of ODEs is stable if there exists a positive constant $C_0$ s.t. $\|x_k\| \le C_0 \|x_0\|$ for all $k \in \mathbb{N}$.

Consider the following schemes:
(i) Explicit Euler scheme: $x_{k+1} = x_k + \Delta t\, A x_k$;
(ii) Implicit Euler scheme: $x_{k+1} = x_k + \Delta t\, A x_{k+1}$;
(iii) Trapezoidal scheme: $x_{k+1} = x_k + \frac{\Delta t}{2}\,[A x_k + A x_{k+1}]$,
with $k \in \mathbb{N}$ and $x_0$ the given initial datum.

PROPOSITION: Suppose that $\Re\lambda_j < 0$ for all eigenvalues $\lambda_j$ of $A$. The following results hold:
(i) the explicit Euler scheme is stable for $\Delta t$ small enough;
(ii) the implicit Euler scheme is unconditionally stable;
(iii) the trapezoidal scheme is unconditionally stable.

PROOF: Consider the explicit Euler scheme. By a change of basis, $A = C^{-1}(D + N)C$ with $D$ diagonal and $N$ nilpotent; writing $\tilde{x}_k = C x_k$,
$$\tilde{x}_k = (I + \Delta t\,(D + N))^k\, \tilde{x}_0.$$
If $\tilde{x}_0 \in E_j$ (the generalized eigenspace of $\lambda_j$), then
$$\tilde{x}_k = \sum_{l=0}^{\min\{k, d\}} C_k^l\,(1 + \Delta t\,\lambda_j)^{k-l}\,(\Delta t)^l N^l\, \tilde{x}_0,$$
$C_k^l$: binomial coefficient.

If $|1 + \Delta t\,\lambda_j| < 1$, then $\tilde{x}_k$ is bounded. If $|1 + \Delta t\,\lambda_j| > 1$, then one can find $\tilde{x}_0$ s.t. $\|\tilde{x}_k\| \to +\infty$ (exponentially) as $k \to +\infty$. If $|1 + \Delta t\,\lambda_j| = 1$ and $N \ne 0$, then for all $\tilde{x}_0$ s.t. $N\tilde{x}_0 \ne 0$, $N^2\tilde{x}_0 = 0$,
$$\tilde{x}_k = (1 + \Delta t\,\lambda_j)^k\, \tilde{x}_0 + (1 + \Delta t\,\lambda_j)^{k-1}\, k\,\Delta t\, N \tilde{x}_0$$
goes to infinity as $k \to +\infty$. The stability condition $|1 + \Delta t\,\lambda_j| < 1$ holds for $\Delta t$ small enough:
$$\Delta t < \frac{2\,|\Re\lambda_j|}{|\lambda_j|^2}.$$

Implicit Euler scheme: $\tilde{x}_k = (I - \Delta t\,(D + N))^{-k}\, \tilde{x}_0$. All the eigenvalues of the matrix $(I - \Delta t\,(D + N))^{-1}$ are of modulus strictly smaller than $1$ $\Rightarrow$ the implicit Euler scheme is unconditionally stable.
Trapezoidal scheme:
$$\tilde{x}_k = \Big(I - \frac{\Delta t}{2}(D + N)\Big)^{-k} \Big(I + \frac{\Delta t}{2}(D + N)\Big)^{k}\, \tilde{x}_0.$$
Stability condition:
$$\Big|1 + \frac{\Delta t}{2}\lambda_j\Big| < \Big|1 - \frac{\Delta t}{2}\lambda_j\Big|,$$
which holds for all $\Delta t > 0$ since $\Re\lambda_j < 0$.

REMARK: Explicit and implicit Euler schemes: of order one; Trapezoidal scheme: of order two.
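The proposition can be illustrated numerically. A sketch on a hypothetical diagonal system $A = \mathrm{diag}(-1, -100)$ with $\Delta t = 0.05$ (values chosen for illustration): then $|1 + \Delta t\,\lambda_2| = 4 > 1$, so explicit Euler amplifies the second mode, while implicit Euler damps every mode for any step size.

```python
# Hypothetical stiff diagonal system dx/dt = A x, A = diag(-1, -100).
lams = (-1.0, -100.0)
dt, steps = 0.05, 100
x_exp = [1.0, 1.0]  # explicit Euler iterates
x_imp = [1.0, 1.0]  # implicit Euler iterates
for _ in range(steps):
    # explicit Euler: x_{k+1} = (1 + dt*lam) x_k
    x_exp = [x + dt * lam * x for x, lam in zip(x_exp, lams)]
    # implicit Euler: x_{k+1} = x_k / (1 - dt*lam)
    x_imp = [x / (1.0 - dt * lam) for x, lam in zip(x_imp, lams)]
# |1 + dt*(-100)| = 4 > 1: the explicit iterates of the stiff mode blow up;
# |1/(1 - dt*lam)| < 1 for every mode: the implicit iterates stay bounded.
```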

Runge-Kutta methods: By far the most popular and powerful general-purpose numerical methods for integrating ODEs. Idea behind: evaluate f at carefully chosen values of its arguments, t and x, in order to create an accurate approximation (as accurate as a higher-order Taylor expansion) of x(t + t) without evaluating derivatives of f.

Runge-Kutta schemes: derived by matching multivariable Taylor series expansions of f (t, x) with the Taylor series expansion of x(t + t). To find the right values of t and x at which to evaluate f : Take a Taylor expansion of f evaluated at these (unknown) values; Match the resulting numerical scheme to a Taylor series expansion of x(t + t) around t.

Generalization of Taylor's theorem to functions of two variables.
THEOREM: Let $f(t, x) \in C^{n+1}([0, T] \times \mathbb{R})$ and let $(t_0, x_0) \in [0, T] \times \mathbb{R}$. There exist $\tau$ between $t_0$ and $t$, and $\xi$ between $x_0$ and $x$, s.t.
$$f(t, x) = P_n(t, x) + R_n(t, x),$$
$P_n(t, x)$: $n$th Taylor polynomial of $f$ around $(t_0, x_0)$; $R_n(t, x)$: remainder term associated with $P_n(t, x)$.

$$P_n(t, x) = f(t_0, x_0) + \Big[(t - t_0)\frac{\partial f}{\partial t}(t_0, x_0) + (x - x_0)\frac{\partial f}{\partial x}(t_0, x_0)\Big] + \Big[\frac{(t - t_0)^2}{2}\frac{\partial^2 f}{\partial t^2}(t_0, x_0) + (t - t_0)(x - x_0)\frac{\partial^2 f}{\partial t\,\partial x}(t_0, x_0) + \frac{(x - x_0)^2}{2}\frac{\partial^2 f}{\partial x^2}(t_0, x_0)\Big] + \ldots + \frac{1}{n!}\Big[\sum_{j=0}^n C_j^n\,(t - t_0)^{n-j}(x - x_0)^j\,\frac{\partial^n f}{\partial t^{n-j}\,\partial x^j}(t_0, x_0)\Big];$$
$$R_n(t, x) = \frac{1}{(n+1)!} \sum_{j=0}^{n+1} C_j^{n+1}\,(t - t_0)^{n+1-j}(x - x_0)^j\,\frac{\partial^{n+1} f}{\partial t^{n+1-j}\,\partial x^j}(\tau, \xi).$$

Illustration: obtain a second-order accurate method (truncation error $O((\Delta t)^2)$). Match
$$x + \Delta t\, f(t, x) + \frac{(\Delta t)^2}{2}\Big[\frac{\partial f}{\partial t}(t, x) + \frac{\partial f}{\partial x}(t, x)\, f(t, x)\Big] + \frac{(\Delta t)^3}{6}\frac{d^2}{dt^2}\big[f(\tau, x(\tau))\big]$$
to
$$x + \Delta t\, f(t + \alpha_1,\ x + \beta_1),$$
$\tau \in [t, t + \Delta t]$; $\alpha_1$ and $\beta_1$: to be found. Match
$$f(t, x) + \frac{\Delta t}{2}\Big[\frac{\partial f}{\partial t}(t, x) + \frac{\partial f}{\partial x}(t, x)\, f(t, x)\Big] + \frac{(\Delta t)^2}{6}\frac{d^2}{dt^2}\big[f(t, x(t))\big]$$
with $f(t + \alpha_1, x + \beta_1)$ at least up to terms of the order of $O(\Delta t)$.

Applying the multivariable version of Taylor's theorem to $f$,
$$f(t + \alpha_1, x + \beta_1) = f(t, x) + \alpha_1 \frac{\partial f}{\partial t}(t, x) + \beta_1 \frac{\partial f}{\partial x}(t, x) + \frac{\alpha_1^2}{2}\frac{\partial^2 f}{\partial t^2}(\tau, \xi) + \alpha_1 \beta_1 \frac{\partial^2 f}{\partial t\,\partial x}(\tau, \xi) + \frac{\beta_1^2}{2}\frac{\partial^2 f}{\partial x^2}(\tau, \xi),$$
with $\tau$ between $t$ and $t + \alpha_1$, and $\xi$ between $x$ and $x + \beta_1$. Matching gives
$$\alpha_1 = \frac{\Delta t}{2} \quad \text{and} \quad \beta_1 = \frac{\Delta t}{2}\, f(t, x).$$
Resulting numerical scheme: the explicit midpoint method, the simplest example of a Runge-Kutta method of second order. The improved Euler method is also another often-used Runge-Kutta method.

General Runge-Kutta method:
$$x_{k+1} = x_k + \Delta t \sum_{i=1}^m c_i\, f(t_{i,k}, x_{i,k}),$$
$m$: number of terms in the method; each $t_{i,k}$ denotes a point in $[t_k, t_{k+1}]$. The second argument $x_{i,k} \approx x(t_{i,k})$ can be viewed as an approximation to the solution at the point $t_{i,k}$. To construct an $n$th-order Runge-Kutta method, we need to take at least $m \ge n$ terms.

Best-known Runge-Kutta method: the fourth-order Runge-Kutta method, which uses four evaluations of $f$ during each step:
$$\kappa_1 := f(t_k, x_k),$$
$$\kappa_2 := f\Big(t_k + \frac{\Delta t}{2},\ x_k + \frac{\Delta t}{2}\kappa_1\Big),$$
$$\kappa_3 := f\Big(t_k + \frac{\Delta t}{2},\ x_k + \frac{\Delta t}{2}\kappa_2\Big),$$
$$\kappa_4 := f(t_{k+1},\ x_k + \Delta t\,\kappa_3),$$
$$x_{k+1} = x_k + \frac{\Delta t}{6}\,(\kappa_1 + 2\kappa_2 + 2\kappa_3 + \kappa_4).$$
Values of $f$ at the midpoint in time are given four times as much weight as values at the endpoints $t_k$ and $t_{k+1}$ (similar to Simpson's rule from numerical integration).
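A minimal sketch of the fourth-order scheme, applied to the hypothetical test problem $dx/dt = -x$, $x(0) = 1$ over $[0, 1]$; even the coarse step $\Delta t = 0.1$ gives a global error below about $10^{-6}$.

```python
import math

def rk4_step(f, t, x, dt):
    """One step of the classical fourth-order Runge-Kutta method."""
    k1 = f(t, x)
    k2 = f(t + dt / 2, x + dt / 2 * k1)
    k3 = f(t + dt / 2, x + dt / 2 * k2)
    k4 = f(t + dt, x + dt * k3)
    return x + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Hypothetical test problem dx/dt = -x, x(0) = 1; exact solution exp(-t).
f = lambda t, x: -x
x, t, dt = 1.0, 0.0, 0.1
for _ in range(10):
    x = rk4_step(f, t, x, dt)
    t += dt
err = abs(x - math.exp(-1.0))  # fourth-order accurate
```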

Construction of Runge-Kutta methods: Construct Runge-Kutta methods by generalizing collocation methods. Discuss their consistency, stability, and order.

Collocation methods. $P_m$: space of real polynomials of degree at most $m$. Interpolating polynomial: given a set of $m$ distinct quadrature points $c_1 < c_2 < \ldots < c_m$ in $\mathbb{R}$ and corresponding data $g_1, \ldots, g_m$, there exists a unique polynomial $P(t) \in P_{m-1}$ s.t. $P(c_i) = g_i$, $i = 1, \ldots, m$.

DEFINITION: Define the $i$th Lagrange interpolating polynomial $\ell_i(t)$, $i = 1, \ldots, m$, for the set of quadrature points $\{c_j\}$ by
$$\ell_i(t) := \prod_{j=1,\ j \ne i}^m \frac{t - c_j}{c_i - c_j}.$$
The Lagrange interpolating polynomials form a basis of $P_{m-1}$; the interpolating polynomial $P$ corresponding to the data $\{g_j\}$ is given by
$$P(t) := \sum_{i=1}^m g_i\,\ell_i(t).$$

Consider a smooth function $g$ on $[0, 1]$. Approximate the integral of $g$ on $[0, 1]$ by exactly integrating the Lagrange interpolating polynomial of order $m - 1$ based on $m$ quadrature points $0 \le c_1 < c_2 < \ldots < c_m \le 1$. Data: values of $g$ at the quadrature points, $g_i = g(c_i)$, $i = 1, \ldots, m$.

Define the weights
$$b_i = \int_0^1 \ell_i(s)\,ds.$$
Quadrature formula:
$$\int_0^1 g(s)\,ds \approx \int_0^1 \sum_{i=1}^m g_i\,\ell_i(s)\,ds = \sum_{i=1}^m b_i\, g(c_i).$$

$f$: smooth function on $[0, T]$; $t_k = k\,\Delta t$ for $k = 0, \ldots, K = T/\Delta t$: discretization points in $[0, T]$. The integral $\int_{t_k}^{t_{k+1}} f(s)\,ds$ can be approximated by
$$\int_{t_k}^{t_{k+1}} f(s)\,ds = \Delta t \int_0^1 f(t_k + \Delta t\,\tau)\,d\tau \approx \Delta t \sum_{i=1}^m b_i\, f(t_k + \Delta t\, c_i).$$

$x$: polynomial of degree $m$ satisfying $x(0) = x_0$ and
$$\frac{dx}{dt}(c_i\,\Delta t) = F_i, \quad F_i \in \mathbb{R},\ i = 1, \ldots, m.$$
Lagrange interpolation formula for $t$ in the first time-step interval $[0, \Delta t]$:
$$\frac{dx}{dt}(t) = \sum_{i=1}^m F_i\,\ell_i\Big(\frac{t}{\Delta t}\Big).$$

Integrating over the intervals $[0, c_i\,\Delta t]$:
$$x(c_i\,\Delta t) = x_0 + \Delta t \sum_{j=1}^m F_j \int_0^{c_i} \ell_j(s)\,ds = x_0 + \Delta t \sum_{j=1}^m a_{ij} F_j,$$
for $i = 1, \ldots, m$, with
$$a_{ij} := \int_0^{c_i} \ell_j(s)\,ds.$$
Integrating over $[0, \Delta t]$:
$$x(\Delta t) = x_0 + \Delta t \sum_{i=1}^m F_i \int_0^1 \ell_i(s)\,ds = x_0 + \Delta t \sum_{i=1}^m b_i F_i.$$

Writing $dx/dt = f(x(t))$ on the first time-step interval $[0, \Delta t]$:
$$F_i = f\Big(x_0 + \Delta t \sum_{j=1}^m a_{ij} F_j\Big), \quad i = 1, \ldots, m, \qquad x(\Delta t) = x_0 + \Delta t \sum_{i=1}^m b_i F_i.$$
Similarly, on $[t_k, t_{k+1}]$:
$$F_{i,k} = f\Big(x(t_k) + \Delta t \sum_{j=1}^m a_{ij} F_{j,k}\Big), \quad i = 1, \ldots, m, \qquad x(t_{k+1}) = x(t_k) + \Delta t \sum_{i=1}^m b_i F_{i,k}.$$
In the collocation method one first solves the coupled nonlinear system to obtain $F_{i,k}$, $i = 1, \ldots, m$, and then computes $x(t_{k+1})$ from $x(t_k)$.

REMARK:
$$t^{l-1} = \sum_{i=1}^m c_i^{l-1}\,\ell_i(t), \quad t \in [0, 1],\ l = 1, \ldots, m,$$
and
$$\sum_{i=1}^m b_i\, c_i^{l-1} = \frac{1}{l}, \quad l = 1, \ldots, m, \qquad \sum_{j=1}^m a_{ij}\, c_j^{l-1} = \frac{c_i^l}{l}, \quad i, l = 1, \ldots, m.$$
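The definitions $b_i = \int_0^1 \ell_i$ and $a_{ij} = \int_0^{c_i} \ell_j$ can be evaluated exactly with rational arithmetic. A sketch for the nodes $c = (0, 1)$, which recovers the trapezoidal scheme's coefficients $b = (1/2, 1/2)$, $a_{21} = a_{22} = 1/2$; the helper functions below are illustrative, not from the lecture.

```python
from fractions import Fraction as F

def lagrange_coeffs(c, i):
    """Coefficients (constant term first) of the i-th Lagrange polynomial
    for the nodes c, built by multiplying the factors (t - c_j)/(c_i - c_j)."""
    poly = [F(1)]
    for j, cj in enumerate(c):
        if j == i:
            continue
        denom = c[i] - cj
        new = [F(0)] * (len(poly) + 1)
        for k, a in enumerate(poly):
            new[k + 1] += a / denom        # a * t / denom
            new[k] += a * (-cj) / denom    # a * (-c_j) / denom
        poly = new
    return poly

def integrate(poly, upper):
    """Exact integral of the polynomial over [0, upper]."""
    return sum(a * upper**(k + 1) / (k + 1) for k, a in enumerate(poly))

c = [F(0), F(1)]  # trapezoidal nodes
b = [integrate(lagrange_coeffs(c, i), F(1)) for i in range(2)]
a = [[integrate(lagrange_coeffs(c, j), c[i]) for j in range(2)] for i in range(2)]
# recovers b = (1/2, 1/2) and a = ((0, 0), (1/2, 1/2)): the trapezoidal scheme
```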

Runge-Kutta methods as generalized collocation methods. In the collocation method, the coefficients $b_i$ and $a_{ij}$ are defined by certain integrals of the Lagrange interpolating polynomials associated with a chosen set of quadrature nodes $c_i$, $i = 1, \ldots, m$. A natural generalization of collocation methods is obtained by allowing the coefficients $c_i$, $b_i$, and $a_{ij}$ to take arbitrary values, not necessarily related to quadrature formulas.

No longer assume the $c_i$ to be distinct; however, assume that
$$c_i = \sum_{j=1}^m a_{ij}, \quad i = 1, \ldots, m.$$
Class of Runge-Kutta methods for solving the ODE:
$$F_{i,k} = f\Big(t_{i,k},\ x_k + \Delta t \sum_{j=1}^m a_{ij} F_{j,k}\Big), \qquad x_{k+1} = x_k + \Delta t \sum_{i=1}^m b_i F_{i,k},$$
with $t_{i,k} = t_k + c_i\,\Delta t$; or equivalently,
$$x_{i,k} = x_k + \Delta t \sum_{j=1}^m a_{ij}\, f(t_{j,k}, x_{j,k}), \qquad x_{k+1} = x_k + \Delta t \sum_{i=1}^m b_i\, f(t_{i,k}, x_{i,k}).$$

Let
$$\kappa_j := f(t + c_j\,\Delta t,\ x_j), \qquad x_i = x + \Delta t \sum_{j=1}^m a_{ij}\,\kappa_j,$$
and define $\Phi$ by
$$\Phi(t, x, \Delta t) = \sum_{i=1}^m b_i\, f(t + c_i\,\Delta t,\ x_i):$$
a one-step method. If $a_{ij} = 0$ for $j \ge i$, the scheme is explicit.

EXAMPLES: Explicit Euler s method and Trapezoidal scheme: Runge-Kutta methods. Explicit Euler s method: m = 1, b 1 = 1, a 11 = 0.

Trapezoidal scheme: m = 2, b 1 = b 2 = 1/2, a 11 = a 12 = 0, a 21 = a 22 = 1/2.

Fourth-order Runge-Kutta method: m = 4, c 1 = 0, c 2 = c 3 = 1/2, c 4 = 1, b 1 = 1/6, b 2 = b 3 = 1/3, b 4 = 1/6, a 21 = a 32 = 1/2, a 43 = 1, and all the other a ij entries are zero.

Consistency, stability, convergence, and order of Runge-Kutta methods. A Runge-Kutta scheme is consistent iff
$$\sum_{j=1}^m b_j = 1.$$

Stability: let $|A| = (|a_{ij}|)_{i,j=1}^m$. Spectral radius $\rho(|A|)$ of the matrix $|A|$:
$$\rho(|A|) := \max\{|\lambda_j| : \lambda_j \text{ eigenvalue of } |A|\}.$$

THEOREM: $C_f$: Lipschitz constant for $f$. Suppose $\Delta t\, C_f\, \rho(|A|) < 1$. Then the Runge-Kutta method is stable.

PROOF:
$$\Phi(t, x, \Delta t) - \Phi(t, y, \Delta t) = \sum_{i=1}^m b_i\big[f(t + c_i\,\Delta t, x_i) - f(t + c_i\,\Delta t, y_i)\big],$$
with
$$x_i = x + \Delta t \sum_{j=1}^m a_{ij}\, f(t + c_j\,\Delta t, x_j) \quad \text{and} \quad y_i = y + \Delta t \sum_{j=1}^m a_{ij}\, f(t + c_j\,\Delta t, y_j).$$

$$x_i - y_i = x - y + \Delta t \sum_{j=1}^m a_{ij}\big[f(t + c_j\,\Delta t, x_j) - f(t + c_j\,\Delta t, y_j)\big].$$
For $i = 1, \ldots, m$,
$$|x_i - y_i| \le |x - y| + \Delta t\, C_f \sum_{j=1}^m |a_{ij}|\,|x_j - y_j|.$$

With
$$X = \begin{pmatrix} |x_1 - y_1| \\ \vdots \\ |x_m - y_m| \end{pmatrix} \quad \text{and} \quad Y = \begin{pmatrix} |x - y| \\ \vdots \\ |x - y| \end{pmatrix},$$
the componentwise inequality $X \le Y + \Delta t\, C_f\, |A|\, X$ gives
$$X \le (I - \Delta t\, C_f\, |A|)^{-1}\, Y,$$
provided that $\Delta t\, C_f\, \rho(|A|) < 1$ $\Rightarrow$ stability of the Runge-Kutta scheme.

Dahlquist-Lax equivalence theorem: the Runge-Kutta scheme is convergent provided that $\sum_{j=1}^m b_j = 1$ and $\Delta t\, C_f\, \rho(|A|) < 1$ hold.

Order of the Runge-Kutta scheme: compute the order, as $\Delta t \to 0$, of the truncation error
$$T_k(\Delta t) = \frac{x(t_{k+1}) - x(t_k)}{\Delta t} - \Phi(t_k, x(t_k), \Delta t).$$
Write
$$T_k(\Delta t) = \frac{x(t_{k+1}) - x(t_k)}{\Delta t} - \sum_{i=1}^m b_i\, f\Big(t_k + c_i\,\Delta t,\ x(t_k) + \Delta t \sum_{j=1}^m a_{ij}\,\kappa_j\Big).$$
Suppose that $f$ is smooth enough; then
$$f\Big(t_k + c_i\,\Delta t,\ x(t_k) + \Delta t \sum_{j=1}^m a_{ij}\,\kappa_j\Big) = f(t_k, x(t_k)) + \Delta t\Big[c_i\,\frac{\partial f}{\partial t}(t_k, x(t_k)) + \Big(\sum_{j=1}^m a_{ij}\,\kappa_j\Big)\frac{\partial f}{\partial x}(t_k, x(t_k))\Big] + O((\Delta t)^2).$$

$$\sum_{j=1}^m a_{ij}\,\kappa_j = \Big(\sum_{j=1}^m a_{ij}\Big) f(t_k, x(t_k)) + O(\Delta t) = c_i\, f(t_k, x(t_k)) + O(\Delta t).$$

$$f\Big(t_k + c_i\,\Delta t,\ x(t_k) + \Delta t \sum_{j=1}^m a_{ij}\,\kappa_j\Big) = f(t_k, x(t_k)) + \Delta t\, c_i\Big[\frac{\partial f}{\partial t}(t_k, x(t_k)) + \frac{\partial f}{\partial x}(t_k, x(t_k))\, f(t_k, x(t_k))\Big] + O((\Delta t)^2).$$

THEOREM: Assume that $f$ is smooth enough. Then the Runge-Kutta scheme is of order $2$ provided that the conditions
$$\sum_{j=1}^m b_j = 1 \quad \text{and} \quad \sum_{i=1}^m b_i c_i = \frac{1}{2}$$
hold.

Higher-order Taylor expansions.
THEOREM: Assume that $f$ is smooth enough. Then the Runge-Kutta scheme is of order $3$ provided that the conditions
$$\sum_{j=1}^m b_j = 1, \qquad \sum_{i=1}^m b_i c_i = \frac{1}{2}, \qquad \sum_{i=1}^m b_i c_i^2 = \frac{1}{3}, \qquad \sum_{i=1}^m \sum_{j=1}^m b_i\, a_{ij}\, c_j = \frac{1}{6}$$
hold.

It is of order $4$ provided that, in addition,
$$\sum_{i=1}^m b_i c_i^3 = \frac{1}{4}, \qquad \sum_{i=1}^m \sum_{j=1}^m b_i\, c_i\, a_{ij}\, c_j = \frac{1}{8}, \qquad \sum_{i=1}^m \sum_{j=1}^m b_i\, a_{ij}\, c_j^2 = \frac{1}{12}, \qquad \sum_{i=1}^m \sum_{j=1}^m \sum_{l=1}^m b_i\, a_{ij}\, a_{jl}\, c_l = \frac{1}{24}$$
hold. The (fourth-order) Runge-Kutta scheme is of order $4$.
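The order conditions can be verified exactly for the fourth-order Runge-Kutta tableau. A sketch with rational arithmetic; the condition labels in the dictionary are ad hoc names, not from the lecture.

```python
from fractions import Fraction as F

# Classical fourth-order Runge-Kutta tableau.
c = [F(0), F(1, 2), F(1, 2), F(1)]
b = [F(1, 6), F(1, 3), F(1, 3), F(1, 6)]
a = [[F(0)] * 4 for _ in range(4)]
a[1][0], a[2][1], a[3][2] = F(1, 2), F(1, 2), F(1)

m = 4
conds = {
    "sum b":   sum(b),                                                    # = 1
    "b.c":     sum(b[i] * c[i] for i in range(m)),                        # = 1/2
    "b.c^2":   sum(b[i] * c[i]**2 for i in range(m)),                     # = 1/3
    "b.a.c":   sum(b[i] * a[i][j] * c[j]
                   for i in range(m) for j in range(m)),                  # = 1/6
    "b.c^3":   sum(b[i] * c[i]**3 for i in range(m)),                     # = 1/4
    "b.c.a.c": sum(b[i] * c[i] * a[i][j] * c[j]
                   for i in range(m) for j in range(m)),                  # = 1/8
}
```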

Multi-step methods. Runge-Kutta methods: an improvement over Euler's methods in terms of accuracy, but achieved by investing additional computational effort. The fourth-order Runge-Kutta method involves four function evaluations per step.

For comparison, consider three consecutive points $t_{k-1}, t_k, t_{k+1}$. Integrating the differential equation between $t_{k-1}$ and $t_{k+1}$,
$$x(t_{k+1}) = x(t_{k-1}) + \int_{t_{k-1}}^{t_{k+1}} f(s, x(s))\,ds,$$
and applying Simpson's rule to approximate the resulting integral yields
$$x(t_{k+1}) \approx x(t_{k-1}) + \frac{\Delta t}{3}\big[f(t_{k-1}, x(t_{k-1})) + 4 f(t_k, x(t_k)) + f(t_{k+1}, x(t_{k+1}))\big],$$
leading to the scheme
$$x_{k+1} = x_{k-1} + \frac{\Delta t}{3}\big[f(t_{k-1}, x_{k-1}) + 4 f(t_k, x_k) + f(t_{k+1}, x_{k+1})\big].$$
Two preceding values, $x_k$ and $x_{k-1}$, are needed in order to calculate $x_{k+1}$: a two-step method, in contrast with the one-step methods, where only a single value $x_k$ is required to compute the next approximation $x_{k+1}$.

General $n$-step method:
$$\sum_{j=0}^n \alpha_j\, x_{k+j} = \Delta t \sum_{j=0}^n \beta_j\, f(t_{k+j}, x_{k+j}),$$
$\alpha_j$ and $\beta_j$: real constants, $\alpha_n \ne 0$. If $\beta_n = 0$, then $x_{k+n}$ is obtained explicitly from previous values of $x_j$ and $f(t_j, x_j)$: the $n$-step method is explicit. Otherwise, the $n$-step method is implicit.

EXAMPLES:
(i) Two-step Adams-Bashforth method (explicit):
$$x_{k+2} = x_{k+1} + \frac{\Delta t}{2}\big[3 f(t_{k+1}, x_{k+1}) - f(t_k, x_k)\big];$$
(ii) Three-step Adams-Bashforth method (explicit):
$$x_{k+3} = x_{k+2} + \frac{\Delta t}{12}\big[23 f(t_{k+2}, x_{k+2}) - 16 f(t_{k+1}, x_{k+1}) + 5 f(t_k, x_k)\big];$$

(iii) Four-step Adams-Bashforth method (explicit):
$$x_{k+4} = x_{k+3} + \frac{\Delta t}{24}\big[55 f(t_{k+3}, x_{k+3}) - 59 f(t_{k+2}, x_{k+2}) + 37 f(t_{k+1}, x_{k+1}) - 9 f(t_k, x_k)\big];$$
(iv) Two-step Adams-Moulton method (implicit):
$$x_{k+2} = x_{k+1} + \frac{\Delta t}{12}\big[5 f(t_{k+2}, x_{k+2}) + 8 f(t_{k+1}, x_{k+1}) - f(t_k, x_k)\big];$$
(v) Three-step Adams-Moulton method (implicit):
$$x_{k+3} = x_{k+2} + \frac{\Delta t}{24}\big[9 f(t_{k+3}, x_{k+3}) + 19 f(t_{k+2}, x_{k+2}) - 5 f(t_{k+1}, x_{k+1}) + f(t_k, x_k)\big].$$
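A sketch of the two-step Adams-Bashforth method on the hypothetical test problem $dx/dt = -x$, $x(0) = 1$; since a multi-step method needs starting values, the first step is bootstrapped here with one improved Euler step, and halving $\Delta t$ should divide the error by about four (second order).

```python
import math

def ab2(f, x0, T, K):
    """Two-step Adams-Bashforth, started with one improved Euler step."""
    dt = T / K
    t = [k * dt for k in range(K + 1)]
    x = [x0]
    k1 = f(t[0], x0)
    x.append(x0 + dt / 2 * (k1 + f(t[1], x0 + dt * k1)))  # starting value x_1
    for k in range(K - 1):
        # x_{k+2} = x_{k+1} + dt/2 * (3 f_{k+1} - f_k)
        x.append(x[k + 1] + dt / 2 * (3 * f(t[k + 1], x[k + 1]) - f(t[k], x[k])))
    return x[-1]

# Hypothetical test problem dx/dt = -x, x(0) = 1; exact solution exp(-t).
f = lambda t, x: -x
errs = [abs(ab2(f, 1.0, 1.0, K) - math.exp(-1.0)) for K in (100, 200)]
ratio = errs[0] / errs[1]  # ~4 when dt is halved: second order
```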

Construction of linear multi-step methods. Suppose that $(x_k)_{k \in \mathbb{N}}$ is a sequence of real numbers. Shift operator $E$, forward difference operator $\Delta_+$, and backward difference operator $\Delta_-$:
$$E : x_k \mapsto x_{k+1}, \qquad \Delta_+ : x_k \mapsto x_{k+1} - x_k, \qquad \Delta_- : x_k \mapsto x_k - x_{k-1}.$$
$\Delta_+ = E - I$ and $\Delta_- = I - E^{-1}$; for any $n \in \mathbb{N}$,
$$(E - I)^n = \sum_{j=0}^n (-1)^j C_j^n\, E^{n-j} \quad \text{and} \quad (I - E^{-1})^n = \sum_{j=0}^n (-1)^j C_j^n\, E^{-j}.$$

Hence
$$\Delta_+^n x_k = \sum_{j=0}^n (-1)^j C_j^n\, x_{k+n-j} \quad \text{and} \quad \Delta_-^n x_k = \sum_{j=0}^n (-1)^j C_j^n\, x_{k-j}.$$

Let $y(t) \in C^\infty(\mathbb{R})$, $t_k = k\,\Delta t$, $\Delta t > 0$. By the Taylor series, for any $s \in \mathbb{N}$,
$$E^s y(t_k) = y(t_k + s\,\Delta t) = \sum_{l=0}^{+\infty} \frac{(s\,\Delta t)^l}{l!}\,\frac{d^l y}{dt^l}(t_k) = \Big(e^{s\,\Delta t\,\frac{d}{dt}}\Big) y(t_k),$$
i.e. $E^s = e^{s\,\Delta t\,\frac{d}{dt}}$. Formally,
$$\Delta t\,\frac{d}{dt} = \ln E = -\ln(I - \Delta_-) = \Delta_- + \frac{1}{2}\Delta_-^2 + \frac{1}{3}\Delta_-^3 + \ldots$$

$x(t)$: solution of the ODE:
$$\Delta t\, f(t_k, x(t_k)) = \Big(\Delta_- + \frac{1}{2}\Delta_-^2 + \frac{1}{3}\Delta_-^3 + \ldots\Big)\, x(t_k).$$
Successive truncation of the infinite series:
$$x_k - x_{k-1} = \Delta t\, f(t_k, x_k),$$
$$\frac{3}{2}\,x_k - 2\,x_{k-1} + \frac{1}{2}\,x_{k-2} = \Delta t\, f(t_k, x_k),$$
$$\frac{11}{6}\,x_k - 3\,x_{k-1} + \frac{3}{2}\,x_{k-2} - \frac{1}{3}\,x_{k-3} = \Delta t\, f(t_k, x_k),$$
and so on. Class of implicit multi-step methods: backward differentiation formulas.
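For a linear right-hand side $f = \lambda x$, the implicit equation of the two-step formula (BDF2), $\frac{3}{2}x_k - 2x_{k-1} + \frac{1}{2}x_{k-2} = \Delta t\,\lambda\,x_k$, can be solved in closed form. A sketch on the hypothetical test problem $\lambda = -1$, $x(0) = 1$, started with one implicit Euler step.

```python
import math

def bdf2_linear(lam, x0, T, K):
    """BDF2 applied to dx/dt = lam*x; linearity gives the implicit update
    x_k = (2 x_{k-1} - 0.5 x_{k-2}) / (1.5 - dt*lam) in closed form."""
    dt = T / K
    x_prev2 = x0
    x_prev1 = x0 / (1.0 - dt * lam)  # one implicit Euler step to start
    for _ in range(K - 1):
        x_new = (2.0 * x_prev1 - 0.5 * x_prev2) / (1.5 - dt * lam)
        x_prev2, x_prev1 = x_prev1, x_new
    return x_prev1

err = abs(bdf2_linear(-1.0, 1.0, 1.0, 200) - math.exp(-1.0))
```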

Similarly, since $x(t_k) = E^{-1} x(t_{k+1}) = (I - \Delta_-)\, x(t_{k+1})$,
$$\Delta t\,\frac{dx}{dt}(t_k) = -(I - \Delta_-)\ln(I - \Delta_-)\, x(t_{k+1}),$$
and expanding,
$$\Delta t\, f(t_k, x(t_k)) = \Big(\Delta_- - \frac{1}{2}\Delta_-^2 - \frac{1}{6}\Delta_-^3 - \ldots\Big)\, x(t_{k+1}).$$

Successive truncation of the infinite series $\Rightarrow$ explicit numerical schemes:
$$x_{k+1} - x_k = \Delta t\, f(t_k, x_k),$$
$$\frac{1}{2}\,x_{k+1} - \frac{1}{2}\,x_{k-1} = \Delta t\, f(t_k, x_k),$$
$$\frac{1}{3}\,x_{k+1} + \frac{1}{2}\,x_k - x_{k-1} + \frac{1}{6}\,x_{k-2} = \Delta t\, f(t_k, x_k),$$
and so on. The first of these numerical schemes is the explicit Euler method, while the second is the explicit midpoint method.

Construct further classes of multi-step methods: for $y \in C^\infty$,
$$D^{-1} y(t_k) = y(t_0) + \int_{t_0}^{t_k} y(s)\,ds,$$
and
$$(E - I)\, D^{-1} y(t_k) = \int_{t_k}^{t_{k+1}} y(s)\,ds.$$
$$(E - I)\, D^{-1} = \Delta_+ D^{-1} = E\,\Delta_-\, D^{-1} = \Delta t\, E\,\Delta_-\,(\Delta t\, D)^{-1},$$

(E I )D 1 = ( t)e ( ln(i ) ) 1.

Alternatively,
$$(E - I)\, D^{-1} = E\,\Delta_-\, D^{-1} = \Delta_-\, E\, D^{-1} = \Delta_-\,(D E^{-1})^{-1} = \Delta t\,\Delta_-\,\big(\Delta t\, D E^{-1}\big)^{-1},$$
$$(E - I)\, D^{-1} = -\Delta t\,\Delta_-\,\big((I - \Delta_-)\ln(I - \Delta_-)\big)^{-1}.$$

$$x(t_{k+1}) - x(t_k) = \int_{t_k}^{t_{k+1}} f(s, x(s))\,ds = (E - I)\, D^{-1} f(t_k, x(t_k)),$$
so
$$x(t_{k+1}) - x(t_k) = -\Delta t\,\Delta_-\,\big((I - \Delta_-)\ln(I - \Delta_-)\big)^{-1} f(t_k, x(t_k)) = -\Delta t\, E\,\Delta_-\,\big(\ln(I - \Delta_-)\big)^{-1} f(t_k, x(t_k)).$$

Expand ln(i ) into a Taylor series on the right-hand side [ x(t k+1 ) x(t k ) = ( t) I + 1 2 + 5 12 2 + 3 ] 8 3 +... f (t k, x(t k )) and [ x(t k+1 ) x(t k ) = ( t) I 1 2 1 12 2 1 ] 24 3 +... f (t k+1, x(t k+1 )). Successive truncations families of (explicit) Adams-Bashforth methods and of (implicit) Adams-Moulton methods.

Consistency, stability, and convergence Introduce the concepts of consistency, stability, and convergence for analyzing linear multi-step methods.

DEFINITION: Consistency. The $n$-step method is consistent with the ODE if the truncation error, defined by
$$T_k = \frac{\sum_{j=0}^n \big[\alpha_j\, x(t_{k+j}) - \Delta t\,\beta_j\,\frac{dx}{dt}(t_{k+j})\big]}{\Delta t \sum_{j=0}^n \beta_j},$$
is s.t. for any $\epsilon > 0$ there exists $h_0$ for which $|T_k| \le \epsilon$ for $0 < \Delta t \le h_0$ and any $(n+1)$ points $\big((t_j, x(t_j)), \ldots, (t_{j+n}, x(t_{j+n}))\big)$ on any solution $x(t)$.

DEFINITION: Stability. The $n$-step method is stable if there exists a constant $C$ s.t., for any two sequences $(x_k)$ and $(\tilde{x}_k)$ generated by the same formulas but different starting data $x_0, x_1, \ldots, x_{n-1}$ and $\tilde{x}_0, \tilde{x}_1, \ldots, \tilde{x}_{n-1}$, respectively,
$$|x_k - \tilde{x}_k| \le C \max\big\{|x_0 - \tilde{x}_0|, |x_1 - \tilde{x}_1|, \ldots, |x_{n-1} - \tilde{x}_{n-1}|\big\}$$
as $\Delta t \to 0$.

THEOREM: Convergence. Suppose that the $n$-step method is consistent with the ODE. Then the stability condition is necessary and sufficient for convergence. Moreover, if $x \in C^{p+1}$ and the truncation error is $O((\Delta t)^p)$, then the global error $e_k = x(t_k) - x_k$ is $O((\Delta t)^p)$.

Stiff equations and systems: let $\epsilon > 0$ be a small parameter. Consider the initial value problem
$$\frac{dx}{dt} = -\frac{1}{\epsilon}\, x(t), \quad t \in [0, T], \qquad x(0) = 1,$$
with exponential solution $x(t) = e^{-t/\epsilon}$. Explicit Euler method with step size $\Delta t$:
$$x_{k+1} = \Big(1 - \frac{\Delta t}{\epsilon}\Big)\, x_k, \quad x_0 = 1,$$
with solution
$$x_k = \Big(1 - \frac{\Delta t}{\epsilon}\Big)^k.$$

For $\epsilon > 0$ the exact solution is exponentially decaying and positive. If $1 - \frac{\Delta t}{\epsilon} < -1$, the iterates grow exponentially fast in magnitude, with alternating signs: the numerical solution is nowhere close to the true solution. If $-1 < 1 - \frac{\Delta t}{\epsilon} < 0$, the numerical solution decays in magnitude, but continues to alternate between positive and negative values. To correctly model the qualitative features of the solution and obtain a numerically accurate solution, choose the step size $\Delta t$ so as to ensure that $1 - \frac{\Delta t}{\epsilon} > 0$, and hence $\Delta t < \epsilon$: a stiff differential equation.
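The three regimes of the explicit Euler iterates $x_k = (1 - \Delta t/\epsilon)^k$ can be reproduced directly; a sketch with the hypothetical value $\epsilon = 0.01$ and three illustrative step sizes.

```python
eps = 0.01

def iterates(dt, n=50):
    """Explicit Euler iterates for dx/dt = -x/eps: x_k = (1 - dt/eps)^k."""
    r = 1.0 - dt / eps
    return [r**k for k in range(n)]

blow = iterates(0.03)   # 1 - dt/eps = -2.0: grows with alternating signs
osc  = iterates(0.015)  # 1 - dt/eps = -0.5: decays but alternates in sign
good = iterates(0.005)  # 1 - dt/eps = +0.5: decays monotonically (dt < eps)
```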

In general, an equation or system is stiff if it has one or more very rapidly decaying solutions. In the case of an autonomous constant-coefficient linear system, stiffness occurs whenever the coefficient matrix A has an eigenvalue λ_{j₀} with large negative real part, Re λ_{j₀} ≪ 0, resulting in a very rapidly decaying eigensolution. It only takes one such eigensolution to render the equation stiff and ruin the numerical computation of even well-behaved solutions. Even though the component of the actual solution corresponding to λ_{j₀} is almost irrelevant, its presence continues to render the numerical solution of the system very difficult. Most numerical methods suffer from instability due to stiffness for sufficiently small positive ε. Stiff equations require more sophisticated numerical schemes to integrate.
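As an example of such a scheme, the backward (implicit) Euler method x_{k+1} = x_k + Δt f(t_{k+1}, x_{k+1}) applied to the same model problem remains stable for every step size; a sketch (the step size Δt = 100ε is an arbitrary illustrative choice):

```python
# Backward Euler for dx/dt = -(1/eps) x solves x_{k+1} = x_k - (dt/eps) x_{k+1},
# i.e. x_{k+1} = x_k / (1 + dt/eps). The amplification factor lies in (0, 1)
# for every dt > 0, so the iterates decay monotonically even when dt >> eps.
eps = 1e-3
dt = 100.0 * eps                 # far beyond the explicit limit dt < eps

x, xs = 1.0, [1.0]
for _ in range(20):
    x /= 1.0 + dt / eps
    xs.append(x)
print(xs[-1])
```

The price of this robustness is that each step of an implicit method requires solving an (in general nonlinear) equation for x_{k+1}; here the equation is linear and solvable by hand.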

Perturbation theories for differential equations: regular perturbation theory and singular perturbation theory.

Regular perturbation theory. Let ε > 0 be a small parameter and consider
\[ \frac{dx}{dt} = f(t, x, \epsilon), \quad t \in [0, T], \qquad x(0) = x_0, \quad x_0 \in \mathbb{R}. \]
If f ∈ C¹, this is a regular perturbation problem. Taylor expansion of x(t, ε) ∈ C¹ with respect to ε in a neighborhood of 0:
\[ x(t, \epsilon) = x^{(0)}(t) + \epsilon\, x^{(1)}(t) + o(\epsilon). \]

With f₀(t, x) := f(t, x, 0), the leading term x⁽⁰⁾ solves
\[ \frac{dx^{(0)}}{dt} = f_0(t, x^{(0)}), \quad t \in [0, T], \qquad x^{(0)}(0) = x_0, \quad x_0 \in \mathbb{R}, \]
and the correction x⁽¹⁾(t) = ∂x/∂ε (t, 0) solves the linearized problem
\[ \frac{dx^{(1)}}{dt} = f_x(t, x^{(0)}, 0)\, x^{(1)} + f_\epsilon(t, x^{(0)}, 0), \quad t \in [0, T], \qquad x^{(1)}(0) = 0. \]
Compute x⁽⁰⁾ and x⁽¹⁾ numerically.
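As a concrete illustration (the particular right-hand side f(t, x, ε) = −x + εx² and all numerical parameters are our choices, not from the notes), the two expansion ODEs can be integrated numerically and the result compared with the closed-form solution of this Bernoulli equation:

```python
import math

# For f(t, x, eps) = -x + eps*x**2 we have f_0(t, x) = -x,
# f_x(t, x, 0) = -1 and f_eps(t, x, 0) = x**2, so the expansion terms solve
#   dx0/dt = -x0,          x0(0) = 1,
#   dx1/dt = -x1 + x0**2,  x1(0) = 0.
eps, T, n = 0.1, 1.0, 20000
dt = T / n
x0, x1 = 1.0, 0.0
for _ in range(n):                        # explicit Euler on both equations
    x0, x1 = x0 + dt * (-x0), x1 + dt * (-x1 + x0 * x0)

exact = 1.0 / (eps + (1.0 - eps) * math.exp(T))   # Bernoulli closed form at t = T
approx = x0 + eps * x1
print(abs(approx - exact))                # should be O(eps^2)
```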

Singular perturbation theory. Consider
\[ \epsilon \frac{d^2 x}{dt^2} = f\Big(t, x, \frac{dx}{dt}\Big), \quad t \in [0, T], \qquad x(0) = x_0, \quad x(T) = x_1. \]
This is a singular perturbation problem: the order of the equation is reduced when ε = 0.

Consider the linear, scalar, second-order ODE
\[ \epsilon \frac{d^2 x}{dt^2} + 2 \frac{dx}{dt} + x = 0, \quad t \in [0, 1], \qquad x(0) = 0, \quad x(1) = 1. \]
With
\[ \alpha(\epsilon) := \frac{1 - \sqrt{1 - \epsilon}}{\epsilon} \quad \text{and} \quad \beta(\epsilon) := 1 + \sqrt{1 - \epsilon}, \]
the exact solution is
\[ x(t, \epsilon) = \frac{e^{-\alpha t} - e^{-\beta t/\epsilon}}{e^{-\alpha} - e^{-\beta/\epsilon}}, \quad t \in [0, 1]. \]
The solution x(t, ε) involves two terms which vary on widely different length-scales.
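A quick numerical sanity check of this closed-form solution (ε = 0.01 and the finite-difference stencil are our choices):

```python
import math

# Verify that x(t, eps) = (e^{-a t} - e^{-b t/eps}) / (e^{-a} - e^{-b/eps})
# with a = (1 - sqrt(1 - eps))/eps, b = 1 + sqrt(1 - eps) satisfies the
# boundary conditions and, to finite-difference accuracy, the ODE itself.
eps = 0.01
a = (1.0 - math.sqrt(1.0 - eps)) / eps
b = 1.0 + math.sqrt(1.0 - eps)

def x(t):
    return (math.exp(-a * t) - math.exp(-b * t / eps)) / \
           (math.exp(-a) - math.exp(-b / eps))

# Central-difference residual of eps*x'' + 2*x' + x at an interior point.
h, t = 1e-5, 0.5
residual = (eps * (x(t + h) - 2.0 * x(t) + x(t - h)) / h**2
            + 2.0 * (x(t + h) - x(t - h)) / (2.0 * h) + x(t))
print(x(0.0), x(1.0), residual)
```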

Behavior of x(t, ε) as ε → 0⁺. The asymptotic behavior is nonuniform; there are two limiting cases, and an asymptotic solution is obtained by matching the outer and inner solutions.

(i) Outer limit: t > 0 fixed and ε → 0⁺. Then x(t, ε) → x⁽⁰⁾(t), where x⁽⁰⁾(t) := e^{(1−t)/2}. The leading-order outer solution satisfies the boundary condition at t = 1 but not the boundary condition at t = 0; indeed, x⁽⁰⁾(0) = e^{1/2}.

(ii) Inner limit: t/ε = τ fixed and ε → 0⁺. Then x(ετ, ε) → X⁽⁰⁾(τ) := e^{1/2}(1 − e^{−2τ}). The leading-order inner solution satisfies the boundary condition at t = 0 but not the one at t = 1, which corresponds to τ = 1/ε; indeed, lim_{τ→+∞} X⁽⁰⁾(τ) = e^{1/2}.

(iii) Matching: Both the inner and outer expansions are valid in the region ε ≪ t ≪ 1, corresponding to t → 0 and τ → +∞ as ε → 0⁺. They satisfy the matching condition
\[ \lim_{t \to 0^+} x^{(0)}(t) = \lim_{\tau \to +\infty} X^{(0)}(\tau). \]

We now construct an asymptotic solution without relying on the fact that we can solve the equation exactly. Outer solution: seek
\[ x(t, \epsilon) = x^{(0)}(t) + \epsilon\, x^{(1)}(t) + O(\epsilon^2). \]
Use this expansion and equate the coefficients of the leading-order terms to zero:
\[ 2 \frac{dx^{(0)}}{dt} + x^{(0)} = 0, \quad t \in [0, 1], \qquad x^{(0)}(1) = 1. \]
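This leading-order outer problem is a linear first-order equation and can be solved in closed form:

```latex
2\,\frac{dx^{(0)}}{dt} + x^{(0)} = 0
\;\Longrightarrow\;
x^{(0)}(t) = C\,e^{-t/2},
\qquad
x^{(0)}(1) = 1 \;\Longrightarrow\; C = e^{1/2},
```

so x⁽⁰⁾(t) = e^{(1−t)/2}, in agreement with the outer limit of the exact solution.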

Inner solution. Suppose that there is a boundary layer at t = 0 of width δ(ɛ), and introduce a stretched variable τ = t/δ. Look for an inner solution X (τ, ɛ) = x(t, ɛ).

Since d/dt = (1/δ) d/dτ, X satisfies
\[ \frac{\epsilon}{\delta^2} \frac{d^2 X}{d\tau^2} + \frac{2}{\delta} \frac{dX}{d\tau} + X = 0. \]
There are two possible dominant balances: (i) δ = 1, leading to the outer solution; (ii) δ = ε, leading to the inner solution. Thus the boundary layer thickness is of the order of ε, and the appropriate inner variable is τ = t/ε.

Equation for X:
\[ \frac{d^2 X}{d\tau^2} + 2 \frac{dX}{d\tau} + \epsilon X = 0, \qquad X(0, \epsilon) = 0. \]
We impose only the boundary condition at τ = 0, since we do not expect the inner expansion to be valid outside the boundary layer, where t = O(ε). Seek an inner expansion
\[ X(\tau, \epsilon) = X^{(0)}(\tau) + \epsilon\, X^{(1)}(\tau) + O(\epsilon^2) \]
and find that
\[ \frac{d^2 X^{(0)}}{d\tau^2} + 2 \frac{dX^{(0)}}{d\tau} = 0, \qquad X^{(0)}(0) = 0. \]

General solution:
\[ X^{(0)}(\tau) = c\,(1 - e^{-2\tau}), \]
where c is an arbitrary constant of integration. Determine c by requiring that the inner solution matches with the outer solution. The matching condition
\[ \lim_{t \to 0^+} x^{(0)}(t) = \lim_{\tau \to +\infty} X^{(0)}(\tau) \]
gives c = e^{1/2}.

Asymptotic solution as ε → 0⁺:
\[ x(t, \epsilon) \sim \begin{cases} e^{1/2}\,\big(1 - e^{-2t/\epsilon}\big) & \text{as } \epsilon \to 0^+ \text{ with } t/\epsilon \text{ fixed}, \\[4pt] e^{(1-t)/2} & \text{as } \epsilon \to 0^+ \text{ with } t \text{ fixed}. \end{cases} \]
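These two limits can be checked against the exact solution; a minimal sketch (ε = 10⁻⁴ and the sample points are our choices):

```python
import math

# Compare the exact solution of eps*x'' + 2x' + x = 0, x(0)=0, x(1)=1 with the
# leading-order inner and outer approximations.
eps = 1e-4
a = (1.0 - math.sqrt(1.0 - eps)) / eps      # ~ 1/2
b = 1.0 + math.sqrt(1.0 - eps)              # ~ 2

def exact(t):
    return (math.exp(-a * t) - math.exp(-b * t / eps)) / \
           (math.exp(-a) - math.exp(-b / eps))

def outer(t):   # valid for t fixed as eps -> 0+
    return math.exp((1.0 - t) / 2.0)

def inner(t):   # valid for tau = t/eps fixed as eps -> 0+
    return math.exp(0.5) * (1.0 - math.exp(-2.0 * t / eps))

err_outer = abs(exact(0.5) - outer(0.5))          # away from the layer
err_inner = abs(exact(5.0 * eps) - inner(5.0 * eps))  # inside the layer
print(err_outer, err_inner)
```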