Numerical Methods for Differential Equations

Chapter 2: Runge-Kutta and Linear Multistep Methods
Gustaf Söderlind and Carmen Arévalo
Numerical Analysis, Lund University
Textbooks: A First Course in the Numerical Analysis of Differential Equations, by Arieh Iserles, and Introduction to Mathematical Modelling with Differential Equations, by Lennart Edsberg
© Gustaf Söderlind, Numerical Analysis, Mathematical Sciences, Lund University, 2008-09

Chapter 2: contents
- Solving nonlinear equations
- Fixed points
- Newton's method
- Quadrature
- Runge-Kutta methods
- Embedded RK methods and adaptivity
- Implicit Runge-Kutta methods
- Stability and the stability function
- Linear multistep methods

1. Solving nonlinear equations, f(x) = 0
We can have a single equation, e.g.
  x − cos(x) = 0
or a system, e.g.
  4x² − y² = 0
  4xy² − x = 1
Nonlinear equations may have no solution; one solution; any finite number of solutions; infinitely many solutions.

Iteration and convergence
Nonlinear equations are solved iteratively. One computes a sequence {x^[k]} of approximations to the root x*.
For f(x*) = 0, let e^[k] = x^[k] − x*.
Definition The method converges if lim_{k→∞} |e^[k]| = 0
Definition The convergence is
  linear if |e^[k+1]| ≤ c·|e^[k]| with 0 < c < 1
  quadratic if |e^[k+1]| ≤ c·|e^[k]|^p with p = 2
  superlinear if p > 1; cubic if p = 3, etc.

2. Fixed points
Definition x is called a fixed point of the function g if x = g(x)
Definition A function g is called contractive if |g(x) − g(y)| ≤ L[g]·|x − y| with L[g] < 1 for all x, y in the domain of g

Fixed Point Theorem
Theorem Assume that g is continuously differentiable on the compact interval I, i.e., g ∈ C¹(I).
If g: I → I, there exists an x* ∈ I such that x* = g(x*).
If in addition L[g] < 1 on I, then x* is unique, and the iteration x_{n+1} = g(x_n) converges to the fixed point x* for all x_0 ∈ I.
Note Both conditions are absolutely essential!

Fixed Point Theorem. Existence and uniqueness
[Figure: three plots of g on [0,1]]
Left: no condition satisfied (maybe no x*)
Center: first condition satisfied (maybe multiple x*)
Right: both conditions satisfied (unique x*)

Fixed point iteration. x = e^(−x)
g(x) = e^(−x); the iteration is x^[k+1] = e^(−x^[k]) with x^[0] = 0.
g: [0,1] → [0,1] and |g'(x)| = e^(−x) < 1 for x > 0.
[Figure: cobweb plot of the iteration on [0,1]]
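A minimal sketch of this iteration in Python; the helper fixed_point, its tolerance, and the stopping rule are illustrative choices, not part of the lecture notes.

```python
import math

def fixed_point(g, x0, tol=1e-12, maxit=100):
    """Iterate x <- g(x) until successive iterates differ by at most tol."""
    x = x0
    for k in range(maxit):
        x_new = g(x)
        if abs(x_new - x) <= tol:
            return x_new, k + 1
        x = x_new
    return x, maxit

root, iters = fixed_point(lambda x: math.exp(-x), 0.0)
print(root, iters)   # converges to x* ~ 0.5671, the solution of x = e^(-x)
```

Since |g'(x*)| ≈ 0.57, the iteration converges linearly, which is why several dozen iterations are needed for twelve digits.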

Error estimation in fixed point iteration
Assume |g'(x)| ≤ L < 1 for all x ∈ I.
  x^[k+1] = g(x^[k]),   x* = g(x*)
By the mean value theorem,
  |x^[k+1] − x*| = |g(x^[k]) − g(x*)| = |g(x^[k]) − g(x^[k+1]) + g(x^[k+1]) − g(x*)|
                 ≤ L·|x^[k] − x^[k+1]| + L·|x^[k+1] − x*|

Error estimation...
  (1 − L)·|x^[k+1] − x*| ≤ L·|x^[k] − x^[k+1]|
Error estimate (computable error bound):
  |x^[k+1] − x*| ≤ (L/(1 − L))·|x^[k] − x^[k+1]|
Theorem If L[g] < 1, then the error in fixed point iteration is bounded by
  |x^[k+1] − x*| ≤ (L[g]/(1 − L[g]))·|x^[k] − x^[k+1]|

Example: The trapezoidal rule
Approximate the solution to y' = y²·cos t, y(0) = 1/2, t ∈ [0, 8π]. Taking 96 steps, solve the nonlinear equations with one (left) and four (right) fixed point iterations.
[Figure: absolute error vs t, one iteration per step (left) and four iterations per step (right)]

3. Newton's method
Newton's method solves f(x) = 0 using repeated linearizations. Linearize at the point (x^[k], f(x^[k]))!
[Figure: tangent-line construction of the Newton iteration]

Newton's method... Straight line equation
  y − f(x^[k]) = f'(x^[k])·(x − x^[k])
Define x = x^[k+1] as the point where y = 0, so that
  −f(x^[k]) = f'(x^[k])·(x^[k+1] − x^[k])
Solve for x^[k+1] to get Newton's method:
  x^[k+1] = x^[k] − f(x^[k])/f'(x^[k])
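As an illustration, a small Python sketch of this iteration applied to x − cos(x) = 0 from the first slide; the helper newton and its stopping rule are assumptions for the example, not part of the notes.

```python
import math

def newton(f, fprime, x0, tol=1e-12, maxit=20):
    """Newton iteration x <- x - f(x)/f'(x), stopped when the update is small."""
    x = x0
    for _ in range(maxit):
        dx = f(x) / fprime(x)
        x -= dx
        if abs(dx) <= tol:
            break
    return x

x_star = newton(lambda x: x - math.cos(x), lambda x: 1 + math.sin(x), 1.0)
print(x_star)   # ~ 0.739085; near the root each iteration roughly doubles the digits
```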

Newton's method. Alternative derivation
Expand f(x^[k+1]) in a Taylor series around x^[k]:
  f(x^[k+1]) = f(x^[k] + (x^[k+1] − x^[k])) ≈ f(x^[k]) + f'(x^[k])·(x^[k+1] − x^[k]) := 0
Newton's method:
  x^[k+1] = x^[k] − f(x^[k])/f'(x^[k])

Newton's method. Alternative derivation...
This holds also when f is vector valued (equation systems):
  f(x^[k]) + f'(x^[k])·(x^[k+1] − x^[k]) := 0
  x^[k+1] = x^[k] − (f'(x^[k]))^(−1)·f(x^[k])
Definition f'(x^[k]) is the Jacobian matrix of f, defined by f'(x) = {∂f_i/∂x_j}

Newton's method. Convergence
Write Newton's method as a fixed point iteration x^[k+1] = g(x^[k]) with iteration function
  g(x) := x − f(x)/f'(x)
Note Newton's method converges fast if f'(x*) ≠ 0, as
  g'(x*) = f(x*)·f''(x*)/f'(x*)² = 0
Expand g(x) in a Taylor series around x*:
  g(x^[k]) − g(x*) ≈ g'(x*)·(x^[k] − x*) + g''(x*)·(x^[k] − x*)²/2
  x^[k+1] − x* ≈ g''(x*)·(x^[k] − x*)²/2

Newton's method. Convergence...
Define the error by ε^[k] = x^[k] − x*; then ε^[k+1] ∝ (ε^[k])².
Newton's method is quadratically convergent!
Fixed point iterations are typically only linearly convergent: ε^[k+1] ∝ ε^[k]

Implicit Euler. Newton vs fixed point
As y_{n+1} = y_n + h·f(y_{n+1}), we need to solve an equation of the form
  y = h·f(y) + ψ
Note All implicit methods lead to an equation of this form!
Theorem Fixed point iterations converge if L[hf] < 1, restricting the step size to h < 1/L[f]!
Stiff equations have L[hf] ≫ 1, so fixed point iterations will not converge; it is necessary to use Newton's method!
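The contrast can be seen in a small sketch for a scalar autonomous problem y' = f(y) with known derivative f'; the fixed iteration counts and the test value of λ are illustrative assumptions.

```python
def implicit_euler_step_newton(f, fprime, yn, h, iters=5):
    """Solve y = yn + h*f(y) with a few Newton iterations."""
    y = yn                        # initial guess
    for _ in range(iters):
        F = y - h * f(y) - yn     # residual of y - h*f(y) - y_n = 0
        dF = 1.0 - h * fprime(y)
        y -= F / dF               # Newton update
    return y

def implicit_euler_step_fixed_point(f, yn, h, iters=5):
    """Solve the same equation by fixed point iteration y <- yn + h*f(y)."""
    y = yn
    for _ in range(iters):
        y = yn + h * f(y)         # converges only if h*L[f] < 1
    return y

# Mildly stiff linear test problem y' = lambda*y with h*lambda = -5:
lam, h, y0 = -50.0, 0.1, 1.0
print(implicit_euler_step_newton(lambda y: lam * y, lambda y: lam, y0, h))   # 1/6
print(implicit_euler_step_fixed_point(lambda y: lam * y, y0, h))             # diverging iterates
```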

Convergence order and rate
Definition The convergence order is p, with (asymptotic) error constant C_p, if
  0 < lim_{k→∞} |ε^[k+1]| / |ε^[k]|^p = C_p < ∞
Special cases:
  p = 1  Linear convergence. Example: fixed point iteration, C_1 = |g'(x*)|
  p = 2  Quadratic convergence. Example: Newton iteration, C_2 = |f''(x*)/(2·f'(x*))|

4. Quadrature (integration) formulas
Numerical quadrature is the approximation of definite integrals
  I(f) = ∫_a^b f(x) dx
We replace the integral by a finite sum: the integrand is sampled at a finite number of points,
  I(f) = Σ_{i=1}^n w_i·f(x_i) + R_n
where R_n = I(f) − Σ_{i=1}^n w_i·f(x_i) is the quadrature error.
Numerical method: I(f) ≈ Σ_{i=1}^n w_i·f(x_i)
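A minimal sketch of a rule of this form, the composite trapezoidal rule, with the nodes x_i and weights w_i written out explicitly; the helper name and test integral are illustrative.

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n subintervals: sum_i w_i * f(x_i)."""
    h = (b - a) / n
    x = [a + i * h for i in range(n + 1)]        # nodes
    w = [h / 2] + [h] * (n - 1) + [h / 2]        # weights
    return sum(wi * f(xi) for wi, xi in zip(w, x))

print(trapezoid(math.sin, 0.0, math.pi, 200), 2.0)   # approximation vs exact value
```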

5. Runge-Kutta methods
To solve the IVP y' = f(t, y), t ≥ t_0, y(t_0) = y_0, we use a quadrature formula to approximate the integral in
  y(t_{n+1}) = y(t_n) + ∫_{t_n}^{t_{n+1}} f(τ, y(τ)) dτ
             ≈ y(t_n) + h·Σ_{j=1}^s b_j·f(t_n + c_j·h, y(t_n + c_j·h))
Let {Y_j} denote the numerical approximations to {y(t_n + c_j·h)}. A Runge-Kutta method then has the form
  y_{n+1} = y_n + h·Σ_{j=1}^s b_j·f(t_n + c_j·h, Y_j)

Stage values and stage derivatives
The vectors Y_j are called stage values.
The vectors Y'_j = f(t_n + c_j·h, Y_j) are called stage derivatives.
Stage values and stage derivatives are related through
  Y_i = y_n + h·Σ_{j=1}^{i−1} a_{ij}·Y'_j
and the method advances one step through
  y_{n+1} = y_n + h·Σ_{j=1}^s b_j·Y'_j

The RK matrix, weights, nodes and stages
A = {a_ij} is the RK matrix, b = [b_1 b_2 ... b_s]^T is the weight vector, c = [c_1 c_2 ... c_s]^T are the nodes, and the method has s stages.
The Butcher tableau of an explicit RK method is
  0   |  0    0   ...  0
  c_2 | a_21  0   ...  0
  ... | ...            ...
  c_s | a_s1 a_s2 ...  0
      | b_1  b_2  ... b_s
or, compactly,
  c | A
    | b^T
Simplifying assumption: c_i = Σ_{j=1}^{i−1} a_{ij}, i = 2, 3, ..., s

Simplest case: a 2-stage ERK
  Y'_1 = f(t_n, y_n)
  Y'_2 = f(t_n + c_2·h, y_n + h·a_21·Y'_1)
  y_{n+1} = y_n + h·[b_1·Y'_1 + b_2·Y'_2]
Butcher tableau:
  0   |  0    0
  c_2 | a_21  0
      | b_1  b_2
with c_2 = a_21

2-stage ERK...
Using Y'_1 = f(t_n, y_n), expand in a Taylor series around (t_n, y_n):
  Y'_2 = f(t_n + c_2·h, y_n + h·a_21·f(t_n, y_n)) = f + h·[c_2·f_t + a_21·f_y·f] + O(h²)
Inserting into y_{n+1} = y_n + h·[b_1·Y'_1 + b_2·Y'_2], we get
  y_{n+1} = y_n + h·(b_1 + b_2)·f + h²·b_2·(c_2·f_t + a_21·f_y·f) + O(h³)
Taylor expansion of the exact solution:
  y' = f,   y'' = f_t + f_y·y' = f_t + f_y·f
  y(t + h) = y + h·f + (h²/2)·(f_t + f_y·f) + O(h³)

2-stage ERK...
Matching terms in
  y_{n+1} = y_n + h·(b_1 + b_2)·f + h²·b_2·(c_2·f_t + a_21·f_y·f) + O(h³)
  y(t + h) = y + h·f + (h²/2)·(f_t + f_y·f) + O(h³)
and taking c_2 = a_21, we get the conditions for order 2:
  b_1 + b_2 = 1   (consistency)
  b_2·c_2 = 1/2
Note All consistent RK methods are convergent!
Second order, 2-stage ERK methods have a Butcher tableau
  0      |   0     0
  1/(2b) | 1/(2b)  0
         | 1 − b   b

Example 1. The modified Euler method
Put b = 1 to get
  0   |  0   0
  1/2 | 1/2  0
      |  0   1
  Y'_1 = f(t_n, y_n)
  Y'_2 = f(t_n + h/2, y_n + h·Y'_1/2)
  y_{n+1} = y_n + h·Y'_2
Second order explicit Runge-Kutta (ERK) method

Example 2. Heun's method
Put b = 1/2 to get
  0 |  0   0
  1 |  1   0
    | 1/2 1/2
  Y'_1 = f(t_n, y_n)
  Y'_2 = f(t_n + h, y_n + h·Y'_1)
  y_{n+1} = y_n + h·(Y'_1 + Y'_2)/2
Second order ERK; compare to the trapezoidal rule!
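A minimal sketch of one Heun step, applied to the test problem y' = y²·cos t used elsewhere in these notes (the helper name heun_step and the step size are illustrative).

```python
import math

def heun_step(f, t, y, h):
    k1 = f(t, y)                # Y'_1
    k2 = f(t + h, y + h * k1)   # Y'_2
    return y + 0.5 * h * (k1 + k2)

# Integrate y' = y^2 * cos(t), y(0) = 1/2; the exact solution is y(t) = 1/(2 - sin t).
f = lambda t, y: y**2 * math.cos(t)
t, y, h = 0.0, 0.5, 0.01
for _ in range(100):            # 100 steps of size 0.01, i.e. up to t = 1
    y = heun_step(f, t, y, h)
    t += h
print(y, 1.0 / (2.0 - math.sin(1.0)))   # second-order agreement with the exact value
```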

Third order 3-stage ERK
Conditions for 3rd order:
  a_21 = c_2;  a_31 + a_32 = c_3
  b_1 + b_2 + b_3 = 1
  b_2·c_2 + b_3·c_3 = 1/2
  b_2·c_2² + b_3·c_3² = 1/3
  b_3·a_32·c_2 = 1/6
Classical RK:
  0   |  0   0   0
  1/2 | 1/2  0   0
  1   | −1   2   0
      | 1/6 2/3 1/6
Nyström scheme:
  0   |  0   0   0
  2/3 | 2/3  0   0
  2/3 |  0  2/3  0
      | 1/4 3/8 3/8

Exercise
Construct the Butcher tableau for the 3-stage Heun method:
  Y'_1 = f(t_n, y_n)
  Y'_2 = f(t_n + h/3, y_n + h·Y'_1/3)
  Y'_3 = f(t_n + 2h/3, y_n + 2h·Y'_2/3)
  y_{n+1} = y_n + h·(Y'_1 + 3·Y'_3)/4
Is it of order 3?

Classic RK4: 4th order, 4-stage ERK
The original RK method (1895):
  Y'_1 = f(t_n, y_n)
  Y'_2 = f(t_n + h/2, y_n + h·Y'_1/2)
  Y'_3 = f(t_n + h/2, y_n + h·Y'_2/2)
  Y'_4 = f(t_n + h, y_n + h·Y'_3)
  y_{n+1} = y_n + (h/6)·(Y'_1 + 2·Y'_2 + 2·Y'_3 + Y'_4)

Classic 4th order RK4...
Butcher tableau:
  0   |
  1/2 | 1/2
  1/2 |  0  1/2
  1   |  0   0   1
      | 1/6 1/3 1/3 1/6
s-stage ERK methods of order p = s exist only for s ≤ 4 (i.e. there is no 5-stage ERK of order 5)
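A sketch of the classic RK4 step; the convergence check below (halving h should reduce the error by roughly 2⁴ = 16) is an illustrative experiment, not from the slides.

```python
import math

def rk4_step(f, t, y, h):
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h * k1 / 2)
    k3 = f(t + h / 2, y + h * k2 / 2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

f = lambda t, y: y**2 * math.cos(t)           # same test problem as before
exact = lambda t: 1.0 / (2.0 - math.sin(t))
for n in (25, 50):                            # integrate to t = 1 with n steps
    t, y, h = 0.0, 0.5, 1.0 / n
    for _ in range(n):
        y = rk4_step(f, t, y, h)
        t += h
    print(n, abs(y - exact(1.0)))             # error drops by about a factor 16
```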

Order conditions
An s-stage ERK method has s + s(s−1)/2 coefficients to choose, but the order conditions are many.
Number of coefficients:
  stages s        1  2  3   4   5   6   7   8   9   10
  # coefficients  1  3  6  10  15  21  28  36  45   55
Number of order conditions:
  order p         1  2  3   4   5   6   7    8    9    10
  # conditions    1  2  4   8  17  37  85  200  486  1205
Maximum order vs. minimum number of stages:
  order p         1  2  3   4   5   6   7    8
  min stages      1  2  3   4   6   7   9   10

6. Embedded RK methods
Two methods in one Butcher tableau (RK34):
  Y'_1 = f(t_n, y_n)
  Y'_2 = f(t_n + h/2, y_n + h·Y'_1/2)
  Y'_3 = f(t_n + h/2, y_n + h·Y'_2/2)
  Z'_3 = f(t_n + h, y_n − h·Y'_1 + 2h·Y'_2)
  Y'_4 = f(t_n + h, y_n + h·Y'_3)
  y_{n+1} = y_n + (h/6)·(Y'_1 + 2·Y'_2 + 2·Y'_3 + Y'_4)   order 4
  z_{n+1} = y_n + (h/6)·(Y'_1 + 4·Y'_2 + Z'_3)            order 3
The difference y_{n+1} − z_{n+1} can be used as an error estimate.

Adaptive RK methods
Example Use an embedded pair, e.g. RK34.
Local error estimate r_{n+1} := |y_{n+1} − z_{n+1}| = O(h⁴)
Adjust the step size h so that the local error estimate equals a prescribed tolerance TOL.
Simplest step size change scheme:
  h_{n+1} = (TOL/r_{n+1})^(1/p)·h_n
makes r_n ≈ TOL.
Adaptivity using local error control
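A sketch combining the RK34 pair with the simple controller above; the accept/reject policy and the safety factor 0.9 are illustrative choices beyond what the slide specifies.

```python
import math

def rk34_step(f, t, y, h):
    """One step of the embedded RK34 pair; returns the order-4 result and |y - z|."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h * k1 / 2)
    k3 = f(t + h / 2, y + h * k2 / 2)
    z3 = f(t + h, y - h * k1 + 2 * h * k2)
    k4 = f(t + h, y + h * k3)
    y4 = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)   # order 4
    z  = y + h / 6 * (k1 + 4 * k2 + z3)            # order 3
    return y4, abs(y4 - z)

def adaptive_rk34(f, t0, y0, t_end, tol=1e-6, h=0.1):
    t, y = t0, y0
    while t < t_end:
        h = min(h, t_end - t)
        y_new, err = rk34_step(f, t, y, h)
        if err <= tol:                              # accept the step
            t, y = t + h, y_new
        # controller from the slide with p = 4, plus a safety factor (illustrative)
        h *= 0.9 * (tol / max(err, 1e-14)) ** 0.25
    return y

print(adaptive_rk34(lambda t, y: y**2 * math.cos(t), 0.0, 0.5, 8 * math.pi))
print(1.0 / (2.0 - math.sin(8 * math.pi)))          # exact value 0.5
```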

7. Implicit Runge-Kutta methods (IRK)
  Y_1 = y_n + h·Σ_{j=1}^s a_{1j}·f(t_n + c_j·h, Y_j)
  Y_2 = y_n + h·Σ_{j=1}^s a_{2j}·f(t_n + c_j·h, Y_j)
  ...
  Y_s = y_n + h·Σ_{j=1}^s a_{sj}·f(t_n + c_j·h, Y_j)
  y_{n+1} = y_n + h·Σ_{j=1}^s b_j·f(t_n + c_j·h, Y_j)

Implicit Runge-Kutta methods...
In stage value / stage derivative form:
  Y_i = y_n + h·Σ_{j=1}^s a_{ij}·Y'_j
  Y'_j = f(t_n + c_j·h, Y_j)
  y_{n+1} = y_n + h·Σ_{j=1}^s b_j·Y'_j

1-stage IRK methods
Implicit Euler (order 1):
  1 | 1
    | 1
Implicit midpoint method (order 2):
  1/2 | 1/2
      |  1
The general 1-stage IRK
  Y'_1 = f(t_n + c_1·h, y_n + h·a_11·Y'_1)
  y_{n+1} = y_n + h·b_1·Y'_1
may be written as
  y_{n+1} = y_n + h·b_1·f(t_n + c_1·h, ((b_1 − a_11)/b_1)·y_n + (a_11/b_1)·y_{n+1})
Taylor expansion:
  y_{n+1} = y + h·b_1·f(t_n + c_1·h, y + a_11·h·f + O(h²))
          = y + h·b_1·f + h²·b_1·(c_1·f_t + a_11·f_y·f) + O(h³)

Taylor expansions for 1-stage IRK
  y(t_{n+1}) = y + h·f + (h²/2)·(f_t + f_y·f) + O(h³)
  y_{n+1}    = y + h·b_1·f + h²·b_1·(c_1·f_t + a_11·f_y·f) + O(h³)
Condition for order 1 (consistency): b_1 = 1
Conditions for order 2: c_1 = a_11 = 1/2
Conclusion Implicit Euler is of order 1, and the implicit midpoint method is the only 1-stage IRK of order 2.

8. The stability function
Applying an IRK to the linear test equation y' = λy, we get
  h·Y'_i = hλ·(y_n + Σ_{j=1}^s a_{ij}·h·Y'_j),   i = 1, ..., s
Let hY' = [h·Y'_1 ... h·Y'_s]^T and 1 = [1 1 ... 1]^T ∈ R^s; then
  (I − hλ·A)·hY' = hλ·1·y_n   so   hY' = hλ·(I − hλ·A)^(−1)·1·y_n
  y_{n+1} = y_n + Σ_{j=1}^s b_j·h·Y'_j = [1 + hλ·b^T·(I − hλ·A)^(−1)·1]·y_n

The stability function
Theorem For every Runge-Kutta method applied to the linear test equation y' = λy, we have y_{n+1} = R(hλ)·y_n, where the rational function
  R(z) = 1 + z·b^T·(I − z·A)^(−1)·1
If the method is explicit, then R(z) is a polynomial of degree s.
The function R(z) is called the method's stability function.
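A quick numerical check of this formula, here (as an illustration) for the classic RK4 tableau, where R(z) should coincide with the degree-4 Taylor polynomial of e^z.

```python
import numpy as np

def stability_function(A, b, z):
    """Evaluate R(z) = 1 + z * b^T (I - zA)^{-1} 1 for a Runge-Kutta tableau."""
    s = len(b)
    one = np.ones(s)
    return 1 + z * b @ np.linalg.solve(np.eye(s) - z * A, one)

A = np.array([[0.0, 0.0, 0.0, 0.0],
              [0.5, 0.0, 0.0, 0.0],
              [0.0, 0.5, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])
b = np.array([1/6, 1/3, 1/3, 1/6])

z = -1.0 + 0.5j
print(stability_function(A, b, z))
print(1 + z + z**2 / 2 + z**3 / 6 + z**4 / 24)   # same value
```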

A-stability of RK methods
Definition The method's stability region is the set D = {z ∈ C : |R(z)| ≤ 1}
Theorem If R(z) maps all of C⁻ (the left half-plane) into the unit circle, then the method is A-stable.
Corollary No explicit RK method is A-stable (for an ERK, R(z) is a polynomial, and |P(z)| → ∞ as |z| → ∞)

A-stable methods and the Maximum Principle
Theorem |R(z)| ≤ 1 for all z ∈ C⁻ if and only if all the poles of R have positive real parts and |R(iω)| ≤ 1 for all ω ∈ R
This is the Maximum Principle in complex analysis.
Example
  0   | 1/4  −1/4
  2/3 | 1/4  5/12
      | 1/4  3/4
  Y'_1 = f(y_n + h·Y'_1/4 − h·Y'_2/4)
  Y'_2 = f(y_n + h·Y'_1/4 + 5h·Y'_2/12)
  y_{n+1} = y_n + h·(Y'_1 + 3·Y'_2)/4

Example...
Applied to the test equation, we get
  y_{n+1} = [(1 + hλ/3) / (1 − 2hλ/3 + (hλ)²/6)]·y_n
with poles 2 ± i·√2 ∈ C⁺, and
  |R(iω)|² = (1 + ω²/9) / (1 + ω²/9 + ω⁴/36) ≤ 1
Conclusion |R(z)| ≤ 1 for all z ∈ C⁻. The method is A-stable.

9. Linear Multistep Methods
A multistep method is a method of the type
  y_{n+1} = Φ(f, h, y_0, y_1, ..., y_n)
using values from several previous steps.
  Explicit Euler     y_{n+1} = y_n + h·f(t_n, y_n)
  Trapezoidal rule   y_{n+1} = y_n + h·(f(t_n, y_n) + f(t_{n+1}, y_{n+1}))/2
  Implicit Euler     y_{n+1} = y_n + h·f(t_{n+1}, y_{n+1})
are all one-step methods, but also LM methods.

Multistep methods and difference equations
A k-step multistep method replaces the ODE y' = f(t, y) by a (finite) difference equation
  Σ_{j=0}^k a_{k−j}·y_{n−j} = h·Σ_{j=0}^k b_{k−j}·f(t_{n−j}, y_{n−j})
Generating (characteristic) polynomials:
  ρ(w) = Σ_{m=0}^k a_m·w^m     σ(w) = Σ_{m=0}^k b_m·w^m
Normalization: a_k = 1 or σ(1) = 1 (choose your preference)
If b_k = 0 the method is explicit; if b_k ≠ 0 it is implicit.
  EE: ρ(w) = w − 1, σ(w) = 1;   IE: ρ(w) = w − 1, σ(w) = w

Lagrange interpolation
Given a grid {t_0, t_1, ..., t_k}, construct a degree-k polynomial basis Φ_i(t), i = 0, 1, ..., k, such that Φ_i(t_j) = δ_ij (the Kronecker delta).
If the values z_j = z(t_j) are known for the function z(t), then
  P(t) = Σ_{j=0}^k Φ_j(t)·z_j
interpolates z(t) on the grid: P(t_j) = z_j and P(t) ≈ z(t) for all t.
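A minimal sketch of this construction (the helper names are illustrative); for data sampled from a polynomial of degree ≤ k the interpolant reproduces it exactly.

```python
def lagrange_basis(ts, i, t):
    # Phi_i(t) = prod_{j != i} (t - t_j) / (t_i - t_j), so Phi_i(t_j) = delta_ij
    p = 1.0
    for j, tj in enumerate(ts):
        if j != i:
            p *= (t - tj) / (ts[i] - tj)
    return p

def interpolate(ts, zs, t):
    # P(t) = sum_j Phi_j(t) * z_j
    return sum(lagrange_basis(ts, j, t) * zj for j, zj in enumerate(zs))

ts = [0.0, 0.5, 1.0]
zs = [ti**2 for ti in ts]                # sample z(t) = t^2 on the grid
print(interpolate(ts, zs, 0.3), 0.09)    # exact, since deg z <= k
```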

Adams methods (J.C. Adams, 1880s)
Suppose we have the first n + k approximations y_m ≈ y(t_m), m = 0, 1, ..., n + k − 1.
Rewrite y' = f(t, y) by integration:
  y(t_{n+k}) − y(t_{n+k−1}) = ∫_{t_{n+k−1}}^{t_{n+k}} f(τ, y(τ)) dτ
Adams methods approximate the integrand by an interpolation polynomial on t_n, t_{n−1}, ...:
  f(τ, y(τ)) ≈ P(τ)

Adams-Bashforth methods (explicit)
Interpolate the f values with a polynomial of degree k − 1:
  P(t_{n+j}) = f(t_{n+j}, y(t_{n+j})),   j = 0, ..., k − 1
Then P(τ) = f(τ, y(τ)) + O(h^k) for t ∈ [t_n, t_{n+k}], and
  y(t_{n+k}) = y(t_{n+k−1}) + ∫_{t_{n+k−1}}^{t_{n+k}} P(τ) dτ + O(h^{k+1})
The k-step Adams-Bashforth method is the order-k method
  y_{n+k} = y_{n+k−1} + h·Σ_{j=0}^{k−1} b_j·f(t_{n+j}, y_{n+j})
where b_j = h^(−1)·∫_{t_{n+k−1}}^{t_{n+k}} Φ_j(τ) dτ

Coefficients of AB1
AB1: for k = 1,
  y_{n+1} = y_n + h·b_0·f(t_n, y_n)
where
  b_0 = h^(−1)·∫_{t_n}^{t_{n+1}} Φ_0(τ) dτ = h^(−1)·∫_{t_n}^{t_{n+1}} 1 dτ = 1
so
  y_{n+1} = y_n + h·f(t_n, y_n)
Conclusion AB1 is the explicit Euler method.

Coefficients of AB2
Here Φ_1(t) = (t − t_n)/(t_{n+1} − t_n) and Φ_0(t) = (t − t_{n+1})/(t_n − t_{n+1}).
AB2: for k = 2,
  y_{n+2} = y_{n+1} + h·[b_1·f(t_{n+1}, y_{n+1}) + b_0·f(t_n, y_n)]
with
  b_0 = h^(−1)·∫_{t_{n+1}}^{t_{n+2}} Φ_0(τ) dτ = −1/2
  b_1 = h^(−1)·∫_{t_{n+1}}^{t_{n+2}} Φ_1(τ) dτ = 3/2
so
  y_{n+2} = y_{n+1} + h·[(3/2)·f(t_{n+1}, y_{n+1}) − (1/2)·f(t_n, y_n)]

Initializing an Adams method
The first step of AB2 is
  y_2 = y_1 + h·[(3/2)·f(t_1, y_1) − (1/2)·f(t_0, y_0)]
so we need the values of y_0 and y_1 to start. y_0 is obtained from the initial value, but y_1 has to be computed with a one-step method.
Implementing AB2: we may use AB1 for the first step,
  y_1 = y_0 + h·f(t_0, y_0)
  y_{n+2} = y_{n+1} + h·[(3/2)·f(t_{n+1}, y_{n+1}) − (1/2)·f(t_n, y_n)],   n ≥ 0
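A sketch of this startup strategy, applied to the test problem from the next slide; the helper ab2 and the printed comparison are illustrative.

```python
import math

def ab2(f, t0, y0, h, n_steps):
    """AB2 started with one explicit Euler (AB1) step; returns the grids t, y."""
    t, y = [t0], [y0]
    y.append(y[0] + h * f(t[0], y[0]))       # AB1 step to generate y_1
    t.append(t0 + h)
    for _ in range(n_steps - 1):
        fn1 = f(t[-1], y[-1])                # f(t_{n+1}, y_{n+1})
        fn0 = f(t[-2], y[-2])                # f(t_n,     y_n)
        y.append(y[-1] + h * (1.5 * fn1 - 0.5 * fn0))
        t.append(t[-1] + h)
    return t, y

f = lambda t, y: y**2 * math.cos(t)
t, y = ab2(f, 0.0, 0.5, math.pi / 60, 480)   # 480 steps of pi/60 reach t = 8*pi
print(y[-1], 1.0 / (2.0 - math.sin(t[-1])))  # numerical vs exact value at t = 8*pi
```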

Example: AB1 vs AB2
Approximate the solution to y' = y²·cos t, y(0) = 1/2, t ∈ [0, 8π], with h = π/6 and h = π/60.
[Figure: solutions and absolute errors for AB1 (top) and AB2 (bottom)]

How do we check the order of a multistep method?
A multistep method is of order of consistency p if the local error satisfies
  Σ_{j=0}^k a_j·y(t_{n+j}) − h·Σ_{j=0}^k b_j·y'(t_{n+j}) = O(h^{p+1})
Try whether the formula holds exactly for polynomials y(t) = 1, t, t², t³, ...
Insert y = t^m and y' = m·t^{m−1} into the formula, taking t_{n+j} = jh:
  Σ_{j=0}^k a_j·(jh)^m − h·Σ_{j=0}^k b_j·m·(jh)^{m−1} = h^m·Σ_{j=0}^k (a_j·j^m − b_j·m·j^{m−1})

Order conditions for multistep methods
Theorem A k-step method is of consistency order p if and only if it satisfies the conditions
  Σ_{j=0}^k j^m·a_j = m·Σ_{j=0}^k j^{m−1}·b_j,   m = 0, 1, ..., p
  Σ_{j=0}^k j^{p+1}·a_j ≠ (p + 1)·Σ_{j=0}^k j^p·b_j
Then the multistep formula holds exactly for all polynomials of degree p or less. (Problems with y' = P(t) are solved exactly.)
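These conditions are easy to check numerically. The sketch below (the helper multistep_order is an illustrative construct, not from the notes) returns the consistency order of a method given its coefficients a_j, b_j.

```python
def multistep_order(a, b, tol=1e-12):
    """Largest p such that the order conditions hold for m = 0, ..., p."""
    k = len(a) - 1
    p = -1
    for m in range(0, 2 * k + 2):
        lhs = sum(a[j] * j**m for j in range(k + 1))
        rhs = 0.0 if m == 0 else m * sum(b[j] * j**(m - 1) for j in range(k + 1))
        if abs(lhs - rhs) > tol:
            break
        p = m
    return p

# AB2 as a 2-step method: y_{n+2} - y_{n+1} = h*(3/2 f_{n+1} - 1/2 f_n)
print(multistep_order([0, -1, 1], [-0.5, 1.5, 0]))   # 2
# BDF2: 3/2 y_{n+2} - 2 y_{n+1} + 1/2 y_n = h*f_{n+2}
print(multistep_order([0.5, -2, 1.5], [0, 0, 1]))    # 2
```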

The root condition
Definition A polynomial ρ satisfies the root condition if all of its zeros have moduli less than or equal to one, and the zeros of unit modulus are simple.
Examples
  ρ(w) = (w − 1)(w − 0.5)(w + 0.9)
  ρ(w) = (w − 1)(w + 1)
  ρ(w) = (w − 1)²(w − 0.5)
  ρ(w) = (w − 1)(w − 2)(w + 2)
  ρ(w) = (w − 1)(w² + 0.25)
All Adams methods have ρ(w) = w^{k−1}·(w − 1)

The Dahlquist equivalence theorem
Definition A method is zero-stable if ρ satisfies the root condition.
Theorem A multistep method is convergent if and only if it is zero-stable and consistent of order p ≥ 1 (without proof)
Example k-step Adams-Bashforth methods are of order k and have ρ(w) = w^{k−1}·(w − 1), so they are convergent.

Dahlquist's first barrier
Theorem The maximal order of a zero-stable k-step method is
  p = k for explicit methods
  p = k + 1 (k odd) or p = k + 2 (k even) for implicit methods

Backward differentiation formula of order 2
Construct a 2-step method of order 2 of the form
  α_2·y_{n+2} + α_1·y_{n+1} + α_0·y_n = h·f(t_{n+2}, y_{n+2})
Order conditions for p = 2:
  α_2 + α_1 + α_0 = 0;   2α_2 + α_1 = 1;   4α_2 + α_1 = 4
giving α_2 = 3/2, α_1 = −2, α_0 = 1/2, and
  ρ(w) = (3/2)·(w − 1)·(w − 1/3)
BDF2 is convergent.
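A sketch of BDF2 on the linear test equation y' = λy, where the implicit equation can be solved in closed form (for a nonlinear f one would use Newton's method as discussed earlier); the implicit Euler startup and the convergence check are illustrative choices.

```python
import math

def bdf2_linear(lam, y0, h, n_steps):
    y_prev = y0
    y_curr = y_prev / (1 - h * lam)          # implicit Euler starting step
    for _ in range(n_steps - 1):
        # (3/2) y_{n+2} - 2 y_{n+1} + (1/2) y_n = h*lam*y_{n+2}, solved for y_{n+2}
        y_next = (2 * y_curr - 0.5 * y_prev) / (1.5 - h * lam)
        y_prev, y_curr = y_curr, y_next
    return y_curr

lam, t_end = -2.0, 1.0
for n in (20, 40):                           # halving h: error drops ~4x (order 2)
    err = abs(bdf2_linear(lam, 1.0, t_end / n, n) - math.exp(lam * t_end))
    print(n, err)
```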

Backward differentiation formulas
The backward difference operator is defined as
  ∇⁰y_{n+k} = y_{n+k},   ∇^j y_{n+k} = ∇^{j−1}y_{n+k} − ∇^{j−1}y_{n+k−1},   j ≥ 1
Theorem (without proof): The k-step BDF method
  Σ_{j=1}^k (1/j)·∇^j y_{n+k} = h·f(t_{n+k}, y_{n+k})
is convergent of order p = k if and only if 1 ≤ k ≤ 6.
Note BDF methods are suitable for stiff problems.

BDF1-6 stability regions
[Figure: stability regions of the BDF1-6 methods in the complex hλ plane; the methods are stable outside the indicated areas]

A-stability of multistep methods
Applying a multistep method to the linear test equation y' = λy produces a difference equation
  Σ_{j=0}^k a_j·y_{n+j} = hλ·Σ_{j=0}^k b_j·y_{n+j}
The characteristic equation (with z := hλ)
  ρ(w) − z·σ(w) = 0
has k roots w_j(z). The method is A-stable iff
  Re z ≤ 0  ⇒  |w_j(z)| ≤ 1, with simple roots of unit modulus (root condition)

Dahlquist's second barrier
Theorem (without proof): The highest order of an A-stable multistep method is p = 2. Of all 2nd order A-stable multistep methods, the trapezoidal rule has the smallest error.
Note There is no such order restriction for Runge-Kutta methods, which can be A-stable for arbitrarily high orders.
A multistep method can be useful although it isn't A-stable.