Explicit One-Step Methods


Chapter 1 Explicit One-Step Methods

Remark 1.1 (Contents). This class presents methods for the numerical solution of explicit systems of initial value problems for ordinary differential equations of first order,

    y'(x) = f(x, y(x)),   y(x_0) = y_0.

For simplicity of presentation, mostly initial value problems for scalar ordinary differential equations of first order,

    y'(x) = f(x, y(x)),   y(x_0) = y_0,   (1.1)

are considered. The extension of the results and the methods to systems is generally straightforward.

1.1 Consistency and Convergence

Definition 1.2 (Grid, step size). A grid is a decomposition I_h of the interval I = [x_0, x_e],

    I_h = {x_0, x_1, ..., x_N = x_e},   with x_0 < x_1 < ... < x_N.

The differences between neighboring grid points, h_k = x_{k+1} - x_k, are called step sizes.

Definition 1.3 (Explicit and implicit methods). Let y(x_k) denote the solution of (1.1) at the node x_k and y_k a numerical approximation of y(x_k). A numerical method for the solution of (1.1) on a grid I_h is called explicit if the approximation y_{k+1} at x_{k+1} can be computed directly from already computed values y_i, i ≤ k. Otherwise, the method is called implicit. Implicit methods require in each step the solution of a generally nonlinear equation for computing y_{k+1}.

Definition 1.4 (One-step method, incremental function). A one-step method for the computation of an approximation y_{k+1} of the solution of (1.1) on a grid I_h has the form

    y_{k+1} = y_k + h_k Φ(x, y, h_k),   k = 0, 1, ...,   y_0 = y(x_0).   (1.2)

Here, Φ(·,·,·) is called the incremental function of the one-step method.

Example 1.5 (One-step methods, incremental functions). The explicit or forward Euler method,

    y_{k+1} = y_k + h_k f(x_k, y_k),   k = 0, 1, 2, ...,   y_0 = y(x_0),

is an explicit one-step method with the incremental function Φ(x, y, h_k) = f(x_k, y_k). The computation of y_{k+1} requires only the substitution of already computed values into the function f(x, y).

The implicit or backward Euler method,

    y_{k+1} = y_k + h_k f(x_{k+1}, y_{k+1}),   k = 0, 1, 2, ...,   y_0 = y(x_0),

is an implicit one-step method with the incremental function Φ(x, y, h_k) = f(x_{k+1}, y_{k+1}). One has to solve an equation for computing y_{k+1}; the complexity of this step depends on f(x, y).

Remark 1.6 (Representation of implicit one-step methods). Explicit one-step methods have an incremental function of the form Φ(x_k, y_k, h_k). For the considerations in this section, one can adopt the point of view that implicit one-step methods can also be written as explicit one-step methods, but generally one does not know the concrete form of the incremental function.

Example 1.7 (Incremental function of the implicit Euler method). The incremental function of the implicit Euler method can be written in the form

    Φ(x, y, h) = f(x + h, y + h Φ(x, y, h)),

which formally allows the representation of this method as an explicit one-step scheme.

Definition 1.8 (Local error). Let ŷ_{k+1} be the result of one step of an explicit one-step method (1.2) with the initial value on the solution y(x), i.e.,

    ŷ_{k+1} = y(x_k) + h_k Φ(x_k, y(x_k), h_k).

Then

    le(x_{k+1}) = le_{k+1} = y(x_{k+1}) - ŷ_{k+1}

is called the local error, see Figure 1.1.

Remark 1.9 (The local error). In the literature, sometimes

    (y(x_{k+1}) - y(x_k))/h_k - Φ(x_k, y(x_k), h_k)

is defined to be the local error. One should require of a reasonable method that the local error is small in an appropriate sense.
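The two Euler methods above can be sketched in a few lines of Python. This is an illustrative implementation, not taken from the notes; the implicit equation of the backward Euler method is solved here by simple fixed-point iteration, which is adequate only for small step sizes (a Newton iteration would be the more robust choice).

```python
import math

def forward_euler(f, x0, y0, h, n):
    """Explicit Euler: y_{k+1} = y_k + h * f(x_k, y_k)."""
    x, y = x0, y0
    for _ in range(n):
        y += h * f(x, y)
        x += h
    return y

def backward_euler(f, x0, y0, h, n, fp_iters=50):
    """Implicit Euler: y_{k+1} = y_k + h * f(x_{k+1}, y_{k+1}), solved
    here by fixed-point iteration (adequate only for small h)."""
    x, y = x0, y0
    for _ in range(n):
        x_next = x + h
        z = y                      # initial guess: previous value
        for _ in range(fp_iters):
            z = y + h * f(x_next, z)
        x, y = x_next, z
    return y

# test problem y' = -y, y(0) = 1; exact solution y(x) = exp(-x)
f = lambda x, y: -y
n = 1000
yf = forward_euler(f, 0.0, 1.0, 1.0 / n, n)
yb = backward_euler(f, 0.0, 1.0, 1.0 / n, n)
print(yf, yb)  # both close to exp(-1) ≈ 0.36788
```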

Figure 1.1: The local error.

Definition 1.10 (Consistent method). Let y(x) be the solution of the initial value problem (1.1), h_max = max_k h_k, and S := {(x, y) : x ∈ [x_0, x_e], y ∈ ℝ}. The one-step method (1.2) is said to be consistent if for all f ∈ C(S) that satisfy in S a Lipschitz condition with respect to y it holds

    lim_{h_max → 0} max_{x_k ∈ I_h} ( |le(x_{k+1})| / h_k ) = 0

or

    lim_{h_max → 0} max_{x_k ∈ I_h} | f(x_k, y(x_k)) - Φ(x_k, y(x_k), h_k) | = 0.

Both conditions are equivalent (problem for the exercises).

Remark 1.11 (Approximation of the derivative by the incremental function). For bounded incremental functions it is obvious that the local error converges to zero as h_max → 0, because in this case h_k → 0 and ŷ_{k+1} → y(x_k). Consistency requires more, namely that the incremental function approximates the derivative of the solution sufficiently well:

    le(x_{k+1})/h_k = (y(x_{k+1}) - y(x_k))/h_k - Φ(x_k, y(x_k), h_k) ≈ y'(x_k) - Φ(x_k, y(x_k), h_k).

Example 1.12 (Consistency of the explicit Euler method). For the explicit Euler method, Φ(x_k, y(x_k), h_k) = f(x_k, y(x_k)). Hence, the second condition from Definition 1.10 is satisfied and the method is consistent.

Remark 1.13 (Quality of the approximation of the incremental function). For practical purposes, not only the consistency itself but the quality of the approximation of the derivative by the incremental function is essential. The quality allows a comparison of different one-step methods. For simplicity of presentation, let h_k = h for all k.

Definition 1.14 (Order of consistency). An explicit one-step method (1.2) has consistency order p ∈ ℕ if p is the largest natural number such that for all functions f ∈ C(S) which satisfy a Lipschitz condition with respect to y it holds

    |le(x_k + h)| ≤ c h^{p+1}   for all x_k ∈ I_h, for all I_h with h ∈ (0, H],

where the constant c > 0 is independent of h. The constant c might depend on derivatives of y(x), on f(x, y), and on partial derivatives of f(x, y).

Example 1.15 (Order of consistency of the explicit Euler method). Consider the explicit Euler method and assume that the function y(x) is twice continuously differentiable. Then it follows with a Taylor series expansion, and using the differential equation in the form y'(x_k) = f(x_k, y(x_k)), that

    le(x_k + h) = y(x_k + h) - ŷ_{k+1}
                = y(x_k) + h y'(x_k) + (h²/2) y''(x_k + θh) - y(x_k) - h f(x_k, y(x_k))
                = (h²/2) y''(x_k + θh),

with θ ∈ (0, 1), so that |le(x_k + h)| ≤ (h²/2) ||y''|| for y ∈ C²([x_0, x_e]). Hence, the method has consistency order 1.

Remark 1.16 (Consistency and convergence). Consistency is a local property of a one-step method. For practical purposes it is important that the computed solution converges to the analytical solution as the grid becomes finer and finer. Of course, the order of convergence is of importance, too. It will be shown that, under certain conditions, the convergence of a one-step method follows from its consistency and that the order of convergence equals the consistency order.

Definition 1.17 (Convergent method, order of convergence). A one-step method (1.2) converges for the initial value problem (1.1) on the interval I = [x_0, x_e] if for each sequence of grids {I_h} with h_max = max_k h_k → 0 it follows for the global error

    e(x_k, h) = y(x_k) - y_k,   x_k ∈ I_h,

that

    max_{x_k ∈ I_h} |e(x_k, h)| → 0   for h_max → 0.

The one-step method has order of convergence p if p is the largest natural number such that for all step lengths h_max ∈ (0, H] it holds

    |e(x_k, h)| ≤ c h_max^p   for all x_k ∈ I_h,

where c > 0 is independent of h_max.

Lemma 1.18 (Estimate for a sequence of real numbers). Assume that for real numbers x_n, n = 0, 1, ..., the inequality

    |x_{n+1}| ≤ (1 + δ)|x_n| + β

holds with constants δ > 0, β ≥ 0. Then it follows that

    |x_n| ≤ e^{nδ}|x_0| + ((e^{nδ} - 1)/δ) β,   n = 0, 1, ...

Proof: By induction; problem for the exercises.

Theorem 1.19 (Connection of consistency and convergence). Let y(x) be the solution of the initial value problem (1.1) with f ∈ C(S). Let a Lipschitz condition hold for the second argument of the incremental function,

    |Φ(x, y_1, h) - Φ(x, y_2, h)| ≤ M |y_1 - y_2|   for all (x, y) ∈ S, h ∈ (0, H].

Assume that for the local error the estimate

    |le(x_k + h)| ≤ c h^{p+1},   x_k ∈ I_h, h ∈ (0, H],

is valid and assume that y_0 = y(x_0). Then it follows for the global error that

    |e(x_{k+1}, h)| ≤ c ((e^{M(x_{k+1} - x_0)} - 1)/M) h^p,

where c is independent of h.

Proof: It holds

    y_{k+1} = y_k + h Φ(x_k, y_k, h),
    y(x_{k+1}) = y(x_k) + h Φ(x_k, y(x_k), h) + le(x_{k+1}),   k = 0, 1, ...

Then it follows with the triangle inequality and the Lipschitz condition of the incremental function that

    |e(x_{k+1}, h)| = |y(x_{k+1}) - y_{k+1}|
      = |e(x_k, h) + le(x_{k+1}) + h (Φ(x_k, y(x_k), h) - Φ(x_k, y_k, h))|
      ≤ |e(x_k, h)| + |le(x_{k+1})| + h |Φ(x_k, y(x_k), h) - Φ(x_k, y_k, h)|
      ≤ |e(x_k, h)| + c h^{p+1} + h M |y(x_k) - y_k|
      = (1 + hM) |e(x_k, h)| + c h^{p+1}.

This sequence of inequalities has the form that was considered in Lemma 1.18. One obtains with e(x_0, h) = 0

    |e(x_{k+1}, h)| ≤ e^{(k+1)hM} |e(x_0, h)| + c ((e^{(k+1)hM} - 1)/(hM)) h^{p+1}
                   = c ((e^{M(x_{k+1} - x_0)} - 1)/M) h^p.   ∎

Remark 1.20 (To Theorem 1.19). The consideration of a constant step length is only for simplicity of presentation. The result of the theorem holds also for non-constant step lengths with h = max_k h_k. The one-step method computes approximations y_k of the solution at the grid points x_k, k = 0, 1, ..., N. To enable a better comparison with the analytical solution, one connects these points linearly from (x_k, y_k) to (x_{k+1}, y_{k+1}). One obtains a piecewise linear approximation of the solution that is defined on [x_0, x_e]; this function is called y_h(x). The considerations from above can be extended to the function y_h(x).

1.2 Explicit Runge–Kutta Schemes

Remark 1.21 (Idea). The Euler methods are only of first order. The idea of Runge–Kutta methods [1][2] consists in using an incremental function Φ(x, y, h) that is a linear combination of values of f(x, y) at discrete points. With this approach, one obtains methods of higher order at the cost of evaluating more values of f(x, y).

[1] Carle David Tolmé Runge (1856–1927)
[2] Martin Kutta (1867–1944)
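Before turning to Runge–Kutta schemes, the statement of Theorem 1.19 can be illustrated numerically. The following Python sketch (illustrative, not part of the notes) measures the global error of the explicit Euler method for a sequence of halved step lengths; for a method of convergence order p = 1, the error ratio between consecutive refinements should approach 2¹ = 2.

```python
import math

def euler_solve(f, x0, xe, y0, n):
    """Explicit Euler with n constant steps on [x0, xe]."""
    h, x, y = (xe - x0) / n, x0, y0
    for _ in range(n):
        y += h * f(x, y)
        x += h
    return y

# test problem y' = y, y(0) = 1, exact value y(1) = e
f = lambda x, y: y
errs = [abs(euler_solve(f, 0.0, 1.0, 1.0, n) - math.e) for n in (100, 200, 400)]
rates = [errs[i] / errs[i + 1] for i in range(2)]
print(rates)  # each ratio close to 2^p with p = 1
```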

This approach can be illustrated well with the integral equation that is equivalent to the initial value problem (1.1). For simplicity, let the right-hand side of (1.1) depend only on x. Then the integral equation has the form

    y(x) = y_0 + ∫_{x_0}^{x} f(t) dt.   (1.3)

The idea of Runge–Kutta methods consists in approximating the right-hand side by a quadrature rule, e.g., in the interval [x_k, x_{k+1}] by

    ∫_{x_k}^{x_{k+1}} f(t) dt ≈ h_k Σ_{j=1}^{s} b_j f(x_k + c_j h_k),

with the weights b_j and the nodes x_k + c_j h_k. In the following, only h_k = h for all k will be considered for reasons of simplicity.

Definition 1.22 (Runge–Kutta methods, increments, stages). A Runge–Kutta method has the form

    y_{k+1} = y_k + h Φ(x, y, h),   k = 0, 1, ...,   y_0 = y(x_0),

where the incremental function is defined with the help of

    K_i(x, y, h) = f(x_k + c_i h, y_k + h Σ_{j=1}^{s} a_{ij} K_j(x, y, h)),   i = 1, ..., s,

by

    Φ(x, y, h) = Σ_{i=1}^{s} b_i K_i(x, y, h),

with c_1, ..., c_s, b_1, ..., b_s, a_{ij} ∈ ℝ, i, j = 1, ..., s. The quantities K_i(x, y, h), i = 1, ..., s, are called increments. The natural number s ∈ ℕ is the number of stages of the method. An equivalent definition is as follows:

    y^{(i)}_{k+1} = y_k + h Σ_{j=1}^{s} a_{ij} f(x_k + c_j h, y^{(j)}_{k+1}),   i = 1, ..., s,
    Φ(x, y, h) = Σ_{i=1}^{s} b_i f(x_k + c_i h, y^{(i)}_{k+1}).

The intermediate values y^{(i)}_{k+1} are called stages.

Remark 1.23 (Butcher tableau [3]). For reasons of clarity, one generally writes a Runge–Kutta scheme in form of a tableau, the so-called Butcher tableau

    c_1 | a_11  a_12  ...  a_1s
    c_2 | a_21  a_22  ...  a_2s
    ... | ...
    c_s | a_s1  a_s2  ...  a_ss          abbreviated as   c | A
        | b_1   b_2   ...  b_s                              | b^T   (1.4)

Here, c are the nodes, A is the matrix of the method, and b are the weights.

[3] John C. Butcher, born 1933

Remark 1.24 (Increments and Butcher tableau). For explicit Runge–Kutta schemes, the increments can be computed one after another:

    K_1(x, y, h) = f(x_k, y_k),
    K_2(x, y, h) = f(x_k + c_2 h, y_k + h a_21 K_1(x, y, h)),
    ...
    K_s(x, y, h) = f(x_k + c_s h, y_k + h Σ_{j=1}^{s-1} a_sj K_j(x, y, h)).

The Butcher tableau has the form

    0   |
    c_2 | a_21
    c_3 | a_31  a_32
    ... | ...
    c_s | a_s1  a_s2  ...  a_{s,s-1}
        | b_1   b_2   ...  b_{s-1}   b_s

Example 1.25 (Explicit Euler scheme). The explicit Euler scheme is an explicit Runge–Kutta scheme with the Butcher tableau

    0 | 0
      | 1

In the integral equation, the approximation

    ∫_{x_k}^{x_{k+1}} f(t, y(t)) dt ≈ h f(x_k, y(x_k))

is used, see the proof of the Theorem of Peano, lecture notes of Numerical Mathematics I.

Theorem 1.26 (Consistency of explicit Runge–Kutta schemes). Let f ∈ C(S), see Definition 1.10. An explicit Runge–Kutta scheme is consistent if and only if

    Σ_{i=1}^{s} b_i = 1.   (1.5)

Proof: From the continuity of f(x, y) it follows that

    lim_{h→0} K_i(x, y, h) = f(x_k, y(x_k)),   (x, y) ∈ S,   i = 1, ..., s,

for the case that the initial value of this step is y_k = y(x_k). The continuity of the absolute value function gives

    lim_{h→0} |f(x_k, y(x_k)) - Φ(x_k, y(x_k), h_k)|
      = lim_{h→0} |f(x_k, y(x_k)) - Σ_{i=1}^{s} b_i K_i(x, y, h)|
      = |f(x_k, y(x_k)) - Σ_{i=1}^{s} b_i lim_{h→0} K_i(x, y, h)|
      = |f(x_k, y(x_k))| |1 - Σ_{i=1}^{s} b_i| = 0

if and only if Σ_{i=1}^{s} b_i = 1. Hence, the second condition in Definition 1.10 is satisfied. ∎
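The stage-by-stage computation of the increments in Remark 1.24 translates directly into code. The following Python sketch is illustrative and not from the notes: it performs one step of an arbitrary explicit Runge–Kutta method given its Butcher tableau (A, b, c), and uses the explicit Euler scheme of Example 1.25 as a smoke test.

```python
def rk_step(f, x, y, h, A, b, c):
    """One step of an explicit Runge-Kutta method given by its Butcher
    tableau: strictly lower triangular matrix A, weights b, nodes c."""
    s = len(b)
    K = []
    for i in range(s):
        # increment K_i uses only the already computed K_1, ..., K_{i-1}
        yi = y + h * sum(A[i][j] * K[j] for j in range(i))
        K.append(f(x + c[i] * h, yi))
    return y + h * sum(b[i] * K[i] for i in range(s))

# smoke test: explicit Euler is the 1-stage scheme A = [[0]], b = [1], c = [0]
f = lambda x, y: -y
y = 1.0
for k in range(10):
    y = rk_step(f, k * 0.1, y, 0.1, [[0.0]], [1.0], [0.0])
print(y)  # equals (1 - 0.1)**10 up to rounding
```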

Theorem 1.27 (Interpretation of the increments). Let the solution of (1.1) satisfy y ∈ C²([x_0, x_e]), let f ∈ C(S), and let f be Lipschitz continuous in the second argument. If y_k = y(x_k) and

    c_i = Σ_{j=1}^{i-1} a_ij,   i ≥ 2,   (1.6)

holds, then K_i(x, y, h) is an approximation of at least first order to y'(x_k + c_i h), i.e.,

    y'(x_k + c_i h) - K_i(x, y, h) = O(h²).

Proof: The proof follows by induction.

i = 2: With (1.1), the Lipschitz continuity, and a Taylor series expansion, one obtains

    |y'(x_k + c_2 h) - K_2(x, y, h)|
      = |f(x_k + c_2 h, y(x_k + c_2 h)) - f(x_k + c_2 h, y(x_k) + h a_21 f(x_k, y(x_k)))|
      ≤ L |y(x_k + c_2 h) - y(x_k) - h a_21 f(x_k, y(x_k))|
      = L |y(x_k) + c_2 h y'(x_k) + O(h²) - y(x_k) - h a_21 y'(x_k)|
      = L |(c_2 - a_21) h y'(x_k) + O(h²)|.

Hence, in the case c_2 = a_21 the error is of order O(h²).

i > 2: Let the asymptotic order of the errors be proved for the indices 2, ..., i-1. Then one gets in the same way as for i = 2

    |y'(x_k + c_i h) - K_i(x, y, h)|
      = |f(x_k + c_i h, y(x_k + c_i h)) - f(x_k + c_i h, y(x_k) + h Σ_{j=1}^{i-1} a_ij K_j(x, y, h))|
      ≤ L |y(x_k + c_i h) - y(x_k) - h Σ_{j=1}^{i-1} a_ij K_j(x, y, h)|
      = L |y(x_k) + c_i h y'(x_k) + O(h²) - y(x_k) - h Σ_{j=1}^{i-1} a_ij (y'(x_k + c_j h) + O(h²))|
      = L |c_i h y'(x_k) + O(h²) - h Σ_{j=1}^{i-1} a_ij (y'(x_k) + O(h))|
      = L |h (c_i - Σ_{j=1}^{i-1} a_ij) y'(x_k) + O(h²)|.

The error is of order O(h²) if c_i = Σ_{j=1}^{i-1} a_ij. ∎

Remark 1.28 (Conditions on the coefficients for certain orders of consistency). The conditions from Theorems 1.26 and 1.27 are satisfied for many explicit Runge–Kutta schemes. The goal consists in determining the coefficients b_1, ..., b_s and a_ij in such a way that one obtains an order of consistency as high as possible. The consistency order of a Runge–Kutta scheme with s stages can be derived from the Taylor series expansion of the local error. Let (1.5) be valid; then one obtains, e.g.: a Runge–Kutta scheme with the parameters (A, b, c) has at least consistency order p = 2 if

    Σ_{j=1}^{s} b_j c_j = 1/2.   (1.7)

This condition will be shown in Example 1.29 for s = 2.

If in addition

    Σ_{j=1}^{s} b_j c_j² = 1/3   and   Σ_{j=1}^{s} Σ_{k=1}^{s} b_j a_jk c_k = 1/6

hold, then the order of consistency is at least p = 3. The proof of the last statement and conditions for even higher consistency orders can be found in the literature, e.g., in (Strehmel and Weiner, 1995; Strehmel et al., 2012, Section 2.4.2).

Example 1.29 (Runge–Kutta methods with 2 stages). For the investigation of 2-stage Runge–Kutta schemes, one considers for simplicity the so-called autonomous initial value problem

    y'(x) = f(y(x)),   y(x_0) = y_0.

One has for the increments

    K_1(y, h) = f(y_k),
    K_2(y, h) = f(y_k + h a_21 K_1(y, h)) = f(y_k + h a_21 f(y_k))
              = f(y_k) + h a_21 f(y_k) f_y(y_k) + O(h²).

If the initial value is exact, it follows for the incremental function that

    Φ(y(x_k)) = b_1 K_1(y, h) + b_2 K_2(y, h)
              = (b_1 + b_2) f(y(x_k)) + h b_2 a_21 f(y(x_k)) f_y(y(x_k)) + O(h²).

The Taylor series expansion of the solution has the form

    y(x_k + h) = y(x_k) + h y'(x_k) + (h²/2) y''(x_k) + O(h³),   with y'(x_k) = f(y(x_k)).

One obtains with the chain rule

    y''(x) = (d/dx) y'(x) = (d/dx) f(y(x)) = f_y(y) y'(x) = f_y(y) f(y(x)).

Now it follows for the local error that

    le(x_k + h) = y(x_k + h) - y(x_k) - h Φ(y(x_k))
      = y(x_k) + h f(y(x_k)) + (h²/2) f_y(y(x_k)) f(y(x_k)) + O(h³) - y(x_k)
        - h ((b_1 + b_2) f(y(x_k)) + h b_2 a_21 f(y(x_k)) f_y(y(x_k)) + O(h²))
      = h (1 - (b_1 + b_2)) f(y(x_k)) + h² (1/2 - b_2 a_21) f(y(x_k)) f_y(y(x_k)) + O(h³).

To achieve an order of consistency as high as possible, the first two terms have to vanish. One obtains with the condition c_2 = a_21 that

    b_1 + b_2 = 1,   b_2 a_21 = b_2 c_2 = 1/2.

The first equation is the general consistency condition (1.5) and the second condition is exactly (1.7) for s = 2. These two conditions characterize all 2-stage explicit Runge–Kutta methods that possess consistency and convergence order 2:

    0   |
    c_2 | c_2
        | 1 - 1/(2c_2)   1/(2c_2),   with c_2 ≠ 0.
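The coefficient conditions (1.5), (1.6), (1.7) and the two third-order conditions above can be checked mechanically with exact rational arithmetic. The following Python sketch is illustrative (not part of the notes); applied to the member of the family just derived with c_2 = 1/2, it confirms the conditions for order 2 and shows that the third-order conditions fail.

```python
from fractions import Fraction as Fr

def order_conditions(A, b, c):
    """Check the consistency condition (1.5), the node condition (1.6),
    the order-2 condition (1.7), and the two order-3 conditions of
    Remark 1.28 for a Butcher tableau, using exact rational arithmetic."""
    s = len(b)
    return {
        "sum b = 1":       sum(b) == 1,
        "row sums = c":    all(sum(A[i][:i]) == c[i] for i in range(s)),
        "sum b c = 1/2":   sum(b[j] * c[j] for j in range(s)) == Fr(1, 2),
        "sum b c^2 = 1/3": sum(b[j] * c[j] ** 2 for j in range(s)) == Fr(1, 3),
        "sum b a c = 1/6": sum(b[j] * A[j][k] * c[k]
                               for j in range(s) for k in range(s)) == Fr(1, 6),
    }

# member of the 2-stage family with c2 = 1/2, i.e., b = (0, 1)
A = [[Fr(0), Fr(0)], [Fr(1, 2), Fr(0)]]
conds = order_conditions(A, [Fr(0), Fr(1)], [Fr(0), Fr(1, 2)])
print(conds)  # order-2 conditions hold, order-3 conditions fail
```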

In the case c_2 = 1/2, one obtains the method of Runge (1895),

    0   |
    1/2 | 1/2
        | 0   1

This method corresponds, with respect to the approximation of the integral in (1.3), to the application of the midpoint rule. For c_2 = 1 one gets the method of Heun [4] (1900),

    0 |
    1 | 1
      | 1/2   1/2

which corresponds to the use of the trapezoidal rule for the numerical quadrature in (1.3).

Remark 1.30 (Autonomous ordinary differential equations). Every explicit first-order ordinary differential equation y'(x) = f(x, y(x)) can be transformed into an autonomous form,

    ỹ'(x) = f̃(ỹ(x)) = ( f(x, y(x)), 1 )^T,

by introducing the function ȳ(x) := x and setting ỹ(x) := ( y(x), ȳ(x) )^T.

Theorem 1.31 (Consistency and convergence of explicit Runge–Kutta methods). Let y(x) be the solution of the initial value problem (1.1) with f ∈ C(S) and let f(x, y) satisfy a Lipschitz condition in the second argument. Then an explicit Runge–Kutta scheme that is consistent of order p also converges with order p.

Proof: The incremental function of an explicit Runge–Kutta scheme is a linear combination of values of the right-hand side f(x, y). Thus, the assumptions of Theorem 1.19 are satisfied, since the Lipschitz condition in this theorem follows from the required Lipschitz condition on the right-hand side of the differential equation. The statement of the theorem now follows directly from Theorem 1.19. ∎

Remark 1.32 (Explicit Runge–Kutta methods of higher order). Analogously to 2-stage methods, it is possible to derive conditions on the coefficients of an explicit Runge–Kutta scheme in order to construct methods of higher order. An important question is the minimal number of stages s_min that is necessary to reach a certain order p. Some answers to this question are due to Butcher (1963, 1965, 1985):

    p     | 1  2  3  4  5  6  7  8
    s_min | 1  2  3  4  6  7  9  11

[4] Karl Heun (1859–1929)
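The 2-stage family, including the methods of Runge and Heun above, can be tested numerically. In this illustrative Python sketch (not from the notes), the observed global error ratio under halving of the step size should approach 2² = 4, confirming convergence order 2 for any admissible c_2.

```python
import math

def rk2_step(f, x, y, h, c2):
    """One step of the 2-stage family from Example 1.29 with
    b2 = 1/(2*c2) and b1 = 1 - b2."""
    k1 = f(x, y)
    k2 = f(x + c2 * h, y + c2 * h * k1)
    b2 = 1.0 / (2.0 * c2)
    return y + h * ((1.0 - b2) * k1 + b2 * k2)

f = lambda x, y: y  # test problem y' = y, y(0) = 1, exact y(1) = e
ratios = {}
for c2 in (0.5, 1.0):  # c2 = 1/2: method of Runge; c2 = 1: method of Heun
    errs = []
    for n in (100, 200):
        h, y = 1.0 / n, 1.0
        for k in range(n):
            y = rk2_step(f, k * h, y, h, c2)
        errs.append(abs(y - math.e))
    ratios[c2] = errs[0] / errs[1]
print(ratios)  # each ratio close to 2^p = 4 with p = 2
```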

Example 1.33 (Classical Runge–Kutta scheme (1901)). The so-called classical Runge–Kutta scheme has four stages and the Butcher tableau

    0   |
    1/2 | 1/2
    1/2 | 0     1/2
    1   | 0     0     1
        | 1/6   1/3   1/3   1/6

It is based on the Simpson rule [5]. The center node of the Simpson rule is used twice, c_2 = c_3, but with a different second argument for the computation of the increments. This method is of fourth order.

1.3 Step Length Control

Remark 1.34 (Motivation). The considerations so far did not provide a way of estimating a good step length for solving a given initial value problem with prescribed accuracy and with as little work as possible. A good step length certainly depends on the concrete problem, and generally it will change within the considered interval. For these reasons, the step length has to be controlled during the numerical solution of the initial value problem. A typical approach consists in computing two approximations of the solution at a node with different methods and drawing conclusions on the size of the local error from the difference of these approximations. Of course, the consideration of the global error would be better. However, Theorem 1.19 shows that on the one hand the global error is influenced by problem-dependent terms, like the length of the interval [0, T] or the Lipschitz constant. On the other hand, the global error can be small only if the local errors are small.

1.3.1 The Richardson Method

Remark 1.35 (Idea). Given a numerical method for solving an initial value problem and a step length h, the Richardson method [6] consists of the following steps, see also Figure 1.2:

1. Starting from a node (x_0, y_0) and using a step length of 2h, an approximation y_{2h} at the node x_0 + 2h is computed.
2. Two approximations y_h and ȳ_{2h} at x_0 + h and x_0 + 2h are computed with two steps of length h.
3. The step length is controlled by comparing y_{2h} and ȳ_{2h}.

In general, the more accurate approximation will be ȳ_{2h}. In addition, it will be demonstrated that it is possible to improve the
accuracy of ȳ_{2h} with the information obtained by this method.

Example 1.36 (Richardson method for an explicit 2-stage Runge–Kutta method). Consider an explicit 2-stage Runge–Kutta scheme. One obtains in the first step of the Richardson method

    y^{(1)}_{2h} = y_0,
    y^{(2)}_{2h} = y_0 + 2h a_21 f(x_0, y_0),
    y_{2h} = y_0 + 2h [b_1 K_1(x, y) + b_2 K_2(x, y)]
           = y_0 + 2h [b_1 f(x_0, y^{(1)}_{2h}) + b_2 f(x_0 + 2c_2 h, y^{(2)}_{2h})],

[5] Thomas Simpson (1710–1761)
[6] Lewis Fry Richardson (1881–1953)

Figure 1.2: Sketch of the Richardson method.

or, written as a Butcher tableau (for step length h),

    0    |
    2c_2 | 2a_21
         | 2b_1   2b_2

The second step of the Richardson method yields

    ȳ^{(1)}_{2h} = y_0,
    ȳ^{(2)}_{2h} = y_0 + h a_21 f(x_0, y_0),
    ȳ^{(3)}_{2h} = y_h = y_0 + h [b_1 f(x_0, ȳ^{(1)}_{2h}) + b_2 f(x_0 + c_2 h, ȳ^{(2)}_{2h})],
    ȳ^{(4)}_{2h} = y_h + h a_21 f(x_0 + h, y_h),
    ȳ_{2h} = y_h + h [b_1 f(x_0 + h, y_h) + b_2 f(x_0 + h + c_2 h, ȳ^{(4)}_{2h})].

The Butcher tableau of this method is

    0       |
    c_2     | a_21
    1       | b_1   b_2
    1 + c_2 | b_1   b_2   a_21
            | b_1   b_2   b_1   b_2

That means, the computation of ȳ_{2h} is equivalent to the computation of an approximation with an explicit 4-stage Runge–Kutta scheme. Altogether, five function evaluations are needed:

    f(x_0, y_0),   f(x_0 + 2c_2 h, y^{(2)}_{2h}),   f(x_0 + c_2 h, ȳ^{(2)}_{2h}),
    f(x_0 + h, y_h),   f(x_0 + h + c_2 h, ȳ^{(4)}_{2h}).

In the case of an s-stage Runge–Kutta method, 3s - 1 function evaluations are required. This number is rather large, and the high costs per step are a disadvantage of the Richardson method.

Remark 1.37 (Comparison of both approximations). Consider a one-step method y_{k+1} = y_k + h Φ(x, y, h) of order p. Let the initial value y_0 = y(x_0) be exact; then it follows for the local error at x_0 + 2h that

    y(x_0 + 2h) - y_{2h} = c(x_0) (2h)^{p+1} + O(h^{p+2}).   (1.8)

For estimating the local error in ȳ_{2h}, it will be assumed that the incremental function Φ(x, y, h) is Lipschitz continuous in the second argument. This assumption is

always satisfied for explicit Runge–Kutta schemes if f(x, y) possesses this property, see the proof of Theorem 1.31. It holds

    ȳ_{2h} = y_h + h Φ(x_0 + h, y_h, h).   (1.9)

Let

    ŷ_{2h} = y(x_0 + h) + h Φ(x_0 + h, y(x_0 + h), h)   (1.10)

be the iterate which is computed with the exact starting value. Using the definition of the consistency order, one obtains with (1.9) and (1.10)

    y(x_0 + 2h) - ȳ_{2h} = (y(x_0 + 2h) - ŷ_{2h}) + (ŷ_{2h} - ȳ_{2h})
      = [c(x_0 + h) h^{p+1} + O(h^{p+2})]
        + [y(x_0 + h) + h Φ(x_0 + h, y(x_0 + h), h) - y_h - h Φ(x_0 + h, y_h, h)].

For the terms with the incremental function, one gets from the Lipschitz continuity and the consistency order

    |h Φ(x_0 + h, y(x_0 + h), h) - h Φ(x_0 + h, y_h, h)| ≤ h L |y(x_0 + h) - y_h| = O(h^{p+2}).

It follows that

    y(x_0 + 2h) - ȳ_{2h} = c(x_0 + h) h^{p+1} + y(x_0 + h) - y_h + O(h^{p+2})
      = c(x_0 + h) h^{p+1} + c(x_0) h^{p+1} + O(h^{p+2})
      = 2 c(x_0) h^{p+1} + O(h^{p+2}),   (1.11)

where one assumes that c(x_0 + h) = c(x_0) + O(h), i.e., that the constant does not change too rapidly. Neglecting the higher-order terms in (1.8) and (1.11) allows one to eliminate the constant. One obtains

    c(x_0) ≈ (1/2) (ȳ_{2h} - y_{2h}) / ((2^p - 1) h^{p+1}).   (1.12)

It follows for the local error of the more accurate method that

    y(x_0 + 2h) - ȳ_{2h} = (ȳ_{2h} - y_{2h}) / (2^p - 1) + O(h^{p+2}).   (1.13)

Remark 1.38 (Increasing the accuracy). Rearranging terms in (1.13) gives

    y(x_0 + 2h) - ( ȳ_{2h} + (ȳ_{2h} - y_{2h}) / (2^p - 1) ) = O(h^{p+2}).

Then

    ȳ*_{2h} := ȳ_{2h} + (ȳ_{2h} - y_{2h}) / (2^p - 1)

is an approximation of the solution of order p + 1.

Remark 1.39 (Automatic step length control). From (1.12) it follows that

    err = |ȳ_{2h} - y_{2h}| / (2^p - 1) ≈ 2 |c(x_0)| h^{p+1}   (1.14)

is a computable approximation of the local error. This approximation will be compared with a prescribed tolerance. Often, a so-called scaled tolerance sc is used, see (Hairer et al., 1993, p. 167) or (Strehmel et al., 2012, p. 61). The scaled tolerance is a combination of an absolute tolerance atol and a relative tolerance rtol,

    sc = atol + max{|y_0|, |ȳ_{2h}|} rtol,

and the scaled error is defined by

    err_sc = |ȳ_{2h} - y_{2h}| / ((2^p - 1) sc).

If err_sc ≤ 1, then the performed step is accepted. Starting from ȳ_{2h} or ȳ*_{2h}, the next step is performed. If ȳ*_{2h} is used, this approach is also called local Richardson extrapolation.

An important aspect is the choice of the step length h_new for the next step. The guideline is that the scaled error for the next step should on the one hand still be smaller than 1, but on the other hand as close to 1 as possible. Following (1.14), it should hold

    1 = err_sc,new ≈ 2 |c(x_0 + 2h_new)| h_new^{p+1} / sc ≈ 2 |c(x_0)| h_new^{p+1} / sc ≈ err_sc (h_new/h)^{p+1},

i.e., h_new has to be chosen such that

    h_new ≈ h (1/err_sc)^{1/(p+1)}.   (1.15)

If err_sc > 1, then the performed step is rejected. The Richardson method is repeated from (x_0, y_0) with a step length h_new < h. That means the work spent on performing the step with step length h was wasted; one likes to avoid this situation.

Remark 1.40 (Issues of the practical implementation). In practical simulations, one uses some modifications of (1.15) for stabilizing the algorithm. A safety factor α ∈ (0, 1) is introduced,

    h_new = α h (1/err_sc)^{1/(p+1)},

often α ∈ [0.8, 0.9]. One likes to avoid large oscillations of the sizes of subsequent steps. For this reason, a factor α_max for the maximal increase of the new step size with respect to the current step size and a factor α_min < α_max for the maximal decrease are used. Then one obtains

    h_new = h min{ α_max, max{ α_min, α (1/err_sc)^{1/(p+1)} } }.

If a very large step length is proposed, i.e.,

    α (1/err_sc)^{1/(p+1)} > α_max,

then the factor α_max becomes effective, and similarly for the case
that a very small step length is proposed.
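The accept/reject logic of Remarks 1.39 and 1.40 can be sketched as follows. This Python code is illustrative and not from the notes; all parameter values (h0, α, α_min, α_max, the tolerances) are example choices, and the underlying one-step method is passed in as a function.

```python
import math

def richardson_control(f, x0, y0, xe, step, p, atol=1e-6, rtol=1e-6,
                       h0=0.1, alpha=0.9, alpha_min=0.2, alpha_max=5.0):
    """Step-doubling (Richardson) control for a one-step method `step`
    of order p: compare one step of length 2h with two steps of length h,
    accept if the scaled error estimate is at most 1, and propose the
    next step length via the safeguarded formula of Remark 1.40."""
    x, y, h = x0, y0, h0
    while x < xe - 1e-14:
        h = min(h, (xe - x) / 2)              # do not step past xe
        y_coarse = step(f, x, y, 2 * h)       # one step of length 2h
        y_mid = step(f, x, y, h)              # two steps of length h
        y_fine = step(f, x + h, y_mid, h)
        sc = atol + max(abs(y), abs(y_fine)) * rtol
        err_sc = abs(y_fine - y_coarse) / ((2 ** p - 1) * sc)
        fac = alpha * (1.0 / max(err_sc, 1e-16)) ** (1.0 / (p + 1))
        if err_sc <= 1.0:                     # accept, advance to x + 2h
            x, y = x + 2 * h, y_fine
        # on rejection the same safeguarded formula shrinks h; the step
        # from the old x is then redone with the smaller step length
        h *= min(alpha_max, max(alpha_min, fac))
    return y

# test problem y' = -y, y(0) = 1, with the explicit Euler method (p = 1)
euler = lambda f, x, y, h: y + h * f(x, y)
y = richardson_control(lambda x, y: -y, 0.0, 1.0, 1.0, euler, p=1)
print(y)  # close to exp(-1)
```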

Usually, one prescribes a minimal step length h_min and a maximal step length h_max and requires for all step lengths that h_k ∈ [h_min, h_max]. In the first step, one has to estimate h. Generally, this estimate has to be corrected; in practice, this correction is done very quickly by the algorithms for automatic step length control. An algorithm for determining a good initial step length can be found in (Hairer et al., 1993, p. 168).

1.3.2 Embedded Runge–Kutta Schemes

Remark 1.41 (Motivation, embedded Runge–Kutta schemes). Richardson extrapolation is quite expensive in terms of evaluations of the incremental function. It is possible to construct a step length control that needs fewer evaluations, with so-called embedded Runge–Kutta schemes. The idea of embedded Runge–Kutta schemes consists in computing numerical approximations of the solution at the next node with two one-step methods of different order. The methods are chosen in such a way that it is possible to use certain evaluations of the incremental function for both of them. That means, one has to construct a Runge–Kutta scheme of the form

    0   |
    c_2 | a_21
    ... | ...
    c_s | a_s1  ...  a_{s,s-1}
        | b_1   ...  b_{s-1}   b_s
        | b̃_1  ...  b̃_{s-1}  b̃_s

such that

    y_1 = y_0 + h Σ_{i=1}^{s} b_i K_i(x, y)

is of order p and

    ỹ_1 = y_0 + h Σ_{i=1}^{s} b̃_i K_i(x, y)

is of order q, see Figure 1.3. In general, q = p - 1 or q = p + 1.

Figure 1.3: Sketch of embedded Runge–Kutta schemes.

Example 142 (Runge–Kutta–Fehlberg 2(3)). Consider explicit Runge–Kutta schemes with 3 stages,

    0    |
    c_2  | a_21
    c_3  | a_31   a_32
    -----+----------------
         | b_1    b_2    b_3    (p = 2)
         | b̃_1    b̃_2    b̃_3    (q = 3)

One of the schemes should be of second order and the other one of third order. There are 11 parameters to choose. From Theorem 126, Theorem 127, and Remark 128 it follows that 8 equations have to be satisfied:

    c_2 = a_21,    c_3 = a_31 + a_32,
    b_1 + b_2 + b_3 = 1,    b_2 c_2 + b_3 c_3 = 1/2,
    b̃_1 + b̃_2 + b̃_3 = 1,    b̃_2 c_2 + b̃_3 c_3 = 1/2,
    b̃_2 c_2^2 + b̃_3 c_3^2 = 1/3,    b̃_3 a_32 c_2 = 1/6.

That means, one can set three parameters. First, one can choose c_2 = 1, b_3 = 0. Then, it follows from the first equation that a_21 = 1, from the fourth equation that b_2 = 1/2, and from the third equation that b_1 = 1/2. Now, one chooses c_3 = 1/2. From the sixth and seventh equations it follows that b̃_2 = 1/6 and b̃_3 = 4/6. Then, one gets from the fifth equation b̃_1 = 1/6 and from the eighth equation a_32 = 1/4. Finally, the second equation gives a_31 = 1/4. The resulting methods have the form

    0    |
    1    | 1
    1/2  | 1/4   1/4
    -----+---------------
         | 1/2   1/2   0      (p = 2)
         | 1/6   1/6   4/6    (q = 3)

The method of order q = 3 is the Simpson rule. The complete embedded approach is called the Runge–Kutta–Fehlberg^7 2(3) method (RKF 2(3)).

Remark 143 (Error estimate, theoretical drawback). It holds for the embedded scheme that

    y_1 = y(x_0 + h) + O(h^{p+1}),    ỹ_1 = y(x_0 + h) + O(h^{q+1}).

It follows that

    err := ỹ_1 − y_1 = y(x_0 + h) + O(h^{q+1}) − y(x_0 + h) + O(h^{p+1}) = O(h^{p+1}) + O(h^{q+1})    (116)

is an estimate of the main error term of the Runge–Kutta scheme of order min{p, q}. That means, one obtains only an estimate of the error of the lower order method. To obtain information only on the lower order method is the main theoretical drawback of this approach, since one is actually interested in the higher order method and one will continue the computation also from the higher order approximation.

^7 Erwin Fehlberg (1911–1972)
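A sketch of one RKF 2(3) step in code (the function name is invented; the coefficients are exactly the ones derived above): the three stages are computed once and reused for both approximations.

```python
def rkf23_step(f, x0, y0, h):
    """One step of the RKF 2(3) pair: returns the order-2 approximation,
    the order-3 approximation, and the error estimate for the former."""
    # stages with c = (0, 1, 1/2), a21 = 1, a31 = a32 = 1/4
    k1 = f(x0, y0)
    k2 = f(x0 + h, y0 + h * k1)
    k3 = f(x0 + 0.5 * h, y0 + h * (0.25 * k1 + 0.25 * k2))
    y1 = y0 + h * (0.5 * k1 + 0.5 * k2)              # b  = (1/2, 1/2, 0),  p = 2
    y1_tilde = y0 + h * (k1 + k2 + 4.0 * k3) / 6.0   # b~ = (1/6, 1/6, 4/6), q = 3
    return y1, y1_tilde, abs(y1_tilde - y1)
```

For y' = y, y(0) = 1, h = 0.1, one obtains y_1 = 1.105, the second order Taylor polynomial of e^h, and ỹ_1 = 1 + h + h²/2 + h³/6, as expected for a third order method applied to this equation.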

Remark 144 (Automatic step length control, I controller). Let h be the step size that was used for computing y_1 of order p and ỹ_1 of order q with p < q. From (116), one has

    err = |y_1 − ỹ_1| = c h^{p+1}.

Let a tolerance tol for the maximal local error be given.

One approach consists in controlling the error per step (EPS). Then, one requires that

    r_1 = err ≤ tol.    (117)

If this condition is satisfied, then one requires for the new step size that the local error is equal to the tolerance. This requirement gives

    c h_new^{p+1} = tol.

Thus, the new step length is computed by

    h_new = (tol / c)^{1/(p+1)} = (tol / (c h^{p+1}))^{1/(p+1)} h = (tol / err)^{1/(p+1)} h,

and, with a safety factor,

    h_new = α (tol / err)^{1/k} h,    (118)

where k = p + 1 and α ∈ (0, 1) is again a safety factor.

Another way is the consideration of the error relative to the current step length, the so-called error per unit step (EPUS),

    r_1 = err / h.

The satisfaction of a condition of form (117) for this r_1 leads to a new step length of form (118) with k = p. If (117) is not satisfied, then the step is rejected and it will be repeated with a step length smaller than h.

A generalization of this approach is the so-called I controller. Replacing 1/k in (118) by k_I gives

    h_new = α (tol / err)^{k_I} h.

For obtaining a useful automatic step length control mechanism, the choice k k_I = 1 is not necessary. The following choices can be found in the literature:

    k k_I ∈ [0, 2],  i.e., k_I ∈ [0, 2/k]:    stable control,
    k k_I ∈ (1, 2),  i.e., k_I ∈ (1/k, 2/k):  fast and oscillating control,
    k k_I ∈ (0, 1),  i.e., k_I ∈ (0, 1/k):    slow and smooth control,
    k k_I = 1,       i.e., k_I = 1/k:         standard I controller.

There are more sophisticated controllers that are used in practical simulations, see Söderlind (2002) for an overview.

Remark 145 (Methods used in practice). In practice, one uses, e.g.,

    RKF 4(5), s = 6, Fehlberg (1964),
    RKF 7(8), s = 13, Fehlberg (1969),
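The I controller itself fits into a one-line function. In this sketch the names are chosen for illustration; the call below uses EPS with an order p = 2 method, so k = p + 1 = 3, and k_I = 1/k recovers the standard controller (118):

```python
def i_controller(h, err, tol, k_I, alpha=0.9):
    """Propose the next step length h_new = alpha * (tol / err)**k_I * h."""
    return alpha * (tol / err) ** k_I * h

# standard I controller for an order p = 2 method with EPS: k_I = 1/k = 1/3
h_new = i_controller(h=0.1, err=2e-4, tol=1e-4, k_I=1.0 / 3.0)
```

Here err exceeds tol by a factor of 2, so the step is rejected and the proposal shrinks to h_new = 0.9 · (1/2)^{1/3} · 0.1 ≈ 0.071. A smaller k_I damps the reaction of the controller (slow and smooth control), a larger one makes it more aggressive.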

    DOPRI 4(5) (or DOPRI 5(4) or DOPRI5), s = 6, Dormand^8 and Prince^9: Dormand and Prince (1980),
    DOPRI 7(8), s = 13, Prince and Dormand (1981).

The standard routine ode45 from MATLAB uses DOPRI 4(5) and the routine ode23 uses RKF 2(3), see Example 142.

Remark 146 (Fehlberg trick). The Fehlberg trick requires that

    K_s = f(x_k + c_s h, y_k + h Σ_{i=1}^{s−1} a_si K_i) = f(x_k + h, y_k + h Σ_{i=1}^{s} b_i K_i),

i.e., the last evaluation of the incremental function of the old step can be used as the first value of the incremental function in the new step. The conditions for applying this trick are

    a_si = b_i, i = 1, ..., s − 1,    b_s = 0,    c_s = 1.

It can be applied, e.g., in DOPRI 4(5). This trick works only if the step was accepted, since after a rejected step the last stage was evaluated at a discarded approximation.

^8 John R. Dormand
^9 P. J. Prince
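The three conditions can be verified mechanically for a given Butcher tableau. As an illustration (this pair is not discussed in these notes), the well-known Bogacki–Shampine 3(2) pair satisfies all of them, so the last stage of an accepted step can serve as K_1 of the next step:

```python
from fractions import Fraction as F

# Bogacki–Shampine 3(2): nodes c, stage coefficients A, order-3 weights b
c = [F(0), F(1, 2), F(3, 4), F(1)]
A = [[],
     [F(1, 2)],
     [F(0), F(3, 4)],
     [F(2, 9), F(1, 3), F(4, 9)]]
b = [F(2, 9), F(1, 3), F(4, 9), F(0)]

s = len(c)
fsal = (all(A[s - 1][i] == b[i] for i in range(s - 1))  # a_si = b_i
        and b[s - 1] == 0                               # b_s = 0
        and c[s - 1] == 1)                              # c_s = 1
print(fsal)  # True: the last stage can be reused in the next step
```

Using exact rational arithmetic (fractions.Fraction) avoids spurious failures of the equality checks that rounding of coefficients such as 2/9 in floating point could cause.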