Multistage Methods I: Runge-Kutta Methods

Varun Shankar
January

1 Introduction

Previously, we saw that explicit multistep methods (AB methods) have shrinking stability regions as their orders are increased. Further, the second Dahlquist barrier stopped us from generating high-order A-stable multistep methods.

In this chapter, we will introduce Runge-Kutta methods. Unlike the multistep methods, these methods generate information within a time-step to get high-order convergence; they are called multistage methods due to the generation of these intermediate stages. We will show how both explicit and implicit Runge-Kutta (RK) methods can be generated from quadrature rules. We will then discuss and analyze the general form of RK methods, abstracting away the polynomial interpolation idea.

2 Explicit RK Methods from Quadrature

Let us now derive some popular explicit RK methods. Unlike multistep methods, we will only use information over a single time-step to generate these. For this, we will return to the integral form of the simple ODE problem. This integral form is given by:

    y_{n+1} = y_n + \int_{t_n}^{t_{n+1}} f(t, y(t)) \, dt.    (1)

2.1 Forward Euler

Using the left endpoint rule to approximate the integral, we get

    y_{n+1} \approx y_n + (t_{n+1} - t_n) f(t_n, y_n),    (2)
    \implies y_{n+1} \approx y_n + \Delta t \, f(t_n, y_n).    (3)

Forward Euler is thus both AB1 and RK1.
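For concreteness, here is a minimal Python sketch of Eq. (3) applied to a test problem; the function name forward_euler and the choice y' = -y, y(0) = 1 are illustrative and not taken from the notes.

import numpy as np

def forward_euler(f, t0, y0, dt, n_steps):
    """March y_{n+1} = y_n + dt * f(t_n, y_n), Eq. (3), for n_steps steps."""
    t, y = t0, y0
    for _ in range(n_steps):
        y = y + dt * f(t, y)
        t = t + dt
    return t, y

# First-order accurate approximation of y(1) = e^{-1} for y' = -y, y(0) = 1.
_, y_end = forward_euler(lambda t, y: -y, 0.0, 1.0, 0.01, 100)
print(y_end, np.exp(-1.0))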

2.2 Second-order RK methods

We now use the trapezoidal rule to approximate the integral. This gives us

    y_{n+1} \approx y_n + \frac{\Delta t}{2} [f(t_n, y_n) + f(t_{n+1}, y_{n+1})].    (4)

This, however, is an implicit method. To make it explicit, we will simply approximate the y_{n+1} on the right-hand side with Forward Euler! This gives

    y_{n+1} \approx y_n + \frac{\Delta t}{2} f(t_n, y_n) + \frac{\Delta t}{2} f(t_{n+1}, y_n + \Delta t f(t_n, y_n)).    (5)

This is typically written as

    y_{n+1} \approx y_n + \frac{\Delta t}{2} (k_1 + k_2),    (6)
    k_1 = f(t_n, y_n),    (7)
    k_2 = f(t_n + \Delta t, y_n + \Delta t k_1).    (8)

This is the classical second-order Runge-Kutta method, referred to as RK2. It is also known as Improved Euler or Heun's method.

This is not the only RK2 method. To generate a second RK2 method, all we need to do is apply a different quadrature rule of the same order to approximate the integral. For example, if we use the midpoint rule, we get

    y_{n+1} \approx y_n + \Delta t \, f(t_{n+1/2}, y_{n+1/2}).    (9)

To make this a fully explicit method, we simply replace y_{n+1/2} by a Forward Euler approximation over half a step. This yields

    y_{n+1} \approx y_n + \Delta t \, f\left(t_{n+1/2}, \; y_n + \frac{\Delta t}{2} f(t_n, y_n)\right).    (10)

This is called the Midpoint method. There is no real consistent naming scheme, simply because there were (and continue to be) armies of researchers developing RK methods of different orders and properties.

2.3 RK4

If f is not dependent on y, RK4 can be derived from Simpson's rule. You will show this in your assignment.
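The two updates above translate into the following minimal sketch; rk2_heun_step implements Eqs. (6)-(8) and rk2_midpoint_step implements Eq. (10). The names and the test problem y' = -y are my own illustrative choices.

def rk2_heun_step(f, t, y, dt):
    """One step of Heun's method (RK2), Eqs. (6)-(8)."""
    k1 = f(t, y)
    k2 = f(t + dt, y + dt * k1)
    return y + 0.5 * dt * (k1 + k2)

def rk2_midpoint_step(f, t, y, dt):
    """One step of the Midpoint method, Eq. (10)."""
    half = y + 0.5 * dt * f(t, y)           # Forward Euler over half a step
    return y + dt * f(t + 0.5 * dt, half)   # midpoint-rule update

# Both should be second-order accurate on y' = -y, y(0) = 1.
f = lambda t, y: -y
for step in (rk2_heun_step, rk2_midpoint_step):
    y, t, dt = 1.0, 0.0, 0.01
    for _ in range(100):
        y = step(f, t, y, dt)
        t += dt
    print(step.__name__, y)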

3 General derivations of explicit RK methods

As we go to higher-order explicit RK methods, the quadrature approach becomes ridiculously cumbersome. Instead, it is fairly common to generate explicit RK methods using the general form of RK methods. This is typically written as

    y_{n+1} = y_n + \Delta t \sum_{i=1}^{s} b_i k_i,    (11)
    k_1 = f(t_n, y_n),    (12)
    k_2 = f(t_n + c_2 \Delta t, y_n + \Delta t (a_{21} k_1)),    (13)
    k_3 = f(t_n + c_3 \Delta t, y_n + \Delta t (a_{31} k_1 + a_{32} k_2)),    (14)
    \vdots    (15)
    k_s = f(t_n + c_s \Delta t, y_n + \Delta t (a_{s1} k_1 + a_{s2} k_2 + ... + a_{s,s-1} k_{s-1})).    (16)

It is straightforward to collect the coefficients a_{ij} into the Runge-Kutta matrix A. We also have the condition

    c_i = \sum_{j=1}^{s} a_{ij}.    (17)

For explicit RK methods, A is strictly lower-triangular. This implies that the first row of A is always filled with zeros! Consequently, for explicit RK methods, c_1 = 0.
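To make the general form concrete, here is a sketch of a single explicit RK step driven by the coefficient arrays A, b and c of Eqs. (11)-(17). The function name rk_explicit_step and the closing usage example are illustrative, not something defined in the notes.

import numpy as np

def rk_explicit_step(f, t, y, dt, A, b, c):
    """One step of an explicit s-stage RK method given its coefficients."""
    s = len(b)
    k = np.zeros((s,) + np.shape(y))
    for i in range(s):
        # Build stage i from the stages computed so far (A is strictly lower-triangular).
        yi = y + dt * sum(A[i][j] * k[j] for j in range(i))
        k[i] = f(t + c[i] * dt, yi)
    return y + dt * sum(b[i] * k[i] for i in range(s))

# Heun's method (RK2) from Section 2.2 in this notation: c = [0, 1], a_21 = 1, b = [1/2, 1/2].
A = np.array([[0.0, 0.0], [1.0, 0.0]])
b = np.array([0.5, 0.5])
c = np.array([0.0, 1.0])
print(rk_explicit_step(lambda t, y: -y, 0.0, 1.0, 0.1, A, b, c))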

3.1 Example: Generating explicit RK2 methods

Let's see how one goes about generating explicit RK2 methods with the above general formula. To do this, we first write down the general form of a two-stage RK method:

    y_{n+1} = y_n + \Delta t (b_1 k_1 + b_2 k_2) + O(\Delta t^3),    (18)
    k_1 = f(t_n, y_n),    (19)
    k_2 = f(t_n + c_2 \Delta t, y_n + \Delta t a_{21} k_1).    (20)

Our strategy to find b_1, b_2, c_2 and a_{21} is straightforward: we will match the Taylor expansion of y_{n+1} to the Taylor expansion of the general RK method. This will give us conditions to find the unknown coefficients.

Taylor Expansion for y_{n+1}

First, let's Taylor expand y_{n+1}:

    y_{n+1} = y_n + \Delta t \, y'(t_n) + \frac{\Delta t^2}{2} y''(t_n) + O(\Delta t^3).    (21)

From the ODE, we have y'(t) = f(t, y). We can use this to obtain an expression for y''(t):

    y''(t) = f_t(t, y) + f_y(t, y) y'(t),    (22)
           = f_t(t, y) + f_y(t, y) f(t, y).    (23)

Plugging back into the Taylor series for y_{n+1}, we get

    y_{n+1} = y_n + \Delta t f(t_n, y_n) + \frac{\Delta t^2}{2} [f_t(t_n, y_n) + f_y(t_n, y_n) f(t_n, y_n)] + O(\Delta t^3),    (24)
            = y_n + \frac{\Delta t}{2} f(t_n, y_n) + \frac{\Delta t}{2} [f(t_n, y_n) + \Delta t f_t(t_n, y_n) + \Delta t f_y(t_n, y_n) f(t_n, y_n)] + O(\Delta t^3).    (25)

Taylor Expansion for the general RK scheme

Now that we have an exact Taylor expansion for y_{n+1}, we can expand the RK scheme and compare. First, we Taylor expand f(t_n + c_2 \Delta t, y_n + \Delta t a_{21} k_1):

    f(t_n + c_2 \Delta t, y_n + \Delta t a_{21} k_1) = f(t_n, y_n) + c_2 \Delta t f_t(t_n, y_n) + \Delta t a_{21} f_y(t_n, y_n) k_1 + O(\Delta t^2),    (26)
                                                     = f(t_n, y_n) + c_2 \Delta t f_t(t_n, y_n) + \Delta t a_{21} f_y(t_n, y_n) f(t_n, y_n) + O(\Delta t^2).    (27)

We then plug this expression into the general RK scheme and simplify:

    y_{n+1} = y_n + (b_1 + b_2) \Delta t f(t_n, y_n) + b_2 \Delta t^2 [c_2 f_t(t_n, y_n) + a_{21} f_y(t_n, y_n) f(t_n, y_n)] + O(\Delta t^3).    (28)

Comparing Taylor expansions

We can now compare the Taylor expansion of y_{n+1} to the Taylor-expanded general RK scheme, and require the terms to match up to O(\Delta t^3). This gives us three conditions:

    b_1 + b_2 = 1,    (29)
    c_2 b_2 = 1/2,    (30)
    a_{21} b_2 = 1/2.    (31)

These are three nonlinear equations for four unknowns! There are thus multiple choices of the unknowns that satisfy these equations.

In practice, we typically pick the c values, then solve for the b values, and then solve for the a values. Remember, the c values dictate where between t_n and t_{n+1} we place our stages.

    b_1 = b_2 = 1/2, c_2 = a_{21} = 1 yields the classical RK2 method.
    b_1 = 0, b_2 = 1, c_2 = a_{21} = 1/2 leads to the previously seen Midpoint method.
    b_1 = 1/4, b_2 = 3/4, c_2 = a_{21} = 2/3 describes the so-called Ralston method, which has been shown to minimize the local truncation error for the class of RK2 schemes.

In general, the higher the number of stages s of the RK method, the more the number of conditions and unknowns, and the trickier it becomes to solve for the unknowns.

3.2 Butcher Tableaux

It becomes extremely tedious to write down the different a, b, c values for RK methods. John Butcher designed an elegant solution to this problem: it is now common to collect these coefficients in a table, also called a Butcher tableau. Butcher tableaux are represented as follows:

    c_1 | a_11  a_12  ...  a_1s
    c_2 | a_21  a_22  ...  a_2s
     .  |  .     .          .
    c_s | a_s1  a_s2  ...  a_ss
    ----+----------------------
        | b_1   b_2   ...  b_s

A huge list of Butcher tableaux for both explicit and implicit RK methods is given at https://en.wikipedia.org/wiki/List_of_Runge%E2%80%93Kutta_methods.
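As a usage example for Butcher tableaux, the coefficients of the classical four-stage RK4 method (one of the tableaux listed on the page above) can be fed directly to the rk_explicit_step sketch from the previous section; the listing below is illustrative only.

import numpy as np

# Classical RK4: c = [0, 1/2, 1/2, 1], b = [1/6, 1/3, 1/3, 1/6],
# nonzero entries a_21 = a_32 = 1/2, a_43 = 1.
A4 = np.array([[0.0, 0.0, 0.0, 0.0],
               [0.5, 0.0, 0.0, 0.0],
               [0.0, 0.5, 0.0, 0.0],
               [0.0, 0.0, 1.0, 0.0]])
b4 = np.array([1/6, 1/3, 1/3, 1/6])
c4 = np.array([0.0, 0.5, 0.5, 1.0])

# One step of size 0.1 on y' = -y, y(0) = 1; the result matches e^{-0.1} to roughly 1e-7.
print(rk_explicit_step(lambda t, y: -y, 0.0, 1.0, 0.1, A4, b4, c4), np.exp(-0.1))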

4 Implicit RK Methods From Quadrature

If you recall the derivation of explicit RK methods from quadrature, we initially ended up with something implicit when applying the quadrature rule, then threw out the implicitness by replacing appropriate terms with a Forward Euler approximation. To generate implicit RK methods (IRK methods), we simply apply the quadrature rule over the interval, and do no replacing.

Why use implicit RK methods at all? For one thing, we expect implicit RK methods to be more stable than explicit methods, just like AM and BDF methods were more stable than AB methods. For another, explicit RK methods have a non-monotonic relationship between order p and number of stages s. For p > 4, s > p; an eighth-order explicit RK method actually requires at least 11 stages! However, for implicit RK methods, it is possible to get a much higher order than the number of stages used.

Here, we will discuss a few IRK methods and how to generate them. First, we present a definition and theorem that reassure us about the connection between RK methods and polynomial interpolation.

Definition: Let c_1, ..., c_s be distinct real numbers in [0, 1]. The collocation polynomial p(t) is a polynomial of degree s satisfying

    p(t_0) = y_0,    (32)
    p'(t_0 + c_i \Delta t) = f(t_0 + c_i \Delta t, p(t_0 + c_i \Delta t)),  i = 1, ..., s,    (33)

and the numerical solution of the corresponding collocation method is defined by y_1 = p(t_0 + \Delta t).

Theorem (Wright, 1970): The collocation method of the above definition is equivalent to the s-stage RK method with coefficients

    a_{ij} = \int_0^{c_i} l_j(t) \, dt,    (34)
    b_i = \int_0^1 l_i(t) \, dt,    (35)
    l_i(t) = \prod_{k \ne i} \frac{t - c_k}{c_i - c_k}.    (36)

Practically speaking, this theorem simply tells us that we can generate the RK coefficients for an IRK method by picking the c values, then using the Lagrange interpolating polynomial over the interval to generate the a and b values. Recall that the c values are where the intermediate stages are located in time between t_n and t_{n+1}.
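The theorem translates directly into a short computation. The sketch below (the function name collocation_rk_coefficients is my own) builds the Lagrange basis polynomials for a chosen set of c values and integrates them following Eqs. (34)-(36); with c = [1/2 - sqrt(3)/6, 1/2 + sqrt(3)/6] it reproduces the two-stage Gauss-Legendre tableau shown under the Gauss methods below.

import numpy as np

def collocation_rk_coefficients(c):
    """Compute the RK matrix A and weights b for collocation nodes c, Eqs. (34)-(36)."""
    c = np.asarray(c, dtype=float)
    s = len(c)
    A = np.zeros((s, s))
    b = np.zeros(s)
    for j in range(s):
        # Lagrange basis l_j(t): product over k != j of (t - c_k)/(c_j - c_k).
        others = np.delete(c, j)
        lj = np.poly(others) / np.prod(c[j] - others)   # polynomial coefficients of l_j
        Lj = np.polyint(lj)                             # antiderivative with L_j(0) = 0
        b[j] = np.polyval(Lj, 1.0)                      # b_j = int_0^1 l_j(t) dt
        for i in range(s):
            A[i, j] = np.polyval(Lj, c[i])              # a_ij = int_0^{c_i} l_j(t) dt
    return A, b

c = [0.5 - np.sqrt(3)/6, 0.5 + np.sqrt(3)/6]
A, b = collocation_rk_coefficients(c)
print(A)   # expect [[1/4, 1/4 - sqrt(3)/6], [1/4 + sqrt(3)/6, 1/4]]
print(b)   # expect [1/2, 1/2]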

4.1 Gauss methods

Gauss methods are based on Gaussian quadrature. The idea is simple: we select c_1, ..., c_s to be the zeros of some orthogonal polynomial. Some common choices are Legendre polynomials, Chebyshev polynomials of the first kind, and Chebyshev polynomials of the second kind. We will briefly discuss Gauss-Legendre IRK methods.

A Gauss-Legendre method based on s stages has an order of p = 2s. Further, all Gauss-Legendre methods are A-stable! These methods can therefore go to arbitrarily high order, making them superior in one way to BDF methods. Unfortunately, they can be very expensive to compute; recall that for an implicit method, one has to solve a linear system. In the case of an s-step implicit multistep method that solves an ODE system of M components, the linear system is M x M in size. However, for an s-stage IRK method, the linear system is Ms x Ms in size. Thus, in practice, higher-order Gauss-Legendre methods are rarely used. This is why even BDF3 and BDF4 are popular despite their lack of A-stability; on large systems of ODEs, they are very efficient to solve (at the cost of extra storage).

Here is the Butcher tableau for the two-stage Gauss-Legendre method of order 4:

    1/2 - \sqrt{3}/6 | 1/4                1/4 - \sqrt{3}/6
    1/2 + \sqrt{3}/6 | 1/4 + \sqrt{3}/6   1/4
    -----------------+------------------------------------
                     | 1/2                1/2

Expanding the stages from the Butcher tableau, we have

    y_{n+1} = y_n + \frac{\Delta t}{2} (k_1 + k_2),    (37)
    k_1 = f(t_n + (1/2 - \sqrt{3}/6) \Delta t, \; y_n + \Delta t (1/4 \, k_1 + (1/4 - \sqrt{3}/6) k_2)),    (38)
    k_2 = f(t_n + (1/2 + \sqrt{3}/6) \Delta t, \; y_n + \Delta t ((1/4 + \sqrt{3}/6) k_1 + 1/4 \, k_2)).    (39)

This is obviously implicit! It also doesn't look as nice as an explicit RK method!

4.2 Radau methods

Radau methods have the highest possible order among quadrature formulae with either c_1 = 0 or c_s = 1. In these cases, p = 2s - 1. Radau formulae allow either the first or last RK stage to be explicit, thereby offering lower costs than Gauss-Legendre RK methods while also allowing for a high-order A-stable method. If c_s = 1, the corresponding collocation method is called a Radau IIA method.

4.3 Lobatto methods

Lobatto methods have the highest possible order with c_1 = 0 and c_s = 1. For this to be possible, the c values must be zeros of

    \frac{d^{s-2}}{dx^{s-2}} \left( x^{s-1} (x - 1)^{s-1} \right).    (40)

The order is p = 2s - 2. When s = 2, we recover the implicit trapezoidal rule.
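To close, here is a sketch of what taking a step with an implicit RK method actually involves: one step of the two-stage Gauss-Legendre method of Eqs. (37)-(39), with the coupled stage equations solved by simple fixed-point iteration. The function name, iteration count, and test problem are illustrative choices; for stiff problems a Newton iteration would be used instead.

import numpy as np

def gauss_legendre2_step(f, t, y, dt, iters=20):
    """One step of the two-stage Gauss-Legendre IRK method, Eqs. (37)-(39)."""
    r = np.sqrt(3) / 6
    c = [0.5 - r, 0.5 + r]
    A = [[0.25, 0.25 - r],
         [0.25 + r, 0.25]]
    k1, k2 = f(t, y), f(t, y)                      # initial guesses for the stages
    for _ in range(iters):                         # fixed-point iteration on (38)-(39)
        k1 = f(t + c[0] * dt, y + dt * (A[0][0] * k1 + A[0][1] * k2))
        k2 = f(t + c[1] * dt, y + dt * (A[1][0] * k1 + A[1][1] * k2))
    return y + 0.5 * dt * (k1 + k2)                # Eq. (37)

# Fourth-order accuracy on y' = -y even with a fairly large step.
print(gauss_legendre2_step(lambda t, y: -y, 0.0, 1.0, 0.5), np.exp(-0.5))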