Solving Ordinary Differential Equations

Taylor methods can be used to build explicit methods with a higher order of convergence than Euler's method. The main difficulty of these methods is the computation of the derivatives of $f(t, y)$. The idea behind them is simple: consider the Taylor series of $y(t_{n+1})$ up to a certain order. For example, truncating at order $h^2$ gives the Taylor method of order 2. Let us build this method. Consider:
$$y(t_{n+1}) = y(t_n) + h\, y'(t_n) + \frac{h^2}{2}\, y''(t_n) + \frac{h^3}{6}\, y'''(\zeta_n).$$
We keep the terms up to order $h^2$. To compute $y'(t_n)$ and $y''(t_n)$ we use the differential equation:
$$y'(t) = f(t, y(t)),$$
$$y''(t) = f_t(t, y(t)) + f_y(t, y(t))\, y'(t) = f_t(t, y(t)) + f_y(t, y(t))\, f(t, y(t)).$$
Therefore we obtain the method
$$y_{n+1} = y_n + h f(t_n, y_n) + \frac{h^2}{2}\bigl(f_t(t_n, y_n) + f_y(t_n, y_n)\, f(t_n, y_n)\bigr).$$
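Below is a minimal Python sketch of this order-2 Taylor method; the function name taylor2 and the test problem $y' = -2ty$ are illustrative assumptions (not from the slides), and the user must supply the partial derivatives $f_t$ and $f_y$ analytically.

import numpy as np

def taylor2(f, f_t, f_y, t0, y0, h, n_steps):
    # Taylor method of order 2: y_{n+1} = y_n + h f + (h^2/2)(f_t + f_y f),
    # with all quantities evaluated at (t_n, y_n).
    t, y = t0, float(y0)
    ts, ys = [t], [y]
    for _ in range(n_steps):
        fn = f(t, y)
        y = y + h * fn + 0.5 * h**2 * (f_t(t, y) + f_y(t, y) * fn)
        t = t + h
        ts.append(t)
        ys.append(y)
    return np.array(ts), np.array(ys)

# Illustrative test problem: y' = -2 t y, y(0) = 1, exact solution exp(-t^2)
ts, ys = taylor2(lambda t, y: -2 * t * y,   # f
                 lambda t, y: -2 * y,       # f_t
                 lambda t, y: -2 * t,       # f_y
                 0.0, 1.0, 0.1, 10)
print(ys[-1], np.exp(-ts[-1] ** 2))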

Defining the local truncation error as the term neglected when building the method, we see that for this method the error is $T_n = \frac{h^3}{6}\, y'''(\zeta_n)$, which satisfies $|T_n| \le C h^3$. This means that the method is of order 2.

Error Analysis
1. Truncation error (from truncating the Taylor series): a large step size gives large errors.
2. Rounding error (machine precision): a very small step size gives roundoff errors.

Two kinds of truncation error:
1. Local: the error made within one step by applying the method.
2. Propagation: the error due to previous local errors.

Global truncation error = local + propagation. In general, if the local truncation error is $O(h^{n+1})$, then the global truncation error is $O(h^n)$.

General single-step methods: before introducing the Runge-Kutta methods, we briefly describe the general set-up for single-step methods. In general, these methods can be written as
$$y_{n+1} = y_n + h\, \Phi(x_n, y_n, y_{n+1}, h), \qquad (1)$$
where $\Phi$ is related to $f$ and is called the step function. When $\Phi$ depends on $y_{n+1}$ the method is implicit; otherwise the method is explicit.

The examples of single-step methods considered so far are:
1. Forward Euler: $\Phi(x_n, y_n, y_{n+1}, h) = f(x_n, y_n)$ (explicit).
2. Trapezoidal: $\Phi(x_n, y_n, y_{n+1}, h) = \bigl(f(x_n, y_n) + f(x_{n+1}, y_{n+1})\bigr)/2$ (implicit).
3. Taylor of order 2: $\Phi(x_n, y_n, y_{n+1}, h) = f(t_n, y_n) + \frac{h}{2}\bigl(f_t(t_n, y_n) + f_y(t_n, y_n)\, f(t_n, y_n)\bigr)$ (explicit).

Definition (local truncation error for single-step methods). We define the local truncation error, $T_n$, as
$$T_n(h) = y(t_{n+1}) - y(t_n) - h\, \Phi(t_n, y(t_n), y(t_{n+1}), h). \qquad (2)$$

Runge-Kutta methods are very popular methods which allow us to obtain high-order methods while avoiding the computation of the derivatives required by the Taylor methods. Let us see how to build an explicit Runge-Kutta method of order 2:
$$y_{n+1} = y_n + h\, \Phi(t_n, y_n, h), \qquad \Phi(t_n, y_n, h) = a k_1 + b\, f(t_n + \alpha h,\; y_n + \beta h k_1), \qquad k_1 = f(t_n, y_n). \qquad (3)$$
The constants of the step function will be determined by imposing that the method behaves like the Taylor method up to order 2. The Taylor method of order 2 was
$$y_{n+1} = y_n + h f(t_n, y_n) + \frac{h^2}{2}\bigl(f_t(t_n, y_n) + f_y(t_n, y_n)\, f(t_n, y_n)\bigr).$$
Then, in order to compare with the new method, we just need to expand $f(t_n + \alpha h,\, y_n + \beta h k_1)$ up to order 1:
$$f(t_n + \alpha h,\; y_n + \beta h k_1) = f(t_n, y_n) + f_t(t_n, y_n)\, \alpha h + f_y(t_n, y_n)\, \beta h k_1 + O(h^2) = f(t_n, y_n) + f_t(t_n, y_n)\, \alpha h + f_y(t_n, y_n)\, f(t_n, y_n)\, \beta h + O(h^2).$$

Substituting into Eq. (3), we have that, up to order 2, the new method is
$$y_{n+1} = y_n + h (a + b)\, f(t_n, y_n) + b h^2 \bigl(\alpha f_t(t_n, y_n) + \beta f(t_n, y_n)\, f_y(t_n, y_n)\bigr).$$
Then, if the method must behave as the Taylor method up to order 2, we obtain the following equations for the parameters of the method:
$$a + b = 1, \qquad b\alpha = b\beta = 1/2.$$
The methods satisfying these equations are convergent with order of convergence 2. Consider, for instance, the methods with $\alpha = \beta$. Then $b = 1/(2\alpha)$ and $a = 1 - 1/(2\alpha)$. The methods with $\alpha = 1/2$ and $\alpha = 1$ are called the Modified Euler Method and the Heun Method, respectively.

The Modified Euler Method ($\alpha = 1/2$) can be written as
$$y_{n+1} = y_n + h\, f\!\left(t_n + \tfrac{h}{2},\; y_n + \tfrac{h}{2} f_n\right), \qquad (4)$$
and using the notation $k_1 = f(t_n, y_n)$, $k_2 = f\!\left(t_n + \tfrac{h}{2},\; y_n + \tfrac{h}{2} k_1\right)$, we have $y_{n+1} = y_n + h k_2$.

The Heun Method ($\alpha = 1$) can be written as
$$y_{n+1} = y_n + \frac{h}{2}\bigl(f(t_n, y_n) + f(t_{n+1},\; y_n + h f(t_n, y_n))\bigr). \qquad (5)$$
If $k_1 = f(t_n, y_n)$ and $k_2 = f(t_n + h,\; y_n + h k_1)$, then we have
$$y_{n+1} = y_n + \frac{h}{2}(k_1 + k_2).$$
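As a concrete illustration, here is a minimal Python sketch of Heun's method (5); the helper name heun and the test problem $y' = y$, $y(0) = 1$ are assumptions made for the example only.

import numpy as np

def heun(f, t0, y0, h, n_steps):
    # Heun's method (explicit RK2): y_{n+1} = y_n + (h/2)(k1 + k2)
    t, y = t0, float(y0)
    ts, ys = [t], [y]
    for _ in range(n_steps):
        k1 = f(t, y)
        k2 = f(t + h, y + h * k1)
        y = y + 0.5 * h * (k1 + k2)
        t = t + h
        ts.append(t)
        ys.append(y)
    return np.array(ts), np.array(ys)

# Illustrative test problem: y' = y, y(0) = 1; compare with e at t = 1
ts, ys = heun(lambda t, y: y, 0.0, 1.0, 0.1, 10)
print(ys[-1], np.exp(1.0))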

The previous notation can be generalized:
$$k_1 = f(t_n, y_n), \qquad k_2 = f(t_n + c_2 h,\; y_n + a_{21} h k_1), \qquad y_{n+1} = y_n + h (b_1 k_1 + b_2 k_2),$$
where $c_2$, $a_{21}$, $b_1$ and $b_2$ are constants. All these methods are members of a large family of methods called the Runge-Kutta methods.

The general $s$-stage Runge-Kutta method is
$$y_{n+1} = y_n + h \sum_{i=1}^{s} b_i k_i, \qquad k_i = f\!\left(t_n + c_i h,\; y_n + h \sum_{j=1}^{s} a_{ij} k_j\right), \quad i = 1, 2, \ldots, s.$$
A convenient way of displaying the coefficients of a Runge-Kutta method is the Butcher tableau:
$$\begin{array}{c|c} c & A \\ \hline & b^T \end{array} \;=\;
\begin{array}{c|cccc}
c_1 & a_{11} & a_{12} & \cdots & a_{1s} \\
c_2 & a_{21} & a_{22} & \cdots & a_{2s} \\
\vdots & \vdots & \vdots & & \vdots \\
c_s & a_{s1} & a_{s2} & \cdots & a_{ss} \\
\hline
 & b_1 & b_2 & \cdots & b_s
\end{array}$$
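The following Python sketch shows how a single explicit Runge-Kutta step can be driven directly by a Butcher tableau; the helper rk_step is a hypothetical name, and the tableau used to exercise it is the standard classical RK4 one (well known, but not shown on these slides).

import numpy as np

def rk_step(f, t, y, h, A, b, c):
    # One step of an explicit s-stage Runge-Kutta method given its
    # Butcher tableau (A strictly lower triangular, weights b, nodes c).
    s = len(b)
    k = np.zeros(s)
    for i in range(s):
        # only previously computed stages enter an explicit method
        k[i] = f(t + c[i] * h, y + h * np.dot(A[i, :i], k[:i]))
    return y + h * np.dot(b, k)

# Classical 4-stage, order-4 tableau (standard coefficients)
A = np.array([[0.0, 0.0, 0.0, 0.0],
              [0.5, 0.0, 0.0, 0.0],
              [0.0, 0.5, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])
b = np.array([1/6, 1/3, 1/3, 1/6])
c = np.array([0.0, 0.5, 0.5, 1.0])

# Illustrative step: y' = -y, y(0) = 1, one step of size h = 0.1
print(rk_step(lambda t, y: -y, 0.0, 1.0, 0.1, A, b, c))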

Example. Find the Butcher tableau of the following Runge-Kutta method:
$$y_{n+1} = y_n + h\left(\frac{1}{4} k_1 + \frac{3}{4} k_2\right),$$
$$k_1 = f\!\left(t_n,\; y_n + h\left(\frac{1}{4} k_1 - \frac{1}{4} k_2\right)\right), \qquad
k_2 = f\!\left(t_n + \frac{2}{3} h,\; y_n + h\left(\frac{1}{4} k_1 + \frac{5}{12} k_2\right)\right).$$
Solution:
$$\begin{array}{c|cc}
0 & \tfrac{1}{4} & -\tfrac{1}{4} \\
\tfrac{2}{3} & \tfrac{1}{4} & \tfrac{5}{12} \\
\hline
 & \tfrac{1}{4} & \tfrac{3}{4}
\end{array}$$

If $p(s)$ is the maximum order of convergence that can be obtained using an explicit $s$-stage Runge-Kutta method, then we have
$$p(s) = s, \quad s = 1, 2, 3, 4; \qquad p(5) = 4; \quad p(6) = 5; \quad p(7) = 6; \quad p(8) = 6; \quad p(9) = 7; \qquad p(s) \le s - 2, \quad s = 10, 11, \ldots$$
This behaviour explains the popularity of the 4-stage Runge-Kutta methods of order 4.

Adaptive Step-Size Control

Goal: with little additional effort, estimate (or bound) the magnitude of the local truncation error at each step, so that the step size can be reduced or increased as the local error increases or decreases.

Basic idea: use a matched pair of Runge-Kutta formulas of orders $p$ and $p + 1$ that share common stage values $k_i$ and yield an estimate or bound of the local truncation error.

RK method of order $p$: $\; y_{n+1} = y_n + h \sum_{i=1}^{s} b_i k_i$. RK method of order $p + 1$: $\; \hat{y}_{n+1} = y_n + h \sum_{i=1}^{s} \hat{b}_i k_i$.

These methods are called embedded Runge-Kutta methods and can be expressed using the following modified Butcher tableau:
$$\begin{array}{c|c} c & A \\ \hline & b^T \\ & \hat{b}^T \end{array}$$

The local truncation error for the order-$p$ method can be estimated as
$$T_{n+1} = h \sum_{i=1}^{s} (\hat{b}_i - b_i)\, k_i.$$
Then, the step size can be adapted by imposing the condition
$$|T_{n+1}| < \mathrm{Tol},$$
where Tol is the tolerance per unit step. The new step size is then selected as
$$h_{\text{new}} = \left(\frac{q\, \mathrm{Tol}}{|T_{n+1}|}\right)^{\frac{1}{p+1}} h, \qquad (6)$$
where $q$ is a safety factor with a value of $q \approx 0.8$.
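A minimal Python sketch of how the error estimate and the update rule (6) can be wired together follows; embedded_rk_step, new_step_size, the clamping bounds, and the tiny Euler/Heun pair used as an example are illustrative assumptions, not a prescribed implementation.

import numpy as np

def embedded_rk_step(f, t, y, h, A, b, b_hat, c):
    # One step of an explicit embedded pair: returns the order-p solution
    # and the error estimate |T_{n+1}| = |h * sum_i (b_hat_i - b_i) k_i|.
    s = len(b)
    k = np.zeros(s)
    for i in range(s):
        k[i] = f(t + c[i] * h, y + h * np.dot(A[i, :i], k[:i]))
    y_low = y + h * np.dot(b, k)
    err = abs(h * np.dot(np.asarray(b_hat) - np.asarray(b), k))
    return y_low, err

def new_step_size(h, err, tol, p, q=0.8, h_min=1e-12, h_max=1.0):
    # Step-size update of Eq. (6); the h_min/h_max clamps are illustrative.
    if err == 0.0:
        return min(2.0 * h, h_max)      # negligible error: enlarge the step
    return max(h_min, min((q * tol / err) ** (1.0 / (p + 1)) * h, h_max))

# Tiny embedded pair: forward Euler (order 1) sharing stages with Heun (order 2)
A = np.array([[0.0, 0.0], [1.0, 0.0]])
b, b_hat, c = [1.0, 0.0], [0.5, 0.5], [0.0, 1.0]
y1, err = embedded_rk_step(lambda t, y: -y, 0.0, 1.0, 0.1, A, b, b_hat, c)
print(y1, err, new_step_size(0.1, err, tol=1e-4, p=1))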

Example: the Runge-Kutta-Fehlberg (4,5) scheme has the following Butcher tableau:
$$\begin{array}{c|cccccc}
0 & 0 & 0 & 0 & 0 & 0 & 0 \\
\tfrac{1}{4} & \tfrac{1}{4} & 0 & 0 & 0 & 0 & 0 \\
\tfrac{3}{8} & \tfrac{3}{32} & \tfrac{9}{32} & 0 & 0 & 0 & 0 \\
\tfrac{12}{13} & \tfrac{1932}{2197} & -\tfrac{7200}{2197} & \tfrac{7296}{2197} & 0 & 0 & 0 \\
1 & \tfrac{439}{216} & -8 & \tfrac{3680}{513} & -\tfrac{845}{4104} & 0 & 0 \\
\tfrac{1}{2} & -\tfrac{8}{27} & 2 & -\tfrac{3544}{2565} & \tfrac{1859}{4104} & -\tfrac{11}{40} & 0 \\
\hline
 & \tfrac{25}{216} & 0 & \tfrac{1408}{2565} & \tfrac{2197}{4104} & -\tfrac{1}{5} & 0 \\
 & \tfrac{16}{135} & 0 & \tfrac{6656}{12825} & \tfrac{28561}{56430} & -\tfrac{9}{50} & \tfrac{2}{55} \\
 & \tfrac{1}{360} & 0 & -\tfrac{128}{4275} & -\tfrac{2197}{75240} & \tfrac{1}{50} & \tfrac{2}{55}
\end{array}$$
The last three rows are the fourth-order weights $b^T$, the fifth-order weights $\hat{b}^T$, and the difference $\hat{b}^T - b^T$ used for the error estimate.

Higher-Order ODEs and Systems of Equations

An $n$-th order ODE can be converted into a system of $n$ coupled first-order ODEs. Systems of first-order ODEs are solved just as one solves a single ODE.

Example: consider the 4th-order ODE
$$y'''' + a(x)\, y''' + b(x)\, y'' + c(x)\, y' + d(x)\, y = f(x).$$
By letting $y' = v_1$, $y'' = v_2$ and $y''' = v_3$, this 4th-order ODE can be written as the system of four coupled first-order ODEs
$$y' = v_1, \qquad v_1' = v_2, \qquad v_2' = v_3, \qquad v_3' = f(x) - a(x)\, v_3 - b(x)\, v_2 - c(x)\, v_1 - d(x)\, y.$$
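A minimal Python sketch of this rewriting follows; the coefficient functions a, b, c, d and the right-hand side f below are illustrative placeholders, and any single-ODE method can then be applied componentwise to the vector system.

import numpy as np

# Rewrite y'''' + a y''' + b y'' + c y' + d y = f(x) as u' = F(x, u) with
# u = [y, v1, v2, v3]. The coefficients below are illustrative placeholders.
a = lambda x: 0.0
b = lambda x: 1.0
c = lambda x: 0.0
d = lambda x: 1.0
f = lambda x: np.sin(x)

def F(x, u):
    y, v1, v2, v3 = u
    return np.array([v1,
                     v2,
                     v3,
                     f(x) - a(x) * v3 - b(x) * v2 - c(x) * v1 - d(x) * y])

def euler_system(F, x0, u0, h, n_steps):
    # forward Euler applied componentwise to the first-order system
    x, u = x0, np.asarray(u0, dtype=float)
    for _ in range(n_steps):
        u = u + h * F(x, u)
        x = x + h
    return x, u

print(euler_system(F, 0.0, [1.0, 0.0, 0.0, 0.0], 0.01, 100))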

Stiff Differential Equations

A stiff system of ODEs is one involving rapidly changing components together with slowly changing ones. In many cases, the rapidly varying components die away quickly, after which the solution is dominated by the slow ones. Consider the following ODE:
$$y' = -50\,(y - \cos t), \qquad 0 \le t \le 1.25, \qquad y(0) = 0.$$
The figure shows the approximations obtained with the Forward Euler method for $h = 1.25/31$ and $h = 1.25/32$.

[Figure: Forward Euler solutions of $dy/dt = -50\,(y - \cos t)$, plotted as $y$ versus $t$ for $N = 32$ (o) and $N = 31$ (x).]
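The experiment in the figure can be reproduced with a short forward Euler loop; this sketch simply uses the two step sizes quoted above, and the different behaviour of the runs is explained by the stability bound $h < 2/50$ discussed next.

import numpy as np

def forward_euler(f, t0, y0, h, n_steps):
    # plain forward Euler on a scalar problem
    t, y = t0, float(y0)
    ys = [y]
    for _ in range(n_steps):
        y = y + h * f(t, y)
        t = t + h
        ys.append(y)
    return np.array(ys)

f = lambda t, y: -50.0 * (y - np.cos(t))

# N = 32 gives h = 1.25/32 < 2/50 (stable); N = 31 gives h = 1.25/31 > 2/50
# (the numerical solution oscillates and grows).
for N in (32, 31):
    ys = forward_euler(f, 0.0, 0.0, 1.25 / N, N)
    print(N, ys[-1])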

In order to ensure the stability of the numerical solution, we have to choose a step size $h < 2/50$. This means that, for stiff problems, we would need a very small step to capture the behavior of the rapid transient and to preserve a stable and accurate solution, and this would make it very laborious to compute the slowly evolving solution. Euler's method is known as an explicit method because the derivative is taken at the known point $n$. An alternative is to use an implicit approach, in which the derivative is evaluated at the future step $n + 1$. The simplest method of this type is the backward Euler method, which yields
$$y_{n+1} = y_n + h f(t_{n+1}, y_{n+1}).$$
In general, for this kind of problem, suitable methods are the BDF (backward differentiation formulae) methods. These methods have the general form
$$\frac{1}{h} \sum_{j=1}^{k} \frac{1}{j}\, \nabla^j y_{n+1} = f_{n+1},$$
where $\nabla$ denotes the backward difference operator.
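For comparison, here is a minimal Python sketch of the backward Euler method applied to the same stiff example; the implicit equation at each step is solved with a few Newton iterations, and the helper names and fixed iteration count are illustrative assumptions.

import numpy as np

def backward_euler(f, dfdy, t0, y0, h, n_steps, newton_iters=5):
    # Backward Euler: solve y_{n+1} = y_n + h f(t_{n+1}, y_{n+1}) by Newton
    # iteration (scalar problem; dfdy is the partial derivative of f w.r.t. y).
    t, y = t0, float(y0)
    ts, ys = [t], [y]
    for _ in range(n_steps):
        t_next = t + h
        y_next = y                                   # initial guess
        for _ in range(newton_iters):
            g = y_next - y - h * f(t_next, y_next)   # residual
            dg = 1.0 - h * dfdy(t_next, y_next)      # residual derivative
            y_next -= g / dg
        t, y = t_next, y_next
        ts.append(t)
        ys.append(y)
    return np.array(ts), np.array(ys)

# Stiff example from these slides: y' = -50 (y - cos t), y(0) = 0.
# Backward Euler remains stable even with the "unstable" step h = 1.25/31.
ts, ys = backward_euler(lambda t, y: -50.0 * (y - np.cos(t)),
                        lambda t, y: -50.0,
                        0.0, 0.0, 1.25 / 31, 31)
print(ys[-1])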

Linear Multistep Methods for I.V. Problems

Linear multistep methods can be written in the general form
$$\sum_{j=0}^{k} \alpha_j\, y_{n+j} = h \sum_{j=0}^{k} \beta_j\, f_{n+j}, \qquad (7)$$
where $k$ is called the step number and, without loss of generality, we let $\alpha_k = 1$. Explicit methods are characterised by $\beta_k = 0$, and implicit methods have $\beta_k \neq 0$.
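To make the general form (7) concrete, the sketch below advances an explicit linear multistep method by one step directly from its $(\alpha_j, \beta_j)$ coefficients; the helper name explicit_lmm_step is hypothetical, and the AB2 coefficients used in the example are those derived below.

import numpy as np

def explicit_lmm_step(alpha, beta, y_hist, f_hist, h):
    # One step of an explicit linear multistep method (Eq. 7) with
    # alpha_k = 1 and beta_k = 0:
    #   y_{n+k} = -sum_{j<k} alpha_j y_{n+j} + h sum_{j<k} beta_j f_{n+j}.
    # y_hist and f_hist hold y_n..y_{n+k-1} and f_n..f_{n+k-1}.
    k = len(alpha) - 1
    return -np.dot(alpha[:k], y_hist) + h * np.dot(beta[:k], f_hist)

# AB2 in this form: y_{n+2} - y_{n+1} = h (3/2 f_{n+1} - 1/2 f_n)
alpha = np.array([0.0, -1.0, 1.0])
beta = np.array([-0.5, 1.5, 0.0])

# Illustrative step for y' = -y starting from exact history values
f = lambda t, y: -y
h = 0.1
y_hist = np.array([1.0, np.exp(-h)])                   # y_n, y_{n+1}
f_hist = np.array([f(0.0, y_hist[0]), f(h, y_hist[1])])
print(explicit_lmm_step(alpha, beta, y_hist, f_hist, h))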

The Adams Family

This family of methods is derived from the identity
$$y(t_{n+1}) - y(t_n) = \int_{t_n}^{t_{n+1}} f(t, y(t))\, dt. \qquad (8)$$
With a view to using previously computed values of $y_n$, we replace $f(t, y)$ by the polynomial of degree $k - 1$ passing through the $k$ points $(t_{n-k+1}, f_{n-k+1}), \ldots, (t_{n-1}, f_{n-1}), (t_n, f_n)$. Using a constant step size, this polynomial can be written (in Newton backward difference form) as
$$P_{k-1}(r) = f_n + r\, \nabla f_n + \frac{r(r+1)}{2!}\, \nabla^2 f_n + \cdots + \frac{r(r+1)\cdots(r+k-2)}{(k-1)!}\, \nabla^{k-1} f_n,$$
where $t = t_n + r h$, $r \in [-(k-1), 0]$.

Then, the $k$-step Adams-Bashforth method has the form
$$y_{n+1} - y_n = \int_{t_n}^{t_{n+1}} P_{k-1}(t)\, dt,$$
and by integrating the polynomial,
$$y_{n+1} - y_n = \int_{0}^{1} P_{k-1}(r)\, h\, dr = h \sum_{i=0}^{k-1} \gamma_i\, \nabla^i f_n, \qquad (9)$$
where
$$\gamma_i = (-1)^i \int_0^1 \binom{-r}{i}\, dr,$$
so that
$$\gamma_0 = \int_0^1 dr = 1, \quad \gamma_1 = \int_0^1 r\, dr = \frac{1}{2}, \quad \gamma_2 = \int_0^1 \frac{r(r+1)}{2}\, dr = \frac{5}{12}, \quad \gamma_3 = \int_0^1 \frac{r(r+1)(r+2)}{6}\, dr = \frac{3}{8}, \ \ldots$$

The first Adams-Bashforth methods are then:
$$y_{n+1} = y_n + h f_n \qquad \text{(AB1), the Euler method}$$
$$y_{n+1} = y_n + \frac{h}{2}\,(3 f_n - f_{n-1}) \qquad \text{(AB2)}$$
$$y_{n+1} = y_n + \frac{h}{12}\,(23 f_n - 16 f_{n-1} + 5 f_{n-2}) \qquad \text{(AB3)}$$
$$y_{n+1} = y_n + \frac{h}{24}\,(55 f_n - 59 f_{n-1} + 37 f_{n-2} - 9 f_{n-3}) \qquad \text{(AB4)} \qquad (10)$$
and so on.
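As an illustration, a minimal Python sketch of AB2 follows; the single extra starting value is generated with one Heun step (any second-order starter would do), and the function name and test problem are assumptions for the example.

import numpy as np

def ab2(f, t0, y0, h, n_steps):
    # Two-step Adams-Bashforth: y_{n+1} = y_n + (h/2)(3 f_n - f_{n-1}),
    # started with one Heun (RK2) step to supply y_1.
    k1 = f(t0, y0)
    y1 = y0 + 0.5 * h * (k1 + f(t0 + h, y0 + h * k1))
    t = t0 + h
    ts, ys = [t0, t], [y0, y1]
    f_prev, f_curr = f(t0, y0), f(t, y1)
    y = y1
    for _ in range(n_steps - 1):
        y = y + 0.5 * h * (3.0 * f_curr - f_prev)
        t = t + h
        f_prev, f_curr = f_curr, f(t, y)
        ts.append(t)
        ys.append(y)
    return np.array(ts), np.array(ys)

# Illustrative test problem: y' = -y, y(0) = 1; compare with exp(-1) at t = 1
ts, ys = ab2(lambda t, y: -y, 0.0, 1.0, 0.1, 10)
print(ys[-1], np.exp(-1.0))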

The Adams-Bashforth methods are explicit methods. Implicit Adams methods are derived by integrating the degree-$k$ polynomial passing through the points $(t_{n+1}, f_{n+1}), (t_n, f_n), (t_{n-1}, f_{n-1}), \ldots, (t_{n-k+1}, f_{n-k+1})$. Note that we are now including the additional data point $(t_{n+1}, f_{n+1})$. The corresponding polynomial is
$$P_k(r) = f_{n+1} + r\, \nabla f_{n+1} + \frac{r(r+1)}{2!}\, \nabla^2 f_{n+1} + \cdots + \frac{r(r+1)\cdots(r+k-1)}{k!}\, \nabla^k f_{n+1},$$
where $t = t_{n+1} + r h$, $r \in [-k, 0]$.

Then, for the implicit methods we will have
$$y_{n+1} - y_n = \int_{-1}^{0} P_k(r)\, h\, dr = h \sum_{i=0}^{k} \gamma_i^*\, \nabla^i f_{n+1},$$
where (writing $\gamma_i^*$ to distinguish these coefficients from the Adams-Bashforth ones)
$$\gamma_0^* = \int_{-1}^{0} dr = 1, \quad \gamma_1^* = \int_{-1}^{0} r\, dr = -\frac{1}{2}, \quad \gamma_2^* = \int_{-1}^{0} \frac{r(r+1)}{2}\, dr = -\frac{1}{12}, \quad \gamma_3^* = \int_{-1}^{0} \frac{r(r+1)(r+2)}{6}\, dr = -\frac{1}{24}, \ \ldots$$

The implicit Adams methods are called Adams-Moulton (AM) methods. The first AM methods are:
$$y_{n+1} = y_n + h f_{n+1} = y_n + h f(t_{n+1}, y_{n+1}) \qquad \text{(AM0), the implicit Euler method}$$
$$y_{n+1} = y_n + \frac{h}{2}\,(f_{n+1} + f_n) \qquad \text{(AM1), the trapezoidal method}$$
$$y_{n+1} = y_n + \frac{h}{12}\,(5 f_{n+1} + 8 f_n - f_{n-1}) \qquad \text{(AM2)}$$
$$y_{n+1} = y_n + \frac{h}{24}\,(9 f_{n+1} + 19 f_n - 5 f_{n-1} + f_{n-2}) \qquad \text{(AM3)} \qquad (11)$$
Note that, as is typical for any implicit method, we have to solve a nonlinear equation at each step.
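The sketch below shows one simple way to handle that nonlinear equation for the trapezoidal rule (AM1): predict with an explicit Euler step and then apply a few fixed-point corrector iterations; the iteration count and names are illustrative assumptions, and for genuinely stiff problems a Newton-type solver would normally be preferred.

import numpy as np

def am1_trapezoidal(f, t0, y0, h, n_steps, fp_iters=10):
    # Trapezoidal rule (AM1): y_{n+1} = y_n + (h/2)(f_{n+1} + f_n).
    # The implicit equation is solved by fixed-point iteration, started
    # from a forward Euler predictor.
    t, y = t0, float(y0)
    ts, ys = [t], [y]
    for _ in range(n_steps):
        fn = f(t, y)
        t_next = t + h
        y_next = y + h * fn                          # predictor (explicit Euler)
        for _ in range(fp_iters):                    # corrector iterations
            y_next = y + 0.5 * h * (fn + f(t_next, y_next))
        t, y = t_next, y_next
        ts.append(t)
        ys.append(y)
    return np.array(ts), np.array(ys)

# Illustrative test problem: y' = -y, y(0) = 1
ts, ys = am1_trapezoidal(lambda t, y: -y, 0.0, 1.0, 0.1, 10)
print(ys[-1], np.exp(-1.0))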