Euler's Method, cont'd

Jim Lambers
MAT 461/561
Spring Semester 2009-10
Lecture 3 Notes

These notes correspond to Sections 5.2 and 5.4 in the text.

Euler's Method, cont'd

We conclude our discussion of Euler's method with an example of how the previous convergence analyses can be used to select a suitable time step $h$.

Example Consider the IVP
$$y' = -y, \quad 0 < t < 10, \quad y(0) = 1.$$
We know that the exact solution is $y(t) = e^{-t}$. Euler's method applied to this problem yields the difference equation
$$y_{n+1} = y_n - h y_n = (1 - h) y_n, \quad y_0 = 1.$$
We wish to select $h$ so that the error at time $T = 10$ is less than $0.001$. To that end, we use the error bound
$$|y(t_n) - y_n| \le \frac{hM}{2L} \left[ e^{L(t_n - t_0)} - 1 \right],$$
with $M = 1$, since $y''(t) = e^{-t}$ satisfies $0 < y''(t) \le 1$ on $[0, 10]$, and $L = 1$, since $f(t, y) = -y$ satisfies $|\partial f/\partial y| = |-1| = 1$. Substituting $t_n = 10$ and $t_0 = 0$ yields
$$|y(10) - y_n| \le \frac{h}{2} \left[ e^{10} - 1 \right] \approx 11012.7\, h.$$
Ensuring that the error at this time is less than $10^{-3}$ requires choosing $h < 9.08 \times 10^{-8}$. However, this bound on the error at $t = 10$ is quite crude: applying Euler's method with this time step yields a solution whose error at $t = 10$ is on the order of $10^{-11}$.

Now, suppose that we include roundoff error in our error analysis. The optimal time step is
$$h = \sqrt{\frac{2\delta}{M}},$$
where $\delta$ is a bound on the roundoff error during any time step. We use $\delta = u$, where $u$ is the unit roundoff, because each time step performs only two floating-point operations. Even if $1 - h$ is computed once, in advance, its error still propagates to the multiplication with $y_n$. In a typical double-precision floating-point number system, $u \approx 1.1 \times 10^{-16}$. It follows that the optimal time step is
$$h = \sqrt{\frac{2\delta}{M}} = \sqrt{\frac{2(1.1 \times 10^{-16})}{1}} \approx 1.5 \times 10^{-8}.$$
With this value of $h$, we find that the error at $t = 10$ is approximately $3.55 \times 10^{-12}$. This is even more accurate than with the previous choice of time step, which makes sense, because the new value of $h$ is smaller.

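To make the time-step selection above concrete, here is a minimal Python sketch (an illustration, not part of the original notes). It exploits the fact that, for this problem, Euler's method has the closed form $y_N = (1 - h)^N$, so the error near $t = 10$ can be evaluated without looping over roughly $10^8$ steps; note that this shortcut does not model the per-step accumulation of roundoff discussed above. The step sizes are the two values derived above.

```python
import math

# Euler's method for y' = -y, y(0) = 1 has the update y_{n+1} = (1 - h) y_n,
# so after N steps the approximation is simply (1 - h)**N.  Evaluating this
# closed form avoids looping ~10^8 times, but it does not reproduce the
# per-step roundoff accumulation discussed in the notes.

def euler_error_near_T(h, T=10.0):
    N = round(T / h)           # number of steps needed to reach (approximately) t = T
    t_N = N * h                # the time actually reached
    y_N = (1.0 - h) ** N       # Euler approximation at t_N
    return abs(math.exp(-t_N) - y_N)

u = 1.1e-16                    # unit roundoff in double precision
h_bound = 9.08e-8              # step size from the a priori error bound
h_opt = math.sqrt(2 * u / 1.0) # "optimal" step sqrt(2*delta/M) with delta = u, M = 1

print(euler_error_near_T(h_bound))  # on the order of 1e-11, far below the requested 1e-3
print(euler_error_near_T(h_opt))    # smaller still (the closed form ignores per-step roundoff)
```

Both errors come out far below the requested tolerance of $10^{-3}$, in line with the discussion above.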
Runge-Kutta Methods

We have seen that Euler's method is first-order accurate. We would like to use Taylor series to design methods that have a higher order of accuracy. First, however, we must get around the fact that an analysis of the global error, as was carried out for Euler's method, is quite cumbersome. Instead, we will design new methods based on the criterion that their local error, the error committed during a single time step, is of higher order in $h$. We now define the concept of local error precisely.

Definition The local truncation error of the difference equation
$$y_{n+1} = y_n + h \phi(t_n, y_n)$$
is
$$\tau_{n+1}(h) = \frac{y(t_{n+1}) - \bigl(y(t_n) + h \phi(t_n, y(t_n))\bigr)}{h} = \frac{y(t_{n+1}) - y(t_n)}{h} - \phi(t_n, y(t_n)).$$
That is, the local truncation error is the residual obtained by substituting the exact solution $y(t)$ into the numerical method described by the difference equation. Note that due to the division by $h$, which also occurred in the global error analysis of Euler's method, the local truncation error is of the same order as the global error.
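As a concrete instance (not worked out explicitly in the notes): for Euler's method we have $\phi(t, y) = f(t, y)$, and expanding $y(t_{n+1})$ in a Taylor series about $t_n$ gives
$$\tau_{n+1}(h) = \frac{y(t_n) + h y'(t_n) + \frac{h^2}{2} y''(\xi_n) - y(t_n)}{h} - f(t_n, y(t_n)) = \frac{h}{2} y''(\xi_n) = O(h),$$
since $y'(t_n) = f(t_n, y(t_n))$. This is consistent with the first-order accuracy of Euler's method established earlier.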

Using higher-order Taylor series directly to approximate $y(t_{n+1})$ is cumbersome, because it requires evaluating derivatives of $f$. Therefore, our approach will be to use evaluations of $f$ at carefully chosen values of its arguments, $t$ and $y$, in order to create an approximation that is just as accurate as a higher-order Taylor series expansion of $y(t + h)$. To find the right values of $t$ and $y$ at which to evaluate $f$, we need to take a Taylor expansion of $f$ evaluated at these (unknown) values, and then match the resulting numerical scheme to a Taylor series expansion of $y(t + h)$ around $t$. To that end, we state a generalization of Taylor's theorem to functions of two variables.

Theorem Let $f(t, y)$ be $(n+1)$ times continuously differentiable on a convex set $D$, and let $(t_0, y_0) \in D$. Then, for every $(t, y) \in D$, there exist $\xi$ between $t_0$ and $t$, and $\mu$ between $y_0$ and $y$, such that
$$f(t, y) = P_n(t, y) + R_n(t, y),$$
where $P_n(t, y)$ is the $n$th Taylor polynomial of $f$ about $(t_0, y_0)$,
$$P_n(t, y) = f(t_0, y_0) + \left[ (t - t_0) \frac{\partial f}{\partial t}(t_0, y_0) + (y - y_0) \frac{\partial f}{\partial y}(t_0, y_0) \right] + \left[ \frac{(t - t_0)^2}{2} \frac{\partial^2 f}{\partial t^2}(t_0, y_0) + (t - t_0)(y - y_0) \frac{\partial^2 f}{\partial t \, \partial y}(t_0, y_0) + \frac{(y - y_0)^2}{2} \frac{\partial^2 f}{\partial y^2}(t_0, y_0) \right] + \cdots + \frac{1}{n!} \sum_{j=0}^{n} \binom{n}{j} (t - t_0)^{n-j} (y - y_0)^j \frac{\partial^n f}{\partial t^{n-j} \partial y^j}(t_0, y_0),$$
and $R_n(t, y)$ is the remainder term associated with $P_n(t, y)$,
$$R_n(t, y) = \frac{1}{(n+1)!} \sum_{j=0}^{n+1} \binom{n+1}{j} (t - t_0)^{n+1-j} (y - y_0)^j \frac{\partial^{n+1} f}{\partial t^{n+1-j} \partial y^j}(\xi, \mu).$$
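To connect the theorem to the derivation that follows (a restatement, not in the original notes), take $n = 1$, expand about the point $(t, y)$, and evaluate at $(t + \alpha, y + \beta)$:
$$f(t + \alpha, y + \beta) = f(t, y) + \alpha \frac{\partial f}{\partial t}(t, y) + \beta \frac{\partial f}{\partial y}(t, y) + \frac{1}{2}\left[ \alpha^2 \frac{\partial^2 f}{\partial t^2}(\xi, \mu) + 2\alpha\beta \frac{\partial^2 f}{\partial t \, \partial y}(\xi, \mu) + \beta^2 \frac{\partial^2 f}{\partial y^2}(\xi, \mu) \right].$$
This is exactly the form applied below with $\alpha = \alpha_1$ and $\beta = \beta_1$.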

We now illustrate our proposed approach in order to obtain a method that is second-order accurate; that is, its local truncation error is $O(h^2)$. This involves matching
$$y + h f(t, y) + \frac{h^2}{2} \frac{d}{dt}[f(t, y)] + \frac{h^3}{6} \frac{d^2}{dt^2}[f(\xi, y(\xi))]$$
to
$$y + h a_1 f(t + \alpha_1, y + \beta_1),$$
where $t \le \xi \le t + h$ and the parameters $a_1$, $\alpha_1$ and $\beta_1$ are to be determined. After simplifying by removing terms or factors that already match, we see that we only need to match
$$f(t, y) + \frac{h}{2} \frac{d}{dt}[f(t, y)] + \frac{h^2}{6} \frac{d^2}{dt^2}[f(\xi, y(\xi))]$$
with
$$a_1 f(t + \alpha_1, y + \beta_1),$$
at least up to terms of $O(h)$, so that the local truncation error will be $O(h^2)$. Applying the multivariable version of Taylor's theorem to $f$, we obtain
$$a_1 f(t + \alpha_1, y + \beta_1) = a_1 f(t, y) + a_1 \alpha_1 \frac{\partial f}{\partial t}(t, y) + a_1 \beta_1 \frac{\partial f}{\partial y}(t, y) + a_1 \left[ \frac{\alpha_1^2}{2} \frac{\partial^2 f}{\partial t^2}(\xi, \mu) + \alpha_1 \beta_1 \frac{\partial^2 f}{\partial t \, \partial y}(\xi, \mu) + \frac{\beta_1^2}{2} \frac{\partial^2 f}{\partial y^2}(\xi, \mu) \right],$$
where $\xi$ is between $t$ and $t + \alpha_1$ and $\mu$ is between $y$ and $y + \beta_1$. Meanwhile, computing the full derivatives with respect to $t$ in the Taylor expansion of the solution yields
$$f(t, y) + \frac{h}{2} \frac{\partial f}{\partial t}(t, y) + \frac{h}{2} \frac{\partial f}{\partial y}(t, y) f(t, y) + O(h^2).$$
Comparing terms yields the equations
$$a_1 = 1, \quad a_1 \alpha_1 = \frac{h}{2}, \quad a_1 \beta_1 = \frac{h}{2} f(t, y),$$
which have the solution
$$a_1 = 1, \quad \alpha_1 = \frac{h}{2}, \quad \beta_1 = \frac{h}{2} f(t, y).$$
The resulting numerical scheme is
$$y_{n+1} = y_n + h f\!\left(t_n + \frac{h}{2},\; y_n + \frac{h}{2} f(t_n, y_n)\right).$$
This scheme is known as the midpoint method, or the explicit midpoint method. Note that it evaluates $f$ at the midpoints of the intervals $[t_n, t_{n+1}]$ and $[y_n, y_{n+1}]$, where the midpoint in $y$ is approximated using Euler's method with time step $h/2$.

The midpoint method is the simplest example of a Runge-Kutta method, which is the name given to any of a class of time-stepping schemes that are derived by matching multivariable Taylor series expansions of $f(t, y)$ with terms in a Taylor series expansion of $y(t + h)$. Another often-used Runge-Kutta method is the modified Euler method
$$y_{n+1} = y_n + \frac{h}{2}\left[f(t_n, y_n) + f(t_{n+1}, y_n + h f(t_n, y_n))\right],$$
which resembles the Trapezoidal Rule from numerical integration, and is also second-order accurate.
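Both second-order methods are easy to implement. The following Python sketch (an illustration, not code from the notes) applies them to the earlier test problem $y' = -y$, $y(0) = 1$, and checks that halving $h$ reduces the error at $t = 1$ by roughly a factor of four, as expected for second-order accuracy.

```python
import math

def midpoint_step(f, t, y, h):
    # explicit midpoint method: evaluate f at the approximate midpoint
    return y + h * f(t + h / 2, y + (h / 2) * f(t, y))

def modified_euler_step(f, t, y, h):
    # modified Euler method: average the slopes at t_n and t_{n+1}
    return y + (h / 2) * (f(t, y) + f(t + h, y + h * f(t, y)))

def solve(step, f, y0, T, h):
    t, y = 0.0, y0
    while t < T - 1e-12:
        y = step(f, t, y, h)
        t += h
    return y

f = lambda t, y: -y        # test problem y' = -y, y(0) = 1
exact = math.exp(-1.0)     # exact solution at t = 1

for step in (midpoint_step, modified_euler_step):
    e1 = abs(solve(step, f, 1.0, 1.0, 0.1) - exact)
    e2 = abs(solve(step, f, 1.0, 1.0, 0.05) - exact)
    print(step.__name__, e1 / e2)  # ratio close to 4 for a second-order method
```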

However, the best-known Runge-Kutta method is the fourth-order Runge-Kutta method, which uses four evaluations of $f$ during each time step. The method proceeds as follows:
$$\begin{aligned}
k_1 &= h f(t_n, y_n), \\
k_2 &= h f\!\left(t_n + \tfrac{h}{2},\; y_n + \tfrac{1}{2} k_1\right), \\
k_3 &= h f\!\left(t_n + \tfrac{h}{2},\; y_n + \tfrac{1}{2} k_2\right), \\
k_4 &= h f(t_{n+1},\; y_n + k_3), \\
y_{n+1} &= y_n + \tfrac{1}{6}\left(k_1 + 2 k_2 + 2 k_3 + k_4\right).
\end{aligned}$$
In a sense, this method is similar to Simpson's Rule from numerical integration, which is also fourth-order accurate, as the values of $f$ at the midpoint in time are given four times as much weight as the values at the endpoints $t_n$ and $t_{n+1}$.

Example We compare Euler's method with the fourth-order Runge-Kutta scheme on the initial value problem
$$y' = -2ty, \quad 0 < t \le 1, \quad y(0) = 1,$$
which has the exact solution $y(t) = e^{-t^2}$. We use a time step of $h = 0.1$ for both methods. The computed solutions, together with the exact solution, are shown in Figure 1. It can be seen that the fourth-order Runge-Kutta method is far more accurate than Euler's method, which is first-order accurate. In fact, the solution computed using the fourth-order Runge-Kutta method is visually indistinguishable from the exact solution. At the final time $T = 1$, the relative error in the solution computed using Euler's method is $0.038$, while the relative error in the solution computed using the fourth-order Runge-Kutta method is $4.4 \times 10^{-6}$.

Figure 1: Solutions of $y' = -2ty$, $y(0) = 1$ on $[0, 1]$, computed using Euler's method and the fourth-order Runge-Kutta method.
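The comparison in this example is straightforward to reproduce. The following Python sketch (an illustration, not code from the notes) applies both methods to $y' = -2ty$ with $h = 0.1$ and prints the relative errors at $t = 1$.

```python
import math

def f(t, y):
    return -2.0 * t * y          # right-hand side of y' = -2ty

def euler(f, y0, h, n_steps):
    y = y0
    for i in range(n_steps):
        y += h * f(i * h, y)     # y_{n+1} = y_n + h f(t_n, y_n)
    return y

def rk4(f, y0, h, n_steps):
    y = y0
    for i in range(n_steps):
        t = i * h
        k1 = h * f(t, y)
        k2 = h * f(t + h / 2, y + k1 / 2)
        k3 = h * f(t + h / 2, y + k2 / 2)
        k4 = h * f(t + h, y + k3)
        y += (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return y

h, n = 0.1, 10
exact = math.exp(-1.0)           # y(1) = e^{-1}, since y(t) = e^{-t^2}
print(abs(euler(f, 1.0, h, n) - exact) / exact)  # roughly 0.038
print(abs(rk4(f, 1.0, h, n) - exact) / exact)    # on the order of 1e-6, consistent with the value quoted above
```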