Consistency and Convergence


Jim Lambers
MAT 772 Fall Semester 2010-11
Lecture 20 Notes

These notes correspond to Sections 1.3, 1.4 and 1.5 in the text.

Consistency and Convergence

We have learned that the numerical solution obtained from Euler's method,
$$y_{n+1} = y_n + h f(t_n, y_n), \quad t_n = t_0 + nh,$$
converges to the exact solution $y(t)$ of the initial value problem
$$y' = f(t, y), \quad y(t_0) = y_0,$$
as $h \to 0$. We now analyze the convergence of a general one-step method of the form
$$y_{n+1} = y_n + h \Phi(t_n, y_n, h),$$
for some continuous function $\Phi(t, y, h)$. We define the local truncation error of this one-step method by
$$T_n(h) = \frac{y(t_{n+1}) - y(t_n)}{h} - \Phi(t_n, y(t_n), h).$$
That is, the local truncation error is the result of substituting the exact solution into the approximation of the ODE by the numerical method.

As $h \to 0$ and $n \to \infty$, in such a way that $t_0 + nh = t \in [t_0, T]$, we obtain
$$T_n(h) \to y'(t) - \Phi(t, y(t), 0).$$
We therefore say that the one-step method is consistent if
$$\Phi(t, y, 0) = f(t, y).$$
A consistent one-step method is one that converges to the ODE as $h \to 0$.

We then say that a one-step method is stable if $\Phi(t, y, h)$ is Lipschitz continuous in $y$; that is, for some constant $L_\Phi$,
$$|\Phi(t, u, h) - \Phi(t, v, h)| \le L_\Phi |u - v|, \quad t \in [t_0, T], \quad u, v \in \mathbb{R}, \quad h \in [0, h_0].$$
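As a concrete illustration of these definitions, here is a minimal Python sketch of Euler's method, together with an empirical check that its global error behaves like $O(h)$. The test problem $y' = -y$, $y(0) = 1$, with exact solution $e^{-t}$, is an assumption chosen for illustration and does not come from these notes.

```python
import math

def euler(f, t0, y0, T, h):
    """Euler's method y_{n+1} = y_n + h*f(t_n, y_n); returns the value at T."""
    n = round((T - t0) / h)            # number of steps
    t, y = t0, y0
    for _ in range(n):
        y = y + h * f(t, y)            # one Euler step
        t = t + h
    return y

# Assumed test problem: y' = -y, y(0) = 1, exact solution y(t) = e^{-t}.
f = lambda t, y: -y
exact = math.exp(-1.0)                 # y(1)

# Global error at t = 1 for a sequence of halved step sizes.
errs = [abs(euler(f, 0.0, 1.0, 1.0, h) - exact) for h in (0.1, 0.05, 0.025)]

# Observed order p from consecutive errors: p ~ log2(e_h / e_{h/2}).
orders = [math.log2(errs[i] / errs[i + 1]) for i in range(len(errs) - 1)]
```

Halving $h$ roughly halves the error, so the observed order comes out close to 1, consistent with Euler's method being first-order accurate.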

We now show that a consistent and stable one-step method is convergent. Using the same approach and notation as in the convergence proof of Euler's method, and the fact that the method is stable, we obtain the following bound for the global error $e_n = y(t_n) - y_n$:
$$|e_n| \le \left( \frac{e^{L_\Phi (T - t_0)} - 1}{L_\Phi} \right) \max_{0 \le m \le n-1} |T_m(h)|.$$
Because the method is consistent, we have
$$\lim_{h \to 0} \max_{0 \le n \le T/h} |T_n(h)| = 0.$$
It follows that as $h \to 0$ and $n \to \infty$ in such a way that $t_0 + nh = t$, we have
$$\lim_{n \to \infty} e_n = 0,$$
and therefore the method is convergent.

In the case of Euler's method, we have
$$\Phi(t, y, h) = f(t, y), \quad T_n(h) = \frac{h}{2} y''(\tau), \quad \tau \in (t_0, T).$$
Therefore, there exists a constant $K$ such that
$$|T_n(h)| \le Kh, \quad 0 < h \le h_0,$$
for some sufficiently small $h_0$. We say that Euler's method is first-order accurate. More generally, we say that a one-step method has order of accuracy $p$ if, for any sufficiently smooth solution $y(t)$, there exist constants $K$ and $h_0$ such that
$$|T_n(h)| \le K h^p, \quad 0 < h \le h_0.$$
We now consider an example of a higher-order accurate method.

An Implicit One-Step Method

Suppose that we approximate the equation
$$y(t_{n+1}) = y(t_n) + \int_{t_n}^{t_{n+1}} y'(s)\, ds$$
by applying the Trapezoidal Rule to the integral. This yields a one-step method
$$y_{n+1} = y_n + \frac{h}{2}\left[ f(t_n, y_n) + f(t_{n+1}, y_{n+1}) \right],$$

known as the trapezoidal method. It follows from the error in the Trapezoidal Rule that
$$T_n(h) = \frac{y(t_{n+1}) - y(t_n)}{h} - \frac{1}{2}\left[ f(t_n, y(t_n)) + f(t_{n+1}, y(t_{n+1})) \right] = -\frac{1}{12} h^2 y'''(\tau_n), \quad \tau_n \in (t_n, t_{n+1}).$$
Therefore, the trapezoidal method is second-order accurate.

To show convergence, we must establish stability by finding a suitable Lipschitz constant $L_\Phi$ for the function
$$\Phi(t, y, h) = \frac{1}{2}\left[ f(t, y) + f(t + h, y + h \Phi(t, y, h)) \right],$$
assuming that $L_f$ is a Lipschitz constant for $f(t, y)$ in $y$. We have
$$|\Phi(t, u, h) - \Phi(t, v, h)| = \frac{1}{2}\left| f(t, u) + f(t + h, u + h \Phi(t, u, h)) - f(t, v) - f(t + h, v + h \Phi(t, v, h)) \right| \le L_f |u - v| + \frac{h}{2} L_f |\Phi(t, u, h) - \Phi(t, v, h)|.$$
Therefore
$$\left( 1 - \frac{h}{2} L_f \right) |\Phi(t, u, h) - \Phi(t, v, h)| \le L_f |u - v|,$$
and therefore
$$L_\Phi \le \frac{L_f}{1 - \frac{h}{2} L_f},$$
provided that $\frac{h}{2} L_f < 1$. We conclude that for $h$ sufficiently small, the trapezoidal method is stable, and therefore convergent, with $O(h^2)$ global error.

The trapezoidal method contrasts with Euler's method because it is an implicit method, due to the evaluation of $f(t, y)$ at $y_{n+1}$. It follows that it is generally necessary to solve a nonlinear equation to obtain $y_{n+1}$ from $y_n$. This additional computational effort is offset by the fact that implicit methods are generally more stable than explicit methods such as Euler's method. Another example of an implicit method is backward Euler's method
$$y_{n+1} = y_n + h f(t_{n+1}, y_{n+1}).$$
Like Euler's method, backward Euler's method is first-order accurate.

Runge-Kutta Methods

We have seen that Euler's method is first-order accurate. We would like to use Taylor series to design methods that have a higher order of accuracy. First, however, we must get around the fact that an analysis of the global error, as was carried out for Euler's method, is quite cumbersome.
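Returning briefly to the trapezoidal method: since it is implicit, the equation for $y_{n+1}$ must be solved at every step. A simple (though not the most robust) approach is fixed-point iteration, which contracts exactly when $\frac{h}{2} L_f < 1$, the same smallness condition that appeared in the stability argument. The following sketch is illustrative only, again using the assumed test problem $y' = -y$, $y(0) = 1$:

```python
import math

def trapezoid_step(f, t, y, h, iters=20):
    """One step of the implicit trapezoidal method.

    Solves y_new = y + (h/2)*(f(t, y) + f(t + h, y_new)) by fixed-point
    iteration from an Euler predictor; contracts when h*L_f/2 < 1.
    """
    fy = f(t, y)
    y_new = y + h * fy                       # Euler predictor
    for _ in range(iters):
        y_new = y + 0.5 * h * (fy + f(t + h, y_new))
    return y_new

def trapezoid(f, t0, y0, T, h):
    """Integrate y' = f(t, y) from t0 to T with the trapezoidal method."""
    n = round((T - t0) / h)
    t, y = t0, y0
    for _ in range(n):
        y = trapezoid_step(f, t, y, h)
        t += h
    return y

# Assumed test problem: y' = -y, y(0) = 1, exact solution e^{-t}.
y_trap = trapezoid(lambda t, y: -y, 0.0, 1.0, 1.0, 0.1)
rel_err = abs(y_trap - math.exp(-1.0)) / math.exp(-1.0)
```

For stiff problems one would normally replace the fixed-point iteration with Newton's method, since the contraction condition otherwise forces $h$ to be small.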

Instead, we will design new methods based on the criterion that their local truncation error, the error committed during a single time step, is higher-order in $h$.

Using higher-order Taylor series directly to approximate $y(t_{n+1})$ is cumbersome, because it requires evaluating derivatives of $f$. Therefore, our approach will be to use evaluations of $f$ at carefully chosen values of its arguments, $t$ and $y$, in order to create an approximation that is just as accurate as a higher-order Taylor series expansion of $y(t + h)$. To find the right values of $t$ and $y$ at which to evaluate $f$, we need to take a Taylor expansion of $f$ evaluated at these (unknown) values, and then match the resulting numerical scheme to a Taylor series expansion of $y(t + h)$ around $t$. To that end, we state a generalization of Taylor's theorem to functions of two variables.

Theorem. Let $f(t, y)$ be $(n + 1)$ times continuously differentiable on a convex set $D$, and let $(t_0, y_0) \in D$. Then, for every $(t, y) \in D$, there exists $\xi$ between $t_0$ and $t$, and $\mu$ between $y_0$ and $y$, such that
$$f(t, y) = P_n(t, y) + R_n(t, y),$$
where $P_n(t, y)$ is the $n$th Taylor polynomial of $f$ about $(t_0, y_0)$,
$$P_n(t, y) = f(t_0, y_0) + \left[ (t - t_0) \frac{\partial f}{\partial t}(t_0, y_0) + (y - y_0) \frac{\partial f}{\partial y}(t_0, y_0) \right] + \left[ \frac{(t - t_0)^2}{2} \frac{\partial^2 f}{\partial t^2}(t_0, y_0) + (t - t_0)(y - y_0) \frac{\partial^2 f}{\partial t \partial y}(t_0, y_0) + \frac{(y - y_0)^2}{2} \frac{\partial^2 f}{\partial y^2}(t_0, y_0) \right] + \cdots + \frac{1}{n!} \sum_{j=0}^{n} \binom{n}{j} (t - t_0)^{n-j} (y - y_0)^j \frac{\partial^n f}{\partial t^{n-j} \partial y^j}(t_0, y_0),$$
and $R_n(t, y)$ is the remainder term associated with $P_n(t, y)$,
$$R_n(t, y) = \frac{1}{(n + 1)!} \sum_{j=0}^{n+1} \binom{n + 1}{j} (t - t_0)^{n+1-j} (y - y_0)^j \frac{\partial^{n+1} f}{\partial t^{n+1-j} \partial y^j}(\xi, \mu).$$

We now illustrate our proposed approach in order to obtain a method that is second-order accurate; that is, its local truncation error is $O(h^2)$. This involves matching
$$y + h f(t, y) + \frac{h^2}{2} \frac{d}{dt}[f(t, y)] + \frac{h^3}{6} \frac{d^2}{dt^2}[f(\xi, y(\xi))]$$
to
$$y + h a_1 f(t + \alpha_1, y + \beta_1),$$
where $t \le \xi \le t + h$ and the parameters $a_1$, $\alpha_1$ and $\beta_1$ are to be determined. After simplifying by removing terms or factors that already match, we see that we only need to match
$$f(t, y) + \frac{h}{2} \frac{d}{dt}[f(t, y)] + \frac{h^2}{6} \frac{d^2}{dt^2}[f(\xi, y(\xi))]$$

with $a_1 f(t + \alpha_1, y + \beta_1)$, at least up to terms of $O(h)$, so that the local truncation error will be $O(h^2)$. Applying the multivariable version of Taylor's theorem to $f$, we obtain
$$a_1 f(t + \alpha_1, y + \beta_1) = a_1 f(t, y) + a_1 \alpha_1 \frac{\partial f}{\partial t}(t, y) + a_1 \beta_1 \frac{\partial f}{\partial y}(t, y) + a_1 \left[ \frac{\alpha_1^2}{2} \frac{\partial^2 f}{\partial t^2}(\xi, \mu) + \alpha_1 \beta_1 \frac{\partial^2 f}{\partial t \partial y}(\xi, \mu) + \frac{\beta_1^2}{2} \frac{\partial^2 f}{\partial y^2}(\xi, \mu) \right],$$
where $\xi$ is between $t$ and $t + \alpha_1$ and $\mu$ is between $y$ and $y + \beta_1$. Meanwhile, computing the full derivatives with respect to $t$ in the Taylor expansion of the solution yields
$$f(t, y) + \frac{h}{2} \frac{\partial f}{\partial t}(t, y) + \frac{h}{2} \frac{\partial f}{\partial y}(t, y) f(t, y) + O(h^2).$$
Comparing terms yields the equations
$$a_1 = 1, \quad a_1 \alpha_1 = \frac{h}{2}, \quad a_1 \beta_1 = \frac{h}{2} f(t, y),$$
which have the solution
$$a_1 = 1, \quad \alpha_1 = \frac{h}{2}, \quad \beta_1 = \frac{h}{2} f(t, y).$$
The resulting numerical scheme is
$$y_{n+1} = y_n + h f\left( t_n + \frac{h}{2}, y_n + \frac{h}{2} f(t_n, y_n) \right).$$
This scheme is known as the midpoint method, or the explicit midpoint method. Note that it evaluates $f$ at the midpoint of the intervals $[t_n, t_{n+1}]$ and $[y_n, y_{n+1}]$, where the midpoint in $y$ is approximated using Euler's method with time step $h/2$.

The midpoint method is the simplest example of a Runge-Kutta method, which is the name given to any of a class of time-stepping schemes that are derived by matching multivariable Taylor series expansions of $f(t, y)$ with terms in a Taylor series expansion of $y(t + h)$. Another often-used Runge-Kutta method is the modified Euler method
$$y_{n+1} = y_n + \frac{h}{2}\left[ f(t_n, y_n) + f(t_{n+1}, y_n + h f(t_n, y_n)) \right],$$
which resembles the Trapezoidal Rule from numerical integration, and is also second-order accurate.
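The second-order accuracy of the explicit midpoint method can be observed numerically: halving $h$ should cut the global error by roughly a factor of four. A short sketch, with $y' = -y$, $y(0) = 1$ as an assumed test problem (not taken from these notes):

```python
import math

def midpoint_solve(f, t0, y0, T, h):
    """Integrate y' = f(t, y) with the explicit midpoint method."""
    n = round((T - t0) / h)
    t, y = t0, y0
    for _ in range(n):
        # evaluate f at the approximate midpoint of the step
        y = y + h * f(t + 0.5 * h, y + 0.5 * h * f(t, y))
        t += h
    return y

exact = math.exp(-1.0)   # exact y(1) for y' = -y, y(0) = 1
e1 = abs(midpoint_solve(lambda t, y: -y, 0.0, 1.0, 1.0, 0.1) - exact)
e2 = abs(midpoint_solve(lambda t, y: -y, 0.0, 1.0, 1.0, 0.05) - exact)
order = math.log2(e1 / e2)   # observed order, close to 2
```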

However, the best-known Runge-Kutta method is the fourth-order Runge-Kutta method, which uses four evaluations of $f$ during each time step. The method proceeds as follows:
$$k_1 = h f(t_n, y_n),$$
$$k_2 = h f\left( t_n + \frac{h}{2}, y_n + \frac{1}{2} k_1 \right),$$
$$k_3 = h f\left( t_n + \frac{h}{2}, y_n + \frac{1}{2} k_2 \right),$$
$$k_4 = h f(t_{n+1}, y_n + k_3),$$
$$y_{n+1} = y_n + \frac{1}{6}\left( k_1 + 2 k_2 + 2 k_3 + k_4 \right).$$
In a sense, this method is similar to Simpson's Rule from numerical integration, which is also fourth-order accurate, as values of $f$ at the midpoint in time are given four times as much weight as values at the endpoints $t_n$ and $t_{n+1}$.

Example. We compare Euler's method with the fourth-order Runge-Kutta scheme on the initial value problem
$$y' = -2ty, \quad 0 < t \le 1, \quad y(0) = 1,$$
which has the exact solution $y(t) = e^{-t^2}$. We use a time step of $h = 0.1$ for both methods. The computed solutions, and the exact solution, are shown in Figure 1. It can be seen that the fourth-order Runge-Kutta method is far more accurate than Euler's method, which is first-order accurate. In fact, the solution computed using the fourth-order Runge-Kutta method is visually indistinguishable from the exact solution. At the final time $T = 1$, the relative error in the solution computed using Euler's method is 0.038, while the relative error in the solution computed using the fourth-order Runge-Kutta method is $4.4 \times 10^{-6}$.
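The comparison in the example can be reproduced with a short script. The IVP is read here as $y' = -2ty$, $y(0) = 1$, with exact solution $y(t) = e^{-t^2}$; this reading is an assumption, chosen because it is consistent with the reported relative errors.

```python
import math

def rk4_solve(f, t0, y0, T, h):
    """Classical fourth-order Runge-Kutta method."""
    n = round((T - t0) / h)
    t, y = t0, y0
    for _ in range(n):
        k1 = h * f(t, y)
        k2 = h * f(t + 0.5 * h, y + 0.5 * k1)
        k3 = h * f(t + 0.5 * h, y + 0.5 * k2)
        k4 = h * f(t + h, y + k3)
        y += (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
        t += h
    return y

def euler_solve(f, t0, y0, T, h):
    """Euler's method, for comparison."""
    n = round((T - t0) / h)
    t, y = t0, y0
    for _ in range(n):
        y += h * f(t, y)
        t += h
    return y

f = lambda t, y: -2.0 * t * y            # assumed example IVP y' = -2ty
exact = math.exp(-1.0)                   # y(1) = e^{-1} for y(t) = e^{-t^2}
rel_euler = abs(euler_solve(f, 0.0, 1.0, 1.0, 0.1) - exact) / exact
rel_rk4 = abs(rk4_solve(f, 0.0, 1.0, 1.0, 0.1) - exact) / exact
```

With this reading, the Euler relative error at $T = 1$ comes out near the 0.038 reported in the text, while the Runge-Kutta error is several orders of magnitude smaller.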

Figure 1: Solutions of $y' = -2ty$, $y(0) = 1$ on $[0, 1]$, computed using Euler's method and the fourth-order Runge-Kutta method.