Jim Lambers MAT 772 Fall Semester Lecture 21 Notes

These notes correspond to Sections 12.6, 12.7 and 12.8 in the text.

Multistep Methods

All of the numerical methods that we have developed for solving initial value problems are one-step methods, because they use only information about the solution at time $t_n$ to approximate the solution at time $t_{n+1}$. As $n$ increases, there are additional values of the solution, at previous times, that could be helpful, but are unused. Multistep methods are time-stepping methods that do use this information. A general multistep method has the form
$$\sum_{i=0}^{s} \alpha_i y_{n+1-i} = h \sum_{i=0}^{s} \beta_i f(t_{n+1-i}, y_{n+1-i}),$$
where $s$ is the number of steps in the method ($s = 1$ for a one-step method), and $h$ is the time step size, as before. By convention, $\alpha_0 = 1$, so that $y_{n+1}$ can be conveniently expressed in terms of the other values. If $\beta_0 = 0$, the multistep method is said to be explicit, because then $y_{n+1}$ can be computed from an explicit formula, whereas if $\beta_0 \neq 0$, the method is implicit, because then an equation, generally nonlinear, must be solved to compute $y_{n+1}$.

To ensure that the multistep method is consistent with the underlying ODE $y' = f(t, y)$, for general $f$, it is necessary that the constants $\alpha_i$ and $\beta_i$ satisfy the constraints
$$\sum_{i=0}^{s} \alpha_i = 0, \qquad \sum_{i=0}^{s} \beta_i = -\sum_{i=0}^{s} i\,\alpha_i.$$
The first constraint ensures that if $f(t, y) \equiv 0$, a constant solution is produced by the numerical scheme; the second ensures that as $h \to 0$, the numerical method converges to the ODE itself.

Most multistep methods fall into two broad categories:

Adams methods: these involve the integral form of the ODE,
$$y(t_{n+1}) = y(t_n) + \int_{t_n}^{t_{n+1}} f(s, y(s))\,ds.$$
The general idea behind Adams methods is to approximate the above integral using polynomial interpolation of $f$ at the points $t_{n+1-s}, t_{n+2-s}, \ldots, t_n$ if the method is explicit, and $t_{n+1}$ as well if the method is implicit. In all Adams methods, $\alpha_0 = 1$, $\alpha_1 = -1$, and $\alpha_i = 0$ for $i = 2, \ldots, s$.

Backward differentiation formulas (BDF): these methods also use polynomial interpolation, but for a different purpose: to approximate the derivative of $y$ at $t_{n+1}$. This approximation is then equated to $f(t_{n+1}, y_{n+1})$. It follows that all methods based on BDFs are implicit, and they all satisfy $\beta_0 \neq 0$, with $\beta_i = 0$ for $i = 1, 2, \ldots, s$.

In our discussion, we will focus exclusively on Adams methods. Explicit Adams methods are called Adams-Bashforth methods. To derive an Adams-Bashforth method, we interpolate $f$ at the points $t_n, t_{n-1}, \ldots, t_{n-s+1}$ with a polynomial of degree $s - 1$. We then integrate this polynomial exactly. It follows that the constants $\beta_i$, $i = 1, \ldots, s$, are the integrals of the corresponding Lagrange polynomials from $t_n$ to $t_{n+1}$, divided by $h$, because there is already a factor of $h$ in the general multistep formula.

The local truncation error of the resulting method is $O(h^s)$. To see this, we first note that because all of the interpolation points have a spacing of $h$, and the degree of the interpolating polynomial is $s - 1$, the interpolation error is $O(h^s)$. Integrating this error over an interval of width $h$ yields an error in $y_{n+1}$ that is $O(h^{s+1})$, but because the definition of local truncation error calls for dividing both sides of the difference equation by $h$, the local truncation error turns out to be $O(h^s)$.

Example. We derive the three-step Adams-Bashforth method,
$$y_{n+1} = y_n + h\left[ \beta_1 f(t_n, y_n) + \beta_2 f(t_{n-1}, y_{n-1}) + \beta_3 f(t_{n-2}, y_{n-2}) \right].$$
The constants $\beta_i$, $i = 1, 2, 3$, are obtained by evaluating the integral from $t_n$ to $t_{n+1}$ of a polynomial $p_2(t)$ that passes through $f(t_n, y_n)$, $f(t_{n-1}, y_{n-1})$, and $f(t_{n-2}, y_{n-2})$. Because we can write
$$p_2(t) = \sum_{i=0}^{2} f(t_{n-i}, y_{n-i}) L_i(t),$$
where $L_i(t)$ is the $i$th Lagrange polynomial for the interpolation points $t_n$, $t_{n-1}$ and $t_{n-2}$, and because our final method expresses $y_{n+1}$ as a linear combination of $y_n$ and values of $f$, it follows that the constants $\beta_i$, $i = 1, 2, 3$, are the integrals of the Lagrange polynomials from $t_n$ to $t_{n+1}$, divided by $h$.
However, using the change of variable $u = (t_{n+1} - s)/h$ in the integral, we can instead interpolate at the points $u = 1, 2, 3$, thus simplifying the integration. If we define $\tilde{p}_2(u) = p_2(s) = p_2(t_{n+1} - hu)$ and $\tilde{L}_i(u) = L_i(t_{n+1} - hu)$, then we obtain
\begin{align*}
\int_{t_n}^{t_{n+1}} f(s, y(s))\,ds &\approx \int_{t_n}^{t_{n+1}} p_2(s)\,ds \\
&= h \int_0^1 \tilde{p}_2(u)\,du \\
&= h \int_0^1 \left[ f(t_n, y_n)\tilde{L}_0(u) + f(t_{n-1}, y_{n-1})\tilde{L}_1(u) + f(t_{n-2}, y_{n-2})\tilde{L}_2(u) \right] du \\
&= h \left[ f(t_n, y_n) \int_0^1 \tilde{L}_0(u)\,du + f(t_{n-1}, y_{n-1}) \int_0^1 \tilde{L}_1(u)\,du + f(t_{n-2}, y_{n-2}) \int_0^1 \tilde{L}_2(u)\,du \right] \\
&= h \left[ f(t_n, y_n) \int_0^1 \frac{(u-2)(u-3)}{(1-2)(1-3)}\,du + f(t_{n-1}, y_{n-1}) \int_0^1 \frac{(u-1)(u-3)}{(2-1)(2-3)}\,du + f(t_{n-2}, y_{n-2}) \int_0^1 \frac{(u-1)(u-2)}{(3-1)(3-2)}\,du \right] \\
&= h \left[ \frac{23}{12} f(t_n, y_n) - \frac{4}{3} f(t_{n-1}, y_{n-1}) + \frac{5}{12} f(t_{n-2}, y_{n-2}) \right].
\end{align*}
We conclude that the three-step Adams-Bashforth method is
$$y_{n+1} = y_n + \frac{h}{12}\left[ 23 f(t_n, y_n) - 16 f(t_{n-1}, y_{n-1}) + 5 f(t_{n-2}, y_{n-2}) \right].$$
This method is third-order accurate.

The same approach can be used to derive an implicit Adams method, which is known as an Adams-Moulton method. The only difference is that because $t_{n+1}$ is an interpolation point, after the change of variable to $u$, the interpolation points $0, 1, 2, \ldots, s$ are used. Because the resulting interpolating polynomial is of degree one greater than in the explicit case, the local truncation error of an $s$-step Adams-Moulton method is $O(h^{s+1})$, as opposed to $O(h^s)$ for an $s$-step Adams-Bashforth method.

An Adams-Moulton method can be impractical because, being implicit, it requires an iterative method for solving nonlinear equations, such as fixed-point iteration, and this method must be applied during every time step. An alternative is to pair an Adams-Bashforth method with an Adams-Moulton method to obtain an Adams-Moulton predictor-corrector method. Such a method proceeds as follows:

Predict: Use the Adams-Bashforth method to compute a first approximation to $y_{n+1}$, which we denote by $\tilde{y}_{n+1}$.

Evaluate: Evaluate $f$ at this value, computing $f(t_{n+1}, \tilde{y}_{n+1})$.

Correct: Use the Adams-Moulton method to compute $y_{n+1}$, but instead of solving an equation, use $f(t_{n+1}, \tilde{y}_{n+1})$ in place of $f(t_{n+1}, y_{n+1})$, so that the Adams-Moulton method can be used as if it were an explicit method.

Evaluate: Evaluate $f$ at the newly computed value of $y_{n+1}$, computing $f(t_{n+1}, y_{n+1})$, for use during the next time step.
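The coefficient calculation above can be checked mechanically. The following Python sketch (an illustration, not part of the original notes) integrates the shifted Lagrange polynomials $\tilde{L}_i(u)$ for the nodes $u = 1, \ldots, s$ over $[0, 1]$ using exact rational arithmetic; for $s = 3$ it recovers the coefficients $23/12$, $-4/3$ and $5/12$ derived above.

```python
from fractions import Fraction

def adams_bashforth_coefficients(s):
    """Integrate the Lagrange polynomials for the nodes u = 1, ..., s over [0, 1]."""
    nodes = [Fraction(k) for k in range(1, s + 1)]
    coeffs = []
    for i, xi in enumerate(nodes):
        # Build L_i(u) as a coefficient list (constant term first).
        poly = [Fraction(1)]
        for j, xj in enumerate(nodes):
            if j == i:
                continue
            # Multiply poly by (u - xj) / (xi - xj).
            scale = xi - xj
            new = [Fraction(0)] * (len(poly) + 1)
            for k, c in enumerate(poly):
                new[k] += c * (-xj) / scale
                new[k + 1] += c / scale
            poly = new
        # Integrate from 0 to 1: sum of c_k / (k + 1).
        coeffs.append(sum(c / (k + 1) for k, c in enumerate(poly)))
    return coeffs

# Three-step Adams-Bashforth coefficients 23/12, -4/3, 5/12.
print(adams_bashforth_coefficients(3))
```

Calling the same routine with $s = 1$ and $s = 2$ recovers Euler's method and the two-step Adams-Bashforth coefficients $3/2$, $-1/2$.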

Example. We illustrate the predictor-corrector approach with the two-step Adams-Bashforth method
$$\tilde{y}_{n+1} = y_n + \frac{h}{2}\left[ 3 f(t_n, y_n) - f(t_{n-1}, y_{n-1}) \right]$$
and the two-step Adams-Moulton method
$$y_{n+1} = y_n + \frac{h}{12}\left[ 5 f(t_{n+1}, y_{n+1}) + 8 f(t_n, y_n) - f(t_{n-1}, y_{n-1}) \right].$$
First, we apply the Adams-Bashforth method, and compute
$$\tilde{y}_{n+1} = y_n + \frac{h}{2}\left[ 3 f(t_n, y_n) - f(t_{n-1}, y_{n-1}) \right].$$
Then, we compute $f(t_{n+1}, \tilde{y}_{n+1})$ and apply the Adams-Moulton method, to compute
$$y_{n+1} = y_n + \frac{h}{12}\left[ 5 f(t_{n+1}, \tilde{y}_{n+1}) + 8 f(t_n, y_n) - f(t_{n-1}, y_{n-1}) \right].$$
This new value of $y_{n+1}$ is used when evaluating $f(t_{n+1}, y_{n+1})$ during the next time step.

One drawback of multistep methods is that because they rely on values of the solution from previous time steps, they cannot be used during the first time steps, because not enough values are available. Therefore, it is necessary to use a one-step method, with the same order of accuracy, to compute enough starting values of the solution to be able to use the multistep method. For example, to use the three-step Adams-Bashforth method, it is necessary to first use a one-step method, such as the fourth-order Runge-Kutta method, to compute $y_1$ and $y_2$; then the Adams-Bashforth method can be used to compute $y_3$ using $y_2$, $y_1$ and $y_0$.

Consistency and Zero-Stability

For multistep methods, the notion of convergence is exactly the same as for one-step methods. However, we must define consistency and stability slightly differently, because we must account for the fact that a multistep method requires starting values that are computed using another method. Therefore, we say that a multistep method is consistent if its own local truncation error $\tau_n(h)$ approaches zero as $h \to 0$, and if the one-step method used to compute its starting values is also consistent.
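As a concrete illustration (a sketch, not from the text), the following Python code applies this two-step predictor-corrector pair to the model problem $y' = -y$, $y(0) = 1$, whose exact solution is $e^{-t}$. The single extra starting value $y_1$ is supplied by one step of Heun's second-order Runge-Kutta method, in keeping with the recommendation above to generate starting values with a one-step method.

```python
import math

def f(t, y):
    return -y  # model problem y' = -y, exact solution e^{-t}

def pece_ab2_am2(f, t0, y0, h, n_steps):
    """Two-step Adams-Bashforth predictor / two-step Adams-Moulton corrector (PECE)."""
    t, y = [t0], [y0]
    # Starting value y_1 from one step of Heun's (second-order Runge-Kutta) method.
    k1 = f(t0, y0)
    k2 = f(t0 + h, y0 + h * k1)
    y.append(y0 + h * (k1 + k2) / 2)
    t.append(t0 + h)
    for n in range(1, n_steps):
        fn = f(t[n], y[n])
        fn1 = f(t[n - 1], y[n - 1])
        # Predict with the Adams-Bashforth method.
        y_pred = y[n] + h / 2 * (3 * fn - fn1)
        # Evaluate f at the predicted value.
        f_pred = f(t[n] + h, y_pred)
        # Correct with the Adams-Moulton method, using f_pred for the implicit term.
        y.append(y[n] + h / 12 * (5 * f_pred + 8 * fn - fn1))
        t.append(t[n] + h)
    return t, y

t, y = pece_ab2_am2(f, 0.0, 1.0, 0.05, 200)
print(y[-1], math.exp(-t[-1]))  # the two values agree closely
```

No nonlinear equation is solved at any step: the implicit Adams-Moulton formula is applied with the predicted value, exactly as described above.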
We also say that an $s$-step multistep method is stable, or zero-stable, if there exists a constant $K$ such that, for any two sequences of values $\{y_k\}$ and $\{z_k\}$ produced by the method with step size $h$ from different sets of starting values $\{y_0, y_1, \ldots, y_{s-1}\}$ and $\{z_0, z_1, \ldots, z_{s-1}\}$,
$$|y_n - z_n| \leq K \max_{0 \leq j \leq s-1} |y_j - z_j|,$$
as $h \to 0$.
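This bound can be observed numerically. The sketch below (an illustrative experiment, with $y' = -y$ as an assumed test problem) runs the two-step Adams-Bashforth method twice with starting values that differ by a small perturbation $\delta$, and confirms that the two computed sequences never drift apart by more than a modest multiple of $\delta$.

```python
def ab2(f, t0, starting, h, n_steps):
    """Two-step Adams-Bashforth method from given starting values [y0, y1]."""
    y = list(starting)
    for n in range(1, n_steps):
        tn, tn1 = t0 + n * h, t0 + (n - 1) * h
        y.append(y[n] + h / 2 * (3 * f(tn, y[n]) - f(tn1, y[n - 1])))
    return y

f = lambda t, y: -y
h, delta = 0.1, 1e-6
y = ab2(f, 0.0, [1.0, 1.0 - h], h, 100)          # y1 from a crude Euler step
z = ab2(f, 0.0, [1.0, 1.0 - h + delta], h, 100)  # same, with perturbed y1
gap = max(abs(a - b) for a, b in zip(y, z))
print(gap)  # remains on the order of delta: the method is zero-stable
```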

To compute the local truncation error of Adams methods, we integrate the error in the polynomial interpolation used to derive the method from $t_n$ to $t_{n+1}$. For the explicit $s$-step method, this yields
$$\tau_{n+1}(h) = \frac{1}{h} \int_{t_n}^{t_{n+1}} \frac{f^{(s)}(\xi, y(\xi))}{s!}\,(t - t_n)(t - t_{n-1}) \cdots (t - t_{n-s+1})\,dt.$$
Using the substitution $u = (t_{n+1} - t)/h$, and the Weighted Mean Value Theorem for Integrals, yields
$$\tau_{n+1}(h) = \frac{1}{h}\,\frac{f^{(s)}(\xi, y(\xi))}{s!}\,h^{s+1}(-1)^s \int_0^1 (u-1)(u-2)\cdots(u-s)\,du.$$
Evaluating the integral yields the constant in the error term. We also use the fact that $y' = f(t, y)$ to replace $f^{(s)}(\xi, y(\xi))$ with $y^{(s+1)}(\xi)$. Obtaining the local truncation error for an implicit Adams-Moulton method can be accomplished in the same way, except that $t_{n+1}$ is also used as an interpolation point.

For a general multistep method, we substitute the exact solution into the method, as with one-step methods, and obtain
$$T_n(h) = \frac{\sum_{j=0}^s \alpha_j y(t_{n+1-j}) - h \sum_{j=0}^s \beta_j f(t_{n+1-j}, y(t_{n+1-j}))}{h \sum_{j=0}^s \beta_j},$$
where the scaling by $h \sum_{j=0}^s \beta_j$ is designed to make this definition of local truncation error consistent with that of one-step methods. By replacing each evaluation of $y(t)$ by a Taylor series expansion around $t_{n+1-s}$, we find that $T_n(h) \to 0$ as $h \to 0$ only if
$$\sum_{j=0}^s \alpha_j = 0, \qquad \sum_{j=0}^{s-1} (s - j)\alpha_j = \sum_{j=0}^s \beta_j.$$
In other words, we must have $\rho(1) = 0$ and $\rho'(1) = \sigma(1)$, where
$$\rho(z) = \sum_{j=0}^s \alpha_j z^{s-j}, \qquad \sigma(z) = \sum_{j=0}^s \beta_j z^{s-j}$$
are the first and second characteristic polynomials, respectively, of the multistep method.

However, further analysis is required to obtain the local truncation error of a predictor-corrector method that is obtained by combining two Adams methods. The result of this analysis is the following theorem.

Theorem. Let the solution of the initial value problem
$$y' = f(t, y), \quad t_0 < t \leq T, \quad y(t_0) = y_0,$$
be approximated by the Adams-Moulton $s$-step predictor-corrector method with predictor
$$\tilde{y}_{n+1} = y_n + h \sum_{i=1}^s \tilde{\beta}_i f_{n+1-i}$$
and corrector
$$y_{n+1} = y_n + h \left[ \beta_0 f(t_{n+1}, \tilde{y}_{n+1}) + \sum_{i=1}^s \beta_i f_{n+1-i} \right].$$
Then the local truncation error of the predictor-corrector method is
$$\sigma_{n+1}(h) = \tau_{n+1}(h) + h\,\tilde{\tau}_{n+1}(h)\,\beta_0\,\frac{\partial f}{\partial y}\bigl(t_{n+1}, y(t_{n+1}) + \xi_{n+1}\bigr),$$
where $\tilde{\tau}_{n+1}(h)$ and $\tau_{n+1}(h)$ are the local truncation errors of the predictor and corrector, respectively, and $\xi_{n+1}$ is between $0$ and $h\tilde{\tau}_{n+1}(h)$. Furthermore, there exist constants $\alpha$ and $\beta$ such that
$$|y(t_n) - y_n| \leq \left[ \max_{0 \leq i \leq s-1} |y(t_i) - y_i| + \beta \sigma(h) \right] e^{\alpha(t_n - t_0)},$$
where $\sigma(h) = \max_{s \leq n \leq (T - t_0)/h} |\sigma_n(h)|$. It follows that the predictor-corrector method is convergent if it is consistent.

It is interesting to note that the preceding theorem only requires consistency for convergence, as opposed to both consistency and stability. This is because such a predictor-corrector method is already guaranteed to be stable. To see this, we examine the stability of a general $s$-step multistep method of the form
$$\sum_{i=0}^s \alpha_i y_{n+1-i} = h \sum_{i=0}^s \beta_i f(t_{n+1-i}, y_{n+1-i}).$$
If this method is applied to the initial value problem
$$y' = 0, \quad y(t_0) = y_0,$$
for which the exact solution is $y(t) = y_0$, then for the method to be stable, the computed solution must remain bounded. In this case, the computed solution satisfies the $(s+1)$-term recurrence relation
$$\sum_{i=0}^s \alpha_i y_{n+1-i} = 0,$$
which has a solution of the form
$$y_n = \sum_i c_i n^{p_i} \lambda_i^n,$$

where the $c_i$ and $p_i$ are constants, and the $\lambda_i$ are the roots of the characteristic equation
$$\alpha_0 \lambda^s + \alpha_1 \lambda^{s-1} + \cdots + \alpha_{s-1} \lambda + \alpha_s = 0.$$
When a root $\lambda_i$ is simple, $p_i = 0$. Therefore, to ensure that the solution does not grow exponentially, the method must satisfy the root condition: all roots must satisfy $|\lambda_i| \leq 1$, and if $|\lambda_i| = 1$ for any $i$, then it must be a simple root, meaning that its multiplicity is one. It can be shown that a multistep method is zero-stable if and only if it satisfies the root condition. Furthermore, $\lambda = 1$ is always a root, because in order to be consistent, a multistep method must have the property that $\sum_{i=0}^s \alpha_i = 0$. If this is the only root that has absolute value 1, then we say that the method is strongly stable, whereas if there are multiple roots that are distinct from one another, but have absolute value 1, then the method is said to be weakly stable.

Because all Adams methods have the property that $\alpha_0 = 1$, $\alpha_1 = -1$, and $\alpha_i = 0$ for $i = 2, 3, \ldots, s$, it follows that the roots of the characteristic equation are all zero, except for one root that is equal to 1. Therefore, all Adams methods are strongly stable. This explains why only consistency is necessary to ensure convergence. In general, a consistent multistep method is convergent if and only if it is stable.

Example. A multistep method that is neither an Adams method nor a backward differentiation formula is the implicit 2-step method known as Simpson's method:
$$y_{n+1} = y_{n-1} + \frac{h}{3}\left[ f_{n+1} + 4 f_n + f_{n-1} \right].$$
Although it is only a 2-step method, it is fourth-order accurate, due to the high degree of accuracy of Simpson's Rule. This method is obtained from the relation satisfied by the exact solution,
$$y(t_{n+1}) = y(t_{n-1}) + \int_{t_{n-1}}^{t_{n+1}} f(t, y(t))\,dt.$$
Since the integral is over an interval of width $2h$, it follows that the coefficients $\beta_i$ obtained by polynomial interpolation of $f$ must satisfy the condition $\sum_{i=0}^s \beta_i = 2$, as opposed to summing to 1 as in Adams methods.
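The condition $\sum \beta_i = 2$ is an instance of the consistency requirement $\rho'(1) = \sigma(1)$, since $\rho'(1) = 2$ for Simpson's method. The following sketch (illustrative, not from the text) verifies the consistency conditions $\rho(1) = 0$ and $\rho'(1) = \sigma(1)$ for both the three-step Adams-Bashforth method and Simpson's method, using exact rational arithmetic.

```python
from fractions import Fraction

def is_consistent(alpha, beta):
    """Check rho(1) = 0 and rho'(1) = sigma(1) for a multistep method.

    alpha and beta list the coefficients alpha_0..alpha_s and beta_0..beta_s,
    with rho(z) = sum alpha_j z^(s-j) and sigma(z) = sum beta_j z^(s-j).
    """
    s = len(alpha) - 1
    rho_at_1 = sum(alpha)
    drho_at_1 = sum((s - j) * a for j, a in enumerate(alpha))
    sigma_at_1 = sum(beta)
    return rho_at_1 == 0 and drho_at_1 == sigma_at_1

F = Fraction
# Three-step Adams-Bashforth: y_{n+1} = y_n + h[23/12 f_n - 4/3 f_{n-1} + 5/12 f_{n-2}]
ab3 = ([F(1), F(-1), F(0), F(0)], [F(0), F(23, 12), F(-4, 3), F(5, 12)])
# Simpson's method: y_{n+1} = y_{n-1} + h/3 [f_{n+1} + 4 f_n + f_{n-1}]
simpson = ([F(1), F(0), F(-1)], [F(1, 3), F(4, 3), F(1, 3)])
print(is_consistent(*ab3), is_consistent(*simpson))
```

Both methods pass; a coefficient set violating $\rho'(1) = \sigma(1)$ would be rejected.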
For this method, we have $s = 2$, $\alpha_0 = 1$, $\alpha_1 = 0$ and $\alpha_2 = -1$, which yields the characteristic polynomial $\lambda^2 - 1$. This polynomial has two distinct roots, $1$ and $-1$, that both have absolute value 1. It follows that Simpson's method is only weakly stable.
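The practical consequence of weak stability can be seen on the model problem $y' = -y$, $y(0) = 1$ (an illustrative experiment, not from the text). Because $f$ is linear, the implicit Simpson step can be solved for $y_{n+1}$ in closed form. Even with exact starting values, truncation error excites the parasitic root near $-1$, whose modulus exceeds 1 when $h\lambda < 0$, so the computed solution eventually oscillates with a magnitude far larger than the decaying exact solution $e^{-t}$.

```python
import math

# Simpson's method for y' = lam * y, solving the implicit step in closed form:
#   (1 - h*lam/3) y_{n+1} = (4*h*lam/3) y_n + (1 + h*lam/3) y_{n-1}
lam, h, n_steps = -1.0, 0.1, 200
z = h * lam
y = [1.0, math.exp(z)]  # exact starting values y_0, y_1
for n in range(1, n_steps):
    y.append(((4 * z / 3) * y[n] + (1 + z / 3) * y[n - 1]) / (1 - z / 3))

exact = math.exp(lam * h * n_steps)  # e^{-20}, on the order of 2e-9
print(y[-1], exact)  # the computed value oscillates in sign with far larger magnitude
```

By contrast, the strongly stable Adams methods applied to the same problem produce no such growing oscillation for comparably small $h$.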