Notes for Numerical Analysis, Math 5466
by S. Adjerid
Virginia Polytechnic Institute and State University
(A Rough Draft)

Contents

1 Numerical Methods for ODEs
  1.1 Introduction
  1.2 One-step Methods
    1.2.1 Taylor Methods
    1.2.2 Runge-Kutta Methods
    1.2.3 Error Estimation and Control
    1.2.4 Systems and higher-order differential equations
  1.3 Multi-step Methods
    1.3.1 Adams-Bashforth Methods
    1.3.2 Adams-Moulton Methods
    1.3.3 Predictor-Corrector Methods
    1.3.4 Methods Based on Backward Difference Formulas (BDF)
  1.4 Consistency, Stability and Convergence
    1.4.1 Basic notions and definitions
    1.4.2 Stability
    1.4.3 Absolute stability
  1.5 Two-point Boundary Value Problems
    1.5.1 Introduction
    1.5.2 The Shooting Method
    1.5.3 The Finite Difference Method


Chapter 1: Numerical Methods for ODEs

1.1 Introduction

Differential equations are used to model many physical situations such as vibrations, electric circuits, chemical reactions, biology, and the numerical solution of partial differential equations.

Example 1: Vibration of a mass attached to an elastic spring. Let l_0 be the initial length of the spring at rest and l(t) its length at time t, and set x(t) = l(t) − l_0. If we stretch the spring so that x(0) = l(0) − l_0 and let it go, the dynamics is modeled by Newton's equation ma = −k x(t), where the acceleration a satisfies x''(t) = a(t). This yields the second-order differential equation

  m x''(t) + k x(t) = 0,  t > 0,      (1.1a)

subject to the initial conditions

  x(0) = x_0,  x'(0) = v_0.      (1.1b)

Example 2: Swinging pendulum.

If θ(t) is the angle between the vertical position at rest (θ = 0) and the position at time t, the motion of the pendulum is described by the equation

  θ''(t) + (g/L) sin(θ) = 0,  t > 0,      (1.2)

with initial conditions θ(0) = θ_0 and θ'(0) = θ_1, where L is the length of the pendulum and g is the gravitational acceleration.

Here, we solve problems which consist of finding y(t) such that

  y'(t) = f(t, y),  t > 0,      (1.3a)

subject to the initial condition

  y(t_0) = y_0.      (1.3b)

The next theorem states the existence of a solution to (1.3).

Theorem 1.1.1. Let f(t, y) be a continuous function for all t_0 < t < T and all y. Suppose further that f(t, y) satisfies the Lipschitz condition

  |f(t, w) − f(t, z)| ≤ L |w − z|,  for all w, z, t_0 < t < T,      (1.4)

where L > 0. Then the problem (1.3) has a unique differentiable solution.

Example: y'(t) = 3y(t) + e^t, with f(t, y) = 3y + e^t continuous and

  f(t, w) − f(t, z) = 3(w − z),

which leads to |f(t, w) − f(t, z)| ≤ 3 |w − z|. Thus f(t, y) is Lipschitz continuous with L = 3.

1.2 One-step Methods

In this section we will study Taylor and Runge-Kutta methods.

1.2.1 Taylor Methods

We note that since the initial condition y(t_0) is given, one may compute

  y'(t_0) = f(t_0, y_0),
  y''(t_0) = f_t(t_0, y_0) + f_y(t_0, y_0) f(t_0, y_0),
  y'''(t_0) = d^2/dt^2 f(t, y(t)) = f_tt + 2 f_ty f + f_yy f^2 + f_y (f_t + f_y f), evaluated at (t_0, y_0).

Therefore all derivatives y^(n)(t_0) will also be known. This leads us to use the Taylor series to write

  y(t_0 + h) = y(t_0) + y'(t_0) h + ... + y^(p)(t_0) h^p / p! + y^(p+1)(c) h^(p+1) / (p+1)!      (1.5)

If

  T_p(t_0, y_0, h) = y(t_0) + y'(t_0) h + ... + y^(p)(t_0) h^p / p!,      (1.6)

we define the O(h^p) Taylor method as:

  Step 0: set h > 0, t_i = t_0 + i h, i = 1, 2, 3, ...
  Step 1: input y(t_0) = y_0
  for i = 0 : N−1
      y_{i+1} = T_p(t_i, y_i, h)
  end

8 CHAPTER. NUMERICAL METHODS FOR ODES In the remainder of this chapter we adopt the notation y i ß y(t i ). The O(h p+ ) Taylor method has a degree of precision p The local discretization error is ffl(t k )= y(p+) (c) (p + )! hp+ (..7) The truncation error is fi p = ffl h = y(p+) (c) (p + )! hp (..8) The total (global) discretization error is e(t i )=y(t i ) y i (..9) Now, let us study the special case p = which leads to a well know method. Euler's method: Let t 0 <t<t and subdivide [t 0 ;T]into N subintervals to have t i = t 0 + i Λ h where h =(T t 0 )=N is the stepsize. y 0 known y i+ = y i + hf(t i ;y i ); i =0; ; ; ;N : (..0) Example: y 0 = y; 0 <t<; y(0) = : where the exact solution is y(t) =e t Let h ==0 and t i = i Λ 0:;i=0; ; ;N

Error analysis for Euler's method. The local error is defined by ε(t_k) = y(t_k) − y_k, where y_k is the approximation obtained from one Euler step starting from the exact solution value, y_k = y(t_{k−1}) + h f(t_{k−1}, y(t_{k−1})). Subtracting (1.5) with p = 1 from y_1 = y_0 + h f(t_0, y_0), we obtain

  ε(t_1) = y(t_1) − y_1 = (h^2/2) y''(c),

which can be bounded as

  |ε(t_1)| ≤ (h^2/2) M,  where M = ||y''||_{∞,[t_0,T]}.

The global discretization error is defined as e_{k+1} = y(t_{k+1}) − y_{k+1}, where y_{k+1} is the approximation of y(t_{k+1}) obtained from Euler's method.

Next, we state a theorem on the global discretization error for Euler's method.

Theorem 1.1.2. Let y(t) be the solution of the initial value problem (1.3). If f(t, y) is Lipschitz continuous with respect to y and if ||y''||_{∞,[t_0,T]} ≤ M, then the global discretization error can be bounded as

  |e_{k+1}| ≤ (h M / 2L) (e^{L(t_{k+1} − t_0)} − 1).      (1.12)

Proof. Subtracting (1.5) from y_{k+1} = y_k + h f(t_k, y_k) leads to

  e_{k+1} = e_k + h [f(t_k, y(t_k)) − f(t_k, y_k)] + (h^2/2) y''(c).

Using the hypotheses of the theorem we have

  |e_{k+1}| ≤ |e_k| + h L |e_k| + h^2 M / 2 = (1 + hL) |e_k| + h^2 M / 2.

Applying this recursive inequality repeatedly we obtain

  |e_{k+1}| ≤ |e_0| (1 + hL)^{k+1} + (h^2 M / 2) [1 + (1 + hL) + (1 + hL)^2 + ... + (1 + hL)^k].

We can assume e_0 = 0 and use the geometric series

  a + a r + a r^2 + ... + a r^n = a (r^{n+1} − 1)/(r − 1)

to obtain

  |e_{k+1}| ≤ (h M / 2L) [(1 + hL)^{k+1} − 1].

From calculus we know that 1 + x < e^x for x > 0; using x = hL in the previous inequality gives

  |e_{k+1}| ≤ (h M / 2L) (e^{Lh(k+1)} − 1).

Noting that h(k+1) = t_{k+1} − t_0, we complete the proof of the theorem.

Remarks:
1. The local error is O(h^2).
2. The global error is O(h).

Taylor method of order 2: Consider the linear problem

  y' = a y,  y(0) = y_0,  y(t) = y_0 e^{at}.

Since y'' = a^2 y and y_0 is given,      (1.13)

the Taylor method of order 2 is

  y_{k+1} = y_k + h a y_k + (a^2 h^2 / 2) y_k = (1 + ah + a^2 h^2 / 2) y_k,      (1.14)

so that

  y_{k+1} = (1 + ah + a^2 h^2 / 2)^{k+1} y_0.      (1.15)

(Numerical results for Taylor methods of order p = 1, 2, 3, 4 can be computed to compare convergence rates.)
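As a sketch of the comparison suggested above, the following Python snippet implements the order-2 Taylor method (1.14) for y' = a y and checks the O(h^2) convergence rate (the test values a = −1 and T = 1 are illustrative assumptions):

import numpy as np

def taylor2_linear(a, y0, T, N):
    # Order-2 Taylor method (1.14) for y' = a*y: multiply by 1 + a h + a^2 h^2 / 2 each step
    h = T / N
    G = 1.0 + a * h + (a * h) ** 2 / 2.0
    return y0 * G ** np.arange(N + 1)

# Error at t = 1 should drop by a factor of about 4 each time N is doubled
for N in (10, 20, 40, 80):
    y = taylor2_linear(-1.0, 1.0, 1.0, N)
    print(N, abs(y[-1] - np.exp(-1.0)))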

1.2.2 Runge-Kutta Methods

We note that the main advantage of Taylor methods consists of achieving high accuracy using O(h^p) approximations. On the other hand, they require high-order derivatives of f(t, y) and a large number of function evaluations. In this section we introduce a class of methods that use only values of f(t, y), with a smaller number of function evaluations, while keeping the high-order precision of Taylor methods.

Second-order Runge-Kutta methods. We would like to construct a method of the form

  y^RK_{k+1} = y_k + c_2 f(t_k + α, y_k + β)

that has the same order as the second-order Taylor method

  y^T_{k+1} = y_k + h f(t_k, y_k) + (h^2/2) [f_t(t_k, y_k) + f_y(t_k, y_k) f(t_k, y_k)].

Our aim is to find c_2, α and β such that y^RK_{k+1} − y^T_{k+1} = O(h^3). Using the Taylor expansion in two variables we write

  f(t_k + α, y_k + β) = f(t_k, y_k) + α f_t(t_k, y_k) + β f_y(t_k, y_k) + O(α^2) + O(β^2).

This leads to

  y^RK_{k+1} = y_k + c_2 f(t_k, y_k) + c_2 α f_t(t_k, y_k) + c_2 β f_y(t_k, y_k).

Matching the coefficients with those of the second-order Taylor method, we obtain

  c_2 = h,      (1.16)
  c_2 α = h^2/2,      (1.17)
  c_2 β = (h^2/2) f(t_k, y_k).      (1.18)

This in turn leads to

  c_2 = h,      (1.19)
  α = h/2,      (1.20)
  β = (h/2) f(t_k, y_k).      (1.21)

This yields the midpoint method:

  y_0 given,      (1.22)
  y_{k+1} = y_k + h f(t_k + h/2, y_k + (h/2) f(t_k, y_k)).      (1.23)

The global error of the midpoint method is O(h^2). We also note that the difference y^T − y^RK ≤ O(c_2 (α^2 + β^2)) = O(h^3).

Other second-order Runge-Kutta methods have the form

  y_{k+1} = y_k + c_1 f(t_k, y_k) + c_2 f(t_k + α, y_k + δ f(t_k, y_k)).

Applying the Taylor series in two dimensions and matching the coefficients with the second-order Taylor method leads to a family of methods. Here we give Heun's method, with c_1 = h/4, c_2 = 3h/4, α = δ = 2h/3:

  y_0 known,
  y_{k+1} = y_k + (h/4) [f(t_k, y_k) + 3 f(t_k + 2h/3, y_k + (2h/3) f(t_k, y_k))].      (1.24)

We note that Heun's method uses two function evaluations while the corresponding Taylor method uses three function evaluations.
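The two second-order methods just derived can be written as one-step updates; a minimal Python sketch (the function names are our own):

def midpoint_step(f, t, y, h):
    # Midpoint method (1.23)
    return y + h * f(t + h / 2, y + (h / 2) * f(t, y))

def heun_step(f, t, y, h):
    # Heun's method (1.24): c1 = h/4, c2 = 3h/4, alpha = delta = 2h/3
    k1 = f(t, y)
    k2 = f(t + 2 * h / 3, y + (2 * h / 3) * k1)
    return y + (h / 4) * (k1 + 3 * k2)

Both use two f-evaluations per step and have O(h^2) global error.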

Example:

  y' = t y + e^t (1 + t − t^2) = f(t, y),  1 < t < 3,      (1.25)
  y(1) = e = y_0,      (1.26)

where the exact solution is y(t) = t e^t. Let us use h = 0.2 and t_0 = 1, t_k = 1 + 0.2 k, k > 0. Then

  y_1 = y_0 + (h/4) [f(t_0, y_0) + 3 f(t_0 + 2h/3, y_0 + (2h/3) f(t_0, y_0))],

which yields

  y_0 = 2.71828,  y_1 = 3.97094,  y_2 = 5.64005,  y_3 = 7.84479.

General form of Runge-Kutta methods: The general explicit s-stage Runge-Kutta method has the form

  K_1 = f(t_k, y_k),
  K_i = f(t_k + α_i h, y_k + h Σ_{j=1}^{i−1} β_{ij} K_j),  i = 2, ..., s,
  y_{k+1} = y_k + h Σ_{i=1}^{s} c_i K_i.      (1.27)

It is also convenient to write the coefficients in a (Butcher) table:

  0    |
  α_2  | β_21
  ...  | ...
  α_s  | β_s1  ...  β_{s,s−1}
       | c_1   ...  c_{s−1}  c_s

We note that Σ_{i=1}^{s} c_i = 1.

Examples. Second-order methods:

Midpoint method:

  0    |
  1/2  | 1/2
       | 0    1

Heun's method:

  0    |
  2/3  | 2/3
       | 1/4  3/4

A third-order method (Kutta's 3rd-order RK):

  0    |
  1/2  | 1/2
  1    | −1   2
       | 1/6  2/3  1/6

The classical Runge-Kutta with O(h^4) global error: This method is derived by applying Taylor series in two dimensions and matching coefficients with those of the fourth-order Taylor method. The method obtained is the four-stage method

  y_0 known,
  k_1 = f(t_k, y_k),
  k_2 = f(t_k + h/2, y_k + (h/2) k_1),
  k_3 = f(t_k + h/2, y_k + (h/2) k_2),
  k_4 = f(t_k + h, y_k + h k_3),
  y_{k+1} = y_k + (h/6) (k_1 + 2 k_2 + 2 k_3 + k_4),  k = 0, 1, 2, ...      (1.28)

Classical fourth-order RK:

  0    |
  1/2  | 1/2
  1/2  | 0    1/2
  1    | 0    0    1
       | 1/6  1/3  1/3  1/6

Example: y' = y + 2t, 1 < t < 2, y(1) = 1, f(t, y) = y + 2t. Let us use h = 0.1, t_k = 1 + 0.1 k, k = 0, 1, 2, ... Then

  y_0 = 1,
  k_1 = f(1, 1) = 3,
  k_2 = f(1 + 0.1/2, 1 + (0.1/2) · 3) = 3.25,
  k_3 = f(1 + 0.1/2, 1 + (0.1/2) k_2) = 3.2625,
  k_4 = f(1 + 0.1, 1 + 0.1 k_3) = 3.52625,
  y_1 = 1 + (0.1/6) (k_1 + 2 k_2 + 2 k_3 + k_4) = 1.325854.      (1.29)
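A Python sketch of the classical method (1.28), reproducing the worked example:

import numpy as np

def rk4(f, t0, T, y0, N):
    # Classical fourth-order Runge-Kutta method (1.28)
    h = (T - t0) / N
    t = t0 + h * np.arange(N + 1)
    y = np.empty(N + 1)
    y[0] = y0
    for k in range(N):
        k1 = f(t[k], y[k])
        k2 = f(t[k] + h / 2, y[k] + (h / 2) * k1)
        k3 = f(t[k] + h / 2, y[k] + (h / 2) * k2)
        k4 = f(t[k] + h, y[k] + h * k3)
        y[k + 1] = y[k] + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
    return t, y

# y' = y + 2t, y(1) = 1, h = 0.1: the first step gives 1.325854, as in (1.29)
t, y = rk4(lambda t, y: y + 2 * t, 1.0, 2.0, 1.0, 10)
print(y[1])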

1.2.3 Error Estimation and Control

I: Embedded Runge-Kutta Methods

Embedded RK methods contain a numerical solution y and a higher-order, more precise approximation ŷ. These two approximations serve for error estimation and stepsize control in an adaptive algorithm. There are several embedded RK methods described in the book by Hairer, Nørsett and Wanner.

(i) Runge-Kutta-Fehlberg 2(3) method:

  0    |
  1    | 1
  1/2  | 1/4   1/4
  y    | 1/2   1/2   0
  ŷ    | 1/6   1/6   4/6

(ii) Runge-Kutta-Fehlberg 4(5) method:

  0      |
  1/4    | 1/4
  3/8    | 3/32        9/32
  12/13  | 1932/2197   −7200/2197   7296/2197
  1      | 439/216     −8           3680/513      −845/4104
  1/2    | −8/27       2            −3544/2565    1859/4104     −11/40
  y      | 25/216      0            1408/2565     2197/4104     −1/5
  ŷ      | 16/135      0            6656/12825    28561/56430   −9/50    2/55

This method yields an O(h^4) solution y and an O(h^5) solution ŷ; an embedded pair of this type underlies the adaptive Matlab ode45 function. We note that in all embedded Runge-Kutta methods we use ŷ_k to compute

  ŷ_{k+1} = ŷ_k + h Σ_{i=1}^{6} ĉ_i K_i(t_k, h, ŷ_k)      (1.30)

and

  y_{k+1} = ŷ_k + h Σ_{i=1}^{6} c_i K_i(t_k, h, ŷ_k).      (1.31)

In the literature there are other embedded RK methods, such as Merson 4(5) and RKF2(3). An embedded RK method that minimizes the error is given by Dormand-Prince 5(4); for more details consult the book by Hairer, Nørsett and Wanner.

II: Richardson Extrapolation

We may use Richardson extrapolation to obtain higher-order methods. If we assume that we have an O(h^p) method and solve the problem using one step of size h, we obtain

  y(t_{k+1}) − y^h_{k+1} = c h^p + O(h^{p+1}),      (1.32)

and with two steps of size h/2 we obtain a new approximation of y(t_{k+1}) satisfying

  y(t_{k+1}) − y^{h/2}_{k+1} = c (h/2)^p + O(h^{p+1}).      (1.33)

Multiplying (1.33) by 2^p and subtracting (1.32) leads to

  2^p y(t_{k+1}) − y(t_{k+1}) − 2^p y^{h/2}_{k+1} + y^h_{k+1} = O(h^{p+1}),

which can be rearranged as follows.

Therefore,

  y(t_{k+1}) = (2^p y^{h/2}_{k+1} − y^h_{k+1}) / (2^p − 1) + O(h^{p+1}),

and

  ŷ_{k+1} = (2^p y^{h/2}_{k+1} − y^h_{k+1}) / (2^p − 1)

is a higher-order, i.e. O(h^{p+1}), approximation to y(t_{k+1}).

III: Error Control and Stepsize Selection

Adaptive methods can be very efficient and reliable by using automatic stepsize selection: the stepsize depends on the solution and is adjusted automatically. An adaptive algorithm should use smaller stepsizes where the error becomes large and larger stepsizes in regions of smaller errors. Embedded Runge-Kutta methods and Richardson extrapolation provide an estimate of the error as described below.

Apply an embedded Runge-Kutta method or Richardson extrapolation with double stepping to obtain two approximations y_k and ŷ_k of y(t_k) such that

  y(t_k) = y_k + c h^{p+1} + ...,
  y(t_k) = ŷ_k + d h^{p+2} + ....

Subtracting the two equations we obtain

  errest = ŷ_k − y_k = c h^{p+1} − d h^{p+2} + ... ≈ c h^{p+1},      (1.34)

which can be used to select the next stepsize H such that the discretization error is less than a prescribed tolerance tol, with a safety factor. Assuming c to be the same for all stepsizes, the targeted error can be written as targeterror = c H^{p+1}. Using (1.34) we write c = errest / h^{p+1}, so the targeted error is

  targeterror = c H^{p+1} = errest (H/h)^{p+1} < tol.      (1.35)

Solving for the new stepsize H, we obtain

  H = safetyfactor · h · (tol/errest)^{1/(p+1)}.      (1.36)

The safety factor is selected to be 0.5^{1/(p+1)}.

An adaptive algorithm for an embedded RK p(p+1):

  Step 0: Read t_0, T, h, y_0, p, safetyfactor, tol
  Step 1: If t_0 + h > T, stop
  Step 2: Compute y and yhat
  Step 3: errest = |yhat − y|
  Step 4: if errest < tol, set t_0 = t_0 + h, y_0 = yhat (accept the step)
  Step 5: if tol/100 < errest < tol, go to Step 1 (keep the current h)
  Step 6: compute the new h according to h = safetyfactor * h * (tol/errest)^{1/(p+1)}
  Step 7: Go to Step 1
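Below is a Python sketch of this adaptive loop using the Fehlberg 2(3) pair from the previous subsection (p = 2, so the exponent in (1.36) is 1/3); the growth cap and the floor on errest are practical safeguards of our own, not part of the algorithm in the text:

def rkf23_adaptive(f, t0, T, y0, h, tol):
    # Adaptive integration with the Fehlberg 2(3) embedded pair; stepsize from (1.36)
    safety = 0.5 ** (1.0 / 3.0)
    t, y, out = t0, y0, [(t0, y0)]
    while t < T:
        h = min(h, T - t)
        k1 = f(t, y)
        k2 = f(t + h, y + h * k1)
        k3 = f(t + h / 2, y + h * (k1 + k2) / 4)
        y2 = y + h * (k1 + k2) / 2              # order-2 solution y
        y3 = y + h * (k1 + k2 + 4 * k3) / 6     # order-3 solution yhat
        errest = abs(y3 - y2)
        if errest <= tol:                       # accept the step
            t, y = t + h, y3
            out.append((t, y))
        # new stepsize, whether the step was accepted or rejected
        h *= safety * min(5.0, (tol / max(errest, 1e-16)) ** (1.0 / 3.0))
    return out

sol = rkf23_adaptive(lambda t, y: -y, 0.0, 5.0, 1.0, h=0.1, tol=1e-6)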

1.2.4 Systems and higher-order differential equations

Example:

  x'(t) = y + x + t,      (1.37)
  y'(t) = sin(x) + exp(t),      (1.38)

subject to the initial conditions x(0) = x_0 and y(0) = y_0. We use the vector notation

  Y = [y_1(t), y_2(t)]^T = [x(t), y(t)]^T,
  F(t, Y) = [f_1(t, Y), f_2(t, Y)]^T = [y_2 + y_1 + t, sin(y_1) + exp(t)]^T,

and write our system (1.37)-(1.38) as

  Y'(t) = F(t, Y),  t > 0,  Y(0) = Y_0 = [x_0, y_0]^T.

Runge-Kutta methods can be used with this vector notation.

Euler's method:

  Y_0 known,
  Y_{k+1} = Y_k + h F(t_k, Y_k),  k = 0, 1, 2, ...      (1.39)

Heun's method:

  Y_0 known,
  Y_{k+1} = Y_k + (h/4) [F(t_k, Y_k) + 3 F(t_k + 2h/3, Y_k + (2h/3) F(t_k, Y_k))],  k = 0, 1, 2, ...      (1.40)

Classical Runge-Kutta method:

  Y_0 given,
  K_1 = F(t_k, Y_k),
  K_2 = F(t_k + h/2, Y_k + (h/2) K_1),
  K_3 = F(t_k + h/2, Y_k + (h/2) K_2),
  K_4 = F(t_k + h, Y_k + h K_3),
  Y_{k+1} = Y_k + (h/6) (K_1 + 2 K_2 + 2 K_3 + K_4),  k = 0, 1, 2, ...      (1.41)

Higher-order differential equations: Consider

  x^(m) + a_{m−1} x^(m−1) + ... + a_2 x''(t) + a_1 x'(t) + a_0 x(t) = g(t)      (1.42)

with initial conditions

  x^(k)(0) = x_k,  k = 0, 1, ..., m−1.

We transform equation (1.42) into a system of first-order ordinary differential equations using the mapping

  Y_{k+1}(t) = x^(k)(t),  k = 0, 1, ..., m−1,

and noting that

  Y'_k(t) = x^(k)(t) = Y_{k+1}(t) = F_k(t, Y),  k = 1, ..., m−1,

and

  Y'_m(t) = x^(m)(t) = g(t) − Σ_{i=1}^{m} a_{i−1} Y_i = F_m(t, Y).

Now, using vector notation, we write the system as

  Y'(t) = F(t, Y),

subject to the initial conditions Y(0) = Y_0 = (x_0, x_1, ..., x_{m−1})^T.
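As a sketch of this reduction, the spring equation (1.1), m x'' + k x = 0, becomes a first-order system that can be advanced with the vector form of Euler's method (1.39); the parameter values below are illustrative assumptions:

import numpy as np

m, k = 1.0, 4.0

def F(t, Y):
    # Y = (x, x'): first-order system equivalent to m x'' + k x = 0
    return np.array([Y[1], -(k / m) * Y[0]])

h, N = 0.001, 10000
Y = np.array([1.0, 0.0])          # x(0) = 1, x'(0) = 0
for i in range(N):
    Y = Y + h * F(i * h, Y)       # Euler's method (1.39) in vector form
print(Y[0], np.cos(2.0 * 10.0))   # exact solution x(t) = cos(2t); Euler drifts slightly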

Example 1: The pendulum motion is described by the nonlinear second-order differential equation

  θ'' + (g/L) sin(θ) = 0,  t ≥ 0,  θ(0) = x_0,  θ'(0) = x_1,      (1.43)

where g denotes the gravitational acceleration and L the length of the pendulum.

Example 2: The vibration of an elastic spring is modeled by the linear second-order equation

  m x'' + k x = 0,  t ≥ 0,  x(0) = x_0,  x'(0) = x_1.      (1.44)

Example 3: The vibration of a two-mass, three-spring system,

  || spring(k_1) — (m_1) — spring(k_2) — (m_2) — spring(k_3) ||,

is modeled by the system of second-order differential equations

  m_1 x_1'' + (k_1 + k_2) x_1 − k_2 x_2 = 0,
  m_2 x_2'' + (k_2 + k_3) x_2 − k_2 x_1 = 0.

Example 4: The Van der Pol problem is given by the nonlinear second-order differential equation

  x'' + ε (x^2 − 1) x' + x = 0,  t ≥ 0,  x(0) = x_0,  x'(0) = x_1.      (1.45)

Example 5: Method of lines for the one-dimensional heat equation, where the temperature T(x, t) satisfies the partial differential equation

  T_t(t, x) = T_xx(t, x),  0 < x < 1,  t > 0,      (1.46a)

subject to the initial and boundary conditions

  T(0, x) = f(x),      (1.46b)

and

  T(t, 0) = T(t, 1) = 0,  t ≥ 0.      (1.46c)

Next we subdivide the interval [0, 1] into N subintervals and define x_i = i h, h = 1/N. The heat equation at the points (t, x_i), i = 1, ..., N−1, can be written as

  T_t(t, x_i) = T_xx(t, x_i),  i = 1, ..., N−1,

  T_t(t, x_i) = [T(t, x_{i−1}) − 2 T(t, x_i) + T(t, x_{i+1})] / h^2 + O(h^2).

Now we neglect the truncation error and let T_i(t) ≈ T(t, x_i) to obtain the following system of ordinary differential equations:

  dT_i(t)/dt = [T_{i−1}(t) − 2 T_i(t) + T_{i+1}(t)] / h^2,  i = 1, ..., N−1.

Since T_0(t) = T_N(t) = 0, t ≥ 0, we write the system as

  T'_1(t) = [−2 T_1(t) + T_2(t)] / h^2,
  T'_i(t) = [T_{i−1}(t) − 2 T_i(t) + T_{i+1}(t)] / h^2,  i = 2, ..., N−2,
  T'_{N−1}(t) = [T_{N−2}(t) − 2 T_{N−1}(t)] / h^2.      (1.47a)

The initial conditions for the system of ordinary differential equations are obtained from f(x) as

  T_i(0) = T(0, x_i) = f(x_i),  i = 1, ..., N−1.      (1.47b)
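A Python sketch of the semi-discrete system (1.47), advanced here with Euler's method; the initial condition f(x) = sin(πx) and the time step are illustrative choices (explicit Euler requires Δt ≤ h^2/2 for this system, cf. Section 1.4.3):

import numpy as np

N = 20
h = 1.0 / N
x = h * np.arange(1, N)                # interior nodes x_1, ..., x_{N-1}
T = np.sin(np.pi * x)                  # T_i(0) = f(x_i), here f(x) = sin(pi x)
dt = 0.4 * h ** 2                      # below the explicit stability limit h^2 / 2
for step in range(200):
    Tpad = np.concatenate(([0.0], T, [0.0]))                 # T_0 = T_N = 0
    T = T + dt * (Tpad[:-2] - 2.0 * T + Tpad[2:]) / h ** 2   # system (1.47a)
# compare with the separated solution sin(pi x) exp(-pi^2 t) of the PDE
t_final = 200 * dt
print(np.max(np.abs(T - np.sin(np.pi * x) * np.exp(-np.pi ** 2 * t_final))))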

1.3 Multi-step Methods

A Runge-Kutta method of order m requires at least m function evaluations per step. In this section we study explicit and implicit methods that require only one new function evaluation per step; however, these methods require the solution at several previous time steps.

1.3.1 Adams-Bashforth Methods

We integrate the differential equation (1.3) on [t_k, t_{k+1}] to obtain

  ∫_{t_k}^{t_{k+1}} y'(t) dt = ∫_{t_k}^{t_{k+1}} f(t, y(t)) dt.      (1.48)

We interpolate f(t, y(t)) at t_{k−n}, t_{k−n+1}, ..., t_k to write

  f(t, y(t)) = Σ_{i=k−n}^{k} L_i(t) f(t_i, y(t_i)) + [y^(n+2)(ξ) / (n+1)!] ∏_{i=k−n}^{k} (t − t_i),

where the L_i are the Lagrange interpolation polynomials. Next we approximate equation (1.48) as

  y(t_{k+1}) = y(t_k) + Σ_{i=0}^{n} c_{k−i} f(t_{k−i}, y(t_{k−i})) + C y^(n+2)(ξ),      (1.49)

where

  c_{k−i} = ∫_{t_k}^{t_{k+1}} L_{k−i}(t) dt,  C = [∫_{t_k}^{t_{k+1}} ∏_{i=k−n}^{k} (t − t_i) dt] / (n+1)!.

We used the fact that ∏_{i=k−n}^{k} (t − t_i) does not change sign on [t_k, t_{k+1}]. If we assume t_k = t_0 + k h, k = 0, 1, 2, ..., we obtain the following Adams-Bashforth methods:

n = 0, O(h), Euler's method (local error = (h^2/2) y''(ξ)):

  y_{k+1} = y_k + h f_k,  k = 0, 1, 2, ...      (1.50)

n = 1, O(h^2) (local error = (5h^3/12) y'''(ξ)):

  y_{k−1}, y_k known,
  y_{k+1} = y_k + h [(3/2) f_k − (1/2) f_{k−1}],  k = 1, 2, ...      (1.51)

n = 2, O(h^3) (local error = (3h^4/8) y''''(ξ)):

  y_{k−2}, y_{k−1}, y_k known,
  y_{k+1} = y_k + h [(23/12) f_k − (16/12) f_{k−1} + (5/12) f_{k−2}],  k = 2, 3, ...      (1.52)

n = 3, O(h^4):

  y_{k−3}, y_{k−2}, y_{k−1}, y_k known,
  y_{k+1} = y_k + h [(55/24) f_k − (59/24) f_{k−1} + (37/24) f_{k−2} − (9/24) f_{k−3}],  k = 3, 4, ...      (1.53)

where f_l = f(t_l, y_l), l = k−3, k−2, k−1, k. The local discretization error for n = 3 is ε = (251/720) h^5 y^(5)(ξ).
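A Python sketch of the two-step method (1.51); the missing starting value y_1 is generated here with one midpoint (RK2) step, and the previous f-value is stored so that each step costs a single new function evaluation:

def ab2(f, t0, T, y0, N):
    # Second-order Adams-Bashforth (1.51), started with one midpoint step
    h = (T - t0) / N
    t = [t0 + i * h for i in range(N + 1)]
    y = [0.0] * (N + 1)
    y[0] = y0
    y[1] = y0 + h * f(t0 + h / 2, y0 + (h / 2) * f(t0, y0))
    fprev, fcurr = f(t[0], y[0]), f(t[1], y[1])
    for k in range(1, N):
        y[k + 1] = y[k] + h * (1.5 * fcurr - 0.5 * fprev)
        fprev, fcurr = fcurr, f(t[k + 1], y[k + 1])   # one new evaluation per step
    return t, y

t, y = ab2(lambda t, y: y, 0.0, 1.0, 1.0, 10)   # y' = y, y(0) = 1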

1.3.2 Adams-Moulton Methods

Adams-Moulton methods are obtained from (1.48) by interpolating f(t, y(t)) at the n+1 points t_{k−n+1}, ..., t_{k−1}, t_k, t_{k+1}:

  f(t, y(t)) = Σ_{i=k−n+1}^{k+1} L_i(t) f(t_i, y(t_i)) + Error.

Substituting this into (1.48), we obtain the general form of Adams-Moulton methods:

  y_{k+1} = y_k + Σ_{i=k−n+1}^{k+1} c_i f(t_i, y_i).      (1.54)

If y_i, i = k−n+1, ..., k, are known, we solve (1.54) for y_{k+1}. These methods are implicit and more stable than the (explicit) Adams-Bashforth methods of the previous section.

A few Adams-Moulton methods:

n = 0, O(h) (implicit/backward Euler method) (local error = −(h^2/2) y''(ξ)):

  y_{k+1} = y_k + h f_{k+1},  k = 0, 1, 2, ...      (1.55)

n = 1, O(h^2) (trapezoidal rule) (local error = −(h^3/12) y'''(ξ)):

  y_{k+1} = y_k + h [(1/2) f_{k+1} + (1/2) f_k],  k = 0, 1, 2, ...      (1.56)

n = 2, O(h^3) (local error = −(h^4/24) y''''(ξ)):

  y_{k+1} = y_k + h [(5/12) f_{k+1} + (8/12) f_k − (1/12) f_{k−1}],  k = 1, 2, ...      (1.57)

n = 3, O(h^4):

  y_{k+1} = y_k + h [(9/24) f_{k+1} + (19/24) f_k − (5/24) f_{k−1} + (1/24) f_{k−2}],  k = 2, 3, ...      (1.58)

The local discretization error for n = 3 is

  ε(t_{k+1}) = y(t_{k+1}) − y_{k+1} = −(19/720) h^5 y^(5)(ξ).

1.3.3 Predictor-Corrector Methods

Explicit and implicit Adams methods may be used as predictor-corrector pairs, as illustrated in the following examples.

Example 1: We consider a two-step predictor-corrector method where

Predictor, using second-order Adams-Bashforth:

  ỹ_{k+1} = y_k + (h/2) (3 f_k − f_{k−1});      (1.59)

Corrector, using second-order Adams-Moulton:

  y_{k+1} = y_k + (h/2) (f̃_{k+1} + f_k),      (1.60)

where f̃_{k+1} = f(t_{k+1}, ỹ_{k+1}).
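A Python sketch of this pair (1.59)-(1.60) with one correction per step; the RK2 starting step is our own choice of bootstrap (cf. Remark 3 below):

def pc2(f, t0, T, y0, N):
    # AB2 predictor (1.59) followed by one trapezoidal AM correction (1.60)
    h = (T - t0) / N
    t = [t0 + i * h for i in range(N + 1)]
    y = [0.0] * (N + 1)
    y[0] = y0
    y[1] = y0 + h * f(t0 + h / 2, y0 + (h / 2) * f(t0, y0))       # starting value
    fold, fcur = f(t[0], y[0]), f(t[1], y[1])
    for k in range(1, N):
        ypred = y[k] + (h / 2) * (3.0 * fcur - fold)              # predictor
        y[k + 1] = y[k] + (h / 2) * (f(t[k + 1], ypred) + fcur)   # corrector
        fold, fcur = fcur, f(t[k + 1], y[k + 1])
    return t, y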

Example 2:

Predictor, using fourth-order Adams-Bashforth:

  ỹ_{k+1} = y_k + h [(55/24) f_k − (59/24) f_{k−1} + (37/24) f_{k−2} − (9/24) f_{k−3}];      (1.61)

Corrector, using fourth-order Adams-Moulton:

  y_{k+1} = y_k + h [(9/24) f̃_{k+1} + (19/24) f_k − (5/24) f_{k−1} + (1/24) f_{k−2}],      (1.62)

where f̃_{k+1} = f(t_{k+1}, ỹ_{k+1}).

Remarks:

1. In general, Adams-Moulton methods yield more accurate solutions than Adams-Bashforth methods. The price to be paid is the solution of a nonlinear algebraic problem at each step; usually the Newton-Raphson method is used to solve it.
2. Adams-Bashforth methods require one new function evaluation per step and may be superior to RK methods when function evaluations are expensive.
3. A one-step method of the same order is needed to generate the starting values.
4. Adams methods extend to systems of ordinary differential equations using vector notation.
5. To avoid solving an algebraic problem, Adams-Bashforth and Adams-Moulton methods may be used in pairs as predictor-corrector methods.

1.3.4 Methods Based on Backward Difference Formulas (BDF)

Backward difference formula (BDF) methods are obtained by interpolating y(t) instead of f(t, y(t)). BDF methods up to order six are implicit and have good stability properties (the first- and second-order ones are A-stable), which makes them suitable for stiff problems, where the stability restriction of some Adams methods requires a much smaller time step than what is required by accuracy.

Now let us consider the model problem

  y'(t) = f(t, y(t)),  y(0) = y_0.

Assume y_0, y_1, ..., y_k are known (they may be computed using a one-step method) and write

  y'(t_{k+1}) = f(t_{k+1}, y(t_{k+1})).

Using the backward difference formula

  y'(t_{k+1}) = [y(t_{k+1}) − y(t_k)] / h + (h/2) y''(ξ),

we obtain

  (y_{k+1} − y_k) / h = f(t_{k+1}, y_{k+1}).

We note that y_{k+1} is defined implicitly. Again, this requires the solution of a nonlinear algebraic equation at each step; usually Newton's method is used to solve it. Higher-order BDF methods are derived by interpolating y(t) at t_{k+1}, t_k, ..., t_{k+1−n}, n = 1, 2, ..., to obtain the following methods with uniform time step h:

n = 1, O(h):

  y_{k+1} − y_k = h f_{k+1},  k = 0, 1, ...      (1.63)

n = 2, O(h^2):

  (3/2) y_{k+1} − 2 y_k + (1/2) y_{k−1} = h f_{k+1},  k = 1, 2, ...      (1.64)

n = 3, O(h^3):

  (11/6) y_{k+1} − 3 y_k + (3/2) y_{k−1} − (1/3) y_{k−2} = h f_{k+1}.      (1.65)

n = 4, O(h^4):

  (25/12) y_{k+1} − 4 y_k + 3 y_{k−1} − (4/3) y_{k−2} + (1/4) y_{k−3} = h f_{k+1}.      (1.66)

n = 5, O(h^5):

  (137/60) y_{k+1} − 5 y_k + 5 y_{k−1} − (10/3) y_{k−2} + (5/4) y_{k−3} − (1/5) y_{k−4} = h f_{k+1}.      (1.67)

n = 6, O(h^6):

  (147/60) y_{k+1} − 6 y_k + (15/2) y_{k−1} − (20/3) y_{k−2} + (15/4) y_{k−3} − (6/5) y_{k−4} + (1/6) y_{k−5} = h f_{k+1}.      (1.68)

Remarks: We note that BDF methods

1. exhibit good stability properties;
2. are very efficient for stiff problems;
3. require the solution of an algebraic problem at each time step.
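A Python sketch of the first-order method (1.63), solving the implicit equation at each step with a few Newton iterations; the test problem and the fixed iteration count are illustrative assumptions:

import math

def backward_euler(f, dfdy, t0, T, y0, N, newton_iters=5):
    # Backward Euler (1.63): solve w - y_k - h f(t_{k+1}, w) = 0 for w by Newton's method
    h = (T - t0) / N
    t, y = t0, y0
    out = [(t, y)]
    for _ in range(N):
        t = t + h
        w = y + h * f(t, y)                    # explicit Euler predictor as initial guess
        for _ in range(newton_iters):
            g = w - y - h * f(t, w)
            w = w - g / (1.0 - h * dfdy(t, w))
        y = w
        out.append((t, y))
    return out

# Mildly stiff test: y' = -50 (y - cos t); backward Euler has no stability restriction on h
sol = backward_euler(lambda t, y: -50.0 * (y - math.cos(t)),
                     lambda t, y: -50.0, 0.0, 1.0, 1.0, 20)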

1.4 Consistency, Stability and Convergence

1.4.1 Basic notions and definitions

Consistency:

Definition 1. A one-step numerical method defined by

  y_{k+1} = y_k + h Φ(t_k, y_k, h),  k = 0, 1, 2, ...,      (1.69)

with truncation error

  τ_{k+1}(h) = [y(t_{k+1}) − y(t_k)] / h − Φ(t_k, y(t_k), h),  k = 0, 1, ..., N−1,      (1.70)

is consistent if and only if

  lim_{h→0} max_{0≤k≤N} |τ_k(h)| = 0.      (1.71)

Definition 2. The method (1.69) is O(h^p) consistent if and only if

  max_{0≤k≤N} |τ_k(h)| = O(h^p) < C h^p,  as h → 0.      (1.72)

Definition 3. The method (1.69) is stable if and only if there exists C > 0 independent of h such that

  max_{0≤k≤N} |w_k − y_k| ≤ C (|w_0 − y_0| + max_{0≤k≤N} |τ_k(h; y_k) − τ_k(h; w_k)|),  for h < h_0,      (1.73)

where y_k, k = 0, 1, ..., N, and w_k, k = 0, 1, ..., N, are approximations given by (1.69).

Definition 4. The numerical solution given by (1.69) is convergent if and only if

  lim_{h→0} max_{0≤k≤N} |y_k − y(t_k)| = 0.      (1.74)

Using the stability condition (1.73) we can prove the following theorem.

Theorem 1.4.1. The numerical method (1.69) converges if and only if it is stable and consistent.

In the next theorem we show that Lipschitz continuity of Φ with respect to y is sufficient for stability.

Theorem 1.4.2. If the numerical method (1.69) is O(h^p) consistent and Φ(t, y, h) is Lipschitz continuous in y, i.e., there exists L > 0 such that

  |Φ(t, w, h) − Φ(t, z, h)| < L |w − z|,

then

  |y_{k+1} − y(t_{k+1})| ≤ (C h^p / L) (e^{L(t_{k+1} − t_0)} − 1).      (1.75)

Proof. Assume the method (1.69) is O(h^p) consistent and write

  y(t_{k+1}) = y(t_k) + h Φ(t_k, y(t_k), h) + O(h^{p+1}).      (1.76)

Subtracting (1.69) we obtain

  y(t_{k+1}) − y_{k+1} = y(t_k) − y_k + h [Φ(t_k, y(t_k), h) − Φ(t_k, y_k, h)] + O(h^{p+1}).      (1.77)

Applying the triangle inequality and the Lipschitz property leads to

  |e_{k+1}| ≤ (1 + hL) |e_k| + C h^{p+1}.      (1.78)

Using the recursive formula we obtain

  |e_{k+1}| ≤ (1 + hL)^{k+1} |e_0| + C h^{p+1} (1 + A + A^2 + ... + A^k),      (1.79)

where A = 1 + hL. Since e_0 = 0 we have

  |e_{k+1}| ≤ [(A^{k+1} − 1) / (A − 1)] C h^{p+1} = (C h^p / L) ((1 + hL)^{k+1} − 1).      (1.80)

Using 1 + x ≤ e^x for x ≥ 0, we write

  |e_{k+1}| ≤ (C h^p / L) (e^{Lh(k+1)} − 1).      (1.81)

Using t_{k+1} − t_0 = h(k+1) we complete the proof. Thus,

  max_{0≤k≤N} |y_k − y(t_k)| < h^p C (e^{LT} − 1) / L,      (1.82)

which shows convergence.

[Figure 1.1: Weak instability of the midpoint method y_{k+1} = y_{k−1} + 2h f(t_k, y_k) applied to y' = −y, y(0) = 1, y_1 = e^{−h}, whose exact solution is y(t) = e^{−t}; for h = 0.5 and h = 0.25 on 0 ≤ t ≤ 8, oscillations appear for long times.]

1.4.2 Stability

An example of a weakly stable method: we study the midpoint method, given y_0 and y_1,

  y_{k+1} = y_{k−1} + 2h f(t_k, y_k),  k = 1, 2, ...,      (1.83)

for the problem

  y' = −y,  y(0) = 1,

where the exact solution is y(t) = e^{−t}. To eliminate the effect of initial errors we use y_0 = 1 and y_1 = e^{−h}, and present the numerical solutions for h = 0.5, 0.25, 0.0625 in Figure 1.1. This weak instability is not due to round-off errors but to the method itself. For instance, let us consider the problem

  y' = λ y,  y(0) = 1.

The midpoint method becomes

  y_{k+1} = y_{k−1} + 2 z y_k,  y_0 = 1,  y_1 = e^z,  z = λ h.      (1.84)

We look for solutions of the form y_k = r^k, where r is a solution of

  r^2 − 2 z r − 1 = 0,

i.e., r_1 = z + √(z^2 + 1) and r_2 = z − √(z^2 + 1). Thus the general solution y_k can be expressed as

  y_k = C_1 r_1^k + C_2 r_2^k,

where C_1 and C_2 are determined by

  y_0 = 1 = C_1 + C_2,  y_1 = e^z = C_1 r_1 + C_2 r_2,

which yields

  C_1 = (−z + √(z^2 + 1) + e^z) / (2 √(z^2 + 1)),
  C_2 = (z + √(z^2 + 1) − e^z) / (2 √(z^2 + 1)).

Hence the exact solution of the difference equation (1.84) is

  y_k = C_1 r_1^k + C_2 r_2^k,      (1.85)

where C_1 = 1 + O(z^3) and C_2 = O(z^3). For the numerical example, λ = −1, we have 0 < r_1 < 1 and r_2 < −1. Thus |r_1|^k → 0 while |r_2|^k → ∞ as k → ∞. This explains the weak instability: since |r_2| → 1 as h → 0, smaller stepsizes h only delay the spurious oscillations and do not eliminate them. The main problem is that the numerical solution (1.85) contains a parasitic term that does not correspond to the true solution: the term C_1 r_1^k converges to the true solution, while C_2 r_2^k causes the spurious oscillations.

This analysis can be applied to a general n-step method of the form

  y_{k+1} = Σ_{i=0}^{n} a_{n−i} y_{k−i} + h Σ_{i=−1}^{n} b_{n−i} f(t_{k−i}, y_{k−i}).      (1.86)

The stability polynomial is obtained by assuming f(t, y) = 0 and looking for a solution of the form y_k = r^k, which gives

  p(r) = r^{n+1} − Σ_{i=0}^{n} a_{n−i} r^{n−i}.      (1.87)

Thus the exact solution can be written as a linear combination

  y_k = Σ_{i=0}^{n} C_i r_i^k.

Definition 5 (Root condition). The stability polynomial satisfies the root condition if and only if all roots of p(r) = 0 are such that

  |r_i| ≤ 1,  0 ≤ i ≤ n,      (1.88)

and each root r_i with |r_i| = 1 is simple. The strong root condition is satisfied if and only if

  r_0 = 1,  |r_i| < 1,  i = 1, 2, ..., n.      (1.89)

1. If the stability polynomial satisfies the root condition with more than one simple root on the unit circle, then the method is weakly stable, i.e., for small h it will give an accurate solution over a fixed interval.
2. If the strong root condition is satisfied, then all parasitic terms go to zero as k → ∞. The method is strongly stable, i.e., for h small enough the solution is stable.
3. All methods which do not satisfy the root condition are unstable.

Examples:

1. Midpoint method: the stability polynomial is

  p(r) = r^2 − 1,  r_0 = 1,  r_1 = −1.

Thus the midpoint method is weakly stable.

2. All Adams-Bashforth methods: the stability polynomial is

  p(r) = r^{n+1} − r^n,  r_0 = 1,  r_i = 0,  i = 1, ..., n.

Thus Adams-Bashforth methods are strongly stable.

3. One-step Runge-Kutta methods are strongly stable.

4. The method

  y_{k+1} = 4 y_k − 3 y_{k−1} − 2h f(t_{k−1}, y_{k−1}),

derived using the difference formula

  y'(t_{k−1}) ≈ [−y_{k+1} + 4 y_k − 3 y_{k−1}] / (2h),

has stability polynomial

  p(r) = r^2 − 4r + 3 = 0,  r_0 = 1,  r_1 = 3.

Thus the method is unstable.

Consistency of multistep methods: To every multistep method (1.86) we can associate two polynomials: the stability polynomial p(r) and

  s(r) = Σ_{i=0}^{n+1} b_{n+1−i} r^{n+1−i}.

One can prove that the multistep method (1.86) is consistent if it is exact for the two problems

  y' = 0,  y(0) = 1,

which is equivalent to p(1) = 0, and

  y'(t) = 1,  y(0) = 0,

which, by setting y_k = h k and f(t, y) = 1, yields

  (k+1) h = Σ_{i=0}^{n} a_{n−i} (k−i) h + h s(1).

Writing k − i = (k − n) + (n − i), we obtain

  (k − n) p(1) + (n+1) − Σ_{i=0}^{n} a_{n−i} (n−i) = s(1).

Since p(1) = 0 and (n+1) − Σ_{i=0}^{n} a_{n−i} (n−i) = p'(1), we have p'(1) = s(1).

Now we are ready to state the convergence theorem.

Theorem 1.4.3. The multistep method (1.86) converges if and only if it satisfies the root condition of Definition 5 with p(1) = 0 and p'(1) = s(1).

Proof. Consult Cheney and Kincaid.

1.4.3 Absolute stability

Consider the linear problem

  y' = λ y,  t ≥ 0,  y(0) = y_0 > 0,

where λ is a complex number with negative real part. The exact solution is

  y(t) = y_0 e^{λt}.

As t → +∞ the exact solution decays to 0, i.e., lim_{t→∞} |y(t)| = 0. A method is absolutely stable (for a given z = λh) if the numerical solution mimics this behavior, i.e.,

  lim_{k→∞} |y_k| = 0.

Absolute stability for one-step methods: First, let us examine the behavior of Euler's method,

  y_k = (1 + λh)^k y_0.

We would like the numerical solution to mimic the true solution, i.e., to decay to 0 as t approaches +∞. Let us write the norm of y_k as

  |y_k| = |1 + λh|^k |y_0|.

In order to obtain lim_{k→∞} |y_k| = 0 we must have

  |1 + z| < 1,  z = λh.

This is equivalent to z lying in the interior of the unit disk centered at (−1, 0). For instance, if λ is real and negative then we have −2 < λh < 0, which leads to the stability condition

  h < 2/|λ|.

We note that this is not related to accuracy. If h = 2/|λ|, the magnitude of the numerical solution stays constant; if h > 2/|λ|, the numerical solution diverges while oscillating as k → ∞.

Stability of Heun's method: Repeating the same process as for Euler's method, we obtain

  y_k = (1 + z + z^2/2)^k y_0.

Again, the method is absolutely stable if

  |1 + z + z^2/2| < 1.

If λ < 0 is real we have

  −1 < 1 + λh + (λh)^2/2 < 1,

which is equivalent to |λ| h < 2. Applying (1.69) to the linear problem y' = λy, we obtain

  y_k = φ(z)^k y_0,  z = λh.

Definition 6. The method (1.69) is A-stable if and only if |φ(z)| < 1 for all z such that Re(z) < 0.

Definition 7. The stability region of a method is the set of complex numbers z such that the method is absolutely stable.

Remark: In order to obtain the stability curves we partition [0, 2π] into θ_i = i 2π/N, solve

  φ(z) = e^{i θ_i},  i = 0, 1, ..., N−1,

and plot the roots.

Absolute stability regions for multistep methods:

Adams-Bashforth method: Let us study, for instance, the second-order Adams-Bashforth method applied to y' = λy, which becomes

  y_{k+1} = (1 + 3z/2) y_k − (z/2) y_{k−1}.

If we look for an exact solution of the form y_k = r^k, then r must satisfy the polynomial equation

  q(r) = r^2 − (1 + 3z/2) r + z/2 = 0.

Definition 8. The stability region is the set of complex numbers z, Re(z) < 0, such that all roots of q(r) satisfy |r_i(z)| < 1 for all i, i.e., the method is absolutely stable.

BDF methods:

1. Backward Euler method:

  y_{k+1} = y_k + z y_{k+1},

with

  q(r) = (1 − z) r − 1,  r_0 = 1/(1 − z).

The absolute stability region is

  {z : |1 − z| > 1}.

Thus backward Euler is A-stable.

In general, the absolute stability region of a multistep method is the set of z such that the roots of

  p(r) = z s(r)

satisfy |r_i| < 1, i = 0, 1, ..., n. The boundary of the stability region can therefore be obtained by plotting

  z = p(r)/s(r),  for r = e^{iθ},  θ = 0, 2π/N, ..., 2π,  N > 0.

2. The trapezoidal method is A-stable and has the smallest error constant, C = 1/12, among all A-stable second-order methods.

3. There are no O(h^p), p > 2, A-stable multistep methods.

4. No explicit linear multistep method is A-stable.

5. There exist O(h^p), p > 2, implicit Runge-Kutta A-stable methods (see Hairer et al.).
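The boundary-locus formula z = p(r)/s(r) above is easy to evaluate numerically; here is a Python sketch for the second-order Adams-Bashforth method, for which p(r) = r^2 − r and s(r) = (3/2) r − 1/2 (plotting via matplotlib is left as a comment):

import numpy as np

# Boundary locus z = p(r)/s(r) on the unit circle r = e^{i theta},
# for second-order Adams-Bashforth: p(r) = r^2 - r, s(r) = 1.5 r - 0.5
theta = np.linspace(0.0, 2.0 * np.pi, 400)
r = np.exp(1j * theta)
z = (r ** 2 - r) / (1.5 * r - 0.5)
# import matplotlib.pyplot as plt; plt.plot(z.real, z.imag)  # stability boundary
print(z.real.min())   # leftmost boundary point: -1, so the real stability interval is (-1, 0)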

Example of a stiff problem:

  x' = 198 x + 199 y,
  y' = −398 x − 399 y,      (1.90)

with initial conditions

  x(0) = 1,  y(0) = −1.

The exact solution is

  x(t) = e^{−t},  y(t) = −e^{−t}.

The eigenvalues of the matrix

  A = [ 198   199 ]
      [ −398  −399 ]

are λ_1 = −1 and λ_2 = −200. We use RKF45 and the backward Euler method on [0, 1] with h = 1/n and show the results in Table 1.1. The second and third columns of that experiment, the final errors |x(t_n) − x_n| and |y(t_n) − y_n| for RKF45, remain at the level of roughly 1E−14 to 1E−17 for every n, while the last column, the infinity norm of the final error for the backward Euler method, decreases like O(h):

  n    ||e(1)||_∞ (backward Euler)
  10   2.49804E−02
  20   1.274E−02
  30   8.5554E−03
  40   6.43634E−03
  50   5.15967E−03
  60   4.30564E−03

Table 1.1: Errors for RKF45 and backward Euler applied to the stiff problem (1.90).

1.5 Two-point Boundary Value Problems

1.5.1 Introduction

1.5.2 The Shooting Method

1.5.3 The Finite Difference Method