EXAMPLE OF ONE-STEP METHOD


Consider solving

y' = y cos(x),  y(0) = 1

Imagine writing a Taylor series for the solution Y(x), say initially about x = 0. Then

Y(h) = Y(0) + h Y'(0) + (h^2/2) Y''(0) + (h^3/6) Y'''(0) + ...

We can calculate Y'(0) = Y(0) cos(0) = 1. How do we calculate Y''(0) and higher-order derivatives? Differentiate the equation repeatedly:

Y'(x) = Y(x) cos(x)
Y''(x) = -Y(x) sin(x) + Y'(x) cos(x)
Y'''(x) = -Y(x) cos(x) - 2Y'(x) sin(x) + Y''(x) cos(x)

Then Y(0) = 1, Y'(0) = 1, and

Y''(0) = -Y(0) sin(0) + Y'(0) cos(0) = 1
Y'''(0) = -Y(0) cos(0) - 2Y'(0) sin(0) + Y''(0) cos(0) = 0

Thus

Y(h) = Y(0) + h Y'(0) + (h^2/2) Y''(0) + (h^3/6) Y'''(0) + ... = 1 + h + h^2/2 + ...

We can generate as many terms as desired, obtaining added accuracy as we do so. In this particular case, the true solution is Y(x) = exp(sin x). Thus

Y(h) = 1 + h + h^2/2 - h^4/8 + ...

We can truncate the series after a particular order, then continue with the same process to generate approximations to Y(2h), Y(3h), .... Letting x_n = nh and using the order-2 Taylor approximation, we have

Y(x_{n+1}) = Y(x_n) + h Y'(x_n) + (h^2/2) Y''(x_n) + (h^3/6) Y'''(ξ_n)

with x_n ≤ ξ_n ≤ x_{n+1}. Drop the truncation error term, and then define

y_{n+1} = y_n + h y'_n + (h^2/2) y''_n,  n ≥ 0

with

y'_n = y_n cos(x_n)
y''_n = -y_n sin(x_n) + y'_n cos(x_n)

We give a numerical example of computing the numerical solution with Taylor series methods of orders 2, 3, and 4. For a Taylor series of degree r, the global error will be O(h^r). The numerical example output is given in a separate file.
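The order-2 method above can be sketched in a few lines of Python. This is an illustrative sketch, not part of the original notes; the function name and test point x = 1 are my own choices.

```python
import math

def taylor_order2(x0, y0, h, n_steps):
    """Order-2 Taylor method for y' = y*cos(x), using the derivative
    formulas derived above: y'' = -y*sin(x) + y'*cos(x)."""
    x, y = x0, y0
    for _ in range(n_steps):
        yp = y * math.cos(x)                        # y'_n
        ypp = -y * math.sin(x) + yp * math.cos(x)   # y''_n
        y = y + h * yp + 0.5 * h**2 * ypp
        x += h
    return y

# Compare with the true solution Y(x) = exp(sin x) at x = 1.
h = 0.1
approx = taylor_order2(0.0, 1.0, h, 10)
exact = math.exp(math.sin(1.0))
print(approx, exact, abs(exact - approx))
```

The error here shrinks by roughly a factor of 4 each time h is halved, consistent with the O(h^2) global error.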

A 4th-ORDER EXAMPLE

Consider solving

y' = -y,  y(0) = 1

whose true solution is Y(x) = e^{-x}. Differentiating the equation, we obtain

Y'(x) = -Y(x)
Y'' = -Y' = Y
Y''' = -Y'' = -Y
Y^{(4)} = Y

Then expanding Y(x_n + h) in a Taylor series,

Y(x_{n+1}) = Y_n + h Y'_n + (h^2/2) Y''_n + (h^3/6) Y'''_n + (h^4/24) Y^{(4)}_n + (h^5/120) Y^{(5)}(ξ_n)

Dropping the truncation error, we have the numerical method

y_{n+1} = y_n + h y'_n + (h^2/2) y''_n + (h^3/6) y'''_n + (h^4/24) y^{(4)}_n
        = (1 - h + h^2/2 - h^3/6 + h^4/24) y_n

with y_0 = 1. By induction,

y_n = (1 - h + h^2/2 - h^3/6 + h^4/24)^n,  n ≥ 0

Since

e^{-h} = 1 - h + h^2/2 - h^3/6 + h^4/24 - (h^5/120) e^{-ξ},  0 < ξ < h

we have

y_n = (e^{-h} + (h^5/120) e^{-ξ})^n = e^{-nh} (1 + (h^5/120) e^{h-ξ})^n ≈ e^{-x_n} (1 + (x_n h^4/120) e^{h-ξ})

Thus

Y(x_n) - y_n = -e^{-x_n} (x_n h^4/120) e^{h-ξ} = O(h^4)
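Since each step of this method just multiplies y_n by the same polynomial in h, the O(h^4) behavior is easy to check numerically. A small sketch (my own illustration, with function names of my choosing):

```python
import math

def taylor4_decay(h, n_steps):
    """Order-4 Taylor method for y' = -y, y(0) = 1: each step
    multiplies by the polynomial 1 - h + h^2/2 - h^3/6 + h^4/24."""
    factor = 1 - h + h**2 / 2 - h**3 / 6 + h**4 / 24
    return factor ** n_steps

# Errors at x = 1 for h and h/2; their ratio should be near 2^4 = 16.
e1 = abs(math.exp(-1) - taylor4_decay(0.1, 10))
e2 = abs(math.exp(-1) - taylor4_decay(0.05, 20))
print(e1, e2, e1 / e2)
```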

We can introduce the Taylor series method for the general problem

y' = f(x, y),  y(x_0) = Y_0

Simply imitate what was done above for the particular problem y' = y cos x.

In general,

Y'(x) = f(x, Y(x))
Y''(x) = f_x(x, Y(x)) + f_y(x, Y(x)) Y'(x) = f_x(x, Y(x)) + f_y(x, Y(x)) f(x, Y(x))
Y'''(x) = f_xx + 2 f_xy f + f_yy f^2 + f_y f_x + f_y^2 f

and we can continue in this manner. Thus we can calculate derivatives of any order for Y(x), and then we can define Taylor series of any desired order. This used to be considered much too arduous a task for practical problems, because everything had to be done by hand. But with symbolic programs such as Mathematica and Maple, Taylor series can be considered a serious framework for numerical methods. Programs that implement this in an automatic way, with varying order and stepsize, are available.
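As a free analogue of the Mathematica/Maple approach mentioned above, the repeated differentiation can be automated with SymPy: differentiate with respect to x and substitute Y' = f each time. A sketch (my own; SymPy usage is assumed, not part of the notes), applied to the earlier example f(x, y) = y cos x:

```python
import sympy as sp

x = sp.symbols('x')
Y = sp.Function('Y')

# Right-hand side of y' = f(x, y); here f = y*cos(x) from the first example.
f = Y(x) * sp.cos(x)

# derivs[k] holds Y^(k+1) expressed in terms of Y(x) alone.
derivs = [f]
for _ in range(2):
    d = sp.diff(derivs[-1], x).subs(sp.Derivative(Y(x), x), f)
    derivs.append(sp.expand(d))

for k, d in enumerate(derivs, start=1):
    print(f"Y^({k}) =", d)
```

The second entry reproduces Y'' = -Y sin x + Y' cos x with Y' = Y cos x substituted, i.e. Y cos^2 x - Y sin x, and the third gives Y'''(0) = 0 as computed by hand above.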

RUNGE-KUTTA METHODS

Nonetheless, most researchers still consider Taylor series methods to be too expensive for most practical problems (a point contested by others). This leads us to look for other one-step methods which imitate the Taylor series methods, without the necessity of calculating the higher-order derivatives. These are called Runge-Kutta methods. There are a number of ways in which one can approach Runge-Kutta methods, and I will describe a fairly classical approach. We begin by considering explicit Runge-Kutta methods of order 2. We want to write

y_{n+1} = y_n + h F(x_n, y_n, h; f)

with F(x_n, y_n, h; f) some carefully chosen approximation to f(x, y) on the interval [x_n, x_{n+1}]. In particular, write

F(x, y, h; f) = γ_1 f(x, y) + γ_2 f(x + αh, y + βh f(x, y))

F(x, y, h; f) = γ_1 f(x, y) + γ_2 f(x + αh, y + βh f(x, y))

This is some kind of average derivative. Intuitively, we should restrict α so that 0 ≤ α ≤ 1. In addition to this, how should the constants γ_1, γ_2, α, β be chosen? Introduce the truncation error

T_n(Y) = Y(x + h) - [Y(x) + h F(x, Y(x), h; f)]

and choose the constants to make T_n(Y) = O(h^p) with p as large as possible. By choosing

α = β = 1/(2γ_2),  γ_1 = 1 - γ_2

we can show

T_n(Y) = O(h^3)
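The claim T_n(Y) = O(h^3) can be checked numerically: compute T_n(Y) for a known solution at several stepsizes and watch T_n/h^3 settle toward a constant. A sketch of my own, using the earlier test problem y' = y cos x with Y(x) = exp(sin x) and the choice γ_2 = 1/2:

```python
import math

def F(x, y, h, f, gamma2):
    """The averaged slope F(x, y, h; f) with the constraints above:
    alpha = beta = 1/(2*gamma2), gamma1 = 1 - gamma2."""
    a = 1 / (2 * gamma2)
    return (1 - gamma2) * f(x, y) + gamma2 * f(x + a * h, y + a * h * f(x, y))

f = lambda x, y: y * math.cos(x)
Y = lambda x: math.exp(math.sin(x))

# Truncation error at x = 0 for shrinking h; T/h^3 stays roughly constant.
for h in [0.1, 0.05, 0.025]:
    T = Y(h) - (Y(0) + h * F(0.0, Y(0), h, f, 0.5))
    print(h, T / h**3)
```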

EXAMPLES

y_{n+1} = y_n + h F(x_n, y_n, h; f)
F(x, y, h; f) = γ_1 f(x, y) + γ_2 f(x + αh, y + βh f(x, y))

Case γ_2 = 1/2: This leads to the trapezoidal Runge-Kutta method

y_{n+1} = y_n + (h/2) [f(x_n, y_n) + f(x_n + h, y_n + h f(x_n, y_n))]   (1)

Case γ_2 = 1: This leads to the midpoint Runge-Kutta method

y_{n+1} = y_n + h f(x_n + h/2, y_n + (h/2) f(x_n, y_n))   (2)

We can derive other second-order formulas by choosing other values for γ_2.
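Both second-order formulas are easy to implement; a minimal sketch (my own, with illustrative function names), again tested on y' = y cos x with Y(x) = exp(sin x):

```python
import math

def rk2_trapezoid(f, x0, y0, h, n):
    """Trapezoidal Runge-Kutta method (1): gamma2 = 1/2."""
    x, y = x0, y0
    for _ in range(n):
        k1 = f(x, y)
        k2 = f(x + h, y + h * k1)
        y += h / 2 * (k1 + k2)
        x += h
    return y

def rk2_midpoint(f, x0, y0, h, n):
    """Midpoint Runge-Kutta method (2): gamma2 = 1."""
    x, y = x0, y0
    for _ in range(n):
        k1 = f(x, y)
        y += h * f(x + h / 2, y + h / 2 * k1)
        x += h
    return y

f = lambda x, y: y * math.cos(x)
exact = math.exp(math.sin(1.0))
print(abs(exact - rk2_trapezoid(f, 0.0, 1.0, 0.1, 10)))
print(abs(exact - rk2_midpoint(f, 0.0, 1.0, 0.1, 10)))
```

Neither method evaluates any derivative of f; two evaluations of f per step replace the y'' term of the order-2 Taylor method.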

NUMERICAL EXAMPLE

We illustrate these two methods by solving

Y'(x) = -Y(x) + 2 cos x,  Y(0) = 1

with true solution Y(x) = sin x + cos x. We give numerical results for the Runge-Kutta method in (1) with stepsizes h = 0.05 and 0.1. Observe the rate at which the error decreases when h is halved.

h       x      y_h(x)         Error
0.1     2.0    0.491215673    1.93E-3
        4.0   -1.407898629   -2.55E-3
        6.0    0.680696723    5.81E-5
        8.0    0.841376339    2.48E-3
       10.0   -1.380966579   -2.13E-3
0.05    2.0    0.492682499    4.68E-4
        4.0   -1.409821234   -6.25E-4
        6.0    0.680734664    2.01E-5
        8.0    0.843254396    6.04E-4
       10.0   -1.382569379   -5.23E-4

Observe the ratio by which the error decreases when h is halved, for each fixed x. This is consistent with an error formula

Y(x_n) - y_h(x_n) = O(h^2)

FOUR-STAGE FORMULAS

We have just studied 2-stage formulas. To obtain a higher rate of convergence, we must use more derivative evaluations. A 4-stage formula looks like

y_{n+1} = y_n + h F(x_n, y_n, h; f)
F(x, y, h; f) = γ_1 V_1 + γ_2 V_2 + γ_3 V_3 + γ_4 V_4
V_1 = f(x, y)
V_2 = f(x + α_2 h, y + β_{2,1} h V_1)
V_3 = f(x + α_3 h, y + β_{3,1} h V_1 + β_{3,2} h V_2)
V_4 = f(x + α_4 h, y + h(β_{4,1} V_1 + β_{4,2} V_2 + β_{4,3} V_3))

Again it can be analyzed by expanding the truncation error

T_n(Y) = Y(x + h) - [Y(x) + h F(x, Y(x), h; f)]

in powers of h.

We attempt to choose the unknown coefficients so as to force T_n(Y) = O(h^p) with p as large as possible. In the above case, this can be done so that p = 5. The algebra becomes very complicated, and we omit it here. The truncation error indicates the new error introduced in each step of the numerical method. The total or global error for this case will be of size O(h^4). The classical 4th-order formula follows:

y_{n+1} = y_n + (h/6) [V_1 + 2V_2 + 2V_3 + V_4]
V_1 = f(x_n, y_n)
V_2 = f(x_n + h/2, y_n + (h/2) V_1)
V_3 = f(x_n + h/2, y_n + (h/2) V_2)
V_4 = f(x_n + h, y_n + h V_3)

The even more accurate 4th-order Runge-Kutta-Fehlberg formula is given in the text, along with a numerical example.
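The classical formula translates directly into code. A sketch of my own, run on the example Y' = -Y + 2 cos x from above, which also exhibits the O(h^4) global error by halving h:

```python
import math

def rk4(f, x0, y0, h, n):
    """Classical fourth-order Runge-Kutta method."""
    x, y = x0, y0
    for _ in range(n):
        v1 = f(x, y)
        v2 = f(x + h / 2, y + h / 2 * v1)
        v3 = f(x + h / 2, y + h / 2 * v2)
        v4 = f(x + h, y + h * v3)
        y += h / 6 * (v1 + 2 * v2 + 2 * v3 + v4)
        x += h
    return y

# Errors at x = 2 for h = 0.1 and h = 0.05; true solution sin(x) + cos(x).
f = lambda x, y: -y + 2 * math.cos(x)
exact = math.sin(2) + math.cos(2)
err_h = abs(exact - rk4(f, 0.0, 1.0, 0.1, 20))
err_h2 = abs(exact - rk4(f, 0.0, 1.0, 0.05, 40))
print(err_h, err_h2, err_h / err_h2)  # ratio near 16 for an O(h^4) method
```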

ERROR DISCUSSION

If a Runge-Kutta method satisfies T_n(Y) = O(h^p) with p ≥ 2, then it can be shown that

|Y(x_n) - y_n| ≤ c h^{p-1},  x_0 ≤ x_n ≤ b

when solving the initial value problem

y' = f(x, y),  x_0 ≤ x ≤ b,  y(x_0) = Y_0

We can also go further and show that

Y(x_n) - y_h(x_n) = D(x_n) h^{p-1} + O(h^p),  x_0 ≤ x_n ≤ b

This can then be used to justify Richardson's extrapolation. For example, if T_n(Y) = O(h^3), then Richardson's error estimate is

Y(x_n) - y_h(x_n) ≈ (1/3) [y_h(x_n) - y_{2h}(x_n)]
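Richardson's estimate can be checked on the earlier example. The sketch below (my own illustration) runs the trapezoidal method (1) on Y' = -Y + 2 cos x to x = 2 with stepsizes h and 2h, and compares (y_h - y_{2h})/3 against the true error:

```python
import math

def rk2(f, x0, y0, h, n):
    """Trapezoidal second-order Runge-Kutta method (1)."""
    x, y = x0, y0
    for _ in range(n):
        k1 = f(x, y)
        k2 = f(x + h, y + h * k1)
        y += h / 2 * (k1 + k2)
        x += h
    return y

f = lambda x, y: -y + 2 * math.cos(x)
h = 0.05
y_h = rk2(f, 0.0, 1.0, h, 40)        # reach x = 2 with stepsize h
y_2h = rk2(f, 0.0, 1.0, 2 * h, 20)   # reach x = 2 with stepsize 2h
estimate = (y_h - y_2h) / 3          # Richardson's error estimate
true_err = (math.sin(2) + math.cos(2)) - y_h
print(estimate, true_err)  # the two agree to leading order
```

Since Y - y_h ≈ D h^2 and Y - y_{2h} ≈ 4D h^2, subtracting gives y_h - y_{2h} ≈ 3D h^2, which is exactly three times the error in y_h; that is the algebra behind the estimate.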