Lecture Notes on Numerical Differential Equations: IVP

Professor Biswa Nath Datta
Department of Mathematical Sciences
Northern Illinois University
DeKalb, IL 60115, USA
E-mail: dattab@math.niu.edu
URL: www.math.niu.edu/~dattab

1 Initial Value Problem for Ordinary Differential Equations

We consider the problem of numerically solving a system of differential equations of the form

    dy/dt = f(t, y), a ≤ t ≤ b, y(a) = α (given).

Such a problem is called the Initial Value Problem, or in short IVP, because the initial value of the solution, y(a) = α, is given. Since there are infinitely many values of t between a and b, we will be concerned here only with finding approximations of the solution y(t) at several specified values of t in [a, b], rather than finding y(t) at every value between a and b.

Denote y_i = an approximate value of y(t) at t = t_i. Divide [a, b] into N equal subintervals of length h:

    h = (b − a)/N (the step size), t_0 = a < t_1 < t_2 < ... < t_N = b.

[Figure: the interval [a, b] subdivided by the mesh points a = t_0, t_1, t_2, ..., t_N = b.]
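For instance, the mesh points can be generated directly from a, b, and N. A minimal sketch in Python (the interval and variable names below are chosen only for illustration):

    import numpy as np

    a, b, N = 0.0, 1.0, 4                             # illustrative interval and number of subintervals
    h = (b - a) / N                                   # step size
    t = np.array([a + i * h for i in range(N + 1)])   # t_0 = a < t_1 < ... < t_N = b
    print(h, t)                                       # -> 0.25 and [0., 0.25, 0.5, 0.75, 1.]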

The Initial Value Problem

Given
(1) the differential equation y' = f(t, y), a ≤ t ≤ b,
(2) the initial value y(t_0) = y(a) = α, and
(3) the step size h,
find y_i (an approximation of y(t_i)), i = 1, 2, ..., N, where N = (b − a)/h.

We will briefly describe here the well-known numerical methods for solving the IVP, such as:

- The Euler Method
- The Taylor Method of higher order
- The Runge-Kutta Methods
- The Adams-Moulton Method
- The Milne Method

We will also discuss the error behavior and convergence of these methods. However, before doing so, we state a result, without proof, on the existence and uniqueness of the solution of the IVP in the following section. The proof can be found in most books on ordinary differential equations.

Existence and Uniqueness of the Solution for the IVP

Theorem (Existence and Uniqueness Theorem for the IVP). The initial value problem

    y' = f(t, y), y(a) = α

has a unique solution y(t) for a ≤ t ≤ b if f(t, y) is continuous on the domain R = {(t, y) : a ≤ t ≤ b, −∞ < y < ∞} and satisfies the inequality

    |f(t, y) − f(t, y*)| ≤ L |y − y*|

whenever (t, y) and (t, y*) belong to R.

Definition. The condition |f(t, y) − f(t, y*)| ≤ L |y − y*| is called the Lipschitz Condition. The number L is called a Lipschitz Constant.

Definition. A set S is said to be convex if, whenever (t_1, y_1) and (t_2, y_2) belong to S, the point ((1 − λ)t_1 + λt_2, (1 − λ)y_1 + λy_2) also belongs to S for each λ with 0 ≤ λ ≤ 1.

Simplification of the Lipschitz Condition for a Convex Domain

If the domain R happens to be a convex set, then the condition of the above theorem reduces to

    |∂f(t, y)/∂y| ≤ L for all (t, y) in R.

Lipschitz Condition and Well-Posedness

Definition. An IVP is said to be well-posed if a small perturbation in the data of the problem leads to only a small change in the solution.
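For a concrete right-hand side, the simplified condition above is easy to check symbolically. The following is a minimal sketch (Python with SymPy; the right-hand side f(t, y) = t*y and the interval are chosen only for illustration and are not taken from these notes):

    # Checking the simplified Lipschitz condition |df/dy| <= L on the convex strip
    # R = {a <= t <= b, -infinity < y < infinity} for an illustrative f(t, y) = t*y.
    import sympy as sp

    t, y = sp.symbols('t y', real=True)
    f = t * y                            # illustrative right-hand side
    dfdy = sp.diff(f, y)                 # partial derivative with respect to y
    print(dfdy)                          # -> t, so |df/dy| = |t| <= max(|a|, |b|) on the strip

    a, b = 0, 2                          # illustrative interval [a, b]
    L = max(abs(a), abs(b))              # a Lipschitz constant valid on R
    print("Lipschitz constant L =", L)   # -> 2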

Since numerical computation may very well introduce some perturbations to the problem, it is important that the problem to be solved is well-posed. Fortunately, the Lipschitz condition is a sufficient condition for the IVP to be well-posed.

Theorem (Well-Posedness of the IVP). If f(t, y) satisfies the Lipschitz condition, then the IVP is well-posed.

The Euler Method

One of the simplest methods for solving the IVP is the classical Euler method. The method is derived from the Taylor series expansion of the function y(t). The function y(t) has the following Taylor series expansion of order n at t = t_{i+1}:

    y(t_{i+1}) = y(t_i) + (t_{i+1} − t_i) y'(t_i) + ((t_{i+1} − t_i)^2/2!) y''(t_i) + ... + ((t_{i+1} − t_i)^n/n!) y^(n)(t_i) + ((t_{i+1} − t_i)^{n+1}/(n+1)!) y^(n+1)(ξ_i),

where ξ_i is in (t_i, t_{i+1}). Substitute h = t_{i+1} − t_i. Then we get the

Taylor Series Expansion of order n of y(t):

    y(t_{i+1}) = y(t_i) + h y'(t_i) + (h^2/2!) y''(t_i) + ... + (h^n/n!) y^(n)(t_i) + (h^{n+1}/(n+1)!) y^(n+1)(ξ_i).

For n = 1, this formula reduces to

    y(t_{i+1}) = y(t_i) + h y'(t_i) + (h^2/2) y''(ξ_i).

The term (h^2/2!) y''(ξ_i) is called the remainder term. Neglecting the remainder term, we have

Euler's Method

    y_{i+1} = y_i + h y'(t_i) = y_i + h f(t_i, y_i), i = 0, 1, 2, ..., N − 1.

This formula is known as the Euler method and can now be used to approximate y(t_{i+1}).

Geometrical Interpretation

[Figure: starting from y(t_0) = y(a) = α, the approximations y_1, y_2, ..., y_N of y(t_1), y(t_2), ..., y(t_N) = y(b) are obtained by following tangent-line segments over the successive mesh points t_1, t_2, ..., t_{N−1}, b = t_N.]
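The iteration above translates directly into a few lines of code. Below is a minimal sketch in Python (the function names euler and f are illustrative; the test problem y' = t^2 + 5, y(0) = 0 is the one used in the examples that follow):

    # Euler's method: y_{i+1} = y_i + h*f(t_i, y_i), i = 0, 1, ..., N-1
    def euler(f, a, b, alpha, h):
        N = int(round((b - a) / h))      # number of steps
        t, y = a, alpha                  # t_0 = a, y_0 = y(a) = alpha
        ys = [y]
        for i in range(N):
            y = y + h * f(t, y)          # Euler update
            t = a + (i + 1) * h          # next mesh point
            ys.append(y)
        return ys

    # Test problem from the example below: y' = t^2 + 5, y(0) = 0; exact solution y(t) = t^3/3 + 5t
    f = lambda t, y: t**2 + 5
    print(euler(f, 0.0, 1.0, 0.0, 0.25))   # -> [0.0, 1.25, 2.515625, 3.828125, 5.21875]

Halving h and repeating the computation roughly halves the error at a fixed t, consistent with the O(h) global error bound discussed below.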

Algorithm: Euler's Method for the IVP

Input: (i) The function f(t, y). (ii) The end points of the interval [a, b]: a and b. (iii) The initial value: α = y(t_0) = y(a). (iv) The step size h.

Output: Approximations y_{i+1} of y(t_{i+1}), i = 0, 1, 2, ..., N − 1.

Step 1. Initialization: Set t_0 = a, y_0 = y(t_0) = y(a) = α, and N = (b − a)/h.

Step 2. For i = 0, 1, 2, ..., N − 1 do
    Compute y_{i+1} = y_i + h f(t_i, y_i)
End

Example: y' = t^2 + 5, 0 ≤ t ≤ 1, y(0) = 0, h = 0.25.

The points of subdivision are: t_0 = 0, t_1 = 0.25, t_2 = 0.50, t_3 = 0.75, and t_4 = 1.

i = 0: t_1 = t_0 + h = 0.25
    y_1 = y_0 + h f(t_0, y_0) = 0 + 0.25(5) = 1.25 (exact value of y(t_1): 1.2552)

i = 1: t_2 = t_1 + h = 0.50
    y_2 = y_1 + h f(t_1, y_1) = 1.25 + 0.25(t_1^2 + 5) = 1.25 + 0.25((0.25)^2 + 5) = 2.5156 (exact value of y(t_2): 2.5417)

i = 2: t_3 = t_2 + h = 0.75
    y_3 = y_2 + h f(t_2, y_2) = 2.5156 + 0.25((0.5)^2 + 5) = 3.8281 (exact value of y(t_3): 3.8906)

Note: The exact values above are correct up to 4 decimal digits.

Example: y' = t^2 + 5, 0 ≤ t ≤ 2,

y(0) = 0, h = 0.5.

So, the points of subdivision are: t_0 = 0, t_1 = 0.5, t_2 = 1, t_3 = 1.5, t_4 = 2. We compute y_1, y_2, y_3, and y_4, which are, respectively, approximations to y(0.5), y(1), y(1.5), and y(2).

i = 0: y_1 = y_0 + h f(t_0, y_0) = y(0) + h f(0, 0) = 0 + 0.5 × 5 = 2.5 (exact value = 2.5417).

i = 1: y_2 = y_1 + h f(t_1, y_1) = 2.5 + 0.5((0.5)^2 + 5) = 5.125 (exact value = 5.3333).

i = 2: y_3 = y_2 + h f(t_2, y_2) = 5.125 + 0.5(1^2 + 5) = 8.125 (exact value = 8.625).

The Errors in Euler's Method

The approximations obtained by a numerical method for solving the IVP are usually subject to three types of errors:

- Local Truncation Error
- Global Truncation Error
- Round-off Error

Definition. The local truncation error is the error made at a single step due to the truncation of the series used to solve the problem.

Definition. The global truncation error is the truncation error at any step; that is, the total of the accumulated single-step truncation errors at the previous steps.

Recall that the Euler method was obtained by truncating the Taylor series

    y(t_{i+1}) = y(t_i) + h y'(t_i) + (h^2/2) y''(t_i) + ...

after two terms. Thus, in obtaining Euler's method, the first term neglected was (h^2/2) y''(t). So the local error in Euler's method is

    E_L = (h^2/2) y''(ξ_i),

where ξ_i lies between t_i and t_{i+1}. In this case, we say that the local error is of order h^2, written as O(h^2). On the other hand, the global truncation error is of order h, that is, O(h), as can be seen from the following theorem.

Denote the global error at step i by E_i; that is, E_i = y(t_i) − y_i. Below we give a bound for this error, assuming that certain properties of the derivatives of the solution are known. The proof of the result can be found in the book by Gear [Numerical Initial Value Problems in Ordinary Differential Equations, Prentice-Hall, Inc. (1971)].

Theorem (Global Error Bound for the Euler Method). Let y(t) be the unique solution of the IVP

    y' = f(t, y), y(a) = α, a ≤ t ≤ b, −∞ < y < ∞.

Let L and M be two numbers such that |∂f(t, y)/∂y| ≤ L and |y''(t)| ≤ M in [a, b]. Then the global error E_i at t = t_i satisfies

    |E_i| = |y(t_i) − y_i| ≤ (hM / 2L)(e^{L(t_i − a)} − 1).

Thus, the global error bound for Euler's method depends upon h, whereas the local error depends upon h^2.

Remark. Since the exact solution y(t) of the IVP is not known, the above bound may not be of practical importance as far as knowing a priori how large the error can be. However, from this error bound we can say that the Euler method can be made to converge faster by decreasing the step size. Furthermore, if the quantities L and M of the above theorem can be found, then we can determine what step size will be needed to achieve a certain accuracy, as the following example shows.

Example:

    dy/dt = t^2 + y^2, y(0) = 0, 0 ≤ t ≤ 1, −1/2 ≤ y(t) ≤ 1/2.

Determine how small the step size should be so that the error does not exceed ε = 10^{-4}.

Since f(t, y) = t^2 + y^2, we have ∂f/∂y = 2y. Thus |∂f/∂y| = |2y| ≤ 1 for all y in the given range, giving L = 1.

To find M, we compute the second derivative of y(t) as follows:

    y' = dy/dt = f(t, y) (given).

By implicit differentiation,

    y'' = f_t + f f_y = 2t + (t^2 + y^2)(2y) = 2t + 2y(t^2 + y^2).

So y''(t) = 2t + 2y(t^2 + y^2) for −1/2 ≤ y ≤ 1/2. Thus, M = 2, and

    |E_i| = |y(t_i) − y_i| ≤ (hM / 2L)(e^{L t_i} − 1) = h(e^{t_i} − 1) ≤ h(e − 1).

Now, for the error not to exceed 10^{-4}, we must have

    h(e − 1) < 10^{-4}, or h < 10^{-4} / (e − 1) ≈ 5.8198 × 10^{-5}.

3 Higher-Order Taylor Methods

Recall that the Taylor series expansion of y(t) of degree n is given by

    y(t_{i+1}) = y(t_i) + h y'(t_i) + (h^2/2!) y''(t_i) + ... + (h^n/n!) y^(n)(t_i) + (h^{n+1}/(n+1)!) y^(n+1)(ξ_i).

Now,

(i) y'(t) = f(t, y(t)) (given).
(ii) y''(t) = f'(t, y(t)).
(iii) y^(i)(t) = f^(i−1)(t, y(t)), i = 1, 2, ..., n.

Thus,

    y(t_{i+1}) = y(t_i) + h f(t_i, y(t_i)) + (h^2/2) f'(t_i, y(t_i)) + ... + (h^n/n!) f^(n−1)(t_i, y(t_i)) + (h^{n+1}/(n+1)!) f^(n)(ξ_i, y(ξ_i))
               = y(t_i) + h [f(t_i, y(t_i)) + (h/2) f'(t_i, y(t_i)) + ... + (h^{n−1}/n!) f^(n−1)(t_i, y(t_i))] + Remainder Term.

Neglecting the remainder term, the above formula can be written in compact form as follows:

    y_{i+1} = y_i + h T_k(t_i, y_i), i = 0, 1, 2, ..., N − 1,

where T_k(t_i, y_i) is defined by

    T_k(t_i, y_i) = f(t_i, y_i) + (h/2) f'(t_i, y_i) + ... + (h^{k−1}/k!) f^(k−1)(t_i, y_i).

So, if we truncate the Taylor series after (k + 1) terms and use the truncated series to obtain the approximation y_{i+1} of y(t_{i+1}), we have the following k-th order Taylor algorithm for the IVP.

Taylor's Algorithm of Order k for the IVP

Input: (i) The function f(t, y). (ii) The end points: a and b. (iii) The initial value: α = y(t_0) = y(a). (iv) The order of the algorithm: k. (v) The step size: h.

Step 1. Initialization: t_0 = a, y_0 = α, N = (b − a)/h.

Step 2. For i = 0, 1, 2, ..., N − 1 do
    2.1 Compute T_k(t_i, y_i) = f(t_i, y_i) + (h/2) f'(t_i, y_i) + ... + (h^{k−1}/k!) f^(k−1)(t_i, y_i)
    2.2 Compute y_{i+1} = y_i + h T_k(t_i, y_i)
End

Note: With k = 1, the above formula for y_{i+1} reduces to Euler's method.

Example: y' = y − t^2 + 1, 0 ≤ t ≤ 2, y(0) = 0.5, h = 0.2.

    f(t, y(t)) = y − t^2 + 1 (given),
    f'(t, y(t)) = d/dt (y − t^2 + 1) = y' − 2t = y − t^2 + 1 − 2t,
    f''(t, y(t)) = d/dt (y − t^2 + 1 − 2t) = y' − 2t − 2 = y − t^2 − 2t − 1.

So, with Taylor's method of order 2,

    y(0.2) ≈ y_1 = y_0 + h f(t_0, y_0) + (h^2/2) f'(t_0, y_0) = 0.5 + 0.2 × 1.5 + ((0.2)^2/2)(0.5 + 1) = 0.83,
    y(0.4) ≈ y_2 = 1.2158.
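A short sketch of Taylor's method of order 2 applied to this example (plain Python; the names taylor2, f, and fp are illustrative and not part of the notes; fp is the total derivative f'(t, y) = y − t^2 + 1 − 2t worked out above):

    # Taylor's method of order 2: y_{i+1} = y_i + h*[f(t_i, y_i) + (h/2)*f'(t_i, y_i)]
    def taylor2(f, fp, a, b, alpha, h):
        N = int(round((b - a) / h))
        t, y = a, alpha
        ys = [y]
        for i in range(N):
            T2 = f(t, y) + (h / 2) * fp(t, y)    # T_2(t_i, y_i)
            y = y + h * T2
            t = a + (i + 1) * h
            ys.append(y)
        return ys

    f  = lambda t, y: y - t**2 + 1            # f(t, y)
    fp = lambda t, y: y - t**2 + 1 - 2*t      # f'(t, y), computed above
    print(taylor2(f, fp, 0.0, 2.0, 0.5, 0.2)[:3])   # -> approximately [0.5, 0.83, 1.2158]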

4 Runge-Kutta Methods

Euler's method is the simplest to implement; however, even for reasonable accuracy the step size h needs to be very small. The difficulty with higher-order Taylor series methods is that the higher-order derivatives of f(t, y) must be computed; these are very often difficult to obtain, and they cannot be obtained at all when f(t, y) is not explicitly known, as happens in many applications. The Runge-Kutta methods aim at achieving the accuracy of higher-order Taylor series methods without computing the higher-order derivatives. We first develop the simplest one: the Runge-Kutta method of order 2.

The Runge-Kutta Methods of Order 2

Suppose that we seek an expression for the approximation y_{i+1} in the form

    y_{i+1} = y_i + α_1 k_1 + α_2 k_2,   (4.1)

where

    k_1 = h f(t_i, y_i),   (4.2)

and

    k_2 = h f(t_i + αh, y_i + βk_1).   (4.3)

The constants α_1, α_2, α, and β are to be chosen so that the formula is as accurate as the Taylor series method of order 2. To develop the method we need an important result from calculus: the Taylor series for a function of two variables.

Taylor's Theorem for a Function of Two Variables

Let f(t, y) and its partial derivatives of orders up to (n + 1) be continuous in the domain D = {(t, y) : a ≤ t ≤ b, c ≤ y ≤ d}. Then

    f(t, y) = f(t_0, y_0) + [(t − t_0) f_t(t_0, y_0) + (y − y_0) f_y(t_0, y_0)] + ...
              + [(1/n!) Σ_{i=0}^{n} (n choose i) (t − t_0)^{n−i} (y − y_0)^i (∂^n f / ∂t^{n−i} ∂y^i)(t_0, y_0)] + R_n(t, y),

where f_t = ∂f/∂t, f_y = ∂f/∂y, and R_n(t, y) is the remainder after n terms and involves the partial derivatives of order n + 1.

Using the above theorem with n = 1, we have

    f(t_i + αh, y_i + βk_1) = f(t_i, y_i) + αh f_t(t_i, y_i) + βk_1 f_y(t_i, y_i).   (4.4)

From (4.4) and (4.3), we obtain

    k_2 / h = f(t_i, y_i) + αh f_t(t_i, y_i) + βk_1 f_y(t_i, y_i).   (4.5)

Again, substituting the value of k_1 from (4.2) and k_2 from (4.3) in (4.1), we get (after some rearrangement):

    y_{i+1} = y_i + α_1 h f(t_i, y_i) + α_2 h [f(t_i, y_i) + αh f_t(t_i, y_i) + βh f(t_i, y_i) f_y(t_i, y_i)]
            = y_i + (α_1 + α_2) h f(t_i, y_i) + α_2 h^2 [α f_t(t_i, y_i) + β f(t_i, y_i) f_y(t_i, y_i)].   (4.6)

Also, note that

    y(t_{i+1}) = y(t_i) + h f(t_i, y_i) + (h^2/2)(f_t(t_i, y_i) + f(t_i, y_i) f_y(t_i, y_i)) + higher-order terms.

So, neglecting the higher-order terms, we can write

    y_{i+1} = y_i + h f(t_i, y_i) + (h^2/2)(f_t(t_i, y_i) + f(t_i, y_i) f_y(t_i, y_i)).   (4.7)

If we want (4.6) and (4.7) to agree as numerical approximations, then we must have

    α_1 + α_2 = 1 (comparing the coefficients of h f(t_i, y_i)),
    α_2 α = 1/2 (comparing the coefficients of h^2 f_t(t_i, y_i)),
    α_2 β = 1/2 (comparing the coefficients of h^2 f(t_i, y_i) f_y(t_i, y_i)).

Since the number of unknowns here exceeds the number of equations, there are infinitely many possible solutions. The simplest solution is:

    α_1 = α_2 = 1/2, α = β = 1.

With these choices we can generate y_{i+1} from y_i as follows. The process is known as the Modified Euler Method.

Generating y_{i+1} from y_i in the Modified Euler Method

    y_{i+1} = y_i + (1/2)(k_1 + k_2), where
    k_1 = h f(t_i, y_i),
    k_2 = h f(t_i + h, y_i + k_1),

or

    y_{i+1} = y_i + (h/2)[f(t_i, y_i) + f(t_i + h, y_i + h f(t_i, y_i))].

Algorithm: The Modified Euler Method

Inputs:
    The given function: f(t, y)
    The end points of the interval: a and b
    The step size: h
    The initial value: y(t_0) = y(a) = α

Outputs: Approximations y_{i+1} of y(t_{i+1}) = y(t_0 + (i + 1)h), i = 0, 1, 2, ..., N − 1.

Step 1 (Initialization). Set t_0 = a, y_0 = y(t_0) = y(a) = α, N = (b − a)/h.

Step 2. For i = 0, 1, 2, ..., N − 1 do
    Compute k_1 = h f(t_i, y_i)
    Compute k_2 = h f(t_i + h, y_i + k_1)
    Compute y_{i+1} = y_i + (1/2)(k_1 + k_2)
End

Local Error in the Modified Euler Method

Since in deriving the Modified Euler method we neglected the terms involving h^3 and higher powers of h, the local error for this method is O(h^3). Thus, with the Modified Euler method, we will be able to use a larger step size h than with the Euler method to obtain the same accuracy.

Example: y' = t + y, y(0) = 1, h = 0.01, y_0 = y(0) = 1.

i = 0:
    k_1 = h f(t_0, y_0) = 0.01(0 + 1) = 0.01
    k_2 = h f(t_0 + h, y_0 + k_1) = 0.01 f(0.01, 1.01) = 0.01(0.01 + 1.01) = 0.01 × 1.02 = 0.0102
    y(0.01) ≈ y_1 = y_0 + (1/2)(k_1 + k_2) = 1 + (1/2)(0.01 + 0.0102) = 1.0101

i = 1:
    k_1 = h f(t_1, y_1) = 0.01 f(0.01, 1.0101) = 0.01(0.01 + 1.0101) = 0.0102
    k_2 = h f(t_1 + h, y_1 + k_1) = 0.01 f(0.02, 1.0101 + 0.0102) = 0.01(0.02 + 1.0203) = 0.0104
    y(0.02) ≈ y_2 = y_1 + (1/2)(k_1 + k_2) = 1.0101 + (1/2)(0.0102 + 0.0104) = 1.0204
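A minimal sketch of the Modified Euler iteration for this example (plain Python; the names modified_euler and f are illustrative):

    # Modified Euler: k1 = h*f(t_i, y_i), k2 = h*f(t_i + h, y_i + k1),
    # y_{i+1} = y_i + (k1 + k2)/2
    def modified_euler(f, a, b, alpha, h):
        N = int(round((b - a) / h))
        t, y = a, alpha
        ys = [y]
        for i in range(N):
            k1 = h * f(t, y)
            k2 = h * f(t + h, y + k1)
            y = y + 0.5 * (k1 + k2)
            t = a + (i + 1) * h
            ys.append(y)
        return ys

    f = lambda t, y: t + y                           # the example above
    print(modified_euler(f, 0.0, 0.02, 1.0, 0.01))   # -> approximately [1.0, 1.0101, 1.0204]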

The Midpoint and Heun's Methods

In deriving the Modified Euler method, we considered only one set of possible values of α_1, α_2, α, and β. We will now consider two more sets of values.

α_1 = 0, α_2 = 1, α = β = 1/2. This gives us the Midpoint Method.

The Midpoint Method

    y_{i+1} = y_i + k_2, where
    k_1 = h f(t_i, y_i),
    k_2 = h f(t_i + h/2, y_i + k_1/2),

or

    y_{i+1} = y_i + h f(t_i + h/2, y_i + (h/2) f(t_i, y_i)), i = 0, 1, ..., N − 1.

α_1 = 1/4, α_2 = 3/4, α = β = 2/3. Then we have Heun's Method.

Heun's Method

    y_{i+1} = y_i + (1/4) k_1 + (3/4) k_2, where
    k_1 = h f(t_i, y_i),
    k_2 = h f(t_i + (2/3)h, y_i + (2/3) k_1),

or

    y_{i+1} = y_i + (h/4) f(t_i, y_i) + (3h/4) f(t_i + (2/3)h, y_i + (2/3)h f(t_i, y_i)), i = 0, 1, 2, ..., N − 1.

Heun's Method and the Modified Euler Method are classified as Runge-Kutta methods of order 2.
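Both variants differ from the Modified Euler method only in where the stage k_2 is sampled and how the stages are weighted. A minimal sketch of one step of each (plain Python; the names midpoint_step, heun_step, and the test values are illustrative):

    # One step of the Midpoint method: y_{i+1} = y_i + h*f(t_i + h/2, y_i + (h/2)*f(t_i, y_i))
    def midpoint_step(f, t, y, h):
        k1 = h * f(t, y)
        k2 = h * f(t + h / 2, y + k1 / 2)
        return y + k2

    # One step of Heun's method: y_{i+1} = y_i + k1/4 + 3*k2/4, with k2 sampled at t_i + 2h/3
    def heun_step(f, t, y, h):
        k1 = h * f(t, y)
        k2 = h * f(t + 2 * h / 3, y + 2 * k1 / 3)
        return y + k1 / 4 + 3 * k2 / 4

    f = lambda t, y: t + y           # illustrative problem: y' = t + y, y(0) = 1
    print(midpoint_step(f, 0.0, 1.0, 0.01), heun_step(f, 0.0, 1.0, 0.01))   # both approximately 1.0101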

The Runge-Kutta Method of Order 4

A method very widely used in practice is the Runge-Kutta method of order 4. Its derivation is complicated, so we will just state the method, without proof.

Algorithm: The Runge-Kutta Method of Order 4

Inputs:
    f(t, y) - the given function
    a, b - the end points of the interval
    α - the initial value y(t_0)
    h - the step size

Outputs: The approximations y_{i+1} of y(t_{i+1}), i = 0, 1, 2, ..., N − 1.

Step 1 (Initialization). Set t_0 = a, y_0 = y(t_0) = y(a) = α, N = (b − a)/h.

Step 2 (Computation of the Runge-Kutta coefficients). For i = 0, 1, 2, ..., N − 1 do
    k_1 = h f(t_i, y_i)
    k_2 = h f(t_i + h/2, y_i + (1/2) k_1)
    k_3 = h f(t_i + h/2, y_i + (1/2) k_2)
    k_4 = h f(t_i + h, y_i + k_3)

Step 3 (Computation of the approximate solution). Compute

    y_{i+1} = y_i + (1/6)(k_1 + 2k_2 + 2k_3 + k_4)

The Local Truncation Error: The local truncation error of the Runge-Kutta method of order 4 is O(h^5).

Example: y' = t + y, y(0) = 1, h = 0.01.

Let's compute y(0.01) using the Runge-Kutta method of order 4.

i = 0:

    y(0.01) ≈ y_1 = y_0 + (1/6)(k_1 + 2k_2 + 2k_3 + k_4), where

    k_1 = h f(t_0, y_0) = 0.01 f(0, 1) = 0.01 × 1 = 0.01,
    k_2 = h f(t_0 + h/2, y_0 + k_1/2) = 0.01 f(0.005, 1.005) = 0.01(0.005 + 1.005) = 0.0101,
    k_3 = h f(t_0 + h/2, y_0 + k_2/2) = 0.01(0.005 + 1.00505) = 0.0101005,
    k_4 = h f(t_0 + h, y_0 + k_3) = 0.01(0.01 + 1.0101005) = 0.0102010,

so

    y_1 = y_0 + (1/6)(k_1 + 2k_2 + 2k_3 + k_4) = 1.010100334,

and so on.
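The classical fourth-order Runge-Kutta scheme above, as a minimal sketch in plain Python (the names rk4 and f are illustrative), reproducing the computation of y(0.01) for this example:

    # Classical RK4: y_{i+1} = y_i + (k1 + 2*k2 + 2*k3 + k4)/6
    def rk4(f, a, b, alpha, h):
        N = int(round((b - a) / h))
        t, y = a, alpha
        ys = [y]
        for i in range(N):
            k1 = h * f(t, y)
            k2 = h * f(t + h / 2, y + k1 / 2)
            k3 = h * f(t + h / 2, y + k2 / 2)
            k4 = h * f(t + h, y + k3)
            y = y + (k1 + 2 * k2 + 2 * k3 + k4) / 6
            t = a + (i + 1) * h
            ys.append(y)
        return ys

    f = lambda t, y: t + y                  # the example above, y(0) = 1
    print(rk4(f, 0.0, 0.01, 1.0, 0.01))     # -> approximately [1.0, 1.010100334]

For this problem the exact solution is y(t) = 2e^t − t − 1, so y(0.01) ≈ 1.0101003342, and the single RK4 step already agrees with it to about ten significant digits, consistent with the O(h^5) local truncation error.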