AN OVERVIEW

Numerical Methods for ODE Initial Value Problems

1. One-step methods (Taylor series, Runge-Kutta)
2. Multistep methods (Predictor-Corrector, Adams methods)

Both types of methods are widely used, with multistep methods probably being the more widely used. We will first study multistep methods, returning in 6.10 to the single-step methods.

PLAN OF STUDY

6.3 Simplified general theory
6.4, 6.5, 6.6 Low order examples
6.7 Modern higher order methods
6.8 General theory
6.9 Stiff equations & method of lines
6.10 Single step methods
6.11 Boundary value problems
WHAT IS A MULTISTEP METHOD?

As before, let the node points be given by

    x_n = x_0 + nh,    n = 0, 1, 2, ...

with corresponding numerical solutions y_0, y_1, y_2, ....

General form of a multistep method:

    y_{n+1} = \sum_{j=0}^{p} a_j y_{n-j} + h \sum_{j=-1}^{p} b_j f(x_{n-j}, y_{n-j}),    n ≥ p

In this, p ≥ 0 is a given integer, and {a_j} and {b_j} are given coefficients. This is called a (p+1)-step method.
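When b_{-1} = 0 the method is explicit and the general form can be evaluated directly. A minimal sketch in Python (the function name and the convention that a[j], b[j] hold the coefficients for j = 0..p are assumptions of this illustration, not from the notes):

```python
# One step of an explicit (p+1)-step method:
#   y_{n+1} = sum_{j=0}^{p} a_j y_{n-j} + h sum_{j=0}^{p} b_j f(x_{n-j}, y_{n-j})
# i.e. the general form with b_{-1} = 0, so no implicit equation must be solved.
def explicit_multistep_step(a, b, f, x, y, n, h):
    p = len(a) - 1
    total = 0.0
    for j in range(p + 1):
        total += a[j] * y[n - j] + h * b[j] * f(x[n - j], y[n - j])
    return total  # this is y_{n+1}
```

For example, the midpoint rule below corresponds to a = [0, 1], b = [2, 0].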
EXAMPLES

(1) The midpoint rule:

    y_{n+1} = y_{n-1} + 2h f(x_n, y_n),    n = 1, 2, 3, ...

This is a two-step method. Note that y_0 = Y_0 in general; but y_1 must be obtained by some other means. Then y_2 is obtained from y_0 and y_1; and similarly for y_3, etc. In this case, p = 1.

(2) The trapezoidal rule:

    y_{n+1} = y_n + \frac{h}{2} [f(x_n, y_n) + f(x_{n+1}, y_{n+1})],    n ≥ 0

This is a single-step method. It is also our first example of an implicit method, since y_{n+1} appears on both sides; many of the most desirable formulas are of this type. In this case, p = 0.
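Because the trapezoidal rule is implicit, each step requires solving an equation for y_{n+1}. One common approach (a sketch here, not prescribed by these notes) is fixed-point iteration with the Euler value as initial guess, which converges when h is small enough that (h/2)|∂f/∂y| < 1:

```python
import math

# One trapezoidal-rule step, solving the implicit equation
#   y_{n+1} = y_n + (h/2) [f(x_n, y_n) + f(x_{n+1}, y_{n+1})]
# by fixed-point iteration; assumes h small enough for convergence.
def trapezoid_step(f, xn, yn, h, iters=10):
    guess = yn + h * f(xn, yn)  # Euler predictor as initial guess
    for _ in range(iters):
        guess = yn + 0.5 * h * (f(xn, yn) + f(xn + h, guess))
    return guess

# Example: y' = -y, y(0) = 1, one step of size h = 0.1;
# the exact trapezoidal value is (1 - h/2)/(1 + h/2), close to exp(-0.1).
y1 = trapezoid_step(lambda x, y: -y, 0.0, 1.0, 0.1)
print(y1, math.exp(-0.1))
```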
PROBLEMS

How do we derive multistep methods?
What is the truncation error?
Does the method converge? If so, how fast?
Is the method stable?
Is there an asymptotic error formula?
TRUNCATION ERROR

For any differentiable function Y(x), define the truncation error in going from x_n to x_{n+1} by

    T_n(Y) = Y(x_{n+1}) - [ \sum_{j=0}^{p} a_j Y(x_{n-j}) + h \sum_{j=-1}^{p} b_j Y'(x_{n-j}) ]

By how much does the numerical formula fail to predict Y(x_{n+1}) from the preceding values?

For the trapezoidal rule, we can show

    T_n(Y) = -\frac{h^3}{12} Y'''(ξ_n),    x_n ≤ ξ_n ≤ x_{n+1}

For the midpoint rule, we can show

    T_n(Y) = \frac{h^3}{3} Y'''(ξ_n),    x_{n-1} ≤ ξ_n ≤ x_{n+1}
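The midpoint-rule truncation error can be checked numerically. A sketch using Y(x) = e^x (a test function chosen here, not from the notes), for which Y' = Y''' = e^x:

```python
import numpy as np

# Verify the midpoint-rule truncation error
#   T_n(Y) = Y(x_{n+1}) - Y(x_{n-1}) - 2h Y'(x_n) ~ (h^3/3) Y'''(x_n)
# for Y(x) = exp(x), where Y'(x) = Y'''(x) = exp(x).
def midpoint_truncation(h, xn=0.5):
    Y = np.exp
    return Y(xn + h) - Y(xn - h) - 2.0 * h * Y(xn)  # Y' = Y here

h = 1e-2
T = midpoint_truncation(h)
predicted = (h**3 / 3.0) * np.exp(0.5)
print(T, predicted)  # the two agree to leading order in h
```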
CONSISTENCY

Introduce

    τ_n(Y) = \frac{1}{h} T_n(Y),    τ(h; Y) = \max_{p ≤ n ≤ N-1} |τ_n(Y)|

where the node points in [x_0, b] are

    x_0 < x_1 < ... < x_N ≤ b

We say the multistep method is consistent if, for every function Y(x) that is continuously differentiable on [x_0, b],

    τ(h; Y) → 0    as h → 0

The method has order m if

    τ(h; Y) = O(h^m)

for all such functions Y(x).

When is a method consistent? When is it of order m?
EXAMPLE

Consider the midpoint method

    y_{n+1} = y_{n-1} + 2h f(x_n, y_n),    n = 1, 2, 3, ...

In this case,

    T_n(Y) = Y(x_{n+1}) - Y(x_{n-1}) - 2h Y'(x_n)

    τ_n(Y) = 2 [ \frac{Y(x_{n+1}) - Y(x_{n-1})}{2h} - Y'(x_n) ]

Thus consistency in this case says that

    \frac{Y(x_{n+1}) - Y(x_{n-1})}{2h} → Y'(x_n)

which seems a reasonable request in solving a differential equation.
THEOREM

(a) In order that the method be consistent, it is necessary and sufficient that

    \sum_{j=0}^{p} a_j = 1

    -\sum_{j=0}^{p} j a_j + \sum_{j=-1}^{p} b_j = 1

(b) In order that τ(h; Y) = O(h^m) for some m ≥ 1, it is necessary and sufficient that the method be consistent and that

    \sum_{j=0}^{p} (-j)^i a_j + i \sum_{j=-1}^{p} (-j)^{i-1} b_j = 1

for i = 2, ..., m.
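The conditions in this theorem are easy to test mechanically. A sketch in Python (the indexing convention that b[0] holds the implicit coefficient b_{-1}, b[1] holds b_0, and so on, is an assumption of this illustration):

```python
import numpy as np

# Check the consistency/order conditions:
#   sum a_j = 1   and   sum (-j)^i a_j + i sum (-j)^(i-1) b_j = 1, i = 1..m
# (i = 1 reproduces the second consistency condition).
# a[j] holds a_j for j = 0..p; b[k] holds b_{k-1}, i.e. b[0] is b_{-1}.
def order_conditions(a, b, m):
    p = len(a) - 1
    ja = np.arange(0, p + 1)    # j-values for the a_j
    jb = np.arange(-1, p + 1)   # j-values for the b_j
    ok = [abs(np.sum(a) - 1.0) < 1e-12]
    for i in range(1, m + 1):
        lhs = np.sum((-ja) ** i * a) + i * np.sum((-jb) ** (i - 1) * b)
        ok.append(abs(lhs - 1.0) < 1e-12)
    return all(ok)
```

For the midpoint rule (a = [0, 1], b = [0, 2, 0]) this confirms order exactly 2; the trapezoidal rule (a = [1], b = [1/2, 1/2]) also has order 2.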
In the case that τ(h; Y) = O(h^m),

    T_n(Y) = \frac{c_{m+1}}{(m+1)!} h^{m+1} Y^{(m+1)}(x_n) + O(h^{m+2})

with

    c_{m+1} = 1 - [ \sum_{j=0}^{p} (-j)^{m+1} a_j + (m+1) \sum_{j=-1}^{p} (-j)^m b_j ]

In addition,

    τ(h; Y) ≤ \frac{|c_{m+1}|}{(m+1)!} h^m ‖Y^{(m+1)}‖_∞ + O(h^{m+1})
DERIVATION OF TRUNCATION ERROR

Recall the formula

    T_n(Y) = Y(x_{n+1}) - [ \sum_{j=0}^{p} a_j Y(x_{n-j}) + h \sum_{j=-1}^{p} b_j Y'(x_{n-j}) ]

We expand Y(x) and Y'(x) about x_n with x = x_{n+1}, x_{n-1}, ..., x_{n-p}:

    Y(x) = Y(x_n) + (x - x_n) Y'(x_n) + ... + \frac{(x - x_n)^m}{m!} Y^{(m)}(x_n) + \frac{(x - x_n)^{m+1}}{(m+1)!} Y^{(m+1)}(x_n) + ...
    Y'(x) = Y'(x_n) + (x - x_n) Y''(x_n) + ... + \frac{(x - x_n)^{m-1}}{(m-1)!} Y^{(m)}(x_n) + \frac{(x - x_n)^m}{m!} Y^{(m+1)}(x_n) + ...

We substitute these into the formula for the truncation error, rearrange the terms, and obtain the formula

    T_n(Y) = \sum_{i=0}^{m+1} \frac{c_i}{i!} h^i Y^{(i)}(x_n) + O(h^{m+2})

with

    c_0 = 1 - \sum_{j=0}^{p} a_j

    c_1 = 1 + \sum_{j=0}^{p} j a_j - \sum_{j=-1}^{p} b_j

    c_i = 1 - [ \sum_{j=0}^{p} (-j)^i a_j + i \sum_{j=-1}^{p} (-j)^{i-1} b_j ],    i ≥ 2
For consistency, we want to look at

    τ_n(Y) = \frac{1}{h} T_n(Y) = \frac{c_0}{h} Y(x_n) + c_1 Y'(x_n) + \frac{c_2}{2} h Y''(x_n) + ...

and we want it to go to zero as h goes to zero. This requires c_0 = 0 and c_1 = 0. These are exactly the equations asserted earlier. Consistency makes the truncation error exactly zero whenever Y(x) is a linear function.

To have τ_n(Y) = O(h^m), we must also have

    c_2 = ... = c_m = 0

Then

    T_n(Y) = \frac{c_{m+1}}{(m+1)!} h^{m+1} Y^{(m+1)}(x_n) + O(h^{m+2})
CONVERGENCE

The general multistep method for solving

    y' = f(x, y),    x ≥ x_0,    y(x_0) = Y_0

is given by

    y_{n+1} = \sum_{j=0}^{p} a_j y_{n-j} + h \sum_{j=-1}^{p} b_j f(x_{n-j}, y_{n-j})    (1)

with y_0 = Y_0. The values y_1, ..., y_p must be calculated by other means, sometimes by using a single-step method.

To talk about the accuracy of the method (1), we must also consider the accuracy in the initial values y_1, ..., y_p; and these will depend on the stepsize h, since

    y_i = y_h(x_i) ≈ Y(x_0 + ih),    i = 0, 1, ..., p

Introduce the maximum of the initial errors,

    η(h; Y) = \max_{i=0,1,...,p} |Y(x_0 + ih) - y_h(x_0 + ih)|
Also recall the quantity

    τ(h; Y) = \frac{1}{h} \max_{p ≤ n ≤ N-1} |T_n(Y)|

where x_0, ..., x_N are the node points on [x_0, b].

THEOREM

Assume:
(a) The multistep method is consistent;
(b) All a_j ≥ 0.
Then for all sufficiently small values of h, say 0 < h ≤ h_0 for some choice of h_0, the multistep method has a solution y_{n+1} ≡ y_h(x_{n+1}) at each node point. In addition, we have

    \max_{x_0 ≤ x_j ≤ b} |Y(x_j) - y_h(x_j)| ≤ c_1 η(h; Y) + c_2 τ(h; Y)

with suitably chosen constants c_1, c_2, which depend on the function f(x, y) but not on the stepsize h.
EXAMPLE. Recall the midpoint rule

    y_{n+1} = y_{n-1} + 2h f(x_n, y_n),    n = 1, 2, 3, ...

in which p = 1, a_0 = 0, a_1 = 1. In this case,

    η(h; Y) = max{ |Y(x_0) - y_h(x_0)|, |Y(x_1) - y_h(x_1)| }

From earlier,

    T_n(Y) = \frac{h^3}{3} Y'''(ξ_n),    x_{n-1} ≤ ξ_n ≤ x_{n+1}

Then

    τ(h; Y) ≤ \frac{h^2}{3} ‖Y'''‖_∞

For the error,

    \max_{x_0 ≤ x_j ≤ b} |Y(x_j) - y_h(x_j)| ≤ c_1 η(h; Y) + c_2 \frac{h^2}{3} ‖Y'''‖_∞

We usually have y_0 = Y_0; and therefore, we should choose y_1 with an accuracy of

    Y(x_1) - y_h(x_1) = O(h^2)
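The predicted O(h^2) convergence can be observed empirically. A sketch on the test problem y' = y, y(0) = 1 (exact solution Y(x) = e^x; this problem and the Euler start are choices made here for illustration — the Euler step supplies y_1 with the required O(h^2) accuracy):

```python
import numpy as np

# Midpoint rule on y' = y, y(0) = 1, over [0, 1], with y_1 from one Euler step.
def midpoint_solve(f, x0, y0, b, h):
    N = int(round((b - x0) / h))
    x = x0 + h * np.arange(N + 1)
    y = np.zeros(N + 1)
    y[0] = y0
    y[1] = y0 + h * f(x[0], y0)  # Euler start: O(h^2) error in y_1
    for n in range(1, N):
        y[n + 1] = y[n - 1] + 2.0 * h * f(x[n], y[n])
    return x, y

f = lambda x, y: y
errs = []
for h in [0.1, 0.05, 0.025]:
    x, y = midpoint_solve(f, 0.0, 1.0, 1.0, h)
    errs.append(np.max(np.abs(np.exp(x) - y)))
ratios = [errs[i] / errs[i + 1] for i in range(2)]
print(errs)    # maximum errors shrink as h is halved
print(ratios)  # each ratio should approach 4, consistent with O(h^2)
```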
THE PROOF

Recall that

    T_n(Y) = Y(x_{n+1}) - [ \sum_{j=0}^{p} a_j Y(x_{n-j}) + h \sum_{j=-1}^{p} b_j Y'(x_{n-j}) ],    τ_n(Y) = \frac{1}{h} T_n(Y)

Then, using Y'(x) = f(x, Y(x)), we can write

    Y(x_{n+1}) = \sum_{j=0}^{p} a_j Y(x_{n-j}) + h \sum_{j=-1}^{p} b_j f(x_{n-j}, Y(x_{n-j})) + h τ_n(Y)
The multistep method is

    y_{n+1} = \sum_{j=0}^{p} a_j y_{n-j} + h \sum_{j=-1}^{p} b_j f(x_{n-j}, y_{n-j})

We subtract these to get

    e_{n+1} = \sum_{j=0}^{p} a_j e_{n-j} + h \sum_{j=-1}^{p} b_j [ f(x_{n-j}, Y(x_{n-j})) - f(x_{n-j}, y_{n-j}) ] + h τ_n(Y)

with e_j ≡ Y(x_j) - y_h(x_j). Recall the Lipschitz condition

    |f(x, y) - f(x, z)| ≤ K |y - z|

Then

    |e_{n+1}| ≤ \sum_{j=0}^{p} a_j |e_{n-j}| + hK \sum_{j=-1}^{p} |b_j| |e_{n-j}| + h |τ_n(Y)|
Introduce

    f_n = \max_{i=0,1,...,n} |e_i|

the maximum of the errors up through x_n. Then

    |e_{n+1}| ≤ f_n \sum_{j=0}^{p} a_j + hK f_{n+1} \sum_{j=-1}^{p} |b_j| + h τ(h; Y)

Using consistency, \sum_{j=0}^{p} a_j = 1. Introduce

    c = K \sum_{j=-1}^{p} |b_j|

Then we can write

    |e_{n+1}| ≤ f_n + hc f_{n+1} + h τ(h; Y)
Clearly, the right side also bounds f_n alone; and thus

    f_{n+1} ≤ f_n + hc f_{n+1} + h τ(h; Y)

Also,

    f_p = η(h; Y)

We can combine these results, together with the type of analysis used for Euler's method, to get

    f_n ≤ e^{2c(x_n - x_0)} η(h; Y) + \frac{e^{2c(x_n - x_0)} - 1}{c} τ(h; Y)

for x_0 ≤ x_n ≤ b. We also require 2hc ≤ 1; but this can be relaxed for multistep methods which are explicit.

Not all multistep methods can be analyzed as above, but this does include many of the most widely used methods.
STABILITY

There is a similar result for stability of multistep solutions

    y_{n+1} = \sum_{j=0}^{p} a_j y_{n-j} + h \sum_{j=-1}^{p} b_j f(x_{n-j}, y_{n-j})

Suppose we perturb the initial values y_0, ..., y_p to z_0, ..., z_p, with

    |y_i - z_i| ≤ ε,    i = 0, 1, ..., p

Consider the numerical solution

    z_{n+1} = \sum_{j=0}^{p} a_j z_{n-j} + h \sum_{j=-1}^{p} b_j f(x_{n-j}, z_{n-j})

Then by methods similar to the above, we can show (with the same assumptions) that

    |y_n - z_n| ≤ ε e^{2c(x_n - x_0)}
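This stability bound can be observed numerically. A sketch for the midpoint rule on y' = y over [0, 1] (the test problem and perturbation size ε = 10^{-6} are choices made here for illustration): perturbing both starting values by ε changes the computed solution by only a modest multiple of ε.

```python
import numpy as np

# Run the midpoint rule from given starting values y_0, y_1.
def midpoint_path(f, x0, y0, y1, b, h):
    N = int(round((b - x0) / h))
    x = x0 + h * np.arange(N + 1)
    y = np.zeros(N + 1)
    y[0], y[1] = y0, y1
    for n in range(1, N):
        y[n + 1] = y[n - 1] + 2.0 * h * f(x[n], y[n])
    return y

f = lambda x, y: y
h, eps = 0.01, 1e-6
y = midpoint_path(f, 0.0, 1.0, np.exp(h), 1.0, h)                # unperturbed
z = midpoint_path(f, 0.0, 1.0 + eps, np.exp(h) + eps, 1.0, h)    # perturbed by eps
diff = np.max(np.abs(y - z))
print(diff)  # remains a small multiple of eps, as the bound predicts
```

Here c = K \sum |b_j| = 2, so the theorem's bound is ε e^{4} ≈ 55ε over [0, 1]; the observed difference is well inside it.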
OTHER ANALYSES

We can also do asymptotic error analyses, rounding error analyses, and analyses involving systems of first order ODEs. Rather than doing a general theory, we do special cases in later sections.