Jim Lambers
MAT 772
Fall Semester 2010-11
Lecture 20 Notes

These notes correspond to Sections 1.3, 1.4 and 1.5 in the text.

Consistency and Convergence

We have learned that the numerical solution obtained from Euler's method,
$$y_{n+1} = y_n + h f(t_n, y_n), \quad t_n = t_0 + nh,$$
converges to the exact solution $y(t)$ of the initial value problem
$$y' = f(t, y), \quad y(t_0) = y_0,$$
as $h \to 0$. We now analyze the convergence of a general one-step method of the form
$$y_{n+1} = y_n + h \Phi(t_n, y_n, h),$$
for some continuous function $\Phi(t, y, h)$.

We define the local truncation error of this one-step method by
$$T_n(h) = \frac{y(t_{n+1}) - y(t_n)}{h} - \Phi(t_n, y(t_n), h).$$
That is, the local truncation error is the result of substituting the exact solution into the approximation of the ODE by the numerical method. As $h \to 0$ and $n \to \infty$, in such a way that $t_0 + nh = t \in [t_0, T]$, we obtain
$$T_n(h) \to y'(t) - \Phi(t, y(t), 0).$$
We therefore say that the one-step method is consistent if
$$\Phi(t, y, 0) = f(t, y).$$
A consistent one-step method is one that converges to the ODE as $h \to 0$.

We then say that a one-step method is stable if $\Phi(t, y, h)$ is Lipschitz continuous in $y$; that is, for some constant $L_\Phi$,
$$|\Phi(t, u, h) - \Phi(t, v, h)| \le L_\Phi |u - v|, \quad t \in [t_0, T], \quad u, v \in \mathbb{R}, \quad h \in [0, h_0].$$
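The definitions above can be checked numerically. The following sketch (the test problem $y' = y$ and the step sizes are our own illustrative choices, not from the notes) computes the local truncation error of Euler's method, for which $\Phi(t, y, h) = f(t, y)$, and shows it tending to $y'(t) - f(t, y(t)) = 0$ as $h \to 0$:

```python
import math

# Local truncation error T_n(h) = (y(t+h) - y(t))/h - Phi(t, y(t), h)
# for Euler's method, where Phi(t, y, h) = f(t, y).
# Test problem (our choice): y' = y, y(0) = 1, exact solution y(t) = e^t.

def f(t, y):
    return y

def y_exact(t):
    return math.exp(t)

def truncation_error(t, h):
    # Evaluate T_n(h) at the exact solution
    return (y_exact(t + h) - y_exact(t)) / h - f(t, y_exact(t))

t = 0.5
for h in [0.1, 0.05, 0.025]:
    print(f"h = {h:6.3f}, T_n(h) = {truncation_error(t, h):.6f}")
```

Halving $h$ roughly halves $T_n(h)$, a numerical hint of the first-order accuracy of Euler's method established below.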
We now show that a consistent and stable one-step method is convergent. Using the same approach and notation as in the convergence proof of Euler's method, and the fact that the method is stable, we obtain the following bound for the global error $e_n = y(t_n) - y_n$:
$$|e_n| \le \left( \frac{e^{L_\Phi (T - t_0)} - 1}{L_\Phi} \right) \max_{0 \le m \le n-1} |T_m(h)|.$$
Because the method is consistent, we have
$$\lim_{h \to 0} \max_{0 \le n \le T/h} |T_n(h)| = 0.$$
It follows that as $h \to 0$ and $n \to \infty$ in such a way that $t_0 + nh = t$, we have
$$\lim_{n \to \infty} e_n = 0,$$
and therefore the method is convergent.

In the case of Euler's method, we have
$$\Phi(t, y, h) = f(t, y), \quad T_n(h) = \frac{h}{2} y''(\tau), \quad \tau \in (t_0, T).$$
Therefore, there exists a constant $K$ such that
$$|T_n(h)| \le Kh, \quad 0 < h \le h_0,$$
for some sufficiently small $h_0$. We say that Euler's method is first-order accurate. More generally, we say that a one-step method has order of accuracy $p$ if, for any sufficiently smooth solution $y(t)$, there exist constants $K$ and $h_0$ such that
$$|T_n(h)| \le K h^p, \quad 0 < h \le h_0.$$
We now consider an example of a higher-order accurate method.

An Implicit One-Step Method

Suppose that we approximate the equation
$$y(t_{n+1}) = y(t_n) + \int_{t_n}^{t_{n+1}} y'(s)\, ds$$
by applying the Trapezoidal Rule to the integral. This yields a one-step method
$$y_{n+1} = y_n + \frac{h}{2}\left[ f(t_n, y_n) + f(t_{n+1}, y_{n+1}) \right],$$
known as the trapezoidal method. It follows from the error in the Trapezoidal Rule that
$$T_n(h) = \frac{y(t_{n+1}) - y(t_n)}{h} - \frac{1}{2}\left[ f(t_n, y(t_n)) + f(t_{n+1}, y(t_{n+1})) \right] = -\frac{1}{12} h^2 y'''(\tau_n), \quad \tau_n \in (t_n, t_{n+1}).$$
Therefore, the trapezoidal method is second-order accurate.

To show convergence, we must establish stability by finding a suitable Lipschitz constant $L_\Phi$ for the function $\Phi(t, y, h)$, which for the trapezoidal method is defined implicitly by
$$\Phi(t, y, h) = \frac{1}{2}\left[ f(t, y) + f(t + h, y + h \Phi(t, y, h)) \right],$$
assuming that $L_f$ is a Lipschitz constant for $f(t, y)$ in $y$. We have
$$|\Phi(t, u, h) - \Phi(t, v, h)| = \frac{1}{2} \left| f(t, u) + f(t + h, u + h \Phi(t, u, h)) - f(t, v) - f(t + h, v + h \Phi(t, v, h)) \right| \le L_f |u - v| + \frac{h}{2} L_f |\Phi(t, u, h) - \Phi(t, v, h)|.$$
Therefore
$$\left( 1 - \frac{h L_f}{2} \right) |\Phi(t, u, h) - \Phi(t, v, h)| \le L_f |u - v|,$$
and therefore
$$L_\Phi \le \frac{L_f}{1 - \frac{h L_f}{2}},$$
provided that $h L_f / 2 < 1$. We conclude that for $h$ sufficiently small, the trapezoidal method is stable, and therefore convergent, with $O(h^2)$ global error.

The trapezoidal method contrasts with Euler's method because it is an implicit method, due to the evaluation of $f(t, y)$ at $y_{n+1}$. It follows that it is generally necessary to solve a nonlinear equation to obtain $y_{n+1}$ from $y_n$. This additional computational effort is offset by the fact that implicit methods are generally more stable than explicit methods such as Euler's method. Another example of an implicit method is the backward Euler method
$$y_{n+1} = y_n + h f(t_{n+1}, y_{n+1}).$$
Like Euler's method, the backward Euler method is first-order accurate.

Runge-Kutta Methods

We have seen that Euler's method is first-order accurate. We would like to use Taylor series to design methods that have a higher order of accuracy. First, however, we must get around the fact that an analysis of the global error, as was carried out for Euler's method, is quite cumbersome.
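Before moving on, the implicit methods above can be sketched in code. In this sketch (the test problem $y' = -y$, the Euler predictor as initial guess, and the tolerance are our own choices), the nonlinear equation for $y_{n+1}$ is solved by simple fixed-point iteration, which converges when $h$ is small relative to the Lipschitz constant of $f$:

```python
import math

# Sketch (our own, not from the notes): one step of the trapezoidal method
# and the backward Euler method. Each step requires solving a nonlinear
# equation for y_{n+1}; here we use fixed-point iteration.

def trapezoidal_step(f, t, y, h, tol=1e-12, max_iter=100):
    # Solve y_new = y + (h/2) * (f(t, y) + f(t + h, y_new))
    fy = f(t, y)
    y_new = y + h * fy              # Euler predictor as initial guess
    for _ in range(max_iter):
        y_next = y + 0.5 * h * (fy + f(t + h, y_new))
        if abs(y_next - y_new) < tol:
            return y_next
        y_new = y_next
    return y_new

def backward_euler_step(f, t, y, h, tol=1e-12, max_iter=100):
    # Solve y_new = y + h * f(t + h, y_new)
    y_new = y + h * f(t, y)
    for _ in range(max_iter):
        y_next = y + h * f(t + h, y_new)
        if abs(y_next - y_new) < tol:
            return y_next
        y_new = y_next
    return y_new

# Example: y' = -y, y(0) = 1, exact solution y(t) = e^{-t}
f = lambda t, y: -y
h, y_trap, y_be = 0.1, 1.0, 1.0
for n in range(10):
    y_trap = trapezoidal_step(f, n * h, y_trap, h)
    y_be = backward_euler_step(f, n * h, y_be, h)
print("trapezoidal:", y_trap, "backward Euler:", y_be, "exact:", math.exp(-1))
```

For this linear $f$ the implicit equations could be solved exactly; fixed-point iteration is shown because it applies to general nonlinear $f$. At $t = 1$, the trapezoidal result is markedly closer to $e^{-1}$ than the backward Euler result, reflecting their second- and first-order accuracy.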
Instead, we will design new methods based on the criterion that their local truncation error, the error committed during a single time step, is higher-order in $h$. Using higher-order Taylor series directly to approximate $y(t_{n+1})$ is cumbersome, because it requires evaluating derivatives of $f$. Therefore, our approach will be to use evaluations of $f$ at carefully chosen values of its arguments, $t$ and $y$, in order to create an approximation that is just as accurate as a higher-order Taylor series expansion of $y(t + h)$. To find the right values of $t$ and $y$ at which to evaluate $f$, we need to take a Taylor expansion of $f$ evaluated at these (unknown) values, and then match the resulting numerical scheme to a Taylor series expansion of $y(t + h)$ around $t$. To that end, we state a generalization of Taylor's theorem to functions of two variables.

Theorem Let $f(t, y)$ be $(n + 1)$ times continuously differentiable on a convex set $D$, and let $(t_0, y_0) \in D$. Then, for every $(t, y) \in D$, there exist $\xi$ between $t_0$ and $t$, and $\mu$ between $y_0$ and $y$, such that
$$f(t, y) = P_n(t, y) + R_n(t, y),$$
where $P_n(t, y)$ is the $n$th Taylor polynomial of $f$ about $(t_0, y_0)$,
$$P_n(t, y) = f(t_0, y_0) + \left[ (t - t_0) \frac{\partial f}{\partial t}(t_0, y_0) + (y - y_0) \frac{\partial f}{\partial y}(t_0, y_0) \right] + \left[ \frac{(t - t_0)^2}{2} \frac{\partial^2 f}{\partial t^2}(t_0, y_0) + (t - t_0)(y - y_0) \frac{\partial^2 f}{\partial t \, \partial y}(t_0, y_0) + \frac{(y - y_0)^2}{2} \frac{\partial^2 f}{\partial y^2}(t_0, y_0) \right] + \cdots + \frac{1}{n!} \sum_{j=0}^{n} \binom{n}{j} (t - t_0)^{n-j} (y - y_0)^j \frac{\partial^n f}{\partial t^{n-j} \partial y^j}(t_0, y_0),$$
and $R_n(t, y)$ is the remainder term associated with $P_n(t, y)$,
$$R_n(t, y) = \frac{1}{(n+1)!} \sum_{j=0}^{n+1} \binom{n+1}{j} (t - t_0)^{n+1-j} (y - y_0)^j \frac{\partial^{n+1} f}{\partial t^{n+1-j} \partial y^j}(\xi, \mu).$$

We now illustrate our proposed approach in order to obtain a method that is second-order accurate; that is, its local truncation error is $O(h^2)$. This involves matching
$$y + h f(t, y) + \frac{h^2}{2} \frac{d}{dt}[f(t, y)] + \frac{h^3}{6} \frac{d^2}{dt^2}[f(\xi, y(\xi))]$$
to
$$y + h a_1 f(t + \alpha_1, y + \beta_1),$$
where $t \le \xi \le t + h$ and the parameters $a_1$, $\alpha_1$ and $\beta_1$ are to be determined. After simplifying by removing terms or factors that already match, we see that we only need to match
$$f(t, y) + \frac{h}{2} \frac{d}{dt}[f(t, y)] + \frac{h^2}{6} \frac{d^2}{dt^2}[f(t, y(t))]$$
with
$$a_1 f(t + \alpha_1, y + \beta_1),$$
at least up to terms of $O(h)$, so that the local truncation error will be $O(h^2)$. Applying the multivariable version of Taylor's theorem to $f$, we obtain
$$a_1 f(t + \alpha_1, y + \beta_1) = a_1 f(t, y) + a_1 \alpha_1 \frac{\partial f}{\partial t}(t, y) + a_1 \beta_1 \frac{\partial f}{\partial y}(t, y) + a_1 \left[ \frac{\alpha_1^2}{2} \frac{\partial^2 f}{\partial t^2}(\xi, \mu) + \alpha_1 \beta_1 \frac{\partial^2 f}{\partial t \, \partial y}(\xi, \mu) + \frac{\beta_1^2}{2} \frac{\partial^2 f}{\partial y^2}(\xi, \mu) \right],$$
where $\xi$ is between $t$ and $t + \alpha_1$ and $\mu$ is between $y$ and $y + \beta_1$. Meanwhile, computing the full derivatives with respect to $t$ in the Taylor expansion of the solution yields
$$f(t, y) + \frac{h}{2} \frac{\partial f}{\partial t}(t, y) + \frac{h}{2} \frac{\partial f}{\partial y}(t, y) f(t, y) + O(h^2).$$
Comparing terms yields the equations
$$a_1 = 1, \quad a_1 \alpha_1 = \frac{h}{2}, \quad a_1 \beta_1 = \frac{h}{2} f(t, y),$$
which have the solution
$$a_1 = 1, \quad \alpha_1 = \frac{h}{2}, \quad \beta_1 = \frac{h}{2} f(t, y).$$
The resulting numerical scheme is
$$y_{n+1} = y_n + h f\left( t_n + \frac{h}{2},\ y_n + \frac{h}{2} f(t_n, y_n) \right).$$
This scheme is known as the midpoint method, or the explicit midpoint method. Note that it evaluates $f$ at the midpoint of the intervals $[t_n, t_{n+1}]$ and $[y_n, y_{n+1}]$, where the midpoint in $y$ is approximated using Euler's method with time step $h/2$.

The midpoint method is the simplest example of a Runge-Kutta method, which is the name given to any of a class of time-stepping schemes that are derived by matching multivariable Taylor series expansions of $f(t, y)$ with terms in a Taylor series expansion of $y(t + h)$. Another often-used Runge-Kutta method is the modified Euler method
$$y_{n+1} = y_n + \frac{h}{2}\left[ f(t_n, y_n) + f(t_{n+1}, y_n + h f(t_n, y_n)) \right],$$
which resembles the Trapezoidal Rule from numerical integration, and is also second-order accurate.
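Both second-order methods can be exercised with a short sketch (the test problem $y' = -y^2$ and the step sizes are our own choices, not from the notes); halving $h$ should cut the error at a fixed time by roughly a factor of four:

```python
# Sketch (our own test problem): the explicit midpoint method and the
# modified Euler method, both second-order accurate.
# Test problem: y' = -y^2, y(0) = 1, with exact solution y(t) = 1/(1 + t).

def midpoint_solve(f, t0, y0, T, h):
    # y_{n+1} = y_n + h f(t_n + h/2, y_n + (h/2) f(t_n, y_n))
    t, y = t0, y0
    while t < T - 1e-12:
        y = y + h * f(t + h / 2, y + (h / 2) * f(t, y))
        t += h
    return y

def modified_euler_solve(f, t0, y0, T, h):
    # y_{n+1} = y_n + (h/2) [f(t_n, y_n) + f(t_{n+1}, y_n + h f(t_n, y_n))]
    t, y = t0, y0
    while t < T - 1e-12:
        y = y + (h / 2) * (f(t, y) + f(t + h, y + h * f(t, y)))
        t += h
    return y

f = lambda t, y: -y * y
exact = 0.5                     # y(1) = 1/(1 + 1)
for h in [0.1, 0.05]:
    err_mid = abs(midpoint_solve(f, 0.0, 1.0, 1.0, h) - exact)
    err_me = abs(modified_euler_solve(f, 0.0, 1.0, 1.0, h) - exact)
    print(f"h = {h}: midpoint error = {err_mid:.2e}, modified Euler error = {err_me:.2e}")
```

The roughly fourfold error reduction when $h$ is halved is the signature of $O(h^2)$ global error.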
However, the best-known Runge-Kutta method is the fourth-order Runge-Kutta method, which uses four evaluations of $f$ during each time step. The method proceeds as follows:
$$k_1 = h f(t_n, y_n),$$
$$k_2 = h f\left( t_n + \frac{h}{2},\ y_n + \frac{1}{2} k_1 \right),$$
$$k_3 = h f\left( t_n + \frac{h}{2},\ y_n + \frac{1}{2} k_2 \right),$$
$$k_4 = h f(t_{n+1}, y_n + k_3),$$
$$y_{n+1} = y_n + \frac{1}{6}(k_1 + 2 k_2 + 2 k_3 + k_4).$$
In a sense, this method is similar to Simpson's Rule from numerical integration, which is also fourth-order accurate, as values of $f$ at the midpoint in time are given four times as much weight as values at the endpoints $t_n$ and $t_{n+1}$.

Example We compare Euler's method with the fourth-order Runge-Kutta scheme on the initial value problem
$$y' = 2ty, \quad 0 < t \le 1, \quad y(0) = 1,$$
which has the exact solution $y(t) = e^{t^2}$. We use a time step of $h = 0.1$ for both methods. The computed solutions, and the exact solution, are shown in Figure 1.

It can be seen that the fourth-order Runge-Kutta method is far more accurate than Euler's method, which is first-order accurate. In fact, the solution computed using the fourth-order Runge-Kutta method is visually indistinguishable from the exact solution. At the final time $T = 1$, the relative error in the solution computed using Euler's method is 0.038, while the relative error in the solution computed using the fourth-order Runge-Kutta method is $4.4 \times 10^{-6}$.
Figure 1: Solutions of $y' = 2ty$, $y(0) = 1$ on $[0, 1]$, computed using Euler's method and the fourth-order Runge-Kutta method.
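The comparison in the example can be reproduced with a short script that applies Euler's method and the fourth-order Runge-Kutta formulas above to the example problem (a sketch; the printed errors illustrate the gap in accuracy between the two methods):

```python
import math

# Example problem: y' = 2ty, y(0) = 1 on [0, 1], exact solution y(t) = e^{t^2}.
# Compare Euler's method with the fourth-order Runge-Kutta method at h = 0.1.

def euler_solve(f, y0, h, n_steps):
    y = y0
    for n in range(n_steps):
        y += h * f(n * h, y)
    return y

def rk4_solve(f, y0, h, n_steps):
    y = y0
    for n in range(n_steps):
        t = n * h
        k1 = h * f(t, y)
        k2 = h * f(t + h / 2, y + k1 / 2)
        k3 = h * f(t + h / 2, y + k2 / 2)
        k4 = h * f(t + h, y + k3)
        y += (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return y

f = lambda t, y: 2 * t * y
h, n_steps = 0.1, 10
exact = math.exp(1.0)           # y(1) = e^{1^2} = e
err_euler = abs(euler_solve(f, 1.0, h, n_steps) - exact) / exact
err_rk4 = abs(rk4_solve(f, 1.0, h, n_steps) - exact) / exact
print(f"Euler relative error at T = 1: {err_euler:.3e}")
print(f"RK4 relative error at T = 1:   {err_rk4:.3e}")
```

The Runge-Kutta error at $T = 1$ is smaller than the Euler error by several orders of magnitude, consistent with the example.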