TMA4215: Lecture notes on numerical solution of ordinary differential equations.


Anne Kværnø, November 2016

Contents

1 Euler's method
2 Some background on ODEs
3 Numerical solution of ODEs
  3.1 Some examples of one-step methods
  3.2 Some examples of linear multistep methods
4 Runge-Kutta methods
  4.1 Order conditions for Runge-Kutta methods
  4.2 Error control and stepsize selection
  4.3 Dense output
  4.4 Collocation methods
5 Stiff equations
  5.1 Linear stability
  5.2 Nonlinear stability
  5.3 Order reduction

6 Linear multistep methods
  6.1 Consistency and order
  6.2 Linear difference equations
  6.3 Zero-stability and convergence
  6.4 Adams-Bashforth-Moulton methods
  6.5 Predictor-corrector methods

1 Euler's method

Let us start this introduction to the numerical solution of ordinary differential equations (ODEs) with something familiar. Given a scalar (one equation only) ODE

    y' = f(t, y),   t_0 <= t <= t_end,   y(t_0) = y_0,   (1)

in which the function f, the integration interval [t_0, t_end] and the initial value y_0 are assumed to be given. The solution of this initial value problem (IVP) is a function y(t) on the interval [t_0, t_end].

Example 1.1. The ODE/IVP

    y' = -2ty,   0 <= t <= 1,   y(0) = 1,

has as solution the function y(t) = e^{-t^2}.

But in many practical situations, it is not possible to express the solution y(t) in closed form, even if a solution exists. In these cases, a numerical algorithm can give an approximation to the exact solution.

Let us start with Euler's method, which should be known from some calculus class. Divide the interval [t_0, t_end] into N_step equal subintervals, each of size h = (t_end - t_0)/N_step, and let t_n = t_0 + nh. Euler's method can be derived by several means. One possibility is to use the first few terms of the Taylor expansion of the exact solution, which is given by

    y(t_0 + h) = y(t_0) + h y'(t_0) + (h^2/2) y''(t_0) + ... + (h^p/p!) y^{(p)}(t_0) + (h^{p+1}/(p+1)!) y^{(p+1)}(ξ),   (2)

where ξ is somewhere between t_0 and t_0 + h. The integer p is a number of our own choice, but we have to require y to be sufficiently differentiable, in this case that y^{(p+1)} exists and is continuous. If h is small, we may assume that the solution is completely dominated by the first two terms, thus

    y(t_0 + h) ≈ y(t_0) + h y'(t_0) = y_0 + h f(t_0, y_0),

and we call this approximate solution y_1. Starting from the point t_1 = t_0 + h and y_1 we can repeat the process. We have now developed Euler's method, given by

    y_{n+1} = y_n + h f(t_n, y_n),   n = 0, 1, ..., N_step - 1,

resulting in approximations y_n ≈ y(t_n).

Example 1.2. Euler's method with h = 0.1 applied to the ODE of Example 1.1 gives

    t_n    y_n
    0.0    1.0000
    0.1    1.0000
    0.2    0.9800
    0.3    0.9408
    0.4    0.8844
    0.5    0.8136
    0.6    0.7322
    0.7    0.6444
    0.8    0.5542
    0.9    0.4655
    1.0    0.3817

In this case we know the exact solution, y(1.0) = e^{-1} = 0.3679, and the error at the endpoint is e_10 = y(1.0) - y_10 = -0.0138. If we repeat this experiment (write a MATLAB program to do so) with different stepsizes and measure the error at the end of the interval, we get a table of h versus the endpoint error e_{N_step} = y(1.0) - y_{N_step} [the numerical values of this table were lost]. From this example, it might look like the error at the endpoint satisfies |e_{N_step}| ≈ C·h, where h = (t_end - t_0)/N_step. But is this true for all problems, and if yes, can we prove it? To do so, we need to see what kind of errors we have and how they behave. This is illustrated in Figure 1. In each step an error is made, and these errors are then propagated to the next steps and accumulate at the endpoint.

Definition 1.1. The local truncation error d_{n+1} is the error made in one step when starting from the exact solution y(t_n). The global error is the difference between the exact and the numerical solution at the point t_n, thus e_n = y(t_n) - y_n.

The local truncation error of Euler's method is

    d_{n+1} = y(t_n + h) - y(t_n) - h f(t_n, y(t_n)) = (h^2/2) y''(ξ),   ξ ∈ (t_n, t_{n+1}).

This follows from the Taylor expansion of y(t_n + h) around t_n with p = 1. To see how the global error propagates from one step to the next, the trick is: we have

    y(t_n + h) = y(t_n) + h f(t_n, y(t_n)) + d_{n+1},
    y_{n+1}    = y_n + h f(t_n, y_n).

Take the difference of these two, and get

    e_{n+1} = e_n + h ( f(t_n, y(t_n)) - f(t_n, y_n) ) + d_{n+1} = (1 + h f_y(t_n, v)) e_n + d_{n+1},   (3)

[Figure 1: Lady Windermere's Fan. The local errors d_{n+1} committed in each step are transported along neighbouring solution curves and accumulate into the global error e_{N_step} at t_end.]

where v is somewhere between y(t_n) and y_n. We have here used the mean value theorem (Theorem 1.8 in Burden and Faires) on f(t_n, y(t_n)) - f(t_n, y_n). This is about as far as we get with exact calculations, since ξ in d_{n+1} as well as v in f_y are unknown, and will also change from one step to the next. So we will look for an upper bound of the global error. We will first assume upper bounds for our unknowns, that is, we assume there exist positive constants D and L so that (1/2)|y''(t)| <= D for all t ∈ (t_0, t_end) and |f_y(t, y)| <= L for all t ∈ [t_0, t_end] and for all y. Taking the absolute value of both sides of (3) and using the triangle inequality gives

    |e_{n+1}| <= (1 + hL) |e_n| + D h^2.

Since e_0 = 0 (there is no error at the initial point) we can use this formula recursively to get an upper bound for the error at the endpoint:

    |e_1| <= D h^2,
    |e_2| <= (1 + hL) D h^2 + D h^2,
    ...
    |e_{N_step}| <= Σ_{i=0}^{N_step - 1} (1 + hL)^i D h^2 = ((1 + hL)^{N_step} - 1)/(hL) · D h^2.

Using the fact that 1 + hL <= e^{hL} (why?) and h·N_step = t_end - t_0, we finally reach the conclusion

    |e_{N_step}| <= ((e^{hL})^{N_step} - 1) (D/L) h = (e^{L(t_end - t_0)} - 1) (D/L) h = C h.

The constant C = (e^{L(t_end - t_0)} - 1) D/L depends only on the problem, and we have proved convergence: y(t_end) - y_{N_step} → 0 when h → 0 (or N_step → ∞).

2 Some background on ODEs.

In this section some useful notation for ordinary differential equations will be presented. We will also give existence and uniqueness results, but without proofs. A system of m first order ordinary differential equations is given by

    y' = f(t, y)   (4)

or, written out, as

    y_1' = f_1(t, y_1, ..., y_m),
    y_2' = f_2(t, y_1, ..., y_m),
    ...
    y_m' = f_m(t, y_1, ..., y_m).

This is an initial value problem (IVP) if the solution is given at some point t_0, thus

    y_1(t_0) = y_{1,0},   y_2(t_0) = y_{2,0},   ...,   y_m(t_0) = y_{m,0}.

Example 2.1. The following is an example of the Lotka-Volterra equation:

    y_1' = y_1 - y_1 y_2,
    y_2' = y_1 y_2 - y_2.

An ODE is called autonomous if f is not a function of t, but only of y. The Lotka-Volterra equation is an example of an autonomous ODE. A nonautonomous system can be made autonomous by a simple trick: just add the equation

    y_{m+1}' = 1,   y_{m+1}(t_0) = t_0,

and replace t with y_{m+1}. Also higher order ODE/IVPs,

    u^{(m)} = f(t, u, u', ..., u^{(m-1)}),   u(t_0) = u_0,  u'(t_0) = u_0',  ...,  u^{(m-1)}(t_0) = u_0^{(m-1)},

where u^{(m)} = d^m u/dt^m, can be written as a system of first order equations, again by a simple trick: let y_1 = u, y_2 = u', ..., y_m = u^{(m-1)}, and we get the system

    y_1' = y_2,                      y_1(t_0) = u_0,
    y_2' = y_3,                      y_2(t_0) = u_0',
    ...
    y_{m-1}' = y_m,                  y_{m-1}(t_0) = u_0^{(m-2)},
    y_m' = f(t, y_1, y_2, ..., y_m), y_m(t_0) = u_0^{(m-1)}.

Example 2.2. Van der Pol's equation is given by

    u'' + µ(u^2 - 1) u' + u = 0.

Using y_1 = u and y_2 = u', this equation can be rewritten as

    y_1' = y_2,
    y_2' = µ(1 - y_1^2) y_2 - y_1.

This problem was first introduced by Van der Pol in 1926 in the study of an electronic oscillator.

In the rest of this course, we will assume that the following existence and uniqueness results hold:

Definition 2.1. A function f : R × R^m → R^m satisfies the Lipschitz condition with respect to y on a domain (a, b) × D, where D ⊂ R^m, if there exists a constant L so that

    ‖f(t, y) - f(t, ỹ)‖ <= L ‖y - ỹ‖,   for all t ∈ (a, b),  y, ỹ ∈ D.

The constant L is called the Lipschitz constant.

It is not hard to show that the function f satisfies the Lipschitz condition if ∂f_i/∂y_j, i, j = 1, ..., m, are continuous and bounded on the domain, and D is open and convex.

Theorem 2.2. Consider the initial value problem

    y' = f(t, y),   y(t_0) = y_0.   (5)

If
1. f(t, y) is continuous in (a, b) × D,
2. f(t, y) satisfies the Lipschitz condition with respect to y in (a, b) × D,
with given initial values t_0 ∈ (a, b) and y_0 ∈ D, then (5) has one and only one solution in (a, b) × D.

Before concluding this section, we will include another definition, which is useful when we later discuss stability:

Definition 2.3. The function f : R^m → R^m satisfies a one-sided Lipschitz condition with constant µ if

    ⟨f(y) - f(ỹ), y - ỹ⟩ <= µ ‖y - ỹ‖^2.

And the following theorem holds:

Theorem 2.4. If f satisfies the one-sided Lipschitz condition with constant µ, and y(t), ỹ(t) are both solutions of y' = f(y), then, for all t >= t_0,

    ‖y(t) - ỹ(t)‖ <= e^{µ(t - t_0)} ‖y(t_0) - ỹ(t_0)‖.

Proof. We have

    (d/dt) ‖y(t) - ỹ(t)‖^2 = 2 ⟨f(y(t)) - f(ỹ(t)), y(t) - ỹ(t)⟩ <= 2µ ‖y(t) - ỹ(t)‖^2.

Multiplying by e^{-2µ(t - t_0)} gives

    (d/dt) ( e^{-2µ(t - t_0)} ‖y(t) - ỹ(t)‖^2 ) = e^{-2µ(t - t_0)} ( (d/dt) ‖y(t) - ỹ(t)‖^2 - 2µ ‖y(t) - ỹ(t)‖^2 ) <= 0,

so e^{-2µ(t - t_0)} ‖y(t) - ỹ(t)‖^2 is a nonincreasing function, which proves the theorem.

Flow of the system. We will sometimes use the term flow of a system, φ_t, defined as

    φ_t(y_0) = φ_t y_0 = y(t)   if   y(0) = y_0,

where y(t) is the solution of the ODE. Thus you may think of φ_t as all possible solutions of the ODE.

3 Numerical solution of ODEs.

In this section we develop some simple methods for the solution of initial value problems. In both cases, let us assume that we somehow have found solutions y_l ≈ y(t_l) for l = 0, 1, ..., n, and we want to find an approximation y_{n+1} ≈ y(t_{n+1}), where t_{n+1} = t_n + h and h is the stepsize. Basically, there are two different classes of methods in practical use.

1. One-step methods. Only y_n is used to find the approximation y_{n+1}. One-step methods usually require more than one function evaluation per step. They can all be put in a general abstract form,

    y_{n+1} = y_n + h Φ(t_n, y_n; h).

2. Linear multistep methods: y_{n+1} is approximated from y_{n-k+1}, ..., y_n.
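Before deriving more methods, the convergence experiment suggested in Section 1, applying Euler's method to Example 1.1 and measuring the endpoint error for a sequence of stepsizes, can be carried out in a few lines. The notes suggest MATLAB; below is a rough Python equivalent (the function name euler and the rest of the naming are ours):

```python
import math

def euler(f, t0, tend, y0, n_step):
    """Euler's method with n_step equal steps applied to y' = f(t, y)."""
    h = (tend - t0) / n_step
    t, y = t0, y0
    for _ in range(n_step):
        y = y + h * f(t, y)
        t = t + h
    return y

f = lambda t, y: -2.0 * t * y      # Example 1.1, exact solution y(t) = exp(-t^2)
exact = math.exp(-1.0)             # y(1) = 0.3679...

# Endpoint errors for h = 0.1, 0.05, 0.025: halving h roughly halves the error.
errors = [abs(exact - euler(f, 0.0, 1.0, 1.0, n)) for n in (10, 20, 40)]
```

With h = 0.1 this reproduces y_10 ≈ 0.3817 from Example 1.2, and the consecutive error ratios come out close to 2, consistent with |e_{N_step}| ≈ C·h.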

3.1 Some examples of one-step methods.

Assume that (t_n, y_n) is known. The exact solution y(t_{n+1}) of (4), with t_{n+1} = t_n + h, passing through this point is given by

    y(t_n + h) = y_n + ∫_{t_n}^{t_{n+1}} y'(τ) dτ = y_n + ∫_{t_n}^{t_{n+1}} f(τ, y(τ)) dτ.   (6)

The idea is to find approximations to the last integral. The simplest idea is to use f(τ, y(τ)) ≈ f(t_n, y_n), in which case we get the Euler method again:

    y_{n+1} = y_n + h f(t_n, y_n).

The integral can also be approximated by the trapezoidal rule,

    ∫_{t_n}^{t_{n+1}} f(τ, y(τ)) dτ ≈ (h/2) ( f(t_n, y_n) + f(t_{n+1}, y(t_{n+1})) ).

By replacing the unknown solution y(t_{n+1}) by y_{n+1} we get the trapezoidal method,

    y_{n+1} = y_n + (h/2) ( f(t_n, y_n) + f(t_{n+1}, y_{n+1}) ).

Here y_{n+1} is only available by solving a (usually) nonlinear system of equations. Such methods are called implicit. To avoid this extra difficulty, we could replace y_{n+1} on the right hand side by the approximation from Euler's method, thus

    ỹ_{n+1} = y_n + h f(t_n, y_n),
    y_{n+1} = y_n + (h/2) ( f(t_n, y_n) + f(t_{n+1}, ỹ_{n+1}) ).

This method is called the improved Euler method. Similarly, we could have used the midpoint rule for the integral,

    ∫_{t_n}^{t_{n+1}} f(τ, y(τ)) dτ ≈ h f( t_n + h/2, y(t_n + h/2) ),

and replaced y(t_n + h/2) by one half Euler step. The result is the modified Euler method:

    ỹ_{n+1/2} = y_n + (h/2) f(t_n, y_n),
    y_{n+1} = y_n + h f(t_n + h/2, ỹ_{n+1/2}).

Do we gain anything by constructing these methods? Let us solve the problem from Example 1.1 using improved/modified Euler with h = 0.1. For each step, we also compute the global error

e_n = y(t_n) - y_n. For comparison, the result for the Euler method is included as well.

[Table comparing y_n and e_n for the Euler, improved Euler and modified Euler methods omitted; the numerical values were lost.]

As we can see, there is a significant improvement in accuracy compared with the Euler method.

3.2 Some examples of linear multistep methods.

Most of the classical linear multistep methods are based on some kind of interpolation. In the following, we use f_i = f(t_i, y_i).

The Adams methods. Write the ODE (4) in integral form,

    y(t_n + h) = y(t_n) + ∫_{t_n}^{t_n + h} f(τ, y(τ)) dτ.

Let p(t) be the polynomial interpolating (t_i, f(t_i, y_i)) for i = n-k+1, ..., n, and let

    y_{n+1} = y_n + ∫_{t_n}^{t_n + h} p(τ) dτ.

Methods constructed this way are called k-step Adams-Bashforth methods. Examples:

    k = 1:   y_{n+1} = y_n + h f_n,
    k = 2:   y_{n+1} = y_n + (h/2) (3 f_n - f_{n-1}).

These methods are of order k.
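The Adams-Bashforth formulas above are easy to try out. The sketch below is a Python toy (our own naming; the notes themselves use MATLAB) of the two-step method applied to the problem of Example 1.1, started with a single Euler step. Since the method is of order 2, halving h should reduce the endpoint error by roughly a factor of four.

```python
import math

def ab2(f, t0, tend, y0, n_step):
    """Two-step Adams-Bashforth: y_{n+1} = y_n + h/2 (3 f_n - f_{n-1})."""
    h = (tend - t0) / n_step
    f_prev = f(t0, y0)
    y = y0 + h * f_prev            # one Euler step to get the method started
    for n in range(1, n_step):
        t = t0 + n * h             # time t_n where the current y lives
        f_cur = f(t, y)
        y, f_prev = y + h / 2 * (3 * f_cur - f_prev), f_cur
    return y

f = lambda t, y: -2.0 * t * y
exact = math.exp(-1.0)
err = [abs(exact - ab2(f, 0.0, 1.0, 1.0, n)) for n in (20, 40)]
```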

Implicit Adams methods, or Adams-Moulton methods, are constructed similarly, but the point (t_{n+1}, f_{n+1}) is included in the interpolation polynomial. Examples:

    k = 0:   y_{n+1} = y_n + h f_{n+1},
    k = 1:   y_{n+1} = y_n + (h/2) (f_{n+1} + f_n),
    k = 2:   y_{n+1} = y_n + (h/12) (5 f_{n+1} + 8 f_n - f_{n-1}).

These are of order k + 1.

Backward differentiation formulas (BDF). Let p(t) be the polynomial interpolating (t_i, y_i), i = n-k+1, ..., n, n+1. The BDF methods are then simply

    p'(t_{n+1}) = f_{n+1}.

Examples:

    k = 1:   y_{n+1} - y_n = h f_{n+1},
    k = 2:   (1/2) (3 y_{n+1} - 4 y_n + y_{n-1}) = h f_{n+1}.

These methods are of order k, and they have good stability properties. The MATLAB solver ODE15S is based on these formulas.

4 Runge-Kutta methods

The Euler method, as well as the improved and modified Euler methods, are all examples of explicit Runge-Kutta methods (ERK). Such schemes are given by

    k_1 = f(t_n, y_n),   (7)
    k_2 = f(t_n + c_2 h, y_n + h a_21 k_1),
    k_3 = f(t_n + c_3 h, y_n + h (a_31 k_1 + a_32 k_2)),
    ...
    k_s = f(t_n + c_s h, y_n + h Σ_{j=1}^{s-1} a_sj k_j),
    y_{n+1} = y_n + h Σ_{i=1}^{s} b_i k_i,

where c_i, a_ij and b_i are coefficients defining the method. We always require c_i = Σ_j a_ij. Here, s is the number of stages, or the number of function evaluations needed for each step.

The vectors k_i are called stage derivatives. The improved Euler method is then a two-stage RK method, written as

    k_1 = f(t_n, y_n),
    k_2 = f(t_n + h, y_n + h k_1),
    y_{n+1} = y_n + (h/2) (k_1 + k_2).

Also implicit methods, like the trapezoidal rule,

    y_{n+1} = y_n + (h/2) ( f(t_n, y_n) + f(t_n + h, y_{n+1}) ),

can be written in a similar form,

    k_1 = f(t_n, y_n),
    k_2 = f( t_n + h, y_n + (h/2)(k_1 + k_2) ),
    y_{n+1} = y_n + (h/2)(k_1 + k_2).

But, contrary to what is the case for explicit methods, a nonlinear system of equations has to be solved to find k_2.

Definition 4.1. An s-stage Runge-Kutta method is given by

    k_i = f( t_n + c_i h, y_n + h Σ_{j=1}^{s} a_ij k_j ),   i = 1, 2, ..., s,
    y_{n+1} = y_n + h Σ_{i=1}^{s} b_i k_i.

The method is defined by its coefficients, which are given in a Butcher tableau,

    c_1 | a_11  a_12  ...  a_1s
    c_2 | a_21  a_22  ...  a_2s
     :  |  :                 :
    c_s | a_s1  a_s2  ...  a_ss
    ----+----------------------
        | b_1   b_2   ...  b_s

or in short (c | A / b^T),   (8)

and we will always assume that c_i = Σ_{j=1}^{s} a_ij, i = 1, ..., s, or in short c = A·(1, ..., 1)^T. The method is explicit if a_ij = 0 whenever j >= i, otherwise implicit.

Example 4.1. The Butcher tableaux for the methods presented so far are

    Euler:        improved Euler:    modified Euler:      trapezoidal rule:
    0 | 0         0 | 0    0         0   | 0    0         0 | 0    0
    --+--         1 | 1    0         1/2 | 1/2  0         1 | 1/2  1/2
      | 1         --+---------       ----+---------       --+---------
                    | 1/2  1/2           | 0    1           | 1/2  1/2
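Definition 4.1 maps directly onto code: a single stepper can consume any explicit Butcher tableau (c, A, b). A minimal Python sketch (the names are ours), instantiated with the improved Euler tableau:

```python
def erk_step(f, t, y, h, A, b, c):
    """One step of an explicit RK method given by its Butcher tableau."""
    s = len(b)
    k = []
    for i in range(s):
        # explicit method: stage i only uses k_1, ..., k_{i-1}
        yi = y + h * sum(A[i][j] * k[j] for j in range(i))
        k.append(f(t + c[i] * h, yi))
    return y + h * sum(b[i] * k[i] for i in range(s))

# improved Euler: c = (0, 1), a_21 = 1, b = (1/2, 1/2)
A = [[0.0, 0.0], [1.0, 0.0]]
b = [0.5, 0.5]
c = [0.0, 1.0]

f = lambda t, y: -y            # test problem y' = -y, y(0) = 1
y1 = erk_step(f, 0.0, 1.0, 0.1, A, b, c)
```

One step with h = 0.1 gives k_1 = -1, k_2 = -0.9 and y_1 = 1 + 0.05(k_1 + k_2) = 0.905, matching the direct two-stage computation above.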

When the method is explicit, the zeros on and above the diagonal are usually not written out. We conclude this section by presenting maybe the most popular among the RK methods of all time, the 4th order Runge-Kutta method (Kutta, 1901):

    k_1 = f(t_n, y_n),
    k_2 = f(t_n + h/2, y_n + (h/2) k_1),
    k_3 = f(t_n + h/2, y_n + (h/2) k_2),                (9)
    k_4 = f(t_n + h, y_n + h k_3),
    y_{n+1} = y_n + (h/6) (k_1 + 2 k_2 + 2 k_3 + k_4),

or, as a Butcher tableau,

    0   |
    1/2 | 1/2
    1/2 | 0    1/2
    1   | 0    0    1
    ----+--------------------
        | 1/6  1/3  1/3  1/6

4.1 Order conditions for Runge-Kutta methods.

Theorem 4.2. Let y' = f(t, y), y(t_0) = y_0, t_0 <= t <= t_end, be solved by a one-step method

    y_{n+1} = y_n + h Φ(t_n, y_n; h)   (10)

with stepsize h = (t_end - t_0)/N_step. If
1. the increment function Φ is Lipschitz in y, and
2. the local truncation error d_{n+1} = O(h^{p+1}),
then the method is of order p, that is, the global error at t_end satisfies

    e_{N_step} = y(t_end) - y_{N_step} = O(h^p).

The proof is left as an exercise. A RK method is a one-step method with increment function Φ(t_n, y_n; h) = Σ_{i=1}^{s} b_i k_i. It is possible to show that Φ is Lipschitz in y whenever f is Lipschitz and h <= h_max, where h_max is some predefined maximal stepsize. What remains is the order of the local truncation error. To find it, we take the Taylor expansions of the exact and the numerical solutions and compare. The local truncation error is O(h^{p+1}) if the two series match for all terms corresponding to h^q with q <= p. In principle, this is trivial. In practice, it becomes extremely tedious (give it a try). Fortunately, it is possible to express the two series very elegantly by the use of B-series and rooted trees.
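The conclusion of Theorem 4.2 can be observed numerically for the 4th order method (9): on a smooth problem, halving the stepsize should reduce the endpoint error by roughly 2^4 = 16. A Python sketch (our own naming):

```python
import math

def rk4(f, t0, tend, y0, n_step):
    """The classical 4th order Runge-Kutta method (9)."""
    h = (tend - t0) / n_step
    t, y = t0, y0
    for _ in range(n_step):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t = t + h
    return y

f = lambda t, y: -y                # y(1) = e^{-1}
err = [abs(math.exp(-1.0) - rk4(f, 0.0, 1.0, 1.0, n)) for n in (10, 20)]
```

Already with h = 0.1 the endpoint error is of the order 1e-7, and err[0]/err[1] lands near 16, as expected for order 4.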

B-series and rooted trees

B-series, in different forms and under different names, are essentially the main tool for constructing order theory for time-dependent problems, like ODEs, DAEs and SDEs. In this note, by a B-series we mean a formal series of the form

    B(φ, x_0; h) = x_0 + Σ_{τ ∈ T} α(τ) φ(τ)(h) F(τ)(x_0).   (11)

Here, T is the set of rooted trees, T = T̃ \ {∅}, where ∅ refers to the initial value term, F(τ)(x_0) are the elementary differentials, φ(τ)(h) is some integral, and α(τ) is a symmetry factor. The idea is to express the exact and the numerical solution after one step as B-series. For instance, consider the autonomous ODE y' = f(y), y(t_0) = y_0, and let us solve this by the Euler method. Thus we have

    B(e, y_0; h) = y(t_0 + h) = y(t_0) + h f(y_0) + (h^2/2) (f'f)(y_0) + ...,
    B(Φ, y_0; h) = y_1 = y(t_0) + h f(y_0).

So, if these solutions can be expressed as B-series, which we still have to prove, the first terms will be

    τ      α(τ)   e(τ)(h)   Φ(τ)(h)   F(τ)
    ∅      1      1         1         y_0
    •      1      h         h         f
    [•]    1      h^2/2     0         f'f

But at the moment, we do not know what the rest of the terms look like. Before a more formal derivation of the series, let us present a few definitions and results:

Definition 4.3. Let y = [y_1, y_2, ..., y_m]^T ∈ R^m and f(y) = [f_1(y), f_2(y), ..., f_m(y)]^T ∈ R^m. The κ-th Frechet derivative of f, denoted by f^{(κ)}(y), is a κ-linear operator R^m × R^m × ... × R^m (κ times) → R^m. Evaluation of component i of this operator working on the m-vectors v_1, v_2, ..., v_κ ∈ R^m is given by

    [ f^{(κ)}(y)(v_1, v_2, ..., v_κ) ]_i = Σ_{j_1=1}^{m} Σ_{j_2=1}^{m} ... Σ_{j_κ=1}^{m} ( ∂^κ f_i(y) / (∂y_{j_1} ∂y_{j_2} ... ∂y_{j_κ}) ) v_{1,j_1} v_{2,j_2} ... v_{κ,j_κ},

where v_l = [v_{l,1}, v_{l,2}, ..., v_{l,m}]^T ∈ R^m for l = 1, 2, ..., κ.

Note that the κ-th Frechet derivative is independent of permutations of its operands, thus e.g. f^{(3)}(y)(v_1, v_2, v_3) = f^{(3)}(y)(v_3, v_1, v_2). The multivariable Taylor expansion is, for y, v ∈ R^m,

    f(y + v) = f(y) + Σ_{κ=1}^{∞} (1/κ!) f^{(κ)}(y)(v, v, ..., v) = Σ_{κ=0}^{∞} (1/κ!) f^{(κ)}(y)(v^κ),   (12)

where the expression to the right is only a convenient way to write the expression in the middle.

Finally, the multinomial theorem states:

    (v_1 + v_2 + ... + v_q)^κ = Σ_{r_1 + ... + r_q = κ} ( κ! / (r_1! ... r_q!) ) v_1^{r_1} ... v_q^{r_q}.

A similar argument applied to the Frechet derivative gives

    f^{(κ)}(y)(a_1 v_1 + a_2 v_2 + ... + a_q v_q)^κ = Σ_{r_1 + ... + r_q = κ} ( κ! / (r_1! ... r_q!) ) ( Π_{k=1}^{q} a_k^{r_k} ) f^{(κ)}(y)(v_1^{r_1}, ..., v_q^{r_q}),   (13)

where a_k ∈ R and v_k ∈ R^m.

A list of trees, denoted by {τ_1, τ_2, ..., τ_κ}, τ_i ∈ T, i = 1, ..., κ, is an ordered set of trees, where each tree may appear more than once. If τ_1, τ_2 ∈ T then {τ_1, τ_1, τ_2} and {τ_1, τ_2, τ_1} are two different lists. If a tree appears k times in the list, the tree has multiplicity k. A multiset of trees, denoted by (τ_1, τ_2, ..., τ_κ), is a set of trees where multiplicity is allowed and order does not matter, so (τ_1, τ_1, τ_2) = (τ_1, τ_2, τ_1). A tree with multiplicity k will sometimes be denoted by τ^k, so (τ_1, τ_1, τ_2) = (τ_1^2, τ_2). The set of all possible lists of trees is denoted Ũ, and the set of all possible multisets is denoted U:

    Ũ = { {τ_1, τ_2, ..., τ_κ} : τ_i ∈ T, i = 1, ..., κ,  κ = 0, 1, 2, ... },
    U = { (τ_1, τ_2, ..., τ_κ) : τ_i ∈ T, i = 1, ..., κ,  κ = 0, 1, 2, ... }.

In the lemma below, U_f is the set of trees formed by taking a multiset from U and including a root f.

Lemma 4.1. If X(h) = B(φ, x_0; h) is some B-series and f ∈ C^∞(R^m, R^m), then f(X(h)) can be written as a formal series of the form

    f(X(h)) = Σ_{u ∈ U_f} β(u) ψ_φ(u)(h) G(u)(x_0),   (14)

where U_f is a set of trees derived from T by
a) [∅]_f ∈ U_f, and if τ_1, τ_2, ..., τ_κ ∈ T then [τ_1, τ_2, ..., τ_κ]_f ∈ U_f;
b) G([∅]_f)(x_0) = f(x_0) and G(u = [τ_1, ..., τ_κ]_f)(x_0) = f^{(κ)}(x_0)( F(τ_1)(x_0), ..., F(τ_κ)(x_0) );
c) β([∅]_f) = 1 and β(u = [τ_1, ..., τ_κ]_f) = (1 / (r_1! r_2! ... r_q!)) Π_{k=1}^{κ} α(τ_k), where r_1, r_2, ..., r_q count equal trees among τ_1, ..., τ_κ;
d) ψ_φ([∅]_f)(h) = 1 and ψ_φ(u = [τ_1, ..., τ_κ]_f)(h) = Π_{k=1}^{κ} φ(τ_k)(h).

Proof. Writing X(h) as a B-series, we have

    f(X(h)) = f( x_0 + Σ_{τ ∈ T} α(τ) φ(τ)(h) F(τ)(x_0) )
       =(12)  Σ_{κ=0}^{∞} (1/κ!) f^{(κ)}(x_0) ( Σ_{τ ∈ T} α(τ) φ(τ)(h) F(τ)(x_0) )^κ
       =(13)  f(x_0) + Σ_{κ=1}^{∞} Σ_{(τ_1, ..., τ_κ) ∈ U} (1 / (r_1! r_2! ... r_q!)) ( Π_{k=1}^{κ} α(τ_k) φ(τ_k)(h) ) f^{(κ)}(x_0)( F(τ_1)(x_0), ..., F(τ_κ)(x_0) ).

The number above the equality sign refers to the equation used. The last sum is taken over all possible unordered combinations of κ trees in T. To each such set of trees τ_1, τ_2, ..., τ_κ ∈ T we assign a u = [τ_1, τ_2, ..., τ_κ]_f ∈ U_f. The lemma is now proved by comparing term by term with (14).

To find the B-series of the exact solution, write the ODE in integral form:

    y(t_0 + h) = y_0 + ∫_0^h f(y(t_0 + s)) ds.   (15)

Assume that the exact solution can be written as a B-series,

    y(t_0 + h) = B(e, y_0; h).   (16)

Plug this into (15) and apply Lemma 4.1 to get

    y_0 + Σ_{τ ∈ T} α(τ) e(τ)(h) F(τ)(y_0) = y_0 + Σ_{u ∈ U_f} β(u) ( ∫_0^h ψ_e(u)(s) ds ) G(u)(y_0).

For each term on the left hand side, there has to be a corresponding term on the right. Or, for each τ = [τ_1, ..., τ_κ] ∈ T there is a corresponding u = [τ_1, ..., τ_κ]_f ∈ U_f, and α(τ) = β(u), F(τ)(y_0) = G(u)(y_0), and finally e(τ)(h) = ∫_0^h ψ_e(u)(s) ds. This gives us the following theorem:

Theorem 4.4. The exact solution of (4) can be written as a formal series of the form (16) with
i) ∅ ∈ T̃, • = [∅] ∈ T, and if τ_1, ..., τ_κ ∈ T then τ = [τ_1, ..., τ_κ] ∈ T;
ii) F(∅)(y_0) = y_0, F(•)(y_0) = f(y_0), and F(τ)(y_0) = f^{(κ)}(y_0)( F(τ_1)(y_0), ..., F(τ_κ)(y_0) );
iii) α(∅) = 1, α(•) = 1, and α(τ) = (1 / (r_1! r_2! ... r_q!)) Π_{k=1}^{κ} α(τ_k), where r_1, ..., r_q count equal trees among the subtrees τ_1, ..., τ_κ;
iv) e(∅)(h) = 1, e(•)(h) = h, and e(τ)(h) = ∫_0^h Π_{k=1}^{κ} e(τ_k)(s) ds.

Notice that

    e(τ)(h) = h^{ρ(τ)} / γ(τ),

where γ(τ) is an integer and ρ(τ) is the number of nodes. This is called the order of the tree τ.

To find the B-series of the numerical solution, write one step of the RK method in the form

    Y_i = y_0 + h Σ_{j=1}^{s} a_ij f(Y_j),   i = 1, ..., s,   (17)
    y_1 = y_0 + h Σ_{i=1}^{s} b_i f(Y_i),   (18)

and assume that both the stage values Y_i and the numerical solution can be written as B-series,

    Y_i = B(φ_i, y_0; h),  i = 1, ..., s,   and   y_1 = B(φ, y_0; h).

It is straightforward to see that

    φ_i(∅)(h) = φ(∅)(h) = 1,   φ_i(•)(h) = Σ_{j=1}^{s} a_ij h = c_i h,   φ(•)(h) = Σ_{i=1}^{s} b_i h.

For a general tree τ ∈ T, insert the B-series for Y_i and y_1 into (17)-(18), apply Lemma 4.1 and compare equal terms. This results in the following recurrence formula for the weight functions φ_i(τ) and φ(τ) for a given τ = [τ_1, ..., τ_κ]:

    φ_i(τ)(h) = Σ_{j=1}^{s} a_ij Π_{k=1}^{κ} φ_j(τ_k)(h),   φ(τ)(h) = Σ_{i=1}^{s} b_i Π_{k=1}^{κ} φ_i(τ_k)(h).   (19)

Notice again that φ(τ)(h) = φ̂(τ) h^{ρ(τ)}, where φ̂(τ) is a constant depending on the method coefficients. Similarly, we can write φ_i(τ)(h) = φ̂_i(τ) h^{ρ(τ)}. Comparing the series for the exact and the numerical solutions and applying Theorem 4.2 gives the following fundamental theorem:

Theorem 4.5. A Runge-Kutta method is of order p if and only if

    φ̂(τ) = 1/γ(τ)   for all τ ∈ T with ρ(τ) <= p.

All trees up to and including order 4 and their corresponding conditions φ̂(τ) = 1/γ(τ) are listed below (all sums run over all the indices involved):

    ρ(τ) = 1:   Σ b_i = 1
    ρ(τ) = 2:   Σ b_i c_i = 1/2
    ρ(τ) = 3:   Σ b_i c_i^2 = 1/3,   Σ b_i a_ij c_j = 1/6
    ρ(τ) = 4:   Σ b_i c_i^3 = 1/4,   Σ b_i c_i a_ij c_j = 1/8,
                Σ b_i a_ij c_j^2 = 1/12,   Σ b_i a_ij a_jk c_k = 1/24

4.2 Error control and stepsize selection.

A user of some numerical black-box software will usually require one thing: the accuracy of the numerical solution should be within some user-specified tolerance. To accomplish this we have to measure the error, and if the error is too large, it has to be reduced. For ordinary differential equations, this means reducing the stepsize. On the other hand, we would like our algorithm to be as efficient as possible, that is, to use large stepsizes. This leaves us with two problems: how to measure the error, and how to get the right balance between accuracy and efficiency.

Local error estimate. As demonstrated in Figure 1, the global error y(t_n) - y_n comes from two sources: the local truncation error and the propagation of errors produced in preceding steps. This makes it difficult (but not impossible) to measure the global error. Fortunately, it is surprisingly easy to measure the local error l_{n+1}, the error produced in one step when starting at (t_n, y_n); see Figure 2. Let y(t; t_n, y_n) be the exact solution of the ODE through the point (t_n, y_n). For a method of order p we get

    l_{n+1} = y(t_n + h; t_n, y_n) - y_{n+1} = Ψ(t_n, y_n) h^{p+1} + O(h^{p+2}),

where O(h^{p+2}) refers to higher order terms. (Strictly speaking, the Landau symbol O is defined by f(x) = O(g(x)) for x → x_0 if |f(x)/g(x)| is bounded by some unspecified constant K < ∞ in the limit. Thus f(h) = O(h^q) means that |f(h)| <= K h^q when h → 0, and refers to the remainder terms of a truncated series.) The term Ψ(t_n, y_n) h^{p+1} is called the principal error term, and we assume that this term is the dominating part of the error. This assumption
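For a concrete tableau, the eight conditions of the table above can be checked mechanically. The Python sketch below (our own code) verifies them for the classical 4th order method (9):

```python
from itertools import product

# Butcher tableau of the classical 4th order Runge-Kutta method (9)
A = [[0, 0, 0, 0],
     [0.5, 0, 0, 0],
     [0, 0.5, 0, 0],
     [0, 0, 1, 0]]
b = [1/6, 1/3, 1/3, 1/6]
c = [0, 0.5, 0.5, 1]
s = range(4)

# (left hand side, required value) for every tree up to order 4
conditions = [
    (sum(b[i] for i in s), 1),
    (sum(b[i] * c[i] for i in s), 1/2),
    (sum(b[i] * c[i]**2 for i in s), 1/3),
    (sum(b[i] * A[i][j] * c[j] for i, j in product(s, s)), 1/6),
    (sum(b[i] * c[i]**3 for i in s), 1/4),
    (sum(b[i] * c[i] * A[i][j] * c[j] for i, j in product(s, s)), 1/8),
    (sum(b[i] * A[i][j] * c[j]**2 for i, j in product(s, s)), 1/12),
    (sum(b[i] * A[i][j] * A[j][k] * c[k] for i, j, k in product(s, s, s)), 1/24),
]
ok = all(abs(lhs - rhs) < 1e-12 for lhs, rhs in conditions)
```

All eight conditions are satisfied, confirming that the method is of order 4 (and, since the ρ(τ) = 5 conditions fail, not more).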

[Figure 2: Lady Windermere's Fan. One step from (t_n, y_n): the local error l_{n+1} compared with the local truncation error d_{n+1}.]

is true if the stepsize h is sufficiently small. Taking a step from the same point (t_n, y_n) with a method of order p̂ = p + 1 gives a solution ŷ_{n+1} with a local error satisfying

    y(t_n + h; t_n, y_n) - ŷ_{n+1} = O(h^{p+2}).

The local error estimate is given by

    le_{n+1} = ŷ_{n+1} - y_{n+1} = Ψ(t_n, y_n) h^{p+1} + O(h^{p+2}) ≈ l_{n+1}.

Embedded Runge-Kutta pairs. Given a Runge-Kutta method of order p. To be able to measure the local error, we need a method of order p + 1 (or higher). But we do not want to spend more work (in terms of f-evaluations) than necessary. The solution is embedded

Runge-Kutta pairs, which, for explicit methods, are given by

    0   |
    c_2 | a_21
    c_3 | a_31  a_32
     :  |  :
    c_s | a_s1  a_s2  ...  a_{s,s-1}
    ----+-------------------------------
        | b_1   b_2   ...  b_{s-1}   b_s
        | b̂_1   b̂_2   ...  b̂_{s-1}   b̂_s

The method given by the b_i's is of order p, the error estimating method given by the b̂_i's is of order p + 1. (Sometimes it is the other way around. The important thing is to have two methods of different order.) The local error estimate of y_{n+1} is then given by

    le_{n+1} = ŷ_{n+1} - y_{n+1} = h Σ_{i=1}^{s} (b̂_i - b_i) k_i.

Example 4.2. A combination of the Euler method and improved Euler will result in the following pair,

    0 |
    1 | 1
    --+----------
      | 1    0
      | 1/2  1/2

so that

    k_1 = f(t_n, y_n),
    k_2 = f(t_n + h, y_n + h k_1),
    y_{n+1} = y_n + h k_1,
    l_{n+1} ≈ le_{n+1} = (h/2) (-k_1 + k_2).

Example 4.3. Assume that you have decided to use improved Euler, which is of order 2, as your advancing method, and you would like to find an error estimating method of order 3. There are no 2-stage order 3 ERKs, so you have to add one stage to your method. This gives a method like

    0   |
    1   | 1
    c_3 | a_31  a_32
    ----+----------------
        | 1/2   1/2   0
        | b̂_1   b̂_2   b̂_3

where we require c_3 = a_31 + a_32, which gives us five free parameters. These have to satisfy all four order conditions for an order 3 method. Using c_3 as a free parameter, we get the following class of 3rd order methods:

    b̂_1 = (3 c_3 - 1)/(6 c_3),   b̂_2 = (2 - 3 c_3)/(6 (1 - c_3)),   b̂_3 = 1/(6 c_3 (1 - c_3)),
    a_31 = c_3^2,   a_32 = c_3 - c_3^2.
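The formulas of Example 4.3 can be sanity-checked numerically: pick a value of the free parameter c_3 and verify the four order 3 conditions from Section 4.1. A Python sketch (the variable names are ours; b below denotes the weights b̂ of the third order method):

```python
c3 = 2/3                     # any c3 different from 0 and 1 works
c = [0.0, 1.0, c3]
b = [(3*c3 - 1) / (6*c3),
     (2 - 3*c3) / (6*(1 - c3)),
     1 / (6*c3*(1 - c3))]
a31, a32 = c3**2, c3 - c3**2

# order 3 conditions; since c_1 = 0, the only nonzero term of
# sum b_i a_ij c_j is b_3 a_32 c_2
checks = [
    abs(sum(b) - 1.0),
    abs(sum(bi * ci for bi, ci in zip(b, c)) - 1/2),
    abs(sum(bi * ci**2 for bi, ci in zip(b, c)) - 1/3),
    abs(b[2] * a32 * c[1] - 1/6),
]
ok = max(checks) < 1e-12
```

For c_3 = 2/3 this gives b̂ = (1/4, 0, 3/4), and all four conditions hold; the row sum a_31 + a_32 = c_3 is also satisfied by construction.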

It is also possible to use the highest order method to advance the solution. In this case, we still measure the local error estimate of the lowest order solution, but we get a more accurate numerical solution for free. This idea is called local extrapolation. MATLAB has two integrators based on explicit Runge-Kutta schemes: ODE23, which is based on an order 3/2 pair by Bogacki and Shampine (a 3rd order advancing and a 2nd order error estimating method), and ODE45, based on an order 5/4 pair by Dormand and Prince. Both use local extrapolation.

Stepsize control. Let the user specify a tolerance Tol, and a norm in which the error is to be measured. Let us start with (t_n, y_n), and do one step forward in time with a stepsize h_n, giving y_{n+1} and le_{n+1}. If ‖le_{n+1}‖ <= Tol the step is accepted, and we proceed to the next step, maybe with an increased stepsize. If ‖le_{n+1}‖ > Tol the step is rejected and we try again with a smaller stepsize. In both cases, we would like to find a stepsize h_new which gives a local error estimate smaller than Tol, but at the same time as close to Tol as possible. To find the right stepsize, we make one assumption: the function Ψ(t_n, y_n) of the principal error term does not change much from one step to the next, thus Ψ(t_n, y_n) ≈ Ψ(t_{n+1}, y_{n+1}) ≈ C. Then

    we have:   ‖le_{n+1}‖ ≈ C h_n^{p+1},
    we want:   Tol ≈ C h_new^{p+1}.

We get rid of the unknown C by dividing the two equations by each other, and h_new can be solved from

    h_new ≈ ( Tol / ‖le_{n+1}‖ )^{1/(p+1)} h_n.

Rejected steps are wasted work, and should be avoided. Thus we choose the new stepsize somewhat conservatively. The new stepsize is computed by

    h_new = P · ( Tol / ‖le_{n+1}‖ )^{1/(p+1)} h_n,   (20)

where P is a pessimist factor, usually chosen somewhere in the interval [0.5, 0.95]. In the discussion so far we have used the requirement ‖le_{n+1}‖ <= Tol, that is, error per step (EPS). This does not take into account the fact that the smaller the step is, the more steps you take, and the local errors from each step add up.
From this point of view, it would make sense to rather use the requirement ‖le_{n+1}‖ <= Tol · h_n, that is, error per unit step (EPUS). The stepsize selection is then given by

    h_new = P · ( Tol · h_n / ‖le_{n+1}‖ )^{1/p} h_n.   (21)

Careful analysis has proved that local extrapolation together with EPS gives proportionality between the global error and the tolerance. The same is true for the use of the lower order method to advance the solution in combination with EPUS.
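Combining the controller (20) with the Euler / improved Euler pair of Example 4.2 gives a complete, if minimal, adaptive integrator. The Python sketch below is a toy instance of the scheme described above (P = 0.8, EPS, local extrapolation; all names, the starting stepsize and the guard against a zero error estimate are our own choices):

```python
import math

def adaptive_euler_pair(f, t0, tend, y0, tol, h0=0.1, pessimist=0.8):
    """Adaptive integration with the Euler (order 1) / improved Euler (order 2) pair.

    le = |y_high - y_low| estimates the local error of the order 1 method;
    accepted steps advance with the order 2 solution (local extrapolation),
    and the stepsize follows (20) with p = 1, i.e. exponent 1/(p+1) = 1/2."""
    t, y, h = t0, y0, h0
    n_reject = 0
    while t < tend - 1e-12:
        h = min(h, tend - t)                  # do not step past the endpoint
        k1 = f(t, y)
        k2 = f(t + h, y + h * k1)
        y_low = y + h * k1                    # Euler
        y_high = y + h / 2 * (k1 + k2)        # improved Euler
        le = max(abs(y_high - y_low), 1e-14)  # guard against division by zero
        if le <= tol:
            t, y = t + h, y_high              # accept
        else:
            n_reject += 1                     # reject, retry with smaller h
        h = pessimist * h * math.sqrt(tol / le)
    return y, n_reject

f = lambda t, y: -2.0 * t * y
y_end, n_reject = adaptive_euler_pair(f, 0.0, 1.0, 1.0, tol=1e-4)
```

On Example 1.1 the very first trial step (h = 0.1) is rejected, the controller settles on stepsizes near sqrt(Tol), and the endpoint value lands close to e^{-1}.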

4.3 Dense output

Given a RK method with coefficients (c, A, b). To find approximations to the solution between the steps, that is, at some point t = t_n + θ h_n, θ ∈ (0, 1), we search for a continuous RK scheme, in which

    y(t_n + θh) ≈ u_n(θ) = y_n + h Σ_{i=1}^{s} b_i(θ) k_i.

The coefficients b_i(θ) are now polynomials, and they should be determined so that

    u_n(θ) - y(t_n + θh) = O(h^{p*+1}),   and   b_i(1) = b_i,   i = 1, ..., s,

for some order p* of the continuous approximation. This condition is satisfied if and only if the weights φ̂(τ) computed as in (19), but with b_i replaced by b_i(θ), satisfy

    φ̂_θ(τ) = θ^{ρ(τ)} / γ(τ),   τ ∈ T,   ρ(τ) <= p*.

4.4 Collocation methods.

We will now see how the idea of orthogonal polynomials can be used to construct high order Runge-Kutta methods. Given an ordinary differential equation

    y'(t) = f(t, y(t)),   y(t_n) = y_n,   t ∈ [t_n, t_n + h].

Recall also the definition of a Runge-Kutta method applied to the ODE:

    k_i = f( t_n + c_i h, y_n + h Σ_{j=1}^{s} a_ij k_j ),
    y_{n+1} = y_n + h Σ_{i=1}^{s} b_i k_i.   (22)

The idea of collocation methods for ODEs is as follows. Assume that s distinct points c_1, c_2, ..., c_s ∈ [0, 1] are given. We will then search for a polynomial u ∈ P_s satisfying the ODE in the points t_n + c_i h, i = 1, ..., s. These are the collocation conditions, expressed as

    u'(t_n + c_i h) = f( t_n + c_i h, u(t_n + c_i h) ),   i = 1, 2, ..., s.   (23)

The initial condition is u(t_n) = y_n. When finally u(t) has been found, we will set y_{n+1} = u(t_n + h). Let k_i be some (so far unknown) approximation to y'(t_n + c_i h), and let u' = p ∈ P_{s-1} be the interpolation polynomial satisfying p(t_n + c_i h) = k_i. For simplicity, introduce the change of variables t = t_n + τh, τ ∈ [0, 1]. Then

    p(t) = p(t_n + hτ) = Σ_{j=1}^{s} k_j ℓ_j(τ),   ℓ_j(τ) = Π_{k=1, k≠j}^{s} (τ - c_k)/(c_j - c_k).

The polynomial u(t) is given by

    u(t̃) = u(t_n) + ∫_{t_n}^{t̃} p(t) dt = y_n + h Σ_{j=1}^{s} k_j ∫_0^{τ} ℓ_j(σ) dσ,   t̃ ∈ [t_n, t_n + h] (thus τ ∈ [0, 1]).

The collocation condition becomes

    u'(t_n + c_i h) = f( t_n + c_i h, y_n + h Σ_{j=1}^{s} k_j ∫_0^{c_i} ℓ_j(τ) dτ ),

and in addition we get

    u(t_n + h) = y_n + h Σ_{j=1}^{s} k_j ∫_0^1 ℓ_j(τ) dτ.

But u'(t_n + c_i h) = k_i and y_{n+1} = u(t_n + h), so this is exactly the Runge-Kutta method (22) with coefficients

    a_ij = ∫_0^{c_i} ℓ_j(τ) dτ,   i, j = 1, ..., s,      b_i = ∫_0^1 ℓ_i(τ) dτ,   i = 1, ..., s.   (24)

Runge-Kutta methods constructed this way are called collocation methods, and they are of order at least s. But a clever choice of the c_i's will give better results.

Example 4.4. Let P_2(x) = x^2 - 1/3 be (a multiple of) the second order Legendre polynomial. Let c_1 and c_2 be the zeros of P_2(2x - 1) (using a change of variable to transfer the interval [-1, 1] to [0, 1]), that is,

    c_1 = 1/2 - √3/6,   c_2 = 1/2 + √3/6.

The corresponding cardinal functions become

    ℓ_1(τ) = -√3 (τ - 1/2 - √3/6),   ℓ_2(τ) = √3 (τ - 1/2 + √3/6).

From (24) we get a Runge-Kutta method with the following Butcher tableau:

    1/2 - √3/6 | 1/4          1/4 - √3/6
    1/2 + √3/6 | 1/4 + √3/6   1/4
    -----------+------------------------
               | 1/2          1/2

This method has order 4 (check it yourself).

Famous collocation methods can be constructed by choosing the c_i as the zeros of a polynomial P:

    Gauss-Legendre methods:  P(x) = (d^s/dx^s) ( x^s (x - 1)^s ),  of order 2s.
    Radau IA:   P(x) = (d^{s-1}/dx^{s-1}) ( x^s (x - 1)^{s-1} ).
    Radau IIA:  P(x) = (d^{s-1}/dx^{s-1}) ( x^{s-1} (x - 1)^s ).  Both of order 2s - 1.
    Lobatto IIIA:  P(x) = (d^{s-2}/dx^{s-2}) ( x^{s-1} (x - 1)^{s-1} ),  of order 2s - 2.
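Formula (24) is directly computable: build the cardinal polynomials ℓ_j from the nodes c_i and integrate them. The Python sketch below (plain Python, our own helper names) recovers the Gauss tableau of Example 4.4:

```python
import math

def lagrange_coeffs(c, j):
    """Coefficients (lowest degree first) of the cardinal polynomial l_j."""
    poly = [1.0]
    for k, ck in enumerate(c):
        if k == j:
            continue
        # multiply poly by (x - ck) / (c_j - ck)
        denom = c[j] - ck
        new = [0.0] * (len(poly) + 1)
        for d, coef in enumerate(poly):
            new[d + 1] += coef / denom
            new[d] += -ck * coef / denom
        poly = new
    return poly

def integral(poly, x):
    """Integral of the polynomial from 0 to x."""
    return sum(coef * x**(d + 1) / (d + 1) for d, coef in enumerate(poly))

# Gauss nodes on [0, 1]: the zeros of P_2(2x - 1)
c = [0.5 - math.sqrt(3) / 6, 0.5 + math.sqrt(3) / 6]
A = [[integral(lagrange_coeffs(c, j), ci) for j in range(2)] for ci in c]
b = [integral(lagrange_coeffs(c, i), 1.0) for i in range(2)]
```

This reproduces a_11 = a_22 = 1/4, a_12 = 1/4 - √3/6 ≈ -0.0387, a_21 = 1/4 + √3/6 and b = (1/2, 1/2), i.e. the tableau of Example 4.4.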

Simplifying assumptions. These are tools for reducing the number of order conditions, and they make it easier to construct higher order methods:

    B(p):   Σ_{i=1}^{s} b_i c_i^{q-1} = 1/q,                     q = 1, 2, ..., p,   (25a)
    C(η):   Σ_{j=1}^{s} a_ij c_j^{q-1} = c_i^q / q,              i = 1, ..., s,  q = 1, 2, ..., η,   (25b)
    D(ζ):   Σ_{i=1}^{s} b_i c_i^{q-1} a_ij = (b_j/q)(1 - c_j^q), j = 1, ..., s,  q = 1, ..., ζ.   (25c)

Theorem 4.6. If B(p), C(η) and D(ζ) are satisfied with p <= 2η + 2 and p <= η + ζ + 1, then the method is of order p.

For a proof, see [1, Sect. II.7]. The collocation methods mentioned above all satisfy C(s).

5 Stiff equations

Example 5.1. Given the ODE

    y' = -2y,   y(0) = 1,

with exact solution y(t) = e^{-2t}. Thus y(t) → 0 as t → ∞. The Euler method applied to this problem yields

    y_{n+1} = y_n - 2h y_n = (1 - 2h) y_n,

so that y_n = (1 - 2h)^n. This gives us two situations:

    If h < 1 then y_n → 0 as n → ∞.
    If h > 1 then |y_n| → ∞ as n → ∞.

Clearly, the second situation does not make sense at all, as the numerical solution is unstable even if the exact solution is stable. We have to choose a stepsize h < 1 to get a stable numerical solution for this problem. To be more general, consider a linear ODE

    y' = M y,   y(0) = y_0,   (26)

where M is a constant m × m matrix. We assume that M is diagonalizable, that is,

    V^{-1} M V = Λ,

where

    Λ = diag{λ_1, λ_2, ..., λ_m},   V = [v_1, v_2, ..., v_m],

and λ_i, i = 1, ..., m, are the eigenvalues of M with v_i the corresponding eigenvectors. By premultiplying (26) with V^{-1}, we get

    V^{-1} y' = V^{-1} M V V^{-1} y,   V^{-1} y(t_0) = V^{-1} y_0,

or, using u = V^{-1} y,

    u' = Λ u,   u(t_0) = V^{-1} y_0 = u_0.

The system is now decoupled, and can be written componentwise as

    u_i' = λ_i u_i,   u_i(0) = u_{i,0},   λ_i ∈ C,   i = 1, ..., m.   (27)

We have to accept the possibility of complex eigenvalues; however, as M is a real matrix, complex eigenvalues appear in complex conjugate pairs. In the following, we will consider the situation when

    Re(λ_i) < 0 for i = 1, ..., m,   thus   y(t) → 0 as t → ∞.   (28)

Apply the Euler method to (26): y_{n+1} = y_n + h M y_n. We can do exactly the same linear transformations as above, so the system can be rewritten as

    u_{i,n+1} = (1 + h λ_i) u_{i,n},   i = 1, ..., m.

For the numerical solution to be stable, we have to require

    |1 + h λ_i| <= 1,   for all the eigenvalues λ_i.   (29)

(The case |1 + h λ_i| = 1 is included, as this is sufficient to prevent the solution from growing.)

Example 5.2. Given y' = M y, y(0) = y_0, where M is a 2 × 2 matrix with one eigenvalue equal to -1 and one large negative eigenvalue, and with exact solution y_1(t) = y_2(t) = e^{-t}. The initial values are chosen so that the fast decaying mode is missing in the exact solution. This problem is solved by Euler's method, with two almost equal stepsizes, one on each side of the stability bound given by (29). The difference is striking, but completely in correspondence with (29) and the result of Example 5.1.

[Figure: the solutions of Example 5.2 computed by Euler's method with the two stepsizes.]

Example 5.2 is a typical example of a stiff equation: the stepsize is restricted by a fast decaying component.

Example 5.3. Let M be a matrix with eigenvalues λ_{1,2} = −1 ± i. The requirement (9) becomes

    |1 + h(−1 ± i)| ≤ 1,   or   (1 − h)^2 + h^2 ≤ 1,

which is satisfied if and only if h ≤ 1.

Stiffness occurs in situations with fast decaying solutions (transients) in combination with slow solutions. If you solve an ODE by an adaptive explicit scheme, and the stepsize becomes unreasonably small, stiffness is the most likely explanation. If the stepsizes in addition seem to be independent of your choice of tolerances, then you can be quite sure. The stepsize is restricted by stability related to the transients, and not by accuracy.

The backward Euler method is one way to overcome this problem:

    y_{n+1} = y_n + h f(t_{n+1}, y_{n+1}),      (30)

or, applied to the problem of (7),

    u_{i,n+1} = u_{i,n} + h λ_i u_{i,n+1},   that is,   u_{i,n+1} = u_{i,n} / (1 − h λ_i).

Since |1/(1 − h λ_i)| ≤ 1 whenever Re(λ_i) ≤ 0, there is no stepsize restriction caused by stability issues. In fact, u_{i,n+1} → 0 as Re(h λ_i) → −∞, so fast transients decay quickly, as they are supposed to do. But this nice behaviour is not for free: for a nonlinear ODE a nonlinear system of equations has to be solved for each step. We will return to this topic later.
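For the linear test problem the implicit equation can be solved in closed form, so the unconditional stability is easy to demonstrate. A sketch (the stepsize and λ below are chosen freely for illustration):

```python
# Backward Euler on u' = lam*u.  The implicit step
# u_{n+1} = u_n + h*lam*u_{n+1} solves to u_{n+1} = u_n / (1 - h*lam).

def backward_euler_scalar(lam, u0, h, n_steps):
    u = u0
    for _ in range(n_steps):
        u = u / (1.0 - h * lam)
    return u

lam = -1000.0
# This stepsize violates the forward-Euler bound h < 2/|lam| = 0.002
# by a wide margin, yet the computed solution still decays rapidly.
u_end = backward_euler_scalar(lam, 1.0, 0.1, 10)
print(u_end)
```

Each step multiplies the solution by 1/(1 + h|λ|) = 1/101, so the transient is damped quickly, exactly as the analysis predicts.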

5.1 Linear stability

Given the linear test equation

    y' = λ y,   y(0) = 1,   λ ∈ C.      (31)

Thus λ = α + iβ. The solution can be expressed by

    y(t_n + h) = e^{αh} e^{iβh} y(t_n).

Clearly, the solution is stable if α ≤ 0, that is λ ∈ C^−. For the numerical solution we then require the stepsize h to be chosen so that

    |y_{n+1}| ≤ |y_n|   whenever λ ∈ C^−.      (32)

When a RK method is applied to (31), we simply get

    y_{n+1} = R(z) y_n,   z = hλ,

where R is a polynomial or a rational function. R is called the stability function of the RK method. The numerical solution is stable if |R(z)| ≤ 1, otherwise it is unstable. This motivates the following definition of the region of absolute stability:

    D = {z ∈ C : |R(z)| ≤ 1}.

The condition (32) is satisfied for all h > 0 if

    C^− ⊆ D.

Methods satisfying this condition are called A-stable. The backward Euler method (30) is an example of an A-stable method.

Example 5.4. A 2-stage ERK applied to (31) is given by:

    k_1 = λ y_n,
    k_2 = λ (y_n + h a_{21} λ y_n),
    y_{n+1} = y_n + hλ (b_1 + b_2) y_n + (hλ)^2 b_2 a_{21} y_n.

If this method is of order 2, then b_1 + b_2 = 1 and b_2 a_{21} = 1/2, so that

    R(z) = 1 + z + z^2/2.

The stability function of an s-stage ERK is a polynomial of degree at most s. As a consequence, no ERK can be A-stable! If the order of the method is p = s, then

    R(z) = Σ_{i=0}^s z^i / i!.

See Figure 3 for plots of the stability regions. But it has been proved that ERKs with p = s only exist for s ≤ 4. To get an order 5 ERK, 6 stages are needed.
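The intersection of these stability regions with the negative real axis can be probed numerically. The sketch below (tolerances and the search bracket are arbitrary choices) evaluates R(z) = Σ_{i=0}^s z^i/i! on the real axis and locates the left endpoint of the stability interval by bisection; for explicit Euler (s = 1) the well-known value is −2:

```python
# Locate the left endpoint x* of the real stability interval [x*, 0]
# of the ERK stability polynomial R(z) = sum_{i=0}^{s} z^i / i!.
from math import factorial

def R(z, s):
    return sum(z**i / factorial(i) for i in range(s + 1))

def left_endpoint(s):
    # lo is outside the stability region (|R| > 1), hi is inside.
    lo, hi = -10.0, -1e-12
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if abs(R(mid, s)) <= 1.0:
            hi = mid   # mid is still inside the stability interval
        else:
            lo = mid
    return 0.5 * (lo + hi)

for s in (1, 2, 3, 4):
    print(s, left_endpoint(s))
```

The computed endpoints are −2 for s = 1 and s = 2, and roughly −2.51 and −2.79 for s = 3 and s = 4, matching the widths of the regions plotted in Figure 3.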

Figure 3: Stability regions in C. The first four are the stability regions for explicit RK methods of order p = s (s = 1, 2, 3, 4), followed by the regions for the backward Euler method and the trapezoidal rule. The white regions are stable, the grey unstable.

Example 5.5. The trapezoidal rule (see Section 3.1) applied to (31) gives

    y_{n+1} = y_n + (h/2)(λ y_n + λ y_{n+1}),   so that   R(z) = (1 + z/2) / (1 − z/2).

In this case D = C^−, which is perfect.

To find a general expression for the stability function, apply a RK method to the linear test problem (31):

    k_i = λ ( y_n + h Σ_{j=1}^s a_{ij} k_j ),   i = 1, …, s,
    y_{n+1} = y_n + h Σ_{i=1}^s b_i k_i,

or, in matrix form with K = [k_1, …, k_s]^T and 𝟙 = [1, …, 1]^T,

    K = λ (𝟙 y_n + h A K),   y_{n+1} = y_n + h b^T K.

Solving the linear system with respect to K, and inserting this into the expression for y_{n+1}, gives y_{n+1} = R(z) y_n with

    R(z) = 1 + z b^T (I − zA)^{−1} 𝟙.      (33)

To summarise: For a given λ ∈ C^−, choose a stepsize h so that hλ ∈ D. If your problem is stiff, use an A-stable method. There are no A-stable explicit methods. The Gauss methods, as well as the Radau methods, are A-stable.

The stability constant is defined as

    R(∞) = lim_{z→∞} R(z).

For a method with an invertible coefficient matrix A, the stability constant is

    R(∞) = 1 − b^T A^{−1} 𝟙.

Definition 5.1. A RK method is called L-stable if it is A-stable and in addition R(∞) = 0.

L-stable methods are appropriate for solving problems for which Re(λ) ≪ 0.

Definition 5.2. A method is stiffly accurate if b_i = a_{si}, i = 1, …, s.

Notice that in this case c_s = 1. For stiffly accurate methods b^T is the last row of A, and thus b^T A^{−1} is the last row of the identity matrix, so we can conclude that R(∞) = 0 for all stiffly accurate methods. Be aware that this is not sufficient for A-stability.
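Formula (33) is straightforward to evaluate for any Butcher tableau. The sketch below does so with a small hand-written Gaussian elimination so that it has no dependencies; the tableau used is the trapezoidal rule viewed as a 2-stage RK method, for which Example 5.5 gives the exact answer R(z) = (1 + z/2)/(1 − z/2):

```python
# Evaluate R(z) = 1 + z * b^T (I - zA)^{-1} * ones  from (33).

def solve(M, rhs):
    """Solve M x = rhs by Gaussian elimination with partial pivoting."""
    n = len(rhs)
    M = [row[:] for row in M]
    x = rhs[:]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        x[col], x[piv] = x[piv], x[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n):
                M[r][c] -= f * M[col][c]
            x[r] -= f * x[col]
    for r in range(n - 1, -1, -1):
        x[r] = (x[r] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def stability_function(A, b, z):
    s = len(b)
    # Solve (I - zA) w = ones, then R(z) = 1 + z * b^T w.
    M = [[(1.0 if i == j else 0.0) - z * A[i][j] for j in range(s)]
         for i in range(s)]
    w = solve(M, [1.0] * s)
    return 1.0 + z * sum(b[i] * w[i] for i in range(s))

A = [[0.0, 0.0], [0.5, 0.5]]   # trapezoidal rule as a 2-stage method
b = [0.5, 0.5]
z = -4.0
print(stability_function(A, b, z), (1 + z / 2) / (1 - z / 2))
```

At z = −4 both expressions give −1/3; the same routine works for any (A, b), e.g. explicit Euler (A = [[0]], b = [1]) reproduces R(z) = 1 + z.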

5.2 Nonlinear stability

The next step is to extend the concept of linear stability to nonlinear problems. Consider an ODE y' = f(y) satisfying the condition

    ⟨f(y) − f(z), y − z⟩ ≤ 0      (34)

for all y, z ∈ D ⊆ R^m. Here, ⟨·, ·⟩ denotes the Euclidean inner product. Let y(t) and z(t) be two solutions of the ODE. If (34) is satisfied, then

    ‖y(t) − z(t)‖ ≤ ‖y(t_0) − z(t_0)‖,

assuming the inner product norm. This can be proved by

    (1/2) d/dt ‖y(t) − z(t)‖^2 = ⟨f(y(t)) − f(z(t)), y(t) − z(t)⟩ ≤ 0.

Definition 5.3. A Runge-Kutta method is B-stable if for all h ≥ 0 and for all problems satisfying (34), the numerical solutions satisfy

    ‖y_1 − z_1‖ ≤ ‖y_0 − z_0‖.

Let M be the matrix with elements

    m_ij = b_i a_{ij} + b_j a_{ji} − b_i b_j,   for i, j = 1, …, s.

Theorem 5.4. If b_i ≥ 0 for i = 1, …, s and M is positive semidefinite, then the RK method is B-stable.

For a proof, see e.g. [?, IV]. Runge-Kutta methods satisfying the criteria of Theorem 5.4 are often called algebraically stable.

Example 5.6. The Gauss methods as well as the Radau methods are all B-stable. So is the order 3 method (35), called NT I (from Nørsett and Thomsen).

5.3 Order reduction

Stiff problems may also suffer from another phenomenon: the observed order can be lower than the theoretical order of the method. This can be demonstrated by the Prothero and Robinson example:

    y' = λ (y − g(t)) + g'(t),   λ ∈ C^−.      (36)

Figure 4: Local error and local order when the Prothero-Robinson example is solved by NT I, for several values of λ.

The exact solution of this problem is

    y(t) = e^{λ(t − t_0)} (y_0 − g(t_0)) + g(t),

so the solution consists of a smooth solution g(t) and a (normally fast) transient towards this solution.

Example 5.7. Let g(t) = sin(t), t_0 = 0 and y_0 = sin(t_0). Do one step with the order 3 NT I method given in (35), and measure the local error and the local order. We expect to see order 4 (p + 1). The results of the experiment are given in Figure 4. For λ of moderate size the result is as expected, but for λ of large magnitude the order is only 2 for the larger stepsizes; it then slowly grows towards the expected order as h becomes smaller. How can we explain this?

One step of a RK method applied to (36) is given by

    k_i = λ ( y_0 + h Σ_j a_{ij} k_j − g(t_0 + c_i h) ) + g'(t_0 + c_i h),   i = 1, …, s,
    y_1 = y_0 + h Σ_i b_i k_i.

Let K = [k_1, …, k_s]^T, G = [g(t_0 + c_1 h), …, g(t_0 + c_s h)]^T and G' = [g'(t_0 + c_1 h), …, g'(t_0 + c_s h)]^T. Then, with 𝟙 = [1, …, 1]^T,

    K = λ (𝟙 y_0 + h A K − G) + G',
    y_1 = y_0 + h b^T K.

After some manipulation, we find that this can be written as

    y_1 = R(z)(y_0 − g(t_0)) + g(t_0) + b^T (I − zA)^{−1} ( z(𝟙 g(t_0) − G) + h G' ),      (37)

where z = λh and R(z) = 1 + z b^T (I − zA)^{−1} 𝟙. The local truncation error is then

    d = y(t_0 + h) − y_1 = (e^z − R(z))(y_0 − g(t_0)) + d̂.

In our experiment y_0 = g(t_0), so the first term does not give any contribution to the error. So let us look at

    d̂ = g(t_0 + h) − g(t_0) − b^T (I − zA)^{−1} ( z(𝟙 g(t_0) − G) + h G' ).

The local order is found by Taylor expansions. The Taylor expansion of each element of G and G' gives

    G  = Σ_{q=0}^∞ (h^q/q!) c^q g^{(q)}(t_0),
    G' = Σ_{q=0}^∞ (h^q/q!) c^q g^{(q+1)}(t_0) = Σ_{q=1}^∞ (h^{q−1}/(q−1)!) c^{q−1} g^{(q)}(t_0),

where c^q = [c_1^q, …, c_s^q]^T. Thus

    z(𝟙 g(t_0) − G) + h G' = − Σ_{q=1}^∞ ( z c^q − q c^{q−1} ) (h^q/q!) g^{(q)}(t_0),

so (taking the minus sign into account)

    d̂ = Σ_{q=1}^∞ ( 1 + b^T (I − zA)^{−1} (z c^q − q c^{q−1}) ) (h^q/q!) g^{(q)}(t_0)
       = Σ_{q=1}^∞ ψ_q(z) (h^q/q!) g^{(q)}(t_0).      (38)

The problem now is that we have to deal with a situation where |z| is large while h is small. If this were not the case, the next step would be to find the power series of (I − zA)^{−1}, that is, the Neumann series

    (I − zA)^{−1} = Σ_{k=0}^∞ (zA)^k.

Using this series, z = hλ, and collecting equal powers of h will give some of the classical order conditions. (Why only some of them?) But the Neumann series only converges for ‖zA‖ < 1, that is, for h < 1/(|λ| ‖A‖), and in general ‖A‖ = O(1). For λ of moderate size this is no real restriction, and we observe that in this case the order is about 4, as expected. For λ of large magnitude, the order approaches 4 only when h < 1/(|λ| ‖A‖), so this seems to be correct. But how can we get a theory for h > 1/(|λ| ‖A‖)? Write

    (I − zA)^{−1} = −(zA)^{−1} ( I − (zA)^{−1} )^{−1} = − Σ_{k=1}^∞ (zA)^{−k}.

The Neumann series used here will converge as long as

    ‖(zA)^{−1}‖ ≤ ‖A^{−1}‖ / |z| < 1,   that is,   h > ‖A^{−1}‖ / |λ|.      (39)

Using this expansion in (38) we get

    ψ_q(z) = 1 + b^T (I − zA)^{−1} (z c^q − q c^{q−1})
           = 1 − b^T Σ_{k=1}^∞ (zA)^{−k} (z c^q − q c^{q−1})
           = 1 − b^T A^{−1} c^q − Σ_{k=1}^∞ z^{−k} b^T A^{−k−1} ( c^q − q A c^{q−1} ).      (40)

Since c = A𝟙 by C(1), we can conclude that ψ_1 = 0.

Theorem 5.5. If the simplifying assumptions B(p) and C(η) are satisfied, then

    ψ_q = 0,   q = 1, …, min(p, η).

The proof is left as an exercise. See (5) for the definition of the simplifying assumptions; write them in matrix form before proving the theorem.

Example 5.8. (Example 5.7 continued.) For the NT I method we have

    ψ_2(z) = C + O(z^{−1}),   with a nonzero constant C ≈ 0.74.

Assuming |z| large, the first term dominates the error, and the order will be 2, as we observe in the experiment.

Comment: In such order analysis, we always assume that we can identify one term that dominates the error. However, as h becomes larger and/or |z| becomes smaller, the influence of higher order terms becomes more significant. This explains the smooth transition from order 2 to the classical order 4.

Finally, for stiffly accurate methods, the first terms satisfy b^T A^{−1} c^q = 1 for all q, so the constant term of ψ_q vanishes. For a method satisfying C(1) we then get

    ψ_2(z) = −(1/z) b^T A^{−2} ( c^2 − 2Ac ) + O(z^{−2}).

Since z = λh, the dominating error term behaves like h/λ: one power of h is lost, but since |z| is large, the error will be small.

Example 5.9. Repeat the experiment from Example 5.7, but now with Alexander's method, given by

    α |  α
    1 |  1−α    α
    --+------------
      |  1−α    α

with α = 1 − √2/2. The classical order of this method is 2. In our experiment, we have used a λ of large magnitude. The result is given in Figure 5.
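The conclusion ψ_1 = 0 under C(1) can be spot-checked numerically. The sketch below uses the one-stage Gauss method (the implicit midpoint rule: A = [1/2], b = [1], c = [1/2]), which satisfies C(1); with a single stage, applying (I − zA)^{−1} is just a scalar division:

```python
# psi_1(z) = 1 + b^T (I - zA)^{-1} (z*c - ones), evaluated for the
# implicit midpoint rule.  C(1) holds (a11 = c1), so psi_1 should
# vanish for every z.

def psi1_midpoint(z):
    a11, b1, c1 = 0.5, 1.0, 0.5
    w = (z * c1 - 1.0) / (1.0 - z * a11)   # (I - zA)^{-1} (z c - 1)
    return 1.0 + b1 * w

for z in (-0.1, -10.0, -1e6):
    print(z, psi1_midpoint(z))
```

The value is zero for moderate and for very large |z| alike, in agreement with Theorem 5.5.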

Figure 5: Local error and local order when the Prothero-Robinson example is solved by NT I and Alexander's method, for a λ of large magnitude.

The severe order reduction is clearly visible, but, on the other hand, Alexander's method clearly produces a smaller error.

6 Linear multistep methods

A k-step linear multistep method (LMM) applied to the ODE

    y' = f(t, y),   y(t_0) = y_0,   t_0 ≤ t ≤ t_end,

is given by

    Σ_{l=0}^k α_l y_{n+l} = h Σ_{l=0}^k β_l f_{n+l},      (41)

where α_l, β_l are the method coefficients, f_j = f(t_j, y_j) and t_j = t_0 + jh, h = (t_end − t_0)/N_step. Usually we require α_k = 1 and |α_0| + |β_0| > 0. To get started with a k-step method, we also need starting values y_l ≈ y(t_l), l = 0, 1, …, k−1. A method is explicit if β_k = 0, otherwise implicit. The leapfrog method

    y_{n+2} − y_n = 2h f(t_{n+1}, y_{n+1})      (42)

and the method given by

    y_{n+2} − y_{n+1} = h ( (3/2) f_{n+1} − (1/2) f_n )      (43)

are both examples of explicit 2-step methods.

Example 6.1. Given the problem y' = −2ty, y(0) = 1, with exact solution y(t) = e^{−t²}. Let h = 0.1, and y_1 = e^{−h²}. This problem is solved by (43), and the numerical solution and the error are given by the following table.

[Table: t_n, y_n and the error e_n.]

6.1 Consistency and order

We define the local discretization error τ_{n+k}(h) by

    h τ_{n+k}(h) = Σ_{l=0}^k ( α_l y(t_{n+l}) − h β_l y'(t_{n+l}) ).      (44)

You can think about h τ_{n+k} as the defect obtained when plugging the exact solution into the difference equation (41). A method is consistent if τ_{n+k}(h) → 0 as h → 0. The term h τ_{n+k}(h) can be written as a power series in h,

    h τ_{n+k}(h) = C_0 y(t_n) + C_1 h y'(t_n) + C_2 h^2 y''(t_n) + ⋯ + C_q h^q y^{(q)}(t_n) + ⋯,

by expanding y(t_n + lh) and y'(t_n + lh) into their Taylor series around t_n,

    y(t_n + lh)  = y(t_n) + (lh) y'(t_n) + (lh)^2/2! y''(t_n) + ⋯ + (lh)^q/q! y^{(q)}(t_n) + ⋯,
    y'(t_n + lh) = y'(t_n) + (lh) y''(t_n) + (lh)^2/2! y'''(t_n) + ⋯ + (lh)^q/q! y^{(q+1)}(t_n) + ⋯,

for sufficiently differentiable solutions y(t). Inserting this into (44), we get the following expressions for C_q:

    C_0 = Σ_{l=0}^k α_l,   C_q = (1/q!) Σ_{l=0}^k ( l^q α_l − q l^{q−1} β_l ),   q = 1, 2, ….      (45)

The method is consistent if C_0 = C_1 = 0. It is of order p if

    C_0 = C_1 = ⋯ = C_p = 0,   C_{p+1} ≠ 0.

The constant C_{p+1} is called the error constant.

Example 6.2. The LMM (43) is defined by

    α_0 = 0,  α_1 = −1,  α_2 = 1,  β_0 = −1/2,  β_1 = 3/2,  β_2 = 0,

thus

    C_0 = α_0 + α_1 + α_2 = 0,
    C_1 = α_1 + 2α_2 − (β_0 + β_1 + β_2) = 0,
    C_2 = (1/2!) ( α_1 + 2^2 α_2 − 2(β_1 + 2β_2) ) = 0,
    C_3 = (1/3!) ( α_1 + 2^3 α_2 − 3(β_1 + 2^2 β_2) ) = 5/12.

The method is consistent and of order 2.

Example 6.3. Is it possible to construct an explicit 2-step method of order 3? There are 4 free coefficients α_0, α_1, β_0, β_1, and 4 order conditions to be satisfied (C_0 = C_1 = C_2 = C_3 = 0). The solution is

    α_0 = −5,  α_1 = 4,  β_0 = 2,  β_1 = 4.

Test this method on the ODE of Example 1.1. (Replace the method coefficients in lmm.m.) The result is nothing but disastrous, and taking smaller steps only increases the problem. To see why, you have to know a bit about how to solve difference equations.

6.2 Linear difference equations

A linear difference equation with constant coefficients is given by

    Σ_{l=0}^k α_l y_{n+l} = φ_n,   n = 0, 1, 2, ….      (46)

The solution of this equation is a sequence {y_n} of numbers (or vectors). Let {ỹ_n} be the general solution of the homogeneous problem

    Σ_{l=0}^k α_l y_{n+l} = 0.      (47)

Let ψ_n be one particular solution of (46). The general solution of (46) is then {y_n}, where y_n = ỹ_n + ψ_n. To find a unique solution, we will need the starting values y_0, y_1, …, y_{k−1}.

Let us try ỹ_n = r^n as a solution of the homogeneous equation (47). This is true if

    Σ_{l=0}^k α_l r^{n+l} = r^n Σ_{l=0}^k α_l r^l = 0.

The polynomial ρ(r) = Σ_{l=0}^k α_l r^l is called the characteristic polynomial, and {r^n} is a solution of (47) if r is a root of ρ(r). The kth degree polynomial ρ(r) has k roots altogether, r_1, r_2, …, r_k. They can be distinct and real, they can be distinct and complex, in which case they appear in complex conjugate pairs, or they can be multiple. In the latter case, say r_1 = r_2 = ⋯ = r_μ, we get a set of linearly independent solutions {r_1^n}, {n r_1^n}, …, {n^{μ−1} r_1^n}. Altogether we have found k linearly independent solutions {ỹ_{n,l}} of the homogeneous equation, and the general solution of (46) is given by

    y_n = Σ_{l=1}^k κ_l ỹ_{n,l} + ψ_n.

The coefficients κ_l can be determined from the starting values.

Example 6.4. Given

    y_{n+4} − 6y_{n+3} + 14y_{n+2} − 16y_{n+1} + 8y_n = n,
    y_0 = 1,  y_1 = 2,  y_2 = 3,  y_3 = 4.

The characteristic polynomial is given by

    ρ(r) = r^4 − 6r^3 + 14r^2 − 16r + 8

with roots r_1 = r_2 = 2, r_3 = 1 + i, r_4 = 1 − i. As a particular solution we try ψ_n = an + b. Inserted into the difference equation, we find this to be a solution if a = 1, b = 2. The general solution has the form

    y_n = κ_1 2^n + κ_2 n 2^n + κ_3 (1 + i)^n + κ_4 (1 − i)^n + n + 2.

From the starting values we find that κ_1 = −1, κ_2 = 1/4, κ_3 = −i/4 and κ_4 = i/4. So, the solution of the problem is

    y_n = (n/4 − 1) 2^n − (i/4)(1 + i)^n + (i/4)(1 − i)^n + n + 2
        = (n/4 − 1) 2^n + 2^{n/2 − 1} sin(nπ/4) + n + 2.

Example 6.5. The homogeneous part of the difference equation of the method of Example 6.3 has the characteristic polynomial

    ρ(r) = r^2 + 4r − 5 = (r − 1)(r + 5).

One root is −5. Thus, one solution component is multiplied by a factor −5 for each step, independent of the stepsize, which explains why this method fails.
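The failure mechanism of Example 6.5 can be watched directly by iterating the homogeneous difference equation y_{n+2} + 4y_{n+1} − 5y_n = 0 from starting values that are not exactly equal. A sketch (the perturbation size is an arbitrary choice):

```python
# Iterate y_{n+2} = -4*y_{n+1} + 5*y_n.  The characteristic roots are
# 1 and -5; any perturbation excites the (-5)^n mode and is multiplied
# by roughly -5 per step, independent of the stepsize of the LMM.

def iterate(y0, y1, n_steps):
    ys = [y0, y1]
    for _ in range(n_steps):
        ys.append(-4.0 * ys[-1] + 5.0 * ys[-2])
    return ys

exact = iterate(1.0, 1.0, 10)              # constant solution, root r = 1
perturbed = iterate(1.0, 1.0 + 1e-10, 10)  # tiny perturbation in y_1
errors = [abs(a - b) for a, b in zip(exact, perturbed)]
print(errors[-1])
```

A perturbation of size 10^{-10} in the second starting value has grown to about 10^{-3} after ten steps, completely in line with the root −5 of ρ(r).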

6.3 Zero-stability and convergence

Let us start with the definition of convergence. As before, we consider the error at t_end, using N_step steps with constant stepsize h = (t_end − t_0)/N_step.

Definition 6.1. A linear multistep method (41) is convergent if, for all ODEs satisfying the conditions of the existence and uniqueness theorem, we get

    y_{N_step} → y(t_end) as h → 0,   whenever   y_l → y(t_0 + lh) as h → 0,   l = 0, 1, …, k−1.

The method is convergent of order p if, for all ODEs with f sufficiently differentiable, there exists a positive h_0 such that for all h ≤ h_0,

    ‖y(t_end) − y_{N_step}‖ ≤ K h^p   whenever   ‖y(t_0 + lh) − y_l‖ ≤ K_0 h^p,   l = 0, 1, …, k−1.

The first characteristic polynomial of an LMM (41) is

    ρ(r) = Σ_{l=0}^k α_l r^l,

with roots r_1, r_2, …, r_k. From the section on difference equations, it follows that for the boundedness of the solution y_n we require:

1. |r_i| ≤ 1 for i = 1, 2, …, k.
2. |r_i| < 1 if r_i is a multiple root.

A method satisfying these two conditions is called zero-stable. We can now state (without proof) the following important result:

Theorem 6.2. (Dahlquist) Convergence ⟺ Zero-stability + Consistency.

For a consistent method, C_0 = Σ_{l=0}^k α_l = 0, so the characteristic polynomial ρ(r) will always have one root r_1 = 1. The zero-stability requirement puts a severe restriction on the maximum order of a convergent k-step method:

Theorem 6.3. (The first Dahlquist barrier) The order p of a zero-stable k-step method satisfies

    p ≤ k + 2   if k is even,
    p ≤ k + 1   if k is odd,
    p ≤ k       if β_k ≤ 0.

Notice that the last line includes all explicit LMMs.
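Both ingredients of Dahlquist's theorem can be checked mechanically from the coefficients. The sketch below computes C_0, …, C_3 from (45) in exact rational arithmetic, and checks zero-stability via the roots of ρ(r), using the quadratic formula since both example methods are 2-step: method (43) is of order 2 and zero-stable, while the order 3 method of Example 6.3 violates the root condition.

```python
# Order constants C_q of (45) and zero-stability for two 2-step LMMs.
from fractions import Fraction as F
from math import sqrt

def C(q, alpha, beta):
    """C_0 = sum(alpha); C_q = (1/q!) sum_l (l^q a_l - q l^(q-1) b_l)."""
    if q == 0:
        return sum(alpha)
    fact = F(1)
    for i in range(2, q + 1):
        fact *= i
    return sum(F(l**q) * a - F(q * l**(q - 1)) * b
               for l, (a, b) in enumerate(zip(alpha, beta))) / fact

def quadratic_roots(a2, a1, a0):
    d = sqrt(a1 * a1 - 4 * a2 * a0)   # both examples have real roots
    return ((-a1 + d) / (2 * a2), (-a1 - d) / (2 * a2))

ab2 = ([F(0), F(-1), F(1)], [F(-1, 2), F(3, 2), F(0)])   # method (43)
bad = ([F(-5), F(4), F(1)], [F(2), F(4), F(0)])          # Example 6.3

for name, (alpha, beta) in (("(43)", ab2), ("Example 6.3", bad)):
    Cs = [C(q, alpha, beta) for q in range(4)]
    roots = quadratic_roots(float(alpha[2]), float(alpha[1]), float(alpha[0]))
    zero_stable = all(abs(r) <= 1.0 for r in roots)
    print(name, Cs, roots, zero_stable)
```

For (43) this prints C_3 = 5/12 and the roots 1 and 0 (zero-stable); for the method of Example 6.3 all four constants vanish, but the roots are 1 and −5, so the method is not zero-stable, and hence not convergent.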


More information

The collocation method for ODEs: an introduction

The collocation method for ODEs: an introduction 058065 - Collocation Methods for Volterra Integral Related Functional Differential The collocation method for ODEs: an introduction A collocation solution u h to a functional equation for example an ordinary

More information

You may not use your books, notes; calculators are highly recommended.

You may not use your books, notes; calculators are highly recommended. Math 301 Winter 2013-14 Midterm 1 02/06/2014 Time Limit: 60 Minutes Name (Print): Instructor This exam contains 8 pages (including this cover page) and 6 problems. Check to see if any pages are missing.

More information

NUMERICAL SOLUTION OF ODE IVPs. Overview

NUMERICAL SOLUTION OF ODE IVPs. Overview NUMERICAL SOLUTION OF ODE IVPs 1 Quick review of direction fields Overview 2 A reminder about and 3 Important test: Is the ODE initial value problem? 4 Fundamental concepts: Euler s Method 5 Fundamental

More information

Ordinary Differential Equations

Ordinary Differential Equations Chapter 7 Ordinary Differential Equations Differential equations are an extremely important tool in various science and engineering disciplines. Laws of nature are most often expressed as different equations.

More information

Jim Lambers MAT 772 Fall Semester Lecture 21 Notes

Jim Lambers MAT 772 Fall Semester Lecture 21 Notes Jim Lambers MAT 772 Fall Semester 21-11 Lecture 21 Notes These notes correspond to Sections 12.6, 12.7 and 12.8 in the text. Multistep Methods All of the numerical methods that we have developed for solving

More information

Notes for Numerical Analysis Math 5466 by S. Adjerid Virginia Polytechnic Institute and State University (A Rough Draft) Contents Numerical Methods for ODEs 5. Introduction............................

More information

Linear Multistep Methods

Linear Multistep Methods Linear Multistep Methods Linear Multistep Methods (LMM) A LMM has the form α j x i+j = h β j f i+j, α k = 1 i 0 for the approximate solution of the IVP x = f (t, x), x(a) = x a. We approximate x(t) on

More information

Integration of Ordinary Differential Equations

Integration of Ordinary Differential Equations Integration of Ordinary Differential Equations Com S 477/577 Nov 7, 00 1 Introduction The solution of differential equations is an important problem that arises in a host of areas. Many differential equations

More information

Solving Ordinary Differential Equations

Solving Ordinary Differential Equations Solving Ordinary Differential Equations Sanzheng Qiao Department of Computing and Software McMaster University March, 2014 Outline 1 Initial Value Problem Euler s Method Runge-Kutta Methods Multistep Methods

More information

Review Higher Order methods Multistep methods Summary HIGHER ORDER METHODS. P.V. Johnson. School of Mathematics. Semester

Review Higher Order methods Multistep methods Summary HIGHER ORDER METHODS. P.V. Johnson. School of Mathematics. Semester HIGHER ORDER METHODS School of Mathematics Semester 1 2008 OUTLINE 1 REVIEW 2 HIGHER ORDER METHODS 3 MULTISTEP METHODS 4 SUMMARY OUTLINE 1 REVIEW 2 HIGHER ORDER METHODS 3 MULTISTEP METHODS 4 SUMMARY OUTLINE

More information

Solution of ordinary Differential Equations

Solution of ordinary Differential Equations 42 Chapter 4 Solution of ordinary Differential Equations Highly suggested Reading Press et al., 1992. Numerical Recipes, 2nd Edition. Chapter 16. (Good description). See also chapter 17. And Matlab help

More information

Initial Value Problems

Initial Value Problems Chapter 2 Initial Value Problems 21 Introduction The first part of this introduction is based on [5, Chap 6] The rest of the notes mostly follow [1, Chap 12] The growth of some tumours can be modelled

More information

Numerical Methods for the Solution of Differential Equations

Numerical Methods for the Solution of Differential Equations Numerical Methods for the Solution of Differential Equations Markus Grasmair Vienna, winter term 2011 2012 Analytical Solutions of Ordinary Differential Equations 1. Find the general solution of the differential

More information

Lecture Notes on Numerical Differential Equations: IVP

Lecture Notes on Numerical Differential Equations: IVP Lecture Notes on Numerical Differential Equations: IVP Professor Biswa Nath Datta Department of Mathematical Sciences Northern Illinois University DeKalb, IL. 60115 USA E mail: dattab@math.niu.edu URL:

More information

multistep methods Last modified: November 28, 2017 Recall that we are interested in the numerical solution of the initial value problem (IVP):

multistep methods Last modified: November 28, 2017 Recall that we are interested in the numerical solution of the initial value problem (IVP): MATH 351 Fall 217 multistep methods http://www.phys.uconn.edu/ rozman/courses/m351_17f/ Last modified: November 28, 217 Recall that we are interested in the numerical solution of the initial value problem

More information

CS 450 Numerical Analysis. Chapter 8: Numerical Integration and Differentiation

CS 450 Numerical Analysis. Chapter 8: Numerical Integration and Differentiation Lecture slides based on the textbook Scientific Computing: An Introductory Survey by Michael T. Heath, copyright c 2018 by the Society for Industrial and Applied Mathematics. http://www.siam.org/books/cl80

More information

Ordinary Differential Equations

Ordinary Differential Equations Chapter 13 Ordinary Differential Equations We motivated the problem of interpolation in Chapter 11 by transitioning from analzying to finding functions. That is, in problems like interpolation and regression,

More information

1 Error Analysis for Solving IVP

1 Error Analysis for Solving IVP cs412: introduction to numerical analysis 12/9/10 Lecture 25: Numerical Solution of Differential Equations Error Analysis Instructor: Professor Amos Ron Scribes: Yunpeng Li, Mark Cowlishaw, Nathanael Fillmore

More information

Chapter 10. Initial value Ordinary Differential Equations

Chapter 10. Initial value Ordinary Differential Equations Chapter 10 Initial value Ordinary Differential Equations Consider the problem of finding a function y(t) that satisfies the following ordinary differential equation (ODE): dy dt = f(t, y), a t b. The function

More information

Advanced methods for ODEs and DAEs

Advanced methods for ODEs and DAEs Advanced methods for ODEs and DAEs Lecture 8: Dirk and Rosenbrock Wanner methods Bojana Rosic, 14 Juni 2016 General implicit Runge Kutta method Runge Kutta method is presented by Butcher table: c1 a11

More information

The Initial Value Problem for Ordinary Differential Equations

The Initial Value Problem for Ordinary Differential Equations Chapter 5 The Initial Value Problem for Ordinary Differential Equations In this chapter we begin a study of time-dependent differential equations, beginning with the initial value problem (IVP) for a time-dependent

More information

The Milne error estimator for stiff problems

The Milne error estimator for stiff problems 13 R. Tshelametse / SAJPAM. Volume 4 (2009) 13-28 The Milne error estimator for stiff problems Ronald Tshelametse Department of Mathematics University of Botswana Private Bag 0022 Gaborone, Botswana. E-mail

More information

Efficiency of Runge-Kutta Methods in Solving Simple Harmonic Oscillators

Efficiency of Runge-Kutta Methods in Solving Simple Harmonic Oscillators MATEMATIKA, 8, Volume 3, Number, c Penerbit UTM Press. All rights reserved Efficiency of Runge-Kutta Methods in Solving Simple Harmonic Oscillators Annie Gorgey and Nor Azian Aini Mat Department of Mathematics,

More information

Parallel Methods for ODEs

Parallel Methods for ODEs Parallel Methods for ODEs Levels of parallelism There are a number of levels of parallelism that are possible within a program to numerically solve ODEs. An obvious place to start is with manual code restructuring

More information

The Definition and Numerical Method of Final Value Problem and Arbitrary Value Problem Shixiong Wang 1*, Jianhua He 1, Chen Wang 2, Xitong Li 1

The Definition and Numerical Method of Final Value Problem and Arbitrary Value Problem Shixiong Wang 1*, Jianhua He 1, Chen Wang 2, Xitong Li 1 The Definition and Numerical Method of Final Value Problem and Arbitrary Value Problem Shixiong Wang 1*, Jianhua He 1, Chen Wang 2, Xitong Li 1 1 School of Electronics and Information, Northwestern Polytechnical

More information

Solving scalar IVP s : Runge-Kutta Methods

Solving scalar IVP s : Runge-Kutta Methods Solving scalar IVP s : Runge-Kutta Methods Josh Engwer Texas Tech University March 7, NOTATION: h step size x n xt) t n+ t + h x n+ xt n+ ) xt + h) dx = ft, x) SCALAR IVP ASSUMED THROUGHOUT: dt xt ) =

More information

INTRODUCTION TO COMPUTER METHODS FOR O.D.E.

INTRODUCTION TO COMPUTER METHODS FOR O.D.E. INTRODUCTION TO COMPUTER METHODS FOR O.D.E. 0. Introduction. The goal of this handout is to introduce some of the ideas behind the basic computer algorithms to approximate solutions to differential equations.

More information

Dynamics of Adaptive Time-Stepping ODE solvers

Dynamics of Adaptive Time-Stepping ODE solvers Dynamics of Adaptive Time-Stepping ODE solvers A.R. Humphries Tony.Humphries@mcgill.ca Joint work with: NICK CHRISTODOULOU PAUL FENTON RATNAM VIGNESWARAN ARH acknowledges support of the Engineering and

More information

Do not turn over until you are told to do so by the Invigilator.

Do not turn over until you are told to do so by the Invigilator. UNIVERSITY OF EAST ANGLIA School of Mathematics Main Series UG Examination 216 17 INTRODUCTION TO NUMERICAL ANALYSIS MTHE612B Time allowed: 3 Hours Attempt QUESTIONS 1 and 2, and THREE other questions.

More information

Math 411 Preliminaries

Math 411 Preliminaries Math 411 Preliminaries Provide a list of preliminary vocabulary and concepts Preliminary Basic Netwon's method, Taylor series expansion (for single and multiple variables), Eigenvalue, Eigenvector, Vector

More information

Math Numerical Analysis Homework #4 Due End of term. y = 2y t 3y2 t 3, 1 t 2, y(1) = 1. n=(b-a)/h+1; % the number of steps we need to take

Math Numerical Analysis Homework #4 Due End of term. y = 2y t 3y2 t 3, 1 t 2, y(1) = 1. n=(b-a)/h+1; % the number of steps we need to take Math 32 - Numerical Analysis Homework #4 Due End of term Note: In the following y i is approximation of y(t i ) and f i is f(t i,y i ).. Consider the initial value problem, y = 2y t 3y2 t 3, t 2, y() =.

More information

MATH 353 LECTURE NOTES: WEEK 1 FIRST ORDER ODES

MATH 353 LECTURE NOTES: WEEK 1 FIRST ORDER ODES MATH 353 LECTURE NOTES: WEEK 1 FIRST ORDER ODES J. WONG (FALL 2017) What did we cover this week? Basic definitions: DEs, linear operators, homogeneous (linear) ODEs. Solution techniques for some classes

More information

HIGHER ORDER METHODS. There are two principal means to derive higher order methods. b j f(x n j,y n j )

HIGHER ORDER METHODS. There are two principal means to derive higher order methods. b j f(x n j,y n j ) HIGHER ORDER METHODS There are two principal means to derive higher order methods y n+1 = p j=0 a j y n j + h p j= 1 b j f(x n j,y n j ) (a) Method of Undetermined Coefficients (b) Numerical Integration

More information

Runge-Kutta Theory and Constraint Programming Julien Alexandre dit Sandretto Alexandre Chapoutot. Department U2IS ENSTA ParisTech SCAN Uppsala

Runge-Kutta Theory and Constraint Programming Julien Alexandre dit Sandretto Alexandre Chapoutot. Department U2IS ENSTA ParisTech SCAN Uppsala Runge-Kutta Theory and Constraint Programming Julien Alexandre dit Sandretto Alexandre Chapoutot Department U2IS ENSTA ParisTech SCAN 2016 - Uppsala Contents Numerical integration Runge-Kutta with interval

More information

Astrodynamics (AERO0024)

Astrodynamics (AERO0024) Astrodynamics (AERO0024) 5. Numerical Methods Gaëtan Kerschen Space Structures & Systems Lab (S3L) Why Different Propagators? Analytic propagation: Better understanding of the perturbing forces. Useful

More information

Ordinary Differential Equations

Ordinary Differential Equations Ordinary Differential Equations Introduction: first order ODE We are given a function f(t,y) which describes a direction field in the (t,y) plane an initial point (t 0,y 0 ) We want to find a function

More information

Second Order ODEs. CSCC51H- Numerical Approx, Int and ODEs p.130/177

Second Order ODEs. CSCC51H- Numerical Approx, Int and ODEs p.130/177 Second Order ODEs Often physical or biological systems are best described by second or higher-order ODEs. That is, second or higher order derivatives appear in the mathematical model of the system. For

More information

Advanced Numerical Analysis. Lecture Notes Prof. Daniel Kressner EPFL / SB / MATHICSE / ANCHP Spring 2015

Advanced Numerical Analysis. Lecture Notes Prof. Daniel Kressner EPFL / SB / MATHICSE / ANCHP Spring 2015 Advanced Numerical Analysis Lecture Notes Prof. Daniel Kressner EPFL / SB / MATHICSE / ANCHP Spring 205 February 9, 205 Remarks Although not necessary, you may find it helpful to complement these lecture

More information

Differential Equations

Differential Equations Differential Equations Overview of differential equation! Initial value problem! Explicit numeric methods! Implicit numeric methods! Modular implementation Physics-based simulation An algorithm that

More information

CS 257: Numerical Methods

CS 257: Numerical Methods CS 57: Numerical Methods Final Exam Study Guide Version 1.00 Created by Charles Feng http://www.fenguin.net CS 57: Numerical Methods Final Exam Study Guide 1 Contents 1 Introductory Matter 3 1.1 Calculus

More information

ORBIT 14 The propagator. A. Milani et al. Dipmat, Pisa, Italy

ORBIT 14 The propagator. A. Milani et al. Dipmat, Pisa, Italy ORBIT 14 The propagator A. Milani et al. Dipmat, Pisa, Italy Orbit 14 Propagator manual 1 Contents 1 Propagator - Interpolator 2 1.1 Propagation and interpolation............................ 2 1.1.1 Introduction..................................

More information

Fixed point iteration and root finding

Fixed point iteration and root finding Fixed point iteration and root finding The sign function is defined as x > 0 sign(x) = 0 x = 0 x < 0. It can be evaluated via an iteration which is useful for some problems. One such iteration is given

More information

Modeling & Simulation 2018 Lecture 12. Simulations

Modeling & Simulation 2018 Lecture 12. Simulations Modeling & Simulation 2018 Lecture 12. Simulations Claudio Altafini Automatic Control, ISY Linköping University, Sweden Summary of lecture 7-11 1 / 32 Models of complex systems physical interconnections,

More information

Introduction to Initial Value Problems

Introduction to Initial Value Problems Chapter 2 Introduction to Initial Value Problems The purpose of this chapter is to study the simplest numerical methods for approximating the solution to a first order initial value problem (IVP). Because

More information

Defect-based a-posteriori error estimation for implicit ODEs and DAEs

Defect-based a-posteriori error estimation for implicit ODEs and DAEs 1 / 24 Defect-based a-posteriori error estimation for implicit ODEs and DAEs W. Auzinger Institute for Analysis and Scientific Computing Vienna University of Technology Workshop on Innovative Integrators

More information

ECE257 Numerical Methods and Scientific Computing. Ordinary Differential Equations

ECE257 Numerical Methods and Scientific Computing. Ordinary Differential Equations ECE257 Numerical Methods and Scientific Computing Ordinary Differential Equations Today s s class: Stiffness Multistep Methods Stiff Equations Stiffness occurs in a problem where two or more independent

More information