Efficient numerical methods for the solution of stiff initial-value problems and differential algebraic equations


REVIEW PAPER

Efficient numerical methods for the solution of stiff initial-value problems and differential algebraic equations

By J. R. Cash
Department of Mathematics, Imperial College of Science, Technology and Medicine, South Kensington, London SW7 2AZ, UK (j.cash@ic.ac.uk)

Received 13 December 2002; accepted 22 January 2003; published online 24 February 2003

In recent years, after a prolonged period of intense activity, the study of numerical methods for solving stiff initial-value problems for ordinary differential equations and differential algebraic equations has reached a certain maturity. There now exist some excellent codes which are both efficient and reliable for solving these particular classes of problems. In this paper, we sketch some of the main theory which underpins stiff integration methods and we use this to describe, and put into context, some of the best codes currently available. By referencing only codes which have been thoroughly tested and are widely available, our aim is to direct users of numerical software to those codes which they should try initially if faced with the problem of solving ordinary differential equations of this type. An additional feature is that the codes which we propose serve as benchmarks against which any new codes can be evaluated.

Keywords: stiff initial-value problems; differential algebraic equations; A-stability

1. Introduction

In the early 1950s, as a result of some pioneering work by Curtiss & Hirschfelder (1952), it was realized that there was an important class of ordinary differential equations (ODEs), which have become known as stiff equations, which presented a severe challenge to numerical methods that existed at that time. Since then an enormous amount of effort has gone into the analysis of stiff problems and, as a result, a great many numerical methods have been proposed for their solution.
More recently, however, there have been some strong indications that the theory which underpins stiff computation is now quite well understood and, in particular, the excellent text of Hairer & Wanner (1996) has helped put this theory on a firm basis. As a result of this, some powerful codes have now been developed and these can solve quite difficult problems in a routine and reliable way. The main purpose of this paper is to outline some of the important theory behind stiff computation and to direct users of numerical software to those codes which are most likely to be effective for their particular problem.

© 2003 The Royal Society

We are careful to recommend

codes which are freely available and are accompanied by extensive documentation that explains clearly how the codes should be used. We anticipate that this will serve the dual purpose of helping the user decide whether their problem can be solved by these particular codes and, if it can, of demonstrating clearly how the codes should be used. In writing this article we have been careful not to use words such as 'survey' or 'state of the art' so as not to downgrade codes that we have not mentioned. There do exist many other codes, some of which are described in Hairer & Wanner (1996, p. 143), which are also freely available. Furthermore, we cannot claim that the three codes that we discuss will necessarily remain the methods of choice. What we do claim, however, is that the codes described in this paper are amongst the best currently available, and have been for a few years, and at present they set the standards by which other codes can be judged. In particular we recommend these codes as being the ones that a user should try first if they have a stiff ODE or initial-value differential algebraic equation to solve.

2. Stiff differential systems

One of the major difficulties associated with the study of stiff differential systems is that a good mathematical definition of the concept of stiffness does not exist. This is rather remarkable given the depth of analysis carried out on these problems, but it simply reflects the difficulty in general of analysing nonlinear ODEs. In particular we find that, if we linearize our system using standard techniques based on small perturbations, then we do not get the information we require. To get an intuitive idea of what stiffness is, it is convenient to consider the simple model problem

u' = z,  z' = -(λ + 1)z - λu,  (2.1)

where λ > 0. The general solution of this first-order system is

u = A exp(-x) + B exp(-λx).  (2.2)

We rewrite (2.1) in the usual vector form
y' = f(x, y),  where y = (u, z)^T,  f(x, y) = (z, -(λ + 1)z - λu)^T,  (2.3)

and we integrate it numerically using the (explicit) forward Euler rule

y_{n+1} = y_n + h f(x_n, y_n),  where y_n = (u_n, z_n)^T,  (2.4)

using a fixed step-length h. A simple stability analysis shows that if we require

y_n → 0 as n → ∞,  (2.5)

which mirrors the asymptotic behaviour of the analytic solution (2.2), then the two inequalities

0 < h < 2  and  0 < h < 2/λ  (2.6)

must be satisfied. It is clear that if λ ≫ 0 then the step-length of integration is severely restricted by (2.6) even though the component of the solution, that is exp(-λx), which causes this restriction becomes negligible very quickly. Indeed, for

the numerical solution y_n to have the desired asymptotic behaviour (2.5), it is necessary for (2.6) to hold even after the term exp(-λx) has become negligible and even if the initial conditions are such that B = 0. We thus have the very undesirable situation that the larger the value of λ, the more severe the step-size restriction defined by (2.6), even though the transient exp(-λx) tends to zero more quickly, and the transient phase becomes much shorter, as λ increases. Contrast this behaviour with the solution of

u' = -u  (2.7)

using the forward Euler method. After a very short time the solution of (2.7) is virtually identical to (2.2), but the only restriction on the step-length when integrating (2.7) by the forward Euler method is

0 < h < 2,  (2.8)

which for large λ is a much less severe restriction than (2.6). Furthermore, if instead of using (2.4), we use the (implicit) backward Euler method

y_{n+1} = y_n + h f(x_{n+1}, y_{n+1})  (2.9)

to integrate (2.1), then there is no step-size restriction due to stability. Thus in the case of linear differential equations with constant coefficients we are able to gain an idea of what the problem of stiffness is. However, these ideas are difficult to extend to general nonlinear systems and there is the added complication that account needs to be taken of both the range of integration and the initial conditions. (For example, if the range of integration of (2.1) is [0, 1/λ], then the restriction (2.6) would not be a problem.) An intuitive idea of what stiff equations are is that they are problems with some smooth and some transient solutions, where all solutions reach the smooth one after a short time (after the transient phase has finished). If an explicit method is used to solve such a system, then the step-length of integration is controlled by a transient solution (which quickly becomes negligible), whereas outside the transient phase we wish the step-length to be controlled by accuracy alone.
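The contrast between the restrictions (2.6) and the unconditional stability of the backward Euler rule (2.9) is easy to reproduce numerically. The following sketch (our own illustration, not from the paper; λ = 50 and the step-length are arbitrary choices, and for this linear problem the implicit equation (2.9) reduces to one linear solve per step) integrates the model problem (2.1) with both rules at a step-length that violates the explicit bound 2/λ = 0.04:

```python
import numpy as np

lam = 50.0
# Model problem (2.1) written as the linear system y' = M y, y = (u, z)^T.
M = np.array([[0.0, 1.0], [-lam, -(lam + 1.0)]])
y0 = np.array([1.0, 0.0])   # both the exp(-x) and exp(-lam*x) modes are present

def forward_euler(y, h, n):
    """Explicit rule (2.4): y_{n+1} = y_n + h M y_n."""
    for _ in range(n):
        y = y + h * (M @ y)
    return y

def backward_euler(y, h, n):
    """Implicit rule (2.9) for a linear problem: (I - hM) y_{n+1} = y_n."""
    A = np.eye(2) - h * M
    for _ in range(n):
        y = np.linalg.solve(A, y)
    return y

h = 0.05                     # violates the explicit stability bound h < 2/lam
explicit = forward_euler(y0, h, 1000)
implicit = backward_euler(y0, h, 1000)
print(np.abs(explicit).max(), np.abs(implicit).max())
```

The explicit solution explodes even though the transient exp(-λx) responsible for the restriction is negligible almost immediately, while the implicit solution decays for any h > 0.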
This suggests a rather more pragmatic definition of stiffness, where the definition is not based on an analysis of the actual differential equation to be solved but is instead based on the relative performance of implicit and explicit integration methods. Perhaps the best, and the oldest, of definitions of this type is due to Curtiss & Hirschfelder, who, in 1952, said that stiff equations are equations where certain implicit methods, and in particular backward-differentiation formulae (BDFs), perform better, usually tremendously better, than explicit ones. We will discuss BDFs later in § 3. For now we will simply note that the backward Euler method (2.9) is a BDF and that codes based on these formulae are amongst the most popular and efficient currently available. Although the definition of Curtiss & Hirschfelder (1952) is not at all precise, it does encapsulate our intuitive idea of what stiffness is. A third intuitive idea of the nature of stiff equations is that they are multiscale problems. That is, stiff equations represent coupled physical systems having components which vary on very different time-scales. Several examples which elaborate on this intuition can be found in Hairer & Wanner (1996, ch. 4). In view of the developments of this section, the predictable reaction of numerical analysts was to look at the derivation of implicit methods for stiff systems. Typically, the numerical methods that were available for non-stiff problems at that time

were explicit Runge Kutta methods and (explicit) Adams methods, often used in a predictor corrector framework. The natural way forward, therefore, was to analyse the performance of implicit Runge Kutta and implicit linear multistep methods, and we will consider these two classes of formulae in the next two sections.

3. Linear multistep methods

In this section we consider the numerical integration of the first-order system

dy/dx = f(x, y),  y(x_0) = y_0,  (3.1)

using the implicit linear multistep method

Σ_{j=0}^{k} α_j y_{n+j} = h Σ_{j=0}^{k} β_j f_{n+j},  where α_k = 1, β_k ≠ 0.  (3.2)

Following the work of Curtiss & Hirschfelder (1952), the problem confronting numerical analysts was to derive high-order methods of the form (3.2) which have the stability necessary for dealing with stiff differential systems. Before analysing this problem we first need to give some definitions. We define

L(y(x_n); h) = Σ_{j=0}^{k} α_j y(x_{n+j}) - h Σ_{j=0}^{k} β_j f(x_{n+j}, y(x_{n+j})),  (3.3)

where y(x) is the true solution of (3.1) and x_{n+j} = x_n + jh. If

L(y(x_n); h) = C_{p+1} h^{p+1} y^{(p+1)}(x_n) + O(h^{p+2}),  (3.4)

then the linear multistep method (3.2) is said to have order p. Traditionally the term C_{p+1} was called the error constant, and this is uniquely defined for (3.3). However, more recently, Hairer et al. (1987) proposed a different normalization and defined

C_{p+1} / Σ_{i=0}^{k} β_i  (3.5)

to be the error constant (see Hairer et al. 1987, p. 373). In order to carry out a stability analysis of (3.2) it is usual to consider the scalar equation

y' = λy  with Re(λ) < 0.  (3.6)

Note that the solution of (3.6) tends to zero as x → ∞, and we require the numerical solution to have a similar behaviour. If we now apply (3.2) to the solution of (3.6), we have

Σ_{j=0}^{k} (α_j - qβ_j) y_{n+j} = 0,  where q = hλ.  (3.7)

This is a linear difference equation with constant coefficients, and so it can be solved using the standard substitution

y_{n+j} = a b^{n+j},  (3.8)

where a and b are constants. Substituting (3.8) into (3.7) we have

Σ_{j=0}^{k} (α_j - qβ_j) b^j = 0.  (3.9)

The solutions of (3.7) are stable if all roots b_j of (3.9) satisfy |b_j| ≤ 1, with any repeated roots being strictly less than one in modulus. We now need to give one further stability definition which will allow us to state a celebrated theorem due to Dahlquist (1963).

Definition 3.1 (absolute stability). Let

S = {q ∈ C : all roots of (3.9) satisfy |b_i| ≤ 1, with repeated roots b_i satisfying |b_i| < 1}.  (3.10)

Then S is called the stability domain, or region of absolute stability, of (3.2).

Based on this, Dahlquist gave the following definition for a method with stability domain S.

Definition 3.2 (A-stability). If C⁻ ⊆ S, where C⁻ = {q : Re(q) < 0}, then the method is said to be A-stable.

This definition requires that a numerical integration method should be stable for the integration of (3.6) for any λ with Re(λ) < 0. As far as stability is concerned, the property of A-stability is an excellent one for a code intended for the solution of stiff systems to possess. It might seem therefore that the problem of stiffness can be solved rather simply by deriving high-order A-stable methods. A rather more pessimistic way of looking at this stability requirement is to note that, since it comes from a linearization of (3.1), it could be regarded as a necessary but not a sufficient condition for a method to be efficient for integrating stiff systems. Put another way, we might claim that, if a numerical method is not able to solve the very simple problem (3.6) in a satisfactory way, then it is not likely to be successful for solving general nonlinear systems. However, this is all academic because the impossibility of obtaining high-order A-stable linear multistep methods was demonstrated by the following theorem.

Theorem 3.3 (Dahlquist 1963).
(i) An explicit linear multistep method cannot be A-stable.
(ii) An A-stable linear multistep method of order p has p ≤ 2.
(iii) If the order of an A-stable linear multistep method is 2, then the error constant of the method satisfies C ≤ -1/12.

Furthermore, Dahlquist showed that the only A-stable linear multistep method with C = -1/12 is the trapezium rule, and there are arguments against using even this method (Gourlay 1970), as it has the incorrect damping for dealing with very stiff systems. This theorem of Dahlquist's is arguably one of the most important ever published in the field of stiff computation. It demonstrates vividly the difficulty of deriving efficient linear multistep methods for solving stiff differential equations and has probably stopped numerous researchers from wasting their time looking for high-order A-stable linear multistep methods. Since methods of order less than two are not sufficiently

accurate to be of general practical use for solving stiff equations, we are faced with a difficult decision. If we are to obtain useful methods with order greater than two, we need either to weaken the requirement of A-stability to something which is not so strong or else to consider a completely different class of methods. We first consider the approach where we weaken the stability requirement. There have been many proposals concerning how the stability property can be weakened from A-stability to something less demanding while still producing methods which are effective for solving stiff equations. The two most useful of these definitions are probably stiff stability, due to Gear (1971), and A(α)-stability, as proposed by Widlund (1967). In what follows we will give the definition of A(α)-stability.

Definition 3.4 (Widlund 1967). A convergent linear multistep method is said to be A(α)-stable for α ∈ (0, π/2) if its region of absolute stability S satisfies

S ⊇ S_α = {q : |arg(-q)| < α, q ≠ 0}.  (3.11)

This definition calls for a numerical method to have a stability region which is unbounded but which does not include the whole complex left-hand half-plane. One of the reasons why this is such a useful definition is that many problems have eigenvalues of the Jacobian that lie in a sector S_α. It is an interesting byproduct that BDFs satisfy this property with order up to six. Codes based on BDFs are perhaps the most famous and widely used of all codes for solving stiff differential systems and differential algebraic equations. The general form taken by the BDF is

Σ_{j=0}^{k} α_j y_{n+j} = h β_k f_{n+k}.  (3.12)

Since these are linear multistep formulae, they cannot be A-stable with order higher than two. By imposing the condition that, for general k, the BDF (3.12) has order k, we obtain unique values for the coefficients appearing in (3.12), where we normalize by setting α_k = 1. The resulting formulae are A-stable for orders up to two and are A(α)-stable for orders up to six.
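Both the derivation of the BDF coefficients from the order conditions and the root condition behind Definition 3.1 can be checked mechanically. The sketch below is our own illustration (not taken from any production code): it solves for the coefficients of the k-step BDF (3.12) by requiring exactness on polynomials up to degree k, and then tests whether a given q = hλ lies in the stability domain S by examining the roots of (3.9):

```python
import numpy as np

def bdf_coefficients(k):
    """Coefficients of the k-step BDF (3.12), normalized with alpha_k = 1.
    Exactness on y = x**p, p = 0..k, at the nodes x = 0..k gives a linear
    system for alpha_0..alpha_{k-1} and beta_k."""
    A = np.zeros((k + 1, k + 1))
    rhs = np.zeros(k + 1)
    for p in range(k + 1):
        for j in range(k):                                # unknowns alpha_j
            A[p, j] = j ** p
        A[p, k] = -p * k ** (p - 1) if p >= 1 else 0.0    # unknown beta_k
        rhs[p] = -(k ** p)                                # alpha_k = 1 moved across
    sol = np.linalg.solve(A, rhs)
    alpha = np.append(sol[:k], 1.0)
    beta = np.zeros(k + 1)
    beta[k] = sol[k]                                      # only beta_k is nonzero
    return alpha, beta

def in_stability_domain(alpha, beta, q, tol=1e-9):
    """Root condition (3.9), (3.10): q is in S iff all roots b of
    sum_j (alpha_j - q beta_j) b**j = 0 satisfy |b| <= 1, with repeated
    roots strictly inside the unit circle."""
    roots = np.roots((np.asarray(alpha) - q * np.asarray(beta))[::-1])
    for r in roots:
        mult = np.sum(np.abs(roots - r) < tol)
        if abs(r) > 1 + tol or (mult > 1 and abs(r) >= 1 - tol):
            return False
    return True

alpha, beta = bdf_coefficients(2)   # BDF2: alpha = [1/3, -4/3, 1], beta_2 = 2/3
print(alpha, beta[2])
print(in_stability_domain(alpha, beta, -10 + 5j))   # deep in the left half-plane
print(in_stability_domain(alpha, beta, 1.0))        # near the positive real axis
```

For k = 1 the routine reproduces the backward Euler method, and for k = 2 the familiar BDF2; the second query illustrates that the stability domain of BDF2 excludes a bounded region near the positive real axis while containing the whole left half-plane.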
As might be expected, as the order of the BDF is increased, the angle α of A(α)-stability decreases, and for k ≥ 7 the BDFs are not A(α)-stable for any value of α. Because of their relative simplicity and good stability properties, BDFs form the basis of some very powerful codes (Brenan et al. 1989; Hindmarsh 1980), and we will return to these in § 6.

4. Implicit Runge Kutta methods

We saw in the previous section that, if we relax the stability requirement from A-stability to A(α)-stability, then it is possible to derive high-order linear multistep methods satisfying this condition. In particular we are able to derive A(α)-stable BDFs with order up to six. In this section we will be concerned with an alternative approach where we demand full A-stability but where we look at a class of methods other than linear multistep methods. In the previous section we defined the concepts of stability domain and A-stability for linear multistep methods, and the extension of these definitions to other classes of formulae is straightforward (see, for example, Hairer & Wanner 1996). Although Dahlquist's theorems apply exclusively to linear multistep methods, it is clear that no explicit integration method can be A(0)-stable

(Hairer & Wanner 1996), essentially because the stability function of an explicit method is a polynomial. In view of the fact that explicit Runge Kutta methods are widely used for the numerical solution of non-stiff initial-value problems (IVPs), it is natural to extend their definition to include implicit methods. The general class of Runge Kutta methods is defined as

y_{n+1} = y_n + h Σ_{i=1}^{s} b_i k_i,  k_i = f(x_n + c_i h, y_n + h Σ_{j=1}^{r} a_{ij} k_j).  (4.1)

If the upper limit r in the sum defining the k_i is i - 1, then we have an explicit Runge Kutta method; if it is i, then we have a diagonally implicit Runge Kutta method (Alexander 1977); and if it is s, then we have a fully implicit Runge Kutta method. The integer s appearing in this definition is known as the stage number of the method. If we apply (4.1) to the test equation (3.6), we obtain

y_{n+1} = R(hλ) y_n,  (4.2)

where

R(z) = 1 + z b^T (I - zA)^{-1} e.  (4.3)

Here e is a vector of length s with all elements equal to unity, b is the vector of weights b_i and A is the matrix (a_{ij}). Implicit Runge Kutta formulae became of considerable interest for the integration of stiff equations when it was shown by Butcher (1964) that the s-stage fully implicit Gaussian Runge Kutta method of order 2s is A-stable for all s. This is in sharp contrast to what is possible for linear multistep methods. However, the price to be paid for considering the fully implicit form (4.1) is that it is relatively expensive to solve for the k_i. If, for example, y ∈ R^N, then (4.1) calls for the solution of a set of Ns nonlinear algebraic equations at each time-step. If this is carried out in a naive way using Newtonian iteration followed by Gaussian elimination to solve the resulting system of linear algebraic equations, then this calls for (1/3)(Ns)^3 flops at each time-step. This is s^3 times the computational effort required by linear multistep methods to solve the linear equations.
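The stability function (4.3) is straightforward to evaluate numerically. The following sketch (our own illustration) checks it for the backward Euler method viewed as a one-stage implicit Runge Kutta method, where R(z) = 1/(1 - z), and for the two-stage Gauss method of order four, whose A-stability is guaranteed by Butcher's result; the coefficient values used are the standard ones:

```python
import numpy as np

def stability_function(A, b, z):
    """R(z) = 1 + z b^T (I - zA)^{-1} e, eq. (4.3)."""
    s = len(b)
    e = np.ones(s)
    return 1.0 + z * np.asarray(b) @ np.linalg.solve(np.eye(s) - z * np.asarray(A), e)

# Backward Euler as a 1-stage implicit RK method: R(z) = 1/(1 - z).
A_be, b_be = [[1.0]], [1.0]

# 2-stage Gauss method (order 2s = 4), A-stable by Butcher's result.
r3 = np.sqrt(3.0)
A_g = [[1/4, 1/4 - r3/6], [1/4 + r3/6, 1/4]]
b_g = [1/2, 1/2]

print(stability_function(A_be, b_be, -1.0))       # 1/(1 - (-1)) = 0.5
print(abs(stability_function(A_g, b_g, -10.0)))   # < 1 on the negative real axis
print(abs(stability_function(A_g, b_g, 5j)))      # = 1 on the imaginary axis
```

For the Gauss method R(z) is the (2,2) Padé approximation to exp(z), so |R| is exactly one on the imaginary axis and strictly less than one throughout the open left half-plane.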
Of course, in a practical implementation, an attempt would be made to freeze the Jacobian over several successive steps, but even in this case our conclusions would remain broadly the same. Many people have worked on the problem of trying to implement fully implicit Runge Kutta formulae in an efficient way (Burrage 1982; Hairer & Wanner 1981; Varah 1979). Liniger & Willoughby (1970) were amongst the first to demonstrate clearly that it is generally much more efficient to use Newtonian iteration, rather than direct iteration, to solve for the k_i at each step. When applying a single step of Newtonian iteration it is necessary to solve, at each step, a system of linear algebraic equations with the coefficient matrix

I - hA ⊗ J  (4.4)

for the Newtonian increment. Butcher (1976) and Bickart (1977) independently proposed premultiplying these linear equations by (hA)^{-1} ⊗ I. If we now define a new matrix P by

P^{-1} A^{-1} P = D,  (4.5)

where D is a diagonal matrix, then the linear system of algebraic equations to be solved has the coefficient matrix

h^{-1} D ⊗ I - I ⊗ J.  (4.6)
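The usefulness of the transformation (4.5), (4.6) depends on the spectrum of A^{-1}. For the three-stage Radau method discussed below, the coefficient matrix A can be built from the collocation conditions at the Radau nodes (a standard construction, sketched here with NumPy as our own illustration), and its inverse indeed has one real eigenvalue and one complex conjugate pair:

```python
import numpy as np

# Radau IIA nodes for s = 3 (the nodes underlying the fifth-order Radau5 method):
c = np.array([(4 - np.sqrt(6)) / 10, (4 + np.sqrt(6)) / 10, 1.0])

# Collocation construction: a_ij = integral from 0 to c_i of the j-th
# Lagrange basis polynomial on the nodes c.
s = len(c)
A = np.zeros((s, s))
for j in range(s):
    p = np.poly1d([1.0])
    for m in range(s):
        if m != j:
            p = p * np.poly1d([1.0, -c[m]]) * (1.0 / (c[j] - c[m]))
    P = p.integ()                       # antiderivative with constant 0
    A[:, j] = P(c) - P(0.0)

eigs = np.linalg.eigvals(np.linalg.inv(A))
print(eigs)   # one real eigenvalue, one complex conjugate pair
```

This spectrum is exactly what allows the 3N-dimensional Newton system to be split into one real N-dimensional block and one complex (or real 2N-dimensional) block, as described next.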

The approach now is to look for special classes of Runge Kutta formulae for which this matrix takes a particularly simple form. In particular it would be desirable to identify those matrices A which are such that the equations are to some extent uncoupled. For the special case where the matrix A^{-1} has one real eigenvalue γ and a complex conjugate pair of eigenvalues α ± iβ, which is the case for a three-stage implicit Runge Kutta method such as the fifth-order Radau formula, the coefficient matrix becomes

( h^{-1}γI - J        0              0        )
(      0        h^{-1}αI - J   -h^{-1}βI     )
(      0         h^{-1}βI     h^{-1}αI - J   ).

In this case, rather than having one linear system of size 3N to solve, we have two linear systems to solve, one of size N and one of size 2N. In general it is, of course, a much less computationally expensive task to solve the uncoupled systems than to solve the full system of size 3N. The precise way in which this uncoupled system can be solved is described fully in Hairer & Wanner (1996). The conclusion is that the work involved in solving the linear algebraic equations for a three-stage Radau Runge Kutta method is 5N^3/3 flops rather than the 27N^3/3 flops that would be required if the Newtonian iteration had been naively applied. Using many of these ideas, a code, Radau5, which is based on the fifth-order, three-stage Radau Runge Kutta method, has been derived by Hairer & Wanner (1996). Numerical testing has shown this method to be a very powerful one (to the surprise of quite a few numerical analysts), and we will return to it in § 6. Another attempt to reduce the operation count for implicit Runge Kutta methods was due to Nørsett (1976), who considered Runge Kutta methods having a coefficient matrix with a one-point spectrum. One such class of formulae is the singly implicit Runge Kutta methods, which have a coefficient matrix with a single repeated real eigenvalue. These methods were further extended by Burrage (1978), who derived local error estimates, and by Burrage et al.
(1980), who derived a code based on these methods. The key to deriving singly implicit methods is to choose the c_i appearing in (4.1) as

c_i = γ x_i,  (4.7)

where the x_i are the zeros of the Laguerre polynomial L_s(x). However, a potential problem with this formulation is that many of the c_i lie outside the interval of integration. In an attempt to overcome this problem, Butcher & Cash (1990) proposed a class of diagonally extended singly implicit methods which modifies the original singly implicit method by adding on extra diagonally implicit stages. A code based on these methods is presented in Butcher et al. (1996). However, while these codes are really quite effective, it is probably fair to say that they are not yet generally competitive with the codes described in this paper. In an attempt to generalize Runge Kutta methods to a multistep form, Butcher (1987) considered the concept of general linear methods. The aim was, of course, to combine the desirable properties of Runge Kutta and linear multistep methods into a single class of formulae. In particular, a multistep formulation would allow the methods, in general, to use fewer function evaluations per step than Runge Kutta methods. They would use more function evaluations per step than linear multistep methods, but the gain would be considerably enhanced stability. It is also possible to approach this goal from a different direction by asking how linear multistep formulae

can be modified to obtain better stability at the cost of a few extra function evaluations. Adopting this latter point of view, Cash (2000) proposed a class of extended multistep methods, and we will consider these in the next section.

(a) General linear methods: extended multistep methods

As we saw earlier in this section, a major crossroads in the development of numerical methods for solving stiff initial-value problems was due to the theorem of Dahlquist which bounds the order of A-stable linear multistep methods. As we explained, the reaction to this was either to continue with linear multistep methods but to accept something less than A-stability (this led to the development of BDF codes) or to consider instead a different class of methods such as implicit Runge Kutta methods (and this led to the code Radau5). A third approach, which is somewhere between these two extremes, is to consider a class of extended linear multistep methods which take the form

Σ_{j=0}^{k} α_j y_{n+j} = h β_k f_{n+k} + h β_{k+1} f_{n+k+1}.  (4.8)

The key idea here is that we seek to obtain improved stability by including the superfuture point y_{n+k+1}. The result of this is that the unknown y_{n+k} is computed using past values y_{n+j}, j < k, as well as the future value y_{n+k+1}, and so this approach can be regarded in some sense as requiring the solution of a localized boundary-value problem. For each value of the integer k > 0, we obtain a formula of the form (4.8) with order k + 1 by solving

Σ_j α_j j^p = p Σ_j β_j j^{p-1},  p = 0, 1, ..., k + 1,  with normalization α_k = 1.  (4.9)

When using these formulae in a practical implementation, the immediate problem that confronts us is how to compute y_{n+k+1}. This problem has been analysed in some detail (Cash 2000), and we now describe the final algorithm that is used to compute y_{n+k}. A single step of a modified extended BDF (MEBDF) is as follows.
Assuming that the back values y_n, y_{n+1}, ..., y_{n+k-1} are known:

(i) compute ȳ_{n+k} from the k-step BDF of order k,

ȳ_{n+k} + Σ_{j=0}^{k-1} ᾱ_j y_{n+j} = h β̄_k f(x_{n+k}, ȳ_{n+k});  (4.10)

(ii) compute ȳ_{n+k+1} from the k-step BDF of order k,

ȳ_{n+k+1} + ᾱ_{k-1} ȳ_{n+k} + Σ_{j=0}^{k-2} ᾱ_j y_{n+j+1} = h β̄_k f(x_{n+k+1}, ȳ_{n+k+1});  (4.11)

(iii) compute y_{n+k} from the modified multistep formula

y_{n+k} + Σ_{j=0}^{k-1} α_j y_{n+j} = h β_{k+1} f(x_{n+k+1}, ȳ_{n+k+1}) + h β̄_k f_{n+k} + h (β_k - β̄_k) f(x_{n+k}, ȳ_{n+k}).  (4.12)
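For k = 1 the scheme (4.10)-(4.12) is small enough to write out in full. The sketch below is our own minimal illustration (scalar problems only, a user-supplied derivative for the Newton iteration, and none of the step-size and error control of the Mebdfdae code): the coefficients β_1 = 3/2, β_2 = -1/2 follow from the order conditions (4.9) with k = 1, and β̄_1 = 1 is the backward Euler coefficient. A convergence test on y' = -y confirms order two:

```python
import numpy as np

def solve_be(f, fp, y_init, x, y_guess, h):
    """Newton iteration for a backward-Euler-type stage Y = y_init + h f(x, Y)."""
    Y = y_guess
    for _ in range(50):
        F = Y - y_init - h * f(x, Y)
        Y_new = Y - F / (1.0 - h * fp(x, Y))
        if abs(Y_new - Y) < 1e-14:
            return Y_new
        Y = Y_new
    return Y

def mebdf1_step(f, fp, x, y, h):
    """One step of the k = 1 MEBDF scheme (4.10)-(4.12); order 2."""
    yb1 = solve_be(f, fp, y, x + h, y, h)            # (i)  predictor at x + h
    yb2 = solve_be(f, fp, yb1, x + 2*h, yb1, h)      # (ii) superfuture point x + 2h
    # (iii) corrector: y_{n+1} = y_n - h/2 f(x+2h, yb2) + h/2 f(x+h, yb1)
    #                          + h f(x+h, y_{n+1}),  solved implicitly.
    rhs = y + h * (-0.5) * f(x + 2*h, yb2) + h * 0.5 * f(x + h, yb1)
    return solve_be(f, fp, rhs, x + h, yb1, h)

f  = lambda x, y: -y
fp = lambda x, y: -1.0

def integrate(h):
    y, x = 1.0, 0.0
    while x < 1.0 - 1e-12:
        y = mebdf1_step(f, fp, x, y, h)
        x += h
    return y

err1 = abs(integrate(0.1)  - np.exp(-1.0))
err2 = abs(integrate(0.05) - np.exp(-1.0))
print(err1 / err2)   # close to 4, as expected for a second-order method
```

Halving the step-length reduces the error by a factor close to four, consistent with the claimed order k + 1 = 2; note also that all three stages use the same coefficient in front of the Jacobian, which is the source of the economy described next.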

Assuming that all the intermediate error checks are satisfied, the solution y_{n+k} computed from (iii) is the accepted approximation to y(x_{n+k}). This scheme, which is a general linear method, consists of three distinct stages, and so it may, at first sight, seem to be much more expensive than a linear multistep method. However, this is not normally the case. We first note that the three systems of algebraic equations to be solved, one at each stage, have the same coefficient matrix I - h β̄_k J. Also, in regimes where the step-size is constant, we have extremely good approximations to the quantities ȳ_{n+k} and y_{n+k}. For this reason, the MEBDF scheme that we have just described is often not a great deal more expensive per step than a BDF. Of course, the gain that we have made is that the MEBDFs have considerably better stability than BDFs. In fact it can be shown that the MEBDFs are A-stable with order up to and including four (of course they are not linear multistep methods and so are not restricted by Dahlquist's theorem) and A(α)-stable up to order nine. Even better stability can be obtained by adding more superfuture points, and there exist several conjectures linking the number of such points with the highest order of A-stability that can be achieved. Using the ideas explained in this section, the code Mebdfdae has been developed for stiff initial-value ODEs and differential algebraic equations (DAEs). Numerical testing has shown this code to be a very powerful one, and we return to it in § 6. Plots of the stability regions for MEBDFs are given in Hairer & Wanner (1996, p. 267). Further development of the boundary-value approach, where conditions on the solution are specified at both the beginning and the end of the integration range, has been carried out by Brugnano & Trigiante (1998), who introduced the concept of top-order methods.
The idea here is to derive a standard linear k-step method of high order but to distribute the side conditions so that, for a method of step-number k, there are j boundary conditions imposed initially at x_0, x_1, ..., x_{j-1}, and k - j conditions imposed for large x at x_N, x_{N+1}, ..., x_{N+k-j-1}. In this way the original initial-value problem is converted into a (much more stable) boundary-value problem. This is exactly the philosophy that was behind the original class of extended multistep methods, and this approach has been widely used in the numerical solution of linear recurrence relations in the case where forward recursion is unstable. In developing such boundary-value methods, the major difficulty is to determine where to place the boundary conditions and how to change the step-size in an efficient way. For a full description of this approach the reader is referred to Brugnano & Trigiante (1998, p. 173).

5. Differential algebraic equations

The theory behind the numerical solution of differential algebraic equations is less well developed than for ODEs and, as a consequence, the codes for DAEs are often not as reliable or easy to use. However, there do now exist some excellent books which explain both the theory and the practicalities associated with DAEs. In particular, we mention the classic text due to Brenan et al. (1989), which gives an excellent account of the theory behind DAEs and considers in detail the practical aspects of using the BDF code Dassl. More recently, the book of Hairer & Wanner (1996) has updated some of this theory and includes a description of the Runge Kutta code Radau5.

Perhaps the most straightforward way of studying DAEs, certainly from the viewpoint of the applied mathematician, is to use a singular perturbation analysis. With this in mind we consider the singular perturbation problem

y' = f(y, z),  y(0) = y_0,  (5.1)
εz' = g(y, z),  z(0) = z_0.  (5.2)

If we now let ε → 0, we obtain the reduced equations

y' = f(y, z),  (5.3)
0 = g(y, z).  (5.4)

A fruitful way of looking at equations (5.3), (5.4) is to consider them as a differential equation for which the solution is constrained to lie on a manifold g(y, z) = 0 (Rheinboldt 1984). Of course we need to start the solution off on this manifold, and so we require the initial conditions to satisfy

g(y_0, z_0) = 0.  (5.5)

Following the mathematician's famous trait of always trying to reduce a problem to a previous case, we could try to solve (5.4) for z and substitute this into (5.3). In fact we see that, if g_z(y, z) is invertible in the neighbourhood of a solution, then we can find a relation of the form z = F(y), and it follows that (5.3) is simply an ODE,

y' = f(y, F(y)).  (5.6)

It is common to refer to (5.6) as the state-space form. This simple example illustrates the important principle that we can convert a DAE into an ODE by repeated analytic differentiation. If our assumption that g_z(y, z) is invertible does not hold, then we will need to differentiate the constraint more times to reach the underlying ODE. This example demonstrates an approach that we might use to derive numerical methods for solving DAEs. We would first develop an algorithm for the numerical solution of the ODE (5.1), (5.2). Having done this, we would formally put ε = 0 in the numerical method, and by doing so we obtain an algorithm for the solution of (5.3), (5.4). There are many forms in which a DAE may appear, and the main codes do not have a standardized form in which they require the equations to be presented.
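The ε → 0 recipe just described can be tried on a small example. The following sketch is our own illustration (the particular DAE y' = -z, 0 = z - y³ and the step-length are arbitrary choices, with g_z = 1 trivially invertible): it applies the backward Euler method directly to the reduced system (5.3), (5.4), solving the coupled 2×2 nonlinear system by Newton iteration at each step, and compares the result with the exact solution of the state-space form y' = -y³:

```python
import numpy as np

# Semi-explicit index-1 DAE of the form (5.3), (5.4):
#   y' = f(y, z) = -z,     0 = g(y, z) = z - y**3
# g_z = 1 is invertible, so the state-space form (5.6) is y' = -y**3.

def dae_backward_euler(y0, z0, h, n_steps):
    """Backward Euler applied directly to the DAE (the eps -> 0 limit of the
    singular perturbation form): a 2x2 Newton solve per step for (y, z)."""
    y, z = y0, z0
    for _ in range(n_steps):
        yn = y
        for _ in range(50):
            F = np.array([y - yn + h * z,        # backward Euler for y' = -z
                          z - y**3])             # algebraic constraint
            J = np.array([[1.0, h],
                          [-3.0 * y**2, 1.0]])
            dy, dz = np.linalg.solve(J, -F)
            y, z = y + dy, z + dz
            if abs(dy) + abs(dz) < 1e-14:
                break
    return y, z

y0 = 1.0
z0 = y0**3          # consistent initial values: g(y0, z0) = 0, cf. (5.5)
y, z = dae_backward_euler(y0, z0, h=1e-3, n_steps=1000)

exact = y0 / np.sqrt(1.0 + 2.0 * y0**2 * 1.0)   # state-space solution at x = 1
print(y, exact)
```

The computed solution stays on the constraint manifold to Newton tolerance at every step, and the error against the state-space solution is of the first-order size expected of backward Euler.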
For example, the Radau5 Runge Kutta code considers DAEs in the form

My' = f(y),  (5.7)

where the matrix M is singular. The derivation of Runge Kutta methods to solve (5.7) is straightforward and is a natural extension of what is described above. What we do is consider the matrix M to be non-singular, derive a numerical method for the ODE

y' = M^{-1} f(y)  (5.8)

and then multiply through by M to get a method for the solution of (5.7). In contrast, the BDF code Dassl considers the form

F(x, y', y) = 0.  (5.9)

Of course, if F_{y'} is invertible then, by the implicit function theorem, it follows that (5.9) is really an ODE. We are therefore concerned with the case where F_{y'} is singular.

The code Mebdfdae considers problems in both forms (5.7) and (5.9). There is a certain equivalence between (5.7) and (5.9), since we can convert (5.9) into the form (5.7) by putting

y' = z,  (5.10)
0 = F(x, z, y),  (5.11)

and so the matrix M for this formulation is

M = ( I  0 )
    ( 0  0 ).

It is important to realize that one-step numerical methods, such as Runge Kutta methods, do not in general achieve the same order of accuracy for DAEs as they do for (non-stiff) ODEs, and this phenomenon is known as order reduction. This was first discovered by Prothero & Robinson (1974), who observed that, when solving the stiff initial-value problem

y' = λ(y - φ(x)) + φ'(x),  y(x_0) = φ(x_0),  (5.12)

using some standard one-step methods, the order of accuracy achieved by these methods was not as expected. Note that (5.12) generalizes (3.6) in that it has the particular solution y = φ(x) as well as the unwanted complementary function exp(λx). Due to the simple nature of this problem it is possible to obtain estimates of both the local and the global errors in the numerical solution. In a classical non-stiff analysis we are interested in what happens as h → 0. However, in the stiff case we are interested in the behaviour of one-step methods for (5.12) in the case where h|λ| ≫ 1. To allow this behaviour to be studied in a more systematic way, Frank et al. (1985) introduced the concept of B-convergence. If an integration method is B-convergent, then we are able to obtain estimates of the global error which are independent of the stiffness of the problem, and this is an excellent way to study order reduction. A summary of order reduction results for one-step methods applied to stiff initial-value ODEs is given in Hairer & Wanner (1996, p. 226). Here it is noted that stiffly accurate Runge Kutta methods, i.e. those with a_{sj} = b_j for all 1 ≤ j ≤ s, have the very desirable stability behaviour that they produce the asymptotically correct solution in the limit hλ → -∞.
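The stiffly accurate property is easy to verify for a concrete method. The sketch below (our own illustration, using the standard coefficients) checks it for the two-stage Radau IIA method of order three, the s = 2 relative of the Radau5 formula, together with the damping behaviour R(z) → 0 as z → -∞ just described:

```python
import numpy as np

# 2-stage Radau IIA method (order 3): a stiffly accurate method, a_{sj} = b_j.
A = np.array([[5/12, -1/12],
              [3/4,   1/4]])
b = np.array([3/4, 1/4])

def R(z):
    """Stability function R(z) = 1 + z b^T (I - zA)^{-1} e, eq. (4.3)."""
    e = np.ones(len(b))
    return 1.0 + z * b @ np.linalg.solve(np.eye(len(b)) - z * A, e)

print(np.allclose(A[-1], b))     # stiffly accurate: last row of A equals b
print(abs(R(-1e8)))              # -> 0 as z -> -infinity: strong damping
```

Because the last internal stage coincides with the update, such a method inherits the full implicit structure at the endpoint, which is precisely what makes it attractive for very stiff problems and DAEs.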
The desirability of stiffly accurate Runge-Kutta methods for DAEs was confirmed by Petzold (Brenan et al. 1989; Petzold 1986, p. 86), who looked at the order of consistency of some one-step methods applied to linear DAEs with constant coefficients, and by Griepentrog & März (1986), who analysed the state-space form of index-1 DAEs. We saw earlier that in a simple case we could convert a particular DAE into an ODE simply by carrying out repeated analytic differentiation. Intuitively we would think that, the more times we need to differentiate a DAE before we can pick out the underlying ODE, the harder the problem is to solve. This intuitive idea is true in general and leads us to define the idea of the differentiation index of a DAE.

Definition 5.1 (differentiation index). The nonlinear DAE

F(y', y) = 0 (5.13)

is said to have differentiation index i if i is the minimum number of analytic differentiations needed to allow us to extract by algebraic manipulation an explicit ODE

from the system

d^j F(y', y)/dx^j = 0, j = 0, 1, ..., i. (5.14)

This explicit ODE is often called the underlying ODE. It is of interest to look at some specific forms of DAEs and consider the conditions necessary for them to have a given (differentiation) index.

(i) Index-1 systems. Consider

y' = f(y, z), (5.15)
0 = g(y, z). (5.16)

We showed earlier that, if g_z is invertible in a neighbourhood of the solution, then this problem has index 1, since in this case one differentiation of (5.16) gives a first-order ODE for z. The initial conditions y_0 and z_0 are required to satisfy (5.5) and are then said to be consistent.

(ii) Index-2 systems. Consider the semi-explicit form

y' = f(y, z), (5.17)
0 = g(y). (5.18)

Differentiating the constraint with respect to x we have

0 = g_y dy/dx = g_y f(y, z). (5.19)

Relation (5.19) is often referred to as the hidden constraint. Differentiating again with respect to x, we find that (5.17), (5.18) is of index 2 if g_y f_z is invertible in a neighbourhood of the solution. Note that we require the initial conditions y_0, z_0 to satisfy

g(y_0) = 0, g_y(y_0) f(y_0, z_0) = 0. (5.20)

Initial values that satisfy these conditions are said to be consistent for index-2 problems.

(iii) Index-3 systems. Consider the system

y' = k(y, z, w), (5.21)
z' = f(y, z), (5.22)
0 = g(z). (5.23)

This problem is in Hessenberg form (Brenan et al. 1989, p. 34) and has index 3 if

g_z f_y k_w (5.24)

is invertible in a neighbourhood of a solution. Consistent initial conditions for this problem are given in Hairer & Wanner (1996, p. 456).
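The invertibility conditions above are easy to test numerically. The sketch below (illustrative names, finite-difference Jacobians) checks the index-2 condition on g_y f_z for a double integrator y1' = y2, y2' = z: constraining the velocity y2 gives an index-2 problem, while constraining the position y1 makes g_y f_z singular, signalling a higher-index (here index-3) problem.

```python
import numpy as np

def fd_jacobian(fun, x, eps=1e-7):
    """Forward-difference Jacobian of fun at x (for illustration only)."""
    fx = np.atleast_1d(fun(x))
    J = np.zeros((fx.size, x.size))
    for j in range(x.size):
        xp = x.copy()
        xp[j] += eps
        J[:, j] = (np.atleast_1d(fun(xp)) - fx) / eps
    return J

# Double integrator y1' = y2, y2' = z, with two candidate constraints.
f = lambda y, z: np.array([y[1], z[0]])
g_vel = lambda y: np.array([y[1] - 1.0])   # constrains the velocity y2
g_pos = lambda y: np.array([y[0] - 1.0])   # constrains the position y1

y0 = np.array([1.0, 1.0])
z0 = np.array([0.0])
f_z = fd_jacobian(lambda z: f(y0, z), z0)              # shape (2, 1)

det_vel = np.linalg.det(fd_jacobian(g_vel, y0) @ f_z)  # nonzero: index 2
det_pos = np.linalg.det(fd_jacobian(g_pos, y0) @ f_z)  # zero: index > 2
```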

The technique of successively differentiating a given DAE to produce an underlying ODE is in theory a very powerful approach to solving DAEs, especially given the existence of powerful analytic differentiation codes, but it is not as useful in practice as may at first seem likely. The problem is that the constraints are changed by the differentiation, and the numerical solution of the differentiated system can then satisfy the original constraints less and less well as the integration proceeds, a phenomenon known as drift-off. This is explained in some detail in Hairer & Wanner (1996), together with some strategies that have been used in an attempt to overcome this problem. If we apply codes to general problems of index greater than one, then we need to examine the Newton iteration carefully to make sure that we do not have a degradation of the rate of convergence. If, for example, we consider the index-2 problem described earlier, it turns out that we need to rescale the problem in an appropriate way. What we effectively do is multiply the z component by h. This means that we measure the convergence of the Newton iteration in the norm ‖y‖ + h‖z‖, and by doing so we gain a factor of h for each simplified Newton iteration. This technique, described in Hairer & Wanner (1996), is exactly what is used in both Mebdfdae and Radau5, and the extension to problems of higher index is straightforward. Finally, in this section we note that there are many important classes of problems which are naturally represented as DAEs. In particular, we mention control problems (some interesting problems from trajectory-prescribed path control are given in Brenan et al. (1989, p. 157)), optimal control problems where we wish to minimize a cost function (Hairer & Wanner 1996, p. 461), mechanical systems and systems of rigid bodies (Hairer & Wanner 1996, p. 463; Brenan et al. 1989, p. 130), electric networks which arise naturally as DAEs due to Kirchhoff's laws (Brenan et al. 1989, p.
170), multibody and constrained Hamiltonian systems (Hairer & Wanner 1996, p. 530) and incompressible fluid-flow problems (Brenan et al. 1989, p. 181).

6. Numerical algorithms and codes

There are two main aims of this paper. The first is to explain some of the important theory that underpins the numerical solution of stiff initial-value problems and differential algebraic equations. The second is to direct users of numerical software, who often wish to use the codes simply as black boxes, to those codes they should try first when faced with problems of this type. In this section we consider the main codes which we recommend. The first code we consider is the linear multistep code Dassl (Brenan et al. 1989).

(a) Dassl

The earliest, and still the most widely used, codes for stiff ODEs and initial-value DAEs are those based on BDFs. These codes were originally developed by Gear and co-workers for the solution of stiff initial-value problems. Subsequently, they were extended by Petzold to be applicable to DAEs, and this led to the development of the well-known code Dassl (Brenan et al. 1989). Much of the mathematical theory behind the numerical integration of DAEs using BDFs is now well understood and is explained in detail in Brenan et al. (1989, ch. 3), for example. The main results concerning the convergence of BDFs for DAEs are as follows. A k-step BDF is accurate

to O(h^k) for nonlinear index-1 problems, for semi-explicit index-2 problems and for linear constant-coefficient problems of any index. For Hessenberg systems of index 3, implemented with a constant step-length h, the k-step BDF is accurate to O(h^k), provided that the starting values are consistent to order O(h^{k+1}) and that the algebraic equations are solved to an accuracy of O(h^{k+2}) for k ≥ 2 and O(h^4) for k = 1. Furthermore, Dassl is applicable to general implicit differential equations and differential algebraic equations of index up to one. Such problems can be written in the standard form

F(x, y', y) = 0. (6.1)

As mentioned earlier, it is hoped that a user will often be able to solve his or her problem simply by taking an appropriate powerful integrator off the shelf. This is all that many users will want to do. However, in some cases a little more analysis may be required on the part of the user to get the problem into the correct form, especially if the problem is a differential algebraic equation. In Brenan et al. (1989, p. 129ff) there is an excellent section on how to get started with Dassl. Topics covered include the determination of consistent initial conditions, how to deal with problems of index higher than one and how to deal with an inaccurate Jacobian or singular iteration matrix. Much of this wisdom, especially the discussion on how to determine consistent initial conditions, is applicable to many other codes as well as to Dassl. The most up-to-date version of Dassl is Daspk, which includes preconditioned Krylov iterative methods for solving large-scale DAEs and also includes the methods of Dassl as a special case. It is available, as is the code Mebdfdae, from Netlib. Finally, there is a very recent version of Dassl, called Daspk3, which is available via the Web site of Linda Petzold. Daspk3 also includes options for sensitivity analysis and can solve Hessenberg index-2 systems.
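The O(h^k) convergence just quoted is easy to observe for an index-1 problem. The sketch below (a hypothetical linear test DAE, not Dassl itself) applies constant-step BDF2 to y' = -y + z, 0 = y + z - e^{-x}, whose exact solution is y = e^{-x} + e^{-2x}; with exact starting values the observed order is 2.

```python
import numpy as np

def solve_bdf2(h, xend=1.0):
    """Constant-step BDF2 for the index-1 DAE
         y' = -y + z,   0 = y + z - exp(-x),
    with exact solution y = exp(-x) + exp(-2x). Returns |error| at xend."""
    exact = lambda x: np.exp(-x) + np.exp(-2 * x)
    n = int(round(xend / h))
    y = [exact(0.0), exact(h)]       # exact starting values preserve order 2
    for k in range(2, n + 1):
        x = k * h
        # BDF2: (3 y_k - 4 y_{k-1} + y_{k-2}) / (2h) = -y_k + z_k,
        # together with the algebraic constraint y_k + z_k = exp(-x).
        A = np.array([[3.0 / (2 * h) + 1.0, -1.0],
                      [1.0, 1.0]])
        rhs = np.array([(4 * y[-1] - y[-2]) / (2 * h), np.exp(-x)])
        yk, zk = np.linalg.solve(A, rhs)
        y.append(yk)
    return abs(y[-1] - exact(xend))

e1, e2 = solve_bdf2(0.01), solve_bdf2(0.005)
rate = np.log2(e1 / e2)   # observed order, close to 2 for BDF2
```

Note that the algebraic constraint is enforced at the new time level in the same linear solve as the BDF difference equation, which is exactly how BDF codes treat the algebraic variables.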
Daspk3 is, however, subject to some copyright restrictions: it is freely available for research, but its use as part of commercial software packages is restricted. The second code we wish to consider is a Runge-Kutta code.

(b) Radau5

This code is based on a fifth-order, three-stage Radau Runge-Kutta method which has the important property of being stiffly accurate. It is applicable to stiff IVPs and to DAEs of the form

My' = f(x, y), y(x_0) = y_0, (6.2)

where M is a constant square matrix. If M is non-singular, this defines an ODE; otherwise problem (6.2) is a DAE. Of course, many initial-value problems are trivially of the form (6.2) with M = I. Although the form (6.2) looks rather restrictive, it turns out that, by means of suitable transformations or the introduction of new variables, all differential algebraic equations (6.1) can be brought into the form (6.2). The theory behind the implementation of Radau formulae was sketched in §4. In particular, the three-stage Radau formula is A-stable, and the cost of solving the linear systems of algebraic equations arising from the application of Newton iteration is reduced, due to the fact that these linear equations become decoupled. For a full description of this code the reader is referred to Hairer & Wanner (1996, p. 121). There it is shown that the total work required by Radau5 to solve a system of N

linear algebraic equations is approximately 5N^3/3 flops, compared with the 27N^3/3 flops that are required if a naive approach is used. The code Radau5 is applicable to initial-value DAEs of index up to three as well as to stiff IVPs, and it is described in some detail in Hairer & Wanner (1996). The final code we consider is Mebdfdae.

(c) Mebdfdae

This code is based on a family of multistep methods and is applicable to stiff ODEs and to DAEs of index up to three. Both of the forms (6.1) and (6.2) can be solved directly by this code. The Mebdfdae algorithm is based on the extended linear multistep methods described in §4a. There it was shown that, by using super-future points, it is possible to obtain considerably enhanced stability. Because this algorithm is considerably more complicated than the BDFs, it is rather more difficult to analyse. Some results can be obtained from the fact that it is a general linear method, and for the special case k = 1 it is a standard Runge-Kutta method. However, there are still gaps in the analysis of the convergence of these methods when applied to DAEs. Despite this, the Mebdfdae code often works very well in practice, and for a detailed description of this code the reader is referred to Lioen & de Swart (1998) and jcash/ivp software/readme.html. Each of these codes has certain advantages and disadvantages which can really only be fully understood with the aid of numerical experiment. Dassl, for example, has the major advantage that it is likely to be computationally the least expensive method per step, but it is A-stable only up to order 2 and can only deal with general differential algebraic equations of index up to one. If only low accuracy is required, which might be the case in a method-of-lines solution of PDEs, then BDF codes are likely to be extremely powerful, as only low-order methods would normally be used.
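Several of the properties quoted above for Radau5 can be checked directly from the Butcher tableau of the three-stage Radau IIA method (coefficients as tabulated in Hairer & Wanner 1996). The sketch below verifies the stiff-accuracy condition a_{sj} = b_j, the fifth-order quadrature conditions, and the eigenvalue structure of A^{-1} that lets the Newton linear algebra decouple into one real and one complex N x N factorization, giving the 5N^3/3 operation count.

```python
import numpy as np

s6 = np.sqrt(6.0)
# Butcher tableau of the 3-stage Radau IIA method (Hairer & Wanner 1996).
A = np.array([
    [(88 - 7 * s6) / 360, (296 - 169 * s6) / 1800, (-2 + 3 * s6) / 225],
    [(296 + 169 * s6) / 1800, (88 + 7 * s6) / 360, (-2 - 3 * s6) / 225],
    [(16 - s6) / 36, (16 + s6) / 36, 1.0 / 9.0],
])
b = np.array([(16 - s6) / 36, (16 + s6) / 36, 1.0 / 9.0])
c = np.array([(4 - s6) / 10, (4 + s6) / 10, 1.0])

# Stiffly accurate: the last row of A equals b, so y_{n+1} is the last stage.
stiffly_accurate = np.allclose(A[-1], b)

# The simplified Newton systems decouple because A^{-1} has one real
# eigenvalue and one complex-conjugate pair: one real and one complex
# N x N LU factorization replace an LU factorization of size 3N.
eigs = np.linalg.eigvals(np.linalg.inv(A))
n_real = int(np.sum(np.abs(eigs.imag) < 1e-10))
```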
The code Dassl is able to use both variable order and variable step-length, but the cost of this is that the overhead associated with linear multistep methods tends to be relatively high compared with that for Runge-Kutta methods. In contrast, the code Radau5 is a one-step method, and this means that step changing is relatively easy. However, Radau5 does not use variable order, and the cost of solving the nonlinear algebraic equations is higher per step than it is with Dassl. Since Radau5 is a one-step method, the cost of restarts, for example after encountering a singularity or in a moving-mesh solution of a time-dependent PDE, should be relatively small. The code Mebdfdae lies somewhere between these two extremes. It can deal with DAEs of index up to three, it uses variable order and variable step, it is a multistep code, it has considerably better stability than codes based on BDFs and it can use high-order formulae, but it is more expensive per step than Dassl. Because of the many differing strengths and weaknesses of the three codes we have discussed, it may be difficult for a user to decide which code is best for their purposes, or even whether the codes are applicable to their problem at all. A major step forward in the testing of numerical algorithms for stiff differential and differential algebraic equations came with the work of Lioen & de Swart (1998). In an internal report (also available via a link from the University of Bari) they describe the results of some extensive comparisons which shed a considerable amount of light on the performance of various codes.

Some of the more important of their contributions were the following.

(i) They liaised with software providers and persuaded them to upgrade their codes to a more uniform standard. They then made these modified codes widely available.

(ii) They produced an extremely challenging set of test problems. Before the work of Lioen & de Swart (1998), the accepted way of testing new codes was to run them on the well-known test set DETEST (Enright et al. 1975). However, as the standard of codes improved, it became clear that this test set is rather too easy. In particular, the problems in DETEST are quite small, they are not particularly stiff and there are no differential algebraic equations. In contrast, the test set of Lioen & de Swart is much more challenging. It consists of 16 test problems, the smallest being of dimension six and the biggest of dimension 400. There are seven ODEs, seven DAEs of index 1-3 and two implicit differential equations. In particular, there is a wheelset problem from contact mechanics, a two-bit adding problem which has discontinuities in the derivatives, and a slider-crank problem which is an example of a stiff mechanical system. Some additional problems, which again are much more challenging than those of DETEST, are given by Hairer & Wanner (1996, p. 144).

(iii) They carried out some extensive comparisons on a whole range of test problems. Very importantly, the comparisons were performed in an unbiased way (as were the comparisons carried out by Hairer & Wanner) and demonstrated the performance of the different codes on the test set.

The comparison of different codes, even if they are attempting to perform the same task, is a notoriously difficult and inexact science. In particular, it is rarely possible to say that a particular code is in any sense the best.
We take the much more pragmatic point of view that the purpose of numerical testing is to show up weaknesses in codes or, in some cases, to eliminate codes altogether. By these criteria we can simply say that the three codes we have recommended perform very well, and in particular tend to be very reliable, on a difficult set of test problems. However, the three codes we have considered are not applicable to exactly the same classes of problems, and for further information the reader should examine Lioen & de Swart (1998) in detail. We complete this section with a summary of our advice.

(i) A user who wishes to use a black-box integrator to solve stiff IVPs and initial-value DAEs should try one or more of the codes Dassl (Brenan et al. 1989), Radau5 (Hairer & Wanner 1996) or Mebdfdae (Lioen & de Swart 1998). A quick and dirty approach would be to use each of these codes and to check whether they produce roughly the same results! Note that Dassl and Mebdfdae are also available from Netlib.

(ii) A user who experiences difficulty in getting one of the codes to work should consult the computational sections in Hairer & Wanner (1996) and Brenan et al. (1989), should carefully read the documentation supplied with the codes themselves and should look at the problems in Brenan et al. (1989) and Hairer

& Wanner (1996) to see if any of them is similar to their problem. As a last resort they might write to the authors of the code.

(iii) Researchers who wish to learn more of the theory of DAEs and stiff ODEs should consult Hairer & Wanner (1996), Brenan et al. (1989), Hairer et al. (1989) and the references therein.

(iv) People who wish to develop their own codes for solving stiff IVPs and DAEs should use the test results given in Lioen & de Swart (1998) and Hairer & Wanner (1996) as a benchmark against which to measure their codes.

The author is grateful to Ernst Hairer, Linda Petzold, Gustaf Söderlind and Donato Trigiante for some fruitful discussions concerning this work.

References

Alexander, R. 1977 Diagonally implicit Runge-Kutta methods for stiff ODEs. SIAM J. Numer. Analysis 14.
Bickart, T. A. 1977 An efficient solution process for implicit Runge-Kutta methods. SIAM J. Numer. Analysis 14.
Brenan, K. E., Campbell, S. L. & Petzold, L. 1989 Numerical solution of initial-value problems in differential-algebraic equations. Amsterdam: North-Holland.
Brugnano, L. & Trigiante, D. 1998 Solving differential problems by multistep initial and boundary value methods. London: Gordon and Breach.
Burrage, K. 1978 A special family of Runge-Kutta methods for solving stiff differential equations. BIT 18.
Burrage, K. 1982 Efficiently implementable algebraically stable Runge-Kutta methods. SIAM J. Numer. Analysis 19.
Burrage, K., Butcher, J. C. & Chipman, F. H. 1980 An implementation of singly implicit Runge-Kutta methods. BIT 20.
Butcher, J. C. 1964 Implicit Runge-Kutta processes. Math. Comp. 18.
Butcher, J. C. 1976 On the implementation of implicit Runge-Kutta methods. BIT 16.
Butcher, J. C. 1987 Linear and nonlinear stability for general linear methods. BIT 27.
Butcher, J. C. & Cash, J. R. 1990 Towards efficient Runge-Kutta methods for stiff systems. SIAM J. Numer. Analysis 27.
Butcher, J. C., Cash, J. R. & Diamantakis, M. T. 1996 DESI methods for stiff initial-value problems. ACM Trans. Math. Software 22.
Cash, J. R. 2000 Modified extended backward differentiation formulae for the numerical solution of stiff initial-value problems in ODEs and DAEs. J. Comput. Appl. Math. 125.
Curtiss, C. F. & Hirschfelder, J. O. 1952 Integration of stiff equations. Proc. Natl Acad. Sci. USA 38.
Dahlquist, G. 1963 A special stability problem for linear multistep methods. BIT 3.
Enright, W. H., Hull, T. E. & Lindberg, B. 1975 Comparing numerical methods for stiff systems of ODEs. BIT 15.
Frank, R., Schneid, J. & Ueberhuber, C. W. 1985 Order results for methods applied to stiff systems. SIAM J. Numer. Analysis 22.
Gear, C. W. 1971 Numerical initial value problems in ordinary differential equations. Englewood Cliffs, NJ: Prentice-Hall.
Gourlay, A. R. 1970 A note on trapezoidal methods for the solution of initial value problems. Math. Comp. 24.

Review Higher Order methods Multistep methods Summary HIGHER ORDER METHODS. P.V. Johnson. School of Mathematics. Semester HIGHER ORDER METHODS School of Mathematics Semester 1 2008 OUTLINE 1 REVIEW 2 HIGHER ORDER METHODS 3 MULTISTEP METHODS 4 SUMMARY OUTLINE 1 REVIEW 2 HIGHER ORDER METHODS 3 MULTISTEP METHODS 4 SUMMARY OUTLINE

More information

HIGHER ORDER METHODS. There are two principal means to derive higher order methods. b j f(x n j,y n j )

HIGHER ORDER METHODS. There are two principal means to derive higher order methods. b j f(x n j,y n j ) HIGHER ORDER METHODS There are two principal means to derive higher order methods y n+1 = p j=0 a j y n j + h p j= 1 b j f(x n j,y n j ) (a) Method of Undetermined Coefficients (b) Numerical Integration

More information

The family of Runge Kutta methods with two intermediate evaluations is defined by

The family of Runge Kutta methods with two intermediate evaluations is defined by AM 205: lecture 13 Last time: Numerical solution of ordinary differential equations Today: Additional ODE methods, boundary value problems Thursday s lecture will be given by Thomas Fai Assignment 3 will

More information

Introduction to the Numerical Solution of IVP for ODE

Introduction to the Numerical Solution of IVP for ODE Introduction to the Numerical Solution of IVP for ODE 45 Introduction to the Numerical Solution of IVP for ODE Consider the IVP: DE x = f(t, x), IC x(a) = x a. For simplicity, we will assume here that

More information

Research Article Diagonally Implicit Block Backward Differentiation Formulas for Solving Ordinary Differential Equations

Research Article Diagonally Implicit Block Backward Differentiation Formulas for Solving Ordinary Differential Equations International Mathematics and Mathematical Sciences Volume 212, Article ID 767328, 8 pages doi:1.1155/212/767328 Research Article Diagonally Implicit Block Backward Differentiation Formulas for Solving

More information

Butcher tableau Can summarize an s + 1 stage Runge Kutta method using a triangular grid of coefficients

Butcher tableau Can summarize an s + 1 stage Runge Kutta method using a triangular grid of coefficients AM 205: lecture 13 Last time: ODE convergence and stability, Runge Kutta methods Today: the Butcher tableau, multi-step methods, boundary value problems Butcher tableau Can summarize an s + 1 stage Runge

More information

Multistep Methods for IVPs. t 0 < t < T

Multistep Methods for IVPs. t 0 < t < T Multistep Methods for IVPs We are still considering the IVP dy dt = f(t,y) t 0 < t < T y(t 0 ) = y 0 So far we have looked at Euler s method, which was a first order method and Runge Kutta (RK) methods

More information

Tong Sun Department of Mathematics and Statistics Bowling Green State University, Bowling Green, OH

Tong Sun Department of Mathematics and Statistics Bowling Green State University, Bowling Green, OH Consistency & Numerical Smoothing Error Estimation An Alternative of the Lax-Richtmyer Theorem Tong Sun Department of Mathematics and Statistics Bowling Green State University, Bowling Green, OH 43403

More information

4 Stability analysis of finite-difference methods for ODEs

4 Stability analysis of finite-difference methods for ODEs MATH 337, by T. Lakoba, University of Vermont 36 4 Stability analysis of finite-difference methods for ODEs 4.1 Consistency, stability, and convergence of a numerical method; Main Theorem In this Lecture

More information

Modelling Physical Phenomena

Modelling Physical Phenomena Modelling Physical Phenomena Limitations and Challenges of the Differential Algebraic Equations Approach Olaf Trygve Berglihn Department of Chemical Engineering 30. June 2010 2 Outline Background Classification

More information

Elliptic Problems / Multigrid. PHY 604: Computational Methods for Physics and Astrophysics II

Elliptic Problems / Multigrid. PHY 604: Computational Methods for Physics and Astrophysics II Elliptic Problems / Multigrid Summary of Hyperbolic PDEs We looked at a simple linear and a nonlinear scalar hyperbolic PDE There is a speed associated with the change of the solution Explicit methods

More information

On the stability regions of implicit linear multistep methods

On the stability regions of implicit linear multistep methods On the stability regions of implicit linear multistep methods arxiv:1404.6934v1 [math.na] 8 Apr 014 Lajos Lóczi April 9, 014 Abstract If we apply the accepted definition to determine the stability region

More information

A Study on Linear and Nonlinear Stiff Problems. Using Single-Term Haar Wavelet Series Technique

A Study on Linear and Nonlinear Stiff Problems. Using Single-Term Haar Wavelet Series Technique Int. Journal of Math. Analysis, Vol. 7, 3, no. 53, 65-636 HIKARI Ltd, www.m-hikari.com http://dx.doi.org/.988/ijma.3.3894 A Study on Linear and Nonlinear Stiff Problems Using Single-Term Haar Wavelet Series

More information

A Family of Block Methods Derived from TOM and BDF Pairs for Stiff Ordinary Differential Equations

A Family of Block Methods Derived from TOM and BDF Pairs for Stiff Ordinary Differential Equations American Journal of Mathematics and Statistics 214, 4(2): 121-13 DOI: 1.5923/j.ajms.21442.8 A Family of Bloc Methods Derived from TOM and BDF Ajie I. J. 1,*, Ihile M. N. O. 2, Onumanyi P. 1 1 National

More information

ORBIT 14 The propagator. A. Milani et al. Dipmat, Pisa, Italy

ORBIT 14 The propagator. A. Milani et al. Dipmat, Pisa, Italy ORBIT 14 The propagator A. Milani et al. Dipmat, Pisa, Italy Orbit 14 Propagator manual 1 Contents 1 Propagator - Interpolator 2 1.1 Propagation and interpolation............................ 2 1.1.1 Introduction..................................

More information

Journal of Quality Measurement and Analysis JQMA 4(1) 2008, 1-9 Jurnal Pengukuran Kualiti dan Analisis NUMERICAL ANALYSIS JOHN BUTCHER

Journal of Quality Measurement and Analysis JQMA 4(1) 2008, 1-9 Jurnal Pengukuran Kualiti dan Analisis NUMERICAL ANALYSIS JOHN BUTCHER Journal of Quality Measurement and Analysis JQMA 4(1) 2008, 1-9 Jurnal Pengukuran Kualiti dan Analisis NUMERICAL ANALYSIS JOHN BUTCHER ABSTRACT Mathematics has applications in virtually every scientific

More information

CHAPTER 7. An Introduction to Numerical Methods for. Linear and Nonlinear ODE s

CHAPTER 7. An Introduction to Numerical Methods for. Linear and Nonlinear ODE s A SERIES OF CLASS NOTES FOR 2005-2006 TO INTRODUCE LINEAR AND NONLINEAR PROBLEMS TO ENGINEERS, SCIENTISTS, AND APPLIED MATHEMATICIANS DE CLASS NOTES 1 A COLLECTION OF HANDOUTS ON FIRST ORDER ORDINARY DIFFERENTIAL

More information

NORGES TEKNISK-NATURVITENSKAPELIGE UNIVERSITET. Singly diagonally implicit Runge-Kutta methods with an explicit first stage

NORGES TEKNISK-NATURVITENSKAPELIGE UNIVERSITET. Singly diagonally implicit Runge-Kutta methods with an explicit first stage NORGES TEKNISK-NATURVITENSKAPELIGE UNIVERSITET Singly diagonally implicit Runge-Kutta methods with an explicit first stage by Anne Kværnø PREPRINT NUMERICS NO. 1/2004 NORWEGIAN UNIVERSITY OF SCIENCE AND

More information

Second Derivative Generalized Backward Differentiation Formulae for Solving Stiff Problems

Second Derivative Generalized Backward Differentiation Formulae for Solving Stiff Problems IAENG International Journal of Applied Mathematics, 48:, IJAM_48 Second Derivative Generalized Bacward Differentiation Formulae for Solving Stiff Problems G C Nwachuwu,TOor Abstract Second derivative generalized

More information

Scientific Computing with Case Studies SIAM Press, Lecture Notes for Unit V Solution of

Scientific Computing with Case Studies SIAM Press, Lecture Notes for Unit V Solution of Scientific Computing with Case Studies SIAM Press, 2009 http://www.cs.umd.edu/users/oleary/sccswebpage Lecture Notes for Unit V Solution of Differential Equations Part 1 Dianne P. O Leary c 2008 1 The

More information

Chapter 6 - Ordinary Differential Equations

Chapter 6 - Ordinary Differential Equations Chapter 6 - Ordinary Differential Equations 7.1 Solving Initial-Value Problems In this chapter, we will be interested in the solution of ordinary differential equations. Ordinary differential equations

More information

A First Course on Kinetics and Reaction Engineering Supplemental Unit S5. Solving Initial Value Differential Equations

A First Course on Kinetics and Reaction Engineering Supplemental Unit S5. Solving Initial Value Differential Equations Supplemental Unit S5. Solving Initial Value Differential Equations Defining the Problem This supplemental unit describes how to solve a set of initial value ordinary differential equations (ODEs) numerically.

More information

Научный потенциал регионов на службу модернизации. Астрахань: АИСИ, с.

Научный потенциал регионов на службу модернизации. Астрахань: АИСИ, с. MODELING OF FLOWS IN PIPING TREES USING PROJECTION METHODS В.В. Войков, Астраханский инженерно-строительный институт, г. Астрахань, Россия Jason Mayes, Mihir Sen University of Notre Dame, Indiana, USA

More information

Lecture 5: Single Step Methods

Lecture 5: Single Step Methods Lecture 5: Single Step Methods J.K. Ryan@tudelft.nl WI3097TU Delft Institute of Applied Mathematics Delft University of Technology 1 October 2012 () Single Step Methods 1 October 2012 1 / 44 Outline 1

More information

Fourth Order RK-Method

Fourth Order RK-Method Fourth Order RK-Method The most commonly used method is Runge-Kutta fourth order method. The fourth order RK-method is y i+1 = y i + 1 6 (k 1 + 2k 2 + 2k 3 + k 4 ), Ordinary Differential Equations (ODE)

More information

Differential-Algebraic Equations (DAEs)

Differential-Algebraic Equations (DAEs) Differential-Algebraic Equations (DAEs) L. T. Biegler Chemical Engineering Department Carnegie Mellon University Pittsburgh, PA 15213 biegler@cmu.edu http://dynopt.cheme.cmu.edu Introduction Simple Examples

More information

University of Houston, Department of Mathematics Numerical Analysis II. Chapter 4 Numerical solution of stiff and differential-algebraic equations

University of Houston, Department of Mathematics Numerical Analysis II. Chapter 4 Numerical solution of stiff and differential-algebraic equations Chapter 4 Numerical solution of stiff and differential-algebraic equations 4.1 Characteristics of stiff systems The solution has components with extremely different growth properties. Example: 2 u (x)

More information

Multistage Methods I: Runge-Kutta Methods

Multistage Methods I: Runge-Kutta Methods Multistage Methods I: Runge-Kutta Methods Varun Shankar January, 0 Introduction Previously, we saw that explicit multistep methods (AB methods) have shrinking stability regions as their orders are increased.

More information

Advanced Operations Research Prof. G. Srinivasan Department of Management Studies Indian Institute of Technology, Madras

Advanced Operations Research Prof. G. Srinivasan Department of Management Studies Indian Institute of Technology, Madras Advanced Operations Research Prof. G. Srinivasan Department of Management Studies Indian Institute of Technology, Madras Lecture - 3 Simplex Method for Bounded Variables We discuss the simplex algorithm

More information

An initialization subroutine for DAEs solvers: DAEIS

An initialization subroutine for DAEs solvers: DAEIS Computers and Chemical Engineering 25 (2001) 301 311 www.elsevier.com/locate/compchemeng An initialization subroutine for DAEs solvers: DAEIS B. Wu, R.E. White * Department of Chemical Engineering, Uniersity

More information

NUMERICAL SOLUTION OF ODE IVPs. Overview

NUMERICAL SOLUTION OF ODE IVPs. Overview NUMERICAL SOLUTION OF ODE IVPs 1 Quick review of direction fields Overview 2 A reminder about and 3 Important test: Is the ODE initial value problem? 4 Fundamental concepts: Euler s Method 5 Fundamental

More information

Review for Exam 2 Ben Wang and Mark Styczynski

Review for Exam 2 Ben Wang and Mark Styczynski Review for Exam Ben Wang and Mark Styczynski This is a rough approximation of what we went over in the review session. This is actually more detailed in portions than what we went over. Also, please note

More information

Study the Numerical Methods for Solving System of Equation

Study the Numerical Methods for Solving System of Equation Study the Numerical Methods for Solving System of Equation Ravi Kumar 1, Mr. Raj Kumar Duhan 2 1 M. Tech. (M.E), 4 th Semester, UIET MDU Rohtak 2 Assistant Professor, Dept. of Mechanical Engg., UIET MDU

More information

The Plan. Initial value problems (ivps) for ordinary differential equations (odes) Review of basic methods You are here: Hamiltonian systems

The Plan. Initial value problems (ivps) for ordinary differential equations (odes) Review of basic methods You are here: Hamiltonian systems Scientific Computing with Case Studies SIAM Press, 2009 http://www.cs.umd.edu/users/oleary/sccswebpage Lecture Notes for Unit V Solution of Differential Equations Part 2 Dianne P. O Leary c 2008 The Plan

More information

Slope Fields: Graphing Solutions Without the Solutions

Slope Fields: Graphing Solutions Without the Solutions 8 Slope Fields: Graphing Solutions Without the Solutions Up to now, our efforts have been directed mainly towards finding formulas or equations describing solutions to given differential equations. Then,

More information

Initial value problems for ordinary differential equations

Initial value problems for ordinary differential equations AMSC/CMSC 660 Scientific Computing I Fall 2008 UNIT 5: Numerical Solution of Ordinary Differential Equations Part 1 Dianne P. O Leary c 2008 The Plan Initial value problems (ivps) for ordinary differential

More information

Improved Starting Methods for Two-Step Runge Kutta Methods of Stage-Order p 3

Improved Starting Methods for Two-Step Runge Kutta Methods of Stage-Order p 3 Improved Starting Methods for Two-Step Runge Kutta Methods of Stage-Order p 3 J.H. Verner August 3, 2004 Abstract. In [5], Jackiewicz and Verner derived formulas for, and tested the implementation of two-step

More information

European Consortium for Mathematics in Industry E. Eich-Soellner and C. FUhrer Numerical Methods in Multibody Dynamics

European Consortium for Mathematics in Industry E. Eich-Soellner and C. FUhrer Numerical Methods in Multibody Dynamics European Consortium for Mathematics in Industry E. Eich-Soellner and C. FUhrer Numerical Methods in Multibody Dynamics European Consortium for Mathematics in Industry Edited by Leif Arkeryd, Goteborg Heinz

More information

CS 542G: Robustifying Newton, Constraints, Nonlinear Least Squares

CS 542G: Robustifying Newton, Constraints, Nonlinear Least Squares CS 542G: Robustifying Newton, Constraints, Nonlinear Least Squares Robert Bridson October 29, 2008 1 Hessian Problems in Newton Last time we fixed one of plain Newton s problems by introducing line search

More information

Applied Numerical Analysis

Applied Numerical Analysis Applied Numerical Analysis Using MATLAB Second Edition Laurene V. Fausett Texas A&M University-Commerce PEARSON Prentice Hall Upper Saddle River, NJ 07458 Contents Preface xi 1 Foundations 1 1.1 Introductory

More information

Chapter 1 Computer Arithmetic

Chapter 1 Computer Arithmetic Numerical Analysis (Math 9372) 2017-2016 Chapter 1 Computer Arithmetic 1.1 Introduction Numerical analysis is a way to solve mathematical problems by special procedures which use arithmetic operations

More information

c 2004 Society for Industrial and Applied Mathematics

c 2004 Society for Industrial and Applied Mathematics SIAM J. SCI. COMPUT. Vol. 26, No. 2, pp. 359 374 c 24 Society for Industrial and Applied Mathematics A POSTERIORI ERROR ESTIMATION AND GLOBAL ERROR CONTROL FOR ORDINARY DIFFERENTIAL EQUATIONS BY THE ADJOINT

More information

Geometric Numerical Integration

Geometric Numerical Integration Geometric Numerical Integration (Ernst Hairer, TU München, winter 2009/10) Development of numerical ordinary differential equations Nonstiff differential equations (since about 1850), see [4, 2, 1] Adams

More information

Advanced methods for ODEs and DAEs

Advanced methods for ODEs and DAEs Advanced methods for ODEs and DAEs Lecture 8: Dirk and Rosenbrock Wanner methods Bojana Rosic, 14 Juni 2016 General implicit Runge Kutta method Runge Kutta method is presented by Butcher table: c1 a11

More information

MATH 353 LECTURE NOTES: WEEK 1 FIRST ORDER ODES

MATH 353 LECTURE NOTES: WEEK 1 FIRST ORDER ODES MATH 353 LECTURE NOTES: WEEK 1 FIRST ORDER ODES J. WONG (FALL 2017) What did we cover this week? Basic definitions: DEs, linear operators, homogeneous (linear) ODEs. Solution techniques for some classes

More information

Numerical solution of stiff systems of differential equations arising from chemical reactions

Numerical solution of stiff systems of differential equations arising from chemical reactions Iranian Journal of Numerical Analysis and Optimization Vol 4, No. 1, (214), pp 25-39 Numerical solution of stiff systems of differential equations arising from chemical reactions G. Hojjati, A. Abdi, F.

More information

Investigation on the Most Efficient Ways to Solve the Implicit Equations for Gauss Methods in the Constant Stepsize Setting

Investigation on the Most Efficient Ways to Solve the Implicit Equations for Gauss Methods in the Constant Stepsize Setting Applied Mathematical Sciences, Vol. 12, 2018, no. 2, 93-103 HIKARI Ltd, www.m-hikari.com https://doi.org/10.12988/ams.2018.711340 Investigation on the Most Efficient Ways to Solve the Implicit Equations

More information

Reducing round-off errors in symmetric multistep methods

Reducing round-off errors in symmetric multistep methods Reducing round-off errors in symmetric multistep methods Paola Console a, Ernst Hairer a a Section de Mathématiques, Université de Genève, 2-4 rue du Lièvre, CH-1211 Genève 4, Switzerland. (Paola.Console@unige.ch,

More information

Math 660 Lecture 4: FDM for evolutionary equations: ODE solvers

Math 660 Lecture 4: FDM for evolutionary equations: ODE solvers Math 660 Lecture 4: FDM for evolutionary equations: ODE solvers Consider the ODE u (t) = f(t, u(t)), u(0) = u 0, where u could be a vector valued function. Any ODE can be reduced to a first order system,

More information

A SYMBOLIC-NUMERIC APPROACH TO THE SOLUTION OF THE BUTCHER EQUATIONS

A SYMBOLIC-NUMERIC APPROACH TO THE SOLUTION OF THE BUTCHER EQUATIONS CANADIAN APPLIED MATHEMATICS QUARTERLY Volume 17, Number 3, Fall 2009 A SYMBOLIC-NUMERIC APPROACH TO THE SOLUTION OF THE BUTCHER EQUATIONS SERGEY KHASHIN ABSTRACT. A new approach based on the use of new

More information

Bindel, Fall 2011 Intro to Scientific Computing (CS 3220) Week 12: Monday, Apr 18. HW 7 is posted, and will be due in class on 4/25.

Bindel, Fall 2011 Intro to Scientific Computing (CS 3220) Week 12: Monday, Apr 18. HW 7 is posted, and will be due in class on 4/25. Logistics Week 12: Monday, Apr 18 HW 6 is due at 11:59 tonight. HW 7 is posted, and will be due in class on 4/25. The prelim is graded. An analysis and rubric are on CMS. Problem du jour For implicit methods

More information

Solving Constrained Differential- Algebraic Systems Using Projections. Richard J. Hanson Fred T. Krogh August 16, mathalacarte.

Solving Constrained Differential- Algebraic Systems Using Projections. Richard J. Hanson Fred T. Krogh August 16, mathalacarte. Solving Constrained Differential- Algebraic Systems Using Projections Richard J. Hanson Fred T. Krogh August 6, 2007 www.vni.com mathalacarte.com Abbreviations and Terms ODE = Ordinary Differential Equations

More information

A FAMILY OF EXPONENTIALLY FITTED MULTIDERIVATIVE METHOD FOR STIFF DIFFERENTIAL EQUATIONS

A FAMILY OF EXPONENTIALLY FITTED MULTIDERIVATIVE METHOD FOR STIFF DIFFERENTIAL EQUATIONS A FAMILY OF EXPONENTIALLY FITTED MULTIDERIVATIVE METHOD FOR STIFF DIFFERENTIAL EQUATIONS ABSTRACT. ABHULIMENC.E * AND UKPEBOR L.A Department Of Mathematics, Ambrose Alli University, Ekpoma, Nigeria. In

More information

Exponential Integrators

Exponential Integrators Exponential Integrators John C. Bowman and Malcolm Roberts (University of Alberta) June 11, 2009 www.math.ualberta.ca/ bowman/talks 1 Outline Exponential Integrators Exponential Euler History Generalizations

More information

MATH 415, WEEKS 14 & 15: 1 Recurrence Relations / Difference Equations

MATH 415, WEEKS 14 & 15: 1 Recurrence Relations / Difference Equations MATH 415, WEEKS 14 & 15: Recurrence Relations / Difference Equations 1 Recurrence Relations / Difference Equations In many applications, the systems are updated in discrete jumps rather than continuous

More information

19.2 Mathematical description of the problem. = f(p; _p; q; _q) G(p; q) T ; (II.19.1) g(p; q) + r(t) _p _q. f(p; v. a p ; q; v q ) + G(p; q) T ; a q

19.2 Mathematical description of the problem. = f(p; _p; q; _q) G(p; q) T ; (II.19.1) g(p; q) + r(t) _p _q. f(p; v. a p ; q; v q ) + G(p; q) T ; a q II-9-9 Slider rank 9. General Information This problem was contributed by Bernd Simeon, March 998. The slider crank shows some typical properties of simulation problems in exible multibody systems, i.e.,

More information

Solving Ordinary Differential equations

Solving Ordinary Differential equations Solving Ordinary Differential equations Taylor methods can be used to build explicit methods with higher order of convergence than Euler s method. The main difficult of these methods is the computation

More information