
Numerical solvers for large systems of ordinary differential equations based on the stochastic direct simulation method improved by the Heun and Runge–Kutta principles

Flavius Guiaş

Abstract — We present a numerical scheme for approximating solutions of large systems of ordinary differential equations which at its core employs a stochastic component. The approach used at this level is called the stochastic direct simulation method and is based on path simulations of suitable Markov jump processes. It is efficient especially for large systems with a sparse incidence matrix, the most typical application being spatially discretized partial differential equations, for example by finite differences. Due to its explicit character, this method is easy to implement and can serve as a predictor for improved approximations. One possibility to reach this is by the Heun principle. Since we have simulated a full path of the corresponding Markov jump process, we can obtain more precise values by using Heun iterations over small time intervals. This requires the computation of integrals of step functions, which can be performed explicitly. A further way to increase the precision of the direct simulation method is to use a Runge–Kutta principle. In contrast to the Heun scheme, here one integrates a polynomial function which interpolates either the values of the original jump process, or the values improved by the Heun iterations, at some equidistant points in the time discretization interval. These integrals can be computed by a proper quadrature formula from the Newton–Cotes family, which is also used in the standard deterministic Runge–Kutta schemes. However, the intermediate values which are plugged into the quadrature formulae are computed in our method by stochastic simulation, possibly followed by a Heun iteration, while in the usual Runge–Kutta methods one uses the well-known Butcher tableau. We also introduce a time-adaptive version of the stochastic Runge–Kutta scheme. Here we do not take fixed time intervals, but a fixed number of jumps of the underlying process.
Depending on the scheme, we may consider intermediate points after half of this number of jumps. Since in this case the points are not necessarily equidistant in time, we have to compute the corresponding interpolation polynomial and its integral exactly. If high precision is required, this adaptive variant of the stochastic Runge–Kutta method combined with Heun iterations turns out to be the most effective compared to the other methods of this family. We illustrate the features of all considered schemes on a standard benchmark problem, a reaction-diffusion equation modeling a combustion process in one space dimension (1D) and two space dimensions (2D).

Keywords: numerical approximation of ordinary differential equations, stochastic direct simulation method, Heun iterations, Runge–Kutta principle.

Flavius Guiaş is with the Dortmund University of Applied Sciences and Arts, Sonnenstr. 96, 44139 Dortmund, Germany; phone: +4931-9112260; e-mail: flavius.guias@fh-dortmund.de

I. INTRODUCTION

Consider n-dimensional autonomous systems of ordinary differential equations (ODEs) Ẋ = F(X), F = (F_i)_{i=1}^n, written in integral form on a small time interval:

X(t+h) = X(t) + ∫_t^{t+h} F(X(s)) ds.

The value X(t) is assumed to be computed by a certain numerical scheme, and the goal is now to determine a value X(t+h) which approximates the solution at the next point of the time discretization grid. If we have computed a predictor X̃(s) on the whole interval [t, t+h], then one can in principle obtain a better approximation by performing a Heun iteration:

X̂(t+h) = X(t) + ∫_t^{t+h} F(X̃(s)) ds. (1)

In our approach we compute the predictor X̃(s) by the stochastic direct simulation method (see also [2], [3], [4], [5]) as a path of an appropriate Markov jump process, in which at every jump only one component of the process is changed by a fixed increment (±1/N). The steps of this method are the following:

1) Given the state vector X̃(s) of the Markov process at time s:
2) Choose a component i with probability proportional to |F_i(X̃)|.
3) The random waiting time δt is exponentially distributed with parameter λ = N Σ_{j=1}^n |F_j(X̃)|: δt = −log U/λ, where U is a uniformly distributed RV on (0, 1).
4) Update the value of the time variable: s = s + δt.
5) Update the value of the sampled component: X̃_i ← X̃_i + (1/N) sign(F_i(X̃)).
6) Update the values F_j(X̃) for all j for which F_j(X̃) depends on the sampled component i.
7) GOTO 1.

We note that the larger N, or the larger the absolute values of the r.h.s. of the equations, the smaller the random time step between two jumps. This implies an automatic time adaption of the scheme which computes the predictor X̃(s) by direct simulation.

ISSN: 1998-0159
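The loop above can be sketched in a few lines. The following Python fragment is an illustrative reimplementation, not the author's C++ code: the function name and signature are ours, and for simplicity it recomputes all F_j at every jump instead of performing the sparse update of step 6.

```python
import math
import random

def direct_simulation_step(X, F, N, rng):
    """One jump of the Markov process approximating dX/dt = F(X):
    a single component moves by ±1/N after an exponential waiting time.
    X is modified in place; the waiting time delta_t is returned."""
    F_vals = F(X)
    total = sum(abs(f) for f in F_vals)
    lam = N * total                                # total jump rate
    dt = -math.log(1.0 - rng.random()) / lam       # exponential waiting time
    # choose component i with probability proportional to |F_i(X)|
    r = rng.random() * total
    i, acc = len(F_vals) - 1, 0.0
    for j, f in enumerate(F_vals):
        acc += abs(f)
        if r <= acc:
            i = j
            break
    X[i] += math.copysign(1.0 / N, F_vals[i])      # jump of size ±1/N
    return dt
```

For the scalar test problem x′ = x used later in the paper, iterating this step until the simulation time exceeds t_max yields the predictor path.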

The result of this simple algorithm is a vector-valued step function which, if we let the magnitude of the jumps be sufficiently small, is a decent approximation of the exact solution. Rigorous results concerning the stochastic convergence of Markov jump processes towards solutions of ODEs were proved in [1] and [7], while [4] formulates the convergence result for this particular scheme. Having computed this jump process, the integral in the Heun iteration step (1) can be computed exactly (to be precise, it can be updated at every jump of X̃(s)), and so we obtain an improved approximation X̂(t+h) ≈ X(t+h). For each component j of the system we define the quantities I_j(s) = ∫_t^s F_j(X̃(u)) du and τ_j as the last time moment when I_j was changed due to a jump of a component which influences the term F_j(X̃(s)). The algorithm based on Markov jump processes using Heun iterations can then be described as follows:

1) Given the state vector X̃(s) of the Markov process at time s:
2) Choose a component i with probability proportional to |F_i(X̃)|.
3) The random waiting time δt is exponentially distributed with parameter λ = N Σ_{j=1}^n |F_j(X̃)|: δt = −log U/λ, where U is a uniformly distributed RV on (0, 1).
4) Update the value of the time variable: s = s + δt.
5) Update the integrals: I_j ← I_j + (s − τ_j) F_j(X̃) and set τ_j = s, for all j for which F_j(X̃) depends on the sampled component i.
6) Update the value of the sampled component: X̃_i ← X̃_i + (1/N) sign(F_i(X̃)).
7) Update the values F_j(X̃) for all j for which F_j(X̃) depends on the sampled component i.
8) Counter = Counter + 1;
9) if (Counter = M) Heun-Iteration();
10) GOTO 1.

The above loop changes only the values of the process X̃(s). In the procedure Heun-Iteration() we perform the following steps, by which we obtain the improved approximation X̂(t+h) given the known value X(t):

1) I_j := I_j + (s − τ_j) F_j(X̃) for all j.
2) Update all components: X̂_j ← X_j + I_j for all j.
3) Update all sampling probabilities, setting them as F_j(X̂) for all j.
4) Reset: X̃_j = X̂_j, Counter = 0, I_j = 0, τ_j = s, for all j.

A more detailed discussion of this issue can be found in [5], where the sampling and implementation methods are also explained. Based on the predictor X̃(s), by performing a Heun iteration we thus obtain improved approximations X̂(t) and X̂(t+h) (possibly also at several intermediate time steps X̂(t + c_i h), 0 ≤ c_i ≤ 1). Having done this, we discuss next the possibility of further improving our approximation of the exact solution, by computing a new quantity following the integral scheme:

X*(t+h) = X*(t) + ∫_t^{t+h} Q(s) ds. (2)

As integrand Q(s) we take the polynomial which interpolates some intermediate values of F(X̃(·)) or F(X̂(·)). The directly simulated process X̃ or the approximations X̂ are evaluated here at a few equidistant points between t and t+h. By using an exact quadrature formula to compute the integral in (2), we can employ the principle of the Runge–Kutta method in order to further improve our approximation. The purpose of this paper is to present a general framework for this family of schemes and to illustrate this principle with several concrete examples. We mention that one scheme belonging to this family was previously introduced in our paper [5].

II. THE FRAMEWORK OF THE SCHEMES BASED ON THE RUNGE–KUTTA PRINCIPLE

The family of Runge–Kutta (RK) methods is based on the scheme

X*(t+h) = X*(t) + h Σ_{i=1}^m b_i k_i, (3)

where the k_i are suitable approximations for F(X(t + c_i h)), 0 ≤ c_i ≤ 1 (we consider here only the case of autonomous systems, to which the stochastic simulation method can be efficiently applied). For different values of m, the sum on the r.h.s. of (3) is taken as an exact quadrature formula for the polynomial Q(s) which interpolates the nodes (t + c_i h, k_i), i = 1 … m, on the interval [t, t+h]. For m = 2 and c_1 = 0, c_2 = 1 we obtain the trapezoidal rule:

X*(t+h) = X*(t) + h((1/2) k_1 + (1/2) k_2), (4)

which integrates exactly the linear interpolant for the two nodes.
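The rule (4), together with the Simpson rules given in the sequel, are closed Newton–Cotes formulas. A minimal sketch of this quadrature step (the helper name is ours):

```python
def newton_cotes_step(x, h, k):
    """X*(t+h) = X*(t) + h * sum(b_i k_i) with the closed Newton-Cotes weights
    for m = 2 (trapezoid), m = 3 (Simpson 1/3) or m = 4 (Simpson 3/8) nodes."""
    weights = {
        2: (1/2, 1/2),
        3: (1/6, 4/6, 1/6),
        4: (1/8, 3/8, 3/8, 1/8),
    }
    b = weights[len(k)]
    return x + h * sum(bi * ki for bi, ki in zip(b, k))
```

Each rule reproduces exactly the integral of the interpolating polynomial of the corresponding degree, which is how it is used in (2).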
For m = 3 and c_1 = 0, c_2 = 1/2, c_3 = 1 we obtain Simpson's 1/3 (or Kepler's) rule:

X*(t+h) = X*(t) + h((1/6) k_1 + (4/6) k_2 + (1/6) k_3), (5)

which integrates exactly the quadratic interpolation polynomial for the three nodes; in this particular case all cubic polynomials are also integrated exactly. For m = 4 and c_1 = 0, c_2 = 1/3, c_3 = 2/3, c_4 = 1 we obtain Simpson's 3/8 rule:

X*(t+h) = X*(t) + h((1/8) k_1 + (3/8) k_2 + (3/8) k_3 + (1/8) k_4), (6)

which integrates exactly the cubic interpolation polynomial for the four nodes. By considering for the k_i the values of F(X̃(t + c_i h)) or F(X̂(t + c_i h)), based on the simulated Markov process or on

its iterate, we can obtain several stochastic Runge–Kutta or Heun–Runge–Kutta schemes. The scheme (5) was already introduced in [5] (in a slightly modified version), while the other ones are new. To make a comparison: for the well-known classical Runge–Kutta methods the k's can be computed recursively by

k_i = F(X*(t) + h Σ_{j=1}^{i−1} a_ij k_j).

The values of b_i, c_i and a_ij have to be properly chosen in order to obtain consistency and convergence. The most typical examples are enumerated in the following. For m = 3 the scheme corresponding to (5) leads to the RK method of order 3:

k_1 = F(X*(t)), k_2 = F(X*(t) + h k_1/2), k_3 = F(X*(t) + h(−k_1 + 2k_2)).

But, since the quadrature formula used also integrates cubic polynomials exactly, in the deterministic setting one can take c_1 = 0, c_2 = c_3 = 1/2, c_4 = 1 in order to obtain the standard RK scheme of order 4:

k_1 = F(X*(t)), k_2 = F(X*(t) + h k_1/2), k_3 = F(X*(t) + h k_2/2), k_4 = F(X*(t) + h k_3),

based on the integration formula

X*(t+h) = X*(t) + h((1/6) k_1 + (2/6) k_2 + (2/6) k_3 + (1/6) k_4), (7)

which is in fact derived from (5), but with two approximating values k_2 and k_3 at the middle of the interval. For m = 4, the scheme corresponding to (6) leads to the 3/8 RK method of order 4:

k_1 = F(X*(t)), k_2 = F(X*(t) + h k_1/3), k_3 = F(X*(t) + h(−k_1/3 + k_2)), k_4 = F(X*(t) + h(k_1 − k_2 + k_3)).

We note that the classical scheme (7) has no natural stochastic counterpart, since in the middle of the interval we would need two approximating values k_2 and k_3. Nevertheless, the stochastic scheme based on (6) is formally similar to the 3/8 RK method of order 4 presented above. As we will see in the numerical examples, although the intermediate values k_i computed by the stochastic method demand more computational effort, they are more precise than those used by the classical deterministic RK schemes, and the overall efficiency of the new solver can be superior, especially in the case of equations which are difficult to integrate.
III. STOCHASTIC RUNGE–KUTTA SCHEMES

In this paper we consider three versions of stochastic Runge–Kutta schemes, based on the quadrature formulae (4), (5), (6). We will denote them by RK2, RK3, RK4 respectively, the number suggesting the convergence order of the corresponding deterministic method. To be precise, we consider either k_i = F(X̃(t + c_i h)), where X̃(·) is the path simulated by (dode), or k_i = F(X̂(t + c_i h)), where X̂(t + c_i h) is the result of the Heun iteration (1) at the corresponding time moment, taken on the interval elapsed since the previous Heun iteration. We will illustrate the features of the method on a very simple example. Consider the ODE x′ = x, x(0) = 1, with exact solution x(t) = e^t. We chose for example N = 50 and t_max = 0.3. We simulate the stochastic process X̃(·) by the algorithm: X̃(0) = 1, X̃ ← X̃ + 1/N after a random waiting time τ = −log(U)/(N X̃), where U is a uniformly distributed RV on (0, 1). This means that the waiting time τ is exponentially distributed with parameter λ = N X̃. Moreover, the time steps τ are automatically adapted, since this approach imposes that the current value of X̃ always changes by the fixed quantity 1/N. We note that the size of the time steps is inversely proportional to the value of X̃ and also to the value of N, which can be increased if we want to improve the time resolution and thereby the overall precision of our approximation. We compute successive jumps until the simulation time t* of the process exceeds t_max, so X̃(t*) will be the first approximation for e^{t*} (the predictor). Since we have computed a full path, we gained information not only from the final value of the simulation, which may be biased by random fluctuations, but also from the whole behavior of the process X̃ on the current time interval. We can therefore exploit this global information by performing a Heun iteration at t = t* in order to improve the approximation:

X̂ = x(0) + ∫_0^{t*} X̃(s) ds.
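The toy experiment just described can be sketched as follows (illustrative Python; the function names are ours, and the assertions below use a larger N than the N = 50 of the figure so that the outcome is stable):

```python
import math
import random

def simulate_path(N, t_max, rng):
    """Jump times and values of the process for x' = x, x(0) = 1:
    jumps of +1/N at rate N*x, stopping at the first time t* > t_max."""
    times, values = [0.0], [1.0]
    t, x = 0.0, 1.0
    while True:
        t += -math.log(1.0 - rng.random()) / (N * x)  # exponential waiting time
        if t > t_max:
            return times, values, t                   # t is the overshoot time t*
        x += 1.0 / N
        times.append(t)
        values.append(x)

def heun_correction(times, values, t_star):
    """Heun iterate: x(0) + integral of the step function X~ over [0, t*],
    computed exactly as a sum over the jump intervals."""
    total = 1.0
    for i, v in enumerate(values):
        t_next = times[i + 1] if i + 1 < len(times) else t_star
        total += v * (t_next - times[i])
    return total
```

The Heun–RK2 value of the text would then be x(0) + t*·(k_1 + k_2)/2 with k_1 = x(0) and k_2 equal to the Heun iterate returned by `heun_correction`.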
This is the integral of a step function, which can be computed explicitly as Σ_i X̃(t_i)(t_{i+1} − t_i), where the t_i are the jump times of the step function X̃(·) (more precisely, the value of the integral can be updated after every jump). We can also compute an improved approximation using the RK2 step (4), where k_1 = x(0) = 1 and either k_2 = X̃(t*) (RK2 method) or k_2 = X̂ (Heun-RK2 method). The results are plotted in Fig. 1. We illustrate here a situation where the error of the computed path is very large compared to the exact value e^{t*}, and show how the Heun iteration, the RK2 step, or the combination of both improves the approximation. In the same figure the error curve from 48 simulations (magnified by the factor 5) of the four stochastic approximations of the exact value of the exponential function is plotted (absolute value of the differences at t*). For this error curve the tics on the x-axis have no relevance. We note that the fluctuations of (dode) are very large, while Heun or RK2 have the effect to significantly

reduce the error. Moreover, the combination of both is even more precise.

Fig. 1. A sample path of X̃(·) and the improved approximations at t = t*, with error curves from 48 simulations.

Turning back to the general problem, we note that since we consider stochastic processes with small random jump times, we cannot hit exactly the prescribed time steps required by the Runge–Kutta scheme. Instead, we take the first time when the prescribed time step is exceeded during the simulation. If the jump time intervals τ of the stochastic process are small compared to the time step h considered in the Runge–Kutta scheme (a typical example: h = 10⁻⁷, τ = O(10⁻¹¹)), the error is expected also to be sufficiently small. Alternatively, one can work with the exact times given by the stochastic process, but then the simple formulae (4)-(6) have to be replaced with the exact computation of the linear, quadratic or cubic interpolation polynomial and of its integral. This eliminates the error induced by (5)-(6) when using slightly different time moments, but at a higher computational cost. This version was used in [5] in order to correct the formula (5). For this paper we performed several experiments, which showed the following facts. First, if the time steps are not extremely small, the overall efficiency does not change, regardless of which alternative we choose: the cheap approximate computation of the integral with inexact time moments, or the more expensive interpolation-based exact computation. Moreover, the stochastic RK2 method with fixed time steps is too imprecise in problems which require a high time resolution, as in our test case, where the other two are preferable. Nevertheless, we will see later that it is a good choice if we consider a stochastic Runge–Kutta scheme with adapted time steps. Second, if the time steps are taken very small, then the computation of the interpolation polynomials (either by the Lagrange or by the Newton formula) will involve difference quotients with very small terms, and the output given by the computer is biased by rounding errors. This phenomenon can be observed in a sudden loss of precision if we use quadratic polynomials in the RK3 method, or even in chaotic results if we use cubic interpolation polynomials in the RK4 method.

IV. ADAPTIVE STOCHASTIC RUNGE–KUTTA SCHEMES

Another possibility is to use a stochastic RK scheme with adapted time steps. That is, we do not prescribe the value of the time step, but we fix a number M of jumps of the process (typical example: M = n in 1D and M = 0.3n in 2D, where n is the number of equations), and the time step is determined automatically as the sum of all lengths of jump intervals since the previous step (which are in turn automatically adapted, by the nature of the jump process). As in the situation with fixed time intervals, we can either use as predictor only the simulated Markov jump process X̃(·), or its Heun iterates X̂(·). For the RK2 method we thus consider simply the process or its Heun iterate at the time moment when M jumps were performed. The formula (4) yields the exact integral of the corresponding linear interpolant. For the RK3 and RK4 methods we have to compute the intermediate k_i's, now related to the number M of jumps: after M/2 jumps in the case of RK3, and after M/3 and 2M/3 jumps in the case of RK4. But now, since the time moments are not necessarily equidistant (not even approximately so), we are obliged to use an exact computation of the interpolation polynomial and of its integral. The drawbacks of this approach were presented above, and experience showed that the adapted RK4 method, although theoretically it should be the most precise, is impracticable due to the mentioned rounding errors.
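For non-equidistant nodes, the exact integral of the Lagrange interpolation polynomial can be computed as sketched below (an illustrative fragment with names of our choosing; for the very small time steps discussed above it suffers from exactly the rounding problems mentioned in the text):

```python
def integrate_interpolant(nodes, k):
    """Exact integral over [nodes[0], nodes[-1]] of the polynomial
    interpolating the points (nodes[i], k[i]). Each Lagrange basis
    polynomial is expanded into monomial coefficients and integrated."""
    a, b = nodes[0], nodes[-1]
    total = 0.0
    for i, xi in enumerate(nodes):
        coeffs = [1.0]        # coefficients of prod_{j != i} (x - x_j), lowest degree first
        denom = 1.0
        for j, xj in enumerate(nodes):
            if j == i:
                continue
            denom *= (xi - xj)
            new = [0.0] * (len(coeffs) + 1)
            for p, c in enumerate(coeffs):
                new[p + 1] += c        # multiply by x
                new[p] -= c * xj      # multiply by (-x_j)
            coeffs = new
        # integrate the monomials from a to b
        integral = sum(c / (p + 1) * (b ** (p + 1) - a ** (p + 1))
                       for p, c in enumerate(coeffs))
        total += k[i] * integral / denom
    return total
```

With equidistant nodes this reduces to the Newton–Cotes formulas (4)-(6); with the jump-determined, non-equidistant nodes of the adaptive schemes it replaces them.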
Nevertheless, the adapted RK3 method functions well and is in principle slightly better than the adapted RK2 method, except in the range where the time intervals after M jumps become too small (due to large values of N considered in the stochastic scheme). With increasing values of N the convergence of the RK2 method improves, while the RK3 method, for N larger than a threshold value, remains at the same error level due to the rounding effects. Another aspect of the adapted RK3 method is the handling of the results when reaching the maximal computation time t_max. If this occurs after computing k_2 after M/2 jumps, then we simply compute k_3 at the stopping time of the process and take the usual approach. However, if we reach t_max with fewer than M/2 jumps, then we simply perform a RK2 step for this last time interval.

V. NUMERICAL EXAMPLES

We illustrate the application of several stochastic, but also deterministic, methods on the test equation

u_t = Δu + (5e^δ/δ)(2 − u) exp(−δ/u) (8)

with initial condition u_0 ≡ 1, either on (0, 1) with boundary conditions ∂_ν u(0) = 0 and u(1) = 1, or on the square

(0, 1)², with boundary conditions ∂_ν u = 0 if x = 0 or y = 0, and u = 1 if x = 1 or y = 1. This is a standard benchmark problem which requires a high time resolution [6]. Its solution in 1D is plotted in Fig. 2. The variable u denotes a temperature which increases gradually up to a critical value at which ignition occurs (in this example at time t = 0.240), resulting in a fast propagation of a reaction front towards the right end of the interval. Due to the very high speed and the steepness of the front, a very precise time resolution is crucial in any numerical approximation of this problem. By a finite difference discretization with spatial mesh size of 1/400 we obtain a system of ODEs, for which we compute the solution up to t = 0.244 in the case δ = 30, and up to t = 0.27 in the case δ = 20. Note that the larger the value of δ, the larger the stiffness of the problem. As error measure we consider the supremum norm of the difference to a reference solution vector u at the end of the computational time interval. The reference solution is computed in MATLAB with the highest possible degree of precision by the standard solver ode45. As an alternative we also test the stiff solver ode15s. For δ = 20 the CPU times were 18 seconds for ode45 and less than 2 seconds for ode15s, the maximal difference between the two solutions being about 1.4·10⁻⁹. For δ = 30 the CPU times were 36 seconds and 14 seconds for ode45 and ode15s respectively, the difference between the two solutions being slightly less than 10⁻⁷.

Fig. 2. The solution of (8) in one space dimension for δ = 30.
Fig. 3. Efficiency comparison in 1D for δ = 20, n = 400.
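A minimal sketch of one possible method-of-lines discretization of (8) in 1D follows. The node placement and the mirror-node treatment of the Neumann boundary are our assumptions; the paper states only the mesh size.

```python
import math

def make_rhs(n, delta):
    """Right-hand side F(u) of the ODE system obtained from
    u_t = u_xx + (5 e^delta / delta) (2 - u) exp(-delta / u) on (0, 1),
    with u_x(0) = 0, u(1) = 1 and mesh width h = 1/n. The unknowns sit
    at x_i = i*h, i = 0, ..., n-1 (a sketch, not the author's code)."""
    h = 1.0 / n
    c = 5.0 * math.exp(delta) / delta
    def F(u):
        out = [0.0] * len(u)
        last = len(u) - 1
        for i, ui in enumerate(u):
            left = u[1] if i == 0 else u[i - 1]     # mirror node enforces u_x(0) = 0
            right = 1.0 if i == last else u[i + 1]  # Dirichlet value u(1) = 1
            out[i] = (left - 2.0 * ui + right) / (h * h) \
                     + c * (2.0 - ui) * math.exp(-delta / ui)
        return out
    return F
```

Since each F_i depends only on three neighboring components, the incidence matrix is sparse, which is precisely the setting in which the stochastic direct simulation method is efficient.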
Since the solver ode15s has a lower convergence order, we consider as reference solution the results of ode45, which are much closer (O(10⁻¹³) for δ = 20 and O(10⁻¹⁰) for δ = 30) to the results produced by the solver ode113. The high performance of all these solvers relies on the powerful facilities of MATLAB, which involve computations performed in a vectorial manner. The codes have only a few program lines, with few function calls, and the pattern of the corresponding Jacobian is computed a priori, within a separate function.

Fig. 4. Efficiency comparison in 1D for δ = 30, n = 400.

Our algorithms were however programmed in an object-oriented manner in C++, and the computations involving vectors were performed sequentially, within usual loops. For this reason, in order to get a better comparison, we also use a self-programmed version of ode45, which is nothing but a time-adaptive Runge–Kutta method of order 5, called the Dormand–Prince method. In contrast to MATLAB, here the updates of the involved vectorial quantities also occur within normal loops. At the same time we also use the results produced by three standard Runge–Kutta solvers with fixed time steps, called rk2, rk3 and rk4, which correspond to the schemes (4), (5) and (7). Fig. 3 shows the results of the different methods for the case δ = 20, which is the easier case, with a less steep front. We note that the standard solvers rk2, rk3 and rk4 exhibit an unstable behavior. For a certain range of the time steps the error is extremely small, but if we take larger or smaller values, the error gets larger. However, the time-adapted Dormand–Prince solver shows stability and a good

precision. We discuss now the performance of the stochastic solvers. Due to their poor performance in problems like this, where an extremely precise time resolution is needed, we omit plotting the results of (dode) (i.e. the direct simulation of the Markov jump process, which is at the core of all the other schemes) and of RK2. Nevertheless, the solver (dode) combined with Heun iterations shows a performance similar to RK3 and RK4. The method RK2-adap behaves slightly better, while RK3-adap, after showing for a while a similar efficiency to the latter solver, afterwards exhibits the previously described phenomenon of rounding errors, which becomes visible as a loss of precision if we increase N, thus decreasing the length of the time intervals. The score obtained by the deterministic Dormand–Prince solver is slightly better than that of the previous group of stochastic methods.

As the next group of stochastic solvers we consider RK3-Heun and RK4-Heun. That is, at the time moments prescribed by the Runge–Kutta methods we do not use the values produced by (dode), but improve them by a Heun iteration. In this case, the precision is increased by taking larger N and smaller time steps h. Note that h cannot be arbitrarily increased (for deterministic methods this would yield instability, here larger errors), while for small h we have to take a sufficiently large N in order to obtain good results. This implies smaller jump times and therefore longer computation times. Nevertheless, if N is too large, for fixed h we do not observe an increased precision. There is therefore an optimal dependency between h and N, which can be determined experimentally in order to obtain the best efficiency. We note that the RK-Heun solvers are not cheap ones; their employment therefore makes sense only if we seek a high degree of precision, even at a higher computational cost. In any case, at the same (high) computational cost, they are definitely more precise than the previous groups of solvers.
The last group of solvers consists of the adaptive Runge–Kutta–Heun solvers RK2-adap-Heun and RK3-adap-Heun in its first variant (where we perform the Heun iterations after M/2 and M jumps). In a certain range they seem to exhibit a similar performance, clearly better than all the other solvers, while for large values of N the already discussed imprecision of RK3-adap-Heun shows up again. Nevertheless, since in the solver RK2-adap-Heun we don't use interpolation polynomials of high degree, this type of error does not appear, and overall this solver may be considered the test winner.

Fig. 4 shows the results for δ = 30, which is a much more demanding problem than the previous one. We will mention only some differences compared to the former situation. The solvers RK3 and RK4 now perform similarly to the deterministic Dormand–Prince solver and better than the (dode)-Heun solver or RK2-adap or RK3-adap. Within a certain range, the best efficiency is provided this time by RK3-adap, and RK2-adap surpasses it only for larger values of N. Within this range, even if RK2-adap is again the best one, its efficiency is comparable to that of RK3-adap, RK3-Heun and RK4-Heun.

We next perform computations for the smaller spatial discretization step of 1/1000. In order to obtain a good reference solution we use the solver RK2-adap-Heun with a large value of N. The reason for this choice is that in this case the MATLAB solvers reach their limits. The solver ode15s with the minimal value of the parameter RelTol = 2.2·10⁻¹⁴ is still fast but, by its construction, it is not a high-precision solver. For δ = 20 we can use the solver ode113 with the smallest possible value RelTol = 10⁻¹⁰ (due to memory reasons), the difference between the two computed solutions being of order 10⁻⁸. So, this is virtually the best precision which can be reached by the MATLAB solvers.

Fig. 5. 1D for δ = 20, n = 1000.
Fig. 6. 1D for δ = 30, n = 1000.
For δ = 30 we can use ode113 only with RelTol larger than 10^-5, the difference to the solution delivered by ode15s being of order 10^-4. The results of the numerical experiments are plotted in Fig. 5 and 6. We compare here
ISSN: 1998-0159

Fig. 7. Efficiency comparison in 2D for δ = 30, n = 120 × 120
Fig. 8. Efficiency comparison in 2D for δ = 30, n = 200 × 200

only stochastic solvers, all based on -iterations in different variants: and the adaptive schemes RK2-adap- and RK3-adap-. As deterministic solver in this test we use the standard one. We note that this scheme performs similarly to the stochastic solver, and in the precision range up to 10^-4–10^-5 both of them are faster than the adaptive stochastic RK solvers. Nevertheless, if higher precision is required (for example, to compute a good reference solution), the latter solvers perform better. RK2-adap- turns out to be again a robust solver with good precision. We observe the known problems caused by rounding errors in the case of the solver RK3-adap-. Although in a certain precision range this solver performs slightly better than the other stochastic Runge–Kutta solver, by increasing the precision parameters above a certain level the precision of the computed solutions does not improve. This is, however, not the case for the solver RK2-adap-. We note also that by using this scheme we can obtain a higher precision than with the solvers available in MATLAB. The next figures depict the results for the 2D problem with δ = 30, computed up to t_max = 0.268. Fig. 7 corresponds to the spatial discretization step of 1/120 (reference solution computed by ode113), while Fig. 8 to a step of 1/200 (reference solution computed by RK2-adap-). We compare the same solvers as in the previous example, together with the MATLAB solver ode15s and the standard Runge–Kutta method rk4 (only for the spatial discretization step of 1/120). We note that in the case of this difficult problem, with a large number of equations, the performance of this solver is poor compared to the others. Concerning the stochastic solvers, the picture is almost the same as before.
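Efficiency comparisons of this kind plot the achieved error against the computational cost for a range of precision parameters. The outline of such a measurement can be sketched with standard tools; the snippet below uses SciPy's BDF solver (as a loose stand-in for ode15s) on a method-of-lines semidiscretization of the 1D heat equation, which is only an assumed substitute for the paper's actual reaction–diffusion problem:

```python
import time
import numpy as np
from scipy.integrate import solve_ivp

# Method-of-lines semidiscretization of u_t = u_xx on (0, 1) with
# homogeneous Dirichlet conditions -- an assumed stand-in problem.
n = 200
h = 1.0 / (n + 1)
x = np.linspace(0.0, 1.0, n + 2)[1:-1]   # interior grid points
u0 = np.sin(np.pi * x)                   # eigenvector of the discrete Laplacian

def rhs(t, u):
    d2 = np.empty_like(u)
    d2[1:-1] = u[:-2] - 2.0 * u[1:-1] + u[2:]
    d2[0] = -2.0 * u[0] + u[1]           # zero boundary values
    d2[-1] = u[-2] - 2.0 * u[-1]
    return d2 / h**2

# Since u0 is an exact eigenvector of the discrete Laplacian, the
# semidiscrete solution is exp(-lam*t)*u0 and the measured error
# reflects the time integration alone, not the spatial discretization.
lam = 2.0 * (1.0 - np.cos(np.pi * h)) / h**2
t_end = 0.1
exact = np.exp(-lam * t_end) * u0

results = []
for rtol in (1e-4, 1e-7, 1e-10):
    start = time.perf_counter()
    sol = solve_ivp(rhs, (0.0, t_end), u0, method="BDF",
                    rtol=rtol, atol=1e-3 * rtol)
    cost = time.perf_counter() - start
    err = np.max(np.abs(sol.y[:, -1] - exact))
    results.append((rtol, err, cost))
    print(f"rtol={rtol:.0e}  error={err:.2e}  time={cost:.3f}s")
```

The (error, time) pairs trace out one efficiency curve of the kind plotted in the figures; the stochastic solvers contribute analogous curves, with the step size h and the particle number N playing the role of the precision parameters.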
The solver performs similarly to the deterministic rk4 method. If one needs a higher precision, the stochastic solvers RK2-adap- and RK3-adap- perform better. In both cases we considered also the results of the MATLAB solver ode15s, set up to the maximal possible precision. Due to the fast MATLAB codes, based internally on strong parallelization, the ODE solvers available in this software are generally very fast. Nevertheless, for difficult problems (as in the 1D case with a very small discretization step of 1/1000, or in the 2D case for steps of 1/200 or smaller), the stochastic solvers presented in this paper perform better in the high-precision range.

VI. CONCLUSIONS

The conclusion of these experiments is that by using the and/or Runge–Kutta principle, the performance of the stochastic direct simulation method can be enhanced considerably. The stochastic solvers presented here are comparable to deterministic ones, at least in a precision range relevant for PDEs. Among all Runge–Kutta solvers of stochastic type, we distinguish the time-adaptive solver RK2-adap-, which shows the best all-round performance, whether we need good precision at a mostly cheap computational cost or very high precision, even at the price of an expensive computation.