Numerical solvers for large systems of ordinary differential equations based on the stochastic direct simulation method improved by the Heun and Runge-Kutta principles

Flavius Guiaş

Abstract. We present a numerical scheme for approximating solutions of large systems of ordinary differential equations which at its core employs a stochastic component. The approach used at this level is called the stochastic direct simulation method and is based on path simulations of suitable Markov jump processes. It is efficient especially for large systems with a sparse incidence matrix, the most typical application being spatially discretized partial differential equations, for example by finite differences. Due to its explicit character, this method is easy to implement and can serve as a predictor for improved approximations. One possibility to reach this is by the Heun principle. Since we have simulated a full path of the corresponding Markov jump process, we can obtain more precise values by using Heun iterations over small time intervals. This requires the computation of integrals of step functions, which can be performed explicitly. A further way to increase the precision of the direct simulation method is to use a Runge-Kutta principle. In contrast to the Heun scheme, here one integrates a polynomial function which interpolates either the values of the original jump process, or the values improved by the Heun iterations, at some equidistant points in the time discretization interval. These integrals can be computed by a proper quadrature formula from the Newton-Cotes family, which is also used in the standard deterministic Runge-Kutta schemes. However, the intermediate values which are plugged into the quadrature formulae are computed in our method by stochastic simulation, possibly followed by a Heun iteration, while in the usual Runge-Kutta methods one uses the well-known Butcher tableau. We also introduce a time-adaptive version of the stochastic Runge-Kutta scheme. Here we do not take fixed time intervals, but a fixed number of jumps of the underlying process. Depending on the scheme, we may consider intermediate points after half of this number of jumps. Since in this case the points are not necessarily equidistant in time, we have to compute the corresponding interpolation polynomial and its integral exactly. If high precision is required, this adaptive variant of the stochastic Runge-Kutta method combined with Heun iterations turns out to be the most effective compared to the other methods from this family. We illustrate the features of all considered schemes on a standard benchmark problem, a reaction-diffusion equation modeling a combustion process in one space dimension (1D) and two space dimensions (2D).

Keywords: numerical approximation of ordinary differential equations, stochastic direct simulation method, Heun iterations, Runge-Kutta principle.

Flavius Guiaş is with the Dortmund University of Applied Sciences and Arts, Sonnenstr. 96, Dortmund, Germany; e-mail: flavius.guias@fh-dortmund.de

I. INTRODUCTION

Consider n-dimensional autonomous systems of ordinary differential equations (ODEs) Ẋ = F(X), F = (F_i)_{i=1}^n, written in an integral form on a small time interval:

X(t + h) = X(t) + ∫_t^{t+h} F(X(s)) ds.

The value X̂(t) ≈ X(t) is assumed to be computed by a certain numerical scheme, and the goal is now to determine a value X̂(t + h) which approximates the solution at the next point of the time discretization grid. If we have computed a predictor X̃(s) on the whole interval [t, t + h], then one can in principle obtain a better approximation by performing a Heun iteration:

X̂(t + h) = X̂(t) + ∫_t^{t+h} F(X̃(s)) ds.   (1)
In our approach we compute the predictor X̃(s) by the stochastic direct simulation method (see also [2], [3], [4], [5]) as a path of an appropriate Markov jump process, in which at every jump only one component of the process is changed, with a fixed increment (±1/N). The steps of this method are the following:

1) Given the state vector X̃(s) of the Markov process at time s:
2) Choose a component i with probability proportional to |F_i(X̃)|.
3) The random waiting time δt is exponentially distributed with parameter λ = N Σ_{j=1}^n |F_j(X̃)|: δt = −log U / λ, where U is a uniformly distributed random variable on (0, 1).
4) Update the value of the time variable: s = s + δt.
5) Update the value of the sampled component: X̃_i ← X̃_i + (1/N) sign(F_i(X̃)).
6) Update the values F_j(X̃) for all j for which F_j(X̃) depends on the sampled component i.
7) GOTO 1.

We note that the larger N, or the larger the absolute values of the right-hand sides of the equations, the smaller the random time step between two jumps. This implies an automatic time adaptation of the scheme which computes the predictor X̃(s) by direct simulation; a minimal sketch of this loop is given below.
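The following Python sketch illustrates the predictor loop above; the function names and the dense re-evaluation of F at every jump are our illustrative simplifications (in the method only the components affected by the last jump are updated), not the implementation used in the paper.

import numpy as np

def direct_simulation(F, x0, t0, h, N, rng=np.random.default_rng(0)):
    """One path of the Markov jump process on [t0, t0 + h] (the predictor X_tilde).

    F  : callable returning the right-hand side F(X) as a NumPy array
    N  : jump magnitude parameter; every jump changes one component by 1/N
    Returns the jump times and the corresponding piecewise constant states.
    """
    x, t = x0.astype(float).copy(), t0
    times, path = [t], [x.copy()]
    while t < t0 + h:
        f = F(x)                                # for sparse systems only the F_j affected by
        rates = N * np.abs(f)                   # the last jump would be recomputed (step 6)
        lam = rates.sum()
        if lam == 0.0:                          # stationary point: no further jumps
            break
        i = rng.choice(len(x), p=rates / lam)   # step 2: sample component i ~ |F_i(X)|
        t += rng.exponential(1.0 / lam)         # steps 3-4: exponential waiting time
        x[i] += np.sign(f[i]) / N               # step 5: jump of the sampled component
        times.append(t)
        path.append(x.copy())
    return np.array(times), np.array(path)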

The result of this simple algorithm is a vector-valued step function which, if we let the magnitude of the jumps be sufficiently small, is a decent approximation of the exact solution. Rigorous results concerning the stochastic convergence of Markov jump processes towards solutions of ODEs were proved in [1] and [7], while [4] formulates the convergence result for this particular scheme. Having computed this jump process, the integral in the Heun iteration step (1) can be computed exactly (to be precise, it can be updated at every jump of X̃(s)), and so we obtain an improved approximation X̂(t + h) ≈ X(t + h). For each component j of the system we define the quantities I_j(s) = ∫_t^s F_j(X̃(u)) du, and τ_j as the last time moment when I_j was changed due to a jump of a component which influences the term F_j(X̃(s)). The algorithm based on Markov jump processes using Heun iterations can then be described as follows:

1) Given the state vector X̃(s) of the Markov process at time s:
2) Choose a component i with probability proportional to |F_i(X̃)|.
3) The random waiting time δt is exponentially distributed with parameter λ = N Σ_{j=1}^n |F_j(X̃)|: δt = −log U / λ, where U is a uniformly distributed random variable on (0, 1).
4) Update the value of the time variable: s = s + δt.
5) Update the Heun integrals: I_j ← I_j + (s − τ_j) F_j(X̃) and set τ_j = s, for all j for which F_j(X̃) depends on the sampled component i.
6) Update the value of the sampled component: X̃_i ← X̃_i + (1/N) sign(F_i(X̃)).
7) Update the values of F_j(X̃) for all j for which F_j(X̃) depends on the sampled component i.
8) Counter = Counter + 1;
9) if (Counter = M) Heun-Iteration();
10) GOTO 1.

The above loop changes only the values of the process X̃(s). In the procedure Heun-Iteration() we perform the following steps, by which we obtain the improved approximation X̂(t + h) given the known value X̂(t):

1) I_j := I_j + (s − τ_j) F_j(X̃) for all j.
2) Update all components: X̂_j ← X̂_j + I_j for all j.
3) Update all sampling probabilities, setting them proportional to |F_j(X̂)| for all j.
4) Reset: X̃_j = X̂_j, Counter = 0, I_j = 0, τ_j = s, for all j.

A more detailed discussion of this issue can be found in [5], where also the sampling and implementation methods are explained. Based on the predictor X̃(s), by performing a Heun iteration we thus obtain improved approximations X̂(t) and X̂(t + h) (possibly also at several intermediate time steps X̂(t + c_i h), 0 ≤ c_i ≤ 1); a sketch of this correction is given below. Having done this, we discuss next the possibility of further improving our approximation of the exact solution, by computing a new quantity following the integral scheme:

X*(t + h) = X*(t) + ∫_t^{t+h} Q(s) ds.   (2)

As integrand Q(s) we take the polynomial which interpolates some intermediate values of F(X̃(·)) or F(X̂(·)). The directly simulated process X̃ or the approximations X̂ are evaluated here at a few equidistant points between t and t + h. By using an exact quadrature formula to compute the integral in (2), we can employ the principle of the Runge-Kutta method in order to further improve our approximation. The purpose of this paper is to present a general framework for this family of schemes and to illustrate this principle with several concrete examples. We mention that one scheme belonging to this family was previously introduced in our paper [5].

II. THE FRAMEWORK OF THE SCHEMES BASED ON THE RUNGE-KUTTA PRINCIPLE

The family of Runge-Kutta (RK) methods is based on the scheme

X*(t + h) = X*(t) + h Σ_{i=1}^m b_i k_i,   (3)

where the k_i are suitable approximations of F(X(t + c_i h)), 0 ≤ c_i ≤ 1 (we consider here only the case of autonomous systems, to which the stochastic simulation method can be efficiently applied).
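For the Heun correction (1) announced above, a minimal sketch in the same illustrative style follows; since X̃ is a step function, the integral is an exact finite sum, and the incremental I_j / τ_j bookkeeping in the algorithm is an optimized version of exactly this computation. The names reuse the illustrative direct_simulation output from the previous sketch.

import numpy as np

def heun_correction(F, x_hat_t, times, path, t_end):
    """Heun iteration (1): X_hat(t+h) = X_hat(t) + integral of F(X_tilde(s)) over [t, t+h].

    times, path : jump times and states of the predictor X_tilde, starting at time t
    x_hat_t     : the (already corrected) value X_hat(t)
    t_end       : the right end point t + h
    """
    integral = np.zeros_like(x_hat_t)
    for k in range(len(times) - 1):
        dt = min(times[k + 1], t_end) - times[k]
        if dt > 0.0:
            integral += dt * F(path[k])          # F(X_tilde) is constant between two jumps
    if times[-1] < t_end:                        # tail piece up to t + h, if any
        integral += (t_end - times[-1]) * F(path[-1])
    return x_hat_t + integral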
For different values of m, the sum on the right-hand side of (3) is taken as an exact quadrature formula for the polynomial Q(s) which interpolates the nodes (t + c_i h, k_i), i = 1, ..., m, on the interval [t, t + h]. For m = 2 and c_1 = 0, c_2 = 1 we obtain the trapezoidal rule

X*(t + h) = X*(t) + h ( (1/2) k_1 + (1/2) k_2 ),   (4)

which integrates exactly the linear interpolant of the two nodes. For m = 3 and c_1 = 0, c_2 = 1/2, c_3 = 1 we obtain Simpson's 1/3 (or Kepler's) rule

X*(t + h) = X*(t) + h ( (1/6) k_1 + (4/6) k_2 + (1/6) k_3 ),   (5)

which integrates exactly the quadratic interpolation polynomial of the three nodes; in this particular case all cubic polynomials are also integrated exactly. For m = 4 and c_1 = 0, c_2 = 1/3, c_3 = 2/3, c_4 = 1 we obtain Simpson's 3/8 rule

X*(t + h) = X*(t) + h ( (1/8) k_1 + (3/8) k_2 + (3/8) k_3 + (1/8) k_4 ),   (6)

which integrates exactly the cubic interpolation polynomial of the four nodes. By considering for the k_i the values of F(X̃(t + c_i h)) or F(X̂(t + c_i h)), based on the simulated Markov process or on its Heun iterate, we can obtain several stochastic Runge-Kutta or Runge-Kutta-Heun schemes.
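In code, the quadrature steps (3)-(6) reduce to weighted combinations of the stage values with the Newton-Cotes weights; a small illustrative sketch (the names are ours, not from the paper):

import numpy as np

# Newton-Cotes weights b_i and nodes c_i used in (4), (5), (6)
SCHEMES = {
    "RK2": (np.array([1/2, 1/2]),           np.array([0.0, 1.0])),           # trapezoidal rule
    "RK3": (np.array([1/6, 4/6, 1/6]),      np.array([0.0, 0.5, 1.0])),      # Simpson 1/3
    "RK4": (np.array([1/8, 3/8, 3/8, 1/8]), np.array([0.0, 1/3, 2/3, 1.0])), # Simpson 3/8
}

def rk_step(x_t, h, k_values, scheme="RK3"):
    """Evaluate (3): X*(t+h) = X*(t) + h * sum_i b_i k_i.

    k_values[i] approximates F(X(t + c_i h)); in the stochastic schemes these values come
    from the simulated path X_tilde or its Heun iterate X_hat, while in the classical
    deterministic schemes they are produced by the Butcher tableau.
    """
    b, _c = SCHEMES[scheme]
    return x_t + h * sum(bi * ki for bi, ki in zip(b, k_values))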

The scheme (5) was already introduced in [5] (in a slightly modified version), while the other ones are new. To make a comparison: for the well-known classical Runge-Kutta methods the k's are computed recursively by

k_i = F ( X*(t) + h Σ_{j=1}^{i−1} a_ij k_j ).

The values of b_i, c_i and a_ij have to be properly chosen in order to obtain consistency and convergence. The most typical examples are enumerated in the following. For m = 3 the scheme corresponding to (5) leads to the RK method of order 3:

k_1 = F(X*(t)),  k_2 = F(X*(t) + h k_1 / 2),  k_3 = F(X*(t) − h k_1 + 2h k_2).

But, since the quadrature formula used here integrates exactly also cubic polynomials, in the deterministic setting one can take c_1 = 0, c_2 = c_3 = 1/2, c_4 = 1 in order to obtain the standard RK scheme of order 4:

k_1 = F(X*(t)),  k_2 = F(X*(t) + h k_1 / 2),  k_3 = F(X*(t) + h k_2 / 2),  k_4 = F(X*(t) + h k_3),

based on the integration formula

X*(t + h) = X*(t) + h ( (1/6) k_1 + (2/6) k_2 + (2/6) k_3 + (1/6) k_4 ),   (7)

which is in fact derived from (5), but with two approximating values k_2 and k_3 at the middle of the interval. For m = 4, the scheme corresponding to (6) leads to the 3/8 RK method of order 4:

k_1 = F(X*(t)),  k_2 = F(X*(t) + h k_1 / 3),  k_3 = F(X*(t) − h k_1 / 3 + h k_2),  k_4 = F(X*(t) + h k_1 − h k_2 + h k_3).

We note that the classical scheme (7) has no natural stochastic counterpart, since in the middle of the interval we would need two approximating values k_2 and k_3. Nevertheless, the stochastic scheme based on (6) is formally similar to the 3/8 RK method of order 4 presented above. As we will see in the numerical examples, although the intermediate values k_i computed by the stochastic method demand more computational effort, they are more precise than those used by the classical deterministic RK schemes, and the overall efficiency of the new solver can be superior, especially in the case of equations which are difficult to integrate.

III. STOCHASTIC RUNGE-KUTTA SCHEMES

In this paper we consider three versions of stochastic Runge-Kutta schemes, based on the quadrature formulae (4), (5), (6). We will denote them by RK2, RK3, RK4 respectively, the number suggesting the convergence order of the corresponding deterministic method. To be precise, we consider either k_i = F(X̃(t + c_i h)), where X̃(·) is the path simulated by the direct simulation method (dode), or k_i = F(X̂(t + c_i h)), where X̂(t + c_i h) is the result of the Heun iteration (1) at the corresponding time moment, taken on the interval elapsed since the previous Heun iteration. We will illustrate the features of the method on a very simple example. Consider the ODE x' = x, x(0) = 1, with exact solution x(t) = e^t. We choose for example N = 50 and t_max = 0.3. We simulate the stochastic process X̃(·) by the algorithm: X̃(0) = 1, X̃ ← X̃ + 1/N after a random waiting time τ = −log(U)/(N X̃), where U is a uniformly distributed random variable on (0, 1). This means that the waiting time τ is exponentially distributed with parameter λ = N X̃. Moreover, the time steps τ are automatically adapted, since this approach imposes that the current value of X̃ always changes by the fixed quantity 1/N. We note that the size of the time steps is inversely proportional to the value of X̃ and also to the value of N, which can be increased if we want to improve the time resolution and by this the overall precision of our approximation. We compute successive jumps until the simulation time t* of the process exceeds t_max, so X̃(t*) will be the first approximation of e^{t*} (predictor).
Since we have computed a full path, we gained information not only from the final value of the simulation, which may be biased by random fluctuations, but also from the whole behavior of the process X̃ on the current time interval. We can therefore exploit this global information by performing a Heun iteration at t = t* in order to improve the approximation:

X̂ = x(0) + ∫_0^{t*} X̃(s) ds.

This is the integral of a step function, which can be computed explicitly as Σ_i X̃(t_i)(t_{i+1} − t_i), where t_i are the jump times of the step function X̃(·) (more precisely, the value of the integral can be updated after every jump). We can also compute an improved approximation using the RK2 step (4), where k_1 = x(0) = 1 and either k_2 = X̃(t*) (RK2 method) or k_2 = X̂ (Heun-RK2 method). The results are plotted in Fig. 1. We illustrate here a situation where the error of the computed path is very large compared to the exact value e^{t*} and show how the Heun iteration, the RK2 step, or the combination of both improves the approximation. In the same figure the error curves from 48 simulations (magnified by the factor 5) of the four stochastic approximations of the exact value of the exponential function (absolute value of the differences at t*) are plotted; for these error curves the tics on the x-axis have no relevance. We note that the fluctuations of (dode) are very large, while the Heun or RK2 corrections significantly reduce the error.
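The whole toy example fits in a few lines; the following sketch (with N and t_max as in the text, everything else illustrative) returns the four approximations compared in Fig. 1, together with the exact value.

import numpy as np

def exponential_example(N=50, t_max=0.3, seed=1):
    """Toy example x' = x, x(0) = 1: predictor, Heun correction, RK2 and Heun-RK2 values."""
    rng = np.random.default_rng(seed)
    t, x = 0.0, 1.0
    jump_times, values = [t], [x]
    while t < t_max:                              # direct simulation of the jump process
        t += rng.exponential(1.0 / (N * x))       # waiting time ~ Exp(N * X)
        x += 1.0 / N                              # every jump increases X by 1/N
        jump_times.append(t)
        values.append(x)
    predictor = values[-1]                        # X_tilde(t*), first approximation of e^{t*}
    # Heun correction: exact integral of the step function X_tilde over [0, t*]
    heun = 1.0 + sum(values[k] * (jump_times[k + 1] - jump_times[k])
                     for k in range(len(values) - 1))
    # RK2 (trapezoidal) step with k1 = x(0) = 1 and k2 = predictor or Heun value
    t_star = jump_times[-1]
    rk2      = 1.0 + t_star * (1.0 + predictor) / 2.0
    heun_rk2 = 1.0 + t_star * (1.0 + heun) / 2.0
    return predictor, heun, rk2, heun_rk2, np.exp(t_star)

Running this for many seeds should reproduce qualitatively the behavior shown in Fig. 1: the raw predictor fluctuates strongly, while the corrected values stay much closer to the exact one.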

Fig. 1. A sample path of X̃(·) and the improved approximations at t = t*, with error curves from 48 simulations (magnified by the factor 5).

Moreover, the combination of both is even more precise. Turning back to the general problem, we note that since we consider stochastic processes with small random jump times, we cannot hit exactly the prescribed time steps required by the Runge-Kutta scheme. Instead, we take the first time when the prescribed time step is exceeded during the simulation. If the jump time intervals τ of the stochastic process are small compared to the time step h considered in the Runge-Kutta scheme (a typical example: h = 10^{-7}, τ = O(10^{-11})), the error is expected also to be sufficiently small. Alternatively, one can work with the exact times given by the stochastic process, but then the simple formulae (4)-(6) have to be replaced with the exact computation of the linear, quadratic or cubic interpolation polynomial and of its integral. This eliminates the error induced by (5)-(6) when using slightly different time moments, but at a higher computational cost. This version was used in [5] in order to correct the formula (5). For this paper we performed several experiments, which showed the following facts. First, if the time steps are not extremely small, the overall efficiency does not change, regardless of which alternative we choose: the cheap approximate computation of the integral with inexact time moments, or the more expensive interpolation-based exact computation. Moreover, the stochastic RK2 method with fixed time steps is too imprecise in problems which require a high time resolution, as in our test case, where the other two are preferable. Nevertheless, we will see later that it is a good choice if we consider a stochastic Runge-Kutta scheme with adapted time steps. Second, if the time steps are taken very small, then the computation of the interpolation polynomials (either by the Lagrange or by the Newton formula) involves difference quotients with very small terms, and the output given by the computer is biased by rounding errors. This phenomenon can be observed in a sudden loss of precision if we use quadratic polynomials in the RK3 method, or even in chaotic results if we use cubic interpolation polynomials in the RK4 method.

IV. ADAPTIVE STOCHASTIC RUNGE-KUTTA SCHEMES

Another possibility is to use a stochastic RK scheme with adapted time steps. That is, we do not prescribe the value of the time step, but we fix a number M of jumps of the process (typical example: M = n in 1D and M = 0.3n in 2D, where n is the number of equations), and the time step is determined automatically as the sum of the lengths of all jump intervals since the previous step (which are in turn automatically adapted, by the nature of the jump process). As in the situation with fixed time intervals, we can either use as predictor only the simulated Markov jump process X̃(·), or its Heun iterates X̂(·). For the RK2 method we thus consider simply the process or its Heun iterate at the time moment when M jumps have been performed. The formula (4) yields the exact integral of the corresponding linear interpolant. For the RK3 and RK4 methods we have to compute the intermediate k_i's, now related to the number M of jumps: after M/2 jumps in the case of RK3, and after M/3 and 2M/3 jumps in the case of RK4. But now, since the time moments are not necessarily equidistant (not even approximately so), we are obliged to use an exact computation of the interpolation polynomial and of its integral.
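This exact integration step can be written compactly with a polynomial fit; a sketch for a single component (illustrative names; nodes_t are the times reached after 0, M/2 and M jumps, respectively after 0, M/3, 2M/3 and M jumps):

import numpy as np

def integrate_interpolant(nodes_t, nodes_k, t0, t1):
    """Exactly integrate over [t0, t1] the polynomial interpolating (nodes_t, nodes_k).

    nodes_t : the (not necessarily equidistant) intermediate time moments
    nodes_k : the corresponding stage values of one component, e.g. F_j(X_hat)
    For very small node spacings the divided differences hidden in the fit suffer
    from the rounding effects discussed in the text.
    """
    degree = len(nodes_t) - 1
    coeffs = np.polyfit(nodes_t, nodes_k, degree)   # exact interpolation polynomial
    antiderivative = np.polyint(coeffs)
    return np.polyval(antiderivative, t1) - np.polyval(antiderivative, t0)

# one adaptive RK3-type step for a single component:
# x_new = x_old + integrate_interpolant([t, t_half, t_end], [k1, k2, k3], t, t_end)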
The drawbacks of this approach were presented above, and our experience showed that the adaptive RK4 method, although theoretically it should be the most precise, is impracticable due to the mentioned rounding errors. Nevertheless, the adaptive RK3 method functions well and is in principle slightly better than the adaptive RK2 method, except in the range where the time intervals after M jumps become too small (due to large values of N considered in the stochastic scheme). With increasing values of N, the convergence of the RK2 method improves, while the RK3 method, for N larger than a threshold value, remains at the same error level due to the rounding effects. Another aspect of the adaptive RK3 method is the handling of the results when reaching the maximal computation time t_max. If this occurs after computing k_2, that is, after M/2 jumps, then we simply compute k_3 at the stopping time of the process and take the usual approach. However, if we reach t_max with fewer than M/2 jumps, then we simply perform an RK2 step for this last time interval.

V. NUMERICAL EXAMPLES

We illustrate the application of several stochastic, but also deterministic, methods on the test equation

u_t = Δu + (5 e^δ / δ) (2 − u) exp(−δ/u),   (8)

with initial condition u_0 ≡ 1, either on (0, 1) with boundary conditions ∂_ν u(0) = 0 and u(1) = 1, or on the square (0, 1)^2 with boundary conditions ∂_ν u = 0 if x = 0 or y = 0 and u = 1 if x = 1 or y = 1.
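For orientation, a minimal sketch of a 1D finite-difference right-hand side obtained from (8) is given below; the grid handling and names are our illustrative choices, not necessarily the discretization used in the paper. The Neumann condition at x = 0 is imposed via a mirror point and the Dirichlet value u = 1 at x = 1 enters the last equation.

import numpy as np

def make_rhs_1d(n=400, delta=30.0):
    """Finite-difference discretization of u_t = u_xx + (5 e^delta / delta)(2-u) exp(-delta/u)
    on (0,1) with du/dx(0) = 0 and u(1) = 1; returns F for the ODE system u' = F(u)."""
    dx = 1.0 / n
    D = 5.0 * np.exp(delta) / delta

    def F(u):
        lap = np.empty_like(u)
        lap[0] = 2.0 * (u[1] - u[0]) / dx**2            # Neumann at x = 0 (mirror point)
        lap[1:-1] = (u[:-2] - 2.0 * u[1:-1] + u[2:]) / dx**2
        lap[-1] = (u[-2] - 2.0 * u[-1] + 1.0) / dx**2   # Dirichlet value u = 1 at x = 1
        return lap + D * (2.0 - u) * np.exp(-delta / u)
    return F

# u0 = np.ones(400); F = make_rhs_1d(); du = F(u0)      # initial condition u ≡ 1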

Fig. 2. The solution u(t, x) of (8) in one space dimension for δ = 30, at times t = 0.239, ..., 0.244.

Fig. 3. Efficiency comparison in 1D for δ = 20, n = 400.

This is a standard benchmark problem which requires a high time resolution [6]. Its solution in 1D is plotted in Fig. 2. The variable u denotes a temperature which increases gradually up to a critical value when ignition occurs (in this example at time t = 0.240), resulting in a fast propagation of a reaction front towards the right end of the interval. Due to the very high speed and the steepness of the front, a very precise time resolution is crucial in any numerical approximation of this problem. By a finite difference discretization with spatial mesh size of 1/400 we obtain a system of ODEs, for which we compute the solution up to a final time shortly after ignition in the case δ = 30 and up to t = 0.27 in the case δ = 20. Note that the larger the value of δ, the larger the stiffness of the problem. As error measure we consider the supremum norm of the difference to a reference solution vector u at the end of the computational time interval. The reference solution is computed in MATLAB with the highest possible degree of precision by the standard solver ode45. As an alternative we test also the stiff solver ode15s. For δ = 20 the CPU times were 18 seconds for ode45 and less than 2 seconds for ode15s; for δ = 30 the CPU times were 36 seconds and 14 seconds for ode45 and ode15s respectively. In both cases the two solutions differ only slightly. Since the solver ode15s has a lower convergence order, we consider as reference solution the results of ode45, which are much closer (O(10^{-13}) for δ = 20 and O(10^{-10}) for δ = 30) to the results produced by the solver ode113. The high performance of all these solvers relies on the powerful facilities of MATLAB, which involve computations performed in a vectorial manner. The codes have only a few program lines, with few function calls, and the pattern of the corresponding Jacobian is computed a priori, within a separate function.

Fig. 4. Efficiency comparison in 1D for δ = 30, n = 400.

Our algorithms were however programmed in an object-oriented manner in C++, and the computations involving vectors were performed sequentially, within usual loops. For this reason, in order to get a better comparison, we use also a self-programmed version of ode45, which is nothing but a time-adaptive Runge-Kutta method of order 5, the Dormand-Prince method. In contrast to MATLAB, here the updates of the involved vectorial quantities occur also within normal loops. In the legend this method is referred to as rk5. At the same time we use also the results produced by three standard Runge-Kutta solvers with fixed time steps, called rk2, rk3 and rk4, which correspond to the schemes (4), (5) and (7). Fig. 3 shows the results of the different methods for the case δ = 20, which is the easier case, with a less steep front. We note that the standard solvers rk2, rk3 and rk4 exhibit an unstable behavior: for a certain range of the time steps the error is extremely small, but if we take larger or smaller values, the error gets larger. However, the time-adaptive solver rk5 shows stability and a good precision.
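As an aside, a reference solution analogous to the MATLAB ode45/ode15s runs described above could be produced with SciPy; this is purely an illustrative sketch (the reference solutions in the paper are computed in MATLAB) and reuses the make_rhs_1d helper sketched earlier.

import numpy as np
from scipy.integrate import solve_ivp

# hypothetical reference run, analogous to the MATLAB ode45 computation
F = make_rhs_1d(n=400, delta=20.0)             # right-hand side sketched above
u0 = np.ones(400)                              # initial condition u ≡ 1
ref = solve_ivp(lambda t, u: F(u), (0.0, 0.27), u0,
                method="RK45", rtol=1e-12, atol=1e-12)   # method="BDF" for a stiff solver
u_ref = ref.y[:, -1]                           # reference vector at the final time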

We discuss now the performance of the stochastic solvers. Due to their poor performance in problems like this one, where an extremely precise time resolution is needed, we omit plotting the results of (dode) (i.e., the direct simulation of the Markov jump process, which is at the core of all the other schemes) and of RK2. Nevertheless, the solver (dode) combined with Heun iterations shows a similar performance to RK3 and RK4. The method RK2-adap behaves slightly better, while RK3-adap, after showing for a while a similar efficiency to the latter solver, exhibits afterwards the previously described phenomenon of rounding errors, which becomes visible through the loss of precision if we increase N, decreasing thus the length of the time intervals. The score obtained by the deterministic solver rk5 is slightly better than that of the previous group of stochastic methods. As the next group of stochastic solvers we consider RK3-Heun and RK4-Heun. That is, at the time moments prescribed by the Runge-Kutta methods we do not use the values produced by (dode), but we improve them by a Heun iteration. In this case, the precision is increased by taking larger N and smaller time steps h. Note that h cannot be increased arbitrarily (for deterministic methods this would yield instability, here larger errors), while for small h we have to take a sufficiently large N in order to obtain good results. This implies smaller jump times and therefore longer computation times. Nevertheless, if N is too large, for fixed h we do not observe an increased precision. There is therefore an optimal dependency between h and N, which can be determined experimentally in order to obtain the best efficiency. We note that the RK-Heun solvers are not cheap ones; their employment therefore makes sense only if we seek a high degree of precision, even at a higher computational cost. In any case, at the same (high) computational cost, they are definitely more precise than the previous groups of solvers. The last group of solvers consists of the adaptive Runge-Kutta-Heun solvers RK2-adap-Heun and RK3-adap-Heun, the latter in its first variant (where we stop for a Heun iteration after M/2 and M jumps). In a certain range they seem to exhibit a similar performance, clearly better than all the other solvers, while for large values of N the already discussed imprecision of RK3-adap-Heun shows up again. Nevertheless, since in the solver RK2-adap-Heun we do not use interpolation polynomials of high degree, this type of error does not appear, and overall this solver may be considered the test winner. Fig. 4 shows the results for δ = 30, which is a much more demanding problem than the previous one. We will mention only some differences compared to the former situation. The solvers RK3 and RK4 perform now similarly to rk5 and better than the (dode)-Heun solver or RK2-adap or RK3-adap. Within a certain range, the best efficiency is provided this time by RK3-adap, and RK2-adap surpasses it only for larger values of N. Within this range, even if RK2-adap is again the best one, its efficiency is comparable to that of RK3-adap, RK3-Heun and RK4-Heun.

Fig. 5. 1D for δ = 20, n = 1000.

Fig. 6. 1D for δ = 30, n = 1000.

We perform next computations for the smaller spatial discretization step of 1/1000. In order to obtain a good reference solution we use the solver RK2-adap-Heun with a large value of N. The reason for this choice is that in this case the MATLAB solvers reach their limits. The solver ode15s with the minimal admissible value of the parameter RelTol is still fast but, by its construction, it is not a high-precision solver.
For δ = 20 we can use the solver ode113 with the smallest possible value RelTol = 10^{-10} (due to memory reasons), the difference between the two computed solutions indicating that this is virtually the best precision which can be reached by the MATLAB solvers. For δ = 30 we can use ode113 only with RelTol larger than 10^{-5}, with a correspondingly larger difference to the solution delivered by ode15s. The results of the numerical experiments are plotted in Fig. 5 and 6. We compare here only stochastic solvers, all based on Heun iterations in different variants: dode-Heun and the adaptive schemes RK2-adap-Heun and RK3-adap-Heun.

As deterministic solver in this test we use the standard one, rk5. We note that this scheme performs similarly to the stochastic dode-Heun solver, and in a moderate precision range both of them are faster than the adaptive stochastic RK solvers. Nevertheless, if higher precision is required (for example to compute a good reference solution), the latter solvers are more performant. RK2-adap-Heun turns out to be again a robust solver with a good precision. We observe the known problems caused by rounding errors in the case of the solver RK3-adap-Heun: although in a certain precision range this solver performs slightly better than the other stochastic Runge-Kutta solver, by increasing the precision parameters above a certain level the precision of the computed solutions does not improve. However, this is not the case for the solver RK2-adap-Heun. We note also that by using this scheme we can obtain a higher precision than by using the solvers available in MATLAB. The next figures depict the results in the case of the 2D problem for δ = 30, computed up to the final time t_max.

Fig. 7. Efficiency comparison in 2D for δ = 30, n = 120×120.

Fig. 8. Efficiency comparison in 2D for δ = 30, n = 200×200.

Fig. 7 corresponds to the spatial discretization step of 1/120 (reference solution computed by ode113), while Fig. 8 corresponds to a step of 1/200 (reference solution computed by RK2-adap-Heun). We compare the same solvers as in the previous example, together with the MATLAB solver ode15s and the standard Runge-Kutta method rk4 (only for the spatial discretization step of 1/120). We note that in the case of this difficult problem, with a large number of equations, the performance of this solver is poor compared to the others. Concerning the stochastic solvers, the picture is almost the same as before. The dode-Heun solver performs similarly to the deterministic rk5 method. If one needs a higher precision, the stochastic solvers RK2-adap-Heun and RK3-adap-Heun are more performant. In both cases we considered also the results of the MATLAB solver ode15s, set up to the maximal possible precision. Due to the fast MATLAB codes, based internally on strong parallelization, the ODE solvers available in this software are generally very fast. Nevertheless, for difficult problems (as in the 1D case with a very small discretization step of 1/1000, or in the 2D case for steps of 1/200 or smaller), the stochastic solvers presented in this paper perform better in the range of high precision.

VI. CONCLUSIONS

The conclusion of these experiments is that by using the Heun and/or Runge-Kutta principle, the performance of the stochastic direct simulation method can be enhanced considerably. The stochastic solvers presented here are comparable to deterministic ones, at least in a precision range relevant for PDEs. Within the framework of all Runge-Kutta solvers of stochastic type, we distinguish the time-adaptive solver RK2-adap-Heun, which shows the best all-round performance, either if we need a good precision at a mostly cheap computational cost, or if we need a very high precision, even at the price of an expensive computation.

REFERENCES

[1] S. Ethier, T.G. Kurtz, Markov Processes: Characterization and Convergence, Wiley, 1986.
[2] D. Gillespie, A General Method for Numerically Simulating the Stochastic Time Evolution of Coupled Chemical Reactions, J. Comp. Phys. 22 (1976).
[3] D. Gillespie, Stochastic Simulation of Chemical Kinetics, Annu. Rev. Phys. Chem. 58 (2007).
[4] F. Guiaş, Direct simulation of the infinitesimal dynamics of semi-discrete approximations for convection-diffusion-reaction problems, Math. Comput. Simulation 81 (2010).
[5] F. Guiaş, P. Eremeev, Improving the stochastic direct simulation method with applications to evolution partial differential equations, Applied Mathematics and Computation 289 (2016).
[6] W. Hundsdorfer, J.G. Verwer, Numerical Solution of Time-Dependent Advection-Diffusion-Reaction Equations, Springer Verlag, Berlin, 2003.
[7] T.G. Kurtz, Limit theorems for sequences of jump Markov processes approximating ordinary differential processes, J. Appl. Prob. 8 (1971).


Cubic Functions: Local Analysis Cubic function cubing coefficient Capter 13 Cubic Functions: Local Analysis Input-Output Pairs, 378 Normalized Input-Output Rule, 380 Local I-O Rule Near, 382 Local Grap Near, 384 Types of Local Graps

More information

Bob Brown Math 251 Calculus 1 Chapter 3, Section 1 Completed 1 CCBC Dundalk

Bob Brown Math 251 Calculus 1 Chapter 3, Section 1 Completed 1 CCBC Dundalk Bob Brown Mat 251 Calculus 1 Capter 3, Section 1 Completed 1 Te Tangent Line Problem Te idea of a tangent line first arises in geometry in te context of a circle. But before we jump into a discussion of

More information

The total error in numerical differentiation

The total error in numerical differentiation AMS 147 Computational Metods and Applications Lecture 08 Copyrigt by Hongyun Wang, UCSC Recap: Loss of accuracy due to numerical cancellation A B 3, 3 ~10 16 In calculating te difference between A and

More information

EOQ and EPQ-Partial Backordering-Approximations

EOQ and EPQ-Partial Backordering-Approximations USING A ONSTANT RATE TO APPROXIMATE A LINEARLY HANGING RATE FOR THE EOQ AND EPQ WITH PARTIAL BAKORDERING David W. Pentico, Palumo-Donaue Scool of Business, Duquesne University, Pittsurg, PA 158-18, pentico@duq.edu,

More information

Lines, Conics, Tangents, Limits and the Derivative

Lines, Conics, Tangents, Limits and the Derivative Lines, Conics, Tangents, Limits and te Derivative Te Straigt Line An two points on te (,) plane wen joined form a line segment. If te line segment is etended beond te two points ten it is called a straigt

More information

1 Calculus. 1.1 Gradients and the Derivative. Q f(x+h) f(x)

1 Calculus. 1.1 Gradients and the Derivative. Q f(x+h) f(x) Calculus. Gradients and te Derivative Q f(x+) δy P T δx R f(x) 0 x x+ Let P (x, f(x)) and Q(x+, f(x+)) denote two points on te curve of te function y = f(x) and let R denote te point of intersection of

More information

Solution for the Homework 4

Solution for the Homework 4 Solution for te Homework 4 Problem 6.5: In tis section we computed te single-particle translational partition function, tr, by summing over all definite-energy wavefunctions. An alternative approac, owever,

More information

The entransy dissipation minimization principle under given heat duty and heat transfer area conditions

The entransy dissipation minimization principle under given heat duty and heat transfer area conditions Article Engineering Termopysics July 2011 Vol.56 No.19: 2071 2076 doi: 10.1007/s11434-010-4189-x SPECIAL TOPICS: Te entransy dissipation minimization principle under given eat duty and eat transfer area

More information

Lecture 21. Numerical differentiation. f ( x+h) f ( x) h h

Lecture 21. Numerical differentiation. f ( x+h) f ( x) h h Lecture Numerical differentiation Introduction We can analytically calculate te derivative of any elementary function, so tere migt seem to be no motivation for calculating derivatives numerically. However

More information

Inf sup testing of upwind methods

Inf sup testing of upwind methods INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING Int. J. Numer. Met. Engng 000; 48:745 760 Inf sup testing of upwind metods Klaus-Jurgen Bate 1; ;, Dena Hendriana 1, Franco Brezzi and Giancarlo

More information

Optimal parameters for a hierarchical grid data structure for contact detection in arbitrarily polydisperse particle systems

Optimal parameters for a hierarchical grid data structure for contact detection in arbitrarily polydisperse particle systems Comp. Part. Mec. 04) :357 37 DOI 0.007/s4057-04-000-9 Optimal parameters for a ierarcical grid data structure for contact detection in arbitrarily polydisperse particle systems Dinant Krijgsman Vitaliy

More information

Math 2921, spring, 2004 Notes, Part 3. April 2 version, changes from March 31 version starting on page 27.. Maps and di erential equations

Math 2921, spring, 2004 Notes, Part 3. April 2 version, changes from March 31 version starting on page 27.. Maps and di erential equations Mat 9, spring, 4 Notes, Part 3. April version, canges from Marc 3 version starting on page 7.. Maps and di erential equations Horsesoe maps and di erential equations Tere are two main tecniques for detecting

More information

Lecture 10: Carnot theorem

Lecture 10: Carnot theorem ecture 0: Carnot teorem Feb 7, 005 Equivalence of Kelvin and Clausius formulations ast time we learned tat te Second aw can be formulated in two ways. e Kelvin formulation: No process is possible wose

More information