
Splitting Methods for Mixed Hyperbolic-Parabolic Systems

Alf Gerisch, David F. Griffiths†, Rüdiger Weiner, and Mark A. J. Chaplain†

Institut für Numerische Mathematik, Fachbereich Mathematik und Informatik, Martin-Luther-Universität Halle-Wittenberg, Postfach, 06099 Halle (Saale), Germany. Fax: +49 (0)345 5574. E-mail: {gerisch, weiner}@mathematik.uni-halle.de

† Department of Mathematics, University of Dundee, Dundee, DD1 4HN, U.K. Fax: +44 (0)38 34556. E-mail: {dfg, chaplain}@mcs.dundee.ac.uk

In this paper we present two different approaches to the numerical solution of a system of coupled parabolic-hyperbolic partial differential equations (PDEs). The first approach uses a finite difference scheme based on a splitting of the PDE system. This method is computationally inexpensive but is prone to generating spurious oscillations in the solution in certain regions of the spatial domain. These lead to incorrect negative solution values. The second approach uses the method of lines, with special attention being given to preserving the positivity of the solution. Here we use splitting techniques for the solution of the resulting system of stiff ordinary differential equations. The performance of the two approaches on a model problem from mathematical biology is compared and discussed, and conclusions drawn.

Keywords: Mixed parabolic-hyperbolic PDE system, Finite difference approximation, Method of lines, Splitting methods

I. INTRODUCTION

We consider the hyperbolic-parabolic system of partial differential equations (PDEs)

    ∂n/∂t = −∇·(n G1(c) ∇c) + g1(n, c)    (1.1a)

This manuscript appears as Numerical Analysis Report NA/86, University of Dundee, December 1998.

    ∂c/∂t = Δc − c + g2(n, c)    (1.1b)

on the interior of the unit square (0, 1) × (0, 1) in space and for t ∈ (0, t_final] in time, for the unknown functions n(x, y, t) and c(x, y, t), where ∇ denotes the vector (∂/∂x, ∂/∂y) and Δ is the Laplace operator. The quantity n is convected by a velocity field which depends nonlinearly on the quantity c and its gradient. For G1(c) > 0, n will be convected towards areas of larger c values. The quantity c is the solution of the diffusion equation with nonlinear source term (1.1b) in this case but might, in other applications, be described by a differential equation of a different type (for example, an ordinary differential equation (ODE)). Besides this coupling of the equations, there is also a coupling of the system of PDEs via the source terms g1 and g2. The functions G1, g1 and g2 are prescribed functions of n and c. The equations may be termed a "reaction-diffusion-convection" system. The system of PDEs is supplied with initial conditions

    n(x, y, 0) = n0(x, y),  c(x, y, 0) = c0(x, y)    (1.2)

and appropriate boundary conditions consisting of one condition for c at every point of the boundary and Dirichlet boundary conditions for n on inflow boundaries. There are no boundary conditions for n on outflow boundaries. In this paper we will concentrate on the following boundary conditions:

    n(1, y, t) = n_r(y)  (assuming inflow on the right boundary)    (1.3)

and

    c(0, y, t) = c_l(y),  c(1, y, t) = c_r(y),  c_y(x, 0, t) = c_y(x, 1, t) = 0.    (1.4)

The functions n0, c0, n_r, c_l and c_r are given such that initial and boundary conditions are consistent. The algorithms presented here have to be adjusted slightly for other boundary conditions. Problems of this or similar kind, and our motivation to consider them, arise for instance in biomathematical models describing tumour-induced angiogenesis [5, ]. Our test problem is such a model and is derived and described in greater detail in [5, 8]. Example I.1.
Equations (1.1a, 1.1b) with

    G1(c) = 1,    (1.5)

    g1(n, c) = n(1 − n) max{0, c − c⋆} − n,    (1.6)

    g2(n, c) = −nc + c,    (1.7)

initial conditions

    n0(x, y) supported on 0.1 ≤ x ≤ 0.95 and concentrated in narrow bands around y_i = 0.1, 0.36, 0.5, 0.64, 0.8, and zero otherwise,    (1.8)

    c0(x, y) = cos(πx/2),    (1.9)

and boundary conditions defined as

    n_r(y) = n0(1, y),    (1.10)

    c_l(y) = 1,  c_r(y) = 0,    (1.11)

and parameter values (typical for the application [5])

    = , = 4, = , = 0.7, = , = , c⋆ = 0.1.    (1.12)

There is a variety of approximate solution methods for pure hyperbolic or pure parabolic PDEs, and we make use of these by decoupling the system (1.1a, 1.1b) through splitting techniques. In particular, we treat the hyperbolic equation (or parts arising from it) with explicit methods and the parabolic equation implicitly.

In Section II, a fully discrete splitting method is proposed. It is based on a splitting of the PDE system and solves in turn the easier subproblems (1.1a) for n and (1.1b) for c (independently of each other) with small time steps. ADI methods combined with Strang splitting are then used for the solution of the parabolic subproblem, and MacCormack's method combined with Strang splitting is the main tool for advancing the hyperbolic equation. The time step size will be selected such that the Courant number in the hyperbolic subproblem satisfies a stability condition.

In Section III, we follow a method of lines (MOL) approach. Here we consider the case that n and c describe concentrations or densities and, as such, are always non-negative. We therefore expect the functions G1, g1 and g2 and the boundary conditions to be of such a structure that the exact solution of the system (1.1a, 1.1b) is non-negative for all times t > 0 provided the initial conditions for n and c are non-negative. The first and second order spatial derivatives of c in the system are approximated by standard second order central differences, but the convection operator in (1.1a) is approximated by a flux-limited second order discretization similar to the one proposed in []. The aim of the limiting procedure is that the non-negativity property of the PDE's solution is carried over to the solution of the ODE system obtained by the spatial approximation.
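The effect the limiter is designed to prevent can already be seen for constant-coefficient advection in one dimension. The sketch below is our own illustration (it does not reproduce the flux limiter referred to above): the unlimited second order Lax-Wendroff scheme undershoots below zero at steep gradients, while first order upwinding writes each new value as a convex combination of non-negative old values for Courant numbers C ≤ 1 and therefore preserves non-negativity. The periodic grid and square pulse are assumptions made for the demonstration.

```python
import numpy as np

N, C, steps = 200, 0.5, 40          # grid points, Courant number, time steps
u0 = np.zeros(N)
u0[50:100] = 1.0                    # non-negative square pulse

def lax_wendroff(u):
    """Second order scheme: oscillatory at steep gradients (can go negative)."""
    up, um = np.roll(u, -1), np.roll(u, 1)
    return u - 0.5 * C * (up - um) + 0.5 * C**2 * (up - 2 * u + um)

def upwind(u):
    """First order scheme: convex combination for 0 <= C <= 1, hence positivity-preserving."""
    return (1 - C) * u + C * np.roll(u, 1)

u_lw, u_up = u0.copy(), u0.copy()
for _ in range(steps):
    u_lw, u_up = lax_wendroff(u_lw), upwind(u_up)

print(u_lw.min())   # negative: spurious undershoots of the unlimited scheme
print(u_up.min())   # >= 0: upwinding keeps the solution non-negative
```

The limited discretization used later aims to combine the second order accuracy of the first scheme with the sign-preservation of the second.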
The result is a large stiff system of ODEs which is solved by splitting the right hand side into the terms representing the convection and the diffusion/reaction of the underlying PDE system. Within this splitting, explicit Runge-Kutta methods (RKMs) are used to solve the resulting systems of ODEs with the right hand side from the convection discretization, and implicit schemes are employed for ODE systems with the right hand side representing diffusion and reaction (these are themselves splitting schemes). It is shown that the method is second order in time. Section IV contains a discussion and comparison of the results of the two different algorithms applied to the test problems.

An equidistant linear grid covers the spatial domain, having M intervals in each spatial direction with grid spacing h = 1/M. The index i refers always to grid values along the y-axis and the index j always to grid values along the x-axis. We define x_j = jh, y_i = ih and denote the continuous-time approximations to n(x_j, y_i, t) and c(x_j, y_i, t) by N_ij(t) and C_ij(t), respectively. The dependence on t will be omitted if it is clear from the context. The time scale will be covered by an irregular grid with grid spacing τ_k defined adaptively by an appropriate step size control, which generates grid points t_{k+1} = t_k + τ_k for k = 0, 1, … and t_0 = 0. In the following, the variable time step size τ_k will be denoted by τ (without subscript). N^k_ij and C^k_ij denote the point approximations to n(x_j, y_i, t_k) and c(x_j, y_i, t_k), respectively. The index or superscript k always refers to points on the time scale. For a given grid function U_ij, the standard second order central difference operator in the x-direction, δ²_x, is defined as

    δ²_x U_ij = U_{i,j+1} − 2 U_ij + U_{i,j−1}.    (1.13)

The central difference (δ_x), forward difference (Δ_x) and backward difference (∇_x) operators are defined as

    δ_x U_ij = U_{i,j+1} − U_{i,j−1},  Δ_x U_ij = U_{i,j+1} − U_ij,  ∇_x U_ij = U_ij − U_{i,j−1},    (1.14)

respectively. The operators δ²_y, δ_y, Δ_y and ∇_y for the y-direction are defined in a similar manner.

II. THE FINITE DIFFERENCE APPROACH

The general idea of this approach is to solve the system (1.1a, 1.1b) by advancing each of the solutions n and c independently of each other (but in turn) by small time steps.

FD Algorithm

    k = 0; t_k = 0;                                    /* Initialization */
    C^k_ij = c0(x_j, y_i),  (i = 0, 1, …, M; j = 0, 1, …, M−1);
    N^k_ij = n0(x_j, y_i),  (i = 0, 1, …, M; j = 0, 1, …, M−1);
    while (t_k < t_final) do                           /* Main loop */
        (1) Compute C^{k+1}_ij (i = 0, 1, …, M; j = 1, …, M−1) from C^k_ij using (1.1b) with n(x_j, y_i, t) fixed as N^k_ij;
        (2) Compute N^{k+1}_ij (i = 0, 1, …, M; j = 0, 1, …, M−1) from N^k_ij using (1.1a) with c(x_j, y_i, t) = C^k_ij or = C^{k+1}_ij as required by the method employed;
        (3) t_{k+1} := t_k + τ; k := k + 1; compute τ for the next step;
    end while;

End of FD Algorithm.

In the following subsections, we explain the steps (1) to (3) in greater detail.

A. Advancing the Solution of the Parabolic Equation

We consider the numerical solution of equation (1.1b) from time t_k to t_{k+1} with a time step τ. The grid values N^k_ij and C^k_ij of n and c at time t_k are given, and we fix the value of n(x_j, y_i, t) in time to N^k_ij in (1.1b), such that a single PDE for the unknown function c results:

    ∂c/∂t = Δc − c + g2(N^k_ij, c).    (2.1)

To avoid the time step size being restricted by diffusion, an implicit method is used to compute the grid values C^{k+1}_ij of c at time t_{k+1}. The spatial derivatives in (2.1) are approximated by standard second order central differences. If the resulting large nonlinear equation system is solved by a method which makes use of the Jacobian of the system, then the resulting linear algebraic systems have a bandwidth of M and will therefore be expensive to solve.
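The bandwidth claim can be checked directly: ordering the grid unknowns row by row, the five-point discretization couples each unknown to its neighbour one grid row away, so nonzero entries sit a full row-length off the diagonal. A small sketch (our own construction, not the paper's code), with m unknowns per spatial direction:

```python
import numpy as np

def second_diff(m):
    """1D second-difference matrix (delta^2 on m interior points, Dirichlet ends)."""
    return (np.diag(-2.0 * np.ones(m))
            + np.diag(np.ones(m - 1), 1)
            + np.diag(np.ones(m - 1), -1))

def bandwidth(A):
    """Largest |row - column| distance of a nonzero entry."""
    rows, cols = np.nonzero(A)
    return int(np.abs(rows - cols).max())

m = 6                                # unknowns per spatial direction
D, I = second_diff(m), np.eye(m)
A = np.kron(I, D) + np.kron(D, I)    # 2D five-point operator, row-by-row ordering

print(bandwidth(D))   # 1: a one-dimensional sweep needs only a tridiagonal solve
print(bandwidth(A))   # m: the coupled 2D system is much more expensive to factor
```

This is exactly why the ADI methods below replace the 2D solve by sequences of tridiagonal (bandwidth 1) solves.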
We will therefore use an ADI method for the approximation of (2.1) to reduce the bandwidth. This faces us with the new problem of how to include the source term in the scheme. Our proposed solution is to apply a Strang splitting [6, p86] such that the right hand side of (2.1) is split into the two terms Δc − c and g2(N^k_ij, c). This kind of splitting requires the solution of equation (2.1) without the source term g2(N^k_ij, c), which

is equivalent to

    ∂s/∂t = Δs,  where s(x, y, t) = c(x, y, t) e^(t − t_k).    (2.2)

The values of s at time t_k are those of c at t_k, but the boundary conditions for s are those of c multiplied by e^(t − t_k). The procedure we have just described for stage (1) of the FD Algorithm is summarized in

FD Subalgorithm (1)

    (1.1) Solve c_t = g2(N^k_ij, c) with initial condition c(x_j, y_i, t_k) = C^k_ij from t_k to t_k + τ/2 with a method of consistency order (at least) one in time and denote the result by C*_ij. (We use the implicit Euler method.);
    (1.2) Solve s_t = Δs with initial condition S⁰_ij = C*_ij and boundary conditions for s as those of c multiplied by e^(t − t_k), from t_k to t_k + τ, by an ADI method with second order accuracy in space and at least first order in time, and denote the result by S*_ij;
    (1.3) Rescale: Ĉ_ij = e^(−τ) S*_ij;
    (1.4) Solve c_t = g2(N^k_ij, c) with initial condition c(x_j, y_i, t_k + τ/2) = Ĉ_ij from t_k + τ/2 to t_k + τ with a method of consistency order (at least) one in time and denote the result by C^{k+1}_ij. (We use the implicit Euler method.);

End of FD Subalgorithm (1).

Remark. The implicit Euler step C*_ij = C^k_ij + (τ/2) g2(N^k_ij, C*_ij) in step (1.1), where N^k_ij is used as a constant (in time) approximation to n(x_j, y_i, t), is the same as a step with the first order one-stage Radau-IA method [8, p4] applied to the problem with a time-continuous approximation to n(x_j, y_i, t). In step (1.4), we would prefer to use a better approximation to n(x_j, y_i, t + τ/2) than N^k_ij, but these values are not given to us. The equation in the implicit Euler steps (1.1) and (1.4) for the source term g2(n, c) defined in Example I.1 can be solved exactly.

Lemma II.1. The FD Subalgorithm (1) for the solution of equation (2.1) has a local accuracy of O(τ² + τh²).

Proof. At least first order in time is used in each step, and second order in space. In step (1.2)
of the algorithm we have to solve the following problem from t_k to t_k + τ:

    ∂s/∂t = Δs = s_xx + s_yy,    (2.3a)
    s(x_j, y_i, t_k) = C*_ij  (at all grid points),    (2.3b)
    s(0, y, t) = c_l(y) e^(t − t_k),  s(1, y, t) = c_r(y) e^(t − t_k),  s_y(x, 0, t) = s_y(x, 1, t) = 0.    (2.3c)

We use the Peaceman-Rachford ADI method for the solution of (2.3a-2.3c). It is second order accurate in time and space, and unconditionally stable (see [, p65]). This method can be regarded as an approximate factorization of the two-dimensional Crank-Nicolson method [, p68]. The method can be written as

    (1 − (r/2) δ²_x) S^{1/2}_ij = (1 + (r/2) δ²_y) C*_ij,    (2.4a)

    (1 − (r/2) δ²_y) S*_ij = (1 + (r/2) δ²_x) S^{1/2}_ij,    (2.4b)

where the mesh ratio r = τ/h². The boundary conditions (2.3c) prescribe the values S^{1/2}_{i0} = c_l(y_i) e^(τ/2) and S^{1/2}_{iM} = c_r(y_i) e^(τ/2) (and this choice guarantees an overall second order method, as shown in []). The remaining unknowns in (2.4a) are those with indices i = 0, 1, …, M and j = 1, 2, …, M−1. A closer look at (2.4a) reveals that equations with different indices i on the left have no common unknowns, and the linear system (2.4a) splits up into an (M−1)-dimensional tridiagonal system of linear equations for each i. This system reads, for i = 1, 2, …, M−1,

    tridiag(−r/2, 1 + r, −r/2) (S^{1/2}_{i1}, S^{1/2}_{i2}, …, S^{1/2}_{i,M−1})^T = (1 + (r/2) δ²_y) (C*_{i1}, C*_{i2}, …, C*_{i,M−1})^T + b_i    (2.5)

with b_i of dimension M−1 containing the known boundary values from the left hand side,

    b_i = ((r/2) S^{1/2}_{i0}, 0, …, 0, (r/2) S^{1/2}_{iM})^T.    (2.6)

For i = 0 and i = M, two rows of fictitious grid points are introduced at y_{−1} and y_{M+1}, respectively. The no-flux boundary conditions at y = 0 and y = 1 are approximated by central differences δ_y/(2h), resulting in

    C*_{1j} − C*_{−1,j} = 0  and  C*_{M+1,j} − C*_{M−1,j} = 0.    (2.7)

These equations are used to eliminate the fictitious grid points in a standard way when imposing (2.4a) for i = 0 and i = M. This leads to linear systems with the same left hand side as in (2.5) and right hand sides defined by

    ((1 − r) C*_{01} + r C*_{11}, (1 − r) C*_{02} + r C*_{12}, …, (1 − r) C*_{0,M−1} + r C*_{1,M−1})^T + b_0

and

    ((1 − r) C*_{M1} + r C*_{M−1,1}, (1 − r) C*_{M2} + r C*_{M−1,2}, …, (1 − r) C*_{M,M−1} + r C*_{M−1,M−1})^T + b_M    (2.8)

for i = 0 and i = M, respectively. It is important to remark that the coefficient matrix of all linear systems is the same, and therefore only one LU factorization is necessary in order to solve all M + 1 systems. Most of the computational effort is devoted to constructing the right hand sides of the systems. Assembling these M + 1 vectors and the solution of the linear systems could be performed in parallel. All necessary values of S^{1/2}_ij are now computed and we can proceed with the solution of (2.4b).
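The reuse of one factorization for all M + 1 right hand sides can be sketched with the Thomas algorithm: the forward-elimination multipliers and pivots depend only on the matrix, so they are computed once and applied to every right hand side. The helper names below are our own, and the example matrix is the tridiag(−r/2, 1 + r, −r/2) of (2.5); this is an illustrative sketch, not the paper's implementation.

```python
import numpy as np

def thomas_factor(a, b, c):
    """LU factorization of tridiag(a, b, c); a[i] and c[i] are the sub-/super-diagonal
    entries of row i (a[0] and c[-1] unused). Returns multipliers m and pivots d."""
    n = len(b)
    m, d = np.zeros(n), np.array(b, dtype=float)
    for i in range(1, n):
        m[i] = a[i] / d[i - 1]
        d[i] = b[i] - m[i] * c[i - 1]
    return m, d

def thomas_solve(m, d, c, rhs):
    """Solve with a stored factorization: forward then backward substitution."""
    n = len(d)
    y = np.array(rhs, dtype=float)
    for i in range(1, n):                 # forward substitution with stored multipliers
        y[i] -= m[i] * y[i - 1]
    x = np.empty(n)
    x[-1] = y[-1] / d[-1]
    for i in range(n - 2, -1, -1):        # back substitution with stored pivots
        x[i] = (y[i] - c[i] * x[i + 1]) / d[i]
    return x

n, r = 7, 0.8
a = np.full(n, -r / 2); b = np.full(n, 1 + r); c = np.full(n, -r / 2)
m, d = thomas_factor(a, b, c)             # factor once ...
A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
rng = np.random.default_rng(0)
for _ in range(3):                        # ... reuse for every right hand side
    rhs = rng.standard_normal(n)
    assert np.allclose(thomas_solve(m, d, c, rhs), np.linalg.solve(A, rhs))
```

Each solve after the O(n) factorization costs only O(n) operations, which is what makes the per-row sweeps of the ADI method cheap.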
Equations of this system with different indices j on the left hand side are independent, and the system again splits up. The Dirichlet boundary conditions supply the values S*_{i0} = c_l(y_i) e^τ and S*_{iM} = c_r(y_i) e^τ, and we have to solve systems when j = 1, 2, …, M−1 is fixed. Fictitious grid points are introduced at the upper and lower

boundary of the domain to deal with the no-flux boundary conditions, and these are approximated by central differences δ_y/(2h) at the new time level, leading to

    S*_{1j} − S*_{−1,j} = 0  and  S*_{M+1,j} − S*_{M−1,j} = 0.    (2.9)

For each j = 1, 2, …, M−1, equation (2.4b) is employed for i = 0, 1, …, M and the fictitious grid points are eliminated with the equations just derived. Altogether, this results in the system

    tridiag(−r/2, 1 + r, −r/2), with first row (1 + r, −r, 0, …, 0) and last row (0, …, 0, −r, 1 + r), applied to (S*_{0j}, S*_{1j}, …, S*_{M−1,j}, S*_{Mj})^T = (1 + (r/2) δ²_x) (S^{1/2}_{0j}, S^{1/2}_{1j}, …, S^{1/2}_{M−1,j}, S^{1/2}_{Mj})^T.    (2.10)

The coefficient matrix is the same for all these systems, and the same comments as after the first step of the ADI scheme apply. Step (1.2) of the algorithm is now completed.

If the initial condition is smooth, then the Peaceman-Rachford ADI method generates very good solutions and can be used with large time steps. However, it was observed experimentally that the method requires smaller and smaller time steps in order to obtain an acceptable solution as the initial data becomes less and less smooth. The same is reported for the Crank-Nicolson method in [7]. There, it is advocated that the solution process should start with some steps of the implicit Euler method to smooth the data and then continue with Crank-Nicolson. This means, in the present context, that we should use the Backward Time Centred Space (BTCS) scheme to execute step (1.2) of the algorithm:

    (1 − r δ²_x − r δ²_y) S*_ij = C*_ij.    (2.11)

A simple approximate factorization (in order to reduce computational costs) of this is given by

    (1 − r δ²_x) S^{1/2}_ij = C*_ij,    (2.12)
    (1 − r δ²_y) S*_ij = S^{1/2}_ij.    (2.13)

This scheme is first order accurate in time [9, p3] and second order in space (because second order accurate approximations to the second order derivatives are used). It is also unconditionally stable [9, p33], and we refer to it as the implicit Euler ADI scheme in the following.
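The price of the approximate factorization is visible by multiplying out: (1 − r δ²_x)(1 − r δ²_y) = 1 − r δ²_x − r δ²_y + r² δ²_x δ²_y, so (2.12, 2.13) differ from (2.11) by the defect operator r² δ²_x δ²_y, which shrinks quadratically in r. A small numerical check (our own construction, on a toy Dirichlet grid):

```python
import numpy as np

def second_diff(m):
    """delta^2 on m interior points with homogeneous Dirichlet ends."""
    return (np.diag(-2.0 * np.ones(m))
            + np.diag(np.ones(m - 1), 1)
            + np.diag(np.ones(m - 1), -1))

def splitting_defect(r, m=8):
    """Norm of (1 - r dx2)(1 - r dy2) - (1 - r dx2 - r dy2) = r^2 dx2 dy2."""
    D, I = second_diff(m), np.eye(m)
    Dx, Dy, I2 = np.kron(I, D), np.kron(D, I), np.eye(m * m)
    unfactored = I2 - r * Dx - r * Dy              # BTCS operator, cf. (2.11)
    factored = (I2 - r * Dx) @ (I2 - r * Dy)       # implicit Euler ADI, cf. (2.12, 2.13)
    return np.linalg.norm(factored - unfactored)

# Halving r reduces the factorization defect by a factor of four (pure r^2 scaling).
print(splitting_defect(0.1) / splitting_defect(0.05))
```

The defect is consistent with the first order (in time) accuracy of the factored scheme quoted above.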
The tridiagonal systems (2.12, 2.13) split up in the same way as the stage equations of the Peaceman-Rachford scheme, but the right hand sides are much simpler. The coefficient matrices of the small systems are the matrices in (2.5) and (2.10), respectively, when r is replaced by 2r. There are no difficulties in constructing the right hand sides of (2.12), as these are just the values at the old time level plus the boundary terms b_i (replace r by 2r) from the left hand side, and it is not necessary to distinguish between i = 0, M

and other values of i. The Dirichlet boundary values of S^{1/2}_ij for j = 0 and j = M are set to c_l(y_i) e^τ and c_r(y_i) e^τ, respectively. Fictitious grid points are introduced for the solution of (2.13) in the same manner as for the solution of (2.4b). Experiments established that the implicit Euler ADI method is much less sensitive to non-smooth initial data than the Peaceman-Rachford ADI method, and larger time steps are possible while maintaining smooth solutions.

B. Advancing the Solution of the Hyperbolic Equation

We consider the numerical solution of equation (1.1a) from time t_k to t_{k+1} with a time step τ. The main ingredients for the solution of this problem will be MacCormack's method and, again, splitting schemes. A comparison was made in [8] between several different approaches for the solution of a one-dimensional analogue of our problem (1.1a-1.1b). It was found that a combination of MacCormack's method and splitting schemes worked well and was cost-effective, and therefore we will use an adaptation of this approach in two dimensions.

MacCormack's method is based on the second order accurate improved Euler method (Heun's method) [8, p5], defined by the Butcher array in Figure 1.

    0  |
    1  |  1
       | 1/2  1/2

FIG. 1. Butcher array of the improved Euler method.

It is best explained in the one-dimensional case (see [8]). Consider the ODE u′(t) = s(u(t)) with given initial data U^k (e.g. an approximation to u(t_k)). A step from t_k to t_{k+1} of the improved Euler method for this ODE can be written as

    Z = U^k + τ s(U^k),    (2.14a)
    W = U^k + τ s(Z),    (2.14b)
    U^{k+1} = (Z + W)/2.    (2.14c)

For a one-dimensional hyperbolic PDE (with velocity function v(x, t))

    ∂u(x, t)/∂t = −(v(x, t) u(x, t))_x,    (2.15)

we substitute s = −(vu)_x in the above scheme and approximate the spatial derivative in (2.14a) by first order forward differences Δ_x/h and the spatial derivative in (2.14b) by first order backward differences ∇_x/h (the application of forward and backward differences can be reversed).
The resulting scheme is MacCormack's method for equation (2.15) and is second order accurate in time and space [8, p]. The method is equivalent to the Lax-Wendroff scheme [6, p67] for the constant coefficient case v = const, and hence stable in the von Neumann sense for Courant numbers C = |v|τ/h satisfying C ≤ 1. An additional source term on the right of the hyperbolic equation (2.15) can be included by an averaging procedure without sacrificing the order of accuracy or stability [8, p]. The values of Z are a first order approximation (in time) to grid values of u(x, t_{k+1}) and therefore the boundary conditions of u at t_{k+1} apply. Furthermore, the values of v at

time t_{k+1} must be used in equation (2.14b) (otherwise the accuracy of the method drops to order one). The equation which we have to solve as part of step (2) of the FD Algorithm is a little more complicated and reads

    ∂n/∂t = −(n G1(c) c_x)_x,    (2.16)

and we also have to approximate the spatial derivative c_x. This is done by forward differences if the MacCormack stage equation contains a backward difference, and vice versa. This strategy guarantees an almost centred stencil of the method and results in the following scheme (in two-dimensional notation):

    Z_ij = N^k_ij − (τ/h) [ N^k_{i,j+1} G1(C^k_{i,j+1}) (C∇x)^k_{i,j+1} − N^k_ij G1(C^k_ij) (C∇x)^k_ij ],    (2.17a)
    W_ij = N^k_ij − (τ/h) [ Z_ij G1(C^{k+1}_ij) (CΔx)^{k+1}_ij − Z_{i,j−1} G1(C^{k+1}_{i,j−1}) (CΔx)^{k+1}_{i,j−1} ],    (2.17b)
    N^{k+1}_ij = (Z_ij + W_ij)/2,    (2.17c)

where we use the notation

    (C∇x)^k_ij = (C^k_ij − C^k_{i,j−1})/h,    (2.18)
    (CΔx)^k_ij = (C^k_{i,j+1} − C^k_ij)/h    (2.19)

for backward and forward difference approximations to c_x(x_j, y_i, t_k). The algorithm presented by (2.17a-2.17c) has a straightforward extension to two dimensions by including the approximations of the convection in the y-direction. Unfortunately, this approach results in an even greater restriction of the allowable time step size for reasons of stability, and leads to an unacceptable increase in computational cost. We circumvent this difficulty by using a three-term Strang type splitting of (1.1a) which uses one-dimensional hyperbolic equations as subproblems. With MacCormack's method in one dimension at hand, we are now ready to state the algorithm for step (2) of the FD Algorithm. Assuming that we are given N^k_ij, C^k_ij and C^{k+1}_ij as approximations of n(x_j, y_i, t_k), c(x_j, y_i, t_k) and c(x_j, y_i, t_k + τ), respectively, we perform a time step to obtain N^{k+1}_ij as an approximation to n(x_j, y_i, t_k + τ), the solution of equation (1.1a) at time t_k + τ.

FD Subalgorithm (2)

    (2.1) Solve ∂v(1)_ij/∂t = g1(v(1)_ij, C^k_ij), v(1)_ij(0) = N^k_ij, to obtain v(2)_ij(0) as an approximation to v(1)_ij(τ/2), for i, j = 0, 1, …, M;
    (2.2)
Solve v(2)_t = −(v(2) G1(C^k_ij)(c_x)^k_ij)_x with initial data v(2)_ij(0) from step (2.1) and constant boundary values at x = 1, employing MacCormack's method as described above, to obtain v(3)_ij(0) as an approximation to v(2)_ij(τ/2) (i = 0, 1, …, M and j = 0, 1, …, M−1);
    (2.3) Solve v(3)_t = −(v(3) G1(C^{k,k+1}_ij)(c_y)^{k,k+1}_ij)_y with initial data v(3)_ij(0)

from step (2.2) to obtain v(4)_ij(0) as an approximation to v(3)_ij(τ) (i = 0, 1, …, M and j = 0, 1, …, M−1), using MacCormack's method as described above but adjusted for the y-direction. The first stage of MacCormack's method uses approximations of c and c_y at the k-th time level (C^k_ij) whereas the second stage uses those at time level k+1 (C^{k+1}_ij);
    (2.4) Solve v(4)_t = −(v(4) G1(C^{k+1}_ij)(c_x)^{k+1}_ij)_x with initial data v(4)_ij(0) from step (2.3) to obtain v(5)_ij(0) as an approximation to v(4)_ij(τ/2) (i = 0, 1, …, M and j = 0, 1, …, M−1), employing MacCormack's method as described above. Constant boundary values at x = 1 are used, and the correct choice is discussed below;
    (2.5) Solve ∂v(5)_ij/∂t = g1(v(5)_ij, C^{k+1}_ij) with initial data v(5)_ij(0) from step (2.4) to obtain N^{k+1}_ij as an approximation to v(5)_ij(τ/2) and n(x_j, y_i, t_k + τ), for i = 0, 1, …, M and j = 0, 1, …, M−1;

End of FD Subalgorithm (2).

We cannot use the boundary values of n at x = 1 in the intermediate steps of the splitting algorithm, because the intermediate solutions are not related to these values. An approach to correct intermediate boundary values is given in [5, 4]. We require boundary values in steps (2.2) and (2.4) only, because steps (2.1) and (2.5) are initial value problems, and step (2.3) is pure convection in the y-direction where the y-boundaries are outflow boundaries, so no condition is imposed there. On x = 1 we will use constant boundary conditions in steps (2.2) and (2.4) because the conditions for n are time-independent. In step (2.2) we use the result of step (2.1) at x = 1 as boundary condition, because the source term g1 has an impact also on the boundary and this is just approximated by step (2.1). Similarly, g1 has an impact on the values along x = 1 in step (2.5), and the result of step (2.5) for x = 1 should be the correct boundary data of n. Therefore correct initial data for step (2.5) is required along x = 1. Our aim is v(5)(1, y, τ/2) = n_r(y), and we prescribe initial values v(5)(1, y, 0) such that we arrive at the desired result. We write the initial condition in terms of the final result and obtain

    v(5)(1, y, 0) = v(5)(1, y, τ/2) − (τ/2) v(5)_t(1, y, τ/2) + O(τ²)    (2.20)
                  = n_r(y) − (τ/2) g1(v(5)(1, y, τ/2), C^{k+1}_{iM}) + O(τ²)    (2.21)
                  ≈ n_r(y) − (τ/2) g1(n_r(y), C^{k+1}_{iM}),    (2.22)

where we have used a Taylor expansion of the solution, the boundary condition, the differential equation from step (2.5), and finally truncated the series to obtain an approximation for v(5)(1, y, 0). This approximation of the initial condition of step (2.5) along x = 1 is, in fact, also the boundary condition at the final time of step (2.4) and, because we chose constant boundary conditions, also at the initial time (hence defining the initial condition of step (2.4) along x = 1). The decision of whether to solve the x- or y-convection with full or half time steps in the splitting algorithm is important if we are to obtain the largest allowable time step size subject to stability, as we will see in the next section. If the order of x- and y-convection is reversed, then the boundary values at x = 1 have to be adjusted accordingly.
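As a sanity check of the building block used in the convection steps above, the following sketch (our own, not the paper's code) applies MacCormack's method to the constant-velocity case of (2.15) on a periodic grid; periodicity is a simplification for the test, whereas the paper uses the inflow/outflow conditions just discussed. For v = const the predictor-corrector pair reproduces the Lax-Wendroff update, and the error against the exact translated profile drops by about a factor of four when h and τ are halved, consistent with second order accuracy.

```python
import numpy as np

def maccormack_step(u, v, tau, h):
    """One MacCormack step for u_t = -(v u)_x, constant v, periodic grid."""
    f = v * u
    z = u - tau / h * (np.roll(f, -1) - f)       # predictor: forward difference, cf. (2.14a)
    fz = v * z
    w = u - tau / h * (fz - np.roll(fz, 1))      # corrector: backward difference, cf. (2.14b)
    return 0.5 * (z + w)                         # average, cf. (2.14c)

def advection_error(N, v=1.0, C=0.5, T=0.5):
    """Max error after advecting sin(2 pi x) for time T at Courant number C."""
    h = 1.0 / N
    tau = C * h / v
    x = np.arange(N) * h
    u = np.sin(2 * np.pi * x)
    for _ in range(int(round(T / tau))):
        u = maccormack_step(u, v, tau, h)
    return np.abs(u - np.sin(2 * np.pi * (x - v * T))).max()

e_coarse, e_fine = advection_error(100), advection_error(200)
print(e_coarse, e_fine)   # error ratio close to 4: second order in space and time
```

With C ≤ 1 the step is von Neumann stable; pushing C above 1 in this sketch makes the error grow without bound, which motivates the Courant-number-based step size control of the next section.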
Our aim is v (5) (; y; =) = n r (y) and we prescribe initial values v (5) (; y; ) such that we arrive at the desired result. We write the initial condition in terms of the nal result and obtain v (5) (; y; ) = v (5) (; y; =) v(5) t (; y; =) + O( ) (.) = n r (y) g (v (5) (; y; =); C k+ im ) + O( ) (.) n r (y) g (n r (y); CiM k+ ); (.) where we have used a Taylor expansion of the solution, the boundary condition, the dierential equation from step (.5) and nally truncated the series to obtain an approximation for v (5) (; y; ). This approximation of the initial condition of step (.5) along x = is, in fact, also the boundary condition at the nal time of step (.4) and, because we chose constant boundary conditions, also at initial time (hence dening the initial condition of step (.4) along x = ). The decision of whether to solve in the x- or y-convection with full or half time steps in the splitting algorithm is important if we are to obtain the largest allowable time step size subject to stability, as we will see in the next section. If the order of x- and y-convection is reversed, then the boundary values at x = have to be adjusted accordingly.

C. Time Step Size Control Based on Courant Numbers

When MacCormack's method is applied to the advection equation u_t + a u_x = 0 with constant velocity a, a von Neumann stability analysis reveals that it is stable for Courant numbers satisfying C ≤ 1, where

C = |a| τ / h.   (2.13)

It is well known (and observed in [8]) that the method works best, in the sense of accuracy, for Courant numbers C close to 1. The velocity function used in steps (2.2)–(2.4) is not constant but space dependent. We therefore define the Courant number of our problem to be

C = max_{i,j} |G(C_ij^k)| max{ (1/2)|(c_rx)_ij^k|, |(c_ry)_ij^k| } τ / h   (2.14)

and select the step size τ so that

C ≤ C_max, where C_max ∈ (0, 1)   (2.15)

is a safety factor. The values of (c_rx)_ij^k and (c_ry)_ij^k are required for the next step of the FD algorithm anyway, so there is almost no computational overhead for the time step selection. If the more active direction of convection is treated with half time steps in step (2) of the FD algorithm (this is the x-direction in our description), then we obtain smaller values of C for a given pair h, τ. This, in turn, means that we can choose a larger τ without violating condition (2.15), which favours larger overall time steps.

D. Observations on the Implementation of the Scheme

We implemented the method derived in this section and applied it to Example 1. The results can be found in [8]. We used the implicit Euler ADI method for the solution of the parabolic equation (1.1b) because the source term in (1.1b) introduces a non-smoothness into the solution, and the Peaceman–Rachford ADI method then requires too small time steps. As reported in [8], a problem of small oscillations in the solution n was observed in regions where the gradient of n was steep. This led to negative values of n at some grid points, which in turn caused the solution to become unbounded at later times.
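The Courant-number step size control of Subsection C. amounts to one reduction over the grid per step. A minimal sketch in Python (the function name, argument names and default safety factor are illustrative assumptions, not from the paper; the x-speed enters with a factor 1/2 because the x-convection is advanced in half steps):

```python
import numpy as np

def cfl_time_step(speed_x, speed_y, h, c_max=0.9):
    """Largest time step tau keeping the Courant number below c_max.

    speed_x, speed_y: grid arrays of the local convection speeds in the
    x- and y-direction (stand-ins for |G(C)|*|c_x| and |G(C)|*|c_y|).
    The x-speed is halved because x-convection uses half time steps.
    """
    v = np.maximum(0.5 * np.abs(speed_x), np.abs(speed_y)).max()
    if v == 0.0:
        return np.inf  # no convection: this criterion imposes no restriction
    return c_max * h / v
```

Since the speed arrays are needed for the next convection step anyway, this selection adds essentially no cost, mirroring the observation in the text.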
It was found that this problem could be avoided by using sufficiently small time steps, but this increased the overall computational effort considerably. In order to overcome the difficulty of generating spurious negative values of n, in the next section we turn our attention to a method which preserves the positivity of the solution. Furthermore, we will aim for a method that is second order accurate in time.

III. THE METHOD OF LINES APPROACH

The method of lines (MOL) is a standard approach to the solution of time-dependent PDEs. The PDE is replaced by a large and, in general, stiff system of ODEs by approximating the spatial derivatives through finite differences. The solution of the ODE system is a time-continuous function for each spatial grid point and hence defines an approximation to the solution of the PDE restricted to the grid. Subsection A. describes the spatial approximation of (1.1a)–(1.1b). We assume in the following that n and c describe non-negative quantities and we want this property to

be preserved in the solution of the ODE system. Space discretizations may destroy this property and must therefore be carried out carefully. Our approach leads to a second order accurate semidiscretization away from extremal points of the solution. In Subsection B., a second order splitting scheme for the numerical solution of the ODE system is developed. It is specially designed with the ODE system obtained in Subsection A. in mind, and we address issues related to computational efficiency and positivity of the solution. Finally, Subsection C. presents numerical results of the MOL approach applied to Example 1.

A. Approximation of the Spatial Derivatives

As stated in the introduction, we consider the case that the solutions n and c of our PDE system (1.1a)–(1.1b) are non-negative for all times t. This property should carry over to the ODE system obtained by spatially discretizing the PDE, and we define

Definition. (Positive ODE system, positive semidiscretization) An ODE system u′(t) = f(u(t)), t ∈ [0, t_final], is called positive if its solution satisfies u(t) ≥ 0 for all t ∈ [0, t_final]. A semidiscretization of a given PDE (with non-negative solution) is called positive if it leads to a positive ODE system.

Lemma 3.1. The ODE system u′(t) = f(u(t)), t ∈ [0, t_final], with a continuously differentiable function f = (f_1, f_2, …, f_m) is positive if and only if, for any vector v ∈ ℝ^m and all i = 1, 2, …, m,

v_i = 0, v_j ≥ 0 for all j ≠ i  ⇒  f_i(v) ≥ 0.   (3.1)

Proof. See [, p6].

In the following, we look at the discretizations of the parabolic and of the hyperbolic equation separately.

1. The Parabolic Equation (1.1b)

The semidiscretization of a linear diffusion equation (with periodic boundary conditions and without source term) is considered in [, p8] with respect to positivity. The positivity requirement leads to an order barrier of two, and it is shown that the standard second order central difference operator leads to a minimal error coefficient.
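The positivity criterion of Lemma 3.1 is easy to verify for the standard central-difference diffusion operator. A small illustrative sketch (grid size and values are arbitrary; the zero Dirichlet ends are an assumption made for the example only):

```python
import numpy as np

def rhs_diffusion(v, h):
    """RHS of the semidiscrete 1-D diffusion equation u_t = u_xx with
    homogeneous Dirichlet ends, using second order central differences."""
    vp = np.pad(v, 1)  # zero boundary values
    return (vp[2:] - 2.0 * vp[1:-1] + vp[:-2]) / h**2

# Criterion of Lemma 3.1: if v_i = 0 and v_j >= 0 for j != i,
# then the i-th component of the RHS must be non-negative.
v = np.array([0.3, 0.0, 0.7])    # v_1 = 0, neighbours non-negative
print(rhs_diffusion(v, 0.1)[1])  # (0.3 + 0.7)/0.01 = 100.0 >= 0
```

The middle component has only non-negative off-diagonal contributions, so the criterion holds; it is exactly this structure that the order barrier discussed above protects.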
Guided by these results, Equation (1.1b) is discretized in space as

dC_ij(t)/dt = (1/h²) [ δ_x² C_ij(t) + δ_y² C_ij(t) ] - C_ij(t) + g_2(N_ij(t), C_ij(t))   (3.2)

for i = 0, 1, …, M and j = 1, 2, …, M-1, where δ_x² and δ_y² denote the second order central difference operators in x and y. The given boundary values of c are used on the left and right boundaries, and rows of fictitious grid points are introduced at y = -h and y = 1 + h in order to approximate the homogeneous Neumann boundary conditions at y = 0 and y = 1 with second order accuracy by central differences. This process leads to C_{-1,j} = C_{1,j} and C_{M+1,j} = C_{M-1,j} for all j = 1, 2, …, M-1. The second order accurate spatial discretization of (1.1b) by (3.2) is now fully defined.
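In code, the parabolic semidiscretization above is a five-point Laplacian with reflected ghost rows in y. A sketch under the stated boundary conventions (the array layout and the source-function parameter name are assumptions; the Dirichlet columns are simply held fixed):

```python
import numpy as np

def parabolic_rhs(C, N, h, source):
    """Second-order central-difference RHS: dC/dt = Laplacian(C) - C + source(N, C).

    Homogeneous Neumann conditions in y are enforced by reflecting the
    neighbouring rows (ghost rows C[-1] = C[1], C[M+1] = C[M-1]); the
    Dirichlet values in x are assumed to be stored in C[:, 0], C[:, -1].
    """
    Cp = np.pad(C, ((1, 1), (0, 0)), mode="reflect")  # ghost rows in y
    lap = np.zeros_like(C)
    lap[:, 1:-1] = (
        Cp[1:-1, 2:] - 2.0 * Cp[1:-1, 1:-1] + Cp[1:-1, :-2]   # x part
        + Cp[2:, 1:-1] - 2.0 * Cp[1:-1, 1:-1] + Cp[:-2, 1:-1]  # y part
    ) / h**2
    rhs = lap - C + source(N, C)
    rhs[:, 0] = rhs[:, -1] = 0.0  # Dirichlet columns held fixed
    return rhs
```

Note that `np.pad(..., mode="reflect")` pads with the row next to the edge, which is exactly the ghost-point identity used for the Neumann boundary above.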

Lemma 3.2. The ODE system defined by (3.2) is positive if the prescribed Dirichlet boundary values are non-negative, the source function g_2 is continuously differentiable, and g_2(n, 0) ≥ 0 for all possible values of n.

Proof. This result follows from Lemma 3.1.

Remark. The condition required by Lemma 3.2 is satisfied for the source function g_2 of Example 1 and for the given boundary conditions.

2. The Hyperbolic Equation (1.1a)

For the semidiscretization of Equation (1.1a), we will again focus on positivity of the resulting ODE system. A first order upwind space discretization would give us such a system, but it would also introduce a large amount of numerical diffusion which nonphysically damps any travelling wave solution. This diffusive effect is less pronounced on finer spatial meshes, but these lead to even larger ODE systems and hence to greater computational cost. On the other hand, higher order (linear) space discretizations are more accurate but prone to introducing "wiggles" in the solution, and possibly negative values, which we would like to avoid. A widely used approach for combining high order methods and oscillation-free solutions is that of flux limiters. We will follow the construction of a positive semidiscretization given in []. The approximation of two-dimensional convection is built from the one-dimensional setting

∂w(x, y, t)/∂t = -( v(x, y, t) w(x, y, t) )_x,  t ≥ 0, (x, y) ∈ (0, 1) × (0, 1),   (3.3)
w(x, y, 0) = w_0(x, y),   (3.4)
w(1, y, t) = w_1(y, t),   (3.5)

where the quantity w is convected in the x-direction only by the space and time dependent velocity field v. The function v(x, y, t) is assumed to be known for the following discussion. We suppose an inflow boundary at x = 1 and therefore require v(1, y, t) ≤ 0. The function

f̃(x, y, t) = v(x, y, t) w(x, y, t)   (3.6)

denotes the flux function of (3.3), and W_ij(t) denotes a time-continuous approximation of w(x_j, y_i, t).
We define the semidiscrete flux

f_ij = v(x_j, y_i, t) W_ij(t)   (3.7)

and let F_{i,j+1/2} be a consistent approximation of f̃(x_{j+1/2}, y_i, t) depending on the neighbouring (semidiscrete) fluxes f_il, l = 0, 1, …, M. Here, consistency means that F_{i,j+1/2} = f̃(x_{j+1/2}, y_i, t) if f_il = f̃(x_{j+1/2}, y_i, t) for l = 0, 1, …, M [9]. Equation (3.3) is now approximated in space by the semidiscrete, conservative expression

dW_ij(t)/dt = -(1/h) ( F_{i,j+1/2} - F_{i,j-1/2} ),   (3.8)

where we have yet to define the terms F_{i,j+1/2} for j = -1, 0, 1, …, M. This is our next concern.

The general flux expression

F_{i,j+1/2} = f_ij + (1/2) φ(r_{i,j+1/2}) (f_ij - f_{i,j-1})   (3.9)

is considered for positive velocities v(x_j, y_i, t) in []. The function φ is called the limiter function, and the ratio r monitors the smoothness of the flux; it is defined as

r_{i,j+1/2} = (f_{i,j+1} - f_ij) / (f_ij - f_{i,j-1}).   (3.10)

Remark. In actual computations we use

r_{i,j+1/2} = (f_{i,j+1} - f_ij + ε) / (f_ij - f_{i,j-1} + ε)

with ε some very small number (e.g. 10⁻¹³) in order to avoid division by zero in uniform flow regions [].

The expression corresponding to (3.9) for negative velocity v(x_j, y_i, t) is obtained by reflecting all indices that appear in f about j + 1/2, yielding

F_{i,j+1/2} = f_{i,j+1} + (1/2) φ(r_{i,j+3/2}⁻¹) (f_{i,j+1} - f_{i,j+2}).   (3.11)

As can be seen, we favour the flux interpolation approach, in contrast to the state interpolation approach used in []. The reason is that, in our objective equation (1.1a), the velocity depends on c, and values thereof are known only at grid points. Inserting (3.9) and (3.11) into Equation (3.8) gives

dW_ij(t)/dt = -(1/h) [ 1 + (1/2) φ(r_{i,j+1/2}) - (1/2) φ(r_{i,j-1/2}) / r_{i,j-1/2} ] (f_ij - f_{i,j-1})   (3.12)

and

dW_ij(t)/dt = -(1/h) [ 1 + (1/2) φ(r_{i,j+1/2}⁻¹) - (1/2) φ(r_{i,j+3/2}⁻¹) r_{i,j+3/2} ] (f_{i,j+1} - f_ij),   (3.13)

respectively. Following [], we require the bracketed terms to be non-negative, that is, 1 + (1/2)φ(a) - (1/2)φ(b)/b ≥ 0 for all a, b ∈ ℝ. This will lead to a positive semidiscretization (see below). Sufficient conditions on φ are

φ(r) = 0 if r ≤ 0;  and  0 ≤ φ(r) ≤ μ, φ(r) ≤ 2r if r > 0.   (3.14)

Here, μ > 0 serves as a free parameter which can be used to obtain higher accuracy near peaks []. With regard to the conditions in Lemma 3.1, the limiter φ should be chosen as a continuously differentiable function. The limiters proposed in [] lead, whenever allowed by the conditions (3.14), to the so-called κ-methods (including the second order central, second order upwind and third order upwind biased approximations), but they are not differentiable for all r. Therefore, we propose using the limiter function due to van Leer, see e.g. [9],

φ_VL(r) = (|r| + r) / (1 + |r|).   (3.15)

This limiter satisfies the conditions (3.14) for μ = 2 and is continuously differentiable for all r ≠ 0; it is not differentiable at r = 0.

Remark. Numerical experiments with both limiters indicate the superiority of van Leer's limiter with respect to positivity. However, if positivity of the numerical solution is less important than accuracy, then the limiter proposed in [] should be used.

Lemma 3.3. Define F_{i,j±1/2} by (3.9) or (3.11), depending on the sign of v(x_j, y_i, t), using van Leer's limiter (3.15). If, depending on the sign of v(x_j, y_i, t), r_{i,j-1/2}, r_{i,j+1/2} > 0 or r_{i,j+1/2}⁻¹, r_{i,j+3/2}⁻¹ > 0 (that is, away from local extrema of w if v has the same sign locally around the point (x_j, y_i)), then the approximation (3.8) is second order accurate in space for sufficiently smooth functions w and v, i.e.

(1/h) ( F_{i,j+1/2} - F_{i,j-1/2} ) - ( v(x_j, y_i, t) w(x_j, y_i, t) )_x = O(h²).   (3.16)

Proof. Taylor expansion of the expressions.

The statement of the lemma is that the approximation is second order in smooth, monotone regions of the solution w. Next we describe exactly which of the flux approximations (3.9), (3.11) to apply for the semidiscretization at a specific point (x_j, y_i) in order to obtain a positive semidiscretization.

1. W_ij(t) > 0. In this case Lemma 3.1 does not impose any conditions, and we compute F_{i,j±1/2} by applying (3.9) if v(x_j, y_i, t) ≥ 0 and (3.11) if v(x_j, y_i, t) < 0.

2. W_ij(t) = 0. Here Lemma 3.1 requires ∂W_ij(t)/∂t ≥ 0 for the right hand side of Equation (3.8). From W_ij(t) = 0 it follows that f_ij = 0, and we decide upon the approximation depending on the sign of v at neighbouring points.

a. v(x_{j-1}, y_i, t) ≥ 0: Use (3.9) to compute F_{i,j±1/2}, giving Equation (3.12). The bracketed expression is non-negative and hence so is the right hand side.

b. Else, if v(x_{j+1}, y_i, t) ≤ 0: Use (3.11) to compute F_{i,j±1/2}, giving Equation (3.13). The bracketed expression is non-negative and hence so is the right hand side.

c. v(x_{j-1}, y_i, t) < 0 and v(x_{j+1}, y_i, t) > 0.
This exceptional case can only occur if the PDE's solution has negative values (see the remark below), or because of too coarse a spatial grid, or through rounding errors. In this case we simply use central differences to approximate the spatial derivative at the point (x_j, y_i).

Remark. Suppose, at time t and point (x_j, y_i), that case c occurs, that w(x, y, t) ≥ 0 for all times t, that w(x_j, y_i, t) = 0 but w ≢ 0 locally around (x_j, y_i) at time t, and that the functions w and v are sufficiently smooth. If the spatial grid is sufficiently fine, then it follows from the conditions of case c that (v(x_j, y_i, t) w(x_j, y_i, t))_x > 0. The PDE (3.3) then implies that w_t(x_j, y_i, t) < 0. That is, for small ε > 0, we have w(x_j, y_i, t + ε) < 0, a contradiction to the non-negativity assumption on w.

We now consider the semidiscretization near the boundary. Figure 2 shows the stencils of the semidiscretization for both positive and negative velocities for grid points away from the boundary.
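Van Leer's limiter and the limited flux for positive velocity take only a few lines; a sketch, vectorized over one grid row (the epsilon default and array conventions are illustrative assumptions):

```python
import numpy as np

def van_leer(r):
    """Van Leer limiter: phi(r) = (|r| + r) / (1 + |r|); zero for r <= 0."""
    return (np.abs(r) + r) / (1.0 + np.abs(r))

def limited_flux(f, eps=1e-13):
    """Limited numerical fluxes F_{j+1/2} for positive velocity along one
    row of semidiscrete fluxes f_j = v_j * W_j; returns the values at the
    half-points j+1/2 for j = 1, ..., len(f)-2.  eps guards against
    division by zero in uniform flow regions."""
    r = (f[2:] - f[1:-1] + eps) / (f[1:-1] - f[:-2] + eps)
    return f[1:-1] + 0.5 * van_leer(r) * (f[1:-1] - f[:-2])
```

For smoothly varying fluxes the ratio is near 1, the limiter returns a value near 1, and the formula reduces to second order upwind; at a local extremum the ratio is negative, the limiter vanishes, and the flux falls back to first order upwind, which is the mechanism behind the positivity argument above.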

FIG. 2. Stencils of the semidiscretization at (x_j, y_i) using (3.9) (left) and (3.11) (right).

We first consider the inflow boundary at x = 1 and the discretization at the point (x_{M-1}, y_i) (the values of w(x_M, y_i, t) are given by the Dirichlet boundary condition). Points further to the left constitute no difficulty. If (3.9) is applied for the discretization of the convection term, then all required fluxes f_il, l = M-3, …, M, are known. The approximation (3.11) is used if W_{i,M-1} > 0 and v(x_{M-1}, y_i, t) ≤ 0, or if W_{i,M-1} = 0 and v(x_M, y_i, t) ≤ 0 (the velocity is always non-positive on an inflow boundary at x = 1, and the exceptional case c cannot occur). The application of (3.11) requires the flux f_{i,M+1}, which we define by second order extrapolation, namely

f_{i,M+1} := 3f_{i,M} - 3f_{i,M-1} + f_{i,M-2}.   (3.17)

In any case, the semidiscretization does not contradict Lemma 3.1 regarding positivity. We now consider the outflow boundary at x = 0, for which we have v(0, y_i, t) ≤ 0. We define f_{i,-1} by second order extrapolation,

f_{i,-1} := 3f_{i,0} - 3f_{i,1} + f_{i,2}.   (3.18)

This allows us to apply approximation (3.11) at the grid point (x_0, y_i) if W_{i,0} > 0, or if W_{i,0} = 0 and v(x_1, y_i, t) ≤ 0, and leads to a positive semidiscretization. In the exceptional case W_{i,0} = 0 and v(x_1, y_i, t) > 0, we apply the second order central difference discretization using the defined value of f_{i,-1}. At the grid point (x_1, y_i) we apply (3.11) if W_{i,1} > 0 and v(x_1, y_i, t) < 0, or if W_{i,1} = 0 and v(x_2, y_i, t) ≤ 0. If W_{i,1} > 0 and v(x_1, y_i, t) ≥ 0, or if W_{i,1} = 0 and v(x_0, y_i, t) ≥ 0 (that is, = 0 because of the outflow boundary), then we apply (3.9) using the defined value of f_{i,-1}. All of these possibilities lead to a positive semidiscretization. In the remaining exceptional case we again use central differences.

Finally, the scheme we have developed is applied to the hyperbolic equation (1.1a) of the PDE system. Again, we consider the discretization of the convection in the x-direction only.
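The quadratic flux extrapolations used at the two x-boundaries are one-liners; a sketch (the argument ordering, from farthest to nearest the boundary, is an illustrative convention):

```python
def extrapolate_flux(f_far, f_mid, f_near):
    """Second order (quadratic) extrapolation of a flux one grid point
    beyond the last three known values, ordered towards the boundary:
    returns 3*f_near - 3*f_mid + f_far, the value at the next grid point
    of the parabola through the three given values."""
    return 3.0 * f_near - 3.0 * f_mid + f_far
```

Since the formula is exact for quadratic data, extrapolating the values 4, 9, 16 of j² at j = 2, 3, 4 yields exactly 25.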
The corresponding y-part is computed in a similar manner and added, together with the source term g_1(N_ij, C_ij), to the right hand side of the semidiscretization. The velocity function v is now defined as

v(x, y, t) = G(c(x, y, t)) c_x(x, y, t)   (3.19)

and the function n is the counterpart of the convected quantity w above. We apply the scheme just described to this new situation, and all that remains is to approximate the velocity v at the spatial grid points. We define approximations of c_x(x_j, y_i, t) by

(C_ij)_x = (1/(2h)) (C_{i,j+1} - C_{i,j-1}),  i = 0, 1, …, M, j = 1, 2, …, M-1,   (3.20)
(C_{i,0})_x = (1/(2h)) (-3C_{i,0} + 4C_{i,1} - C_{i,2}),  i = 0, 1, …, M,   (3.21)
(C_{i,M})_x = (1/(2h)) (3C_{i,M} - 4C_{i,M-1} + C_{i,M-2}),  i = 0, 1, …, M,   (3.22)

where the boundary values of c are used if necessary. This approximation is second order accurate. The approximation for v(x_j, y_i, t) is then given by

v(x_j, y_i, t) ≈ G(C_ij) (C_ij)_x.   (3.23)

Remark. This semidiscretization of Equation (1.1a) is second order accurate in smooth, monotone regions of the solution n. The resulting ODE system is not positive (by Lemma 3.1), because even van Leer's limiter is not smooth enough and some exceptional cases in the convection discretization might lead to positivity violations (the boundary treatment and case c). Despite this complication, we expect the scheme to perform satisfactorily, and the numerical results presented in Subsection C. support this.

The ODEs for N_ij(t) and C_ij(t) constitute a large autonomous system of ODEs with initial conditions N_ij(0) = n_0(x_j, y_i) and C_ij(0) = c_0(x_j, y_i); an appropriate numerical scheme for the solution of this system is developed in the next subsection.

B. Time Split Scheme for the ODE System

The discretization of the spatial derivatives in the previous subsection results in a large m-dimensional system of ODEs in autonomous form

u′(t) = f_1(u(t)) + f_2(u(t)),  t ∈ (0, t_final],  u(0) = u_0.   (3.24)

The vector u(t) contains all unknown time-continuous functions N_ij(t) and C_ij(t) in an appropriate order. The function f_1(u(t)) represents the approximation of the diffusion and reaction terms in (1.1a), (1.1b), and f_2(u(t)) the discretized convection. We regard f_1 as the stiff part and f_2 as the non-stiff part of the ODEs. In this section, we will develop a method for the solution of (3.24) which is both computationally efficient and positive. The last requirement is clarified by

Definition. (Positive ODE solver) An ODE solver is called positive if it generates non-negative approximations to the solution of a positive ODE system.
An implicit method should be used for the solution of u′ = f_1(u) because of the stiffness of this equation; otherwise, stability of the scheme would unnecessarily restrict the time step. We could also use an implicit solver for the ODE u′ = f_2(u), but here the second requirement comes into play. Even if the step size in the algorithm is not restricted by stability, it will be restricted by positivity (except when we use the first order implicit Euler method), and one of the major advantages of implicitness is lost. Furthermore, experiments indicate that methods which require the Jacobian of the right hand side are less appropriate for the solution of ODE systems resulting from a (non-smooth) flux-limited spatial discretization of the advection equation: there is a very large number of rejected steps, which is not the case if unlimited discretizations are used (but then we have negative values in the solution). The non-smoothness of the ODEs' right hand side, introduced by the limiter, is the cause of this behaviour, and we therefore apply an explicit method with good positivity properties for the solution of this ODE. With these two competing concepts (implicitness and stability versus explicitness and positivity) in mind, it seems natural to use a splitting approach for the solution of (3.24).

The solution of (3.24) at time t_k + τ with initial data u_k at time t_k can be written as

u(t_k + τ) = u_k + ∫_{t_k}^{t_k+τ} [ f_1(u(σ)) + f_2(u(σ)) ] dσ   (3.25)
           = u_k + ∫_{t_k}^{t_k+τ/2} f_2(u(σ)) dσ + ∫_{t_k}^{t_k+τ} f_1(u(σ)) dσ + ∫_{t_k+τ/2}^{t_k+τ} f_2(u(σ)) dσ,   (3.26)

where u_k plus the first integral in (3.26) equals z_1(t_k + τ/2), the solution of z_1′(t) = f_2(z_1(t)), z_1(t_k) = u_k, at t = t_k + τ/2. Therefore, we propose the following (Strang) splitting step for (3.24):

z_1′(t) = f_2(z_1(t)), z_1(t_k) = u_k  ⇒  z_1(t_k + τ/2);
z_2′(t) = f_1(z_2(t)), z_2(t_k) = z_1(t_k + τ/2)  ⇒  z_2(t_k + τ);
z_3′(t) = f_2(z_3(t)), z_3(t_k + τ/2) = z_2(t_k + τ)  ⇒  z_3(t_k + τ).   (3.27)

Theorem 3.4. If each ODE system in the splitting scheme (3.27) is solved with a method of consistency order two or higher, and the approximation to z_3(t_k + τ) is used as an approximation for u(t_k + τ), then the order of consistency of the splitting scheme (3.27) for the ODE (3.24) is two.

Proof. Straightforward by Taylor expansion.

Remark. If the right hand side of an ODE system is split into more than two parts, then the theorem, applied recursively to an appropriately extended splitting scheme, again gives consistency order two. This can be useful for multidimensional diffusion discretizations or for the separate treatment of source terms.

As stated at the beginning of this section, we will treat ODEs with right hand side f_2 explicitly and those with right hand side f_1 implicitly. We can therefore regard this splitting scheme as an implicit-explicit (IMEX) scheme. Such schemes have appeared frequently in the recent literature, e.g. [3, , 7]. An important question in these investigations is the linear stability of IMEX schemes. The scalar test equation

u′(t) = λu(t) + μu(t),  λ, μ ∈ ℂ,   (3.28)

with λ associated with the implicitly treated part and μ with the explicitly treated part, is considered in [7], and the following question is investigated: is the IMEX method stable for given eigenvalues λ, μ if τμ is inside the stability domain of the underlying explicit scheme and τλ is inside the stability domain of the underlying implicit scheme? The answer for the methods considered in [7] is negative, except for the IMEX Euler method of order one.
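One splitting step, together with a positivity-friendly explicit sub-solver, can be sketched as follows (the three-stage method below is a convex-form stand-in with the stated stability polynomial, not the paper's specific Butcher array; the solver names and calling conventions are assumptions):

```python
import math

def rk32_step(u, dt, f):
    """Three-stage, second order explicit RK step with linear positivity
    threshold 2, in convex (Shu-Osher-type) form; its stability polynomial
    is R(x) = 1 + x + x^2/2 + x^3/12."""
    u1 = u + 0.5 * dt * f(u)
    u2 = u1 + 0.5 * dt * f(u1)
    return u / 3.0 + (2.0 / 3.0) * (u2 + 0.5 * dt * f(u2))

def strang_step(u, tau, solve_expl, solve_impl):
    """One Strang splitting step: half step with the explicit (convection)
    part, full step with the stiff part, half step with the explicit part.
    Each sub-solver advances its subproblem by the given time increment."""
    z1 = solve_expl(u, tau / 2.0)
    z2 = solve_impl(z1, tau)
    return solve_expl(z2, tau / 2.0)
```

For the scalar test equation with exact sub-flows, the composition reproduces exp((λ+μ)τ) exactly, which gives a quick sanity check of the arrangement of half and full steps.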
For the splitting algorithm proposed here we obtain the following result.

Lemma 3.5. The stability function of the splitting algorithm (3.27) applied to the test equation (3.28) is given by

R(τλ, τμ) = R_2(τμ/2) R_1(τλ) R_2(τμ/2),   (3.29)

where R_i is the stability function of the method used to solve the ODEs with right hand side f_i (i = 1, 2). We can now answer the question raised earlier.

Lemma 3.6. The splitting algorithm (3.27) is stable with respect to the test equation (3.28) if τμ is inside the stability domain of the underlying explicit scheme and τλ is inside the stability domain of the underlying implicit scheme.

Remark. Note that the explicit method with stability function R_2 is applied with time steps τ/2, which, in fact, means that its stability domain is enlarged by a factor of two.

The specific choice of methods for the solution of the ODEs in (3.27) is considered next.

1. The Explicit Method

A positive, non-stiff problem of the form

u′(t) = f_2(u(t)),  u(t_k) = u_k   (3.30)

has to be solved in steps one and three of the splitting scheme (3.27). We will use an explicit Runge-Kutta method (RKM) to advance the initial condition over a time step. Theorem 3.4 states that we require only a second order accurate method in order to obtain overall second order consistency of the splitting scheme; we therefore restrict ourselves to such methods. A more important question is how to restrict the time step so that the solution of the RKM is positive. We will look at positivity of the method applied to the constant coefficient problem with f_2(u) = Pu. Constant coefficient problems are positive if and only if the off-diagonal elements of the matrix P are non-negative [3]. The question of positivity of a method then reduces to the question of absolute monotonicity of its stability function R(x).

Definition. [3] A rational function R(x) is called absolutely monotonic at a point x ∈ ℝ if and only if R and all its derivatives are non-negative at x. The threshold factor of R, T(R), is defined as

T(R) = sup{ r | r = 0, or r > 0 and R is absolutely monotonic at x = -r }.   (3.31)

Let α > 0.
Define the (linear) positivity threshold τ_max(α) of a method with stability function R as the largest value in [0, ∞] such that the method is positive for all step sizes τ ∈ (0, τ_max(α)) on all positive systems (3.30) with f_2(u) = Pu and diagonal elements p_ii ≥ -α. With these definitions at hand, we state the following result due to Bolley and Crouzeix [4], see also [3].

Theorem 3.7. It holds that

τ_max(α) = T(R)/α.   (3.32)

Absolute monotonicity of polynomials is investigated in [3], and for s-stage explicit RKMs of order s it is shown that T(R) = 1. The optimal stability polynomial with

regard to absolute monotonicity for three-stage, second order methods is R(x) = 1 + x + x²/2 + x³/12, with T(R) = 2; in other words, the largest allowable step size for positivity of the method is doubled at the cost of only one more evaluation of the right hand side of the ODE. An examination of the domain of linear stability of this function shows that it is also larger than the corresponding domain of a 2-stage, second order method. The solution of the order conditions for order two and of the optimality conditions on the stability function for a 3-stage explicit RKM are given in the form of a Butcher array in Figure 3 (left). We use the free parameters (a_32, b_2, b_3) in the method to satisfy one of the two third-order conditions. The Butcher array on the right of Figure 3 gives one possible choice of these parameters, and this is the method that will be used for the solution of the ODEs in steps one and three of the splitting scheme (3.27). We note that we cannot satisfy both third-order conditions without reducing the threshold factor of the method to T(R) = 1.

FIG. 3. Butcher array of a general 3-stage, second order explicit RKM with optimal linear positivity (left) and a specific choice which satisfies one of the two third-order conditions (right).

2. The Implicit Method

The ODE system

z_2′(t) = f_1(z_2(t)),  z_2(t_k) = z̃   (3.33)

has to be solved from t_k to t_k + τ in step two of the splitting algorithm (3.27). It is the semidiscretization of a heat conduction equation and, because of its stiffness, we solve it with a linearly implicit scheme. The only implicit relations in these methods are linear equation systems involving the Jacobian of the right hand side of the ODE system. The Jacobian J of f_1 is broadly banded in our case (i.e.
a bandwidth O(M) on an M × M grid, whatever smart arrangement of the components of z_2 is used), and therefore the application of direct solvers for the linear systems would ruin the efficiency of the splitting scheme (3.27). Iterative methods could be a way out of this, but we favour another approach. The right hand side of the ODE can be written as f_1 = f_a + f_b, where f_a contains the terms arising from the discretization of c_yy and f_b contains all the remaining terms. We now note that the Jacobian J_a of f_a is a tridiagonal matrix if the components of z_2 are appropriately arranged, and that the Jacobian J_b of f_b is a pentadiagonal matrix for a different arrangement of z_2. Therefore, from a computational efficiency point of view, it seems worthwhile to consider linearly implicit splitting methods. We employ the linearly implicit trapezoidal splitting method [6]. This method involves only the solution of two linear systems (one with J_a, the other with J_b) per time step. A time step of this method to obtain an approximation z̃ of z_2(t_k + τ)