NUMERICAL METHODS FOR NONLINEAR PARABOLIC EQUATIONS WITH SMALL PARAMETER BASED ON PROBABILITY APPROACH

GRIGORI N. MILSTEIN, MICHAEL V. TRETYAKOV

Weierstraß-Institut für Angewandte Analysis und Stochastik, Mohrenstr. 39, D-10117 Berlin, Germany; e-mail: milstein@wias-berlin.de

Department of Mathematics, Ural State University, Lenin str. 51, 60083 Ekaterinburg, Russia; e-mail: Michael.Tretyakov@usu.ru

1991 Mathematics Subject Classification. 35K55, 60H10, 60H30, 65M99.

Key words and phrases. Semilinear parabolic equations, reaction-diffusion systems, probabilistic representations for equations of mathematical physics, stochastic differential equations with small noise.

Abstract. The probabilistic approach is used for constructing special layer methods to solve the Cauchy problem for semilinear parabolic equations with small parameter. In spite of their probabilistic nature, these methods are nevertheless deterministic. The algorithms are tested by simulating the Burgers equation with small viscosity and the generalized KPP-equation with a small parameter.

1. Introduction

Nonlinear partial differential equations (PDE) are usually not susceptible of analytic solution and are mostly investigated by means of numerical methods. Numerical methods used for solving PDE are traditionally based on deterministic approaches (see, e.g., [2, 3, 6] and references therein). A class of layer methods intended to solve semilinear parabolic equations is introduced in [18], where the well-known probabilistic representations of solutions to linear parabolic equations and the ideas of weak-sense numerical integration of stochastic differential equations (SDE) are used to construct numerical algorithms. In spite of their probabilistic nature, these methods are nevertheless deterministic.

Nonlinear parabolic equations with small parameter arise in a variety of applications (see, e.g., [2, 4, 8, 11, 25] and references therein). For instance, they are used in gas dynamics, when one has to take into account small viscosity and small heat conductivity. Some problems of combustion are described by PDE with small parameter. They also arise as the result of introducing artificial viscosity in systems of first-order hyperbolic equations, which is one of the popular approaches to the numerical solution of inviscid problems of gas dynamics [20, 24, 29].

Here we construct some layer methods for solving the Cauchy problem for semilinear parabolic equations with small parameter of the form

(1.1)  ∂u/∂t + (ε²/2) Σ_{i,j=1}^{d} a^{ij}(t,x,u) ∂²u/∂x^i∂x^j + Σ_{i=1}^{d} (b_i(t,x,u) + ε² c_i(t,x,u)) ∂u/∂x^i + g(t,x,u) = 0,  t ∈ [t_0, T),  x ∈ R^d,

(1.2)  u(T,x) = φ(x).

The probabilistic representations of the solution to the problem (1.1)-(1.2) are connected with systems of SDE with small noise. Precisely for such systems, special weak approximations are proposed in [19]. Applying these special approximations, we get new layer methods intended to solve the Cauchy problem (1.1)-(1.2). As it turns out, if the solution of (1.1)-(1.2) is regular, the errors of the proposed methods have the form O(h^p + ε^l h^q), p > q, l > 0, where h is the step of the time discretization. Owing to the fact that the accuracy order of such methods is equal to a comparatively small q, the methods are not too complicated. But due to the large p and the small factor ε^l at h^q, their errors are fairly low and therefore these methods are highly efficient.

The singular case, when derivatives of the solution go to infinity as ε → 0, requires a special theoretical investigation. For equations of a particular type, we give the corresponding theoretical results. As to the equation (1.1) of the general type, we restrict ourselves to the analysis of the one-step error. Further theoretical investigations should rest on a stability analysis and on particular properties of the solution. However, we test the methods constructed

here on model problems for which shock waves are observed. The tests give quite good results not only in simulations of wave formation, which corresponds to the regular case, but also in simulations of wave propagation, i.e., in the singular case. The reasons for these experimental facts and possible ways to get realistic error estimates for the methods in the singular case are discussed.

Section 2 gives some results from [18, 19] used in this paper. In Section 3, new implicit and explicit layer methods for the problem (1.1)-(1.2) are proposed. Some theorems on their rates of convergence, both in the regular and in the singular cases, are proved. For implementation of the layer methods, we need a space discretization. The numerical algorithms based on the proposed layer methods and on linear interpolation are constructed in Section 4. For the sake of simplicity in writing, Sections 3 and 4 deal with the one-dimensional case of the problem (1.1)-(1.2). In Section 5, extensions to the multidimensional case and to a system of reaction-diffusion equations are given. In Sections 6 and 7, we propose layer methods in two particular cases of the problem (1.1)-(1.2). Together with two-layer methods, some three-layer methods are obtained in Section 6. For constructing a layer method in Section 7, we employ the exact simulation of the Brownian motion [17] instead of the weak simulation used in Sections 2-6 to approximate the SDE arising in the probabilistic representations. All the numerical algorithms presented in the paper are tested through computer experiments. Some results of numerical tests on the Burgers equation with small viscosity and on the generalized KPP-equation with a small parameter are given in Section 8.

This paper is devoted to initial value problems. Boundary value problems for nonlinear parabolic equations with small parameter will be considered in a separate work. The probability approach to linear boundary value problems is treated in [15, 16, 17].

2. Preliminaries

Here we give, in the required form, some results from [18, 19] which are used in the next sections.

2.1. Probabilistic approach to constructing numerical methods for semilinear PDE. Let the Cauchy problem (1.1)-(1.2) have the unique solution u = u(t,x) which is sufficiently smooth and satisfies some needed conditions of boundedness (see the corresponding theoretical results, e.g., in [13, 28]). If we substitute u = u(t,x) in the coefficients of (1.1), we obtain a linear parabolic equation with small parameter. The solution to this linear equation has the following probabilistic representation:

(2.1)  u(t,x) = E[φ(X_{t,x}(T)) + Z_{t,x,0}(T)],  t_0 ≤ t ≤ T,  x ∈ R^d,

where X_{t,x}(s), Z_{t,x,z}(s), s ≥ t, is the solution to the Cauchy problem for the system of stochastic differential equations

(2.2)  dX = (b(s,X,u(s,X)) + ε² c(s,X,u(s,X))) ds + ε σ(s,X,u(s,X)) dw(s),  X(t) = x,

(2.3)  dZ = g(s,X,u(s,X)) ds,  Z(t) = z.

Here w(s) = (w¹(s),...,w^d(s))ᵀ is a d-dimensional standard Wiener process, b(s,x,u) and c(s,x,u) are d-dimensional column vectors compounded from the coefficients b_i(s,x,u) and c_i(s,x,u) of (1.1), and σ(s,x,u) is a d×d matrix obtained from the equation a(s,x,u) = σ(s,x,u) σᵀ(s,x,u), where a = {a^{ij}}; this equation is solvable with respect to σ (for instance, by a lower triangular matrix) at least in the case of a positive definite a.
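Before turning to the deterministic layer methods, it may help to see how the representation (2.1)-(2.3) can be used directly. The following Python sketch estimates u(t,x) by straightforward Monte Carlo simulation of (2.2)-(2.3) with the Euler scheme and Gaussian increments, for the case d = 1 and coefficients that do not depend on u (i.e., a linear equation; cf. Remark 3.1 below, where Monte Carlo is mentioned as preferable for high-dimensional linear problems). The function names and signatures are assumptions of this illustration, not part of the paper.

```python
import numpy as np

def feynman_kac_mc(t, x, T, b, c, sigma, g, phi, eps, h, n_paths=10_000, rng=None):
    """Monte Carlo estimate of u(t, x) = E[phi(X_{t,x}(T)) + Z_{t,x,0}(T)]  (cf. (2.1)),
    where dX = (b + eps^2 c) ds + eps*sigma dw and dZ = g ds  (cf. (2.2)-(2.3)).
    Here b, c, sigma, g are functions of (s, x) only, i.e. the linear case, d = 1."""
    rng = np.random.default_rng() if rng is None else rng
    n_steps = int(round((T - t) / h))
    X = np.full(n_paths, float(x))
    Z = np.zeros(n_paths)
    s = t
    for _ in range(n_steps):
        dw = np.sqrt(h) * rng.standard_normal(n_paths)   # Wiener increments
        Z += h * g(s, X)                                 # Euler step for Z
        X += h * (b(s, X) + eps**2 * c(s, X)) + eps * sigma(s, X) * dw
        s += h
    return float(np.mean(phi(X) + Z))
```

Such a direct estimator carries a statistical error of order n_paths^(-1/2); the layer methods constructed below avoid random sampling altogether.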

Note that to simplify the notation we write u(t,x) instead of u(t,x;ε) and X_{t,x}(s) instead of X_{t,x}^ε(s) throughout the paper.

Introduce a discretization, for definiteness the equidistant one:

T = t_N > t_{N−1} > ⋯ > t_0,  h := (T − t_0)/N.

Note that all the methods given in the paper can easily be adapted to a nonequidistant discretization. For instance, we use a variable discretization step h in some of our numerical tests (see Section 8.1).

Because the right-hand side of the equation (2.3) is independent of Z, we have from (2.1):

(2.4)  u(t_k,x) = E[φ(X_{t_k,x}(T)) + Z_{t_k,x,0}(T)]
 = E[φ(X_{t_{k+1},X_{t_k,x}(t_{k+1})}(T)) + Z_{t_{k+1},X_{t_k,x}(t_{k+1}),Z_{t_k,x,0}(t_{k+1})}(T)]
 = E E[φ(X_{t_{k+1},X_{t_k,x}(t_{k+1})}(T)) + Z_{t_{k+1},X_{t_k,x}(t_{k+1}),Z_{t_k,x,0}(t_{k+1})}(T) | X_{t_k,x}(t_{k+1}), Z_{t_k,x,0}(t_{k+1})]
 = E[u(t_{k+1}, X_{t_k,x}(t_{k+1})) + Z_{t_k,x,0}(t_{k+1})].

In accordance with the probabilistic approach to constructing numerical methods for semilinear PDE from [18], the ideas of weak-sense numerical integration of SDE [1, 14, 21] are used to obtain approximate relations from (2.2)-(2.4). The relations allow us to express approximations ū(t_k,x) of the solution u(t_k,x) in terms of ū(t_{k+1},x) recurrently, i.e., to construct layer methods which are discrete in the variable t only.

To make the approach clear, it is relevant to derive one of the methods from [18] which is used in this paper extensively. For simplicity in writing, we restrict ourselves to the case d = 1. Applying the explicit weak Euler scheme with the simplest noise simulation [1, 14, 21] to the system (2.2)-(2.3), we get

(2.5)  X_{t_k,x}(t_{k+1}) ≈ X̄_{t_k,x}(t_{k+1}) = x + h b(t_k,x,u(t_k,x)) + ε²h c(t_k,x,u(t_k,x)) + ε h^{1/2} σ(t_k,x,u(t_k,x)) ξ_k,
       Z_{t_k,x,z}(t_{k+1}) ≈ Z̄_{t_k,x,z}(t_{k+1}) = z + h g(t_k,x,u(t_k,x)),

where ξ_{N−1}, ξ_{N−2}, ..., ξ_0 are i.i.d. random variables with the law P(ξ = ±1) = 1/2. Using (2.4)-(2.5), we obtain

(2.6)  u(t_k,x) ≈ E[u(t_{k+1}, X̄_{t_k,x}(t_{k+1})) + Z̄_{t_k,x,0}(t_{k+1})]
 = (1/2) u(t_{k+1}, x + h b(t_k,x,u(t_k,x)) + ε²h c(t_k,x,u(t_k,x)) + ε h^{1/2} σ(t_k,x,u(t_k,x)))
 + (1/2) u(t_{k+1}, x + h b(t_k,x,u(t_k,x)) + ε²h c(t_k,x,u(t_k,x)) − ε h^{1/2} σ(t_k,x,u(t_k,x)))
 + h g(t_k,x,u(t_k,x)).

Thus, we can calculate the approximations ū(t_k,x) layerwise:

(2.7)  ū(t_N,x) = φ(x),
       ū(t_k,x) = (1/2) ū(t_{k+1}, x + h b(t_k,x,ū(t_k,x)) + ε²h c(t_k,x,ū(t_k,x)) + ε h^{1/2} σ(t_k,x,ū(t_k,x)))
 + (1/2) ū(t_{k+1}, x + h b(t_k,x,ū(t_k,x)) + ε²h c(t_k,x,ū(t_k,x)) − ε h^{1/2} σ(t_k,x,ū(t_k,x)))
 + h g(t_k,x,ū(t_k,x)),  k = N−1, ..., 1, 0.

The method (2.7) is an implicit layer method for solving the Cauchy problem (1.1)-(1.2). This method is deterministic, though the probabilistic approach is used for its construction. Applying the method of simple iteration to (2.7) with ū(t_{k+1},x) as a null iteration, we get the first iteration (we denote it as ū(t_k,x) again):

(2.8)  ū(t_N,x) = φ(x),
       ū(t_k,x) = (1/2) ū(t_{k+1}, x + h b(t_k,x,ū(t_{k+1},x)) + ε²h c(t_k,x,ū(t_{k+1},x)) + ε h^{1/2} σ(t_k,x,ū(t_{k+1},x)))
 + (1/2) ū(t_{k+1}, x + h b(t_k,x,ū(t_{k+1},x)) + ε²h c(t_k,x,ū(t_{k+1},x)) − ε h^{1/2} σ(t_k,x,ū(t_{k+1},x)))
 + h g(t_k,x,ū(t_{k+1},x)),  k = N−1, ..., 1, 0.

According to the propositions proved in [18], this explicit layer method has one-step error estimated by O(h²) and global error O(h).

Of course, all the results from [18] can be applied to solving the problem with small parameter (1.1)-(1.2). But since the probabilistic representation of the solution to (1.1)-(1.2) is connected with the system of differential equations with small noise (2.2)-(2.3), one can expect that the use of weak approximations for SDE with small noise [19] leads to new effective methods for solving (1.1)-(1.2).
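To illustrate how a layer method is organized in practice, here is a minimal Python sketch of the explicit method (2.8) for d = 1. It evaluates each layer on a user-supplied grid and obtains the values at the shifted points by linear interpolation (anticipating the algorithms of Section 4); the grid, the clamping of points that leave the grid (np.interp reuses the end values), and the function names are assumptions of this sketch rather than part of the method.

```python
import numpy as np

def explicit_layer_method_2_8(phi, b, c, sigma, g, eps, t0, T, N, x_grid):
    """A sketch of the explicit layer method (2.8) for d = 1.
    Returns an approximation of u(t0, x) on x_grid; values at the shifted points
    x + h*b + eps^2*h*c +/- eps*sqrt(h)*sigma are obtained by linear interpolation."""
    h = (T - t0) / N
    t = t0 + h * np.arange(N + 1)
    u = phi(x_grid)                          # layer t_N = T
    for k in range(N - 1, -1, -1):
        bk = b(t[k], x_grid, u)              # coefficients at (t_k, x, u_bar(t_{k+1}, x))
        ck = c(t[k], x_grid, u)
        sk = sigma(t[k], x_grid, u)
        gk = g(t[k], x_grid, u)
        shift = h * bk + eps**2 * h * ck
        up = np.interp(x_grid + shift + eps * np.sqrt(h) * sk, x_grid, u)
        dn = np.interp(x_grid + shift - eps * np.sqrt(h) * sk, x_grid, u)
        u = 0.5 * (up + dn) + h * gk         # new layer u_bar(t_k, x)
    return u
```

The coefficient functions are assumed to accept and return NumPy arrays, so one call advances the whole layer at once.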

2.2. Some weak approximations for SDE with small noise. Here we recall some weak approximations for SDE with small noise

(2.9)  dX = b(t,X) dt + ε² c(t,X) dt + ε σ(t,X) dw(t),  X(t_0) = x,  t ∈ [t_0,T],  0 ≤ ε ≤ ε_0,

where X, b(t,x), and c(t,x) are d-dimensional column vectors, σ(t,x) is a d×m matrix, w(t) = (w¹(t),...,w^m(t))ᵀ is an m-dimensional standard Wiener process, and ε_0 is a positive number. The coefficients are supposed to satisfy the corresponding conditions of smoothness and boundedness. The one-step error ρ and the global error R of a weak approximation X̄_k = X̄_{t,x}(t_k) at the point (t,x) are defined as

ρ = |E f(X_{t,x}(t+h)) − E f(X̄_{t,x}(t+h))|,  R = |E f(X_{t,x}(T)) − E f(X̄_{t,x}(T))|,

where f(x) is a function belonging to a sufficiently wide class (see details, e.g., in [1, 14, 21]).

Below we write down a number of weak schemes from [14] and [19] which are used in the next sections.

The Euler scheme:

(2.10)  X̄_{k+1} = X̄_k + h b(t_k,X̄_k) + ε²h c(t_k,X̄_k) + ε h^{1/2} σ(t_k,X̄_k) ξ_k,  ρ = O(h²),  R = O(h),

where ξ_k = (ξ_k¹,...,ξ_k^m)ᵀ are i.i.d. m-dimensional vectors with i.i.d. components, each component distributed by the law P(ξ = ±1) = 1/2.

The Runge-Kutta scheme with error O(h² + ε²h):

(2.11)  X̄_{k+1} = X̄_k + (h/2) b(t_k,X̄_k) + (h/2) b(t_{k+1}, X̄_k + h b(t_k,X̄_k)) + ε²h c(t_k,X̄_k) + ε h^{1/2} σ(t_k,X̄_k) ξ_k,
        ρ = O(h³ + ε²h²),  R = O(h² + ε²h),

where ξ_k are the same as in (2.10).

The special second-order Runge-Kutta scheme (for SDE with small additive noise, i.e., σ is a constant matrix and c ≡ 0):

(2.12)  X̄_{k+1} = X̄_k + (h/2) b(t_k,X̄_k) + (h/2) b(t_{k+1}, X̄_k + h b(t_k,X̄_k) + ε h^{1/2} σ ξ_k) + ε h^{1/2} σ ξ_k,
        ρ = O(h³),  R = O(h²),

where ξ_k = (ξ_k¹,...,ξ_k^m)ᵀ are i.i.d. m-dimensional vectors with i.i.d. components, each component distributed by the law P(ξ = 0) = 2/3, P(ξ = ±√3) = 1/6.

The special Runge-Kutta scheme with error O(h⁴ + ε²h²) (for SDE with small additive noise, i.e., σ is a constant matrix and c ≡ 0):

(2.13)  X̄_{k+1} = X̄_k + (1/6)(k_1 + 2k_2 + 2k_3 + k_4) + ε h^{1/2} σ ξ_k,
        k_1 = h b(t_k, X̄_k),  k_2 = h b(t_{k+1/2}, X̄_k + k_1/2),
        k_3 = h b(t_{k+1/2}, X̄_k + k_2/2 + ε h^{1/2} σ ξ_k),  k_4 = h b(t_{k+1}, X̄_k + k_3 + ε h^{1/2} σ ξ_k),
        ρ = O(h⁵ + ε²h³),  R = O(h⁴ + ε²h²),

where ξ_k are the same as in (2.12). Note that in Sections 6 and 7 we also use some other special weak approximations.
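The schemes (2.10) and (2.11) are simple one-step maps, and writing them out in code may make the structure of the layer methods below more transparent. The following sketch implements them for a scalar SDE (d = m = 1) together with the two-point noise ξ = ±1; the way the coefficients are passed is an assumption of this illustration.

```python
import numpy as np

def euler_step(x, t, h, b, c, sigma, eps, xi):
    """One step of the weak Euler scheme (2.10); global weak error O(h)."""
    return x + h * b(t, x) + eps**2 * h * c(t, x) + eps * np.sqrt(h) * sigma(t, x) * xi

def runge_kutta_step(x, t, h, b, c, sigma, eps, xi):
    """One step of the Runge-Kutta scheme (2.11); global weak error O(h^2 + eps^2*h)."""
    bx = b(t, x)
    return (x + 0.5 * h * bx + 0.5 * h * b(t + h, x + h * bx)
            + eps**2 * h * c(t, x) + eps * np.sqrt(h) * sigma(t, x) * xi)

def sample_xi(rng, size):
    """Two-point noise used in (2.10)-(2.11): P(xi = +1) = P(xi = -1) = 1/2."""
    return rng.choice([-1.0, 1.0], size=size)
```

In the layer methods the expectation over ξ is taken exactly (it is a two- or three-point average), which is why the resulting algorithms are deterministic.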

3. Methods for a general semilinear parabolic equation with small parameter

Here we construct new layer methods for semilinear parabolic equations with small parameter (1.1)-(1.2) using the probabilistic approach from [18] and applying specific weak approximations from [19] to the system with small noise (2.2)-(2.3).

3.1. Implicit layer method. For the sake of simplicity in writing, let us consider the Cauchy problem (1.1)-(1.2) with d = 1:

(3.1)  ∂u/∂t + (ε²/2) σ²(t,x,u) ∂²u/∂x² + (b(t,x,u) + ε² c(t,x,u)) ∂u/∂x + g(t,x,u) = 0,  t ∈ [t_0,T),  x ∈ R¹,

(3.2)  u(T,x) = φ(x).

The probabilistic representation of the solution u(t,x) to this problem has the form (2.1)-(2.3) with d = 1. Applying the Runge-Kutta scheme (2.11) to the system (2.2)-(2.3), we get

(3.3)  X_{t_k,x}(t_{k+1}) ≈ X̄_{t_k,x}(t_{k+1}) = x + (h/2) b_k + (h/2) b(t_{k+1}, x + h b_k, u(t_{k+1}, x + h b_k)) + ε²h c_k + ε h^{1/2} σ_k ξ_k,
       Z_{t_k,x,z}(t_{k+1}) ≈ Z̄_{t_k,x,z}(t_{k+1}) = z + (h/2) g_k + (h/2) g(t_{k+1}, x + h b_k, u(t_{k+1}, x + h b_k)),

where b_k, c_k, σ_k, and g_k are the coefficients b, c, σ, and g calculated at the point (t_k, x, u(t_k,x)). Using the probabilistic representation (2.4), we obtain

u(t_k,x) ≈ E[u(t_{k+1}, X̄_{t_k,x}(t_{k+1})) + Z̄_{t_k,x,0}(t_{k+1})]
 = (1/2) u(t_{k+1}, x + h[b_k + b(t_{k+1}, x + h b_k, u(t_{k+1}, x + h b_k))]/2 + ε²h c_k + ε h^{1/2} σ_k)
 + (1/2) u(t_{k+1}, x + h[b_k + b(t_{k+1}, x + h b_k, u(t_{k+1}, x + h b_k))]/2 + ε²h c_k − ε h^{1/2} σ_k)
 + (h/2) g_k + (h/2) g(t_{k+1}, x + h b_k, u(t_{k+1}, x + h b_k)).

We can approximate u(t_k,x) by v(t_k,x) found from

(3.4)  v(t_k,x) = (1/2) u(t_{k+1}, x + h[b̃_k + b(t_{k+1}, x + h b̃_k, u(t_{k+1}, x + h b̃_k))]/2 + ε²h c̃_k + ε h^{1/2} σ̃_k)
 + (1/2) u(t_{k+1}, x + h[b̃_k + b(t_{k+1}, x + h b̃_k, u(t_{k+1}, x + h b̃_k))]/2 + ε²h c̃_k − ε h^{1/2} σ̃_k)
 + (h/2) g̃_k + (h/2) g(t_{k+1}, x + h b̃_k, u(t_{k+1}, x + h b̃_k)),

where b̃_k, c̃_k, σ̃_k, and g̃_k are the coefficients b, c, σ, and g calculated at the point (t_k, x, v(t_k,x)). The corresponding implicit layer method has the form

(3.5)  ū(t_N,x) = φ(x),
       ū(t_k,x) = (1/2) ū(t_{k+1}, x + h[b̄_k + b(t_{k+1}, x + h b̄_k, ū(t_{k+1}, x + h b̄_k))]/2 + ε²h c̄_k + ε h^{1/2} σ̄_k)
 + (1/2) ū(t_{k+1}, x + h[b̄_k + b(t_{k+1}, x + h b̄_k, ū(t_{k+1}, x + h b̄_k))]/2 + ε²h c̄_k − ε h^{1/2} σ̄_k)
 + (h/2) ḡ_k + (h/2) g(t_{k+1}, x + h b̄_k, ū(t_{k+1}, x + h b̄_k)),  k = N−1, ..., 0,

where b̄_k, c̄_k, σ̄_k, and ḡ_k are the coefficients b, c, σ, and g calculated at the point (t_k, x, ū(t_k,x)).

3.2. Convergence theorem in the regular case. Let us assume that:

(i) The coefficients b(t,x,u) and g(t,x,u) and their first and second derivatives are continuous and uniformly bounded; the coefficients c(t,x,u) and σ(t,x,u) and their first derivatives are continuous and uniformly bounded:

(3.6)  |∂^{i+j+l} b/∂t^i∂x^j∂u^l| ≤ K,  |∂^{i+j+l} g/∂t^i∂x^j∂u^l| ≤ K,  0 ≤ i+j+l ≤ 2,
       |∂^{i+j+l} c/∂t^i∂x^j∂u^l| ≤ K,  |∂^{i+j+l} σ/∂t^i∂x^j∂u^l| ≤ K,  0 ≤ i+j+l ≤ 1,
       t_0 ≤ t ≤ T,  x ∈ R¹,  u_* < u < u^*,

where −∞ ≤ u_* < u^* ≤ +∞ are some constants.

(ii) There exists a unique bounded solution u(t,x) of the problem (3.1)-(3.2) such that

(3.7)  u_* < u_− ≤ u(t,x) ≤ u_+ < u^*,

where u_−, u_+ are some constants, and there exist the uniformly bounded derivatives

(3.8)  |∂^{i+j} u/∂t^i∂x^j| ≤ K,  i = 0, j = 1,...,4;  i = 1, j = 0,1,2;  i = 2, j = 0,1;  i = 3, j = 0;
       t_0 ≤ t ≤ T,  x ∈ R¹,  0 < ε ≤ ε_0.

Below, in Section 3.4, we consider the singular case, when the condition (3.8) is not fulfilled. In Lemma 3.1 and Theorem 3.1 we use the letters K and C without any index for various constants which do not depend on h, k, x, ε.

Lemma 3.1. Under the assumptions (i)-(ii) the one-step error of the implicit layer method (3.5) is estimated by O(h³ + ε²h²), i.e.,

|v(t_k,x) − u(t_k,x)| ≤ C (h³ + ε²h²),

where v(t_k,x) is found from (3.4) and C does not depend on h, k, x, ε.

Proof. Introduce the function U_{t_k,x}(v), defined as the right-hand side of (3.4) with the coefficients b̃_k, c̃_k, σ̃_k, g̃_k calculated at the point (t_k, x, v), so that (3.4) reads v(t_k,x) = U_{t_k,x}(v(t_k,x)). To prove the lemma, we make use of the method of simple iteration. Define the sequence

v^{(i)}(t_k,x) := U_{t_k,x}(v^{(i−1)}(t_k,x)),  i = 1, 2, ...,

and take u(t_k,x) as the null iteration: v^{(0)}(t_k,x) = u(t_k,x).

Firstly we prove that

(3.9)  |v^{(1)}(t_k,x) − v^{(0)}(t_k,x)| = |v^{(1)}(t_k,x) − u(t_k,x)| ≤ C (h³ + ε²h²).

We have

(3.10)  v^{(1)}(t_k,x) = U_{t_k,x}(u(t_k,x)),

i.e., the right-hand side of (3.4) with the coefficients b_k, c_k, σ_k, g_k calculated at the point (t_k, x, u(t_k,x)). Using the assumptions (i)-(ii), we expand the functions u and g:

u(t_{k+1}, x + h[b_k + b(t_{k+1}, x + h b_k, u(t_{k+1}, x + h b_k))]/2 + ε²h c_k ± ε h^{1/2} σ_k)
 = u(t_k,x) + (∂u/∂t) h
 + (∂u/∂x)( b_k h + (1/2)(∂b/∂t)h² + (1/2)(∂b/∂x) b_k h² + (1/2)(∂b/∂u)(∂u/∂t)h² + (1/2)(∂b/∂u)(∂u/∂x) b_k h² + ε² c_k h ± ε σ_k h^{1/2} )
 + (1/2)(∂²u/∂t²) h² + (∂²u/∂t∂x)( b_k h² ± ε σ_k h^{3/2} )
 + (1/2)(∂²u/∂x²)( ε² σ_k² h + b_k² h² ± 2 ε h^{1/2} σ_k (b_k h + ε² c_k h) )
 + (1/6)(∂³u/∂x³)(±1)³ ε³ σ_k³ h^{3/2} + O(h³ + ε²h²),

g(t_{k+1}, x + h b_k, u(t_{k+1}, x + h b_k)) = g_k + (∂g/∂t)h + (∂g/∂x) b_k h + (∂g/∂u)(∂u/∂t)h + (∂g/∂u)(∂u/∂x) b_k h + O(h²),

where the derivatives of u are calculated at the point (t_k,x), the derivatives of the coefficients b and g are calculated at the point (t_k, x, u(t_k,x)), and

(3.11)  |O(h³ + ε²h²)| ≤ C (h³ + ε²h²),  |O(h²)| ≤ C h².

Substituting these expansions in (3.10), we get

v^{(1)}(t_k,x) = u(t_k,x) + h ( ∂u/∂t + (∂u/∂x)(b_k + ε² c_k) + (ε²/2) σ_k² ∂²u/∂x² + g_k )
 + (1/2) h² ( ∂²u/∂t² + (∂b/∂t)(∂u/∂x) + (∂b/∂u)(∂u/∂t)(∂u/∂x) + b_k ∂²u/∂x∂t + ∂g/∂t + (∂g/∂u)(∂u/∂t) )
 + (1/2) b_k h² ( ∂²u/∂t∂x + (∂b/∂x)(∂u/∂x) + (∂b/∂u)(∂u/∂x)² + b_k ∂²u/∂x² + ∂g/∂x + (∂g/∂u)(∂u/∂x) )
 + O(h³ + ε²h²).

Then, adding and subtracting the appropriate terms of order O(ε²h²) in the above expression, we obtain

(3.12)  v^{(1)}(t_k,x) = u(t_k,x) + h ( ∂u/∂t + (∂u/∂x)(b_k + ε² c_k) + (ε²/2) σ_k² ∂²u/∂x² + g_k )
 + (1/2) h² (∂/∂t)( ∂u/∂t + (∂u/∂x)(b + ε² c) + (ε²/2) σ² ∂²u/∂x² + g )
 + (1/2) b_k h² (∂/∂x)( ∂u/∂t + (∂u/∂x)(b + ε² c) + (ε²/2) σ² ∂²u/∂x² + g )
 + O(h³ + ε²h²),

where, after differentiation, all the expressions are calculated at the point (t_k,x). Taking into account that u(t,x) is the solution of the problem (3.1)-(3.2), the relation (3.12) implies

v^{(1)}(t_k,x) = u(t_k,x) + O(h³ + ε²h²).

The estimate (3.9) is proved. Clearly, for a sufficiently small h we obtain

u_* < u_− − C h²(h + ε²) ≤ v^{(1)}(t_k,x) ≤ u_+ + C h²(h + ε²) < u^*,  x ∈ R¹.

Using the assumptions (i)-(ii), we get

|v^{(2)}(t_k,x) − v^{(1)}(t_k,x)| = |U_{t_k,x}(v^{(1)}(t_k,x)) − U_{t_k,x}(v^{(0)}(t_k,x))| ≤ K h |v^{(1)}(t_k,x) − v^{(0)}(t_k,x)|,

whence it follows that

|v^{(2)}(t_k,x) − v^{(0)}(t_k,x)| ≤ (1 + Kh) |v^{(1)}(t_k,x) − v^{(0)}(t_k,x)|.

It is not difficult to show that there exists a sufficiently small h such that the procedure can be continued infinitely (i.e., u_* < v^{(i)}(t_k,x) < u^*, i = 2, 3, ...) and

|v^{(n)}(t_k,x) − v^{(n−1)}(t_k,x)| ≤ (Kh)^{n−1} |v^{(1)}(t_k,x) − v^{(0)}(t_k,x)|,
|v^{(n)}(t_k,x) − v^{(0)}(t_k,x)| ≤ ((1 − (Kh)^n)/(1 − Kh)) |v^{(1)}(t_k,x) − v^{(0)}(t_k,x)|,  n = 1, 2, 3, ....

Further, we prove by the usual arguments that there is a unique root v(t_k,x) of the equation v(t_k,x) = U_{t_k,x}(v(t_k,x)) such that

(3.13)  |v(t_k,x) − v^{(0)}(t_k,x)| ≤ (1/(1 − Kh)) |v^{(1)}(t_k,x) − v^{(0)}(t_k,x)|.

Substituting (3.9) in (3.13), we come to the statement of the lemma. Lemma 3.1 is proved.

Let us prove the following theorem on global convergence.

Theorem 3.1. Under the assumptions (i)-(ii) the global error of the implicit layer method (3.5) is estimated by O(h² + ε²h):

|u(t_k,x) − ū(t_k,x)| ≤ K (h² + ε²h),

where the constant K does not depend on h, k, x, ε.

Proof. We follow the proof of the corresponding theorem in [18]. Denote the error of the method (3.5) at the k-th step (the (N−k)-th layer) as

(3.14)  R(t_k,x) := ū(t_k,x) − u(t_k,x).

Introduce the notation

X̄^{(±)}_{k+1} := x + (1/2) h b̄_k + (1/2) h b(t_{k+1}, x + h b̄_k, ū(t_{k+1}, x + h b̄_k)) + ε²h c̄_k ± ε h^{1/2} σ̄_k,

where (we recall) b̄_k, c̄_k, and σ̄_k are the coefficients b(t,x,u), c(t,x,u), and σ(t,x,u) calculated at t = t_k, x = x, u = ū(t_k,x) = u(t_k,x) + R(t_k,x). Using this notation, (3.5), and (3.14), we get

(3.15)  u(t_k,x) + R(t_k,x) = ū(t_k,x)
 = (1/2) ū(t_{k+1}, X̄^{(+)}_{k+1}) + (1/2) ū(t_{k+1}, X̄^{(−)}_{k+1}) + (1/2) h ḡ_k + (1/2) h g(t_{k+1}, x + h b̄_k, ū(t_{k+1}, x + h b̄_k))
 = (1/2) u(t_{k+1}, X̄^{(+)}_{k+1}) + (1/2) u(t_{k+1}, X̄^{(−)}_{k+1}) + (1/2) R(t_{k+1}, X̄^{(+)}_{k+1}) + (1/2) R(t_{k+1}, X̄^{(−)}_{k+1})
 + (1/2) h ḡ_k + (1/2) h g(t_{k+1}, x + h b̄_k, ū(t_{k+1}, x + h b̄_k)).

Clearly, R(t_N,x) = 0. Below we prove recurrently that R(t_k,x), k = N−1, ..., 0, is sufficiently small for small h. Using the assumption (3.7), we shall be able to justify the following supposition, which we need now: the value u(t_k,x) + R(t_k,x) remains in the interval (u_*, u^*) for h small enough.

We have

b̄_k = b(t_k, x, ū(t_k,x)) = b(t_k, x, u(t_k,x) + R(t_k,x)) = b(t_k, x, u(t_k,x)) + Δb = b_k + Δb,

where b_k := b(t_k, x, u(t_k,x)) and, due to the assumption (i), Δb satisfies the inequality

(3.16)  |Δb| ≤ K |R(t_k,x)|.

Analogously,

(3.17)  c̄_k = c_k + Δc, |Δc| ≤ K|R(t_k,x)|;  σ̄_k = σ_k + Δσ, |Δσ| ≤ K|R(t_k,x)|;  ḡ_k = g_k + Δg, |Δg| ≤ K|R(t_k,x)|.

Using (3.16)-(3.17) and (3.14), we get

(3.18)  X̄^{(±)}_{k+1} = x + (1/2) h (b_k + Δb)
 + (1/2) h b(t_{k+1}, x + h[b_k + Δb], u(t_{k+1}, x + h[b_k + Δb]) + R(t_{k+1}, x + h b̄_k)) + ε²h [c_k + Δc] ± ε h^{1/2} [σ_k + Δσ]
 = x + (1/2) h b_k + (1/2) h b(t_{k+1}, x + h b_k, u(t_{k+1}, x + h b_k)) + ε²h c_k ± ε h^{1/2} σ_k
 + h[Δb/2 + ε² Δc] ± ε h^{1/2} Δσ + h² λ_1 + h² λ_2

and

(3.19)  (1/2) h ḡ_k + (1/2) h g(t_{k+1}, x + h b̄_k, ū(t_{k+1}, x + h b̄_k))
 = (1/2) h g_k + (1/2) h Δg + (1/2) h g(t_{k+1}, x + h[b_k + Δb], u(t_{k+1}, x + h[b_k + Δb]) + R(t_{k+1}, x + h b̄_k))
 = (1/2) h g_k + (1/2) h g(t_{k+1}, x + h b_k, u(t_{k+1}, x + h b_k)) + (1/2) h Δg + h² λ_3 + h² λ_4,

where |λ_1|, |λ_3| ≤ K|R(t_k,x)| and |λ_2|, |λ_4| ≤ K|R(t_{k+1}, x + h b̄_k)|. Substituting (3.18) and (3.19) in (3.15) and expanding the functions (1/2) u(t_{k+1}, X̄^{(±)}_{k+1}), it is not difficult to obtain

u(t_k,x) + R(t_k,x) = v^{(1)}(t_k,x) + (1/2) R(t_{k+1}, X̄^{(+)}_{k+1}) + (1/2) R(t_{k+1}, X̄^{(−)}_{k+1}) + r(t_k,x) + r̃(t_k,x),

where v^{(1)}(t_k,x) is defined in (3.10) and

|r(t_k,x)| ≤ K h |R(t_{k+1}, x + h b̄_k)|,  |r̃(t_k,x)| ≤ K h |R(t_k,x)|.

Then by (3.9) we get

(3.20)  R(t_k,x) = (1/2) R(t_{k+1}, X̄^{(+)}_{k+1}) + (1/2) R(t_{k+1}, X̄^{(−)}_{k+1}) + r(t_k,x) + r̃(t_k,x) + O(h³ + ε²h²).

Introduce the notation R_k := max_{−∞<x<+∞} |R(t_k,x)|. We have from (3.20):

R_N = 0,  R_k ≤ R_{k+1} + K h R_k + K h R_{k+1} + C (h³ + ε²h²),  k = N−1, ..., 0.

Hence

R_k ≤ (C/(2K)) ( ((1+Kh)/(1−Kh))^N − 1 ) (h² + ε²h),

and therefore (remember N = (T − t_0)/h)

R_k ≤ (C/(2K)) (e^{4K(T−t_0)} − 1) (h² + ε²h).

Theorem 3.1 is proved.

Remark 3.1. For linear parabolic equations, i.e., when the coefficients of (3.1) do not depend on u, the method (3.5) becomes explicit, with the global error O(h² + ε²h), and can be applied to solving linear parabolic equations with small parameter. Note also that if the dimension d of the linear problem is high (d > 3 in practice) and it is enough to find the solution at a few points only, the Monte Carlo technique is preferable.

3.3. Explicit layer methods. For implementation of the implicit method (3.5), one can use the method of simple iteration. If we take ū(t_{k+1},x) as a null iteration, then in the case of b(t,x,u) ≠ b(t,x) or g(t,x,u) ≠ g(t,x) the first iteration provides the one-step error O(h²) only. One can show that applying the second iteration we get O(h³ + ε²h²) as the one-step error. However, it is possible to reach the same one-step accuracy by a certain

modification of the first iteration that reduces the number of recalculations. The explicit layer method obtained in this way has the form (we use the same notation ū(t_k,x) again)

(3.21)  ū(t_N,x) = φ(x),
        b̂_k = b(t_k, x, ū(t_{k+1},x)),  ĉ_k = c(t_k, x, ū(t_{k+1},x)),  σ̂_k = σ(t_k, x, ū(t_{k+1},x)),
        ū^{(1)}(t_k,x) = ū(t_{k+1}, x + h b̂_k) + h g(t_k, x, ū(t_{k+1},x)),
        ū(t_k,x) = (1/2) ū(t_{k+1}, x + h[b(t_k,x,ū^{(1)}(t_k,x)) + b(t_{k+1}, x + h b̂_k, ū(t_{k+1}, x + h b̂_k))]/2 + ε²h ĉ_k + ε h^{1/2} σ̂_k)
 + (1/2) ū(t_{k+1}, x + h[b(t_k,x,ū^{(1)}(t_k,x)) + b(t_{k+1}, x + h b̂_k, ū(t_{k+1}, x + h b̂_k))]/2 + ε²h ĉ_k − ε h^{1/2} σ̂_k)
 + (1/2) h g(t_k, x, ū^{(1)}(t_k,x)) + (1/2) h g(t_{k+1}, x + h b̂_k, ū(t_{k+1}, x + h b̂_k)),  k = N−1, ..., 0.

The following theorem can be proved by arguments like those in Lemma 3.1 and Theorem 3.1.

Theorem 3.2. Under the assumptions (i)-(ii) the global error of the explicit layer method (3.21) is estimated by O(h² + ε²h):

|u(t_k,x) − ū(t_k,x)| ≤ K (h² + ε²h),

where the constant K does not depend on h, k, x, ε.

Remark 3.2. Naturally, we can take other (more accurate than those used above) weak approximations of SDE with small noise [19] to construct corresponding high-order (with respect to h and ε) methods for the problem (3.1)-(3.2). In Section 6 we give high-order methods in some particular cases of the equation (3.1).

3.4. Singular case. The error estimates for the methods proposed in this section (Theorems 3.1 and 3.2) are obtained provided the bounds on derivatives of the solution to the considered problem are uniform with respect to x ∈ R^d, t ∈ [t_0,T], and 0 < ε ≤ ε_0 (see (3.8)). This assumption is ensured, e.g., in the following case. Consider the first-order partial differential equation obtained from (3.1) under ε = 0:

(3.22)  ∂u⁰/∂t + b(t,x,u⁰) ∂u⁰/∂x + g(t,x,u⁰) = 0,  t ∈ [t_0,T),  x ∈ R¹,

(3.23)  u⁰(T,x) = φ(x).

If the coefficients of the equation (3.22) and the initial condition (3.23) are such that the solution u⁰(t,x), x ∈ R¹, is sufficiently smooth for t_0 ≤ t ≤ T, then the derivatives of the solution u(t,x) to (3.1)-(3.2) can be uniformly bounded with respect to 0 < ε ≤ ε_0 for t ∈ [t_0,T] (see [7, 27]). Note that it is generally not enough to assume that the coefficients of (3.22) and the initial condition φ(x) are bounded and smooth functions to ensure the regular behavior of u⁰(t,x) at any t < T [7].

Many physical phenomena with singular behavior (e.g., formation and propagation of shock waves) are described by equations with small parameter. The derivatives of their solutions go to infinity as ε → 0 and, rigorously speaking, the results of Theorems 3.1 and 3.2 become inapplicable. After the change of variables t = ε² t′, x = ε² x′, the problem (3.1)-(3.2) is rewritten for v(t′,x′) := u(ε² t′, ε² x′) in the form

(3.24)  ∂v/∂t′ + (1/2) σ²(ε²t′, ε²x′, v) ∂²v/∂x′² + (b(ε²t′,ε²x′,v) + ε² c(ε²t′,ε²x′,v)) ∂v/∂x′ + ε² g(ε²t′,ε²x′,v) = 0,
        t′ ∈ [t_0/ε², T/ε²),  x′ ∈ R¹,  0 < ε ≤ ε_0,

(3.25)  v(T/ε², x′) = φ(ε² x′).

If assumptions like (ii) hold for the solution v(t′,x′) of (3.24)-(3.25) (note that the problem (3.24)-(3.25) is considered on long time intervals), then the derivatives of the solution u(t,x) of (3.1)-(3.2) are estimated as

(3.26)  |∂^{i+j} u/∂t^i∂x^j| ≤ K/ε^{2(i+j)},  t ∈ [t_0,T],  x ∈ R¹,  0 < ε ≤ ε_0.

These bounds are the natural ones for the problem (3.1)-(3.2) in the singular case. If one followed the arguments of Lemma 3.1 and Theorem 3.1 in the singular case (i.e., taking the assumptions (i), (3.7), (3.26) instead of (i)-(ii)), an estimate of the form (C/K)(h/ε⁴)(e^{K(T−t_0)/ε²} − 1) would be obtained for the proposed methods. Due to the big factor 1/ε² in the exponent, this estimate is meaningless for practical purposes. Our numerical tests (see Section 8.1) demonstrate that the proposed methods possess essentially better quality than can be predicted by this estimate. Apparently, the methods work fairly well in the singular case because the derivatives are large only in a small domain known as the internal layer (see, e.g., [8]), which is typical of the majority of interesting applications. Further theoretical investigation, aimed at a realistic error estimate for the proposed methods, should rest on a stability analysis and on more extensive properties of the considered solution. Recently, a similar problem for finite element methods has been considered in a few papers (see [3] and references therein).

However, in some particular, but important, singular cases of the problem (3.1)-(3.2) we get reasonable estimates (without 1/ε² in the exponent) for the errors of the proposed methods by the arguments of Lemma 3.1 and Theorem 3.1.

Theorem 3.3. Assume the coefficients b and σ in (3.1) to be independent of u. Let the conditions (i) and (3.7) be fulfilled and the derivatives ∂^{i+j}u/∂t^i∂x^j, i = 0, j = 1,...,4; i = 1, j = 0,1,2; i = 2, j = 0,1; i = 3, j = 0, satisfy (3.26). Then, for a sufficiently small h/ε², the global error of the explicit layer method (3.21) is estimated as

|u(t_k,x) − ū(t_k,x)| ≤ (C/K)(h/ε⁴)(e^{K(T−t_0)} − 1),

where the constants C and K do not depend on h, k, x, ε.

Proof. Denote the one-step approximation of u(t_k,x) corresponding to the method (3.21) as v(t_k,x). By the arguments of Lemma 3.1, we get

(3.27)  u(t_k,x) − v(t_k,x) = ε²h² C_1(t_k,x,ε,h) + h³ C_2(t_k,x,ε,h).

Here C_1(t_k,x,ε,h) has an explicit, rather lengthy form; as printed it reads

C 1 (t k x " h) = 1 @c (; @u @u @t @u @x ; @c @t ; @ @ u @t @x ; b@ @ u @x @x ; @u @x ; c @ u @x@t ; bc@ u @x ; b @c @x @ 3 u @x @t + b @ 3 u @x ; @g 3 @u (c@u @x + @u @x ; b @c @u (@u @x ) @ u @x )) k + @c @u (t k x )( @u @u @t @x ) k +~c k b k ( @ u @x ) k +~c k ( @ u @x@t ) k + " 3 ~c k k (@ u @x ) 3 k + " ~c k (@ u @x ) k + k + k 4 ( k ; h 1= " 4 ( k + h 1= " + "3 k 1 ((" k ; 4 h1= (b k + " ~c k )) @ 4 u b k ; "h 1= ~c k ) @3 u @x @t (~ ~ ; ) ; k h 1= 4 " b k +"h 1= ~c k ) @3 u @x @t (~ ~ + )+ k h 1= 4 " @x 4 ( ;)+( " k @ 3 u @x@t ( ; ) @ 3 u @x@t ( + ) 4 + h1= (b k + " ~c k )) @ 4 u @x ( +)) 4

where the index k means calculation at (t_k,x), c̃_k := c(t_k, x, ū(t_{k+1},x)), θ is a point between ū(t_k,x) and ū(t_{k+1},x), t_k < t̃ < t_{k+1}, and x̃ denotes points between x and x + (1/2)hb_k + (1/2)hb(t_{k+1}, x + hb_k) + ε²h c̃_k ± ε h^{1/2} σ_k. It is not difficult to write down the corresponding expression for C_2(t_k,x,ε,h) as well. Due to the assumptions (i), (3.7), (3.26), we obtain

(3.28)  |C_1(t_k,x,ε,h)| ≤ C/ε⁶,  |C_2(t_k,x,ε,h)| ≤ C/ε⁶.

For a sufficiently small h/ε², the one-step error of the method (3.21) is estimated as

|u(t_k,x) − v(t_k,x)| ≤ C h²/ε⁴.

By the arguments of Theorem 3.1, it is not difficult to get the following analog of (3.20):

(3.29)  R(t_k,x) = (1/2) R(t_{k+1}, X̄^{(+)}_{k+1}) + (1/2) R(t_{k+1}, X̄^{(−)}_{k+1}) + r(t_k,x) + ε²h² C_1(t_k,x,ε,h) + h³ C_2(t_k,x,ε,h),
        X̄^{(±)}_{k+1} = x + (1/2) h b(t_k,x) + (1/2) h b(t_{k+1}, x + h b(t_k,x)) + ε²h c(t_k, x, ū(t_{k+1},x)) ± ε h^{1/2} σ(t_k,x),

where |r(t_k,x)| ≤ K h R_{k+1}, R_{k+1} := max_{−∞<x<+∞} |R(t_{k+1},x)|. (Unlike (3.20), the formula (3.29) does not contain r̃(t_k,x) because the method (3.21) is explicit.) From (3.29) and (3.28) we obtain the statement of the theorem by the usual arguments. Theorem 3.3 is proved.

In many applications (e.g., for shock waves) the derivatives are significant only in a small interval (internal layer) (x_1(t), x_2(t)) with width |x_2(t) − x_1(t)| ~ ε²:

(3.30)  |∂^{i+j} u/∂t^i∂x^j| ≤ K/ε^{2(i+j)},  t ∈ [t_0,T],  x ∈ (x_1(t), x_2(t)),  0 < ε ≤ ε_0,
        |∂^{i+j} u/∂t^i∂x^j| ≤ K,  t ∈ [t_0,T],  x ∉ (x_1(t), x_2(t)),  0 < ε ≤ ε_0,

and

(3.31)  ∫_{x∉(x_1(t),x_2(t))} |∂^{i+j} u/∂t^i∂x^j| dx ≤ K,  i + j ≠ 0,  t ∈ [t_0,T],  0 < ε ≤ ε_0.

Theorem 3.4. Assume the coefficients b and σ in (3.1) to be independent of u. Let the conditions (i) and (3.7) be fulfilled and the derivatives ∂^{i+j}u/∂t^i∂x^j, i = 0, j = 1,...,4; i = 1, j = 0,1,2; i = 2, j = 0,1; i = 3, j = 0, satisfy (3.30) and (3.31). Then, for a sufficiently small h/ε², the global error of the explicit layer method (3.21) is estimated in the l_1-norm as

(3.32)  ∫_{R¹} |u(t_k,x) − ū(t_k,x)| dx ≤ (C/K)(h/ε²)(e^{K(T−t_0)} − 1),

In a lot of applications (e.g., in shock waves) the derivatives are signicant only in a small interval (internal layer) (x (t) x (t)) with the width jx (t) ; x (t)j " : (3.30) and (3.31) j @i+j u @t i @x j j K " (i+j) t [t 0 T] x (x (t) x (t)) 0 <" " j @i+j u @t i @x jk t [t 0 T] x = (x j (t) x (t)) 0 <" " Z x=(x (t) x (t)) j @ i+j u @t i @x j jdx K i + j 6= 0 t [t 0 T] 0 <" " : Theorem 3.4. Assume the coecients b and in (3:1) be independent of u: Let the conditions (i) and (3:7) fulll and the derivatives j @i+j u j i =0 j =1 3 @t i 4 i =1 j = @xj 0 1 i = j = 0 1 i = 3 j = 0 satisfy (3:30) and (3:31). Then under a suciently small h=" the global error of the explicit layer method (3:1) is estimated in l 1 -norm as Z ju(t k x) ; u(t k x)jdx K h (3.3) R 1 C where the constants C and K do not depend on h k x ": Proof. Introduce the notation We have from (3.9): F k := F k F k+1 + KhF k+1 + Z Z R 1 jr(t k x)jdx: " (ek(t ;t 0) ; 1) R 1 j" h C 1 (t k x " h) +h 3 C (t k x " h)jdx: Due to the assumptions (i), (3.7), (3.30) (cf. (3.8)) jc 1 (t k x " h)j C " 6 jc (t k x " h)j C " 6 for x (x (t k ) x (t k )) and due to (i), (3.7), (3.31) Z x=(x (tk ) x (t k )) j" h C 1 (t k x " h) +h 3 C (t k x " h)jdx C (" h + h 3 ): Therefore, under a suciently small h " F k F k+1 + KhF k+1 + C h " whence (3.3) follows. Theorem 3.4 is proved. The analogous theorems for the more simple method (.8) give the same estimates of its error. However, in our experiments the layer method (3.1) gives better results than (.8). To show the advantages of the method (3.1) in the singular case theoretically, further investigation is required. Seemingly, a more accurate analysis of the error of the method (3.1) should rest on more extensive properties of the solution u(t x): See also Section 7 and Remarks 6.1 and 8.1, where in the singular situation we give reasonable estimates of the errors for some other particular cases of the problem (3.1)- (3.). 15

4. Numerical algorithms based on interpolation

To calculate u(t_k,x) ≈ ū(t_k,x) at a certain point x by one of the explicit layer methods written above, one can use a recursive procedure. But it is evident that if the number of steps N = (T − t_0)/h is relatively large, the recursive procedure is practically unrealizable due to the huge volume of needed calculations. In [18] another way is proposed, which is based on a discretization in the variable x and on an interpolation of ū(t_k,x). Introduce an equidistant space discretization {x_j = x_0 + j h_x, j = 0, ±1, ±2, ...}, x_0 ∈ R¹, where h_x is a sufficiently small positive number. Since it does not lead to any misunderstanding, we keep the old notation ū, ū^{(1)}, etc. for the new values here.

Theorem 4.1. Under the assumptions (i)-(ii) the numerical algorithm based on the explicit method (3.21) and on linear interpolation,

(4.1)  ū(t_N,x) = φ(x),
       b̂_{k,j} = b(t_k, x_j, ū(t_{k+1},x_j)),  ĉ_{k,j} = c(t_k, x_j, ū(t_{k+1},x_j)),  σ̂_{k,j} = σ(t_k, x_j, ū(t_{k+1},x_j)),
       ū^{(1)}(t_k,x_j) = ū(t_{k+1}, x_j + h b̂_{k,j}) + h g(t_k, x_j, ū(t_{k+1},x_j)),
       ū(t_k,x_j) = (1/2) ū(t_{k+1}, x_j + h[b(t_k,x_j,ū^{(1)}(t_k,x_j)) + b(t_{k+1}, x_j + h b̂_{k,j}, ū(t_{k+1}, x_j + h b̂_{k,j}))]/2 + ε²h ĉ_{k,j} + ε h^{1/2} σ̂_{k,j})
 + (1/2) ū(t_{k+1}, x_j + h[b(t_k,x_j,ū^{(1)}(t_k,x_j)) + b(t_{k+1}, x_j + h b̂_{k,j}, ū(t_{k+1}, x_j + h b̂_{k,j}))]/2 + ε²h ĉ_{k,j} − ε h^{1/2} σ̂_{k,j})
 + (1/2) h g(t_k, x_j, ū^{(1)}(t_k,x_j)) + (1/2) h g(t_{k+1}, x_j + h b̂_{k,j}, ū(t_{k+1}, x_j + h b̂_{k,j})),
       ū(t_k,x) = ((x_{j+1} − x)/h_x) ū(t_k,x_j) + ((x − x_j)/h_x) ū(t_k,x_{j+1}),  x_j < x < x_{j+1},
       j = 0, ±1, ±2, ...,  k = N−1, ..., 0,

has global error estimated by O(h² + ε²h) if the value of h_x is selected as h_x = κ min(h^{3/2}, εh), where κ is a positive constant.

Proof. Here we follow the proof of the corresponding theorem in [18]. Introduce the notation

X̄^{(±)}_{k+1,j} = x_j + (1/2) h b(t_k, x_j, ū^{(1)}(t_k,x_j)) + (1/2) h b(t_{k+1}, x_j + h b̂_{k,j}, ū(t_{k+1}, x_j + h b̂_{k,j})) + ε²h ĉ_{k,j} ± ε h^{1/2} σ̂_{k,j}.

Just as in Theorem 3.1 for the implicit method (3.5), it is possible to obtain an expression like (3.20) for the algorithm (4.1) at the nodes x_j:

R(t_k,x_j) = (1/2) R(t_{k+1}, X̄^{(+)}_{k+1,j}) + (1/2) R(t_{k+1}, X̄^{(−)}_{k+1,j}) + r(t_k,x_j) + O(h³ + ε²h²),

where |r(t_k,x_j)| ≤ K h R_{k+1}, R_{k+1} := max_{−∞<x<+∞} |R(t_{k+1},x)| (unlike (3.20), this formula does not contain r̃(t_k,x_j) because the algorithm (4.1) is explicit and, for instance, Δb satisfies the inequality |Δb| < K|R(t_{k+1},x_j)|). Hence

(4.2)  |R(t_k,x_j)| ≤ R_{k+1} + K h R_{k+1} + C (h³ + ε²h²).

We have

(4.3)  u(t_k,x) = ((x_{j+1} − x)/h_x) u(t_k,x_j) + ((x − x_j)/h_x) u(t_k,x_{j+1}) + O(h_x²),  x_j < x < x_{j+1},

where the interpolation error O(h_x²) satisfies the inequality |O(h_x²)| ≤ C h_x² with C independent of h, k, h_x, j, x, ε. From the last relation of (4.1) and from (4.3), we get

R(t_k,x) = ((x_{j+1} − x)/h_x) R(t_k,x_j) + ((x − x_j)/h_x) R(t_k,x_{j+1}) + O(h_x²),  x_j < x < x_{j+1},

whence, due to (4.2),

|R(t_k,x)| ≤ R_{k+1} + K h R_{k+1} + C (h³ + ε²h² + h_x²)

with a new constant C. As h_x = κ min(h^{3/2}, εh), we obtain from here the statement of the theorem by the usual arguments. Theorem 4.1 is proved.

Remark 4.1. The way of proving Theorem 4.1 imposes a restriction on the type of interpolation procedure which we can use for constructing the numerical algorithm: the sum of the absolute values of the coefficients multiplying ū(t_k, ·) in the interpolation procedure must be not greater than 1. We can make use of B-splines of order O(h_x²), for which this restriction holds. Cubic interpolation of order O(h_x⁴) does not satisfy the restriction. However, our numerical tests give good results for the algorithm based on cubic interpolation as well. See also Section 8.1 and some details and theoretical explanations in [18].

Remark 4.2. One can use a nonequidistant space discretization. For instance, one can take a small h_x in the intervals of x where the derivatives of the solution are large, and a relatively large h_x outside these intervals.
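A compact Python sketch of the algorithm (4.1) may clarify how the explicit layer method (3.21) and the linear interpolation are combined. The finite grid [x0, x1], the treatment of points that leave the grid (np.interp clamps them to the end values), and the constant kappa are assumptions of this illustration rather than part of the algorithm.

```python
import numpy as np

def layer_algorithm_4_1(phi, b, c, sigma, g, eps, t0, T, N, x0, x1, kappa=1.0):
    """A sketch of the numerical algorithm (4.1): explicit layer method (3.21)
    on an equidistant x-grid with linear interpolation between the nodes.
    The theory selects the space step as h_x = kappa*min(h**1.5, eps*h)."""
    h = (T - t0) / N
    t = t0 + h * np.arange(N + 1)
    hx = kappa * min(h**1.5, eps * h)
    x = np.arange(x0, x1 + hx, hx)
    u = phi(x)                                           # layer t_N = T

    def interp(pts, vals):
        return np.interp(pts, x, vals)                   # linear interpolation in x

    for k in range(N - 1, -1, -1):
        bh = b(t[k], x, u)                               # \hat b_{k,j}, \hat c_{k,j}, \hat sigma_{k,j}
        ch = c(t[k], x, u)
        sh = sigma(t[k], x, u)
        u_shift = interp(x + h * bh, u)                  # \bar u(t_{k+1}, x_j + h*\hat b_{k,j})
        u1 = u_shift + h * g(t[k], x, u)                 # \bar u^{(1)}(t_k, x_j)
        drift = 0.5 * h * (b(t[k], x, u1) + b(t[k + 1], x + h * bh, u_shift)) + eps**2 * h * ch
        up = interp(x + drift + eps * np.sqrt(h) * sh, u)
        dn = interp(x + drift - eps * np.sqrt(h) * sh, u)
        gsum = 0.5 * h * (g(t[k], x, u1) + g(t[k + 1], x + h * bh, u_shift))
        u = 0.5 * (up + dn) + gsum                       # new layer \bar u(t_k, x_j)
    return x, u
```

For a real computation the grid should be chosen wide enough that the boundary clamping does not affect the region of interest.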

5. Extensions to the multi-dimensional case and to reaction-diffusion systems with small parameter

It is easy to generalize the algorithms proposed here to the multi-dimensional case (d > 1). For instance, consider the algorithm analogous to (4.1) in the case d = 2. Introduce the equidistant space discretization

x¹_j = x¹_0 + j h_{x¹},  x²_l = x²_0 + l h_{x²},  j, l = 0, ±1, ±2, ...,  (x¹_j, x²_l)ᵀ ∈ R²,  h_{x^i} = κ_i min(h^{3/2}, εh),  i = 1, 2,

where κ_i are positive constants. The algorithm with the global error O(h² + ε²h) has the form

(5.1)  ū(t_N, x¹, x²) = φ(x¹, x²),
       b̂_{k,j,l} = b(t_k, x¹_j, x²_l, ū(t_{k+1}, x¹_j, x²_l)),  ĉ_{k,j,l} = c(t_k, x¹_j, x²_l, ū(t_{k+1}, x¹_j, x²_l)),
       σ̂_{k,j,l} = σ(t_k, x¹_j, x²_l, ū(t_{k+1}, x¹_j, x²_l)),
       ū^{(1)}(t_k, x¹_j, x²_l) = ū(t_{k+1}, x¹_j + h b̂¹_{k,j,l}, x²_l + h b̂²_{k,j,l}) + h g(t_k, x¹_j, x²_l, ū(t_{k+1}, x¹_j, x²_l)),
       (X̃¹_{k+1,j,l}, X̃²_{k+1,j,l})ᵀ = (x¹_j, x²_l)ᵀ + (1/2) h b(t_k, x¹_j, x²_l, ū^{(1)}(t_k, x¹_j, x²_l))
 + (1/2) h b(t_{k+1}, x¹_j + h b̂¹_{k,j,l}, x²_l + h b̂²_{k,j,l}, ū(t_{k+1}, x¹_j + h b̂¹_{k,j,l}, x²_l + h b̂²_{k,j,l})) + ε²h ĉ_{k,j,l},
       ū(t_k, x¹_j, x²_l) =
  (1/4) ū(t_{k+1}, X̃¹_{k+1,j,l} + ε h^{1/2} σ̂¹¹_{k,j,l} + ε h^{1/2} σ̂¹²_{k,j,l}, X̃²_{k+1,j,l} + ε h^{1/2} σ̂²¹_{k,j,l} + ε h^{1/2} σ̂²²_{k,j,l})
 + (1/4) ū(t_{k+1}, X̃¹_{k+1,j,l} − ε h^{1/2} σ̂¹¹_{k,j,l} + ε h^{1/2} σ̂¹²_{k,j,l}, X̃²_{k+1,j,l} − ε h^{1/2} σ̂²¹_{k,j,l} + ε h^{1/2} σ̂²²_{k,j,l})
 + (1/4) ū(t_{k+1}, X̃¹_{k+1,j,l} + ε h^{1/2} σ̂¹¹_{k,j,l} − ε h^{1/2} σ̂¹²_{k,j,l}, X̃²_{k+1,j,l} + ε h^{1/2} σ̂²¹_{k,j,l} − ε h^{1/2} σ̂²²_{k,j,l})
 + (1/4) ū(t_{k+1}, X̃¹_{k+1,j,l} − ε h^{1/2} σ̂¹¹_{k,j,l} − ε h^{1/2} σ̂¹²_{k,j,l}, X̃²_{k+1,j,l} − ε h^{1/2} σ̂²¹_{k,j,l} − ε h^{1/2} σ̂²²_{k,j,l})
 + (1/2) h g(t_k, x¹_j, x²_l, ū^{(1)}(t_k, x¹_j, x²_l)) + (1/2) h g(t_{k+1}, x¹_j + h b̂¹_{k,j,l}, x²_l + h b̂²_{k,j,l}, ū(t_{k+1}, x¹_j + h b̂¹_{k,j,l}, x²_l + h b̂²_{k,j,l})),
       ū(t_k, x¹, x²) = ((x¹_{j+1}−x¹)/h_{x¹})((x²_{l+1}−x²)/h_{x²}) ū(t_k, x¹_j, x²_l) + ((x¹_{j+1}−x¹)/h_{x¹})((x²−x²_l)/h_{x²}) ū(t_k, x¹_j, x²_{l+1})
 + ((x¹−x¹_j)/h_{x¹})((x²_{l+1}−x²)/h_{x²}) ū(t_k, x¹_{j+1}, x²_l) + ((x¹−x¹_j)/h_{x¹})((x²−x²_l)/h_{x²}) ū(t_k, x¹_{j+1}, x²_{l+1}),
       x¹_j ≤ x¹ ≤ x¹_{j+1},  x²_l ≤ x² ≤ x²_{l+1},  j, l = 0, ±1, ±2, ...,  k = N−1, ..., 0.

The proposed methods are applicable to the Cauchy problem for systems of reaction-diffusion equations with small parameter as well. For instance, in the case of the following system (we take d = 1 for simplicity in writing)

(5.2)  ∂u_q/∂t + (ε²/2) σ_q²(t,x,u) ∂²u_q/∂x² + (b_q(t,x,u) + ε² c_q(t,x,u)) ∂u_q/∂x + g_q(t,x,u) = 0,
       t ∈ [t_0,T),  x ∈ R¹,  q = 1,...,n,  u := (u_1,...,u_n),  u_q(T,x) = φ_q(x),

the method analogous to (3.21) has the form

(5.3)  ū_q(t_N,x) = φ_q(x),

       b̂_{q,k} = b_q(t_k, x, ū(t_{k+1},x)),  ĉ_{q,k} = c_q(t_k, x, ū(t_{k+1},x)),  σ̂_{q,k} = σ_q(t_k, x, ū(t_{k+1},x)),
       ū_i^{(1)}(t_k,x) = ū_i(t_{k+1}, x + h b̂_{i,k}) + h g_i(t_k, x, ū(t_{k+1},x)),  i = 1,...,n,
       ū_q(t_k,x) = (1/2) ū_q(t_{k+1}, x + h[b_q(t_k,x,ū^{(1)}(t_k,x)) + b_q(t_{k+1}, x + h b̂_{q,k}, ū(t_{k+1}, x + h b̂_{q,k}))]/2 + ε²h ĉ_{q,k} + ε h^{1/2} σ̂_{q,k})
 + (1/2) ū_q(t_{k+1}, x + h[b_q(t_k,x,ū^{(1)}(t_k,x)) + b_q(t_{k+1}, x + h b̂_{q,k}, ū(t_{k+1}, x + h b̂_{q,k}))]/2 + ε²h ĉ_{q,k} − ε h^{1/2} σ̂_{q,k})
 + (1/2) h g_q(t_k, x, ū^{(1)}(t_k,x)) + (1/2) h g_q(t_{k+1}, x + h b̂_{q,k}, ū(t_{k+1}, x + h b̂_{q,k})),  q = 1,...,n,  k = N−1, ..., 0.

It is easy to write down the corresponding algorithm analogous to (4.1) on the basis of this method. See [18] for such methods and algorithms for other types of reaction-diffusion systems.

6. High-order methods for a semilinear equation with small constant diffusion and zero advection

Here we restrict ourselves to the case d = 1, again for simplicity in writing. Consider the Cauchy problem

(6.1)  ∂u/∂t + (ε²/2) ∂²u/∂x² + g(t,x,u) = 0,  t ∈ [t_0,T),  x ∈ R¹,

(6.2)  u(T,x) = φ(x).

We assume that the coefficient g(t,x,u) is a uniformly bounded and sufficiently smooth function and that conditions like (ii) from Section 3 are fulfilled for the solution u(t,x) of (6.1)-(6.2). Note that to construct high-order methods we need uniform boundedness of derivatives of u(t,x) of higher orders than in the assumption (3.8). To realize the methods of this section, we can avoid any interpolation. This is possible because, with σ ≡ 1, b ≡ 0, c ≡ 0, we are able to choose a special space discretization. The methods of this section are tested by simulation of the generalized KPP-equation with a small parameter (see Section 8.2).

The probabilistic representation of the solution to (6.1)-(6.2) has the form (see (2.1)-(2.3))

(6.3)  u(t,x) = E[φ(X_{t,x}(T)) + Z_{t,x,0}(T)],

where X_{t,x}(s), Z_{t,x,z}(s), s ≥ t, satisfy the system

(6.4)  dX = ε dw(s),  X(t) = x,
       dZ = g(s, X, u(s,X)) ds,  Z(t) = z.

Note that the system (6.4) is a system of differential equations with small additive noise.

6.1. Two-layer methods. For completeness of presentation, let us write down the layer methods (2.8) and (3.21) and the second-order layer method from [18] in the case of the problem (6.1)-(6.2). The explicit layer method (2.8) with error O(h) has the form

(6.5)  ū(t_N,x_j) = φ(x_j),
       ū(t_k,x_j) = (1/2) ū(t_{k+1}, x_j + ε h^{1/2}) + (1/2) ū(t_{k+1}, x_j − ε h^{1/2}) + h g(t_{k+1}, x_j, ū(t_{k+1},x_j)),
       x_j = x_0 + j ε h^{1/2},  j = 0, ±1, ±2, ...,  k = N−1, ..., 0.

Note that it coincides with the well-known finite-difference scheme under the special relation between the time and space steps (h_x = ε h_t^{1/2}) in that scheme.

In the case of the problem (6.1)-(6.2) the explicit layer method (3.21) with error O(h² + ε²h) takes the form

(6.6)  ū(t_N,x_j) = φ(x_j),
       ū^{(1)}(t_k,x_j) = ū(t_{k+1},x_j) + h g(t_k, x_j, ū(t_{k+1},x_j)),
       ū(t_k,x_j) = (1/2) ū(t_{k+1}, x_j + ε h^{1/2}) + (1/2) ū(t_{k+1}, x_j − ε h^{1/2}) + (1/2) h [g(t_k, x_j, ū^{(1)}(t_k,x_j)) + g(t_{k+1}, x_j, ū(t_{k+1},x_j))],
       x_j = x_0 + j ε h^{1/2},  j = 0, ±1, ±2, ...,  k = N−1, ..., 0.

Using the second-order Runge-Kutta scheme (2.12), the implicit second-order layer method for the semilinear parabolic equation with constant diffusion is constructed in [18]. In the case of the problem (6.1)-(6.2) the implicit layer method with error O(h²) has the form

(6.7)  ū(t_N,x_j) = φ(x_j),
       ū(t_k,x_j) = (1/6) ū(t_{k+1}, x_j + √3 ε h^{1/2}) + (2/3) ū(t_{k+1}, x_j) + (1/6) ū(t_{k+1}, x_j − √3 ε h^{1/2})
 + (h/2) g(t_k, x_j, ū(t_k,x_j)) + (h/3) g(t_{k+1}, x_j, ū(t_{k+1},x_j))
 + (h/12) g(t_{k+1}, x_j + √3 ε h^{1/2}, ū(t_{k+1}, x_j + √3 ε h^{1/2})) + (h/12) g(t_{k+1}, x_j − √3 ε h^{1/2}, ū(t_{k+1}, x_j − √3 ε h^{1/2})),
       x_j = x_0 + j √3 ε h^{1/2},  j = 0, ±1, ±2, ...,  k = N−1, N−2, ..., 0.

To solve the algebraic equations arising at each step of the method (6.7), one can use the Newton method or the method of simple iteration.
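Because the special grid x_j = x_0 + j ε h^{1/2} makes all shifted points fall exactly on neighbouring grid nodes, the method (6.6) is particularly simple to implement. The sketch below is a hedged illustration: the finite number of nodes J on each side of x_0 and the crude reuse of the end values at the two boundary nodes are assumptions of this sketch, not part of the method.

```python
import numpy as np

def layer_method_6_6(phi, g, eps, t0, T, N, x0, J):
    """A sketch of the explicit layer method (6.6) for the problem (6.1)-(6.2)
    (constant small diffusion, zero advection) on the grid x_j = x0 + j*eps*sqrt(h)."""
    h = (T - t0) / N
    t = t0 + h * np.arange(N + 1)
    x = x0 + eps * np.sqrt(h) * np.arange(-J, J + 1)
    u = phi(x)                                   # layer t_N = T
    for k in range(N - 1, -1, -1):
        u1 = u + h * g(t[k], x, u)               # predictor \bar u^{(1)}(t_k, x_j)
        # neighbours x_j +/- eps*sqrt(h) are exactly the adjacent grid nodes;
        # at the two ends the boundary value is simply reused (sketch only)
        up = np.concatenate((u[1:], u[-1:]))     # \bar u(t_{k+1}, x_j + eps*sqrt(h))
        dn = np.concatenate((u[:1], u[:-1]))     # \bar u(t_{k+1}, x_j - eps*sqrt(h))
        u = 0.5 * (up + dn) + 0.5 * h * (g(t[k], x, u1) + g(t[k + 1], x, u))
    return x, u
```

The implicit method (6.7) can be implemented in the same setting by adding a simple-iteration (or Newton) loop for the term (h/2) g(t_k, x_j, ū(t_k,x_j)) at each layer.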

Remark 6.1. In the singular case the natural bounds for derivatives of the solution to (6.1)-(6.2) have the form

(6.8)  |∂^{i+j} u/∂t^i∂x^j| ≤ K/ε^j,  t ∈ [t_0,T],  x ∈ R¹,  0 < ε ≤ ε_0.

These bounds are obtained after the change of variables t = t′, x = ε x′ (cf. Section 3.4). By the same arguments as in Theorem 3.3 one can prove under (6.8) that the errors of both methods (6.5) and (6.6) are estimated as

|u(t_k,x) − ū(t_k,x)| ≤ K h,

where the constant K does not depend on x, k, h, ε. Nevertheless, the method (6.6) gives better results than (6.5) in our experiments. One can explain this by the fact that the constant K of (6.6) is essentially smaller than the K of (6.5). Under (6.8) the error of the method (6.7) remains O(h²). Note that in Section 8.2 we present results of testing these methods (instead of (6.5) we use a modification of it) on an equation whose coefficient g depends on ε and whose solution has derivative bounds other than (6.8) (see Remark 8.1 and other details in Section 8.2).

6.2. Three-layer methods. Here we obtain two three-layer methods. Their one-step errors can be estimated by the same arguments as in Lemma 3.1. We do not prove their convergence, which would require a stability analysis of multi-layer methods. We test these methods in our experiments and they give fairly good results.

To calculate a new layer by a three-layer method, the two previous layers are used. So, to start the simulation we should know ū(t_N,x) and ū(t_{N−1},x). To compute ū(t_{N−1},x) one can use, e.g., the two-layer method (6.7) with a sufficiently small step. Below we consider this layer to be known and denote ψ(x) := ū(t_{N−1},x).

Apply the special Runge-Kutta scheme (2.13) to approximate (6.4):

(6.9)  X_{t_k,x}(t_{k+1}) ≈ X̄_{t_k,x}(t_{k+1}) = x + ε h^{1/2} ξ_k,
       Z_{t_k,x,z}(t_{k+1}) ≈ Z̄_{t_k,x,z}(t_{k+1}) = z + (h/6) [ g(t_k, x, u(t_k,x)) + 2 g(t_{k+1/2}, x, u(t_{k+1/2},x))
 + 2 g(t_{k+1/2}, x + ε h^{1/2} ξ_k, u(t_{k+1/2}, x + ε h^{1/2} ξ_k)) + g(t_{k+1}, x + ε h^{1/2} ξ_k, u(t_{k+1}, x + ε h^{1/2} ξ_k)) ],

where ξ_k are i.i.d. random variables with the law P(ξ = 0) = 2/3, P(ξ = ±√3) = 1/6.

The implicit method with one-step error O(h⁵ + ε²h³) has the form (to get the method we use the scheme (6.9) with the time step 2h)

(6.10)  ū(t_N,x_j) = φ(x_j),  ū(t_{N−1},x_j) = ψ(x_j),
        ū(t_k,x_j) = (1/6) ū(t_{k+2}, x_j + √6 ε h^{1/2}) + (2/3) ū(t_{k+2}, x_j) + (1/6) ū(t_{k+2}, x_j − √6 ε h^{1/2})
 + (h/3) g(t_k, x_j, ū(t_k,x_j)) + (10h/9) g(t_{k+1}, x_j, ū(t_{k+1},x_j))

 + (h/9) g(t_{k+1}, x_j + √6 ε h^{1/2}, ū(t_{k+1}, x_j + √6 ε h^{1/2})) + (h/9) g(t_{k+1}, x_j − √6 ε h^{1/2}, ū(t_{k+1}, x_j − √6 ε h^{1/2}))
 + (h/18) g(t_{k+2}, x_j + √6 ε h^{1/2}, ū(t_{k+2}, x_j + √6 ε h^{1/2})) + (2h/9) g(t_{k+2}, x_j, ū(t_{k+2},x_j))
 + (h/18) g(t_{k+2}, x_j − √6 ε h^{1/2}, ū(t_{k+2}, x_j − √6 ε h^{1/2})),
        x_j = x_0 + j √6 ε h^{1/2},  j = 0, ±1, ±2, ...,  k = N−2, N−3, ..., 0.

Let us look at the stability properties of this method in the simple case when u and g in (6.1) do not depend on x, i.e., apply the method (6.10) to the ordinary differential equation

(6.11)  du/dt + g(t,u) = 0,  t_0 ≤ t ≤ T,  u(T) = φ.

Recall (see, e.g., [10]) that a linear n-step method for (6.11),

α_n ū_k + α_{n−1} ū_{k+1} + ⋯ + α_0 ū_{k+n} = h (β_n g_k + ⋯ + β_0 g_{k+n}),  g_i = g(t_i, ū_i),  α_n ≠ 0,  |α_0| + |β_0| > 0,

is zero-stable (D-stable) if the generating polynomial

(6.12)  α_n ρ^n + α_{n−1} ρ^{n−1} + ⋯ + α_0 = 0

satisfies the root condition: the roots of (6.12) lie on or within the unit circle, and the roots on the unit circle are simple. In the case of (6.11) the method (6.10) coincides with the Milne two-step method, which is of order O(h⁴) and is zero-stable. Its generating polynomial has two roots: 1 and −1. As is known [10], the root −1 can be dangerous for some differential equations. The method (6.10) shows unstable behavior in our numerical tests on the generalized KPP-equation with a small parameter (8.11)-(8.13) (Section 8.2). One can see that the method (6.10) does not preserve the property u ≤ 1 of the problem (8.11)-(8.13), which leads to unstable behavior of the approximate solutions. We modify the method (6.10) in the experiments: if ū(t_k,x_j) > 1, we put ū(t_k,x_j) = 1. Because locally, at one step, the arising difference 0 < ū(t_k,x_j) − 1 is not greater than the one-step error of the method, this modification does not change its one-step accuracy order. The modified method turned out to be fairly good when applied to the generalized KPP-equation. However, the modification is based on knowledge of the properties of the solution, and it may be difficult to find such a modification for another problem.

Fortunately, we are able to approximate the system (6.4) by another weak scheme and obtain a method for (6.1)-(6.2) with better stability properties in the sense considered above (see the method (6.13) below), but with a one-step error of lower order. Reaching both the same one-step accuracy O(h⁵ + ε²h³) and the better stability properties is possible with a four-layer method.

Approximate (6.4) by the special scheme with one-step order O(h⁴ + ε²h³):

X_{t_k,x}(t_{k+1}) ≈ X̄_{t_k,x}(t_{k+1}) = x + ε h^{1/2} ξ_k,

Z_{t_k,x,z}(t_{k+1}) ≈ Z̄_{t_k,x,z}(t_{k+1}) = z + (h/12) [ 5 g(t_k, x, u(t_k,x)) + g(t_{k+1}, x, u(t_{k+1},x))
 + 7 g(t_{k+1}, x + ε h^{1/2} ξ_k, u(t_{k+1}, x + ε h^{1/2} ξ_k)) − g(t_{k+2}, x + ε h^{1/2} ξ_k, u(t_{k+2}, x + ε h^{1/2} ξ_k)) ],

where ξ_k are i.i.d. random variables with the law P(ξ = 0) = 2/3, P(ξ = ±√3) = 1/6. The three-layer implicit method with one-step error O(h⁴ + ε²h³) has the form

(6.13)  ū(t_N,x_j) = φ(x_j),  ū(t_{N−1},x_j) = ψ(x_j),
        ū(t_k,x_j) = (1/6) ū(t_{k+1}, x_j + √3 ε h^{1/2}) + (2/3) ū(t_{k+1}, x_j) + (1/6) ū(t_{k+1}, x_j − √3 ε h^{1/2})
 + (5h/12) g(t_k, x_j, ū(t_k,x_j)) + (17h/36) g(t_{k+1}, x_j, ū(t_{k+1},x_j))
 + (7h/72) g(t_{k+1}, x_j + √3 ε h^{1/2}, ū(t_{k+1}, x_j + √3 ε h^{1/2})) + (7h/72) g(t_{k+1}, x_j − √3 ε h^{1/2}, ū(t_{k+1}, x_j − √3 ε h^{1/2}))
 − (h/72) g(t_{k+2}, x_j + √3 ε h^{1/2}, ū(t_{k+2}, x_j + √3 ε h^{1/2})) − (h/18) g(t_{k+2}, x_j, ū(t_{k+2},x_j))
 − (h/72) g(t_{k+2}, x_j − √3 ε h^{1/2}, ū(t_{k+2}, x_j − √3 ε h^{1/2})),
        x_j = x_0 + j √3 ε h^{1/2},  j = 0, ±1, ±2, ...,  k = N−2, N−3, ..., 0.

For (6.11) this method coincides with one of the implicit two-step Adams methods of order O(h³), and the roots of its generating polynomial are 1 and 0. One can therefore expect that in the case of the problem (6.1)-(6.2) the method (6.13) also possesses better stability properties than (6.10). In our numerical tests on the generalized KPP-equation with a small parameter (Section 8.2), the method (6.13) shows stable behavior.

To solve the algebraic equations obtained at each step of the methods (6.10) and (6.13), one can use the Newton method or the method of simple iteration.

Remark 6.2. The methods of this section can be extended to problems of higher dimension or to systems of reaction-diffusion equations. In addition, using some other weak approximations of SDE with small additive noise, new layer methods can be constructed. For instance, three- and four-layer methods with one-step error O(h⁵ + ε²h²) can be obtained. It is also not difficult to get an implicit four-layer method with one-step error O(h⁵ + ε²h³) for (6.1)-(6.2) possessing good stability properties in the sense above, or an explicit four-layer method with one-step error O(h⁴ + ε²h³), and so on.

7. Method based on exact simulation of the Brownian motion

In this section we construct a layer method for a model nonlinear problem using the probabilistic approach as above, but to approximate the SDE arising in the probabilistic representation of the solution we employ a numerical method with exact simulation of the Brownian motion instead of the weak schemes used in Sections 2-6. In [17] a few methods with exact simulation of some components of SDE are proposed for linear problems. It was shown that these methods are preferable to weak schemes in some situations. One can expect that in the nonlinear case layer methods based on exact simulation of the Brownian motion possess some preferable properties as well.

Let us consider the model problem

(7.1)  ∂u/∂t + (ε²/2) ∂²u/∂x² + g_0(t,x) u + ε² g_1(t,x,u) = 0,  t ∈ [t_0,T),  x ∈ R¹,

(7.2)  u(T,x) = φ(x,ε),  0 < ε ≤ ε_0.

As the initial condition φ(x,ε) one can take, for instance, a smooth function close to the step function (e.g., 0.5 − 0.5 tanh(x/ε²)). Such a problem may be the result of regularization (after introducing artificial viscosity) of the first-order hyperbolic problem with weak nonlinearity

∂u⁰/∂t + g_0(t,x) u⁰ + ε² g_1(t,x,u⁰) = 0,  t ∈ [t_0,T),  x ∈ R¹,
u⁰(T,x) = 1 for x < 0,  1/2 for x = 0,  0 for x > 0.

The introduction of artificial viscosity is a common way of numerically simulating first-order hyperbolic problems with discontinuous solutions [4, 20, 24, 29].

The solution u(t,x) of the problem (7.1)-(7.2) has the probabilistic representation

(7.3)  u(t,x) = E[φ(X_{t,x}(T), ε) Y_{t,x,1}(T) + Z_{t,x,1,0}(T)],

where X_{t,x}(s), Y_{t,x,y}(s), Z_{t,x,y,z}(s), s ≥ t, satisfy the system of SDE

(7.4)  dX = ε dw(s),  X(t) = x,
       dY = g_0(s,X) Y ds,  Y(t) = y,
       dZ = ε² g_1(s, X, u(s,X)) Y ds,  Z(t) = z.

We have

(7.5)  u(t_k,x) = E[u(t_{k+1}, X_{t_k,x}(t_{k+1})) Y_{t_k,x,1}(t_{k+1}) + Z_{t_k,x,1,0}(t_{k+1})].

Let us assume that:

(a) The coefficient g_0(t,x) belongs to C⁴([t_0,T]×R¹) and both it and its derivatives are uniformly bounded with respect to t ∈ [t_0,T], x ∈ R¹:

|∂^{i+j} g_0/∂t^i∂x^j| ≤ K,  0 ≤ i + j ≤ 4,  t ∈ [t_0,T],  x ∈ R¹.

The coefficient g_1(t,x,u) and its first derivatives (with respect to t, x, u) and second derivatives (with respect to x and u) are continuous and uniformly bounded:

|∂^{i+j+l} g_1/∂t^i∂x^j∂u^l| ≤ K,  0 ≤ i + j + l ≤ 2,  t ∈ [t_0,T],  x ∈ R¹,  u_* < u(t,x) < u^*