Discretization of SDEs: Euler Methods and Beyond


Discretization of SDEs: Euler Methods and Beyond 09-26-2006 / PRisMa 2006 Workshop

Outline

1. Introduction: Motivation; Stochastic Differential Equations
2. The Time Discretization of SDEs; Monte-Carlo Simulation
3.
4.

Applications of SDEs

In mathematical finance, financial markets are often modelled by the solution $X_t$ of a stochastic differential equation (SDE). The price (at time 0) of a European option with maturity $T$ and payoff $f(X_T)$ is given by an expression like
$$C(X_0) = \mathbb{E}\big(e^{-rT} f(X_T)\big).$$
More generally, given a differential operator $L$, let $u$ denote the solution of the corresponding Cauchy problem
$$\frac{\partial u}{\partial t}(t,x) = Lu(t,x), \quad t \in \,]0,T],\ x \in \mathbb{R}^n, \qquad u(0,x) = f(x), \quad x \in \mathbb{R}^n.$$
A stochastic representation is given by the solution $X_t$ of an SDE such that
$$u(t,x) = \mathbb{E}\big(f(X_T) \mid X_t = x\big), \quad t \in \,]0,T],\ x \in \mathbb{R}^n.$$

Definition

Given a $d$-dimensional Brownian motion $B$ defined on the probability space $(\Omega, \mathcal{F}, P)$, $T > 0$, and functions $a: \mathbb{R}^n \to \mathbb{R}^n$ and $\sigma: \mathbb{R}^n \to \mathbb{R}^{n \times d}$ that are Lipschitz with at most linear growth.

Definition. By a stochastic differential equation we understand an equation
$$X_t^x = x + \int_0^t a(X_s^x)\,ds + \int_0^t \sigma(X_s^x)\,dB_s, \quad t \in [0,T],\ x \in \mathbb{R}^n. \tag{1}$$
Here $\sigma\,dB_t$ should be interpreted as a matrix-vector multiplication. We will often use the short-hand notation
$$dX_t^x = a(X_t^x)\,dt + \sigma(X_t^x)\,dB_t, \quad t \in \,]0,T], \qquad X_0^x = x. \tag{2}$$

Feynman-Kac Formula

Let $L$ denote the second-order differential operator on $\mathbb{R}^n$ given by
$$Lf(x) = \sum_{i=1}^n a^i(x)\,\frac{\partial f}{\partial x_i}(x) + \frac{1}{2}\sum_{i,j=1}^n b^{ij}(x)\,\frac{\partial^2 f}{\partial x_i \partial x_j}(x),$$
where $b = (b^{ij})$ is the $n \times n$ matrix defined by $b = \sigma\sigma^T$.

Theorem (Feynman-Kac Formula). Let $f \in C_c^2(\mathbb{R}^n)$ and $q \in C(\mathbb{R}^n)$ be bounded from below, and define
$$u(t,x) = \mathbb{E}\Big(\exp\Big(-\int_t^T q(X_s)\,ds\Big)\,f(X_T)\ \Big|\ X_t = x\Big), \quad t \in [0,T],\ x \in \mathbb{R}^n.$$
Then $u$ solves the parabolic Cauchy problem
$$\frac{\partial u}{\partial t}(t,x) + Lu(t,x) - q(x)u(t,x) = 0, \qquad u(T,x) = f(x).$$

Euler Scheme for SDEs

We present an approximation for the solution $X_T^x$ of the SDE (2).

Definition. Fix $x \in \mathbb{R}^n$ and let $\mathcal{D}^{(N)}$ be the equidistant mesh of size $N$, i.e. $t_i = \frac{i}{N}T$, $i = 0, \ldots, N$. Set $\Delta t_i = \frac{T}{N}$ and $\Delta B_i = B_{t_{i+1}} - B_{t_i}$, $i = 0, \ldots, N-1$. Define the uniform Euler-Maruyama approximation to $X_T^x$ by $X_0^{(N)} = x$ and
$$X_{i+1}^{(N)} = X_i^{(N)} + a\big(X_i^{(N)}\big)\,\Delta t_i + \sigma\big(X_i^{(N)}\big)\,\Delta B_i, \quad i = 0, \ldots, N-1. \tag{3}$$

Remark. Note that the discretization $X_i^{(N)}$ in this form is only discrete in the time variable but not as a random variable.
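
The following is a minimal NumPy sketch of the Euler-Maruyama step (3); the function names, the geometric Brownian motion example and all parameter values are illustrative assumptions, not part of the talk.

```python
import numpy as np

def euler_maruyama(a, sigma, x0, T, N, d, rng):
    """Uniform Euler-Maruyama scheme (3) for dX = a(X) dt + sigma(X) dB.

    a     : drift, maps R^n -> R^n
    sigma : diffusion, maps R^n -> R^{n x d}
    x0    : initial value in R^n
    """
    dt = T / N
    x = np.array(x0, dtype=float)
    for _ in range(N):
        dB = rng.normal(0.0, np.sqrt(dt), size=d)  # Brownian increment over one step
        x = x + a(x) * dt + sigma(x) @ dB          # one Euler-Maruyama step
    return x

# Illustration: geometric Brownian motion dX = mu X dt + s X dB (n = d = 1)
rng = np.random.default_rng(0)
mu, s = 0.05, 0.2
X_T = euler_maruyama(lambda x: mu * x,
                     lambda x: np.array([[s * x[0]]]),
                     x0=[1.0], T=1.0, N=100, d=1, rng=rng)
```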

Strong Convergence of the Euler Scheme

Definition. Given a sequence $X^{(N)}$ of time-discrete approximations of $X_T^x$ along time partitions $\mathcal{D}^{(N)}$. $X_N^{(N)}$ converges strongly to $X_T^x$ if it converges in $L^1(\Omega)$, i.e. if
$$\lim_{N \to \infty} \mathbb{E}\big(\big|X_T^x - X_N^{(N)}\big|\big) = 0.$$
It converges strongly with order $\gamma > 0$ if there is a constant $C > 0$ such that, with $|\mathcal{D}^{(N)}| = \max_i (t_{i+1} - t_i)$,
$$\mathbb{E}\big(\big|X_T^x - X_N^{(N)}\big|\big) \le C\,|\mathcal{D}^{(N)}|^{\gamma}.$$

Theorem. The uniform Euler-Maruyama scheme converges strongly with order $\gamma = \frac{1}{2}$ to the true solution $X_T^x$.
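
A small experiment one could run to see the strong rate empirically (my illustration, not from the slides): for geometric Brownian motion the exact solution is known, so $\mathbb{E}\big|X_T^x - X_N^{(N)}\big|$ can be estimated by coupling the Euler path and the exact solution to the same Brownian increments. The parameters below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
mu, s, x0, T, M = 0.05, 0.2, 1.0, 1.0, 20_000

def strong_error(N):
    """Monte-Carlo estimate of E|X_T - X_N^(N)| for GBM, coupled to the same Brownian path."""
    dt = T / N
    dB = rng.normal(0.0, np.sqrt(dt), size=(M, N))
    X = np.full(M, x0)
    for i in range(N):                     # Euler-Maruyama paths, advanced in parallel
        X = X + mu * X * dt + s * X * dB[:, i]
    X_exact = x0 * np.exp((mu - 0.5 * s**2) * T + s * dB.sum(axis=1))
    return np.abs(X_exact - X).mean()

errors = {N: strong_error(N) for N in (10, 20, 40, 80, 160)}
# Doubling N should shrink the error by roughly sqrt(2), consistent with order 1/2.
```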

Weak Convergence of the Euler Scheme

Let $C_p^k(\mathbb{R}^n)$ denote the space of all $C^k$-functions of polynomial growth (together with their derivatives).

Definition. $X_N^{(N)}$ converges weakly to $X_T^x$ if, for some $k \in \mathbb{N}$,
$$\lim_{N \to \infty} \mathbb{E}\big(f\big(X_N^{(N)}\big)\big) = \mathbb{E}\big(f(X_T^x)\big), \quad f \in C_p^k(\mathbb{R}^n).$$
$X_N^{(N)}$ converges to $X_T^x$ with weak order $\gamma > 0$ if there is $C > 0$ such that
$$\big|\mathbb{E}\big(f\big(X_N^{(N)}\big)\big) - \mathbb{E}\big(f(X_T^x)\big)\big| \le C\,|\mathcal{D}^{(N)}|^{\gamma}, \quad f \in C_p^{2(\gamma+1)}(\mathbb{R}^n),$$
and for all $N$ large enough.

Weak Convergence of the Euler Scheme 2

Remark.
1. The notion of weak convergence seems more appropriate for our problem than the stronger notion of strong convergence, since we actually want to approximate quantities of the form $\mathbb{E}(f(X_T^x))$.
2. In probability theory, the notion of weak convergence is usually defined with respect to the space $C_b(\mathbb{R}^n)$ of continuous, bounded functions instead of the space $C_p^k(\mathbb{R}^n)$.

Theorem. Let the data $a$ and $\sigma$ be $C^4$. Then the uniform Euler-Maruyama scheme converges weakly with order 1.

Monte-Carlo Simulation

Proposition. Given an $\mathbb{R}^n$-valued random variable $X$ and a measurable function $f: \mathbb{R}^n \to \mathbb{R}$ such that $f(X) \in L^2(\Omega)$ with variance $\sigma^2(f)$. Let $X_m$, $m \in \mathbb{N}$, denote a sequence of independent copies of $X$. Then
$$\lim_{M \to \infty} \frac{1}{M}\sum_{m=1}^M f(X_m) = \mathbb{E}(f(X)) \quad \text{a.s.}$$
and, in the sense of convergence in distribution,
$$\sqrt{M}\;\frac{\frac{1}{M}\sum_{m=1}^M f(X_m) - \mathbb{E}(f(X))}{\sigma(f)} \longrightarrow \mathcal{N}(0,1).$$

Error of Monte-Carlo Simulation

Remark.
1. Monte-Carlo simulation allows approximation of $\mathbb{E}(f(X))$ with error control, provided that it is feasible to generate random numbers from the distribution of $X$: heuristically, if $M$ is large enough,
$$\frac{1}{M}\sum_{m=1}^M f(X_m) - \mathbb{E}(f(X)) \approx \mathcal{N}\Big(0, \frac{\sigma^2(f)}{M}\Big).$$
2. It is a method of order $\frac{1}{2}$. Given $\epsilon > 0$ and $0 < \delta < 1$, to guarantee that $\big|\frac{1}{M}\sum_{m=1}^M f(X_m) - \mathbb{E}(f(X))\big| < \epsilon$ with probability $1 - \delta$, we have to choose
$$M \ge \Big(\frac{p\,\sigma(f)}{\epsilon}\Big)^2,$$
where $p$ is the $(1 - \frac{\delta}{2})$-quantile of a standard Gaussian random variable, i.e. $P(|Z| \ge p) = \delta$ for $Z \sim \mathcal{N}(0,1)$.
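
A tiny sketch of the sample-size rule above, using the standard-library normal quantile; the chosen numbers are only an illustration.

```python
from statistics import NormalDist

def mc_sample_size(sigma_f, eps, delta):
    """Smallest M satisfying the Gaussian heuristic M >= (p * sigma(f) / eps)^2."""
    p = NormalDist().inv_cdf(1.0 - delta / 2.0)  # (1 - delta/2)-quantile of N(0, 1)
    return int((p * sigma_f / eps) ** 2) + 1

# e.g. sigma(f) = 1, eps = 0.01, delta = 0.05 gives roughly 38,400 samples
M = mc_sample_size(sigma_f=1.0, eps=0.01, delta=0.05)
```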

Monte-Carlo Simulation for the Euler Scheme

We want to approximate the unknown value $\mathbb{E}(f(X_T^x))$ by the value $\mathbb{E}\big(f\big(X_N^{(N)}\big)\big)$, but in general it is impossible to calculate the latter expected value exactly. Contrary to $X_T^x$, it is possible to generate random numbers according to the distribution of $X_N^{(N)}$, see (3). This allows approximation of $\mathbb{E}\big(f\big(X_N^{(N)}\big)\big)$ using Monte-Carlo simulation: for fixed $N$, $M$, let $\big(X_N^{(N,m)}\big)_{m \in \mathbb{N}}$ denote a sequence of independent copies of $X_N^{(N)}$. Then we approximate
$$\mathbb{E}\big(f\big(X_N^{(N)}\big)\big) \approx \frac{1}{M}\sum_{m=1}^M f\big(X_N^{(N,m)}\big).$$
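
Putting the two pieces together, the sketch below estimates $C(X_0) = \mathbb{E}(e^{-rT} f(X_T))$ for a call payoff under geometric Brownian motion by simulating $M$ independent Euler paths; the model, payoff and parameters are assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
r, s, x0, K, T = 0.03, 0.2, 100.0, 100.0, 1.0   # GBM parameters and call strike
N, M = 100, 50_000                               # time steps, Monte-Carlo samples
dt = T / N

X = np.full(M, x0)
for _ in range(N):                               # M Euler-Maruyama paths in parallel
    dB = rng.normal(0.0, np.sqrt(dt), size=M)
    X = X + r * X * dt + s * X * dB

payoff = np.exp(-r * T) * np.maximum(X - K, 0.0)
price = payoff.mean()
stderr = payoff.std(ddof=1) / np.sqrt(M)         # statistical error, cf. the CLT above
```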

A New Error Regime

Monte-Carlo simulation introduces an additional, stochastic error to the Euler scheme. The total error naturally splits into two parts according to
$$\text{Error} = \Big|\mathbb{E}(f(X_T^x)) - \frac{1}{M}\sum_{m=1}^M f\big(X_N^{(N,m)}\big)\Big| \le \Big|\mathbb{E}(f(X_T^x)) - \mathbb{E}\big(f\big(X_N^{(N)}\big)\big)\Big| + \Big|\mathbb{E}\big(f\big(X_N^{(N)}\big)\big) - \frac{1}{M}\sum_{m=1}^M f\big(X_N^{(N,m)}\big)\Big|.$$
The first term is called the time-discretization error and the second term the statistical error.

A Crude Error Analysis

Remark. The slow convergence rates of the Euler-Maruyama scheme seem to indicate the need for higher order methods.

Fix an error tolerance $\epsilon > 0$ and $0 < \lambda < 1$. Reserve $\lambda\epsilon$ for the discretization error and $(1-\lambda)\epsilon$ for the statistical error. Using a discretization method of order $p$, we need
$$N(\lambda\epsilon) = C_1 (\lambda\epsilon)^{-1/p}, \qquad M((1-\lambda)\epsilon) = C_2 ((1-\lambda)\epsilon)^{-2}.$$
Consequently, the total work is proportional to
$$N(\lambda\epsilon)\,M((1-\lambda)\epsilon) = C_1 C_2\,\lambda^{-1/p}(1-\lambda)^{-2}\,\epsilon^{-(2 + \frac{1}{p})}.$$
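
As a quick numerical illustration of the work estimate (constants ignored, a back-of-the-envelope calculation rather than anything from the talk):

```python
# Work ~ eps^-(2 + 1/p): at eps = 1e-2, weak order p = 1 costs ~1e6 units,
# while a (hypothetical) order p = 2 method costs ~1e5, a tenfold saving.
for p in (1, 2):
    eps = 1e-2
    print(p, eps ** -(2 + 1 / p))
```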

The Stratonovich Stochastic Integral

Definition. For a continuous semimartingale $Y$, the Stratonovich integral of $Y$ with respect to Brownian motion is defined by
$$\int_0^t Y_s \circ dB_s^i = \int_0^t Y_s\,dB_s^i + \frac{1}{2}[Y, B^i]_t.$$

Remark. We can switch between SDEs in Stratonovich and Itô form by
$$\int_0^t h(X_s) \circ dB_s = \int_0^t h(X_s)\,dB_s + \frac{1}{2}\sum_{j=1}^n \sum_{k=1}^d \sigma^{jk}(X_s)\,\frac{\partial h^k}{\partial x_j}(X_s)\,ds.$$
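
As a standard one-dimensional illustration (added here, not taken from the slides): for $n = d = 1$, $\sigma(x) = s x$ and $h = \sigma$, the conversion formula gives
$$\int_0^t s X_s \circ dB_s = \int_0^t s X_s\,dB_s + \frac{1}{2}\int_0^t s^2 X_s\,ds,$$
so the Stratonovich equation $dX_t = s X_t \circ dB_t$ coincides with the Itô equation $dX_t = \frac{1}{2}s^2 X_t\,dt + s X_t\,dB_t$, whose solution is $X_t = X_0 e^{s B_t}$.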

Stochastic Taylor Expansion

Theorem. Let $f: \mathbb{R}^n \to \mathbb{R}$ be $C^\infty$-bounded. Define the differential operators
$$A_j = \sum_{k=1}^n \sigma^{kj}\,\frac{\partial}{\partial x_k}, \quad j = 1, \ldots, d, \qquad A_0 = \sum_{k=1}^n \Big(a^k - \frac{1}{2}\sum_{j=1}^d A_j \sigma^{kj}\Big)\frac{\partial}{\partial x_k}.$$
For $\alpha = (i_1, \ldots, i_k) \in \{0, \ldots, d\}^k$ set $f_\alpha = A_{i_1} \cdots A_{i_k} f$ and construct the iterated Stratonovich integrals
$$J_t^\alpha = \int_{0 < t_1 < \cdots < t_k < t} \circ dB_{t_1}^{i_1} \cdots \circ dB_{t_k}^{i_k},$$
with the convention that $dB_s^0 = ds$. For $m \in \mathbb{N}$ we have
$$f(X_t^x) = \sum_{\substack{k \in \mathbb{N},\ \alpha \in \{0,\ldots,d\}^k \\ k + \#\{i_j = 0\} \le m}} f_\alpha(x)\,J_t^\alpha + R_m(t,x,f), \tag{4}$$
where $\sup_{x \in \mathbb{R}^n} \mathbb{E}(R_m^2) = O\big(t^{\,m+1+\mathbf{1}_{]1,\infty[}(t)}\big)$.

Taylor Schemes

Remark.
1. The stochastic Taylor expansion is the analogue of the Taylor expansion for deterministic functions. Note that the iterated Stratonovich integrals $J^\alpha$ play the rôle of the polynomials.
2. The intriguing summation bound $k + \#\{i_j = 0\} \le m$ comes from the different scaling of Brownian motion (corresponding to non-zero indices) and of time (corresponding to the index 0), which is sometimes summarized as $dB_t \approx \sqrt{dt}$. Consequently, time indices have to be counted twice.

It seems plausible to generalize the Euler-Maruyama scheme by schemes where the increment (3) is obtained by truncating the stochastic Taylor expansion at some point. These schemes are called Taylor schemes.

Definition of the Milstein Scheme

Definition. Given a (uniform) partition as before. The Milstein scheme is defined by $X_0^{(N)} = x$ and, for $i = 0, \ldots, N-1$,
$$X_{i+1}^{(N)} = X_i^{(N)} + a\big(X_i^{(N)}\big)\,\Delta t_i + \sigma\big(X_i^{(N)}\big)\,\Delta B_i + \sum_{j,k=1}^d A_j \sigma^k\big(X_i^{(N)}\big)\,I_{t_i}^{(j,k)},$$
where
$$I_{t_i}^{(j,k)} = \int_{t_i}^{t_{i+1}} \big(B_t^j - B_{t_i}^j\big)\,dB_t^k.$$

Remark. The Milstein scheme is obtained by using the second order stochastic Taylor expansion (after rewriting it in Itô form).
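
For a scalar SDE ($n = d = 1$) the correction term reduces to $\sigma(x)\sigma'(x)\,I^{(1,1)}$ with $I^{(1,1)} = \frac{1}{2}\big((\Delta B)^2 - \Delta t\big)$, so no iterated-integral sampling issue arises. A minimal sketch under that assumption (the GBM example and parameters are again only illustrative):

```python
import numpy as np

def milstein_1d(a, sigma, dsigma, x0, T, N, rng):
    """One-dimensional Milstein scheme; dsigma is the derivative sigma'(x)."""
    dt = T / N
    x = float(x0)
    for _ in range(N):
        dB = rng.normal(0.0, np.sqrt(dt))
        x = (x + a(x) * dt + sigma(x) * dB
               + 0.5 * sigma(x) * dsigma(x) * (dB**2 - dt))  # Milstein correction
    return x

# Illustration: geometric Brownian motion, sigma(x) = s*x, sigma'(x) = s
rng = np.random.default_rng(4)
mu, s = 0.05, 0.2
X_T = milstein_1d(lambda x: mu * x, lambda x: s * x, lambda x: s,
                  x0=1.0, T=1.0, N=100, rng=rng)
```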

Properties of the Milstein Scheme

Theorem. The Milstein scheme converges strongly and weakly and is of order 1 in both senses.

Remark. The weak orders of the Milstein and the Euler-Maruyama schemes coincide. Since we are mostly interested in weak convergence, it might not seem sensible to use the Milstein scheme. Note, however, that one can construct Taylor schemes of any order, and the Milstein scheme already has most of the properties of higher order Taylor schemes. Therefore, we may use it as a representative example of general higher order schemes.

Sampling Iterated Itô Integrals

If we want to apply the Milstein scheme, we need to be able to sample random numbers according to the distribution of the $d(d+1)$-dimensional random variable
$$\big(\Delta B_i^1, \ldots, \Delta B_i^d,\ I_{t_i}^{(1,1)}, \ldots, I_{t_i}^{(j,k)}, \ldots, I_{t_i}^{(d,d)}\big). \tag{5}$$
The diagonal terms $I_{t_i}^{(j,j)}$, $1 \le j \le d$, can be expressed in terms of the Brownian increments in an algebraic way by
$$I_{t_i}^{(j,j)} = \frac{1}{2}\big((\Delta B_i^j)^2 - \Delta t_i\big).$$
The mixed terms $I_{t_i}^{(j,k)}$, $j \ne k$, depend on $B$ in a non-trivial way. This makes sampling from the afore-mentioned random vector a costly task.
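
A brute-force way to see the difference (an illustration added here, not part of the talk): on a single step of size $\Delta t$ one can approximate the iterated integrals by a fine sub-discretization. The diagonal term then matches its closed form, while the mixed term has no such expression in terms of the increments alone.

```python
import numpy as np

rng = np.random.default_rng(2)
dt, K = 0.01, 1_000                                  # one step of size dt, K substeps
dW = rng.normal(0.0, np.sqrt(dt / K), size=(2, K))   # substep increments of B^1, B^2
B = np.cumsum(dW, axis=1)
B1_left = np.concatenate(([0.0], B[0, :-1]))         # left-endpoint values of B^1

I11_closed = 0.5 * (B[0, -1] ** 2 - dt)              # algebraic formula for I^(1,1)
I11_approx = np.sum(B1_left * dW[0])                 # Ito sum, agrees up to O(1/K)
I12_approx = np.sum(B1_left * dW[1])                 # mixed term: only via sub-sampling
```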

Embedding Higher Order Taylor Schemes in a Lie Group

For the scheme constructed by using the stochastic Taylor expansion of order $m$, we need to be able to sample from the distribution of the random vector of all $J_t^\alpha$, with $\alpha$ running through all multi-indices in $\{0, \ldots, d\}^k$ such that $|\alpha| + \#\{i_j = 0\} \le m$, cf. (4). The analysis of this random vector is greatly simplified if we interpret the vector of iterated Stratonovich integrals of order up to $m$ as a stochastic process taking values in some Lie group $G_{d,1}^m$, which is nilpotent of order $m$.

Definition of the Lie Group

Let $A_{d,1}^m$ be the (associative, non-commutative) algebra of all polynomials in $e_0, \ldots, e_d$ of degree less than or equal to $m$, where all occurrences of $e_0$ are counted twice. We truncate all terms of higher degree when multiplying two such polynomials. We encode the vector of iterated Stratonovich integrals as an element of the algebra by setting
$$Y_t^1 = 1 + \sum_\alpha J_t^\alpha\,e_\alpha,$$
where the sum runs over all $\alpha$ with $|\alpha| + \#\{i_j = 0\} \le m$ and $e_{(i_1,\ldots,i_k)} = e_{i_1} \cdots e_{i_k}$.

Definition. We define $G_{d,1}^m = \{Y_t^1(\omega) \mid \omega \in \Omega\}$ for any $t > 0$.

Remark. $G_{d,1}^m$ is a Lie group having a global chart. Note that in the definition of $G_{d,1}^m$ it would suffice to use only the paths $\omega$ of bounded variation.

The Vector of Iterated Integrals as a Process in $G_{d,1}^m$

For $y \in G_{d,1}^m$ consider the stochastic differential equation in $G_{d,1}^m$
$$dY_t^y = Y_t^y e_0\,dt + \sum_{i=1}^d Y_t^y e_i \circ dB_t^i. \tag{6}$$

Proposition (Teichmann 2006). The vector $Y_t^1$ of iterated Stratonovich integrals coincides with the solution to the SDE (6) with initial value 1.

Remark. The infinitesimal generator of (6) is the sub-Laplacian on $G_{d,1}^m$. Therefore, we call the fundamental solution of the SDE, i.e. the density of the iterated integrals, the heat kernel on $G_{d,1}^m$.

Approximation of the Heat Kernel

Note that the coefficient of $Y_t^1$ in $e_0$ is equal to $t$. Consequently, the heat kernel, as the density of $Y_t^1$, exists only in a generalized-function sense. If we factor $A_{d,1}^m$ with respect to the subspace generated by $e_0$, then one can show that the heat kernel exists as a function in the Schwartz space. This suggests approximating the heat kernel by Hermite functions. Interpret $Y_t^1$ as a process taking values in some $\mathbb{R}^K$. Let $\Sigma_t$ be a suitable covariance matrix thereon and let $r_t(z)$ and $h_\beta(t,z)$ respectively denote the density of $\mathcal{N}(0, \Sigma_t)$ and the corresponding Hermite polynomials, $z \in \mathbb{R}^K$, $\beta \in \mathbb{N}^K$. There is an efficient procedure for calculating the coefficients $a_t^\beta$ of the expansion
$$\frac{p_t(\cdot)}{r_t(\cdot)} = \sum_{\beta \in \mathbb{N}^K} a_t^\beta\,h_\beta(t, \cdot). \tag{7}$$

Milstein Scheme with Approximate Densities

In the step from $t_i$ to $t_{i+1}$, the increment of the Milstein scheme depends on (an independent copy of) $Y_{t_i}^1$; consequently $X_N^{(N)}$ depends on $Z_0, \ldots, Z_{N-1}$, where $Z_i \sim Y_{t_i}^1$, independent of each other. We write $X_N^{(N)}(Z_0, \ldots, Z_{N-1})$. Therefore, we immediately get
$$\mathbb{E}\big(f\big(X_N^{(N)}\big)\big) = \int_{\mathbb{R}^{KN}} f\big(X_N^{(N)}(z_0, \ldots, z_{N-1})\big)\,p_{t_0}(z_0) \cdots p_{t_{N-1}}(z_{N-1})\,dz_0 \cdots dz_{N-1}.$$
Fix an approximation $p_t$ of the heat kernel by truncating the Hermite expansion (7) at some point, and a probability measure $Q_t$ on $\mathbb{R}^K$ with density $q_t > 0$, e.g. $Q_t = \mathcal{N}(0, \Sigma_t)$.

Milstein Scheme with Approximate Densities 2

Proposition. For $M \in \mathbb{N}$ let $Z_i^{(m)}$, $m = 1, \ldots, M$, $i = 0, \ldots, N-1$, be a sequence of independent, $Q_{t_i}$-distributed random variables. Then
$$\mathbb{E}\big(f\big(X_N^{(N)}\big)\big) = \lim_{M \to \infty} \frac{1}{M} \sum_{m=1}^M f\big(X_N^{(N)}\big(Z_0^{(m)}, \ldots, Z_{N-1}^{(m)}\big)\big)\;\frac{p_{t_0}\big(Z_0^{(m)}\big) \cdots p_{t_{N-1}}\big(Z_{N-1}^{(m)}\big)}{q_{t_0}\big(Z_0^{(m)}\big) \cdots q_{t_{N-1}}\big(Z_{N-1}^{(m)}\big)}.$$

Remark. The choice of $\Sigma_t$ and $Q_t$ is crucial. Note that $p_t$ is not a true density and takes negative values.

First Results for the Weak Error

[Log-log plot of the weak error against the number of time steps (1 to 500) for a call option on the sphere (non-hypoelliptic, non-commutative case), comparing the Milstein and Euler methods, with confidence intervals and reference lines; the error ranges roughly from 0.002 to 1.]

First Results for the Strong Error

[Log-log plot of the strong error on the sphere (non-commutative matrices) for 1 to 100 time steps, comparing the Milstein and Euler schemes against reference lines, including an order-0.7 line; the error ranges roughly from 0.1 to 1.]

Conclusion

The Euler-Maruyama scheme for the discretization of SDEs is simple to understand and implement, but suffers from a low order of convergence, especially in the strong sense. Typically, the cost of the Monte-Carlo simulation overshadows this defect, and it seems more important to accelerate the simulation than to improve the order. In situations with a dominating time-discretization error, stochastic Taylor methods provide schemes of any order, but the simulation is costly. One possible alternative might be the use of approximate heat kernels. One should also consider Richardson extrapolation (a generalization of Romberg extrapolation) as a means to obtain higher order methods.

References

Peter E. Kloeden and Eckhard Platen, Numerical Solution of Stochastic Differential Equations, Applications of Mathematics 23, Springer, 1992.

Denis Talay and Luciano Tubaro, Expansion of the global error for numerical schemes solving stochastic differential equations, Stochastic Anal. Appl. 8, no. 4, 483-509, 1990.