Multilevel Monte Carlo for Lévy Driven SDEs


Multilevel Monte Carlo for Lévy Driven SDEs
Felix Heidenreich, TU Kaiserslautern, AG Computational Stochastics
August 2011
Joint work with Steffen Dereich (Philipps-Universität Marburg). Supported within DFG-SPP 1324.

Outline
1 Introduction
2 The Multilevel Monte Carlo Algorithm
3 Approximation of the Driving Lévy Process
4 Result and Examples

Introduction: The Problem

SDE
  dY_t = a(Y_t) dX_t,  t ∈ [0,1],  Y_0 = y_0,
with
- a square integrable Lévy process X = (X_t)_{t ∈ [0,1]},
- a deterministic initial value y_0,
- a Lipschitz continuous function a.

Computational problem: compute S(f) = E[f(Y)] for functionals f : D[0,1] → R.

Example: payoff f of a path-dependent option.

Introduction: Standard Monte Carlo

For an approximation Ŷ of Y,
  S(f) = E[f(Y)] ≈ E[f(Ŷ)] ≈ (1/n) Σ_{i=1}^n f(Ŷ_i) =: Ŝ^MC(f),
where (Ŷ_i)_{i=1,...,n} are i.i.d. copies of Ŷ.

Error decomposition into weak error and Monte Carlo error:
  E[|S(f) − Ŝ^MC(f)|²] = (E[f(Y) − f(Ŷ)])² + (1/n) Var(f(Ŷ)),
where the first term is the squared bias(f(Ŷ)) and the second term is Var(Ŝ^MC(f)).

Classical approach: use an approximation Ŷ with small bias, e.g. higher-order weak Itô–Taylor schemes or extrapolation techniques.
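The standard estimator can be sketched in a few lines. A minimal sketch, assuming a hypothetical callable `sample_f_Yhat` and a made-up normal toy model for f(Ŷ) (neither is from the talk):

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_estimate(sample_f_Yhat, n):
    """Standard Monte Carlo: average of n i.i.d. copies of f(Yhat)."""
    draws = np.array([sample_f_Yhat() for _ in range(n)])
    return draws.mean()

# Toy illustration of the error decomposition: pretend E[f(Y)] = 1 and the
# biased approximation satisfies f(Yhat) ~ N(1.01, 1).  Then
#   E[|S(f) - S_MC(f)|^2] = bias^2 + Var(f(Yhat))/n = 0.01^2 + 1/n.
est = mc_estimate(lambda: rng.normal(1.01, 1.0), n=10_000)
```

With n = 10 000 draws the Monte Carlo error dominates the 0.01 bias, which is exactly the imbalance the multilevel construction addresses.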

Introduction: Path-independent functions f in the diffusion case

Classical setting: Brownian motion as driving process, i.e. X = W. Compute expectations w.r.t. the marginal at T > 0, i.e. S(f) = E[f(Y_T)] for f : R → R.

Relation of error and computational cost: a weak approximation of order β yields
  E[|S(f) − Ŝ^MC(f)|²] ≤ ε²  with computational cost O(ε^{−(2+1/β)}).
Caveat: this holds only for sufficiently smooth f ∈ C^{2(β+1)}.

Note: variance reduction via the multilevel algorithm Ŝ^ML yields
  E[|S(f) − Ŝ^ML(f)|²] ≤ ε²  with computational cost O(ε^{−2} log(ε)²),
and this already holds for f Lipschitz.
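The cost bound O(ε^{−(2+1/β)}) for the classical approach comes from balancing the two error terms; a short sketch of the count:

```latex
% Bias-variance balancing: with N time steps, weak order \beta gives
% bias = O(N^{-\beta}), so bias \le \varepsilon forces N \sim \varepsilon^{-1/\beta};
% the Monte Carlo term Var/n \le \varepsilon^2 forces n \sim \varepsilon^{-2}.
\mathrm{cost} \;\sim\; n \cdot N
  \;\sim\; \varepsilon^{-2} \cdot \varepsilon^{-1/\beta}
  \;=\; \varepsilon^{-(2 + 1/\beta)} .
```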

The Multilevel Monte Carlo Algorithm: References

See Heinrich [1998], Giles [2006], Creutzig, Dereich, Müller-Gronbach, Ritter [2009], Avikainen [2009], Kloeden, Neuenkirch, Pavani [2009], Hickernell, Müller-Gronbach, Niu, Ritter [2010], Marxen [2010], ...
Talks by Burgos, Giles, Roj, Szpruch, Xia, ...

The Multilevel Monte Carlo Algorithm

Basic idea: Y^(1), Y^(2), ... are strong approximations of the solution Y with increasing accuracy and increasing numerical cost.

Telescoping sum:
  E[f(Y^(m))] = E[f(Y^(1))] + Σ_{k=2}^m E[f(Y^(k)) − f(Y^(k−1))],
with D^(1) := f(Y^(1)) and D^(k) := f(Y^(k)) − f(Y^(k−1)) for k ≥ 2.

Approximate E[D^(1)], ..., E[D^(m)] by independent Monte Carlo methods:
  Ŝ(f) = Σ_{k=1}^m (1/n_k) Σ_{i=1}^{n_k} D_i^(k),
where (D_i^(k))_{i=1,...,n_k} are i.i.d. copies of D^(k), k = 1, ..., m.

Note: (Y^(k), Y^(k−1)) are coupled via X so that the variance of D^(k) decreases.
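The telescoping estimator can be sketched as follows. The coupling model below (one shared draw y plus level-dependent noise, f = identity) is a toy assumption standing in for the Euler coupling of the talk:

```python
import numpy as np

rng = np.random.default_rng(1)

def mlmc_estimate(sample_level, n):
    """Multilevel estimator: sum over k of the sample mean of n_k copies of D^(k),
    where D^(1) = f(Y^(1)) and D^(k) = f(Y^(k)) - f(Y^(k-1)) for k >= 2."""
    return sum(np.mean([sample_level(k) for _ in range(n_k)])
               for k, n_k in enumerate(n, start=1))

# Toy coupling (assumption, not the talk's scheme): Y^(k) = Y + 2^{-k} * noise,
# with the draw of Y shared between the two levels of each difference, E[Y] = 3.
def sample_level(k):
    y = rng.normal(3.0, 1.0)                        # shared coarse/fine randomness
    fine = y + 2.0 ** -k * rng.normal()
    if k == 1:
        return fine
    coarse = y + 2.0 ** -(k - 1) * rng.normal()     # coupled via the same y
    return fine - coarse

est = mlmc_estimate(sample_level, n=[4000, 1000, 250])
```

Because the levels share y, the differences D^(k) have small variance, so the higher (more expensive) levels need far fewer replications, as in the n list above.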

Approximation of the Driving Lévy Process

Lévy–Itô decomposition: X is the sum of three independent processes,
  X_t = σ W_t + b t + L_t,  t ≥ 0,
with a Brownian motion W, drift b = E[X_1] ∈ R, and an L²-martingale L of compensated jumps (jump-intensity measure ν).

Assumption: simulation of ν|_{(−h,h)^c} / ν((−h,h)^c) is feasible for h > 0. Thus: cutoff of the small jumps. Define
  L^(h)_t = Σ_{s ∈ [0,t]} ΔX_s 1{|ΔX_s| ≥ h} − t ∫_{(−h,h)^c} x ν(dx),
and approximate X by
  X^(h,ε) = (σ W_t + b t + L^(h)_t)_{t ∈ T^(h,ε)}
on a random time discretization T^(h,ε) with parameters h, ε > 0 such that
- all jump times of L^(h) are included in T^(h,ε),
- the step size is at most ε (for the approximation of W).
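A sketch of the small-jump cutoff and of the level coupling, assuming for illustration a symmetric truncated stable-type Lévy measure (so the compensator integral over (−h,h)^c vanishes by symmetry); the thresholds and constants are made up:

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative symmetric Lévy measure nu(dx) = c |x|^{-1-alpha} dx on (-1,1)\{0};
# by symmetry the compensator vanishes, so L^(h) is just the sum of kept jumps.
alpha, c = 0.5, 0.1

def nu_tail(h):
    """nu((-h, h)^c): expected number of jumps per unit time with |jump| >= h."""
    return 2.0 * c * (h ** -alpha - 1.0) / alpha

def sample_jumps(h, T=1.0):
    """Jump times and sizes of the compound Poisson process of jumps >= h."""
    n = rng.poisson(nu_tail(h) * T)
    times = np.sort(rng.uniform(0.0, T, size=n))
    u = rng.uniform(size=n)
    # Inverse CDF of |jump| under the normalized restriction nu|_{(-h,h)^c}/nu((-h,h)^c).
    mags = (h ** -alpha - u * (h ** -alpha - 1.0)) ** (-1.0 / alpha)
    signs = rng.choice([-1.0, 1.0], size=n)
    return times, signs * mags

# Coupling of two levels: simulate the finer threshold h' once; the coarser
# level keeps only the jumps of magnitude >= h, so both share the big jumps.
h, h_fine = 0.1, 0.01
times, sizes = sample_jumps(h_fine)
keep = np.abs(sizes) >= h
coarse_times, coarse_sizes = times[keep], sizes[keep]
```

Thinning the fine-level jump set, rather than simulating the coarse level independently, is what keeps the two paths close and the variance of the level differences small.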

Approximation of the Driving Lévy Process: The Coupled Approximation

[Figure: compound Poisson approximations (L^(h), L^(h')) — compensated jumps of magnitude greater than h, resp. h', against time.]
[Figure: approximation of σW on the grids T^(h,ε) and T^(h',ε').]
[Figure: piecewise constant approximation of (X^(h,ε), X^(h',ε')).]

Approximation of the Driving Lévy Process: Multilevel Euler Algorithm

Remember the telescoping sum
  E[f(Y^(m))] = E[f(Y^(1))] + Σ_{k=2}^m E[f(Y^(k)) − f(Y^(k−1))].

Choice: (Y^(k), Y^(k−1)) are Euler schemes with coupled input (X^(h_k,ε_k), X^(h_{k−1},ε_{k−1})).

Parameters of the algorithm Ŝ(f):
- m (number of levels)
- h_1 > h_2 > ... > h_m (approximation of L)
- ε_1 > ε_2 > ... > ε_m (approximation of W)
- n_1, ..., n_m (number of replications per level)

Result and Examples: Result

Error:  e²(Ŝ) = sup_{f ∈ Lip(1)} E[|S(f) − Ŝ(f)|²],
where Lip(1) := {f : D[0,1] → R | f 1-Lipschitz w.r.t. the supremum norm}.

Cost:  cost(Ŝ) = Σ_{k=1}^m n_k E[#breakpoints(Y^(k)) + 1].

Theorem (Dereich, H.)
Let
  β := inf{p > 0 : ∫_{(−1,1)} |x|^p ν(dx) < ∞} ∈ [0, 2]
denote the Blumenthal–Getoor index of the driving Lévy process. Then, for suitably chosen parameters,
  cost(Ŝ_n) ≲ n  and  e(Ŝ_n) ≲ n^{−(1/(β∨1) − 1/2)}.

Remark: a Gaussian approximation of the small jumps improves the result for β ≥ 1 (see Dereich, 2010). Order of convergence: (4−β)/(6β) for σ = 0 or β ∉ [1, 4/3], and β/(6β−4) for σ ≠ 0 and β ∈ [1, 4/3].

Result and Examples: Remarks

Related results: Jacod, Kurtz, Méléard and Protter [2007]; Marxen [2010].

Lower bound: combine the results of Creutzig, Dereich, Müller-Gronbach and Ritter [2009] on lower bounds in relation to quantization numbers and average Kolmogorov widths with those of Aurzada and Dereich [2009] on the coding complexity of Lévy processes. In the case X = Y, every randomized algorithm S̃_n with cost(S̃_n) ≤ n satisfies
  e(S̃_n) ≳ n^{−1/2},
up to some logarithmic terms.

Result and Examples: Example, α-stable processes

Truncated symmetric α-stable processes with α < 2 and truncation level u > 0: Lebesgue density of the Lévy measure
  g_ν(x) = c / |x|^{1+α} · 1_{(−u,u)}(x),  x ∈ R \ {0},
Blumenthal–Getoor index β = α. By the Theorem,
  e(Ŝ_n) ≲ n^{−1/2} for α < 1,  and  e(Ŝ_n) ≲ n^{−(1/α − 1/2)} for α > 1.

In the sequel: X with u = 1 and c = 0.1, SDE with a(y) = y and y_0 = 1, lookback option with fixed strike 1,
  f(Y) = (sup_{t ∈ [0,1]} Y_t − 1)^+.
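For this example the Euler step and the lookback payoff can be sketched as follows, assuming for brevity σ = 0 and b = 0 (so Y moves only at the jump times of the compound Poisson approximation — an assumption to keep the sketch short); the jump sizes fed in below are made up:

```python
def euler_lookback(jump_sizes, y0=1.0):
    """Euler scheme for dY = a(Y) dX with a(y) = y, driven by the jump part of
    the approximation X^(h,eps), plus the fixed-strike lookback payoff
    f(Y) = (sup_t Y_t - 1)^+  evaluated on the Euler grid."""
    y, sup_y = y0, y0
    for dx in jump_sizes:           # Euler step at each breakpoint: y <- y + a(y) dx
        y = y + y * dx
        sup_y = max(sup_y, y)
    return max(sup_y - 1.0, 0.0)

payoff = euler_lookback([0.2, -0.1, 0.3])   # made-up jump increments of X
```

Averaging such payoffs over coupled coarse/fine paths gives the level differences D^(k) used by the multilevel estimator.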

Result and Examples: Adaptive choice of m and n_1, ..., n_m

Choose ε_k = 2^{−k} and h_k such that ν((−h_k, h_k)^c) = 2^k. Given δ > 0, determine m and n_1, ..., n_m such that
  E[|S(f) − Ŝ(f)|²] = (E[f(Y) − f(Y^(m))])² + Var(Ŝ(f)) ≤ δ²,
where the first term is the squared bias(f(Y^(m))).

First step: estimate the expectations E[D^(1)], ..., E[D^(5)] and the variances Var(D^(1)), ..., Var(D^(5)).

[Figure: bias and variance estimates, log10(bias_k) and log10(var_k) against level k, for α = 0.5 and α = 0.8.]

Result and Examples: Adaptive choice of m and n_1, ..., n_m

Highest level of accuracy m: E[D^(1)], ..., E[D^(5)] together with extrapolation give control of bias(f(Y^(k))) for k ∈ N. Choose m such that
  bias(f(Y^(m)))² ≤ δ²/2.

Replication numbers n_1, ..., n_m: Var(D^(1)), ..., Var(D^(5)) together with extrapolation give control of Var(D^(k)) for k = 1, ..., m. Choose n_1, ..., n_m with minimal cost(Ŝ) such that
  Var(Ŝ) = Σ_{k=1}^m Var(D^(k)) / n_k ≤ δ²/2.
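The cost minimization under the variance constraint has the standard closed form n_k ∝ sqrt(Var(D^(k)) / C_k) from a Lagrange-multiplier argument. A sketch with made-up per-level variance and cost figures:

```python
import math

def replication_numbers(V, C, delta):
    """Minimize total cost sum_k n_k C_k subject to sum_k V_k / n_k <= delta^2 / 2.
    The Lagrange-multiplier solution gives n_k proportional to sqrt(V_k / C_k);
    rounding up preserves the variance constraint."""
    s = sum(math.sqrt(v * c) for v, c in zip(V, C))
    return [math.ceil(2.0 / delta ** 2 * math.sqrt(v / c) * s)
            for v, c in zip(V, C)]

# Made-up estimates (in practice from pilot runs on levels 1..5 plus extrapolation):
V = [1e-2, 5e-3, 2.5e-3]   # Var(D^(k)) estimates
C = [1.0, 2.0, 4.0]        # expected cost per replication on level k
n = replication_numbers(V, C, delta=0.01)
```

Since V_k decreases and C_k grows with the level, the resulting n_k decrease, matching the replication-number plots on the following slides.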

Result and Examples: Adaptive choice of m and n_1, ..., n_m

Example of n_1, ..., n_m with α = 0.5: precisions δ = (0.003, 0.002, 0.001, 0.0006, 0.0003), highest levels m = (3, 3, 4, 4, 5).
[Figure: replication numbers log10(n_k) against level k for α = 0.5.]

Example of n_1, ..., n_m with α = 0.8: precisions δ = (0.01, 0.004, 0.002, 0.001, 0.0007), highest levels m = (4, 5, 6, 7, 7).
[Figure: replication numbers log10(n_k) against level k for α = 0.8.]

Example of n_1, ..., n_m with α = 1.2: precisions δ = (0.02, 0.01, 0.007, 0.005, 0.0035), highest levels m = (7, 8, 9, 10, 11).
[Figure: replication numbers log10(n_k) against level k for α = 1.2.]

Result and Examples: Empirical relation of error and cost

[Figure: log10(error) against log10(cost) for MLMC and classical MC, α ∈ {0.5, 0.8, 1.2}.]

Empirical orders of convergence:
  α      0.5    0.8    1.2
  MLMC   0.47   0.46   0.38
  MC     0.45   0.34   0.23

Result and Examples: Summary and Conclusions

Summary:
- Multilevel algorithm for a wide class of functionals and processes.
- Up to logarithmic terms, asymptotically optimal for Blumenthal–Getoor index β ≤ 1.
- Heuristics to achieve a desired precision δ > 0.
- Numerical results match the analytical results.
- Improvement for β ≥ 1 via Gaussian correction, see Dereich [2010].

Work to do:
- Further examples of Lévy processes.
- Implementation of the ML-algorithm with Gaussian correction.
- The gap between the upper and lower bounds for β > 1.