Multilevel Monte Carlo for Lévy-Driven SDEs
Felix Heidenreich, TU Kaiserslautern, AG Computational Stochastics
Joint work with Steffen Dereich, Philipps-Universität Marburg
Supported within DFG-SPP 1324
August 2011
Outline
1 Introduction
2 The Multilevel Monte Carlo Algorithm
3 Approximation of the Driving Lévy Process
4 Result and Examples
Introduction: The Problem

SDE
  dY_t = a(Y_t) dX_t,  t ∈ [0,1],  Y_0 = y_0,
with
  a square integrable Lévy process X = (X_t)_{t ∈ [0,1]},
  a deterministic initial value y_0,
  a Lipschitz continuous function a.

Computational problem: compute S(f) = E[f(Y)] for functionals f : D[0,1] → R.
Example: payoff f of a path-dependent option.
Introduction: Standard Monte Carlo

For an approximation Ŷ of Y,
  S(f) = E[f(Y)] ≈ E[f(Ŷ)] ≈ (1/n) Σ_{i=1}^n f(Ŷ_i) =: Ŝ_MC(f),
where (Ŷ_i)_{i=1,...,n} are i.i.d. copies of Ŷ.

Error decomposition into weak error and Monte Carlo error:
  E[|S(f) − Ŝ_MC(f)|²] = (E[f(Y) − f(Ŷ)])² + (1/n) Var(f(Ŷ)),
with bias(f(Ŷ)) := E[f(Y) − f(Ŷ)] and Var(Ŝ_MC(f)) = (1/n) Var(f(Ŷ)).

Classical approach: use an approximation Ŷ with small bias, e.g. higher-order weak Itô–Taylor schemes, or extrapolation techniques.
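To make the estimator concrete, here is a minimal sketch of Ŝ_MC in Python. The driver and scheme are our own stand-ins (a plain Brownian driver and an Euler scheme for dY_t = Y_t dW_t), not the Lévy setting of the talk:

```python
import math
import random

def euler_terminal(n_steps, rng):
    """One Euler path of dY_t = Y_t dW_t, Y_0 = 1, on [0, 1].

    Hypothetical stand-in for the approximation Y-hat; the driving
    noise here is Brownian only, for illustration."""
    y, dt = 1.0, 1.0 / n_steps
    for _ in range(n_steps):
        y += y * rng.gauss(0.0, math.sqrt(dt))
    return y

def mc_estimate(f, n, n_steps, seed=0):
    """Standard Monte Carlo: S_MC(f) = (1/n) * sum_i f(Y_hat_i)."""
    rng = random.Random(seed)
    return sum(f(euler_terminal(n_steps, rng)) for _ in range(n)) / n

# For this martingale E[Y_1] = 1, so the estimate should land near 1;
# the bias term shrinks with n_steps, the MC error with n.
est = mc_estimate(lambda y: y, n=20000, n_steps=50, seed=42)
```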
Introduction: Path-independent functionals f in the diffusion case

Classical setting: Brownian motion as driving process, i.e. X = W.
Compute expectations w.r.t. the marginal at T > 0, i.e. S(f) = E[f(Y_T)] for f : R → R.

Relation of error and computational cost: weak approximation of order β yields
  E[|S(f) − Ŝ_MC(f)|²] ≤ ε²  with computational cost O(ε^{−2−1/β}).
Caveat: this holds only for sufficiently smooth f ∈ C^{2(β+1)}.

Note: variance reduction via the multilevel algorithm Ŝ_ML yields
  E[|S(f) − Ŝ_ML(f)|²] ≤ ε²  with computational cost O(ε^{−2} log(ε)²),
and this holds for f Lipschitz.
The Multilevel Monte Carlo Algorithm: Multilevel Monte Carlo Algorithms

See Heinrich [1998], Giles [2006], Creutzig, Dereich, Müller-Gronbach, Ritter [2009], Avikainen [2009], Kloeden, Neuenkirch, Pavani [2009], Hickernell, Müller-Gronbach, Niu, Ritter [2010], Marxen [2010], ...
Talks by Burgos, Giles, Roj, Szpruch, Xia, ...
The Multilevel Monte Carlo Algorithm: Basic Idea

Basic idea: Y^{(1)}, Y^{(2)}, ... are strong approximations of the solution Y with increasing accuracy and increasing numerical cost.

Telescoping sum:
  E[f(Y^{(m)})] = E[f(Y^{(1)})] + Σ_{k=2}^m E[f(Y^{(k)}) − f(Y^{(k−1)})],
with D^{(1)} := f(Y^{(1)}) and D^{(k)} := f(Y^{(k)}) − f(Y^{(k−1)}).

Approximate E[D^{(1)}], ..., E[D^{(m)}] by independent Monte Carlo methods:
  Ŝ(f) = Σ_{k=1}^m (1/n_k) Σ_{i=1}^{n_k} D_i^{(k)},
where (D_i^{(k)})_{i=1,...,n_k} are i.i.d. copies of D^{(k)}, k = 1, ..., m.

Note: (Y^{(k)}, Y^{(k−1)}) are coupled via X so that the variance of D^{(k)} decreases in k.
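The telescoping estimator can be sketched as follows. The coupled pair below is a toy stand-in — two Euler schemes for dY_t = Y_t dW_t on nested dyadic grids sharing one Brownian path — not the Lévy coupling of the next section:

```python
import math
import random

def coupled_euler_pair(k, rng):
    """Terminal values of Euler schemes with 2^k and 2^(k-1) steps for
    dY_t = Y_t dW_t, Y_0 = 1, driven by the SAME Brownian path
    (toy stand-in for the coupled Levy input of the talk)."""
    n_fine = 2 ** k
    dt = 1.0 / n_fine
    y_fine, y_coarse = 1.0, 1.0
    for _ in range(n_fine // 2):
        dw1 = rng.gauss(0.0, math.sqrt(dt))
        dw2 = rng.gauss(0.0, math.sqrt(dt))
        y_fine += y_fine * dw1
        y_fine += y_fine * dw2
        y_coarse += y_coarse * (dw1 + dw2)   # coarse step uses summed increments
    return y_fine, y_coarse

def mlmc_estimate(f, m, n_levels, seed=0):
    """S(f) = sum_k (1/n_k) sum_i D_i^(k), independent MC per level."""
    rng = random.Random(seed)
    total = 0.0
    for k, n_k in zip(range(1, m + 1), n_levels):
        level_sum = 0.0
        for _ in range(n_k):
            y_k, y_km1 = coupled_euler_pair(k, rng)
            # D^(1) = f(Y^(1)); D^(k) = f(Y^(k)) - f(Y^(k-1)) for k >= 2
            level_sum += f(y_k) if k == 1 else f(y_k) - f(y_km1)
        total += level_sum / n_k
    return total

# E[Y_1] = 1 for this martingale, so the estimate should be near 1.
est = mlmc_estimate(lambda y: y, m=5,
                    n_levels=[8000, 4000, 2000, 1000, 500], seed=1)
```

Note how the replication numbers decay across levels: the coupling makes Var(D^{(k)}) small for large k, so few samples are needed where paths are expensive.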
Approximation of the Driving Lévy Process: Approximation of the driving Lévy process X

Lévy–Itô decomposition: X is the sum of three independent processes,
  X_t = σW_t + bt + L_t,  t ≥ 0,
with a Brownian motion W, the drift b = E[X_1] ∈ R, and an L²-martingale L of compensated jumps (jump-intensity measure ν).

Assumption: simulation of ν|_{(−h,h)^c} / ν((−h,h)^c) is feasible for every h > 0.
Thus: cut off the small jumps.

Define
  L_t^{(h)} = Σ_{s ∈ [0,t]} ΔX_s 1{|ΔX_s| ≥ h} − t ∫_{(−h,h)^c} x ν(dx),
and approximate X by
  X^{(h,ε)} = (σW_t + bt + L_t^{(h)})_{t ∈ T^{(h,ε)}}
on a random time discretization T^{(h,ε)} with parameters h, ε > 0 such that
  all jump times of L^{(h)} are included in T^{(h,ε)},
  the step size is at most ε (for the approximation of W).
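Under this assumption, a path of the large-jump part is a compensated compound Poisson process and can be sketched directly. The argument names `lam_h`, `sample_jump` and `mean_jump` are ours, standing for ν((−h,h)^c), a sampler of the normalized restricted measure, and its mean:

```python
import math
import random

def compound_poisson_path(lam_h, sample_jump, mean_jump, rng):
    """One path of L^(h) on [0,1]. Jumps with |x| >= h arrive as a
    Poisson process with intensity lam_h = nu((-h,h)^c); each jump size
    is drawn via `sample_jump` from the normalized restricted measure.
    The compensator -t * integral_{(-h,h)^c} x nu(dx) is the linear
    term with slope -lam_h * mean_jump."""
    # Poisson(lam_h) jump count (Knuth's multiplication method)
    n, p, floor = 0, rng.random(), math.exp(-lam_h)
    while p >= floor:
        n, p = n + 1, p * rng.random()
    # i.i.d. uniform jump times with i.i.d. sizes, sorted by time
    jumps = sorted((rng.random(), sample_jump(rng)) for _ in range(n))
    drift = -lam_h * mean_jump  # compensator slope
    return jumps, drift

rng = random.Random(7)
jumps, drift = compound_poisson_path(2.0, lambda r: 0.5, 0.5, rng)
```

L_t^{(h)} is then the sum of the jump sizes with time s ≤ t plus drift·t; for symmetric ν the compensator vanishes.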
Approximation of the Driving Lévy Process: The coupled approximation

[Figure: compound Poisson approximations (L^{(h)}, L^{(h')}) — compensated jumps of size greater than h resp. h', plotted over [0,1].]
[Figure: approximation of σW on T^{(h,ε)} and T^{(h',ε')} — one Brownian path evaluated on the two grids.]
[Figure: piecewise constant approximations (X^{(h,ε)}, X^{(h',ε')}).]
Approximation of the Driving Lévy Process: Multilevel Euler Algorithm

Remember the telescoping sum:
  E[f(Y^{(m)})] = E[f(Y^{(1)})] + Σ_{k=2}^m E[f(Y^{(k)}) − f(Y^{(k−1)})].

Choice: (Y^{(k)}, Y^{(k−1)}) are Euler schemes with coupled input (X^{(h_k,ε_k)}, X^{(h_{k−1},ε_{k−1})}).

Parameters of the algorithm Ŝ(f):
  m (number of levels)
  h_1 > h_2 > ... > h_m (approximation of L)
  ε_1 > ε_2 > ... > ε_m (approximation of W)
  n_1, ..., n_m (number of replications per level)
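The Euler scheme on such a grid is a single pass over the breakpoints. A sketch, with the grid and the values of the approximated driver assumed given:

```python
def euler_on_grid(a, y0, times, x_values):
    """Euler scheme for dY = a(Y) dX on the (random) grid T^(h,eps):
    Y_{t_{j+1}} = Y_{t_j} + a(Y_{t_j}) * (X_{t_{j+1}} - X_{t_j}).
    `times` are the breakpoints, `x_values` the values of X^(h,eps)
    at those breakpoints (both produced elsewhere)."""
    path = [y0]
    for j in range(len(times) - 1):
        dx = x_values[j + 1] - x_values[j]
        path.append(path[-1] + a(path[-1]) * dx)
    return path

# Deterministic check: a(y) = y with driver X_t = t is Euler for dY = Y dt,
# so the terminal value on a 4-step grid is (1 + 1/4)^4.
times = [0.0, 0.25, 0.5, 0.75, 1.0]
approx = euler_on_grid(lambda y: y, 1.0, times, times)
```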
Result and Examples: Result

Error:
  e²(Ŝ) = sup_{f ∈ Lip(1)} E[|S(f) − Ŝ(f)|²],
where Lip(1) := {f : D[0,1] → R | f 1-Lipschitz w.r.t. the supremum norm}.

Cost:
  cost(Ŝ) = Σ_{k=1}^m n_k E[#breakpoints(Y^{(k)}) + 1].

Theorem (Dereich, H.)
Let
  β := inf{p > 0 : ∫_{(−1,1)} |x|^p ν(dx) < ∞} ∈ [0, 2]
denote the Blumenthal–Getoor index of the driving Lévy process. Then, for suitably chosen parameters,
  cost(Ŝ_n) ≲ n  and  e(Ŝ_n) ≲ n^{−((1/β ∧ 1) − 1/2)}.

Remark: a Gaussian approximation of the small jumps improves the result for β ≥ 1 (see Dereich, 2010). Order of convergence: (4 − β)/(6β) for σ = 0 or β ∉ [1, 4/3], and (2 − β)/(6β − 4) for σ ≠ 0 and β ∈ [1, 4/3].
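The rate exponent of the theorem as a one-line function, making the two regimes β ≤ 1 and β > 1 explicit:

```python
def mlmc_rate_exponent(beta):
    """Exponent in e(S_n) <= C * n^(-((1/beta ∧ 1) - 1/2)) from the theorem:
    1/2 for beta <= 1 (optimal rate), 1/beta - 1/2 for beta > 1."""
    return min(1.0 / beta, 1.0) - 0.5
```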
Result and Examples: Remarks

Related results: Jacod, Kurtz, Méléard and Protter [2007]; Marxen [2010].

Lower bound: combine results of Creutzig, Dereich, Müller-Gronbach and Ritter [2009] on lower bounds in relation to quantization numbers and average Kolmogorov widths, and Aurzada and Dereich [2009] on the coding complexity of Lévy processes.

In the case X = Y, for every randomized algorithm S̃_n with cost(S̃_n) ≲ n,
  e(S̃_n) ≳ n^{−1/2},
up to some logarithmic terms.
Result and Examples: Example — α-stable processes

Truncated symmetric α-stable processes with α < 2 and truncation level u > 0:
Lebesgue density of the Lévy measure
  g_ν(x) = c/|x|^{1+α} · 1_{(−u,u)}(x),  x ∈ R \ {0};
Blumenthal–Getoor index β = α.

By the theorem,
  e(Ŝ_n) ≲ n^{−1/2} for α < 1,  and  e(Ŝ_n) ≲ n^{−(1/α − 1/2)} for α > 1.

In the sequel: X with u = 1 and c = 0.1, the SDE with a(x) = x and y_0 = 1, and a lookback option with fixed strike 1,
  f(Y) = (sup_{t ∈ [0,1]} Y_t − 1)^+.
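For this ν the restricted jump-size distribution can be sampled by inversion, using F(x) = (h^{−α} − x^{−α})/(h^{−α} − u^{−α}) on [h, u] for |jump| and a symmetric sign. A sketch with our own function names:

```python
import random

def stable_jump_sampler(h, alpha, u=1.0):
    """Inverse-CDF sampler for one jump of nu restricted to h <= |x| < u,
    where nu has density c/|x|^(1+alpha) on (-u, u); the sign is
    symmetric, so c cancels in the normalized distribution."""
    a, b = h ** -alpha, u ** -alpha
    def sample(rng):
        size = (a - rng.random() * (a - b)) ** (-1.0 / alpha)
        return size if rng.random() < 0.5 else -size
    return sample

def nu_tail_mass(h, alpha, c=0.1, u=1.0):
    """lam_h = nu((-h,h)^c) = (2c/alpha) * (h^-alpha - u^-alpha)."""
    return 2.0 * c / alpha * (h ** -alpha - u ** -alpha)

rng = random.Random(0)
sampler = stable_jump_sampler(h=0.1, alpha=0.5)
sizes = [sampler(rng) for _ in range(1000)]
```

All sampled sizes land in [h, u] by construction, and the tail mass vanishes as h approaches the truncation level u.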
Result and Examples: Adaptive choice of m and n_1, ..., n_m

Choose ε_k = 2^{−k} and h_k such that ν((−h_k, h_k)^c) = 2^k.

Given δ > 0, determine m and n_1, ..., n_m such that
  E[|S(f) − Ŝ(f)|²] = (E[f(Y) − f(Y^{(m)})])² + Var(Ŝ(f)) ≤ δ²,
with bias(f(Y^{(m)})) := E[f(Y) − f(Y^{(m)})].

First step: estimate the expectations E[D^{(1)}], ..., E[D^{(5)}] and the variances Var(D^{(1)}), ..., Var(D^{(5)}).

[Figure: bias and variance estimates, log_10(bias_k) and log_10(var_k) against the level k, for α = 0.5 and α = 0.8.]
Result and Examples: Adaptive choice of m and n_1, ..., n_m

Highest level of accuracy m:
E[D^{(1)}], ..., E[D^{(5)}] together with extrapolation give control of bias(f(Y^{(k)})) for k ∈ N. Choose m such that
  bias(f(Y^{(m)}))² ≤ δ²/2.

Replication numbers n_1, ..., n_m:
Var(D^{(1)}), ..., Var(D^{(5)}) together with extrapolation give control of Var(D^{(k)}) for k = 1, ..., m. Choose n_1, ..., n_m with minimal cost(Ŝ) such that
  Var(Ŝ) = Σ_{k=1}^m Var(D^{(k)})/n_k ≤ δ²/2.
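Given pilot estimates V_k ≈ Var(D^{(k)}) and per-replication costs C_k, minimizing Σ_k n_k C_k subject to Σ_k V_k/n_k ≤ δ²/2 gives n_k ∝ √(V_k/C_k) by a Lagrange argument. A sketch — the function name and the concrete numbers below are ours:

```python
import math

def replication_numbers(variances, costs, delta):
    """Minimize total cost sum(n_k * C_k) subject to the variance budget
    sum(V_k / n_k) <= delta^2 / 2.  The Lagrange condition C_k = lam * V_k / n_k^2
    yields n_k = (2/delta^2) * sqrt(V_k / C_k) * sum_j sqrt(V_j * C_j);
    rounding up keeps the budget satisfied."""
    s = sum(math.sqrt(v * c) for v, c in zip(variances, costs))
    return [max(1, math.ceil(2.0 / delta ** 2 * math.sqrt(v / c) * s))
            for v, c in zip(variances, costs)]

# Toy pilot data: variances decaying, costs growing across three levels.
n = replication_numbers([1.0, 0.25, 0.0625], [1.0, 2.0, 4.0], delta=0.1)
```

As on the slides, the resulting n_k decrease with the level: cheap, high-variance levels get many replications, expensive coupled levels few.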
Result and Examples: Adaptive choice of m and n_1, ..., n_m

Example of n_1, ..., n_m for α = 0.5:
  precisions δ = (0.003, 0.002, 0.001, 0.0006, 0.0003),
  highest levels m = (3, 3, 4, 4, 5).

[Figure: replication numbers log_10(n_k) per level k for the five precisions, α = 0.5.]
Example of n_1, ..., n_m for α = 0.8:
  precisions δ = (0.01, 0.004, 0.002, 0.001, 0.0007),
  highest levels m = (4, 5, 6, 7, 7).

[Figure: replication numbers log_10(n_k) per level k for the five precisions, α = 0.8.]
Example of n_1, ..., n_m for α = 1.2:
  precisions δ = (0.02, 0.01, 0.007, 0.005, 0.0035),
  highest levels m = (7, 8, 9, 10, 11).

[Figure: replication numbers log_10(n_k) per level k for the five precisions, α = 1.2.]
Result and Examples: Empirical relation of error and cost

[Figure: log_10(error) against log_10(cost) for MLMC and classical MC, α ∈ {0.5, 0.8, 1.2}.]

Empirical orders of convergence (negative slopes):

  α      0.5    0.8    1.2
  MLMC   0.47   0.46   0.38
  MC     0.45   0.34   0.23
Result and Examples: Summary and Conclusions

Summary:
  Multilevel algorithm for a wide class of functionals and processes.
  Up to logarithmic terms asymptotically optimal for Blumenthal–Getoor index β ≤ 1.
  Heuristics to achieve a desired precision δ > 0.
  Numerical results match the analytical results.
  Improvement for β ≥ 1 via a Gaussian correction, see Dereich [2010].

Work to do:
  Further examples of Lévy processes.
  Implementation of the ML algorithm with Gaussian correction.
  The gap between the upper and lower bound for β > 1.