A multilevel Monte Carlo algorithm for Lévy driven stochastic differential equations
by Steffen Dereich¹ and Felix Heidenreich²

¹ Fachbereich Mathematik und Informatik, Philipps-Universität Marburg, Hans-Meerwein-Straße, 35032 Marburg, Germany
² Fachbereich Mathematik, Technische Universität Kaiserslautern, Erwin-Schrödinger-Straße, 67663 Kaiserslautern, Germany

Version of December 2010

Summary. This article introduces and analyzes multilevel Monte Carlo schemes for the evaluation of the expectation $E[f(Y)]$, where $Y = (Y_t)_{t\in[0,1]}$ is a solution of a stochastic differential equation driven by a Lévy process. Upper bounds are provided for the worst case error over the class of all path dependent measurable functions $f$ that are Lipschitz continuous with respect to supremum norm. In the case where the Blumenthal-Getoor index of the driving process is smaller than one, one obtains convergence rates of order $1/\sqrt{n}$ as the computational cost $n$ tends to infinity. This rate is optimal up to logarithms in the case where $Y$ is itself a Lévy process. Furthermore, an error estimate for Blumenthal-Getoor indices larger than one is included, together with results of numerical experiments.

Keywords. Multilevel Monte Carlo, numerical integration, quadrature, Lévy-driven stochastic differential equation.

2010 Mathematics Subject Classification. Primary 60G51, Secondary 60H10, 60J75.
1 Introduction

In this article, we analyze numerical schemes for the evaluation of $S(f) := E[f(Y)]$, where $Y = (Y_t)_{t\in[0,1]}$ is a solution to a multivariate stochastic differential equation driven by a multidimensional Lévy process, and $f : D[0,1] \to \mathbb{R}$ is a Borel measurable mapping on the Skorokhod space $D[0,1]$ of $\mathbb{R}^{d_Y}$-valued functions over the time interval $[0,1]$ that is Lipschitz continuous with respect to supremum norm. This is a classical problem which appears for instance in finance, where $Y$ models the risk neutral stock price and $f$ denotes the payoff of a (possibly path dependent) option, and in the past several concepts have been employed for dealing with it. We refer in particular to [PT97], [Rub03], and [JKMP05] for an analysis of the Euler scheme for Lévy-driven SDEs. Recently, Giles [Gil08b] introduced the so-called multilevel Monte Carlo method in the context of stochastic differential equations, and this turned out to be very efficient when $Y$ is a continuous diffusion. Indeed, it can be shown that it is optimal on the Lipschitz class [CDMGR08]; see also [Gil08a, Avi09] for further recent results and [MGR09] for a survey and further references.

In this article, we analyze multilevel Monte Carlo algorithms for the computation of $S(f)$ with a focus on path dependent $f$'s that are Lipschitz functions on the space $D[0,1]$ with supremum norm. In order to obtain approximate solutions, we decompose the Lévy process into a purely discontinuous Lévy martingale with discontinuities of size smaller than a threshold parameter and a Lévy process with discontinuities larger than this parameter. The latter process can be simulated efficiently on an arbitrary finite time set, and we apply an Euler approximation to obtain approximate solutions of the stochastic differential equation (see for instance [Rub03] for an analysis of such approximations).

The article is structured as follows. In Subsection 1.1, the main notation as well as the assumptions for the SDE are stated.
Furthermore, basic facts concerning the Lévy-Itô decomposition of a Lévy process are given as a reminder. The actual algorithm, and in particular the coupled Euler schemes, are described in detail in Section 2, while the main result and the right choice of the parameters of the algorithm are postponed to Section 3. Both depend on a function $g$ dominating $h \mapsto \frac{1}{h^2}\int_{B_h} |x|^2\, \nu(dx)$ on $(0,\infty)$, and on whether the driving Lévy process has a Brownian component. Let us explain our main findings in terms of the Blumenthal-Getoor index (BG-index) $\beta$ of the driving Lévy process, which is an index in $[0,2]$. It measures the frequency of small jumps, see (12), where a large index corresponds to a process which has small jumps at high frequencies. If the Blumenthal-Getoor index is smaller than one, appropriately adjusted algorithms achieve the same error bounds as those obtained in Giles [Gil08b] for continuous diffusions, i.e., the error is of the order $n^{-1/2}(\log n)^{3/2}$ in terms of the computation time $n$ (in the case where $f$ depends on the whole trajectory and is Lipschitz w.r.t. supremum norm). If the driving Lévy process does not include a Wiener process, one even obtains error estimates of order $n^{-1/2}$. Unfortunately, the error rates become significantly worse for larger Blumenthal-Getoor indices. In this case, a remedy would be to incorporate a Gaussian term as compensation for the disregarded discontinuous Lévy martingale (see for instance [AR01]).

Derivations of convergence rates for multilevel schemes are typically based on a weak and a strong error estimate for the approximate solutions. In this article, the main technical tool is an error estimate in the strong sense. As weak error estimate we shall use the one that is
induced by the strong estimate. As is well known, this approach is suboptimal when the payoff $f(Y)$ does not actually depend on the whole trajectory of the process $Y$ but only on the value of $Y$ at a particular deterministic time instance. In that case, an analysis based on the weak estimates of [JKMP05] or [TKH09] seems to be preferable. Unfortunately, in the case where $f$ is path dependent, one does not have better error estimates at hand. To obtain better results for large BG-indices it is preferable to incorporate a Gaussian correction. In the case where $f(Y)$ depends only on the value of the process at a given time instance, one can combine the strong estimate of this article with the weak estimates of [JKMP05] to improve the error estimates. However, to do so one has to impose significantly stronger assumptions on $f$ and the Lévy process. The case where $f$ is path dependent is analyzed in a forthcoming article [Der], in which the strong estimate of this article is combined with a new weak estimate for the approximate solutions with Gaussian correction.

1.1 Assumptions and Basic Facts

Let us now introduce the main notation. We denote by $|\cdot|$ the Euclidean norm for vectors as well as the Frobenius norm for matrices. For $h > 0$, we put $B_h = \{x \in \mathbb{R}^{d_X} : |x| < h\}$. Moreover, we let $D[0,1]$ denote the space of càdlàg functions from $[0,1]$ to $\mathbb{R}^{d_Y}$, where $\mathbb{R}^{d_Y}$ is the state space of $Y$ to be defined below. The space $D[0,1]$ is endowed with the $\sigma$-field induced by the projections on the marginals (or, equivalently, the Borel $\sigma$-field for the Skorokhod topology). Often, we consider supremum norm on $D[0,1]$, and we write $\|f\| = \sup_{t\in[0,1]} |f(t)|$ for $f \in D[0,1]$. Furthermore, the set of Borel measurable functions $f : D[0,1] \to \mathbb{R}$ that are Lipschitz continuous with Lipschitz constant one with respect to the supremum norm is denoted by $\mathrm{Lip}(1)$. In general, we write $f \sim g$ iff $\lim f/g = 1$, while $f \lesssim g$ stands for $\limsup f/g \le 1$. Finally, $f \approx g$ means $0 < \liminf f/g \le \limsup f/g < \infty$, and $f \preceq g$ means $\limsup f/g < \infty$.
Let $X = (X_t)_{t\ge 0}$ denote a $d_X$-dimensional $L^2$-Lévy process with Lévy measure $\nu$, drift parameter $b$ and Gaussian covariance matrix $\Sigma\Sigma^*$, that is, a process with independent and stationary increments satisfying
$$E\big[e^{i\langle\theta, X_t\rangle}\big] = e^{t\psi(\theta)}, \qquad \theta \in \mathbb{R}^{d_X},$$
for
$$\psi(\theta) = -\tfrac12 |\Sigma^*\theta|^2 + i\langle b,\theta\rangle + \int_{\mathbb{R}^{d_X}} \big(e^{i\langle\theta,x\rangle} - 1 - i\langle\theta,x\rangle\big)\, \nu(dx).$$
Briefly, we call $X$ a $(\nu, \Sigma\Sigma^*, b)$-Lévy process. Note that we do not need to work with a truncation function, since the marginals of the process are in $L^2(\mathbb{P})$ by assumption. We consider the stochastic integral equation
$$Y_t = y_0 + \int_0^t a(Y_{s-})\, dX_s \tag{1}$$
with deterministic initial value $y_0 \in \mathbb{R}^{d_Y}$, and we impose a standard Lipschitz assumption on the function $a$, which implies, in particular, existence and strong uniqueness of the solution. More precisely, we require the following

Assumption A: For a fixed $K < \infty$, the function $a : \mathbb{R}^{d_Y} \to \mathbb{R}^{d_Y \times d_X}$ satisfies
$$|a(y) - a(y')| \le K\, |y - y'|$$
4 for all y, y R d Y. Furthermore, we have a(y ) K, < x 2 ν(dx) K 2, Σ K and b K. For a general account on Lévy processes we refer the reader to the books by [Ber98] and [Sat99]. Moreover, concerning stochastic analysis, we refer the reader to the books by Protter [Pro5] and Applebaum [App4]. The distribution of the driving Lévy process X is uniquely characterized by the parameters b R d X, Σ R d X d X and ν. We sketch a construction of X with a view towards simulation in the multilevel setting. This construction is based on the L 2 -approximation of Lévy processes as it is presented in [Pro5] and [App4]. Consider a stochastic process (N(t, A)) t,a B(R d X \{}) on some probability space (Ω, A, P ) with the following properties. For every ω Ω the mapping [, t] A N(t, A)(ω) induces a σ-finite counting measure on B(R + (R d X \{})). For every A B(R d X \{}) that is bounded away from zero the process (N(t, A)) t is a Poisson process with intensity ν(a). For pairwise disjoint sets A,..., A r B(R d X \{}) the stochastic processes (N(t, A )) t,..., (N(t, A r )) t are independent. Then N(t,.) P -a.s. defines a finite measure on Bh c with values in N. The integral B x N(t, dx) h c thus is a random finite sum, which gives rise to a compound Poisson process. More specifically put λ (h) = ν(bh c) <, which satisfies λ(h) > for sufficiently small h >. In this case µ (h) = ν B c h /λ (h) defines a probability measure on R d X \{} such that B c h x N(t, dx) = d N t ξ i, (2) where = d denotes equality in distribution, (N t ) t is a Poisson process with intensity λ (h) and (ξ i ) i N is an i.i.d. sequence of random variables with distribution µ (h) and independent of (N t ) t. Its expectation calculates to E [ B x N(t, dx) ] = F h c (h)t, where we set The compensated process L (h) = (L (h) t L (h) t = F (h) = B c h i= x ν(dx). 
The compensated process $L^{(h)} = (L^{(h)}_t)_{t\ge 0}$, given by
$$L^{(h)}_t = \int_{B_h^c} x\, N(t,dx) - t\, F(h), \tag{3}$$
is an $L^2$-martingale, and the same holds true for its $L^2$-limit $L = (L_t)_{t\ge 0} = \lim_{h\downarrow 0} L^{(h)}$, see, e.g., Applebaum [App04]. With $W$ denoting a $d_X$-dimensional Brownian motion independent of $L$, we define the Lévy process $X$ by
$$X_t = \Sigma W_t + L_t + bt. \tag{4}$$
We add that the Lévy-Itô decomposition guarantees that every $L^2$-Lévy process has a representation (4).
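The construction (2)-(3) translates directly into a simulation routine for the big-jump part: draw exponential waiting times with parameter $\lambda^{(h)}$, draw jump heights from $\mu^{(h)}$, and subtract the linear compensation $tF(h)$. A minimal sketch in the scalar case (the function name and interface are our own illustration, not part of the paper):

```python
import numpy as np

def simulate_big_jump_part(lam_h, sample_jump, F_h, T=1.0, rng=None):
    """Simulate the compensated big-jump part L^(h) of (3) on [0, T].

    lam_h       -- intensity nu(B_h^c) of jumps of size at least h
    sample_jump -- callable drawing one jump size from mu^(h) = nu|_{B_h^c}/lam_h
    F_h         -- compensating drift F(h), the mean jump contribution per unit time
    Returns the jump times and the value of L^(h) at each jump time.
    """
    rng = np.random.default_rng() if rng is None else rng
    times, jumps = [], []
    t = rng.exponential(1.0 / lam_h)   # exponential inter-arrival times
    while t <= T:
        times.append(t)
        jumps.append(sample_jump(rng))
        t += rng.exponential(1.0 / lam_h)
    times = np.array(times)
    jumps = np.array(jumps)
    # L^(h)_t = sum of jumps up to t minus the compensation t * F(h), cf. (3)
    values = np.cumsum(jumps) - times * F_h
    return times, values
```

Between jumps the process decays linearly with slope $-F(h)$, so the pair (times, values) determines the whole path.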
2 The Algorithm

2.1 A Coupled Euler Scheme

The multilevel Monte Carlo algorithm introduced in Section 2.2 is based on a hierarchy of coupled Euler schemes for the approximation of the solution process $Y$ of the SDE (1). In every single Euler scheme we approximate the driving process $X$ in the following way. At first we neglect all jumps of size smaller than $h$. The jumps of size at least $h$ induce a random time discretization, which, because of the Brownian component, is refined so that the step sizes are at most $\varepsilon$. Finally, $W$ as well as the drift component are approximated by piecewise constant functions with respect to the refined time discretization. In this way, we get an approximation $\hat{X}^{(h,\varepsilon)}$ of $X$.

The multilevel approach requires simulation of the joint distribution of $\hat{X}^{(h,\varepsilon)}$ and $\hat{X}^{(h',\varepsilon')}$ for different values $h > h' > 0$ and $\varepsilon > \varepsilon' > 0$. More precisely, we proceed as follows. For any càdlàg process $L$, we denote by $\Delta L_t = L_t - \lim_{s\uparrow t} L_s$ the jump-discontinuity at time $t$. The jump times of $L^{(h)}$ are then given by $T^{(h)}_0 = 0$ and
$$T^{(h)}_k = \inf\{t > T^{(h)}_{k-1} : \Delta L^{(h)}_t \neq 0\}$$
for $k \ge 1$. The time differences $T^{(h)}_k - T^{(h)}_{k-1}$ form an i.i.d. sequence of random variables that are exponentially distributed with parameter $\lambda^{(h)}$. Furthermore, this sequence is independent of the sequence of jump heights $\Delta L^{(h)}_{T^{(h)}_k}$, which is i.i.d. with distribution $\mu^{(h)}$, and on every interval $[T^{(h)}_k, T^{(h)}_{k+1})$ the process $L^{(h)}$ is linear with slope $-F(h)$; see (2) and (3). The processes $\Delta L^{(h)}$ and $\Delta(L^{(h')} - L^{(h)})$ are independent with values in $\{0\} \cup B_h^c$ and $\{0\} \cup (B_{h'}^c \setminus B_h^c)$, respectively, and therefore the jumps of the process $L^{(h)}$ can be obtained from those of $L^{(h')}$ by
$$\Delta L^{(h)}_t = \Delta L^{(h')}_t\, 1_{\{|\Delta L^{(h')}_t| > h\}}.$$
We conclude that the simulation of the joint distribution of $(L^{(h)}, L^{(h')})$ only requires samples of the jump times $T^{(h')}_k$ and jump heights $\Delta L^{(h')}_{T^{(h')}_k}$, which amounts to sampling from $\mu^{(h')}$ and from an exponential distribution.
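This thresholding relation is the entire coupling mechanism on the jump part: both levels see identical large jumps at identical times. As an illustration (a sketch; the array-based representation is our choice):

```python
import numpy as np

def restrict_jumps(times_fine, jumps_fine, h):
    """Jumps of L^(h) from those of L^(h'), h' < h: keep only sizes > h."""
    times_fine = np.asarray(times_fine)
    jumps_fine = np.asarray(jumps_fine)
    keep = np.abs(jumps_fine) > h          # indicator 1_{|jump| > h}
    return times_fine[keep], jumps_fine[keep]
```

Simulating once at the fine threshold $h'$ and thresholding at $h$ therefore yields an exact sample of the joint law of the two jump processes.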
Because of the Brownian component, we refine the time discretization via $T^{(h,\varepsilon)}_0 = 0$ and
$$T^{(h,\varepsilon)}_j = \inf\{T^{(h)}_k > T^{(h,\varepsilon)}_{j-1} : k \in \mathbb{N}\} \wedge \big(T^{(h,\varepsilon)}_{j-1} + \varepsilon\big) \tag{5}$$
for $j \ge 1$. Summarizing, $X$ is approximated at the discretization times $T_j = T^{(h,\varepsilon)}_j$ by $\hat{X}^{(h,\varepsilon)}_0 = 0$ and
$$\hat{X}^{(h,\varepsilon)}_{T_j} = \hat{X}^{(h,\varepsilon)}_{T_{j-1}} + \Sigma\big(W_{T_j} - W_{T_{j-1}}\big) + \Delta L^{(h)}_{T_j} + \big(b - F(h)\big)(T_j - T_{j-1})$$
for $j \ge 1$. Observe that $\hat{X}^{(h,\varepsilon)}_{T_j} = \Sigma W_{T_j} + L^{(h)}_{T_j} + b\,T_j$.

To simulate the Brownian components of the coupled processes $(\hat{X}^{(h,\varepsilon)}, \hat{X}^{(h',\varepsilon')})$, we refine the sequences of jump times $T^{(h)}_k$ and $T^{(h')}_k$ to get $(T^{(h,\varepsilon)}_j)_{j\in\mathbb{N}}$ and $(T^{(h',\varepsilon')}_j)_{j\in\mathbb{N}}$, respectively. Since $W$ and $L$ are independent, the process $W$ is easily simulated at all times $(T^{(h,\varepsilon)}_j)_{j\in\mathbb{N}}$ and $(T^{(h',\varepsilon')}_j)_{j\in\mathbb{N}}$ that lie in $[0,1]$ by sampling from a normal distribution.
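In words, (5) walks forward from the last grid point to the next jump time, but never farther than $\varepsilon$. A small sketch of this refinement (clipping the grid at the horizon $t = 1$ is our own convention, not taken from the paper):

```python
import numpy as np

def refined_grid(jump_times, eps, T=1.0):
    """Discretization times in the spirit of (5): each step ends at the next
    jump time of L^(h), but is at most eps long; starts at 0."""
    grid = [0.0]
    jt = sorted(t for t in jump_times if 0.0 < t <= T)
    i = 0
    while grid[-1] < T:
        nxt = grid[-1] + eps               # candidate: maximal step eps
        if i < len(jt) and jt[i] <= nxt:   # but stop at the next jump first
            nxt = jt[i]
            i += 1
        grid.append(min(nxt, T))           # clip at the horizon (our convention)
    return np.array(grid)
```

For example, with jump times 0.25 and 0.7 and $\varepsilon = 0.5$, the grid is 0, 0.25, 0.7, 1.0; no step exceeds $\varepsilon$.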
For the SDE (1), the Euler scheme with driving process $(\Sigma W_t + L^{(h)}_t + bt)_{t\ge 0}$ and random time discretization $(T_j)_{j\in\mathbb{N}_0} = (T^{(h,\varepsilon)}_j)_{j\in\mathbb{N}_0}$ is defined by $\hat{Y}^{(h,\varepsilon)}_0 = y_0$ and
$$\hat{Y}^{(h,\varepsilon)}_{T_j} = \hat{Y}^{(h,\varepsilon)}_{T_{j-1}} + a\big(\hat{Y}^{(h,\varepsilon)}_{T_{j-1}}\big)\big(\hat{X}^{(h,\varepsilon)}_{T_j} - \hat{X}^{(h,\varepsilon)}_{T_{j-1}}\big) \tag{6}$$
for $j \ge 1$. Furthermore, $\hat{Y}^{(h,\varepsilon)}_t = \hat{Y}^{(h,\varepsilon)}_{T_j}$ for $t \in [T_j, T_{j+1})$. In the multilevel approach, the solution process $Y$ of the SDE (1) is approximated via coupled Euler schemes $(\hat{Y}^{(h,\varepsilon)}, \hat{Y}^{(h',\varepsilon')})$, which are obtained by applying the Euler scheme (6) to the coupled driving processes $\hat{X}^{(h,\varepsilon)}$ and $\hat{X}^{(h',\varepsilon')}$ with their random discretization times $T^{(h,\varepsilon)}_j$ and $T^{(h',\varepsilon')}_j$, respectively.

2.2 The Multilevel Monte Carlo Algorithm

We fix two positive and decreasing sequences $(\varepsilon_k)_{k\in\mathbb{N}}$ and $(h_k)_{k\in\mathbb{N}}$, and we put $Y^{(k)}_t = \hat{Y}^{(h_k,\varepsilon_k)}_t$. For technical reasons we define $Y^{(k)}_t$ for all $t \ge 0$, although we are typically only interested in $Y^{(k)} := (Y^{(k)}_t)_{t\in[0,1]}$. For $m \in \mathbb{N}$ and a given measurable function $f : D[0,1] \to \mathbb{R}$ with $f(Y^{(k)})$ integrable for $k = 1, \dots, m$, we write $E[f(Y^{(m)})]$ as a telescoping sum
$$E[f(Y^{(m)})] = E[f(Y^{(1)})] + \sum_{k=2}^m E\big[f(Y^{(k)}) - f(Y^{(k-1)})\big].$$
In the multilevel approach, each expectation on the right-hand side is approximated separately by means of independent classical Monte Carlo approximations. For $k = 2, \dots, m$ we denote by $n_k$ the number of replications for the approximation of $E[f(Y^{(k)}) - f(Y^{(k-1)})]$, and by $n_1$ the number of replications for the approximation of $E[f(Y^{(1)})]$. For $(Z^{(k)}_{j,1}, Z^{(k)}_{j,2})_{j=1,\dots,n_k}$ i.i.d. copies of the coupled Euler scheme $(Y^{(k-1)}, Y^{(k)})$ for $k = 2, \dots, m$, and $(Z^{(1)}_j)_{j=1,\dots,n_1}$ i.i.d. copies of $Y^{(1)}$, the corresponding multilevel Monte Carlo algorithm is given by
$$\hat{S}(f) = \frac{1}{n_1}\sum_{j=1}^{n_1} f\big(Z^{(1)}_j\big) + \sum_{k=2}^m \frac{1}{n_k}\sum_{j=1}^{n_k} \big[f\big(Z^{(k)}_{j,2}\big) - f\big(Z^{(k)}_{j,1}\big)\big].$$
The algorithm $\hat{S}$ is uniquely described by the parameters $m$ and $(n_k, h_k, \varepsilon_k)_{k=1,\dots,m}$, so that we formally identify the algorithm $\hat{S}$ with these parameters.
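For illustration, the estimator $\hat{S}(f)$ can be written down once coupled samples are available; here the two samplers are hypothetical callables standing in for the coupled Euler schemes of Section 2.1:

```python
import numpy as np

def mlmc_estimate(sample_level1, sample_coupled, n, rng):
    """Multilevel estimator of Section 2.2.

    sample_level1(rng)     -- one draw of f(Y^(1))
    sample_coupled(k, rng) -- one coupled draw (f(Y^(k-1)), f(Y^(k))) for k >= 2
    n                      -- replication numbers (n_1, ..., n_m)
    """
    # level 1: plain Monte Carlo estimate of E[f(Y^(1))]
    est = np.mean([sample_level1(rng) for _ in range(n[0])])
    for k in range(2, len(n) + 1):
        # level k: estimate the correction E[f(Y^(k)) - f(Y^(k-1))]
        diffs = [sample_coupled(k, rng) for _ in range(n[k - 1])]
        est += np.mean([fine - coarse for coarse, fine in diffs])
    return est
```

The coupling only enters through `sample_coupled`, which must evaluate both levels on the same driving noise; the estimator itself is the telescoping sum above.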
The error of the algorithm. For measurable functions $f : D[0,1] \to \mathbb{R}$ with $f(Y), f(Y^{(1)}), \dots, f(Y^{(m)})$ square integrable, the mean squared error of $\hat{S}(f)$ calculates to
$$E\big[|S(f) - \hat{S}(f)|^2\big] = \big(E[f(Y) - f(Y^{(m)})]\big)^2 + \operatorname{var}\big(\hat{S}(f)\big).$$
If $f$ is in $\mathrm{Lip}(1)$, then
$$E\big[|S(f) - \hat{S}(f)|^2\big] \le E\|Y - Y^{(m)}\|^2 + \sum_{k=2}^m \frac{1}{n_k}\, E\|Y^{(k)} - Y^{(k-1)}\|^2 + \frac{1}{n_1}\, E\|Y^{(1)} - y_0\|^2. \tag{7}$$
In particular, the upper bound does not depend on the choice of $f$. Hence (7) remains valid for the worst case error
$$e^2(\hat{S}) = \sup_{f\in\mathrm{Lip}(1)} E\big[|S(f) - \hat{S}(f)|^2\big]. \tag{8}$$
The cost of the algorithm. We work in the real number model of computation, which means that we assume that arithmetic operations with real numbers and comparisons can be done in one time unit. We suppose that one can sample from the distribution $\nu|_{B_h^c}/\nu(B_h^c)$ for sufficiently small $h > 0$ and from the uniform distribution on $[0,1]$ in constant time, that one can evaluate $a$ at any point $y \in \mathbb{R}^{d_Y}$ in constant time, and that $f$ can be evaluated for piecewise constant functions in time less than a constant multiple of its number of breakpoints plus one. For a piecewise constant $\mathbb{R}^{d_Y}$-valued function $y = (y_t)_{t\in[0,1]}$, we denote by $\Upsilon(y)$ its number of breakpoints. Then the cost of the algorithm $\hat{S}$ is defined by
$$\mathrm{cost}(\hat{S}) = \sum_{k=1}^m n_k\, E\big[\Upsilon(Y^{(k)})\big]. \tag{9}$$
Under the above assumptions, the average runtime to evaluate $\hat{S}(f)$ is indeed less than a constant multiple of $\mathrm{cost}(\hat{S})$.

3 Results and Examples

We consider algorithms $\hat{S}$ as described in the previous section, with cost function defined by (9) and with respect to the worst case error criterion (8). Our results depend on the frequency of small jumps of the driving process $X$. To quantify this property, and to use it for the choice of the parameters of the multilevel algorithm, we take a decreasing and invertible function $g : (0,\infty) \to (0,\infty)$ such that
$$\frac{1}{h^2}\int_{B_h} |x|^2\, \nu(dx) \le g(h) \quad \text{for all } h > 0.$$
The explicit choices for the parameters $m$ and $(n_k, \varepsilon_k, h_k)_{k=1,\dots,m}$ in the corresponding algorithms are given after the statement of the main results.

Theorem 1. (i) If the driving process $X$ has no Brownian component, i.e., $\Sigma = 0$, and if there exists $\gamma > 0$ such that
$$g(h) \preceq \frac{1}{h\,(\log 1/h)^{1+\gamma}} \tag{10}$$
as $h \downarrow 0$, then there exists a sequence of multilevel algorithms $\hat{S}_n$ such that $\mathrm{cost}(\hat{S}_n) \preceq n$ and
$$e(\hat{S}_n) \preceq \frac{1}{\sqrt{n}}.$$
(ii) If there exists $\gamma \ge 1/2$ such that
$$g(h) \preceq \frac{(\log 1/h)^{\gamma}}{h}$$
as $h \downarrow 0$, then there exists a sequence of multilevel algorithms $\hat{S}_n$ such that $\mathrm{cost}(\hat{S}_n) \preceq n$ and
$$e(\hat{S}_n) \preceq \frac{(\log n)^{\gamma+1}}{\sqrt{n}}.$$
(iii) If there exists $\gamma \in (0,1)$ such that for all sufficiently small $h > 0$
$$g(\gamma h) \ge 2\, g(h), \tag{11}$$
then there exists a sequence of multilevel algorithms $\hat{S}_n$ such that $\mathrm{cost}(\hat{S}_n) \preceq n$ and
$$e(\hat{S}_n) \preceq \sqrt{n}\; g^{-1}(n).$$
In terms of the Blumenthal-Getoor index
$$\beta := \inf\Big\{p > 0 : \int_{B_1} |x|^p\, \nu(dx) < \infty\Big\} \in [0,2] \tag{12}$$
we get the following corollary.

Corollary 1. There exists a sequence of multilevel algorithms $(\hat{S}_n)$ with $\mathrm{cost}(\hat{S}_n) \preceq n$ such that
$$\sup\{\gamma > 0 : e(\hat{S}_n) \preceq n^{-\gamma}\} \ge \Big(\frac{1}{\beta} - \frac12\Big) \wedge \frac12.$$

Remark 1. Parts (ii) and (iii) of Theorem 1 deal with stochastic differential equations that include a Brownian component in the driving process $X$, i.e. $\Sigma \neq 0$, see Subsection 1.1. Essentially, these two results cover the case $\Sigma \neq 0$. When $\gamma = 1/2$ in (ii), the asymptotics are the same as in the classical diffusion setting analyzed in [Gil08b]. Similarly, as in our proof one can also treat different terms of lower order instead of $\log$. Certainly, it also makes sense to consider $\gamma < 1/2$ when $\Sigma = 0$; the computations are similar and therefore omitted. Part (iii) covers, in particular, all cases where $g$ is regularly varying at $0$ with exponent strictly smaller than $-1$.

As the following remark shows, the results of parts (i) and (ii) of Theorem 1 are close to optimal. In particular, these parts cover the case of a Blumenthal-Getoor index $\beta \le 1$.

Remark 2. To provide a lower bound, we adopt a very broad notion of algorithm that was introduced and analyzed in [CDMGR08]. Roughly, it can be described as follows. An algorithm $\hat{S}$ is allowed to carry out real number computations and evaluations of the functional $f$ for arbitrary paths from $D[0,1]$. The algorithm is required to end in finite time with a possibly random output $\hat{S}(f)$. For fixed $f$, the mean number of $f$-evaluations is referred to as the cost of the algorithm when applied to $f$.
Its maximal value over the class $\mathrm{Lip}(1)$ is considered as the cost of the algorithm $\hat{S}$, that is,
$$\mathrm{cost}(\hat{S}) = \sup_{f\in\mathrm{Lip}(1)} E\big[\#\text{ of } f\text{-evaluations used in the computation of } \hat{S}(f)\big].$$
For $n \in \mathbb{N}$, we consider the optimal worst case error
$$e_n := \inf_{\hat{S}} \sup_{f\in\mathrm{Lip}(1)} E\big[(S(f) - \hat{S}(f))^2\big]^{1/2},$$
where the infimum is taken over all algorithms $\hat{S}$ with $\mathrm{cost}(\hat{S}) \le n$. Note that any reasonable implementation of such an algorithm will have a runtime that dominates the cost assigned in our considerations. Hence, the lower bound provided below prevails in more realistic models. Suppose that $Y = X$. Furthermore, assume that $X$ includes a Wiener process, in which case we set $\beta = 3/2$, or that $\int_{B_h} |x|^2\,\nu(dx) \approx h^{2-\alpha}$ for some $\alpha \in (0,2)$, in which case we set $\beta = \frac12 + \frac1\alpha$. Then
$$\limsup_{n\to\infty} \sqrt{n}\,(\log n)^{\beta}\, e_n > 0.$$
The statement follows by combining the lower bound on the so-called quantization numbers of Theorem 1.5 of [AD09] with Theorem 3 of [CDMGR08].

The choice of the parameters. The algorithm $\hat{S}$ is completely determined by the parameters $m$ and $(n_k, \varepsilon_k, h_k)_{k=1,\dots,m}$, and we now give the parameters which achieve the error estimates provided in Theorem 1. Recall that Theorem 1 depends on an invertible and decreasing function $g : (0,\infty) \to (0,\infty)$ satisfying
$$\frac{1}{h^2}\int_{B_h} |x|^2\, \nu(dx) \le g(h) \quad \text{for all } h > 0,$$
and we set, for $k \in \mathbb{N}$,
$$\varepsilon_k = 2^{-k} \quad\text{and}\quad h_k = g^{-1}(2^k).$$
We choose the remaining parameters by
$$m = \inf\{k \in \mathbb{N} : C h_k < 1\} \quad\text{and}\quad n_k = \lceil C h_k \rceil \ \text{ for } k = 1,\dots,m, \tag{13}$$
where
(i) $C = n$, (ii) $C = n\,(\log n)^{-(\gamma+1)}$, and (iii) $C = 1/g^{-1}(n)$
in the respective cases. Here we additionally need to assume that $g$ is such that $h^{-2/3} \preceq g(h)$ in case (i) and $(\log 1/h)^{1/2}/h \preceq g(h)$ in case (ii). These assumptions do not result in a loss of generality. The parameters optimize (up to constant multiples) the error estimate induced by equation (7) together with Theorem 2 below.
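The choice (13) is mechanical once $g^{-1}$ is available; a sketch (the cap `k_max` is our own safeguard against a non-terminating loop, not part of the paper):

```python
import math

def mlmc_parameters(g_inv, C, k_max=60):
    """Parameters of (13): eps_k = 2^-k, h_k = g^{-1}(2^k),
    n_k = ceil(C h_k); stop at the first level m with C h_m < 1."""
    eps, h, n = [], [], []
    for k in range(1, k_max + 1):
        hk = g_inv(2.0 ** k)
        eps.append(2.0 ** -k)
        h.append(hk)
        n.append(math.ceil(C * hk))
        if C * hk < 1.0:      # this k is m = inf{k : C h_k < 1}
            break
    return eps, h, n
```

For instance, for $g(h) = h^{-2}$ one has $g^{-1}(y) = y^{-1/2}$, and in case (iii) one would pass $C = 1/g^{-1}(n)$.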
Numerical results. In this section, we provide numerical examples for the new algorithms and compare our approach with the classical Monte Carlo approach. We denote by $X$ a symmetric truncated $\alpha$-stable Lévy process, that is, a $(\nu, 0, 0)$-Lévy process with
$$\frac{d\nu}{dx}(x) = \frac{c\, 1_{(0,u]}(x)}{x^{1+\alpha}} + \frac{c\, 1_{[-u,0)}(x)}{|x|^{1+\alpha}}, \qquad x \in \mathbb{R}\setminus\{0\},$$
where $\alpha \in (0,2)$, $c > 0$ and $u > 0$ denotes the truncation threshold. Note that
$$\frac{1}{h^2}\int_{B_h} x^2\, \nu(dx) \le \frac{2c}{2-\alpha}\, h^{-\alpha} =: g(h),$$
so that our theoretical findings (Theorem 1) imply that there exists a sequence of algorithms $(\hat{S}_n)$, each satisfying the cost constraint $\mathrm{cost}(\hat{S}_n) \preceq n$, such that
$$e(\hat{S}_n) \preceq \begin{cases} n^{-1/2}, & \alpha < 1, \\ n^{-1/2}(\log n)^{3/2}, & \alpha = 1, \\ n^{-(\frac{1}{\alpha} - \frac12)}, & \alpha > 1. \end{cases}$$
In the case where $\alpha = 1$, the rate can be improved to $e(\hat{S}_n) \preceq n^{-1/2}\log n$ along a similar proof, since the process $X$ does not include a Gaussian term. In the numerical study we consider a lookback option with strike $1$, that is,
$$f(Y) = \Big(\sup_{t\in[0,1]} Y_t - 1\Big)^+.$$
We treat the stochastic differential equation
$$Y_t = y_0 + \int_0^t a(Y_{s-})\, dX_s$$
with $y_0 = 1$ and $X$ being a symmetric truncated $\alpha$-stable Lévy process with parameters $u = 1$, $c = 0.1$, and $\alpha \in \{0.5, 0.8, 1.2\}$.

In the approach that is to be presented next, we allow the user to specify a desired precision $\delta$, measured in the root mean squared error. The highest level $m$ in the simulation and the iteration numbers $(n_k)_{k=1,\dots,m}$ are then derived from simulations of the coarse levels. The remaining parameters are chosen as
$$\varepsilon_k = 2^{-k} \quad\text{and}\quad h_k = \Big(\frac{2^k\alpha}{2c} + u^{-\alpha}\Big)^{-1/\alpha},$$
which is the unique value with $\nu(B_{h_k}^c) = 2^k$. To obtain a mean squared error of at most $\delta^2$, we want to choose the highest level $m$ and the iteration numbers $n_1, \dots, n_m$ such that
$$\mathrm{bias}(\hat{S}_\delta(f))^2 := \big(E[f(Y) - f(\hat{Y}^{(m)})]\big)^2 \le \frac{\delta^2}{2} \tag{14}$$
and
$$\operatorname{var}(\hat{S}_\delta(f)) \le \frac{\delta^2}{2}. \tag{15}$$
A direct estimate for $\mathrm{bias}(\hat{S}_\delta(f))$ is not available, and therefore we proceed as follows. Put
$$\mathrm{bias}_k(f) := E\big[f(\hat{Y}^{(k)}) - f(\hat{Y}^{(k-1)})\big]$$
for $k \ge 2$, to obtain $\mathrm{bias}(\hat{S}_\delta(f)) = \sum_{k=m+1}^{\infty} \mathrm{bias}_k(f)$, and note that $\mathrm{bias}_k(f)$ can be estimated for small levels $k$ at a small computational cost. This suggests using extrapolation to large levels $k$. In particular, if there is an exponential decay
$$\mathrm{bias}_k(f) \approx \gamma\, \rho^k, \tag{16}$$
with $\gamma > 0$ and $\rho \in (0,1)$, we have $\mathrm{bias}(\hat{S}_\delta(f)) \le \frac{\gamma}{1-\rho}\,\rho^{m+1}$, and thus (14) holds for
$$m = \Big\lceil \frac{\log((1-\rho)\delta) - \log(\sqrt{2}\,\gamma)}{\log(\rho)} \Big\rceil.$$
To check the validity of assumption (16), we estimate $\mathrm{bias}_k(f)$ for small levels $k$ (here $k \le 4$ or $k \le 5$) by the empirical mean of simulations. It turns out that the estimates for $\mathrm{bias}_k(f)$ fit the assumption (16) very well, see Figure 1.

Figure 1: Estimates of $\mathrm{bias}_k(f)$ and $\mathrm{var}_k(f)$ with corresponding regression lines, for $\alpha = 0.5$ and $\alpha = 0.8$.

For the variance of $\hat{S}_\delta(f)$ we proceed in the same way. Put $\mathrm{var}_k(f) := \operatorname{var}(f(\hat{Y}^{(k)}) - f(\hat{Y}^{(k-1)}))$ for $k \ge 2$ and $\mathrm{var}_1(f) := \operatorname{var}(f(\hat{Y}^{(1)}))$, to obtain $\operatorname{var}(\hat{S}_\delta(f)) = \sum_{k=1}^m \mathrm{var}_k(f)/n_k$. Estimates of $\mathrm{var}_k(f)$ for small levels also fit an exponential decay very well, see Figure 1, and we use extrapolation to large levels again. Finally, the replication numbers $n_1, \dots, n_m$ are determined by minimizing the cost of $\hat{S}_\delta(f)$, which is given, up to a constant, by $\sum_{k=1}^m n_k 2^k$, subject to the constraint (15). For several values of $\delta$, the corresponding choice of replication numbers is shown in Figure 2.

We now study the relation between $(E[S(f) - \hat{S}_\delta(f)]^2)^{1/2}$ and $\mathrm{cost}(\hat{S}_\delta(f))$ by means of a simulation experiment. Here, different precisions $\delta$ and the respective replication numbers and maximal levels are chosen according to Figure 2. We use Monte Carlo simulations of $\hat{S}_\delta(f)$ to estimate the root mean squared error and the cost, where the unknown value of $S(f)$ is replaced by the output of a master computation. The results are presented in a log-log plot in Figure 3. In this experiment, the empirical orders of convergence, i.e. the slopes of the regression lines, are close to the asymptotic results from Theorem 1. For $\alpha = 0.5$ and $\alpha = 0.8$, the empirical orders are $0.47$ and $0.46$, where in both cases the asymptotic result is $1/2$. For stability index $\alpha = 1.2$, the order is $0.38$ empirically and $1/3$ according to Theorem 1.
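The two heuristic choices above can be sketched together: the closed-form level thresholds with $\nu(B_{h_k}^c) = 2^k$, the highest level from the extrapolated bias model (16), and the replication numbers from the standard Lagrange-multiplier solution of the constrained cost minimization (the function names are ours; this is an illustration, not the authors' code):

```python
import math

def h_for_tail(k, c, u, alpha):
    """The unique h_k with nu(B_{h_k}^c) = 2^k for the truncated stable measure."""
    return (2.0 ** k * alpha / (2.0 * c) + u ** -alpha) ** (-1.0 / alpha)

def choose_level(gamma, rho, delta):
    """Smallest m with gamma * rho^(m+1) / (1 - rho) <= delta / sqrt(2),
    i.e. such that the extrapolated bias satisfies (14)."""
    x = math.log((1 - rho) * delta / (math.sqrt(2) * gamma)) / math.log(rho)
    return max(1, math.ceil(x - 1))

def choose_replications(var, delta):
    """n_k minimizing sum_k n_k 2^k subject to sum_k var_k / n_k <= delta^2 / 2."""
    cost = [2.0 ** (k + 1) for k in range(len(var))]     # cost per sample on level k+1
    s = sum(math.sqrt(v * c) for v, c in zip(var, cost))
    # Lagrange-multiplier solution: n_k proportional to sqrt(var_k / cost_k)
    return [max(1, math.ceil(2.0 / delta ** 2 * math.sqrt(v / c) * s))
            for v, c in zip(var, cost)]
```

Note that `choose_replications` produces the familiar MLMC allocation in which levels with large variance or small cost receive more samples.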
Let us compare the multilevel algorithm $\hat{S}_\delta$ with the classical Monte Carlo scheme. In the latter case, we do not have a heuristic control of the bias, and so we use the strong approximation error of Theorem 2 as an upper bound. All unknown constants appearing in the error estimate
Figure 2: Replication numbers for different precisions $\delta$, for $\alpha = 0.5$, $\alpha = 0.8$ and $\alpha = 1.2$.
Figure 3: Cost and error of multilevel (MLMC) and classical (MC) Monte Carlo.
as well as the unknown variance are assumed to be $1$. Then we determine the parameters $h$, $\varepsilon$ and the replication number $n$ in order to reach precision $\delta > 0$ for the root mean squared error, namely
$$h = \Big(\frac{2-\alpha}{4c}\,\delta^2\Big)^{\frac{1}{2-\alpha}}, \qquad \varepsilon = \frac{1}{\nu(B_h^c)} \qquad\text{and}\qquad n = \lceil \delta^{-2} \rceil.$$
The empirical orders of convergence of the classical Monte Carlo algorithm, which are $0.45$, $0.34$ and $0.23$ for $\alpha = 0.5$, $0.8$ and $1.2$, are always worse than those of the corresponding multilevel algorithm. Furthermore, the higher the Blumenthal-Getoor index, i.e. the harder the problem itself, the more we benefit from the multilevel idea, since higher levels are needed to reach a desired precision. Concluding, the multilevel scheme provides an algorithm $\hat{S}_\delta(f)$ which achieves a superior order of convergence together with a heuristic control of the error, which is an important feature in applications.

4 Proofs

Proving the main result requires asymptotic error bounds for the strong approximation of $Y$ by $\hat{Y}^{(h,\varepsilon)}$ for given $\varepsilon, h > 0$. We derive an error estimate in terms of the function
$$F(h) := \int_{B_h} |x|^2\, \nu(dx), \qquad h > 0.$$

Theorem 2. Under Assumption A, there exists a constant $\kappa$ depending only on $K$ such that for any $\varepsilon \in (0,1]$ and $h > 0$ with $\nu(B_h^c) \le 1/\varepsilon$, one has
$$E\Big[\sup_{t\in[0,1]} \big|Y_t - \hat{Y}^{(h,\varepsilon)}_t\big|^2\Big] \le \kappa\,\big(\varepsilon \log(e/\varepsilon) + F(h)\big)$$
in the general case, and
$$E\Big[\sup_{t\in[0,1]} \big|Y_t - \hat{Y}^{(h,\varepsilon)}_t\big|^2\Big] \le \kappa\,\big[F(h) + |b - F(h)|^2\,\varepsilon^2\big]$$
in the case without Wiener process, i.e. $\Sigma = 0$.

Remark 3. A similar Euler scheme is analyzed in [Rub03]. There it is shown that the appropriately scaled error process (the discrepancy between approximate and true solution) converges in distribution to a stochastic process. In the cases where this limit theorem is applicable, it is straightforward to verify that the estimate provided in Theorem 2 is of the right order.

Remark 4. In the case without Wiener process, the term $|b - F(h)|^2\varepsilon^2$ is typically of lower order than $F(h)$, so that in most cases we have
$$E\Big[\sup_{t\in[0,1]} \big|Y_t - \hat{Y}^{(h,\varepsilon)}_t\big|^2\Big] \le \kappa\, F(h).$$
The proof of Theorem 2 is based on the analysis of an auxiliary process $(\bar{Y}_t)$: we decompose the Lévy martingale $L$ into the sum of the Lévy martingale $L' = L^{(h)}$, constituted by the compensated jumps of size bigger than $h$, and the remaining part $L'' = (L_t - L'_t)_{t\ge 0}$. We denote $X'_t = \Sigma W_t + L'_t + tb$, and let $\bar{Y} = (\bar{Y}_t)_{t\ge 0}$ be the solution to the integral equation
$$\bar{Y}_t = y_0 + \int_0^t a\big(\bar{Y}_{\iota(s-)}\big)\, dX'_s, \tag{17}$$
where $\iota(t) = \sup([0,t] \cap \mathbb{T})$, and $\mathbb{T}$ is the set of random times $(T_j)_{j\in\mathbb{Z}_+}$ defined by $T_j = T^{(h,\varepsilon)}_j$ as in (5) in Section 2.1.

Proposition 1. Under Assumption A, there exists a constant $\kappa$ depending only on $K$ such that for any $\varepsilon \in (0,1]$ and $h > 0$ with $\nu(B_h^c) \le 1/\varepsilon$ we have
$$E\Big[\sup_{t\in[0,1]} |Y_t - \bar{Y}_t|^2\Big] \le \kappa\,\big(\varepsilon + F(h)\big)$$
in the general case, and
$$E\Big[\sup_{t\in[0,1]} |Y_t - \bar{Y}_t|^2\Big] \le \kappa\,\big[F(h) + |b - F(h)|^2\varepsilon^2\big]$$
in the case without Wiener process, i.e. $\Sigma = 0$.

The proof of the proposition relies on the following lemma.

Lemma 1. Under Assumption A, we have
$$E\Big[\sup_{t\in[0,1]} |Y_t - y_0|^2\Big] \le \kappa,$$
where $\kappa$ is a finite constant depending on $K$ only.

The proof of the lemma can be achieved along the standard argument for proving bounds for second moments: the standard combination of Gronwall's lemma with Doob's inequality yields the result.

Proof of Proposition 1. For $t \ge 0$, we consider $Z_t = Y_t - \bar{Y}_t$ and $Z'_t = Y_t - \bar{Y}_{\iota(t)}$. We fix a stopping time $\tau$ and let $z_\tau(t) = E[\sup_{s\in[0,t\wedge\tau]} |Z_s|^2]$. To indicate that a process is stopped at time $\tau$, we put $\tau$ in its superscript. The main task of the proof is to establish an estimate of the form
$$z_\tau(t) \le \alpha_1 \int_0^t z_\tau(s)\, ds + \alpha_2$$
with values $\alpha_1, \alpha_2 > 0$ that do not depend on the choice of $\tau$. Then, using a localizing sequence of stopping times $(\tau_n)$ with finite $z_{\tau_n}(1)$, we deduce from Gronwall's inequality that
$$E\Big[\sup_{s\in[0,1]} |Y_s - \bar{Y}_s|^2\Big] = \lim_{n\to\infty} z_{\tau_n}(1) \le \alpha_2 \exp(\alpha_1).$$
We analyze
$$Z_t = \underbrace{\int_0^t \big(a(Y_{s-}) - a(\bar{Y}_{\iota(s-)})\big)\, d(\Sigma W_s + L'_s) + \int_0^t a(Y_{s-})\, dL''_s}_{=:M_t} + \int_0^t \big(a(Y_s) - a(\bar{Y}_{\iota(s)})\big)\, b\, ds \tag{18}$$
with $M = (M_t)_{t\ge 0}$ being a local $L^2$-martingale. By Doob's inequality and Lemma 3, we get
$$E\Big[\sup_{s\in[0,t\wedge\tau]} |M_s|^2\Big] \le 4\, E\Big[\int_0^{t\wedge\tau} \big|a(Y_{s-}) - a(\bar{Y}_{\iota(s-)})\big|^2\, d\langle \Sigma W + L'\rangle_s + \int_0^{t\wedge\tau} |a(Y_{s-})|^2\, d\langle L''\rangle_s\Big],$$
where in general, for a local $L^2$-martingale $A$, we set $\langle A\rangle_t = \sum_j \langle A^{(j)}\rangle_t$, with $\langle A^{(j)}\rangle$ denoting the predictable compensator of the classical bracket process of the $j$-th coordinate of $A$. Note that $d\langle \Sigma W + L'\rangle_t = \big(|\Sigma|^2 + \int_{B_h^c} |x|^2\,\nu(dx)\big)\, dt \le 2K^2\, dt$ and similarly $d\langle L''\rangle_t = F(h)\, dt$. Consequently, by Assumption A and Fubini's theorem, we get
$$E\sup_{s\in[0,t\wedge\tau]} |M_s|^2 \le 8K^4 \int_0^t E\big[1_{\{s\le\tau\}} |Z'^{\,\tau}_s|^2\big]\, ds + 4K^2 F(h) \int_0^t E\big[(|Y_s - y_0| + 1)^2\big]\, ds.$$
Conversely, by the Cauchy-Schwarz inequality and Fubini's theorem, one has for $t \in [0,1]$ that
$$E\Big[\Big|\int_0^{t\wedge\tau} \big(a(Y_s) - a(\bar{Y}_{\iota(s)})\big)\, b\, ds\Big|^2\Big] \le K^4 \int_0^t E\big[1_{\{s\le\tau\}} |Z'^{\,\tau}_s|^2\big]\, ds.$$
Using that $(a+b)^2 \le 2a^2 + 2b^2$ for $a, b \in \mathbb{R}$, we deduce with (18) that
$$E\sup_{s\in[0,t\wedge\tau]} |Z_s|^2 \le 18K^4 \int_0^t E\big[1_{\{s\le\tau\}} |Z'^{\,\tau}_s|^2\big]\, ds + 8K^2 F(h) \int_0^t E\big[(|Y_s - y_0| + 1)^2\big]\, ds.$$
Since $Z'_t = Z_t + \bar{Y}_t - \bar{Y}_{\iota(t)}$, we conclude that
$$E\sup_{s\in[0,t\wedge\tau]} |Z_s|^2 \le 36K^4 \int_0^t \Big[E\big[|Z^{\tau}_s|^2\big] + E\big[1_{\{s\le\tau\}} |\bar{Y}_s - \bar{Y}_{\iota(s-)}|^2\big]\Big]\, ds + 8K^2 F(h) \int_0^t E\big[(|Y_s - y_0| + 1)^2\big]\, ds.$$
By Lemma 1, $E[\sup_{s\in[0,1]} (|Y_s - y_0| + 1)^2]$ is bounded by a constant, and we get, for $t \in [0,1]$,
$$z_\tau(t) \le \kappa_1 \Big[\int_0^t \Big[z_\tau(s) + E\big[1_{\{s\le\tau\}} |\bar{Y}_s - \bar{Y}_{\iota(s-)}|^2\big]\Big]\, ds + F(h)\Big], \tag{19}$$
where $\kappa_1$ is a constant that depends only on $K$. Next, we provide an appropriate estimate for $E[1_{\{t\le\tau\}} |\bar{Y}_t - \bar{Y}_{\iota(t)}|^2]$. Since $L'$ has no jumps on $(\iota(t), t)$, we have
$$\bar{Y}_t - \bar{Y}_{\iota(t)} = a\big(\bar{Y}_{\iota(t)}\big)\,\big[\Sigma(W_t - W_{\iota(t)}) + \big(b - F(h)\big)(t - \iota(t))\big],$$
so that
$$E\big[1_{\{t\le\tau\}} |\bar{Y}_t - \bar{Y}_{\iota(t)}|^2\big] \le 2K^2\, E\big[(|\bar{Y}^\tau_{\iota(t)} - y_0| + 1)^2\big]\, \big[|\Sigma|^2\varepsilon + |b - F(h)|^2\varepsilon^2\big].$$
Here we used the independence of the Wiener process and the random times in $\mathbb{T}$. Next, we use that $|\bar{Y}_{\iota(t)} - y_0| \le |Y_{\iota(t)} - y_0| + |Z_{\iota(t)}|$ to deduce that
$$E\big[1_{\{t\le\tau\}} |\bar{Y}_t - \bar{Y}_{\iota(t)}|^2\big] \le 4K^2\Big[E\big[(|Y^\tau_{\iota(t)} - y_0| + 1)^2\big] + E\big[|Z^\tau_{\iota(t)}|^2\big]\Big]\, \big[|\Sigma|^2\varepsilon + |b - F(h)|^2\varepsilon^2\big].$$
Recall that $E[\sup_{s\in[0,1]} (|Y_s - y_0| + 1)^2]$ is uniformly bounded. Moreover, by the Cauchy-Schwarz inequality, $|F(h)|^2 \le K^2\, \nu(B_h^c) \le K^2/\varepsilon$, so that the second bracket in the latter inequality is uniformly bounded. Consequently, for $t \in [0,1]$,
$$E\big[1_{\{t\le\tau\}} |\bar{Y}_t - \bar{Y}_{\iota(t)}|^2\big] \le \kappa_2\,\big[|\Sigma|^2\varepsilon + |b - F(h)|^2\varepsilon^2 + z_\tau(t)\big],$$
where $\kappa_2$ is an appropriate constant that depends only on $K$. Inserting this estimate into (19) gives
$$z_\tau(t) \le \kappa_3\Big[\int_0^t z_\tau(s)\, ds + |\Sigma|^2\varepsilon + |b - F(h)|^2\varepsilon^2 + F(h)\Big]$$
for a constant $\kappa_3$ that depends only on $K$. If $\Sigma = 0$, then the statement of the proposition follows from the Gronwall inequality. The general case is an immediate consequence of the estimates $|\Sigma|^2\varepsilon \le K^2\varepsilon$ and
$$|b - F(h)|^2\varepsilon^2 \le 2K^2(\varepsilon^2 + \varepsilon) \le 4K^2\varepsilon,$$
where we used again that $|F(h)|^2 \le K^2/\varepsilon$. $\Box$

For the proof of Theorem 2, we need a further lemma.

Lemma 2. Let $r \in \mathbb{N}$ and let $(\mathcal{G}_j)_{j=0,1,\dots,r}$ denote a filtration. Moreover, for $j = 0, \dots, r$, let $U_j$ and $V_j$ denote non-negative random variables such that $U_j$ is $\mathcal{G}_j$-measurable, and $V_j$ is $\mathcal{G}_{j+1}$-measurable and independent of $\mathcal{G}_j$. Then one has
$$E\Big[\max_{j=0,\dots,r} U_j V_j\Big] \le E\Big[\max_{j=0,\dots,r} U_j\Big]\; E\Big[\max_{j=0,\dots,r} V_j\Big].$$

Proof. Without loss of generality we can and will assume that $(U_j)$ is monotonically increasing. Otherwise, we can prove the result for $(\tilde{U}_j)$ given by $\tilde{U}_j = \max_{k\le j} U_k$ instead, and then deduce the result for the original sequence $(U_j)$. We proceed by induction. For $r = 0$ the statement is trivial, since $U_0$ and $V_0$ are independent random variables. Next, we let $r \ge 1$ be arbitrary and note that
$$E\Big[\max_{j=0,\dots,r} U_j V_j\Big] = E\Big[\max_{j=1,\dots,r} U_j V_j\Big] + E\Big[\Big(U_0 V_0 - \max_{j=1,\dots,r} U_j V_j\Big)^+\Big].$$
Using the monotonicity of $(U_j)$, we get that
$$E\Big[\Big(U_0 V_0 - \max_{j=1,\dots,r} U_j V_j\Big)^+ \,\Big|\, \mathcal{G}_0\Big] \le U_0\; E\Big[\Big(V_0 - \max_{j=1,\dots,r} V_j\Big)^+ \,\Big|\, \mathcal{G}_0\Big] = U_0\; E\Big[\Big(V_0 - \max_{j=1,\dots,r} V_j\Big)^+\Big].$$
Next, we use the induction hypothesis for E[ max_{j=1,…,r} U_j V_j ] to deduce that

  E[ max_{j=0,…,r} U_j V_j ] ≤ E[U_r] E[ max_{j=1,…,r} V_j ] + E[U_0] E[ (V_0 − max_{j=1,…,r} V_j)^+ ] ≤ E[U_r] E[ max_{j=0,…,r} V_j ],

where the last step uses E[U_0] ≤ E[U_r] together with the identity max_{j=1,…,r} V_j + (V_0 − max_{j=1,…,r} V_j)^+ = max_{j=0,…,r} V_j.
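Lemma 2 is easy to sanity-check by simulation. The sketch below is ours and not part of the paper: it takes X_0, X_1, … i.i.d. uniform with G_j = σ(X_0,…,X_j), sets U_j = max(X_0,…,X_j) (G_j-measurable and increasing in j) and V_j = X_{j+1} (G_{j+1}-measurable and independent of G_j), and compares both sides of the inequality by Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(1)
N, r = 200_000, 4                       # number of sample paths, maximal index r

# X_0, ..., X_{r+1} i.i.d. uniform on [0, 1]
X = rng.random((N, r + 2))

# U_j = max(X_0, ..., X_j): G_j-measurable and monotonically increasing in j
U = np.maximum.accumulate(X[:, : r + 1], axis=1)
# V_j = X_{j+1}: G_{j+1}-measurable and independent of G_j
V = X[:, 1 : r + 2]

lhs = np.max(U * V, axis=1).mean()      # estimates E[max_j U_j V_j]
rhs = np.max(U, axis=1).mean() * np.max(V, axis=1).mean()
print(f"E[max U_j V_j] = {lhs:.4f} <= {rhs:.4f} = E[max U_j] E[max V_j]")
```

Here E[max_j U_j] = E[max_j V_j] = 5/6 (maximum of five uniforms), so the right-hand side is about 0.694, and the simulated left-hand side stays below it, as the lemma predicts.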
Proof of Theorem 2. By Proposition 1, it remains to find an appropriate upper bound for E[ sup_{t∈[0,1]} |Ȳ_t − Ŷ_t|² ], where we denote Ŷ_t = Ŷ^{(h,ε)}_t. First note that, for all j ∈ ℕ, the driving increments used by (Ȳ_t) and (Ŷ_t) over [T_j, T_{j+1}] coincide: both consist of the jump increment ΔL_{T_{j+1}} together with the drift correction −(T_{j+1} − T_j) F(h) and the Wiener contribution, so that, by definition, the processes (Ȳ_t) and (Ŷ_t) coincide almost surely at all times in T (see (7) and (6)). Hence,

  Ȳ_t − Ŷ_t = Ȳ_t − Ȳ_{ι(t)} = a(Ȳ_{ι(t)}) Σ(W_t − W_{ι(t)}) + a(Ȳ_{ι(t)}) (b − F(h)) (t − ι(t)) =: A_t + B_t.

Since two neighboring points in T are at most ε units apart, we get

  E[ sup_{t∈[0,1]} |B_t|² ] ≤ K² E[ (‖Ȳ − y_0‖ + 1)² ] |b − F(h)|² ε².   (20)

It remains to analyze

  E[ sup_{t∈[0,1]} |A_t|² ] ≤ K² |Σ|² E[ sup_{t∈[0,1]} (|Ȳ_{ι(t)} − y_0| + 1)² |W_t − W_{ι(t)}|² ].

We apply Lemma 2 with U_j = 1_{{T_j<∞}} (|Ȳ_{T_j} − y_0| + 1)², V_j = sup_{t∈[T_j,T_{j+1})} |W_t − W_{T_j}|² and G_j = F_{T_j} (j = 0, 1, …). For r ∈ ℕ we obtain

  E[ sup_{t∈[0,1∧T_r]} |A_t|² ] ≤ K² |Σ|² E[ sup_{t∈[0,1∧T_r]} (|Ȳ_{ι(t)} − y_0| + 1)² ] E[ max_{j=0,1,…,r} sup_{t∈[T_j,T_{j+1})} |W_t − W_{T_j}|² ].

By Lévy's modulus of continuity, the term

  ‖W‖_ϕ := sup_{0≤s<t≤1} |W_t − W_s| / ϕ(t − s)

is almost surely finite for ϕ : [0,1] → [0,∞), δ ↦ √(δ log(e/δ)), and, by Fernique's theorem, E[‖W‖²_ϕ] is finite. Recalling that neighboring points in T are at most ε units apart, we conclude with the monotonicity of ϕ on [0,1] that

  E[ max_{j=0,…,r} sup_{t∈[T_j,T_{j+1})} |W_t − W_{T_j}|² ] ≤ E[ ‖W‖²_ϕ ] ϕ(ε)².

Letting r tend to infinity we arrive at

  E[ sup_{t∈[0,1]} |A_t|² ] ≤ K² |Σ|² E[‖W‖²_ϕ] E[ sup_{t∈[0,1]} (|Ȳ_{ι(t)} − y_0| + 1)² ] ϕ(ε)².   (21)

Combining (20) and (21), we get

  E[ ‖Ȳ − Ŷ‖² ] ≤ 2K² E[ (‖Ȳ − y_0‖ + 1)² ] ( |Σ|² E[‖W‖²_ϕ] ϕ(ε)² + |b − F(h)|² ε² ).

Next, note that, by Proposition 1 and Lemma 1, E[ (‖Ȳ − y_0‖ + 1)² ] is bounded from above by some constant depending on K only. Consequently, there exists a constant κ₁ with

  E[ ‖Ȳ − Ŷ‖² ] ≤ κ₁ ( |Σ|² ϕ(ε)² + |b − F(h)|² ε² ).

Together with Proposition 1, one immediately obtains the statement for the case without Wiener process. In order to obtain the statement of the general case, we again use that |∫_{B_h^c ∩ B} x ν(dx)|² ≤ K²/ε due to the Cauchy–Schwarz inequality.
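The scheme analyzed above — an Euler method on a random grid T that contains all jump times of the large-jump part together with a deterministic ε-spaced grid — can be sketched in code. The following one-dimensional sketch is our illustration, not the paper's implementation: it uses the stand-in Lévy measure ν(dx) = |x|^{−1−β} dx restricted to {h < |x| ≤ 1} for the simulated jumps (so the drift correction F(h) vanishes by symmetry), and all function and parameter names are ours.

```python
import numpy as np

def big_jumps(beta, h, T, rng):
    """Jump times and sizes of the large-jump part for the stand-in
    Lévy measure nu(dx) = |x|^(-1-beta) dx restricted to h < |x| <= 1."""
    lam = 2.0 * (h ** -beta - 1.0) / beta        # total mass nu(B_h^c)
    n = rng.poisson(lam * T)
    times = np.sort(rng.uniform(0.0, T, n))
    u = rng.random(n)                            # inverse-transform sampling of |jump|
    sizes = ((1.0 - u) * (h ** -beta - 1.0) + 1.0) ** (-1.0 / beta)
    return times, rng.choice([-1.0, 1.0], n) * sizes

def euler_jump_adapted(a, y0, sigma, b, beta=0.5, h=0.05, eps=0.01, T=1.0, seed=3):
    """Euler scheme on the jump-adapted grid: the regular eps-grid united
    with the jump times of the large-jump part (schematic version of Y-hat)."""
    rng = np.random.default_rng(seed)
    jt, js = big_jumps(beta, h, T, rng)
    grid = np.union1d(np.arange(0.0, T, eps), np.append(jt, T))
    jump_at = dict(zip(jt, js))
    y, t = y0, grid[0]
    for s in grid[1:]:
        dw = rng.normal(0.0, np.sqrt(s - t))     # Brownian increment on (t, s]
        y += a(y) * (sigma * dw + b * (s - t))   # continuous part of the step
        y += a(y) * jump_at.get(s, 0.0)          # jump of the driver at s, if any
        t = s
    return y

y_end = euler_jump_adapted(lambda y: 1.0, 0.0, 0.0, 1.0)
```

With a ≡ 1 the scheme integrates the driver exactly, which yields a simple consistency check: for σ = 0 the terminal value equals y0 + bT plus the sum of the simulated jumps.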
Proof of part (i) of Theorem 1

We can assume without loss of generality that

  1/h^{2/3} ≤ g(h) ≤ 1/(h (log(1/h))^{1+γ}),   (22)

since otherwise, we can modify g in such a way that the new g is larger than the old one and enjoys the wanted properties. We consider a multilevel Monte Carlo algorithm Ŝ (as introduced in Section 2.2) with h_k = g^{-1}(2^k) and ε_k = 2^{-k} for k ∈ ℕ. For technical reasons, we also define ε_0 and h_0 by these formulas, although they do not appear in the algorithm. The parameters m ∈ ℕ and n_1, …, n_m ∈ ℕ are specified below. Note that

  ν(B_{h_k}^c) ≤ g(h_k) = 1/ε_k and ε_k ≤ 1,   (23)

so that, by Theorem 2, we have

  E[ ‖Y^{(k)} − Y^{(k−1)}‖² ] ≤ 2 E[ ‖Y − Y^{(k)}‖² ] + 2 E[ ‖Y − Y^{(k−1)}‖² ] ≤ κ₁ [ F(h_{k−1}) + |b − F(h_{k−1})|² ε_{k−1}² ]

for some constant κ₁ > 0. By Lemma 1 and Theorem 2, E[ ‖Y^{(1)} − y_0‖² ] is bounded from above by some constant depending on K only. Hence, equation (7) together with Theorem 2 implies the existence of a constant κ₂ such that

  e²(Ŝ) ≤ κ₂ ∑_{k=1}^{m+1} (1/n_k) [ F(h_{k−1}) + |b − F(h_{k−1})|² ε_{k−1}² ],   (24)

where we set n_{m+1} = 1. Next, we analyze the terms in (24). Using the assumption of part (i), we have, for a sufficiently small v ∈ (0,1) and an appropriate constant κ₃,

  |F(h_k)| ≤ ∫_B |x| ν(dx) ≤ ∫_B |x| ((|x|/v) ∨ 1) ν(dx) ≤ (1/v) ∫_{B_v^c ∩ B} |x|² ν(dx) + ∫_0^v ν(B_u^c) du
      ≤ (1/v) ∫_B |x|² ν(dx) + κ₃ ∫_0^v du / (u (log(1/u))^{1+γ}).

Both integrals are finite. Moreover, we have

  F(h_k) ≤ g(h_k) h_k² = 2^k g^{-1}(2^k)²,   (25)

and using (22) we get that

  1/y^{3/2} ≤ g^{-1}(y) ≲ 1/(y (log y)^{1+γ}) as y → ∞.   (26)

Hence, we have 2^k g^{-1}(2^k)² ≥ 2^{−2k} = ε_k² for all sufficiently large k, and there exists a constant κ₄ such that

  e²(Ŝ) ≤ κ₄ ∑_{k=1}^{m+1} (1/n_k) 2^k g^{-1}(2^k)².
We shall now fix m and n_1, …, n_m. For a given parameter Z ≥ 1/g^{-1}(1), we choose

  m = m(Z) = inf{ k ∈ ℕ : Z g^{-1}(2^k) < 1 }.

Moreover, we set n_k = n_k(Z) = ⌈Z g^{-1}(2^k)⌉ for k = 1, …, m, and set again n_{m+1} = 1. Then 1/n_k ≤ 2/(Z g^{-1}(2^k)) for k = 1, …, m+1, so that

  e²(Ŝ) ≤ κ₄ ∑_{k=1}^{m+1} (1/n_k) 2^k g^{-1}(2^k)² ≤ (2κ₄/Z) ∑_{k=1}^{m+1} 2^k g^{-1}(2^k).

By (26), 2^k g^{-1}(2^k) ≲ k^{−(1+γ)}, and the latter sum is uniformly bounded for all m. Hence, there exists a constant κ₅ depending only on g and K such that e²(Ŝ) ≤ κ₅/Z.

It remains to analyze the cost of the algorithm. The expected number of breakpoints of Y^{(k)} is less than 1/ε_k + ν(B_{h_k}^c) ≤ 2^{k+1} (see (23)), so that

  cost(Ŝ) ≤ ∑_{k=1}^{m} 2^{k+1} n_k.   (27)

Hence,

  cost(Ŝ) ≤ 4Z ∑_{k=1}^{m} 2^k g^{-1}(2^k) ≤ κ₆ Z,

where κ₆ > 0 is an appropriate constant.

Proof of part (ii) of Theorem 1

We proceed similarly as in the proof of part (i). We assume without loss of generality that g satisfies

  (log(1/h))/h ≤ g(h) ≤ (log(1/h))^γ / h,   (28)

since, otherwise, we can enlarge g appropriately. In analogy to the proof of part (i), we choose h_k = g^{-1}(2^k) and ε_k = 2^{-k} for k ∈ ℕ (together with h_0 and ε_0), and we note that estimates (23) and (25) remain valid. Next, we deduce with equation (7) and Theorem 2 that, for some constant κ₁,

  e²(Ŝ) ≤ κ₁ ∑_{k=1}^{m+1} (1/n_k) [ F(h_{k−1}) + ε_{k−1} log(e/ε_{k−1}) ] ≤ κ₁ ∑_{k=1}^{m+1} (1/n_k) [ 2^{k−1} g^{-1}(2^{k−1})² + 2^{−(k−1)} log(e·2^{k−1}) ],

where again n_{m+1} = 1. Note that (28) implies that

  (log y)/y ≲ g^{-1}(y) ≲ (log y)^γ / y as y → ∞,   (29)

so that, in particular, 2^{−k} log(e·2^k) ≲ 2^k g^{-1}(2^k)² as k tends to infinity. Hence, we find a constant κ₂ such that

  e²(Ŝ) ≤ κ₂ ∑_{k=1}^{m+1} (1/n_k) 2^k g^{-1}(2^k)².
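For a concrete Blumenthal–Getoor-type example, the parameter choices made in the proof of part (i) are easy to tabulate. The sketch below is our illustration only (the stand-in bound g(h) = h^{−β} with β < 1, hence g^{−1}(y) = y^{−1/β}, and all names are ours): it computes m(Z), the replication numbers n_k = ⌈Z g^{−1}(2^k)⌉, the cost bound ∑_k 2^{k+1} n_k, and the error-bound sum ∑_{k≤m+1} 2^k g^{−1}(2^k)²/n_k, so one can watch the cost grow linearly in Z while the squared-error bound decays like 1/Z.

```python
import math

def multilevel_params(Z, beta=0.5):
    """Level parameters from the proof of part (i), for the stand-in
    g(h) = h**-beta, i.e. g^{-1}(y) = y**(-1/beta) (illustration only)."""
    ginv = lambda y: y ** (-1.0 / beta)
    # m(Z) = inf{k in N : Z g^{-1}(2^k) < 1}
    m = 1
    while Z * ginv(2.0 ** m) >= 1.0:
        m += 1
    n = [math.ceil(Z * ginv(2.0 ** k)) for k in range(1, m + 1)]   # n_1, ..., n_m
    cost = sum(2 ** (k + 1) * n[k - 1] for k in range(1, m + 1))   # sum 2^{k+1} n_k
    n_all = n + [1]                                                # n_{m+1} = 1
    err2 = sum(2.0 ** k * ginv(2.0 ** k) ** 2 / n_all[k - 1]
               for k in range(1, m + 2))                           # error-bound sum
    return m, n, cost, err2
```

For β = 1/2 and Z ∈ {10², 10⁴} one finds cost/Z staying within a small constant range while Z times the error-bound sum stays bounded, in line with e²(Ŝ) ≤ κ₅/Z and cost(Ŝ) ≤ κ₆ Z.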
For a parameter Z ≥ e ∨ (1/g^{-1}(1)), we choose m = m(Z) = inf{ k ∈ ℕ : Z g^{-1}(2^k) < 1 }, and we set n_k = n_k(Z) = ⌈Z g^{-1}(2^k)⌉ for k = 1, …, m. Then we get with (29) that

  e²(Ŝ) ≤ (2κ₂/Z) ∑_{k=1}^{m+1} 2^k g^{-1}(2^k) ≤ κ₃ m^{γ+1}/Z

for an appropriate constant κ₃. By definition, Z g^{-1}(2^m) ≥ 1, so that, by equation (28),

  m ≤ log g(1/Z) / log 2 ≲ log Z as Z → ∞.

Consequently, there exists a constant κ₄ such that

  e²(Ŝ) ≤ κ₄ (log Z)^{γ+1} / Z.

Similarly, we get for the cost function

  cost(Ŝ) ≤ ∑_{k=1}^{m} 2^{k+1} n_k ≤ 4Z ∑_{k=1}^{m} 2^k g^{-1}(2^k) ≤ κ₅ Z (log Z)^{γ+1}.

Next, we choose

  Z = Z(n) = n / (2κ₅ (log n)^{γ+1})

for n sufficiently large such that Z ≥ e ∨ (1/g^{-1}(1)). Then cost(Ŝ) ≤ κ₅ Z (log Z)^{γ+1} ≤ n/2, and we have, for all sufficiently large n, cost(Ŝ) ≤ n. Conversely, we find

  e²(Ŝ) ≤ κ₄ (log Z)^{γ+1}/Z ≲ (log n)^{2(γ+1)} / n.

Proof of part (iii) of Theorem 1

First note that the property assumed in part (iii) is equivalent to

  (γ/2) g^{-1}(u) ≤ g^{-1}(2u)   (30)

for all sufficiently large u > 0. This implies that there exists a finite constant κ₁ depending only on g such that, for all k, l ∈ ℤ₊ with k ≤ l, one has

  g^{-1}(2^k) ≤ κ₁ (2/γ)^{l−k} g^{-1}(2^l).   (31)

We proceed similarly as in the proof of part (i) and consider Ŝ with h_k = g^{-1}(2^k) and ε_k = 2^{-k} for k ∈ ℤ₊. The maximal index m and the numbers of iterations n_k are fixed later in the discussion. Again estimates (23) and (25) remain valid and we get with Theorem 2

  e²(Ŝ) ≤ κ₂ ∑_{k=1}^{m+1} (1/n_k) [ F(h_{k−1}) + ε_{k−1} log(e/ε_{k−1}) ] ≤ κ₂ ∑_{k=1}^{m+1} (1/n_k) [ 2^{k−1} g^{-1}(2^{k−1})² + 2^{−(k−1)} log(e·2^{k−1}) ],
for a constant κ₂ and n_{m+1} = 1 as before. By (30), we have g^{-1}(2^k) ≳ (γ/2)^k and, recalling that γ > 1, we conclude that 2^{−k} log(e·2^k) ≲ 2^k g^{-1}(2^k)² as k tends to infinity. Hence, there exists a constant κ₃ with

  e²(Ŝ) ≤ κ₃ ∑_{k=1}^{m+1} (1/n_k) 2^k g^{-1}(2^k)².

Conversely, we again estimate the cost by

  cost(Ŝ) ≤ ∑_{k=1}^{m} 2^{k+1} n_k.

Now we specify m and n_1, …, n_m. For a given parameter Z ≥ 2/g^{-1}(2), we let

  m = m(Z) = inf{ k ∈ ℕ : Z g^{-1}(2^k) < 2 },

and set n_k = n_k(Z) = ⌊Z g^{-1}(2^k)⌋ for k = 1, …, m. Then we get with (31) that there exists a constant κ₄ with

  e²(Ŝ) ≤ (2κ₃/Z) ∑_{k=1}^{m+1} 2^k g^{-1}(2^k) ≤ (2κ₁κ₃/Z) ∑_{k=1}^{m+1} 2^k (2/γ)^{m+1−k} g^{-1}(2^{m+1})
      = (2κ₁κ₃/Z) 2^{m+1} g^{-1}(2^{m+1}) ∑_{k=1}^{m+1} γ^{−(m+1−k)} ≤ κ₄ (1/Z) 2^{m+1} g^{-1}(2^{m+1}).

Moreover, by (30) one has, for sufficiently large m (or, equivalently, for sufficiently large Z), that Z g^{-1}(2^{m+1}) ≥ (γ/2) Z g^{-1}(2^m) ≥ γ > 1, so that

  e²(Ŝ) ≤ κ₄ 2^{m+1} g^{-1}(2^{m+1})².

Similarly, one obtains

  cost(Ŝ) ≤ κ₅ Z 2^{m+1} g^{-1}(2^{m+1}) < 2κ₅ 2^{m+1}.

Next, we choose, for given n ≥ 2κ₅, Z > 0 such that m + 1 = ⌊log₂(n/(2κ₅))⌋. Then, clearly, cost(Ŝ) ≤ n, and for sufficiently large n we have

  e²(Ŝ) ≤ κ₄ (n/(2κ₅)) g^{-1}(n/(4κ₅))² ≲ n g^{-1}(n)² as n → ∞.

In the last step, we again used property (31).

Proof of Corollary 1

We assume without loss of generality that β < 2, since otherwise the statement of the corollary is trivial. We fix β′ ∈ (β, 2] and recall that

  C := ∫_B |x|^{β′} ν(dx)

is finite. We consider ḡ : (0,∞) → (0,∞), h ↦ ∫ (1 ∧ (|x|²/h²)) ν(dx), whose integral we split for h ∈ (0,1] into three parts:

  ḡ(h) = (1/h²) ∫_{B_h} |x|² ν(dx) + ∫_{B∖B_h} ν(dx) + ∫_{B^c} ν(dx) =: I₁ + I₂ + I₃.
The last term does not depend on h, and we estimate the first two terms by

  I₁ ≤ h^{−β′} ∫_{B_h} |x|^{β′} ν(dx) ≤ C h^{−β′} and I₂ ≤ h^{−β′} ∫_{B∖B_h} |x|^{β′} ν(dx) ≤ C h^{−β′}.

Hence, we can choose β″ ∈ (β′ ∨ 1, 2] arbitrarily, and a decreasing and invertible function g : (0,∞) → (0,∞) that dominates ḡ and satisfies g(h) = h^{−β″} for all sufficiently small h > 0. Then, for γ = 2^{1−1/β″}, one has g((γ/2)h) = 2g(h) for all sufficiently small h > 0 and, by part (iii) of Theorem 1, there exists a sequence of multilevel algorithms Ŝ_n such that

  e(Ŝ_n) ≲ n^{−(2−β″)/(2β″)}.

The general statement follows via a diagonal argument. In the case where β < 1, one can choose β′ = 1 and β″ > 1 arbitrarily close to one and gets the result. Whereas for β ≥ 1, one can choose, for any β″ > β, an appropriate β′ and thus retrieve the statement.

Appendix

We shall use the following consequence of the Itô isometry for Lévy processes.

Lemma 3. Let (A_t) be a previsible process with state space ℝ^{d_Y × d_X}, let (L_t) be a square integrable ℝ^{d_X}-valued Lévy martingale, and denote by ⟨L⟩ the process given via ⟨L⟩_t = ∑_{j=1}^{d_X} ⟨L^{(j)}⟩_t, where ⟨L^{(j)}⟩ denotes the predictable compensator of the classical bracket process for the j-th coordinate of L. One has, for any stopping time τ with finite expectation E[∫_0^τ ‖A_s‖² d⟨L⟩_s], that (∫_0^{t∧τ} A_s dL_s)_{t≥0} is a uniformly square integrable martingale which satisfies

  E[ ‖∫_0^τ A_s dL_s‖² ] ≤ E[ ∫_0^τ ‖A_s‖² d⟨L⟩_s ].

In general, the estimate can be strengthened by replacing the Frobenius norm by the matrix norm induced by the Euclidean norm. For the convenience of the reader we provide a proof of the lemma.

Proof. Let ν denote the Lévy measure of L and ΣΣ* the covariance of its Wiener component. Let Q : ℝ^{d_X} → ℝ^{d_X} denote the self-adjoint operator given by

  Qx = ΣΣ* x + ∫ ⟨x, y⟩ y ν(dy).

We recall the Itô isometry for Lévy processes. One has, for a previsible process (A_s) with E[∫_0^τ ‖A_s Q^{1/2}‖² ds] < ∞,
that (∫_0^{t∧τ} A_s dL_s)_{t≥0} is a uniformly square integrable martingale with

  E[ ‖∫_0^τ A_s dL_s‖² ] = E[ ∫_0^τ ‖A_s Q^{1/2}‖² ds ].

The statement now follows immediately by noticing that ‖A_s Q^{1/2}‖² ≤ ‖A_s‖² tr(Q) and

  ∫_0^τ ‖A_s‖² tr(Q) ds = ∫_0^τ ‖A_s‖² d⟨L⟩_s.

Acknowledgments. The authors would like to thank Michael Scheutzow for the proof of Lemma 2. Felix Heidenreich was supported by the DFG-Schwerpunktprogramm 1324.

References

[AD9] F. Aurzada and S. Dereich. The coding complexity of Lévy processes. Found. Comput. Math., 9(3):359–390, 2009.

[App4] D. Applebaum. Lévy processes and stochastic calculus, volume 93 of Cambridge Studies in Advanced Mathematics. Cambridge University Press, Cambridge, 2004.

[AR] S. Asmussen and J. Rosiński. Approximations of small jumps of Lévy processes with a view towards simulation. J. Appl. Probab., 38(2):482–493, 2001.

[Avi9] R. Avikainen. On irregular functionals of SDEs and the Euler scheme. Finance Stoch., 13(3):381–401, 2009.

[Ber98] J. Bertoin. Lévy processes. Cambridge Univ. Press, 1998.

[CDMGR8] J. Creutzig, S. Dereich, T. Müller-Gronbach, and K. Ritter. Infinite-dimensional quadrature and approximation of distributions. Found. Comput. Math., 2008.

[Der] S. Dereich. Multilevel Monte Carlo for Lévy-driven SDEs with Gaussian correction. To appear in Ann. Appl. Probab., 2010.

[Gil8a] M. B. Giles. Improved multilevel Monte Carlo convergence using the Milstein scheme. In Monte Carlo and quasi-Monte Carlo methods 2006, pages 343–358. Springer, Berlin, 2008.

[Gil8b] M. B. Giles. Multilevel Monte Carlo path simulation. Oper. Res., 56(3):607–617, 2008.

[JKMP5] J. Jacod, T. G. Kurtz, S. Méléard, and P. Protter. The approximate Euler method for Lévy driven stochastic differential equations. Ann. Inst. H. Poincaré Probab. Statist., 41(3):523–558, 2005.

[MGR9] T. Müller-Gronbach and K. Ritter. Variable subspace sampling and multi-level algorithms. In P. L'Ecuyer and A. Owen, editors, Monte Carlo and Quasi-Monte Carlo Methods 2008. Springer-Verlag, Berlin, 2009.
[Pro5] P. Protter. Stochastic integration and differential equations, volume 21 of Stochastic Modelling and Applied Probability. Springer-Verlag, Berlin, 2nd edition, 2005.

[PT97] P. Protter and D. Talay. The Euler scheme for Lévy driven stochastic differential equations. Ann. Probab., 25(1):393–423, 1997.

[Rub3] S. Rubenthaler. Numerical simulation of the solution of a stochastic differential equation driven by a Lévy process. Stochastic Process. Appl., 103(2):311–349, 2003.

[Sat99] K. Sato. Lévy processes and infinitely divisible distributions, volume 68 of Cambridge Studies in Advanced Mathematics. Cambridge University Press, Cambridge, 1999.

[TKH9] H. Tanaka and A. Kohatsu-Higa. An operator approach for Markov chain weak approximations with an application to infinite activity Lévy driven SDEs. Ann. Appl. Probab., 19(3):1026–1062, 2009.
More informationEuler Equations: local existence
Euler Equations: local existence Mat 529, Lesson 2. 1 Active scalars formulation We start with a lemma. Lemma 1. Assume that w is a magnetization variable, i.e. t w + u w + ( u) w = 0. If u = Pw then u
More informationStochastic Processes. Winter Term Paolo Di Tella Technische Universität Dresden Institut für Stochastik
Stochastic Processes Winter Term 2016-2017 Paolo Di Tella Technische Universität Dresden Institut für Stochastik Contents 1 Preliminaries 5 1.1 Uniform integrability.............................. 5 1.2
More informatione - c o m p a n i o n
OPERATIONS RESEARCH http://dx.doi.org/1.1287/opre.111.13ec e - c o m p a n i o n ONLY AVAILABLE IN ELECTRONIC FORM 212 INFORMS Electronic Companion A Diffusion Regime with Nondegenerate Slowdown by Rami
More informationNew Multilevel Algorithms Based on Quasi-Monte Carlo Point Sets
New Multilevel Based on Quasi-Monte Carlo Point Sets Michael Gnewuch Institut für Informatik Christian-Albrechts-Universität Kiel 1 February 13, 2012 Based on Joint Work with Jan Baldeaux (UTS Sydney)
More informationStochastic Computation
Stochastic Computation A Brief Introduction Klaus Ritter Computational Stochastics TU Kaiserslautern 1/1 Objectives Objectives of Computational Stochastics: Construction and analysis of (A) algorithms
More informationWEAK VERSIONS OF STOCHASTIC ADAMS-BASHFORTH AND SEMI-IMPLICIT LEAPFROG SCHEMES FOR SDES. 1. Introduction
WEAK VERSIONS OF STOCHASTIC ADAMS-BASHFORTH AND SEMI-IMPLICIT LEAPFROG SCHEMES FOR SDES BRIAN D. EWALD 1 Abstract. We consider the weak analogues of certain strong stochastic numerical schemes considered
More informationLecture 22 Girsanov s Theorem
Lecture 22: Girsanov s Theorem of 8 Course: Theory of Probability II Term: Spring 25 Instructor: Gordan Zitkovic Lecture 22 Girsanov s Theorem An example Consider a finite Gaussian random walk X n = n
More informationTheory and Applications of Stochastic Systems Lecture Exponential Martingale for Random Walk
Instructor: Victor F. Araman December 4, 2003 Theory and Applications of Stochastic Systems Lecture 0 B60.432.0 Exponential Martingale for Random Walk Let (S n : n 0) be a random walk with i.i.d. increments
More informationExercises in stochastic analysis
Exercises in stochastic analysis Franco Flandoli, Mario Maurelli, Dario Trevisan The exercises with a P are those which have been done totally or partially) in the previous lectures; the exercises with
More informationSmall-time asymptotics of stopped Lévy bridges and simulation schemes with controlled bias
Small-time asymptotics of stopped Lévy bridges and simulation schemes with controlled bias José E. Figueroa-López 1 1 Department of Statistics Purdue University Seoul National University & Ajou University
More informationSolving the Poisson Disorder Problem
Advances in Finance and Stochastics: Essays in Honour of Dieter Sondermann, Springer-Verlag, 22, (295-32) Research Report No. 49, 2, Dept. Theoret. Statist. Aarhus Solving the Poisson Disorder Problem
More informationJump Processes. Richard F. Bass
Jump Processes Richard F. Bass ii c Copyright 214 Richard F. Bass Contents 1 Poisson processes 1 1.1 Definitions............................. 1 1.2 Stopping times.......................... 3 1.3 Markov
More informationAn Introduction to Malliavin calculus and its applications
An Introduction to Malliavin calculus and its applications Lecture 3: Clark-Ocone formula David Nualart Department of Mathematics Kansas University University of Wyoming Summer School 214 David Nualart
More informationWiener Measure and Brownian Motion
Chapter 16 Wiener Measure and Brownian Motion Diffusion of particles is a product of their apparently random motion. The density u(t, x) of diffusing particles satisfies the diffusion equation (16.1) u
More informationA Barrier Version of the Russian Option
A Barrier Version of the Russian Option L. A. Shepp, A. N. Shiryaev, A. Sulem Rutgers University; shepp@stat.rutgers.edu Steklov Mathematical Institute; shiryaev@mi.ras.ru INRIA- Rocquencourt; agnes.sulem@inria.fr
More informationGAUSSIAN PROCESSES; KOLMOGOROV-CHENTSOV THEOREM
GAUSSIAN PROCESSES; KOLMOGOROV-CHENTSOV THEOREM STEVEN P. LALLEY 1. GAUSSIAN PROCESSES: DEFINITIONS AND EXAMPLES Definition 1.1. A standard (one-dimensional) Wiener process (also called Brownian motion)
More informationOn the martingales obtained by an extension due to Saisho, Tanemura and Yor of Pitman s theorem
On the martingales obtained by an extension due to Saisho, Tanemura and Yor of Pitman s theorem Koichiro TAKAOKA Dept of Applied Physics, Tokyo Institute of Technology Abstract M Yor constructed a family
More informationThe Lévy-Itô decomposition and the Lévy-Khintchine formula in31 themarch dual of 2014 a nuclear 1 space. / 20
The Lévy-Itô decomposition and the Lévy-Khintchine formula in the dual of a nuclear space. Christian Fonseca-Mora School of Mathematics and Statistics, University of Sheffield, UK Talk at "Stochastic Processes
More informationShort-time expansions for close-to-the-money options under a Lévy jump model with stochastic volatility
Short-time expansions for close-to-the-money options under a Lévy jump model with stochastic volatility José Enrique Figueroa-López 1 1 Department of Statistics Purdue University Statistics, Jump Processes,
More informationStochastic Analysis. Prof. Dr. Andreas Eberle
Stochastic Analysis Prof. Dr. Andreas Eberle March 13, 212 Contents Contents 2 1 Lévy processes and Poisson point processes 6 1.1 Lévy processes.............................. 7 Characteristic exponents.........................
More informationSelected Exercises on Expectations and Some Probability Inequalities
Selected Exercises on Expectations and Some Probability Inequalities # If E(X 2 ) = and E X a > 0, then P( X λa) ( λ) 2 a 2 for 0 < λ
More informationSome Tools From Stochastic Analysis
W H I T E Some Tools From Stochastic Analysis J. Potthoff Lehrstuhl für Mathematik V Universität Mannheim email: potthoff@math.uni-mannheim.de url: http://ls5.math.uni-mannheim.de To close the file, click
More informationBrownian Motion. Chapter Definition of Brownian motion
Chapter 5 Brownian Motion Brownian motion originated as a model proposed by Robert Brown in 1828 for the phenomenon of continual swarming motion of pollen grains suspended in water. In 1900, Bachelier
More informationThe main results about probability measures are the following two facts:
Chapter 2 Probability measures The main results about probability measures are the following two facts: Theorem 2.1 (extension). If P is a (continuous) probability measure on a field F 0 then it has a
More information