SMALL-TIME EXPANSIONS FOR LOCAL JUMP-DIFFUSION MODELS WITH INFINITE JUMP ACTIVITY


JOSÉ E. FIGUEROA-LÓPEZ AND CHENG OUYANG

Abstract. We consider a Markov process $\{X_t^{(x)}\}_{t\ge 0}$ with initial condition $X_0^{(x)}=x$, which is the solution of a stochastic differential equation driven by a Lévy process $Z$ and an independent Wiener process $W$. Under some regularity conditions, including non-degeneracy of the diffusive and jump components of the process as well as smoothness of the Lévy density of $Z$, we obtain a small-time second-order polynomial expansion in $t$ for the tail distribution and the transition density of the process $X$. The method of proof combines a recent approach for regularizing the tail probability $P(X_t^{(x)}\ge x+y)$ with classical results based on Malliavin calculus for purely-jump processes, which have to be extended here to deal with the mixture model $X$. As an application, the leading term for out-of-the-money option prices in short maturity under a local jump-diffusion model is also derived.

1. Introduction

It is well known by practitioners that the market implied volatility skew is more pronounced as the expiration time approaches. Such a phenomenon indicates that a jump risk should be included in classical purely-continuous financial models (e.g., local volatility models and stochastic volatility models) in order to reproduce more accurately the implied volatility skews observed in short-term option prices (see [8], Chapter 1, for further motivation of jumps in asset price modeling). Moreover, further studies have shown that accurate modeling of the option market and asset prices requires a mixture of a continuous diffusive component and a jump component. For instance, based on statistical methods, several empirical studies have rejected statistically the null hypothesis of either a purely-continuous or a purely-jump model (see, e.g., [1], [ ], [3], [6]). Similarly, by contrasting the observed behavior of at-the-money (ATM) and out-of-the-money (OTM) call option prices near expiration with their theoretical behavior with and without jumps, [7] and [ ] argued that both a continuous and a jump component are necessary to explain the implied volatility behavior of S&P 500 index options.

The study of small-time asymptotics of option prices and implied volatilities has grown significantly during the last decade, as it provides a convenient tool for testing various pricing models and calibrating the parameters of each model. For recent accounts of the subject in the case of purely-continuous models, we refer the reader to [15] for local volatility models, [9] for the Heston model, [13] for a general uncorrelated local-stochastic volatility model, and [5] for general stochastic volatility models. Broadly speaking, for purely-continuous models, OTM option prices vanish at the rate $O(e^{-c/t})$, where $t$ is the time-to-maturity. In this case, the corresponding implied volatility converges to a nonnegative constant, which is characterized by the Riemannian distance from the current stock price to the strike price under the Riemannian metric induced by the diffusion coefficient of the model.

1991 Mathematics Subject Classification. 60J75, 60G51, 60F10.
First author supported in part by NSF Grant DMS.

For Lévy processes, [1], [8] and [31] show that OTM option prices are generally asymptotically proportional to the time-to-maturity $t$. This fact in turn implies that the corresponding implied volatility explodes as $t\to 0$ in such models. The explosion rate was proved to be asymptotically equivalent to $(t\log(1/t))^{-1/2}$ in [31], while the second order and higher order asymptotics were obtained in [1] and [14], respectively. A similar result was also obtained for a stochastic volatility model with independent Lévy jumps in [1], while [1] studied the first order short-time asymptotics for OTM and ATM option prices under a relatively general class of Itô semimartingales with jumps.

In spite of the ample literature on the asymptotic behavior of the transition densities and option prices for either purely-continuous or purely-jump models, results on local jump-diffusion models are scarce. One difficulty in dealing with these models arises from the more complex interplay of the jump and continuous components. The current article is thus an important contribution in this direction, as it analyzes the higher-order asymptotic behavior of the transition distributions and densities of local jump-diffusion models.

In addition to the asymptotics of vanilla option prices, short-time asymptotics for the transition densities and distributions of Markov processes have also proved to be useful in other financial applications, such as non-parametric estimation of the model under high-frequency sampling and Monte-Carlo based pricing of path-dependent options such as lookback or Asian options. In all these applications, a certain discretization of the continuous-time object under study is needed and, in that case, short-time asymptotics are important in the study of the rate of convergence of the discrete-time proxies to their continuous-time analogs.

The small-time behavior of the transition densities of Markov processes $\{X_t^{(x)}\}_{t\ge 0}$ with deterministic initial condition $X_0^{(x)}=x$ has been studied for a long time in the literature, especially for either purely-continuous or purely-discontinuous processes. Starting from the problem of existence, there are several sets of sufficient conditions for the existence of the transition densities, $y\mapsto p_t(y;x)$, largely based on the machinery of Malliavin calculus, originally developed for continuous diffusions (see the monograph [ ]) and then extended to Markov processes with jumps (see the monograph [6]). This approach can also yield estimates of the transition density $p_t(y;x)$ in small time $t$. For purely-jump Markov processes, the key assumption is that the Lévy measure of the process admits a smooth Lévy density. The pioneer of this approach was Léandre [17], who obtained the first-order small-time asymptotic behavior of the transition density for fully supported Lévy densities. This result was extended in [16] to the case where the point $y$ cannot be reached from $x$ with only one jump but rather with finitely many jumps, while [4] developed a new method for proving existence of the transition densities which allows Lévy measures with a possibly singular component. The main result in [17] states that
$$\lim_{t\to 0}\frac{1}{t}\,p_t(y;x)=g(y;x),$$
where $g(y;x)$ is the Lévy density of the process $X$ to be defined below (see (1.6)). Léandre's approach consists of separating the small jumps (say, those with sizes smaller than an $\varepsilon>0$) from the large jumps of the underlying Lévy process, and conditioning on the number of large jumps by time $t$.
Malliavin calculus is then applied to control the density given that there is no large jump. For $\varepsilon>0$ small enough, the term corresponding to exactly one large jump is shown to be equivalent, up to a remainder of order $o(t)$, to the term that would result if there were no small-jump component at all. Finally, the terms corresponding to more than one large jump are shown to be of order $O(t^2)$. Higher order expansions of the transition density of Markov processes with jumps have been considered only quite recently, and only for processes with finite jump activity (see, e.g., [3]) or for Lévy

processes with possibly infinite jump activity. We focus on the literature of the latter case due to its close connection to the present work. The first attempt at obtaining higher order expansions for Lévy processes was given in [9] using Léandre's approach. Concretely, the following expansion for the transition densities $\{p_t(y)\}_{t\ge 0}$ of a Lévy process $\{Z_t\}_{t\ge 0}$ was proposed there:
(1.1) $\displaystyle p_t(y):=\frac{d}{dy}P(Z_t\le y)=\sum_{n=1}^{N-1}a_n(y)\frac{t^n}{n!}+O(t^N),\qquad (y\neq 0,\ N\in\mathbb{N}).$
As in [17], the idea was to justify that each higher order term (say, the term corresponding to $k$ large jumps) can be replaced, up to a remainder of order $O(t^N)$, by the density that would result if there were no small-jump component at all. However, this method of proof fails for $k\ge 2$, and the expressions proposed there are in general incorrect for the higher order terms (in fact, they are only valid for compound Poisson processes). This issue was subsequently resolved in [11] using a new approach, where the expansion (1.1) was obtained under the assumption that the Lévy density of the Lévy process $\{Z_t\}_{t\ge 0}$ is sufficiently smooth and the following technical condition holds:
(1.2) $\displaystyle \sup_{0<s<t_0}\ \sup_{|y|>\delta}\bigl|p_s^{(k)}(y)\bigr|<\infty,$
for any $k\in\mathbb{N}$ and $\delta>0$, and some $t_0:=t_0(\delta,k)>0$. There are two key ideas in [11]. Firstly, instead of working directly with the transition densities, the following analogous expansion for the tail probabilities was considered:
(1.3) $\displaystyle P(Z_t\ge y)=\sum_{n=1}^{N-1}A_n(y)\frac{t^n}{n!}+t^N R_t(y),\qquad (y>0,\ N\in\mathbb{N}),$
where $\sup_{0<t\le t_0}|R_t(y)|<\infty$ for some $t_0>0$. Secondly, by considering a smooth thresholding of the large jumps (so that the density of the large jumps is smooth) and conditioning on the size of the first jump, it was possible to regularize the discontinuous functional $\mathbf{1}_{\{Z_t\ge y\}}$ and, subsequently, to use an iterated Dynkin formula to expand the resulting smooth moment functions $E(f(Z_t))$ as a power series in $t$. The expansion (1.1) was then obtained by differentiation of (1.3), after justifying that the functions $A_n(y)$ and the remainder $R_t(y)$ are differentiable in $y$. Recently, [1] formally proved that (1.1) can be derived from (1.3) using only the conditions of Léandre [17], without the technical condition (1.2).

The results described in the previous paragraph open the door to the study of higher order expansions for the transition densities of more general Markov models with infinite jump activity. We take the analysis one step further and consider a jump-diffusion model with non-degenerate diffusion and jump components. Our analysis can also be applied to purely-discontinuous processes as in [17], but we prefer to consider a mixture model due to its relevance in financial applications where, as explained above, both continuous and jump components are necessary. More concretely, we consider the following Stochastic Differential Equation (SDE):
(1.4) $\displaystyle X_t(x)=x+\int_0^t b(X_u(x))\,du+\int_0^t\sigma(X_u(x))\,dW_u+\sum_{u\in(0,t]:\,\Delta Z_u\neq 0}\gamma\bigl(X_{u^-}(x),\Delta Z_u\bigr),$
where $b,\sigma:\mathbb{R}\to\mathbb{R}$ and $\gamma:\mathbb{R}\times\mathbb{R}\to\mathbb{R}$ are suitable deterministic functions, $\{W_t\}_{t\ge 0}$ is a Wiener process, and $\{Z_t\}_{t\ge 0}$ is an independent purely-jump Lévy process of bounded variation. The latter condition on $Z$ is not essential for our analysis and is only imposed for ease of notation.
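For readers who want to see the first-order statement behind (1.3) at work on a concrete case, the following small numerical sketch (ours, not part of the paper) compares $P(Z_t\ge y)/t$ with the candidate leading coefficient $A_1(y)=\nu([y,\infty))$ for a Gamma subordinator, a hypothetical choice made only because both quantities are available in closed form.

```python
# Illustrative sketch (not from the paper): numerical check of the leading term of the
# tail expansion (1.3), P(Z_t >= y) ~ t * nu([y, infty)), for a Gamma subordinator with
# Levy density h(z) = c * exp(-lam*z)/z, z > 0 (a hypothetical example).
import numpy as np
from scipy.stats import gamma
from scipy.special import exp1

c, lam = 1.0, 2.0          # assumed Gamma-process parameters
y = 1.5                    # tail level, y > 0

# Leading coefficient A_1(y) = integral_y^infty c*exp(-lam*z)/z dz = c * E_1(lam*y)
A1 = c * exp1(lam * y)

for t in [0.5, 0.1, 0.02, 0.004]:
    # Exact marginal law: Z_t ~ Gamma(shape = c*t, scale = 1/lam)
    tail = gamma.sf(y, a=c * t, scale=1.0 / lam)
    print(f"t = {t:6.3f}   P(Z_t >= y)/t = {tail / t:.6f}   A_1(y) = {A1:.6f}")
```

As $t$ decreases, the ratio $P(Z_t\ge y)/t$ approaches $A_1(y)$, which is the behavior that the higher-order expansions discussed above refine.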

Under some regularity conditions on $b$, $\sigma$ and $\gamma$, as well as on the Lévy measure $\nu$ of $Z$, we show the following second order expansion (as $t\to 0$) for the tail distribution of $\{X_t(x)\}_{t\ge 0}$:
(1.5) $\displaystyle P\bigl(X_t(x)\ge x+y\bigr)=tA_1(x;y)+\frac{t^2}{2}A_2(x;y)+O(t^3),\qquad x\in\mathbb{R},\ y>0.$
The key assumptions on the SDE's coefficients are some non-degeneracy conditions on $\sigma$ and $\gamma$ that ensure the existence of the transition density and that allow us to use Malliavin calculus. As in [17], the key assumption on the Lévy measure $\nu$ of $Z$ is that it admits a density $h:\mathbb{R}\setminus\{0\}\to\mathbb{R}_+$ that is bounded and sufficiently smooth outside any neighborhood of the origin. In that case, the leading term $A_1(x;y)$ depends only on the jump component of the process, as follows:
$$A_1(x;y)=\nu\bigl(\{\zeta:\gamma(x,\zeta)\ge y\}\bigr)=\int_{\{\zeta:\gamma(x,\zeta)\ge y\}}h(\zeta)\,d\zeta.$$
The second order term $A_2(x;y)$ admits a slightly more complicated (but explicit) representation. As a consequence of this correction term $A_2(x;y)$, one can explicitly characterize the effects of the drift $b$ and the diffusion coefficient $\sigma$ on the likelihood of a large positive move (say, a move of size larger than $y$) during a short time period $t$. For instance, in the case that $\gamma$ does not depend on the current state $x$ (i.e., $\gamma(x,\zeta)=\gamma(\zeta)$ for all $x$), we find that a positive spot drift value $b(x)$ increases the probability of a large positive move by $b(x)g(y)t^2(1+o(1))$, where $g(y)$ is the so-called Lévy density of the process $X$, defined by $g(y):=-\frac{d}{dy}\nu(\{\zeta:\gamma(\zeta)\ge y\})$. Similarly, since typically $g'(y)<0$ for $y>0$, the effect of a non-zero spot volatility $\sigma(x)$ is to increase the probability of a large positive move by $-\frac{1}{2}\sigma^2(x)g'(y)\,t^2(1+o(1))$.

Once the asymptotic expansion for the tail distribution is obtained, we take the analysis one step further and obtain a second order expansion for the transition density function $p_t(y;x)$ by differentiating the tail distribution with respect to $y$, provided proper regularity conditions hold. As expected, the leading term of the transition density $p_t(y;x)$ is of the form $t\,g(y;x)$, where $g(y;x)$ is the so-called Lévy density of the process $\{X_t(x)\}_{t\ge 0}$, defined by
(1.6) $\displaystyle g(y;x):=-\frac{d}{dy}\nu\bigl(\{\zeta:\gamma(x,\zeta)\ge y\}\bigr).$
One of the main difficulties here comes from controlling the first term (which corresponds to the case of no large jumps). Hence, we generalize the result in [17] to the case where there is a non-degenerate diffusion component. Again, Malliavin calculus proves to be the key tool for this task.

As an application of the above asymptotic formulas, we derive the leading term of the small-time expansion for the prices of out-of-the-money European call options with strike price $K$ and expiry $T$. Specifically, let $\{S_t\}_{t\ge 0}$ be the stock price process and denote $X_t=\log S_t$ for each $t$. We assume that $P$ is the option pricing measure and that, under this measure, the process $\{X_t\}_{t\ge 0}$ is of the form (1.4). Then, under certain regularity conditions, we prove that
(1.7) $\displaystyle \lim_{t\to 0}\frac{1}{t}\,E(S_t-K)^+=\int\bigl(S_0e^{\gamma(x,\zeta)}-K\bigr)^+h(\zeta)\,d\zeta.$
A special case of the above result is $\gamma(x,\zeta)=\zeta$, in which the model reduces to an exponential Lévy model. In this case, the above first order asymptotics becomes
$$\lim_{t\to 0}\frac{1}{t}\,E(S_t-K)^+=\int\bigl(S_0e^{\zeta}-K\bigr)^+h(\zeta)\,d\zeta,$$

which recovers the well-known first-order asymptotic behavior of exponential Lévy models (see, e.g., [8] and [31]). It is important to note that (1.7) is derived from the asymptotics for the tail distributions (1.5) rather than from those for the transition density, given that the former requires less stringent conditions on the coefficients of (1.4). An important related paper is [19], where (1.7) was obtained for a wide class of multi-factor Lévy Markov models under certain technical conditions (see Theorem 2.1 therein), including the requirement that $\lim_{t\to 0}E(S_t-K)^+/t$ exists in the out-of-the-money region and an integrability condition such as
$$\int\bigl(|\gamma(x,\zeta)|\vee 1\bigr)\,e^{\gamma(x,\zeta)+\varepsilon|\gamma(x,\zeta)|}\,h(\zeta)\,d\zeta<\infty,$$
for some $\varepsilon>0$.

The paper is organized as follows. In Section 2, we introduce the model and the assumptions needed for our results. The probabilistic tools, such as the iterated Dynkin formula as well as tail estimates for semimartingales with bounded jumps, are presented in Section 3. The main results of the paper are then stated in Sections 4 and 5, where the second order expansions for the tail distributions and the transition densities are obtained, respectively. The application of the expansion for the tail distribution to option pricing in local jump-diffusion financial models is presented in Section 6. The proofs of our main results, as well as some preliminaries on Malliavin calculus on Wiener-Poisson spaces, are given in several appendices.

2. The Setup and Assumptions

In this paper, let $\{Z_t\}_{t\ge 0}$ be a driftless pure-jump Lévy process of bounded variation and let $\{W_t\}_{t\ge 0}$ be a Wiener process independent of $Z$, both of which are defined on a complete probability space $(\Omega,\mathcal{F},P)$ equipped with the natural filtration $(\mathcal{F}_t)_{t\ge 0}$ generated by $W$ and $Z$ and augmented by all the null sets in $\mathcal{F}$, so that it satisfies the usual conditions (see, e.g., [7]). We consider the following local jump-diffusion model:
(2.1) $\displaystyle X_t(x)=x+\int_0^t b(X_u(x))\,du+\int_0^t\sigma(X_u(x))\,dW_u+\sum_{0<u\le t}\gamma\bigl(X_{u^-}(x),\Delta Z_u\bigr),$
where $b,\sigma:\mathbb{R}\to\mathbb{R}$ and $\gamma:\mathbb{R}\times\mathbb{R}\to\mathbb{R}$ are suitable deterministic functions such that (2.1) admits a unique solution (see, e.g., [3]). It is convenient to write $Z$ in terms of its Lévy-Itô representation:
$$Z_t=\sum_{0<u\le t}\Delta Z_u=\int_0^t\int_{\mathbb{R}_0}\zeta\,M(du,d\zeta).$$
Here, $\mathbb{R}_0=\mathbb{R}\setminus\{0\}$ and $M$ is the jump measure of the process $Z$, which is a Poisson random measure on $\mathbb{R}_+\times\mathbb{R}_0$. Furthermore, we make the following standing assumption on $Z$:

(C1) The Lévy measure $\nu$ of $Z$ has a strictly positive density $h\in C^\infty(\mathbb{R}\setminus\{0\})$ such that, for every $\varepsilon>0$ and $n\ge 0$,
(2.2) (i) $\displaystyle\int\bigl(1\wedge|\zeta|\bigr)h(\zeta)\,d\zeta<\infty$,  and  (ii) $\displaystyle\sup_{|\zeta|>\varepsilon}\bigl|h^{(n)}(\zeta)\bigr|<\infty$.

Remark 2.1. The condition of bounded variation on $Z$ is not essential for our results and is imposed only for ease of notation. We could have taken a general Lévy process $Z$ and replaced the last term in (2.1) with a finite-variation pure-jump process plus a compensated Poisson integral

as follows:
$$X_t(x)=x+\int_0^t b(X_u(x))\,du+\int_0^t\sigma(X_u(x))\,dW_u+\int_0^t\int_{|\zeta|>1}\gamma\bigl(X_{u^-}(x),\zeta\bigr)\,M(du,d\zeta)+\int_0^t\int_{|\zeta|\le 1}\gamma\bigl(X_{u^-}(x),\zeta\bigr)\,\bar{M}(du,d\zeta),$$
where, as usual, $\bar{M}(du,d\zeta)$ is the compensated Poisson random measure $M(du,d\zeta)-\nu(d\zeta)du$.

Throughout the paper, the generator $\gamma$ satisfies the following assumption:

(C2) For any $x,\zeta\in\mathbb{R}$, $\gamma(x,0)=0$, $\gamma(\cdot,\cdot)\in C^2$, and
(2.3) $\displaystyle\sup_x\Bigl|\frac{\partial^i\gamma(x,\zeta)}{\partial x^i}\Bigr|\le C\bigl(|\zeta|\wedge 1\bigr),\qquad\text{for }i=0,1,2\text{ and some }C<\infty.$
In particular, $\sup_{x,\zeta}\bigl|\frac{\partial^i\gamma(x,\zeta)}{\partial x^i}\bigr|<\infty$ for $i=0,1,2$.

We also assume the following non-degeneracy and boundedness conditions, which were also imposed in [17]:

(C3-a) The functions $b(x)$ and $v(x):=\sigma^2(x)/2$ belong to $C_b^\infty(\mathbb{R})$, and there exists a constant $\delta>0$ such that $\sigma(x)\ge\delta$ for all $x$.

(C3-b) For the same constant $\delta$ as above, we have $|\gamma(x,\zeta)|\le\frac{1+|x|}{\delta}$ for all $x,\zeta\in\mathbb{R}$.

As is usually the case with Lévy processes, we shall decompose $Z$ into a compound Poisson process and a process with bounded jumps. More specifically, let $\varphi_\varepsilon\in C^\infty(\mathbb{R})$ be a truncation function such that $\mathbf{1}_{\{|\zeta|\ge\varepsilon\}}\le\varphi_\varepsilon(\zeta)\le\mathbf{1}_{\{|\zeta|\ge\varepsilon/2\}}$, and let $\{Z_t(\varepsilon)\}_{t\ge 0}$ and $\{Z_t'(\varepsilon)\}_{t\ge 0}$ be independent driftless Lévy processes of bounded variation with Lévy measures $\varphi_\varepsilon(\zeta)h(\zeta)d\zeta$ and $(1-\varphi_\varepsilon(\zeta))h(\zeta)d\zeta$, respectively. Without loss of generality, we assume hereafter that
(2.4) $Z_t=Z_t(\varepsilon)+Z_t'(\varepsilon).$
Note that $Z_t(\varepsilon)$ is a compound Poisson process with jump intensity $\lambda_\varepsilon:=\int\varphi_\varepsilon(\zeta)h(\zeta)d\zeta$ and jump sizes with probability density function
(2.5) $\displaystyle h_\varepsilon(\zeta):=\frac{\varphi_\varepsilon(\zeta)h(\zeta)}{\lambda_\varepsilon}.$
Throughout the paper, $J:=J^\varepsilon$ will represent a random variable with density $h_\varepsilon(\zeta)$. The following assumptions will also be needed in the sequel:

(C4-a) For any $u\in\mathbb{R}$ and $\varepsilon>0$ small enough (independent of $u$), the random variable $\gamma(u,J^\varepsilon)$ admits a density $\Gamma(\zeta;u):=\Gamma_\varepsilon(\zeta;u)$ in $C_b^\infty(\mathbb{R}\times\mathbb{R})$, the class of functions with continuous partial derivatives of arbitrary order, such that, for every $n,m\ge 0$,
$$\sup_{u,\zeta}\Bigl|\frac{\partial^{n+m}\Gamma(\zeta;u)}{\partial\zeta^n\,\partial u^m}\Bigr|<\infty.$$

(C4-b) In addition to (C4-a), we also have that, for any $z\in\mathbb{R}$, $\delta>0$, and $m\ge 0$, there exists a $\bar\delta>0$ such that
$$\sup_{u\in(z-\bar\delta,\,z+\bar\delta)}\int_{|\zeta|\ge\delta}\Bigl|\frac{\partial^m\Gamma(\zeta;u)}{\partial u^m}\Bigr|\,d\zeta<\infty.$$

Remark 2.2. Note that
$$\Gamma_\varepsilon(\zeta;u)=\frac{\partial}{\partial\zeta}\int\mathbf{1}_{\{\gamma(u;y)\le\zeta\}}h_\varepsilon(y)\,dy=\frac{1}{\lambda_\varepsilon}\frac{\partial}{\partial\zeta}\int\mathbf{1}_{\{\gamma(u;y)\le\zeta\}}\varphi_\varepsilon(y)h(y)\,dy.$$
In particular, if for each $u\in\mathbb{R}$ the function $y\mapsto\gamma(u,y)$ is strictly increasing, then
$$\Gamma_\varepsilon(\zeta;u)=h_\varepsilon\bigl(\gamma^{-1}(u;\zeta)\bigr)\Bigl(\frac{\partial\gamma}{\partial\zeta}\bigl(u,\gamma^{-1}(u;\zeta)\bigr)\Bigr)^{-1},$$
where $\gamma^{-1}(u;\zeta)$ is the inverse of the function $y\mapsto\gamma(u,y)$ (i.e., $\gamma(u;\gamma^{-1}(u;\zeta))=\zeta$). As a consequence, since $\gamma^{-1}(u;\zeta)\neq 0$ for any $\zeta\neq 0$, there exists an $\varepsilon_0(u)>0$ small enough such that
$$\lambda_\varepsilon\Gamma_\varepsilon(\zeta;u)=h\bigl(\gamma^{-1}(u;\zeta)\bigr)\Bigl(\frac{\partial\gamma}{\partial\zeta}\bigl(u;\gamma^{-1}(u;\zeta)\bigr)\Bigr)^{-1}=:a_1(u;\zeta),$$
for any $\varepsilon<\varepsilon_0$. Note also that for $\Gamma_\varepsilon(\zeta;u)$ to be bounded, it suffices that (2.2-ii) with $n=0$ and $\inf_{u,\zeta}\bigl|\frac{\partial\gamma(u;\zeta)}{\partial\zeta}\bigr|>0$ hold true. Similarly, $\partial\Gamma_\varepsilon(\zeta;u)/\partial\zeta$ can be written as
$$h_\varepsilon'\bigl(\gamma^{-1}(u;\zeta)\bigr)\Bigl(\frac{\partial\gamma}{\partial\zeta}\bigl(u,\gamma^{-1}(u;\zeta)\bigr)\Bigr)^{-2}-h_\varepsilon\bigl(\gamma^{-1}(u;\zeta)\bigr)\Bigl(\frac{\partial\gamma}{\partial\zeta}\bigl(u,\gamma^{-1}(u;\zeta)\bigr)\Bigr)^{-3}\frac{\partial^2\gamma}{\partial\zeta^2}\bigl(u,\gamma^{-1}(u;\zeta)\bigr),$$
which can be bounded uniformly if, in addition, (2.2-ii) with $n=1$ and (2.3) hold true. One can similarly deal with higher order derivatives of $\Gamma_\varepsilon$. This observation justifies the use of a truncation function $\varphi_\varepsilon$ to regularize the Lévy density $h$ around the origin.

We next define an important class of processes. For a collection of times $\{s_1,\dots,s_n\}\subset\mathbb{R}_+$, let $\{X_t(\varepsilon,\{s_1,\dots,s_n\},x)\}_{t\ge 0}$ be the solution of the SDE
$$X_t(\varepsilon,\{s_1,\dots,s_n\},x):=x+\int_0^t b\bigl(X_u(\varepsilon,\{s_1,\dots,s_n\},x)\bigr)\,du+\int_0^t\sigma\bigl(X_u(\varepsilon,\{s_1,\dots,s_n\},x)\bigr)\,dW_u+\sum_{0<u\le t}\gamma\bigl(X_{u^-}(\varepsilon,\{s_1,\dots,s_n\},x),\Delta Z_u'(\varepsilon)\bigr)+\sum_{i:\,s_i\le t}\gamma\bigl(X_{s_i^-}(\varepsilon,\{s_1,\dots,s_n\},x),J_i^\varepsilon\bigr),$$
where $\{J_i^\varepsilon\}_{i\ge 1}$ represents a random sample from the distribution $\varphi_\varepsilon(\zeta)h(\zeta)d\zeta/\lambda_\varepsilon$, independent of $Z$. Similarly, we define $\{X_t(\varepsilon,\emptyset,x)\}_{t\ge 0}$ as the solution of the SDE
(2.6) $\displaystyle X_t(\varepsilon,\emptyset,x):=x+\int_0^t b\bigl(X_u(\varepsilon,\emptyset,x)\bigr)\,du+\int_0^t\sigma\bigl(X_u(\varepsilon,\emptyset,x)\bigr)\,dW_u+\sum_{0<u\le t}\gamma\bigl(X_{u^-}(\varepsilon,\emptyset,x),\Delta Z_u'(\varepsilon)\bigr).$

3. Probabilistic tools

Throughout this section, $C_b^n(I)$ (resp. $C_b^n$) denotes the class of functions having continuous and bounded derivatives of order $k\le n$ in an open interval $I\subset\mathbb{R}$ (resp. in $\mathbb{R}$). Also, $\|g\|_\infty=\sup_y|g(y)|$.
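Before turning to the probabilistic tools, the following minimal simulation sketch (ours, not part of the paper) illustrates the class of processes $X_t(\varepsilon,\{s_1,\dots,s_n\},x)$ just defined: an Euler scheme for the drift and diffusion parts, with large jumps $\gamma(X_{s_i^-},J_i^\varepsilon)$ inserted at prescribed times. For simplicity the residual small-jump part $Z'(\varepsilon)$ is dropped, and the coefficients $b$, $\sigma$, $\gamma$ and the large-jump law are hypothetical stand-ins.

```python
# Illustrative sketch (ours, not from the paper): Euler-type simulation of
# X_t(eps, {s_1,...,s_n}, x), with the residual small-jump part Z'(eps) omitted.
# All coefficients and the jump-size law below are hypothetical choices.
import numpy as np

rng = np.random.default_rng(0)

b = lambda x: -0.5 * x                          # drift
sigma = lambda x: 0.3 + 0.1 * np.tanh(x)        # diffusion coefficient, bounded away from 0
gamma = lambda x, z: z * np.exp(-x**2)          # jump "generator" gamma(x, zeta)
sample_J = lambda: rng.laplace(0.0, 0.5)        # stand-in for the large-jump density h_eps

def simulate_path(x0, t, jump_times, n_steps=1000):
    """Euler scheme on [0, t] with large jumps gamma(X_{s-}, J_i) inserted at jump_times."""
    dt = t / n_steps
    grid = np.linspace(0.0, t, n_steps + 1)
    x = x0
    jumps = sorted(jump_times)
    k = 0
    for u in grid[1:]:
        x += b(x) * dt + sigma(x) * np.sqrt(dt) * rng.standard_normal()
        while k < len(jumps) and jumps[k] <= u:   # apply the large jumps falling in (u-dt, u]
            x += gamma(x, sample_J())
            k += 1
    return x

# One path with two prescribed large-jump times, as in X_t(eps, {s1, s2}, x)
print(simulate_path(x0=0.0, t=1.0, jump_times=[0.3, 0.7]))
```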

3.1. Uniform tail probability estimates. The following general result will be important in the sequel.

Proposition 3.1. Let $M$ be a Poisson random measure on $\mathbb{R}_+\times\mathbb{R}_0$ and let $\bar{M}$ be its compensated random measure. Let $Y:=Y(x)$ be the solution of the SDE
$$Y_t=x+\int_0^t b(Y_{s^-})\,ds+\int_0^t\sigma(Y_{s^-})\,dW_s+\int_0^t\int\gamma(Y_{s^-},\zeta)\,\bar{M}(ds,d\zeta).$$
Assume that $b$ and $\sigma$ are uniformly bounded and that $\gamma(x,\zeta)$ is regular enough so that the second and third integrals on the right-hand side of the above equation are martingales. Furthermore, suppose that the jumps of $\{Y_t\}_{t\ge 0}$ are bounded by some constant $S$. Then there exists a constant $C(S)$ depending on $S$ such that, for all $t\le 1$ and $p\ge 1$,
$$P\Bigl\{\sup_{s\le t}\bigl|Y_s(x)-x\bigr|\ge pS\Bigr\}\le C(S)\,t^p.$$

Proof. Let $M_t$ denote either $\int_0^t\sigma(Y_{s^-})\,dW_s$ or $\int_0^t\int\gamma(Y_{s^-},z)\,\bar{M}(ds,dz)$. It is clear that $M_t$ is a martingale with jumps bounded by $S$. Moreover, its quadratic variation satisfies $\langle M,M\rangle_t\le kt$ for some constant $k$. By [18] (or (1.3) in [17]) we have
(3.1) $\displaystyle P\Bigl\{\sup_{s\le t}|M_s|\ge C\Bigr\}\le 2\exp\Bigl[-\lambda C+\frac{kt}{S^2}\bigl(1+\exp[\lambda S]\bigr)\Bigr].$
Now take $C=pS$ and $\lambda=|\log t|/S$. The rest of the proof is then clear.

As a direct corollary of the above proposition, we have the following tail probability estimate for the small-jump component $\{X_t(\varepsilon,\emptyset,x)\}_{t\ge 0}$ of $X$ introduced in (2.6).

Lemma 3.2. Fix any $\eta>0$ and a positive integer $N$. Then, under Conditions (C2)-(C3) of Section 2, there exist $\varepsilon_0>0$ and $C:=C(N,\eta)<\infty$ such that:
(1) For all $t<1$,
(3.2) $\displaystyle\sup_{0<\varepsilon<\varepsilon_0,\,x\in\mathbb{R}}P\bigl(|X_t(\varepsilon,\emptyset,x)-x|\ge\eta\bigr)<Ct^N.$
(2) For all $t<1$,
$$\sup_{0<\varepsilon<\varepsilon_0,\,x\in\mathbb{R}}\int_\eta^\infty P\bigl\{e^{X_t(\varepsilon,\emptyset,x)-x}\ge s\bigr\}\,ds=\sup_{0<\varepsilon<\varepsilon_0,\,x\in\mathbb{R}}\int_\eta^\infty P\bigl\{X_t(\varepsilon,\emptyset,x)-x\ge\log s\bigr\}\,ds<Ct^N.$$

Proof. The first statement is a special case of Proposition 3.1, which can be applied in light of the boundedness conditions (C3) as well as condition (C2), which ensure that the components
$$\int_0^t\sigma\bigl(X_u(\varepsilon,\emptyset,x)\bigr)\,dW_u,\qquad\int_0^t\int\gamma\bigl(X_{u^-}(\varepsilon,\emptyset,x),\zeta\bigr)\,\bar{M}'(du,d\zeta),$$

are martingales. Here, $\bar{M}'$ is the compensated Poisson measure of $Z'(\varepsilon)$ (see Remark 3.4 below for the notation). To prove the second statement, we keep the notation of the proof of Proposition 3.1. By (3.1), there exists a constant $C>0$ such that
$$\int_\eta^\infty P\bigl\{X_t(\varepsilon,\emptyset,x)-x\ge\log s\bigr\}\,ds\le\int_\eta^\infty C\exp\Bigl[-\lambda\log s+\frac{kt}{\varepsilon^2}\bigl(1+\exp[\lambda\varepsilon]\bigr)\Bigr]\,ds=\frac{C\,\eta^{1-\lambda}}{\lambda-1}\exp\Bigl[\frac{kt}{\varepsilon^2}\bigl(1+\exp[\lambda\varepsilon]\bigr)\Bigr].$$
Now it suffices to take $\lambda=|\log t|/\varepsilon$ and $\varepsilon_0=(\log\eta)/N$.

3.2. Iterated Dynkin's formula. The following well-known formula will be important in the sequel. The case $n=0$ can be proved using Itô's formula (see Section 1.5 in [3]), while the general case follows by induction.

Lemma 3.3. Let $Y:=Y(x)$ be a solution of the SDE
(3.3) $\displaystyle dY_t=b(Y_t)\,dt+\sigma(Y_t)\,dW_t+\int_{|\zeta|\le 1}\gamma(Y_{t^-},\zeta)\,\bar{M}(dt,d\zeta)+\int_{|\zeta|>1}\gamma(Y_{t^-},\zeta)\,M(dt,d\zeta),$
(3.4) $Y_0=x,$
and let $L$ be its infinitesimal generator, defined by
(3.5) $\displaystyle (Lf)(y):=\frac{\sigma^2(y)}{2}f''(y)+b(y)f'(y)+\int\Bigl(f\bigl(y+\gamma(y,\zeta)\bigr)-f(y)-\gamma(y,\zeta)f'(y)\mathbf{1}_{\{|\zeta|\le 1\}}\Bigr)\,\nu(d\zeta),$
for any $f\in C_b^2$. Then we have
(3.6) $\displaystyle Ef(Y_t)=f(x)+\sum_{k=1}^{n}\frac{t^k}{k!}L^kf(x)+\frac{t^{n+1}}{n!}\int_0^1(1-\alpha)^nE\bigl\{L^{n+1}f(Y_{\alpha t})\bigr\}\,d\alpha,$
valid for any $n\ge 0$ and $f\in C_b^2$ such that $L^kf\in C_b^2$ for each $k=1,\dots,n$.

Remark 3.4. Below, $Y$ is taken to be the process (2.6), which is of the form (3.3)-(3.4) with underlying driving Lévy process $\{Z_t'(\varepsilon)\}_{t\ge 0}$ (the small-jump component of $Z$ defined in (2.4)). In particular, denoting by $M'$ the jump measure of $Z'(\varepsilon)$, we have
$$X_t(\varepsilon,\emptyset,x)=x+\int_0^t b\bigl(X_u(\varepsilon,\emptyset,x)\bigr)\,du+\int_0^t\sigma\bigl(X_u(\varepsilon,\emptyset,x)\bigr)\,dW_u+\int_0^t\int\gamma\bigl(X_{u^-}(\varepsilon,\emptyset,x),\zeta\bigr)\,M'(du,d\zeta).$$
Since $(1-\varphi_\varepsilon(\zeta))\nu(d\zeta)$ is the Lévy measure of $Z'(\varepsilon)$, we can write $X_t(\varepsilon,\emptyset,x)$ in the form (3.3)-(3.4) with $\tilde\gamma(y,\zeta)=\gamma(y,\zeta)$, $\tilde\sigma(y)=\sigma(y)$, and $\tilde b(y)=b(y)+\int_{|\zeta|\le 1}\gamma(y,\zeta)\bigl(1-\varphi_\varepsilon(\zeta)\bigr)\,\nu(d\zeta)$. After substituting these coefficient functions in (3.5) and some simplification, we get the following infinitesimal generator for $\{X_t(\varepsilon,\emptyset,x)\}_{t\ge 0}$:
(3.7) $\displaystyle L_\varepsilon f(y):=\frac{\sigma^2(y)}{2}f''(y)+b(y)f'(y)+\int\bigl(f(y+\gamma(y,\zeta))-f(y)\bigr)\,\bar h_\varepsilon(\zeta)\,d\zeta,$
where $\bar h_\varepsilon(\zeta):=(1-\varphi_\varepsilon(\zeta))h(\zeta)$.
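As a small numerical illustration of (3.7) (ours, not from the paper), the generator $L_\varepsilon f(y)$ can be evaluated by combining finite differences for $f'$, $f''$ with a quadrature over the small-jump density $\bar h_\varepsilon$. All the coefficients below are hypothetical stand-ins chosen only so that the integral converges.

```python
# Sketch (ours): numerical evaluation of L_eps f(y) from (3.7), i.e.
# (sigma(y)^2/2) f''(y) + b(y) f'(y) + int (f(y + gamma(y,z)) - f(y)) hbar_eps(z) dz,
# with hypothetical coefficients and a rough cutoff standing in for phi_eps.
import numpy as np
from scipy.integrate import quad

eps = 0.1
b = lambda y: -0.5 * y
sigma = lambda y: 0.4
gamma = lambda y, z: z / (1.0 + y**2)
h = lambda z: np.exp(-abs(z)) / abs(z)                                  # infinite activity at 0
phi_eps = lambda z: np.clip((abs(z) - eps / 2) / (eps / 2), 0.0, 1.0)   # rough cutoff (stand-in)
hbar_eps = lambda z: (1.0 - phi_eps(z)) * h(z)                          # small-jump Levy density

def L_eps(f, y, d=1e-4):
    fp = (f(y + d) - f(y - d)) / (2 * d)                  # f'(y) by central differences
    fpp = (f(y + d) - 2 * f(y) + f(y - d)) / d**2         # f''(y)
    integrand = lambda z: (f(y + gamma(y, z)) - f(y)) * hbar_eps(z)
    jump_part = quad(integrand, -eps, 0.0)[0] + quad(integrand, 0.0, eps)[0]
    return 0.5 * sigma(y)**2 * fpp + b(y) * fp + jump_part

f = lambda y: np.exp(-y**2)
print(L_eps(f, 0.3))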

The following result gives conditions for the validity of the second-order Dynkin formula (3.6) when $Y$ is driven by the small-jump component of $Z$ (its proof is given in Appendix D).

Lemma 3.5. Suppose that $\gamma$, $b$, and $\sigma$ satisfy Conditions (C2)-(C3) of Section 2, and let $f$ be a bounded function that is $C_b^4$ in a neighborhood of $x$. Then the expansion
(3.8) $\displaystyle Ef\bigl(X_t(\varepsilon,\emptyset,x)\bigr)=f(x)+tL_\varepsilon f(x)+t^2R_t(\varepsilon,x)$
holds true with $R_t(\varepsilon,x)$ such that
(3.9) $\displaystyle\sup_{0<t<1,\ 0<\varepsilon<\varepsilon_0}\bigl|R_t(\varepsilon,x)\bigr|\le K$
for a constant $K<\infty$ depending only on $\int(1\wedge|\zeta|)h(\zeta)d\zeta$, $\|f^{(k)}\mathbf{1}_{(x-\delta,x+\delta)}\|_\infty$ for some small enough $\delta>0$, $\|b^{(k)}\|_\infty$, and $\|v^{(k)}\|_\infty$ ($k=0,1,2$). Moreover, if $f\in C_b^4$, then
(3.10) $\displaystyle\sup_{0<t<1,\ 0<\varepsilon<\varepsilon_0,\ x\in\mathbb{R}}\bigl|R_t(\varepsilon,x)\bigr|\le K,$
for some $K<\infty$.

The following small-time expansion will also be needed (see Appendix D for a proof).

Lemma 3.6. Let $s\in(0,t)$ and $f\in C_b^2$. Then,
$$Ef\bigl(X_t(\varepsilon,\{s\},x)\bigr)=G_0(x)+s\hat{R}_s(x)+(t-s)\,E\,R_{t-s}\bigl(X_s(\varepsilon,\emptyset,x)\bigr),$$
where $G_0(x):=E\bigl[f\bigl(x+\gamma(x,J)\bigr)\bigr]$, $\sup_{u>0,z\in\mathbb{R}}|\hat{R}_u(z)|<\infty$, and $\sup_{u>0,z\in\mathbb{R}}|R_u(z)|<\infty$.

4. Second order expansion for the tail distributions

In this section, we consider the small-time behavior of the tail distribution of $\{X_t(x)\}_{t\ge 0}$:
(4.1) $\displaystyle\bar{F}_t(x,y):=P\bigl(X_t(x)\ge x+y\bigr),\qquad (y>0).$
The key idea is to use the decomposition (2.4) by conditioning on the number of large jumps occurring before time $t$. Concretely, denoting by $\{N_t^\varepsilon\}_{t\ge 0}$ and $\lambda_\varepsilon:=\int\varphi_\varepsilon(\zeta)h(\zeta)d\zeta$ the jump counting process and the jump intensity of the compound Poisson process $\{Z_t(\varepsilon)\}_{t\ge 0}$, respectively, we have
(4.2) $\displaystyle P\bigl(X_t(x)\ge x+y\bigr)=e^{-\lambda_\varepsilon t}\sum_{n=0}^{\infty}P\bigl(X_t(x)\ge x+y\mid N_t^\varepsilon=n\bigr)\frac{(\lambda_\varepsilon t)^n}{n!}.$
The first term in (4.2) (when $n=0$) can be written as
$$P\bigl(X_t(x)\ge x+y\mid N_t^\varepsilon=0\bigr)=P\bigl(X_t(\varepsilon,\emptyset,x)\ge x+y\bigr).$$
In light of (3.2), this term can be made $O(t^N)$ for an arbitrarily large $N\ge 1$ by taking $\varepsilon$ small enough. In order to deal with the other terms in (4.2), we use the iterated Dynkin formula introduced in Section 3.2. The following is the main result of this section (see Appendix B for the proof). Below, $J$ denotes a random variable with density (2.5), $J_1$ and $J_2$ denote independent copies of $J$, and $\Gamma(\zeta;x)$ denotes the density of $\gamma(x,J)$.

Theorem 4.1. Let $x\in\mathbb{R}$ and $y>0$. Then, under Conditions (C1)-(C3) of Section 2, we have
(4.3) $\displaystyle\bar{F}_t(x,y):=P\bigl(X_t(x)\ge x+y\bigr)=tA_1(x;y)+\frac{t^2}{2}A_2(x;y)+O(t^3),\qquad\text{as }t\to 0,$
where $A_1(x;y)$ and $A_2(x;y)$ admit the following representations (for $\varepsilon>0$ small enough):
(4.4) $\displaystyle A_1(x;y):=\lambda_\varepsilon H_{0,0}(x;y)=\int_{\{\gamma(x,\zeta)\ge y\}}h(\zeta)\,d\zeta,\qquad A_2(x;y):=\lambda_\varepsilon\bigl[H_{0,1}(x;y)+H_{1,0}(x;y)\bigr]+\lambda_\varepsilon^2\bigl[H_{2,0}(x;y)-2H_{0,0}(x;y)\bigr],$
with
$$\lambda_\varepsilon=\int\varphi_\varepsilon(\zeta)h(\zeta)\,d\zeta,\qquad H_{0,0}(x;y)=P\bigl[\gamma(x,J)\ge y\bigr]=\int_y^\infty\Gamma(\zeta;x)\,d\zeta,$$
$$H_{0,1}(x;y)=b(x)\Bigl(\int_y^\infty\frac{\partial\Gamma(\zeta;x)}{\partial x}\,d\zeta+\Gamma(y;x)\Bigr)+\frac{\sigma^2(x)}{2}\Bigl(\int_y^\infty\frac{\partial^2\Gamma(\zeta;x)}{\partial x^2}\,d\zeta+2\frac{\partial\Gamma(y;x)}{\partial x}-\frac{\partial\Gamma(y;x)}{\partial y}\Bigr)+\hat{H}_{0,1}(x;y),$$
$$\hat{H}_{0,1}(x;y)=\int\Bigl(P\bigl[\gamma\bigl(x+\gamma(x,\zeta),J\bigr)\ge y-\gamma(x,\zeta)\bigr]-P\bigl[\gamma(x,J)\ge y\bigr]\Bigr)\,\bar h_\varepsilon(\zeta)\,d\zeta,$$
$$H_{1,0}(x;y)=b(x)\Gamma(y;x)-\frac{\sigma^2(x)}{2}\frac{\partial\Gamma(y;x)}{\partial y}+\hat{H}_{1,0}(x;y),$$
$$\hat{H}_{1,0}(x;y)=\int\Bigl(P\bigl[\gamma(x,J)\ge y-\gamma(x,\zeta)\bigr]-P\bigl[\gamma(x,J)\ge y\bigr]\Bigr)\,\bar h_\varepsilon(\zeta)\,d\zeta,$$
$$H_{2,0}(x;y)=P\bigl(\gamma\bigl(x+\gamma(x,J_2),J_1\bigr)\ge y-\gamma(x,J_2)\bigr).$$

Remark 4.2. Note that if $\mathrm{supp}(\nu)\cap\{\zeta:\gamma(x,\zeta)\ge y\}=\emptyset$ (so that it is not possible to reach the level $y$ from $x$ with only one jump), then $A_1(x;y)=0$ and $P(X_t(x)\ge x+y)=O(t^2)$ as $t\to 0$. If, in addition, it is possible to reach the level $y$ from $x$ with two jumps, then $H_{2,0}(x;y)\neq 0$ and, hence, $A_2(x;y)\neq 0$, implying that $P(X_t(x)\ge x+y)$ decreases at the order of $t^2$. These observations are consistent with the results in [16] and [5].

Example 4.3. In the case that the generator $\gamma(x,\zeta)$ does not depend on $x$, we get the following expansion for $P(X_t(x)\ge x+y)$:
$$\lambda_\varepsilon P\bigl(\gamma(J)\ge y\bigr)\,t+\frac{\lambda_\varepsilon^2}{2}\Bigl(P\bigl(\gamma(J_1)+\gamma(J_2)\ge y\bigr)-2P\bigl(\gamma(J)\ge y\bigr)\Bigr)t^2+\lambda_\varepsilon\Bigl(b(x)\Gamma(y)-\frac{\sigma^2(x)}{2}\Gamma'(y)+\int\bigl(P\bigl[\gamma(J)\ge y-\gamma(\zeta)\bigr]-P\bigl[\gamma(J)\ge y\bigr]\bigr)\bar h_\varepsilon(\zeta)\,d\zeta\Bigr)t^2+O(t^3).$$
The leading term in the above expression is dominated by the jump component of the process and has a natural interpretation: if within a very short time interval there is a large positive move (say, a move by more than $y$), this move must be due to a jump. It is not until the second order term that the diffusion and drift components of the process appear. Denoting
$$g(y):=-\frac{d}{dy}\nu\bigl(\{\zeta:\gamma(\zeta)\ge y\}\bigr)=-\frac{d}{dy}\int_{\{\gamma(\zeta)\ge y\}}h(\zeta)\,d\zeta,$$

the so-called Lévy density of the process $\{X_t\}_{t\ge 0}$, the effect of a positive spot drift $b(x)>0$ is to increase the probability of a large positive move by $b(x)g(y)t^2(1+o(1))$. Similarly, since typically $g'(y)<0$ for $y>0$, the effect of a non-zero spot volatility $\sigma(x)$ is to increase the probability of a large positive move by $-\frac{\sigma^2(x)}{2}g'(y)\,t^2(1+o(1))$.

5. Expansion for the transition densities

Our goal here is to obtain a second-order small-time approximation for the transition densities $\{p_t(\cdot;x)\}_{t\ge 0}$ of $\{X_t(x)\}_{t\ge 0}$. As was done in the previous section, the idea is to work with the expansion (4.2), by first showing that each term there is differentiable with respect to $y$, and then determining its rate of convergence to $0$ as $t\to 0$. One of the main difficulties of this approach comes from controlling the term corresponding to no large jumps. As in the case of purely diffusive processes, Malliavin calculus proves to be the key tool for this task. This analysis is presented in the following subsection, before our main result is presented in Section 5.2.

5.1. Density estimates for SDEs with bounded jumps. In this part, we analyze the term corresponding to $N_t^\varepsilon=0$:
$$P\bigl(X_t(x)\ge x+y\mid N_t^\varepsilon=0\bigr)=P\bigl(X_t(\varepsilon,\emptyset,x)\ge x+y\bigr).$$
We will prove that, for any fixed positive integer $N$ and $\eta>0$, there exist an $\varepsilon_0>0$ small enough and a constant $C<\infty$ such that the density $p_t(\cdot;\varepsilon,\emptyset,x)$ of $X_t(\varepsilon,\emptyset,x)$ satisfies
(5.1) $\displaystyle\sup_{|y-x|>\eta,\ \varepsilon<\varepsilon_0}p_t(y;\varepsilon,\emptyset,x)<Ct^N,$
for all $0<t\le 1$. The estimate (5.1) holds true for a larger class of SDEs with bounded jumps. Concretely, throughout this section we consider the more general equation
(5.2) $\displaystyle X_t^x=x+\int_0^t b(X_{s^-}^x)\,ds+\int_0^t\sigma(X_{s^-}^x)\,dW_s+\int_0^t\int\gamma(X_{s^-}^x,\zeta)\,\bar{M}(ds,d\zeta),$
where, as before, $M(ds,d\zeta)$ is a Poisson random measure on $\mathbb{R}_+\times\mathbb{R}\setminus\{0\}$ with mean measure $\mu(ds,d\zeta)=ds\,\nu(d\zeta)$ and $\bar{M}=M-\mu$ is its compensated measure. As before, we assume that $\nu$ admits a smooth density $h$ with respect to the Lebesgue measure on $\mathbb{R}\setminus\{0\}$, but, moreover, we now assume that $h$ is supported in a ball $B(0,r)$ for some $r>0$. Note that the solution $(X_t(\varepsilon,\emptyset,x))_{t\ge 0}$ of (2.6) fits perfectly within this framework.

The following assumptions on the coefficient functions of (5.2) are used in the sequel. Note that, when $\{X_t^x\}$ is taken to be $\{X_t(\varepsilon,\emptyset,x)\}$, these assumptions follow from the assumptions in Section 2.

Assumption 5.1. The coefficients $b(x)$, $\sigma(x)$ and $\gamma(x,\zeta)$ are $4$ times differentiable. There exist a constant $c>0$ and a function $\kappa\in\bigcap_{p<\infty}L^p(\nu)$ such that:
(1) $\bigl|\frac{\partial^n}{\partial x^n}b(x)\bigr|$, $\bigl|\frac{\partial^n}{\partial x^n}\sigma(x)\bigr|$, $\frac{1}{\kappa(\zeta)}\bigl|\frac{\partial^n}{\partial x^n}\gamma(x,\zeta)\bigr|\le c$, for $0\le n\le 5$.
(2) $\bigl|\frac{\partial^{n+k}\gamma(x,\zeta)}{\partial x^n\,\partial\zeta^k}\bigr|\le c$, for $1\le n+k\le 5$ and $k\ge 1$.
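Before turning to the Malliavin-calculus analysis below, here is a back-of-the-envelope numerical sketch (ours, not part of the paper) of the drift and volatility corrections just described, for the state-independent case $\gamma(\zeta)=\zeta$ with a hypothetical one-sided Lévy density; in that case $g(y)=h(y)$ and the corrections are $b(x)g(y)t^2$ and $-\frac{\sigma^2(x)}{2}g'(y)t^2$.

```python
# Sketch (ours, not from the paper): size of the drift/volatility corrections from
# Example 4.3 for gamma(zeta) = zeta and the hypothetical Levy density h(z) = exp(-2z)/z, z > 0.
import numpy as np

h  = lambda z: np.exp(-2.0 * z) / z                          # hypothetical Levy density, z > 0
dh = lambda z: -np.exp(-2.0 * z) * (2.0 / z + 1.0 / z**2)    # h'(z), computed by hand

b_x, sigma_x = 0.05, 0.3          # assumed spot drift and volatility
y, t = 1.0, 1.0 / 252.0           # move size and (short) horizon

leading   = h(y) * t                           # first-order probability of a move >= y
drift_cor = b_x * h(y) * t**2                  # drift correction b(x) g(y) t^2
vol_cor   = -0.5 * sigma_x**2 * dh(y) * t**2   # volatility correction, positive since h'(y) < 0
print(leading, drift_cor, vol_cor)
```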

Malliavin calculus is the main tool used to analyze the existence and smoothness of the density of $X_t^x$. For the sake of completeness, we present the necessary preliminaries of Malliavin calculus on Wiener-Poisson spaces in Appendix A, following the approach in [6]. As described therein, there are different ways to define a Malliavin operator for such spaces. For our purposes, it suffices to consider the Malliavin operator corresponding to $\rho=0$ (see Section A.2 for the details). The intuitive explanation of $\rho=0$ is that, when perturbing the sample path on the Wiener-Poisson space, we only perturb the Brownian path.

Let us start by noting that Assumption 5.1 ensures that $x\mapsto X_t^x$ is a $C^2$ diffeomorphism and that $X_t^x$ has a continuous density (see, e.g., [6]). Furthermore, recalling that $\mathcal{H}_\infty$ is the extended domain of the Malliavin operator, one has $X_t\in\mathcal{H}_\infty$ and
(5.3) $\displaystyle\Gamma(X_t,X_t):=U_t^{-1}=\partial_xX_t^x\Bigl\{\int_0^t(\partial_xX_s^x)^{-1}\sigma^2(X_s^x)(\partial_xX_s^x)^{-1}\,ds\Bigr\}\partial_xX_t^x.$

Remark 5.2. Under Condition (C3-b) of Section 2, one can show that $\partial_xX_t^x$ is almost surely invertible. Indeed, one can show that (cf. [6])
$$d(\partial_xX_t^x)=\partial_xb(X_t^x)\,\partial_xX_t^x\,dt+\partial_x\sigma(X_t^x)\,\partial_xX_t^x\,dW_t+\int\partial_x\gamma(X_{t^-}^x,\zeta)\,\partial_xX_{t^-}^x\,M(dt,d\zeta),\qquad\partial_xX_0^x=1.$$
Moreover, $\partial_xX_t^x$ admits an inverse $Y_t=(\partial_xX_t^x)^{-1}$ which satisfies a similar equation,
$$dY_t=Y_tD_t\,dt+Y_tE_t\,dW_t+\int Y_{t^-}F_t\,M(dt,d\zeta),\qquad Y_0=1.$$
Here $D_t$, $E_t$ and $F_t$ are determined by $b(x)$, $\sigma(x)$, $\gamma(x,\zeta)$ and $X_t^x$. As a consequence, together with our assumptions on $b$, $\sigma$ and $\gamma$, one has
$$E\sup_{t\le 1}\bigl(\partial_xX_t^x\bigr)^p<\infty,\qquad E\sup_{t\le 1}\bigl(\partial_xX_t^x\bigr)^{-p}<\infty,$$
for all $p>1$.

The main result of this section is Theorem 5.7. For this purpose, we state the following integration by parts formula (which is also the main ingredient for the existence and smoothness of the density of $X_t^x$); it is a special case of Lemma 4-14 together with the discussion in Chapter IV of [6].

Proposition 5.3. For any $f\in C_c^\infty(\mathbb{R})$, there exists a random variable $J_t\in L^p$ for all $p\in\mathbb{N}$, such that
$$E\,\partial_xf(X_t^x)=E\bigl[J_tU_tf(X_t^x)\bigr].$$

The following existence and regularity result for the density of a finite measure is well known (see, e.g., Theorem 5.3 in [3]).

Lemma 5.4. Let $m$ be a finite measure supported in an open set $O\subset\mathbb{R}^n$. Take any $p>n$. Suppose that there exists $g=(g_1,\dots,g_n)\in L^p(m)$ such that
$$\int_{\mathbb{R}^n}\partial_jf\,dm=\int_{\mathbb{R}^n}fg_j\,dm,\qquad f\in C_c^\infty(O),\ j=1,\dots,n.$$
Then $m$ has a bounded density function $q\in C_b(O)$ satisfying
$$\|q\|_\infty\le C\,\|g\|_{L^p(m)}^n\,m(O)^{1-n/p}.$$
Here the constant $C$ depends on $n$ and $p$.

The following lemma, whose proof is deferred to Appendix D, is where we use Assumption (C3-a). It is the first ingredient in proving Theorem 5.7.

Lemma 5.5. Recall that $U_t^{-1}=\Gamma(X_t,X_t)$. Under Condition (C3-a) of Section 2, we have
$$E\,U_t^p\le Ct^{-p}.$$

The final ingredient toward our main result of this section is the proposition below, which is just a restatement of Proposition 3.1.

Proposition 5.6. Let $X_t$ be the solution to equation (5.2). Recall that, since the Lévy density $h$ is compactly supported and $\gamma(x,z)$ is uniformly bounded, the jumps of $X_t$ are bounded by some constant $S$. Then there exists a constant $C(S)$ depending on $S$ such that, for all $t\le 1$ and $p\ge 1$,
$$P\Bigl\{\sup_{s\le t}\bigl|X_s^x-x\bigr|\ge pS\Bigr\}\le C(S)\,t^p.$$

Finally, we can state and prove our main result of this section.

Theorem 5.7. Assume that Condition (C3) of Section 2 is satisfied. Let $\{X_t^x\}_{t\ge 0}$ be the solution to equation (5.2) and denote by $p_t(y;x)$ the density of $X_t^x$. Fix any $\eta>0$. Then, for any $N>0$, there exists $r(\eta,N)>0$ such that, if $\mu$ is supported in $B(0,r)$ with $r\le r(\eta,N)$, we have, for all $t\le 1$,
$$\sup_{y:\,|x-y|\ge\eta}p_t(y;x)\le C(\eta,N)\,t^N.$$

Proof. For a fixed $t$, define a finite measure $m_t^\eta$ on $\mathbb{R}$ by
$$m_t^\eta(A)=P\bigl(\bigl\{X_t^x\in A\cap\bar{B}^c(x,\eta)\bigr\}\bigr),\qquad A\in\mathcal{B}(\mathbb{R}),$$
where $\bar{B}^c(x,r)$ denotes the complement of the closure of $B(x,r)$. Thus, to prove our result it suffices to prove that $m_t^\eta$ admits a density with the desired bound. Next, for any smooth function $f$ compactly supported in $\bar{B}^c(x,\eta)$, we have
$$\int_{\mathbb{R}}(\partial_xf)(y)\,m_t^\eta(dy)=E\,\partial_xf(X_t^x)=E\bigl[J_tU_tf(X_t^x)\bigr]=\int_{\mathbb{R}}E\bigl[J_tU_t\mid X_t^x=y\bigr]f(y)\,m_t^\eta(dy),$$
by Proposition 5.3. The rest of the proof follows from Lemmas 5.4 and 5.5 and Proposition 5.6.

5.2. Expansion for the transition density. We are ready to state our main result of this section, namely the second order expansion for the transition densities of the process $\{X_t(x)\}_{t\ge 0}$, in terms of the density $\Gamma(\cdot;x)$ of $\gamma(x,J^\varepsilon)$. The proof is deferred to Appendix C.

Theorem 5.8. Let $x\in\mathbb{R}$ and $y>0$. Then, under Conditions (C1)-(C3) of Section 2, we have
(5.4) $\displaystyle p_t(y;x):=-\frac{\partial}{\partial y}P\bigl(X_t(x)\ge x+y\bigr)=ta_1(x;y)+\frac{t^2}{2}a_2(x;y)+O(t^3),$

as $t\to 0$, where $a_1(x;y)$ and $a_2(x;y)$ admit the following representations (for $\varepsilon>0$ small enough):
$$a_1(x;y):=-\frac{\partial A_1(x;y)}{\partial y}=\lambda_\varepsilon\Gamma(y;x),\qquad a_2(x;y):=-\frac{\partial A_2(x;y)}{\partial y}=-\lambda_\varepsilon\bigl[h_{0,1}(x;y)+h_{1,0}(x;y)\bigr]-\lambda_\varepsilon^2\bigl[h_{2,0}(x;y)+2\Gamma(y;x)\bigr],$$
with
(5.5)
$$h_{0,1}(x;y)=\frac{\sigma^2(x)}{2}\Bigl(\frac{\partial^2\Gamma(y;x)}{\partial x\,\partial y}+\frac{\partial\Gamma(y;x)}{\partial x}-\frac{\partial\Gamma(y;x)}{\partial y}\Bigr)+\hat h_{0,1}(x;y),$$
$$\hat h_{0,1}(x;y)=\int\bigl\{\Gamma(y;x)-\Gamma\bigl(y-\gamma(x,\zeta);x+\gamma(x,\zeta)\bigr)\bigr\}\,\bar h_\varepsilon(\zeta)\,d\zeta,$$
$$h_{1,0}(x;y)=-\frac{\sigma^2(x)}{2}\frac{\partial\Gamma(y;x)}{\partial y}+\hat h_{1,0}(x;y),$$
$$\hat h_{1,0}(x;y)=\int\bigl\{\Gamma(y;x)-\Gamma\bigl(y-\gamma(x,\zeta);x\bigr)\bigr\}\,\bar h_\varepsilon(\zeta)\,d\zeta,$$
$$h_{2,0}(x;y)=E\bigl\{\Gamma\bigl(y-\gamma(x,J);x+\gamma(x,J)\bigr)\bigr\}.$$

6. The first order term of the option price expansion

In this section, we use our previous results to derive the leading term of the small-time expansion for prices of out-of-the-money (OTM) European call options with strike price $K$ and expiry $T$. This can be achieved by means of either the asymptotics of the tail distributions or those of the transition density. Given that the former requires less stringent conditions on the coefficients of the SDE, we choose to employ our result for the tail distribution. Throughout this section, let $\{S_t\}_{t\ge 0}$ be the stock price process and let $X_t=\log S_t$ for each $t$. We assume that $P$ is the option pricing measure and that, under this measure, the process $\{X_t\}_{t\ge 0}$ is of the form (2.1). As usual, without loss of generality we assume that the risk-free interest rate $r$ is $0$. In particular, in order for $S_t=\exp X_t$ to be a $P$-(local) martingale, we fix
$$b(x):=-\frac{1}{2}\sigma^2(x)-\int\bigl(e^{\gamma(x,z)}-1\bigr)h(z)\,dz.$$
We also assume that $\sigma$ and $\gamma$ are such that Conditions (C1)-(C4) of Section 2 are satisfied. By the Markov property of the system, it will suffice to compute a small-time expansion for
$$v_t=E(S_t-K)^+=E\bigl(e^{X_t}-K\bigr)^+.$$
In particular, using the well-known formula
$$E\bigl[U\mathbf{1}_{\{U>K\}}\bigr]=KP\{V>\log K\}+\int_K^\infty P\{U>s\}\,ds,$$
where $U$ is a positive random variable and $V:=\log U$, we can write
$$E\bigl(e^{X_t}-K\bigr)^+=\int_K^\infty P\{S_t>s\}\,ds=S_0\int_{K/S_0}^\infty P\{X_t-x>\log s\}\,ds,$$

where $x=X_0=\log S_0$. Recall that
(6.1) $\displaystyle P\bigl(X_t-x\ge y\bigr)=e^{-\lambda_\varepsilon t}\sum_{n=0}^{\infty}P\bigl(X_t-x\ge y\mid N_t^\varepsilon=n\bigr)\frac{(\lambda_\varepsilon t)^n}{n!},$
where $\lambda_\varepsilon:=\int\varphi_\varepsilon(\zeta)h(\zeta)\,d\zeta$ is the jump intensity of $\{N_t^\varepsilon\}_{t\ge 0}$. We proceed as in Section 4. First, note that
(6.2) $\displaystyle v_t=S_0\int_{K/S_0}^\infty P\{X_t-x>\log s\}\,ds=S_0e^{-\lambda_\varepsilon t}\bigl(I_1+I_2+I_3\bigr),$
where
$$I_1=\int_{K/S_0}^\infty P\bigl\{X_t-x\ge\log s\mid N_t^\varepsilon=0\bigr\}\,ds=\int_{K/S_0}^\infty P\bigl\{X_t(\varepsilon,\emptyset,x)-x\ge\log s\bigr\}\,ds,$$
$$I_2=\lambda_\varepsilon t\int_{K/S_0}^\infty P\bigl\{X_t-x\ge\log s\mid N_t^\varepsilon=1\bigr\}\,ds,\qquad I_3=\sum_{n=2}^\infty\frac{(\lambda_\varepsilon t)^n}{n!}\int_{K/S_0}^\infty P\bigl\{X_t-x\ge\log s\mid N_t^\varepsilon=n\bigr\}\,ds.$$
It is clear that $I_1/t\to 0$ as $t\to 0$ by Lemma 3.2. We show that the same is true for $I_3$, which is the content of the following lemma. Its proof is given in Appendix D.

Lemma 6.1. With the above notation, we have
$$\sup_{n\in\mathbb{N},\,t\in[0,1]}\frac{1}{n!}\int_{K/S_0}^\infty P\bigl(X_t-x\ge\log y\mid N_t^\varepsilon=n\bigr)\,dy<\infty.$$
As a consequence, $I_3/t\to 0$ as $t\to 0$.

Note that the above lemma actually implies that $Ee^{X_t-x}<\infty$ for all $t\in[0,1)$. We are ready to state the main result of this section.

Theorem 6.2. Let $v_t=E(S_t-K)^+$ be the price of a European call option with strike $K$. Under Conditions (C1)-(C4) of Section 2, we have
$$\lim_{t\to 0}\frac{1}{t}\,v_t=\int\bigl(S_0e^{\gamma(x,\zeta)}-K\bigr)^+h(\zeta)\,d\zeta.$$

Proof. We use the notation introduced in (6.2). Following an argument similar to the one in the proof of Lemma 6.1, one can show that
(6.3) $\displaystyle\sup_{t\in[0,1]}\int_{K/S_0}^\infty P\bigl\{X_t-x\ge\log s\mid N_t^\varepsilon=1\bigr\}\,ds<\infty.$
Also, it is clear that $I_1/t$ converges to zero as $t$ approaches zero, by Lemma 3.2. Using the latter fact, (6.3), Lemma 6.1, (6.2), and the dominated convergence theorem, we have
$$\lim_{t\to 0}\frac{v_t}{t}=\lim_{t\to 0}\frac{S_0I_2}{t}=\lambda_\varepsilon S_0\lim_{t\to 0}\int_{K/S_0}^\infty P\bigl\{X_t-x\ge\log s\mid N_t^\varepsilon=1\bigr\}\,ds=\lambda_\varepsilon S_0\int_{K/S_0}^\infty\lim_{t\to 0}P\bigl\{X_t-x\ge\log s\mid N_t^\varepsilon=1\bigr\}\,ds.$$

Next, using Theorem 4.1, it follows that
$$\lim_{t\to 0}\frac{v_t}{t}=S_0\int_{K/S_0}^\infty A_1(x,\log s)\,ds=S_0\int_{K/S_0}^\infty\int_{\{\gamma(x,\zeta)\ge\log s\}}h(\zeta)\,d\zeta\,ds=\int\bigl(S_0e^{\gamma(x,\zeta)}-K\bigr)^+h(\zeta)\,d\zeta,$$
where we have used Fubini's theorem for the last equality above.

Remark 6.3. As a special case of our result, let $\gamma(x,\zeta)=\zeta$, so that the model reduces to an exponential Lévy model. The above first order asymptotics then becomes
$$\lim_{t\to 0}\frac{1}{t}\,v_t=\int\bigl(S_0e^{\zeta}-K\bigr)^+h(\zeta)\,d\zeta.$$
This recovers the well-known first-order asymptotic behavior of exponential Lévy models (see, e.g., [8] and [31]).

Appendix A. Preliminaries of Malliavin calculus on Wiener-Poisson spaces

In this section, we follow [6] and briefly introduce the Malliavin calculus on Wiener-Poisson space to the extent needed for our purposes. All the details can be found in [6]. We first introduce, in an abstract manner, the notion of a Malliavin operator (the infinitesimal generator of the Ornstein-Uhlenbeck semigroup). Fix a probability space $(\Omega,\mathcal{F},P)$. Let $C_p^2(\mathbb{R}^n)$ be the space of all $C^2$ functions on $\mathbb{R}^n$ which, together with all their partial derivatives up to the second order, have at most polynomial growth.

Definition A.1. A Malliavin operator $(L,\mathcal{R})$ is a linear operator $L$ on a domain $\mathcal{R}\subset\bigcap_{p<\infty}L^p$, taking values in $\bigcap_{p<\infty}L^p$, such that the following conditions hold true:
(1) If $\Phi\in\mathcal{R}^n$ and $F\in C_p^2(\mathbb{R}^n)$, then $F(\Phi)\in\mathcal{R}$.
(2) $\mathcal{F}$ is the $\sigma$-field generated by all functions in $\mathcal{R}$.
(3) $L$ is symmetric in $L^2$, i.e., $E(\Phi L\Psi)=E(\Psi L\Phi)$ for all $\Phi,\Psi\in\mathcal{R}$.
(4) Define $\Gamma(\Phi,\Psi)=L(\Phi\Psi)-\Phi L\Psi-\Psi L\Phi$, for all $\Phi,\Psi\in\mathcal{R}$. Then $\Gamma$ is nonnegative on $\mathcal{R}\times\mathcal{R}$.
(5) For $\Phi=(\Phi_1,\dots,\Phi_n)\in\mathcal{R}^n$ and $F\in C_p^2(\mathbb{R}^n)$, one has
$$L(F\circ\Phi)=\sum_{i=1}^n\frac{\partial F}{\partial x_i}(\Phi)\,L\Phi_i+\frac{1}{2}\sum_{i,j=1}^n\frac{\partial^2F}{\partial x_i\,\partial x_j}(\Phi)\,\Gamma(\Phi_i,\Phi_j).$$

A.1. Malliavin operator on Wiener space. We consider the Wiener space of continuous paths
$$\Omega_W=\bigl(C([0,1],\mathbb{R}),\ (\mathcal{F}_t^W)_{t\le 1},\ P_W\bigr),$$
where:
(1) $C([0,1],\mathbb{R})$ is the space of continuous functions $[0,1]\to\mathbb{R}$;
(2) $(W_t)_{t\ge 0}$ is the coordinate process defined by $W_t(f)=f(t)$, $f\in C([0,1],\mathbb{R})$;
(3) $P_W$ is the Wiener measure;
(4) $(\mathcal{F}_t^W)_{t\le 1}$ is the ($P_W$-completed) natural filtration of $(W_t)_{t\le 1}$.

Now, $(\Omega_W,\mathcal{F}_W,P_W)$ is the canonical 1-dimensional Wiener space. Set
$$\mathcal{S}_W=\bigl\{\Phi=F(W_{t_1},\dots,W_{t_n}),\ F\in C_p^2(\mathbb{R}^n),\ t_i\le 1\bigr\}.$$
For $\Phi$ given as above, define
$$L_W\Phi=-\frac{1}{2}\sum_{i=1}^n\frac{\partial F}{\partial x_i}(W_{t_1},\dots,W_{t_n})\,W_{t_i}+\frac{1}{2}\sum_{i,j=1}^n\frac{\partial^2F}{\partial x_i\,\partial x_j}(W_{t_1},\dots,W_{t_n})\,(t_i\wedge t_j).$$
It is well known that $(L_W,\mathcal{S}_W)$ is a Malliavin operator. Furthermore, for $\Phi=F(W_{t_1},\dots,W_{t_n})$ and $\Psi=H(W_{s_1},\dots,W_{s_m})$,
$$\Gamma_W(\Phi,\Psi)=\sum_{i=1}^n\sum_{j=1}^m\frac{\partial F}{\partial x_i}(W_{t_1},\dots,W_{t_n})\,\frac{\partial H}{\partial x_j}(W_{s_1},\dots,W_{s_m})\,(t_i\wedge s_j).$$

A.2. Malliavin operator on Poisson space. In this section, we consider the canonical probability space $(\Omega_P,\mathcal{F}_P,P_P)$ of a prefixed Poisson random measure $M(dt,d\zeta)$ on $[0,1]\times(\mathbb{R}\setminus\{0\})$. Let $\mu(dt,d\zeta)=dt\,\nu(d\zeta)$ be the intensity measure of $M(dt,d\zeta)$ and denote by $\bar{M}=M-\mu$ its compensated Poisson random measure. Throughout our discussion, we also assume that $\nu$ admits a smooth density $h$ with respect to the Lebesgue measure on $\mathbb{R}$, i.e., $\nu(d\zeta)=h(\zeta)d\zeta$. We denote by $C_c^{b,2}([0,1]\times\mathbb{R}\setminus\{0\})$ the set of all compactly supported Borel functions $f$ on $[0,1]\times\mathbb{R}\setminus\{0\}$ that are of class $C^2$ in $\zeta$ on $\mathbb{R}\setminus\{0\}$, with $f$, $\partial_\zeta f$, $\partial_\zeta^2 f$ uniformly bounded. For such $f$, we write
$$M(f)=\int f(t,\zeta)\,M(dt,d\zeta).$$
Let $\rho:\mathbb{R}\setminus\{0\}\to[0,\infty)$ be an auxiliary function of class $C_b^1$. Set
$$\mathcal{S}_P=\bigl\{\Phi=F\bigl(M(f_1),\dots,M(f_k)\bigr);\ F\in C_p^2(\mathbb{R}^k),\ f_i\in C_c^{b,2}([0,1]\times\mathbb{R}\setminus\{0\})\bigr\}.$$
Given any $\Phi\in\mathcal{S}_P$ as above, define
$$L_P\Phi=\frac{1}{2}\sum_{i=1}^k\frac{\partial F}{\partial x_i}\bigl(M(f_1),\dots,M(f_k)\bigr)\,M\Bigl(\rho\,\partial_\zeta^2f_i+\partial_\zeta\rho\,\partial_\zeta f_i+\rho\,\frac{\partial_\zeta h}{h}\,\partial_\zeta f_i\Bigr)+\frac{1}{2}\sum_{i,j=1}^k\frac{\partial^2F}{\partial x_i\,\partial x_j}\bigl(M(f_1),\dots,M(f_k)\bigr)\,M\bigl(\rho\,\partial_\zeta f_i\,\partial_\zeta f_j\bigr).$$
In the above, $\partial_\zeta^2$ stands for the Laplacian on $\mathbb{R}\setminus\{0\}$.

Proposition A.2. $(L_P,\mathcal{S}_P)$ defines a Malliavin operator. Moreover, for $\Phi=F(M(f_1),\dots,M(f_k))$ and $\Psi=H(M(h_1),\dots,M(h_l))$, we have
$$\Gamma_P(\Phi,\Psi)=\sum_{i=1}^k\sum_{j=1}^l\frac{\partial F}{\partial x_i}\bigl(M(f_1),\dots,M(f_k)\bigr)\,\frac{\partial H}{\partial x_j}\bigl(M(h_1),\dots,M(h_l)\bigr)\,M\bigl(\rho\,D_\zeta f_i\,D_\zeta h_j\bigr).$$

Proof. See Proposition 9-3 in [6].

Remark A.3. As we can see from the above definition, the Malliavin operator for Poisson space is not uniquely defined. Indeed, each different choice of $\rho$ gives a different operator. More remarks on this point are given at the end of this section.
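As a quick sanity check of the Wiener-space formulas above (our illustration, not part of [6] or the paper), take the simplest functional $\Phi=W_t$, i.e., $F(x)=x$ and $n=1$:

```latex
% Worked one-line example (ours): the Wiener-space operator applied to \Phi = W_t.
L_W W_t \;=\; -\tfrac{1}{2}\,W_t \;+\; \tfrac{1}{2}\cdot 0 \;=\; -\tfrac{1}{2}W_t,
\qquad
\Gamma_W(W_t, W_t) \;=\; 1\cdot 1\cdot (t\wedge t) \;=\; t.
```

In particular, the carré du champ of $W_t$ recovers its variance, and one can check directly that $L_W(W_t^2)-2W_tL_WW_t=t$, in agreement with item (4) of Definition A.1.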

A.3. Malliavin operator on Wiener-Poisson space. Keeping the notation of the previous sections, we define a probability space $(\Omega,\mathcal{F},P)$ by
$$(\Omega,\mathcal{F},P)=(\Omega_W,\mathcal{F}_W,P_W)\otimes(\Omega_P,\mathcal{F}_P,P_P).$$
Clearly $(\Omega,\mathcal{F},P)$ is the canonical Wiener-Poisson space. We then define $(L,\mathcal{S})$ to be the direct product of $(L_W,\mathcal{S}_W)$ and $(L_P,\mathcal{S}_P)$. More precisely:
(1) $\mathcal{S}=\{\Phi=F(\Phi_1,\dots,\Phi_n;\Psi_1,\dots,\Psi_m);\ \Phi_i\in\mathcal{S}_W,\ \Psi_j\in\mathcal{S}_P,\ F\in C_p^2\}$.
(2) For $\Phi$ as given in (1), and $\omega=(\omega_W,\omega_P)\in\Omega$,
$$L\Phi(\omega)=L_W\Phi(\cdot,\omega_P)+L_P\Phi(\omega_W,\cdot).$$
It can be shown that $(L,\mathcal{S})$ is a Malliavin operator on $(\Omega,\mathcal{F},P)$. A few properties of $(L,\mathcal{S})$ are listed below.

Proposition A.4. Let $\Phi\in\mathcal{S}_W$ and $\Psi\in\mathcal{S}_P$. We have:
(1) $L\Phi=L_W\Phi$, and $\Gamma(\Phi,\Phi)=\Gamma_W(\Phi,\Phi)$.
(2) $L\Psi=L_P\Psi$, and $\Gamma(\Psi,\Psi)=\Gamma_P(\Psi,\Psi)$.
(3) $\Gamma(\Phi,\Psi)=0$.

Finally, we extend $L$ to a larger domain that is suitable to work with. For this purpose, set
$$\|\Phi\|_{\mathcal{H}_p}=\|\Phi\|_{L^p}+\|L\Phi\|_{L^p}+\bigl\|\Gamma(\Phi,\Phi)^{1/2}\bigr\|_{L^p}$$
and denote by $\mathcal{H}_p$ the closure of $\mathcal{S}$ under $\|\cdot\|_{\mathcal{H}_p}$. Also set $\mathcal{H}_\infty=\bigcap_{p<\infty}\mathcal{H}_p$.

Proposition A.5. The operator $(L,\mathcal{H}_\infty)$ is a Malliavin operator.

Proof. This is the content of Section 8-b in [6].

Remark A.6. As before, a different choice of the auxiliary function $\rho$ in the definition of $L_P$ results in a different Malliavin operator $L$ on the Wiener-Poisson space. Hence, the extended domain $\mathcal{H}_\infty$ may also be different.

Remark A.7. Consider the following stochastic differential equation with regular enough coefficients:
$$X_t^x=x+\int_0^t b(X_{s^-}^x)\,ds+\int_0^t\sigma(X_{s^-}^x)\,dW_s+\int_0^t\int\gamma(X_{s^-}^x,\zeta)\,\bar{M}(ds,d\zeta).$$
To make use of the power of Malliavin calculus to analyze the above SDE, one needs to impose further conditions on $\rho$ so that $X_t^x$ is in the domain $\mathcal{H}_\infty$. Those further conditions are generally determined by the support of the intensity measure $\nu$ and are needed to ensure that certain approximation procedures work out. For more details on the role that $\rho$ plays, see Section 9-1 and Theorem 10-3 in [6]. It is important to note that, among the various choices of $\rho$, one can always choose $\rho=0$, since we impose the non-degeneracy Assumption (C3-a). The intuitive explanation of $\rho=0$ is that, when perturbing the sample path on the Wiener-Poisson space, we only perturb the Brownian path.
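Before turning to the proofs in Appendix B, here is a small numerical sketch (ours, not from the paper) of the first-order option price formula of Theorem 6.2 in the exponential-Lévy special case of Remark 6.3, with a hypothetical double-exponential (Kou-type) density standing in for $h$.

```python
# Sketch (ours): leading-order OTM call price from Theorem 6.2 / Remark 6.3,
# v_t ~ t * integral (S0*exp(zeta) - K)^+ h(zeta) dzeta, for gamma(x, zeta) = zeta and
# a hypothetical double-exponential (Kou-type) Levy density.
import numpy as np
from scipy.integrate import quad

S0, K = 100.0, 110.0                       # spot and (OTM) strike
lam, p, eta1, eta2 = 1.0, 0.4, 10.0, 5.0   # assumed Kou parameters (eta1 > 1 for integrability)

def h(z):
    """Double-exponential Levy density."""
    return lam * (p * eta1 * np.exp(-eta1 * z) if z > 0 else (1 - p) * eta2 * np.exp(eta2 * z))

def integrand(z):
    return max(S0 * np.exp(z) - K, 0.0) * h(z)

# Only zeta > log(K/S0) contributes to the positive part.
leading_coeff, _ = quad(integrand, np.log(K / S0), 50.0)
for t in [1 / 252, 1 / 52, 1 / 12]:
    print(f"t = {t:.5f}   leading-order OTM call price ~ {t * leading_coeff:.6f}")
```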

20 JOSÉ E. FIGUEROA-LÓPEZ AND CHENG OUYANG Appendix B. Proof of the tail distribution expansion The proof of Theorem 4.1 is decomposed into two parts described in the following two subsections. For future reference, we try to keep track of the reminder terms when applying Dynkin s formula (3.6). B.1. The leading term. In order to determine the leading term of (4.1), we analyze the second term in (4.) corresponding to n = 1 (only one large jump). By conditioning on the time of the jump (necessarily uniformly distributed on [, t]), we can write (B.1) Conditioning on F s (B.) where (B.3) P(X t (x) x + y N ε t = 1) = 1 t t P(X t (ε, {s}, x) x + y)ds. (similar to the procedure in (D.3) below), we can write P(X t (ε, {s}, x) x + y) = E (G t s (X s (ε,, x))), G t (z) := G t (z; x, y) := P [X t (ε,, z) + γ(z, J) x + y]. Conditioning the last expression on X t (ε,, z), we have (B.4) G t (z) = EH(X t (ε,, z)), with H(w) := H(w; x, y, z) := P [γ(z, J) x + y w]. Note that w H(w; x, y, z) = Γ(x + y w; z) and (B.5) k wh(w; x, y, z) = ( 1) k 1 k 1 Γ(x + y w; z) ζ k 1. Here and throughout the proof, Γ(ζ; u) is the density of γ(z, J), whose existence is ensured from the Condition (C4) in the introduction, and k ζ Γ = k Γ/ ζ k and k uγ = k Γ/ u k denote its partial derivatives. Next, in light of (C4-a), we can apply Dynkin s formula and get (B.6) where G t (z) = EH(X t (ε,, z)) = H (z; x, y) + H 1 (z; x, y)t + t R1 t (z; x, y), H (z; x, y) := H(z; x, y, z) = P [γ(z, J) x + y z], H 1 (z; x, y) := L ε H(z; x, y, z) = b(z)γ(x + y z; z) σ (z) ζγ(x + y z; z) + (H(z + γ(z, ζ); x, y, z) H(z; x, y, z)) h ε (ζ)dζ (B.7) R 1 t (z; x, y) = = b(z)γ(x + y z; z) σ (z) ζγ(x + y z; z) + Ĥ1,(z; x + y z), 1 (1 α)el εh(x αt (ε,, z); x, y, z)dα, where Ĥ1, is given as in (4.4). Next, using (3.1), (C4-a), and (B.5), (B.8) sup R 1 t (z; x, y) C <, x,y,z

21 SMALL-TIME EXPANSIONS FOR JUMP-DIFFUSIONS 1 for a constant C depending on (1 ζ ) h ε (ζ)dζ, b (k), v (k), and sup u,ζ ζ k Γ(ζ; u), for k =,..., 4. Plugging (B.6) in (B.), we get (B.9) P(X t (ε, {s}, x) x + y) = EH (X s (ε,, x); x, y) + (t s)eh 1 (X s (ε,, x); x, y) (t s) + ER 1 t s(x s (ε,, x); x, y), Finally, we apply Dynkin s formula to EH (X s (ε,, x); x, y). To this end, we show that H (z; x, y) is C 4 b in a neighborhood of x. Note that 1 δ (H (z + δ; x, y) H (z; x, y)) = x+y z δ 1 Γ(ζ; z + δβ) dβdζ 1 x+y z δ Γ(ζ; z)dζ, u δ x+y z where, as before, Γ(ζ; u) denotes the density of γ(u, J). Due to (C4-b), the limit of the previous expression as δ exists and we get (B.1) H (z; x, y) z = x+y z Γ(ζ; z) dζ + Γ(x + y z; z), u which is locally bounded for values of z in a neighborhood of x because of (C4). Similarly, we can show that (B.11) zh (z; x, y) = x+y z uγ(ζ; z)dζ + u Γ(x + y z; z) ζ Γ(x + y z; z), which is also locally bounded for values of z in a neighborhood of x because of (C4). We can similarly proceed to show that k z H (z; x, y) exist and is bounded for k = 3, 4. Using Lemma 3.5, (B.1) where (B.13) EH (X s (ε,, x); x, y) = H, (x; y) + sh,1 (x; y) + s R s(x; y), H, (x; y) := H (x; x, y) = P [γ(x, J) y], H,1 (x; y) := L ε H (x; x, y) = b(x) H (z; x, y) z + σ (x) H (z; x, y) z=x z + (H (x + γ(x, ζ); x, y) H (x; x, y)) h ε (ζ)dζ = b(x) H (z; x, y) z + σ (x) H (z; x, y) z=x z + Ĥ,1(x; y), z=x R s(x; y) := 1 z=x (1 α)el εh,δ (X αs (ε,, x); x, y)dα + E H,δ (X s (ε,, x); x, y), where Ĥ,1(x; y) is defined as in (4.4) and H,δ (z) = H (z)ψ δ (z), and H,δ (z) = H (z) ψ δ (z) with ψ δ defined as in the proof of Lemma 3.5. Note the R s(x; y) = O(1), as s. Plugging (B.1) in (B.9), which can then be plugged in (B.1) to get P(X t (ε, {s}, x) x + y) = P [γ(x, J) y] + O(t), P(X t (x) x + y N ε t = 1) = P [γ(x, J) y] + O(t).

22 JOSÉ E. FIGUEROA-LÓPEZ AND CHENG OUYANG Finally, (4.) can be written as P(X t (x) x + y) = e λεt tλ ε P [γ(x, J) y] + O(t ) = t 1 {γ(x,ζ) y} h(ζ)dζ + O(t ), where the second equality above is valid for ε > small enough. B.. Second order term. In addition to (B.1), we also need to consider the leading terms in the term EH 1 (X s (ε,, x); x, y) of (B.9) and the term P(X t (x) x + y N ε t = ) of (4.). Let us first show that z H 1 (z; x, y) is C b. Let K(ζ; x, y, z) := P [γ(z, J) x + y z γ(z, ζ)] P [γ(z, J) x + y z], and recall that H 1 (z; x, y) = b(z)γ(x + y z; z) σ (z) ζγ(x + y z; z) + K(ζ; x, y, z) h ε (ζ)dζ. Obviously, the first two terms on the right-hand side of the previous expression are Cb in light of (C3) and (C4-a). Hence, for the derivative H 1(z;x,y) to exist, it suffices to show that K(ζ;x,y,z) exists z z and that sup K(ζ; x, y, z) z < C(ζ 1), for any ζ < ε. Note that K(ζ; x, y, z) z z,x,y = Γ(x + y z γ(z, ζ); z) Γ(x + y z; z) }{{} T 1 x+y z + x+y z γ(z,ζ) T γ(z, ζ) + Γ(x + y z γ(z, ζ); z) }{{ z } Γ(ζ ; z) dζ. u }{{} T 3 The following bounds are evident from (C) and (C4-a), for ε > small enough, sup T 1 sup Γ(ζ ; u) x,y,z u,ζ ζ sup γ(z, ζ) C(1 ζ ), z sup T sup Γ(ζ ; u) sup γ(z, ζ) x,y,z u,ζ z z C(1 ζ ), sup T 3 sup Γ(ζ ; u) x,y,z u,ζ u sup γ(z, ζ) C(1 ζ ). z We can similarly prove that H 1 (z;x,y) z to get exists and is bounded. Hence, we can apply Dynkin s formula (B.14) EH 1 (X s (ε,, x); x, y) = H 1, (x, y) + sr 3 s(x; y),


More information

Functional Limit theorems for the quadratic variation of a continuous time random walk and for certain stochastic integrals

Functional Limit theorems for the quadratic variation of a continuous time random walk and for certain stochastic integrals Functional Limit theorems for the quadratic variation of a continuous time random walk and for certain stochastic integrals Noèlia Viles Cuadros BCAM- Basque Center of Applied Mathematics with Prof. Enrico

More information

Lecture 17 Brownian motion as a Markov process

Lecture 17 Brownian motion as a Markov process Lecture 17: Brownian motion as a Markov process 1 of 14 Course: Theory of Probability II Term: Spring 2015 Instructor: Gordan Zitkovic Lecture 17 Brownian motion as a Markov process Brownian motion is

More information

The Wiener Itô Chaos Expansion

The Wiener Itô Chaos Expansion 1 The Wiener Itô Chaos Expansion The celebrated Wiener Itô chaos expansion is fundamental in stochastic analysis. In particular, it plays a crucial role in the Malliavin calculus as it is presented in

More information

Exponential martingales: uniform integrability results and applications to point processes

Exponential martingales: uniform integrability results and applications to point processes Exponential martingales: uniform integrability results and applications to point processes Alexander Sokol Department of Mathematical Sciences, University of Copenhagen 26 September, 2012 1 / 39 Agenda

More information

The Pedestrian s Guide to Local Time

The Pedestrian s Guide to Local Time The Pedestrian s Guide to Local Time Tomas Björk, Department of Finance, Stockholm School of Economics, Box 651, SE-113 83 Stockholm, SWEDEN tomas.bjork@hhs.se November 19, 213 Preliminary version Comments

More information

Small-time asymptotics of stopped Lévy bridges and simulation schemes with controlled bias

Small-time asymptotics of stopped Lévy bridges and simulation schemes with controlled bias Small-time asymptotics of stopped Lévy bridges and simulation schemes with controlled bias José E. Figueroa-López 1 1 Department of Statistics Purdue University Seoul National University & Ajou University

More information

I forgot to mention last time: in the Ito formula for two standard processes, putting

I forgot to mention last time: in the Ito formula for two standard processes, putting I forgot to mention last time: in the Ito formula for two standard processes, putting dx t = a t dt + b t db t dy t = α t dt + β t db t, and taking f(x, y = xy, one has f x = y, f y = x, and f xx = f yy

More information

Properties of an infinite dimensional EDS system : the Muller s ratchet

Properties of an infinite dimensional EDS system : the Muller s ratchet Properties of an infinite dimensional EDS system : the Muller s ratchet LATP June 5, 2011 A ratchet source : wikipedia Plan 1 Introduction : The model of Haigh 2 3 Hypothesis (Biological) : The population

More information

A Change of Variable Formula with Local Time-Space for Bounded Variation Lévy Processes with Application to Solving the American Put Option Problem 1

A Change of Variable Formula with Local Time-Space for Bounded Variation Lévy Processes with Application to Solving the American Put Option Problem 1 Chapter 3 A Change of Variable Formula with Local Time-Space for Bounded Variation Lévy Processes with Application to Solving the American Put Option Problem 1 Abstract We establish a change of variable

More information

Mathematical Methods for Neurosciences. ENS - Master MVA Paris 6 - Master Maths-Bio ( )

Mathematical Methods for Neurosciences. ENS - Master MVA Paris 6 - Master Maths-Bio ( ) Mathematical Methods for Neurosciences. ENS - Master MVA Paris 6 - Master Maths-Bio (2014-2015) Etienne Tanré - Olivier Faugeras INRIA - Team Tosca November 26th, 2014 E. Tanré (INRIA - Team Tosca) Mathematical

More information

Harnack Inequalities and Applications for Stochastic Equations

Harnack Inequalities and Applications for Stochastic Equations p. 1/32 Harnack Inequalities and Applications for Stochastic Equations PhD Thesis Defense Shun-Xiang Ouyang Under the Supervision of Prof. Michael Röckner & Prof. Feng-Yu Wang March 6, 29 p. 2/32 Outline

More information

Wiener Measure and Brownian Motion

Wiener Measure and Brownian Motion Chapter 16 Wiener Measure and Brownian Motion Diffusion of particles is a product of their apparently random motion. The density u(t, x) of diffusing particles satisfies the diffusion equation (16.1) u

More information

THE MALLIAVIN CALCULUS FOR SDE WITH JUMPS AND THE PARTIALLY HYPOELLIPTIC PROBLEM

THE MALLIAVIN CALCULUS FOR SDE WITH JUMPS AND THE PARTIALLY HYPOELLIPTIC PROBLEM Takeuchi, A. Osaka J. Math. 39, 53 559 THE MALLIAVIN CALCULUS FOR SDE WITH JUMPS AND THE PARTIALLY HYPOELLIPTIC PROBLEM ATSUSHI TAKEUCHI Received October 11, 1. Introduction It has been studied by many

More information

LAN property for sde s with additive fractional noise and continuous time observation

LAN property for sde s with additive fractional noise and continuous time observation LAN property for sde s with additive fractional noise and continuous time observation Eulalia Nualart (Universitat Pompeu Fabra, Barcelona) joint work with Samy Tindel (Purdue University) Vlad s 6th birthday,

More information

Exercises in stochastic analysis

Exercises in stochastic analysis Exercises in stochastic analysis Franco Flandoli, Mario Maurelli, Dario Trevisan The exercises with a P are those which have been done totally or partially) in the previous lectures; the exercises with

More information

GAUSSIAN PROCESSES; KOLMOGOROV-CHENTSOV THEOREM

GAUSSIAN PROCESSES; KOLMOGOROV-CHENTSOV THEOREM GAUSSIAN PROCESSES; KOLMOGOROV-CHENTSOV THEOREM STEVEN P. LALLEY 1. GAUSSIAN PROCESSES: DEFINITIONS AND EXAMPLES Definition 1.1. A standard (one-dimensional) Wiener process (also called Brownian motion)

More information

Lecture 4: Introduction to stochastic processes and stochastic calculus

Lecture 4: Introduction to stochastic processes and stochastic calculus Lecture 4: Introduction to stochastic processes and stochastic calculus Cédric Archambeau Centre for Computational Statistics and Machine Learning Department of Computer Science University College London

More information

Applications of Ito s Formula

Applications of Ito s Formula CHAPTER 4 Applications of Ito s Formula In this chapter, we discuss several basic theorems in stochastic analysis. Their proofs are good examples of applications of Itô s formula. 1. Lévy s martingale

More information

STAT 331. Martingale Central Limit Theorem and Related Results

STAT 331. Martingale Central Limit Theorem and Related Results STAT 331 Martingale Central Limit Theorem and Related Results In this unit we discuss a version of the martingale central limit theorem, which states that under certain conditions, a sum of orthogonal

More information

FE610 Stochastic Calculus for Financial Engineers. Stevens Institute of Technology

FE610 Stochastic Calculus for Financial Engineers. Stevens Institute of Technology FE610 Stochastic Calculus for Financial Engineers Lecture 3. Calculaus in Deterministic and Stochastic Environments Steve Yang Stevens Institute of Technology 01/31/2012 Outline 1 Modeling Random Behavior

More information

A new approach for investment performance measurement. 3rd WCMF, Santa Barbara November 2009

A new approach for investment performance measurement. 3rd WCMF, Santa Barbara November 2009 A new approach for investment performance measurement 3rd WCMF, Santa Barbara November 2009 Thaleia Zariphopoulou University of Oxford, Oxford-Man Institute and The University of Texas at Austin 1 Performance

More information

Weak convergence and large deviation theory

Weak convergence and large deviation theory First Prev Next Go To Go Back Full Screen Close Quit 1 Weak convergence and large deviation theory Large deviation principle Convergence in distribution The Bryc-Varadhan theorem Tightness and Prohorov

More information

Discretization of SDEs: Euler Methods and Beyond

Discretization of SDEs: Euler Methods and Beyond Discretization of SDEs: Euler Methods and Beyond 09-26-2006 / PRisMa 2006 Workshop Outline Introduction 1 Introduction Motivation Stochastic Differential Equations 2 The Time Discretization of SDEs Monte-Carlo

More information

Jump Processes. Richard F. Bass

Jump Processes. Richard F. Bass Jump Processes Richard F. Bass ii c Copyright 214 Richard F. Bass Contents 1 Poisson processes 1 1.1 Definitions............................. 1 1.2 Stopping times.......................... 3 1.3 Markov

More information

Stochastic Calculus February 11, / 33

Stochastic Calculus February 11, / 33 Martingale Transform M n martingale with respect to F n, n =, 1, 2,... σ n F n (σ M) n = n 1 i= σ i(m i+1 M i ) is a Martingale E[(σ M) n F n 1 ] n 1 = E[ σ i (M i+1 M i ) F n 1 ] i= n 2 = σ i (M i+1 M

More information

Metric Spaces and Topology

Metric Spaces and Topology Chapter 2 Metric Spaces and Topology From an engineering perspective, the most important way to construct a topology on a set is to define the topology in terms of a metric on the set. This approach underlies

More information

Elliptic Operators with Unbounded Coefficients

Elliptic Operators with Unbounded Coefficients Elliptic Operators with Unbounded Coefficients Federica Gregorio Universitá degli Studi di Salerno 8th June 2018 joint work with S.E. Boutiah, A. Rhandi, C. Tacelli Motivation Consider the Stochastic Differential

More information

On the Converse Law of Large Numbers

On the Converse Law of Large Numbers On the Converse Law of Large Numbers H. Jerome Keisler Yeneng Sun This version: March 15, 2018 Abstract Given a triangular array of random variables and a growth rate without a full upper asymptotic density,

More information

Kolmogorov Equations and Markov Processes

Kolmogorov Equations and Markov Processes Kolmogorov Equations and Markov Processes May 3, 013 1 Transition measures and functions Consider a stochastic process {X(t)} t 0 whose state space is a product of intervals contained in R n. We define

More information

1 Lyapunov theory of stability

1 Lyapunov theory of stability M.Kawski, APM 581 Diff Equns Intro to Lyapunov theory. November 15, 29 1 1 Lyapunov theory of stability Introduction. Lyapunov s second (or direct) method provides tools for studying (asymptotic) stability

More information

THE INVERSE FUNCTION THEOREM

THE INVERSE FUNCTION THEOREM THE INVERSE FUNCTION THEOREM W. PATRICK HOOPER The implicit function theorem is the following result: Theorem 1. Let f be a C 1 function from a neighborhood of a point a R n into R n. Suppose A = Df(a)

More information

ON THE POLICY IMPROVEMENT ALGORITHM IN CONTINUOUS TIME

ON THE POLICY IMPROVEMENT ALGORITHM IN CONTINUOUS TIME ON THE POLICY IMPROVEMENT ALGORITHM IN CONTINUOUS TIME SAUL D. JACKA AND ALEKSANDAR MIJATOVIĆ Abstract. We develop a general approach to the Policy Improvement Algorithm (PIA) for stochastic control problems

More information

Lecture 21 Representations of Martingales

Lecture 21 Representations of Martingales Lecture 21: Representations of Martingales 1 of 11 Course: Theory of Probability II Term: Spring 215 Instructor: Gordan Zitkovic Lecture 21 Representations of Martingales Right-continuous inverses Let

More information

Solving the Poisson Disorder Problem

Solving the Poisson Disorder Problem Advances in Finance and Stochastics: Essays in Honour of Dieter Sondermann, Springer-Verlag, 22, (295-32) Research Report No. 49, 2, Dept. Theoret. Statist. Aarhus Solving the Poisson Disorder Problem

More information

Stochastic integration. P.J.C. Spreij

Stochastic integration. P.J.C. Spreij Stochastic integration P.J.C. Spreij this version: April 22, 29 Contents 1 Stochastic processes 1 1.1 General theory............................... 1 1.2 Stopping times...............................

More information

Markov Chain Approximation of Pure Jump Processes. Nikola Sandrić (joint work with Ante Mimica and René L. Schilling) University of Zagreb

Markov Chain Approximation of Pure Jump Processes. Nikola Sandrić (joint work with Ante Mimica and René L. Schilling) University of Zagreb Markov Chain Approximation of Pure Jump Processes Nikola Sandrić (joint work with Ante Mimica and René L. Schilling) University of Zagreb 8 th International Conference on Lévy processes Angers, July 25-29,

More information

A Concise Course on Stochastic Partial Differential Equations

A Concise Course on Stochastic Partial Differential Equations A Concise Course on Stochastic Partial Differential Equations Michael Röckner Reference: C. Prevot, M. Röckner: Springer LN in Math. 1905, Berlin (2007) And see the references therein for the original

More information

Backward Stochastic Differential Equations with Infinite Time Horizon

Backward Stochastic Differential Equations with Infinite Time Horizon Backward Stochastic Differential Equations with Infinite Time Horizon Holger Metzler PhD advisor: Prof. G. Tessitore Università di Milano-Bicocca Spring School Stochastic Control in Finance Roscoff, March

More information

9 Brownian Motion: Construction

9 Brownian Motion: Construction 9 Brownian Motion: Construction 9.1 Definition and Heuristics The central limit theorem states that the standard Gaussian distribution arises as the weak limit of the rescaled partial sums S n / p n of

More information

µ X (A) = P ( X 1 (A) )

µ X (A) = P ( X 1 (A) ) 1 STOCHASTIC PROCESSES This appendix provides a very basic introduction to the language of probability theory and stochastic processes. We assume the reader is familiar with the general measure and integration

More information

SEPARABLE TERM STRUCTURES AND THE MAXIMAL DEGREE PROBLEM. 1. Introduction This paper discusses arbitrage-free separable term structure (STS) models

SEPARABLE TERM STRUCTURES AND THE MAXIMAL DEGREE PROBLEM. 1. Introduction This paper discusses arbitrage-free separable term structure (STS) models SEPARABLE TERM STRUCTURES AND THE MAXIMAL DEGREE PROBLEM DAMIR FILIPOVIĆ Abstract. This paper discusses separable term structure diffusion models in an arbitrage-free environment. Using general consistency

More information

Ernesto Mordecki 1. Lecture III. PASI - Guanajuato - June 2010

Ernesto Mordecki 1. Lecture III. PASI - Guanajuato - June 2010 Optimal stopping for Hunt and Lévy processes Ernesto Mordecki 1 Lecture III. PASI - Guanajuato - June 2010 1Joint work with Paavo Salminen (Åbo, Finland) 1 Plan of the talk 1. Motivation: from Finance

More information

A MODEL FOR THE LONG-TERM OPTIMAL CAPACITY LEVEL OF AN INVESTMENT PROJECT

A MODEL FOR THE LONG-TERM OPTIMAL CAPACITY LEVEL OF AN INVESTMENT PROJECT A MODEL FOR HE LONG-ERM OPIMAL CAPACIY LEVEL OF AN INVESMEN PROJEC ARNE LØKKA AND MIHAIL ZERVOS Abstract. We consider an investment project that produces a single commodity. he project s operation yields

More information

Statistical inference on Lévy processes

Statistical inference on Lévy processes Alberto Coca Cabrero University of Cambridge - CCA Supervisors: Dr. Richard Nickl and Professor L.C.G.Rogers Funded by Fundación Mutua Madrileña and EPSRC MASDOC/CCA student workshop 2013 26th March Outline

More information

LECTURE 2: LOCAL TIME FOR BROWNIAN MOTION

LECTURE 2: LOCAL TIME FOR BROWNIAN MOTION LECTURE 2: LOCAL TIME FOR BROWNIAN MOTION We will define local time for one-dimensional Brownian motion, and deduce some of its properties. We will then use the generalized Ray-Knight theorem proved in

More information

PROGRESSIVE ENLARGEMENTS OF FILTRATIONS AND SEMIMARTINGALE DECOMPOSITIONS

PROGRESSIVE ENLARGEMENTS OF FILTRATIONS AND SEMIMARTINGALE DECOMPOSITIONS PROGRESSIVE ENLARGEMENTS OF FILTRATIONS AND SEMIMARTINGALE DECOMPOSITIONS Libo Li and Marek Rutkowski School of Mathematics and Statistics University of Sydney NSW 26, Australia July 1, 211 Abstract We

More information

Homework #6 : final examination Due on March 22nd : individual work

Homework #6 : final examination Due on March 22nd : individual work Université de ennes Année 28-29 Master 2ème Mathématiques Modèles stochastiques continus ou à sauts Homework #6 : final examination Due on March 22nd : individual work Exercise Warm-up : behaviour of characteristic

More information

n [ F (b j ) F (a j ) ], n j=1(a j, b j ] E (4.1)

n [ F (b j ) F (a j ) ], n j=1(a j, b j ] E (4.1) 1.4. CONSTRUCTION OF LEBESGUE-STIELTJES MEASURES In this section we shall put to use the Carathéodory-Hahn theory, in order to construct measures with certain desirable properties first on the real line

More information

Densities for the Navier Stokes equations with noise

Densities for the Navier Stokes equations with noise Densities for the Navier Stokes equations with noise Marco Romito Università di Pisa Universitat de Barcelona March 25, 2015 Summary 1 Introduction & motivations 2 Malliavin calculus 3 Besov bounds 4 Other

More information

INVARIANT MANIFOLDS WITH BOUNDARY FOR JUMP-DIFFUSIONS

INVARIANT MANIFOLDS WITH BOUNDARY FOR JUMP-DIFFUSIONS INVARIANT MANIFOLDS WITH BOUNDARY FOR JUMP-DIFFUSIONS DAMIR FILIPOVIĆ, STFAN TAPP, AND JOSF TICHMANN Abstract. We provide necessary and sufficient conditions for stochastic invariance of finite dimensional

More information

The Stein and Chen-Stein methods for functionals of non-symmetric Bernoulli processes

The Stein and Chen-Stein methods for functionals of non-symmetric Bernoulli processes The Stein and Chen-Stein methods for functionals of non-symmetric Bernoulli processes Nicolas Privault Giovanni Luca Torrisi Abstract Based on a new multiplication formula for discrete multiple stochastic

More information

Convoluted Brownian motions: a class of remarkable Gaussian processes

Convoluted Brownian motions: a class of remarkable Gaussian processes Convoluted Brownian motions: a class of remarkable Gaussian processes Sylvie Roelly Random models with applications in the natural sciences Bogotá, December 11-15, 217 S. Roelly (Universität Potsdam) 1

More information

On a class of stochastic differential equations in a financial network model

On a class of stochastic differential equations in a financial network model 1 On a class of stochastic differential equations in a financial network model Tomoyuki Ichiba Department of Statistics & Applied Probability, Center for Financial Mathematics and Actuarial Research, University

More information

Dynkin (λ-) and π-systems; monotone classes of sets, and of functions with some examples of application (mainly of a probabilistic flavor)

Dynkin (λ-) and π-systems; monotone classes of sets, and of functions with some examples of application (mainly of a probabilistic flavor) Dynkin (λ-) and π-systems; monotone classes of sets, and of functions with some examples of application (mainly of a probabilistic flavor) Matija Vidmar February 7, 2018 1 Dynkin and π-systems Some basic

More information

Supermodular ordering of Poisson arrays

Supermodular ordering of Poisson arrays Supermodular ordering of Poisson arrays Bünyamin Kızıldemir Nicolas Privault Division of Mathematical Sciences School of Physical and Mathematical Sciences Nanyang Technological University 637371 Singapore

More information

Martingale Problems. Abhay G. Bhatt Theoretical Statistics and Mathematics Unit Indian Statistical Institute, Delhi

Martingale Problems. Abhay G. Bhatt Theoretical Statistics and Mathematics Unit Indian Statistical Institute, Delhi s Abhay G. Bhatt Theoretical Statistics and Mathematics Unit Indian Statistical Institute, Delhi Lectures on Probability and Stochastic Processes III Indian Statistical Institute, Kolkata 20 24 November

More information

Brownian Motion. An Undergraduate Introduction to Financial Mathematics. J. Robert Buchanan. J. Robert Buchanan Brownian Motion

Brownian Motion. An Undergraduate Introduction to Financial Mathematics. J. Robert Buchanan. J. Robert Buchanan Brownian Motion Brownian Motion An Undergraduate Introduction to Financial Mathematics J. Robert Buchanan 2010 Background We have already seen that the limiting behavior of a discrete random walk yields a derivation of

More information

Variance Reduction Techniques for Monte Carlo Simulations with Stochastic Volatility Models

Variance Reduction Techniques for Monte Carlo Simulations with Stochastic Volatility Models Variance Reduction Techniques for Monte Carlo Simulations with Stochastic Volatility Models Jean-Pierre Fouque North Carolina State University SAMSI November 3, 5 1 References: Variance Reduction for Monte

More information

On continuous time contract theory

On continuous time contract theory Ecole Polytechnique, France Journée de rentrée du CMAP, 3 octobre, 218 Outline 1 2 Semimartingale measures on the canonical space Random horizon 2nd order backward SDEs (Static) Principal-Agent Problem

More information

Asymptotics for posterior hazards

Asymptotics for posterior hazards Asymptotics for posterior hazards Pierpaolo De Blasi University of Turin 10th August 2007, BNR Workshop, Isaac Newton Intitute, Cambridge, UK Joint work with Giovanni Peccati (Université Paris VI) and

More information

Exact Simulation of Multivariate Itô Diffusions

Exact Simulation of Multivariate Itô Diffusions Exact Simulation of Multivariate Itô Diffusions Jose Blanchet Joint work with Fan Zhang Columbia and Stanford July 7, 2017 Jose Blanchet (Columbia/Stanford) Exact Simulation of Diffusions July 7, 2017

More information

A Note on the Central Limit Theorem for a Class of Linear Systems 1

A Note on the Central Limit Theorem for a Class of Linear Systems 1 A Note on the Central Limit Theorem for a Class of Linear Systems 1 Contents Yukio Nagahata Department of Mathematics, Graduate School of Engineering Science Osaka University, Toyonaka 560-8531, Japan.

More information

Chapter 6. Markov processes. 6.1 Introduction

Chapter 6. Markov processes. 6.1 Introduction Chapter 6 Markov processes 6.1 Introduction It is not uncommon for a Markov process to be defined as a sextuple (, F, F t,x t, t, P x ), and for additional notation (e.g.,,, S,P t,r,etc.) tobe introduced

More information

Richard F. Bass Krzysztof Burdzy University of Washington

Richard F. Bass Krzysztof Burdzy University of Washington ON DOMAIN MONOTONICITY OF THE NEUMANN HEAT KERNEL Richard F. Bass Krzysztof Burdzy University of Washington Abstract. Some examples are given of convex domains for which domain monotonicity of the Neumann

More information

p 1 ( Y p dp) 1/p ( X p dp) 1 1 p

p 1 ( Y p dp) 1/p ( X p dp) 1 1 p Doob s inequality Let X(t) be a right continuous submartingale with respect to F(t), t 1 P(sup s t X(s) λ) 1 λ {sup s t X(s) λ} X + (t)dp 2 For 1 < p

More information

56 4 Integration against rough paths

56 4 Integration against rough paths 56 4 Integration against rough paths comes to the definition of a rough integral we typically take W = LV, W ; although other choices can be useful see e.g. remark 4.11. In the context of rough differential

More information

GENERALIZED COVARIATION FOR BANACH SPACE VALUED PROCESSES, ITÔ FORMULA AND APPLICATIONS

GENERALIZED COVARIATION FOR BANACH SPACE VALUED PROCESSES, ITÔ FORMULA AND APPLICATIONS Di Girolami, C. and Russo, F. Osaka J. Math. 51 (214), 729 783 GENERALIZED COVARIATION FOR BANACH SPACE VALUED PROCESSES, ITÔ FORMULA AND APPLICATIONS CRISTINA DI GIROLAMI and FRANCESCO RUSSO (Received

More information

CIMPA SCHOOL, 2007 Jump Processes and Applications to Finance Monique Jeanblanc

CIMPA SCHOOL, 2007 Jump Processes and Applications to Finance Monique Jeanblanc CIMPA SCHOOL, 27 Jump Processes and Applications to Finance Monique Jeanblanc 1 Jump Processes I. Poisson Processes II. Lévy Processes III. Jump-Diffusion Processes IV. Point Processes 2 I. Poisson Processes

More information

(B(t i+1 ) B(t i )) 2

(B(t i+1 ) B(t i )) 2 ltcc5.tex Week 5 29 October 213 Ch. V. ITÔ (STOCHASTIC) CALCULUS. WEAK CONVERGENCE. 1. Quadratic Variation. A partition π n of [, t] is a finite set of points t ni such that = t n < t n1

More information

Cores for generators of some Markov semigroups

Cores for generators of some Markov semigroups Cores for generators of some Markov semigroups Giuseppe Da Prato, Scuola Normale Superiore di Pisa, Italy and Michael Röckner Faculty of Mathematics, University of Bielefeld, Germany and Department of

More information

The Uses of Differential Geometry in Finance

The Uses of Differential Geometry in Finance The Uses of Differential Geometry in Finance Andrew Lesniewski Bloomberg, November 21 2005 The Uses of Differential Geometry in Finance p. 1 Overview Joint with P. Hagan and D. Woodward Motivation: Varadhan

More information