SMALL-TIME EXPANSIONS FOR LOCAL JUMP-DIFFUSION MODELS WITH INFINITE JUMP ACTIVITY


JOSÉ E. FIGUEROA-LÓPEZ AND CHENG OUYANG

Abstract. We consider a Markov process $\{X_t^{(x)}\}_{t\ge 0}$ with initial condition $X_0^{(x)}=x$, which is the solution of a stochastic differential equation driven by a Lévy process $Z$ and an independent Wiener process $W$. Under some regularity conditions, including non-degeneracy of the diffusive and jump components of the process as well as smoothness of the Lévy density of $Z$, we obtain a small-time second-order polynomial expansion in $t$ for the tail distribution and the transition density of the process $X$. The method of proof combines a recent approach for regularizing the tail probability $P(X_t^{(x)}\ge x+y)$ with classical results, based on Malliavin calculus, for purely-jump processes, which have to be extended here to deal with the mixture model $X$. As an application, the leading term for out-of-the-money option prices in short maturity under a local jump-diffusion model is also derived.

1. Introduction

It is well known among practitioners that the market implied volatility skew becomes more pronounced as the expiration time approaches. This phenomenon indicates that a jump risk component should be added to classical purely-continuous financial models (e.g. local volatility models and stochastic volatility models) in order to reproduce more accurately the implied volatility skews observed in short-term option prices (see [8], Chapter 1, for further motivation of jumps in asset price modeling). Moreover, further studies have shown that accurate modeling of the option market and asset prices requires a mixture of a continuous diffusive component and a jump component. For instance, based on statistical methods, several empirical studies have statistically rejected the null hypothesis of either a purely-continuous or a purely-jump model (see, e.g., [1], [], [3], [6]).
Similarly, by contrasting the observed behavior of at-the-money (ATM) and out-of-the-money (OTM) call option prices near expiration with their theoretical analogs with and without jumps, [7] and [] argued that both a continuous and a jump component are necessary to explain the implied volatility behavior of S&P 500 index options.

The study of small-time asymptotics of option prices and implied volatilities has grown significantly during the last decade, as it provides a convenient tool for testing various pricing models and for calibrating the parameters of each model. For recent accounts of the subject in the case of purely-continuous models, we refer the reader to [15] for local volatility models, [9] for the Heston model, [13] for a general uncorrelated local-stochastic volatility model, and [5] for general stochastic volatility models. Broadly speaking, for purely-continuous models, OTM option prices vanish at the rate $O(e^{-c/t})$ as the time-to-maturity $t$ tends to $0$. In this case, the corresponding implied volatility converges to a nonnegative constant, which is characterized by the Riemannian distance from the current stock price to the strike price under the Riemannian metric induced by the diffusion coefficient of the model.

1991 Mathematics Subject Classification. 60J75, 60G51, 60F10.
First author supported in part by NSF Grant DMS-96919.

For Lévy processes, [1], [8] and [31] show that OTM option prices are generally asymptotically proportional to the time-to-maturity $t$. This fact in turn implies that the corresponding implied volatility explodes as $t\to 0$ in such models. The explosion rate was proved to be asymptotically equivalent to $(t\log(1/t))^{-1/2}$ in [31], while second- and higher-order asymptotics were obtained in [1] and [14], respectively. A similar result was also obtained for a stochastic volatility model with independent Lévy jumps in [1], while [1] studied the first-order short-time asymptotics for OTM and ATM option prices under a relatively general class of Itô semimartingales with jumps.

In spite of the ample literature on the asymptotic behavior of transition densities and option prices for either purely-continuous or purely-jump models, results on local jump-diffusion models are scarce. One difficulty in dealing with these models arises from the more complex interplay of the jump and continuous components. The current article is thus an important contribution in this direction, analyzing the higher-order asymptotic behavior of the transition distributions and densities in local jump-diffusion models.

In addition to the asymptotics of vanilla option prices, short-time asymptotics for the transition densities and distributions of Markov processes have also proved useful in other financial applications, such as non-parametric estimation of the model under high-frequency sampling and Monte Carlo based pricing of path-dependent options such as lookback or Asian options. In all these applications, a certain discretization of the continuous-time object under study is needed and, in that case, short-time asymptotics are important in the study of the rate of convergence of the discrete-time proxies to their continuous-time analogs.
The small-time behavior of the transition densities of Markov processes $\{X_t^{(x)}\}_{t\ge 0}$ with deterministic initial condition $X_0^{(x)}=x$ has been studied for a long time in the literature, especially for either purely-continuous or purely-discontinuous processes. Starting with the problem of existence, there are several sets of sufficient conditions for the existence of the transition densities, $y\mapsto p_t(y;x)$, largely based on the machinery of Malliavin calculus, originally developed for continuous diffusions (see the monograph []) and then extended to Markov processes with jumps (see the monograph [6]). This approach can also yield estimates of the transition density $p_t(y;x)$ in small time $t$. For purely-jump Markov processes, the key assumption is that the Lévy measure of the process admits a smooth Lévy density. The pioneer of this approach was Léandre [17], who obtained the first-order small-time asymptotic behavior of the transition density for fully supported Lévy densities. This result was extended in [16] to the case where the point $y$ cannot be reached with only one jump from $x$ but rather with finitely many jumps, while [4] developed a new method for proving the existence of the transition densities which allows Lévy measures with a possibly singular component. The main result in [17] states that

\lim_{t\to 0} \frac{1}{t}\, p_t(y;x) = g(y;x),

where $g(y;x)$ is the Lévy density of the process $X$, to be defined below (see (1.6)). Léandre's approach consists of separating the small jumps (say, those with sizes smaller than an $\varepsilon>0$) and the large jumps of the underlying Lévy process, and conditioning on the number of large jumps by time $t$. Malliavin calculus is then applied to control the density given that there is no large jump. For $\varepsilon>0$ small enough, the term corresponding to exactly one large jump is shown to be equivalent, up to a remainder of order $o(t)$, to the term that would result if there were no small-jump component at all.
Finally, the terms corresponding to more than one large jump are shown to be of order $O(t^2)$. Higher-order expansions of the transition density of Markov processes with jumps have been considered only quite recently, and only for processes with finite jump activity (see, e.g., [3]) or for Lévy

processes with possibly infinite jump activity. We focus on the literature of the latter case due to its close connection with the present work. The first attempt at obtaining higher-order expansions for Lévy processes was given in [9] using Léandre's approach. Concretely, the following expansion for the transition densities $\{p_t(y)\}_{t\ge 0}$ of a Lévy process $\{Z_t\}_{t\ge 0}$ was proposed there:

(1.1)  p_t(y) := \frac{d}{dy} P(Z_t \le y) = \sum_{n=1}^{N-1} a_n(y) \frac{t^n}{n!} + O(t^N), \qquad (y \ne 0,\ N \in \mathbb{N}).

As in [17], the idea was to justify that each higher-order term (say, the term corresponding to $k$ large jumps) can be replaced, up to a remainder of order $O(t^N)$, by the density that would result if there were no small-jump component. However, this method of proof fails for $k\ge 2$, and the expressions proposed there are in general incorrect for the higher-order terms (in fact, they are only valid for compound Poisson processes). This issue was subsequently resolved in [11] using a new approach, where the expansion (1.1) was obtained under the assumption that the Lévy density of the Lévy process $\{Z_t\}_{t\ge 0}$ is sufficiently smooth, together with the following technical condition:

(1.2)  \sup_{0<s<t_0}\, \sup_{|y|>\delta}\, \big|p_s^{(k)}(y)\big| < \infty,

for any $k\in\mathbb{N}$ and $\delta>0$, and some $t_0 := t_0(\delta,k) > 0$. There are two key ideas in [11]. Firstly, instead of working directly with the transition densities, the following analogous expansions for the tail probabilities were considered:

(1.3)  P(Z_t \ge y) = \sum_{n=1}^{N-1} A_n(y) \frac{t^n}{n!} + t^N R_t(y), \qquad (y>0,\ N\in\mathbb{N}),

where $\sup_{0<t\le t_0} |R_t(y)| < \infty$, for some $t_0>0$. Secondly, by considering a smooth thresholding of the large jumps (so that the density of the large jumps is smooth) and conditioning on the size of the first jump, it was possible to regularize the discontinuous functional $\mathbf{1}_{\{Z_t\ge y\}}$ and, subsequently, to use an iterated Dynkin's formula to expand the resulting smooth moment functions $E(f(Z_t))$ as a power series in $t$.
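The leading term of a tail expansion of the form (1.3) can be illustrated numerically in the simplest possible setting: a compound Poisson process, where $P(Z_t\ge y)$ is available in closed form. This finite-activity case lies outside the infinite-activity focus of the discussion above and serves purely as a sanity check; the intensity, jump law (Exp(1)), and threshold below are illustrative choices, not taken from the paper.

```python
import math

# Sanity check of the leading term in a tail expansion like (1.3):
# P(Z_t >= y) / t  ->  nu([y, inf))  as t -> 0, for a compound Poisson
# process with intensity lam and Exp(1) jumps (illustrative parameters).
lam, y = 1.5, 2.0

def erlang_tail(n, y):
    # P(J_1 + ... + J_n >= y) for i.i.d. Exp(1) jumps:
    # the Erlang tail exp(-y) * sum_{k<n} y^k / k!
    return math.exp(-y) * sum(y**k / math.factorial(k) for k in range(n))

def tail(t, y, terms=30):
    # P(Z_t >= y) = sum_{n>=1} P(N_t = n) * P(J_1 + ... + J_n >= y)
    return sum(math.exp(-lam * t) * (lam * t)**n / math.factorial(n)
               * erlang_tail(n, y) for n in range(1, terms))

A1 = lam * math.exp(-y)  # nu([y, inf)) = lam * P(J >= y) for Exp(1) jumps
for t in [1e-2, 1e-3, 1e-4]:
    print(t, tail(t, y) / t, A1)
```

The printed ratio $P(Z_t\ge y)/t$ approaches the Lévy tail mass $\nu([y,\infty))$ as $t$ shrinks, in line with the first-order term of (1.3).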
The expansion (1.1) was then obtained by differentiating (1.3), after justifying that the functions $A_n(y)$ and the remainder $R_t(y)$ are differentiable in $y$. Recently, [1] formally proved that (1.1) can be derived from (1.3) using only the conditions of Léandre [17], without the technical condition (1.2).

The results described in the previous paragraph open the door to the study of higher-order expansions for the transition densities of more general Markov models with infinite jump activity. We take the analysis one step further and consider a jump-diffusion model with non-degenerate diffusion and jump components. Our analysis can also be applied to purely-discontinuous processes as in [17], but we prefer to take a mixture model due to its relevance in financial applications where, as explained above, both continuous and jump components are necessary. More concretely, we consider the following stochastic differential equation (SDE):

(1.4)  X_t(x) = x + \int_0^t b(X_{u^-}(x))\,du + \int_0^t \sigma(X_{u^-}(x))\,dW_u + \sum_{u\in(0,t]:\,\Delta Z_u\ne 0} \gamma(X_{u^-}(x), \Delta Z_u),

where $b,\sigma:\mathbb{R}\to\mathbb{R}$ and $\gamma:\mathbb{R}\times\mathbb{R}\to\mathbb{R}$ are suitable deterministic functions, $\{W_t\}_{t\ge 0}$ is a Wiener process, and $\{Z_t\}_{t\ge 0}$ is an independent purely-jump Lévy process of bounded variation. The latter condition on $Z$ is not essential for our analysis and is imposed only for notational convenience.

Under some regularity conditions on $b$, $\sigma$, and $\gamma$, as well as on the Lévy measure $\nu$ of $Z$, we show the following second-order expansion (as $t\to 0$) for the tail distribution of $\{X_t(x)\}_{t\ge 0}$:

(1.5)  P(X_t(x) \ge x+y) = t\,A_1(x;y) + t^2 A_2(x;y) + O(t^3), \qquad x\in\mathbb{R},\ y>0.

The key assumptions on the SDE's coefficients are some non-degeneracy conditions on $\sigma$ and $\gamma$, which ensure the existence of the transition density and allow us to use Malliavin calculus. As in [17], the key assumption on the Lévy measure $\nu$ of $Z$ is that it admits a density $h:\mathbb{R}\setminus\{0\}\to\mathbb{R}_+$ that is bounded and sufficiently smooth outside any neighborhood of the origin. In that case, the leading term $A_1(x;y)$ depends only on the jump component of the process, as follows:

A_1(x;y) = \nu(\{\zeta : \gamma(x,\zeta)\ge y\}) = \int_{\{\zeta:\gamma(x,\zeta)\ge y\}} h(\zeta)\,d\zeta.

The second-order term $A_2(x;y)$ admits a slightly more complicated (but explicit) representation. As a consequence of this correction term $A_2(x;y)$, one can explicitly characterize the effects of the drift $b$ and the diffusion coefficient $\sigma$ of the process on the likelihood of a large positive move (say, a move of size more than $y$) during a short time period $t$. For instance, in the case that $\gamma$ does not depend on the current state $x$ (i.e. $\gamma(x,\zeta)=\gamma(\zeta)$ for all $x$), we find that a positive spot drift value $b(x)$ will increase the probability of a large positive move by $b(x)\,g(y)\,t^2(1+o(1))$, where $g(y)$ is the so-called Lévy density of the process $X$, defined by $g(y) := -\frac{d}{dy}\,\nu(\{\zeta:\gamma(\zeta)\ge y\})$. Similarly, since typically $g'(y)<0$ when $y>0$, the effect of a non-zero spot volatility $\sigma(x)$ is to increase the probability of a large positive move by $-\frac{1}{2}\sigma^2(x)\,g'(y)\,t^2(1+o(1))$.

Once the asymptotic expansion for the tail distribution is obtained, we take the analysis one step further and obtain a second-order expansion for the transition density function $p_t(y;x)$ by differentiating the tail distribution with respect to $y$, under appropriate regularity conditions.
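The drift and volatility corrections just described can be made concrete with a small numerical sketch. All choices below are hypothetical (state-independent $\gamma(\zeta)=\zeta$, Lévy density $h(\zeta)=e^{-\zeta}$ for $\zeta>0$, and illustrative values of $b(x)$, $\sigma(x)$, $t$); the point is only to show the quantities $g(y)$, $b(x)g(y)t^2$, and $-\tfrac12\sigma^2(x)g'(y)t^2$ in action.

```python
import math

# Illustration of the Levy density g(y) := -d/dy nu({gamma(zeta) >= y}) and of
# the second-order drift/volatility corrections, for the hypothetical choices
# gamma(zeta) = zeta and h(zeta) = exp(-zeta), zeta > 0 (so g(y) = exp(-y)).
def nu_tail(y, n=20000, upper=50.0):
    # trapezoidal quadrature of nu([y, inf)) = int_y^upper exp(-z) dz
    step = (upper - y) / n
    zs = [y + i * step for i in range(n + 1)]
    w = [0.5 if i in (0, n) else 1.0 for i in range(n + 1)]
    return step * sum(wi * math.exp(-z) for wi, z in zip(w, zs))

def g(y, dy=1e-3):
    # central finite difference of the tail mass
    return -(nu_tail(y + dy) - nu_tail(y - dy)) / (2.0 * dy)

y, b, sigma, t = 1.0, 0.05, 0.3, 1.0 / 252   # illustrative spot drift/vol
drift_corr = b * g(y) * t**2                 # b(x) g(y) t^2
gprime = (g(y + 1e-3) - g(y - 1e-3)) / 2e-3  # g'(y) < 0 here
vol_corr = -0.5 * sigma**2 * gprime * t**2   # -(1/2) sigma^2(x) g'(y) t^2
print(g(y), drift_corr, vol_corr)
```

Both corrections come out positive, matching the interpretation that a positive spot drift and a non-zero spot volatility each raise the probability of a large positive move at order $t^2$.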
As expected, the leading term of the transition density $p_t(y;x)$ is of the form $t\,g(y;x)$, where $g(y;x)$ is the so-called Lévy density of the process $\{X_t(x)\}_{t\ge 0}$, defined by

(1.6)  g(y;x) := -\frac{d}{dy}\, \nu(\{\zeta : \gamma(x,\zeta)\ge y\}).

One of the main difficulties here comes from controlling the first term (which corresponds to the case of no large jumps). To this end, we generalize the result in [17] to the case where there is a non-degenerate diffusion component. Again, Malliavin calculus proves to be the key tool for this task.

As an application of the above asymptotic formulas, we derive the leading term of the small-time expansion for the prices of out-of-the-money European call options with strike price $K$ and expiry $T$. Specifically, let $\{S_t\}_{t\ge 0}$ be the stock price process and denote $X_t = \log S_t$ for each $t$. We assume that $P$ is the option pricing measure and that, under this measure, the process $\{X_t\}_{t\ge 0}$ is of the form (1.4). Then, under certain regularity conditions, we prove that

(1.7)  \lim_{t\to 0} \frac{1}{t}\, E(S_t - K)^+ = \int \big(S_0\, e^{\gamma(x,\zeta)} - K\big)^+ h(\zeta)\,d\zeta.

A special case of the above result is when $\gamma(x,\zeta)=\zeta$, in which the model reduces to an exponential Lévy model. In this case, the above first-order asymptotics becomes

\lim_{t\to 0} \frac{1}{t}\, E(S_t - K)^+ = \int \big(S_0\, e^{\zeta} - K\big)^+ h(\zeta)\,d\zeta,
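The limiting integral in (1.7) is straightforward to evaluate numerically. The sketch below uses a hypothetical exponential-Lévy-style specification, $\gamma(x,\zeta)=\zeta$ with Lévy density $h(\zeta)=2\lambda e^{-2\zeta}$ for $\zeta>0$ (an illustrative choice for which the integral has the closed form $\lambda S_0^2/K$), and compares quadrature against that closed form.

```python
import math

# Quadrature evaluation of the first-order OTM limit in (1.7) for the
# hypothetical choices gamma(x, zeta) = zeta, h(zeta) = 2*lam*exp(-2*zeta)
# on zeta > 0 (illustrative only; closed form of the integral: lam*S0^2/K).
S0, K, lam = 1.0, 1.5, 0.8       # OTM call: K > S0
k = math.log(K / S0)             # payoff is nonzero only for zeta > k

def integrand(z):
    # (S0 e^z - K) h(z); nonnegative on [k, inf) so the (.)^+ can be dropped
    return (S0 * math.exp(z) - K) * 2.0 * lam * math.exp(-2.0 * z)

def limit_price_over_t(n=200000, upper=40.0):
    # trapezoidal rule for int_k^upper (S0 e^z - K)^+ h(z) dz
    step = (upper - k) / n
    total = 0.5 * (integrand(k) + integrand(upper))
    total += sum(integrand(k + i * step) for i in range(1, n))
    return step * total

exact = lam * S0**2 / K
print(limit_price_over_t(), exact)
```

For small maturities, the OTM call price is then approximately $t$ times this limit, reflecting the linear-in-$t$ decay of OTM prices under jump models.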

which recovers the well-known first-order asymptotic behavior for exponential Lévy models (see, e.g., [8] and [31]). It is important to note that (1.7) is derived from the asymptotics for the tail distributions (1.5), rather than from those for the transition density, given that the former requires less stringent conditions on the coefficients of (1.4). An important related paper is [19], where (1.7) was obtained for a wide class of multi-factor Lévy Markov models under certain technical conditions (see Theorem 2.1 therein), including the requirement that $\lim_{t\to 0} E(S_t-K)^+/t$ exists in the out-of-the-money region and an integrability condition such as

\int \big(|\gamma(x,\zeta)| \vee 1\big)\, e^{\gamma(x,\zeta) + \varepsilon|\gamma(x,\zeta)|}\, h(\zeta)\,d\zeta < \infty, \qquad \text{for some } \varepsilon>0.

The paper is organized as follows. In Section 2, we introduce the model and the assumptions needed for our results. The probabilistic tools, such as the iterated Dynkin's formula as well as tail estimates for semimartingales with bounded jumps, are presented in Section 3. The main results of the paper are then stated in Sections 4 and 5, where the second-order expansions for the tail distributions and the transition densities are obtained, respectively. The application of the expansion for the tail distribution to option pricing in local jump-diffusion financial models is presented in Section 6. The proofs of our main results, as well as some preliminaries on Malliavin calculus on Wiener-Poisson spaces, are given in several appendices.

2. The Setup and Assumptions

In this paper, let $\{Z_t\}_{t\ge 0}$ be a driftless pure-jump Lévy process of bounded variation and let $\{W_t\}_{t\ge 0}$ be a Wiener process independent of $Z$, both defined on a complete probability space $(\Omega,\mathcal{F},P)$ equipped with the natural filtration $(\mathcal{F}_t)_{t\ge 0}$ generated by $W$ and $Z$, augmented by all the null sets in $\mathcal{F}$ so that it satisfies the usual conditions (see, e.g., [7]).
We consider the following local jump-diffusion model:

(2.1)  X_t(x) = x + \int_0^t b(X_{u^-}(x))\,du + \int_0^t \sigma(X_{u^-}(x))\,dW_u + \sum_{0<u\le t} \gamma(X_{u^-}(x), \Delta Z_u),

where $b,\sigma:\mathbb{R}\to\mathbb{R}$ and $\gamma:\mathbb{R}\times\mathbb{R}\to\mathbb{R}$ are suitable deterministic functions such that (2.1) admits a unique solution (see, e.g., [3]). It is convenient to write $Z$ in terms of its Lévy-Itô representation:

Z_t = \sum_{0<u\le t} \Delta Z_u = \int_0^t \int_{\mathbb{R}_0} \zeta\, M(du,d\zeta).

Here, $\mathbb{R}_0 = \mathbb{R}\setminus\{0\}$ and $M$ is the jump measure of the process $Z$, which is a Poisson random measure on $\mathbb{R}_+\times\mathbb{R}_0$. Furthermore, we make the following standing assumption about $Z$:

(C1) The Lévy measure $\nu$ of $Z$ has a $C^\infty(\mathbb{R}\setminus\{0\})$ strictly positive density $h$ such that, for every $\varepsilon>0$ and $n\ge 0$,

(2.2)  (i)\ \int (1\wedge|\zeta|)\, h(\zeta)\,d\zeta < \infty, \qquad (ii)\ \sup_{|\zeta|>\varepsilon} \big|h^{(n)}(\zeta)\big| < \infty.

Remark 2.1. The condition of bounded variation on $Z$ is not essential for our results and is imposed only for notational convenience. We could have taken a general Lévy process $Z$ and replaced the last term in (2.1) with a finite-variation pure-jump process plus a compensated Poisson integral

as follows:

X_t(x) = x + \int_0^t b(X_{u^-}(x))\,du + \int_0^t \sigma(X_{u^-}(x))\,dW_u + \int_0^t \int_{|\zeta|>1} \gamma(X_{u^-}(x),\zeta)\, M(du,d\zeta) + \int_0^t \int_{|\zeta|\le 1} \gamma(X_{u^-}(x),\zeta)\, \bar{M}(du,d\zeta),

where, as usual, $\bar{M}(du,d\zeta) := M(du,d\zeta) - \nu(d\zeta)\,du$ is the compensated Poisson random measure.

Throughout the paper, the function $\gamma$ satisfies the following assumption:

(C2) For any $x,\zeta\in\mathbb{R}$, $\gamma(x,0)=0$, $\gamma(\cdot,\cdot)\in C^\infty$, and

(2.3)  \sup_x \Big|\frac{\partial^i \gamma(x,\zeta)}{\partial x^i}\Big| \le C\,(|\zeta|\wedge 1), \qquad i=0,1,2,

for some $C<\infty$. In particular, $\sup_{x,\zeta} \big|\partial^i\gamma(x,\zeta)/\partial x^i\big| < \infty$, for $i=0,1,2$.

We also assume the following non-degeneracy and boundedness conditions, which were also imposed in [17]:

(C3-a) The functions $b(x)$ and $v(x):=\sigma^2(x)/2$ belong to $C_b^\infty(\mathbb{R})$, and there exists a constant $\delta>0$ such that $\sigma(x)\ge\delta$ for all $x$.

(C3-b) For the same constant $\delta$ above, we have $\big|1 + \frac{\partial\gamma}{\partial x}(x,\zeta)\big| \ge \delta$ for all $x,\zeta\in\mathbb{R}$.

As is usually the case with Lévy processes, we shall decompose $Z$ into a compound Poisson process and a process with bounded jumps. More specifically, let $\varphi_\varepsilon\in C^\infty(\mathbb{R})$ be a truncation function such that $\mathbf{1}_{|\zeta|\ge\varepsilon} \le \varphi_\varepsilon(\zeta) \le \mathbf{1}_{|\zeta|\ge\varepsilon/2}$, and let $\{Z_t(\varepsilon)\}_{t\ge 0}$ and $\{Z_t'(\varepsilon)\}_{t\ge 0}$ be independent driftless Lévy processes of bounded variation with Lévy measures $\varphi_\varepsilon(\zeta)h(\zeta)\,d\zeta$ and $(1-\varphi_\varepsilon(\zeta))h(\zeta)\,d\zeta$, respectively. Without loss of generality, we assume hereafter that

(2.4)  Z_t = Z_t(\varepsilon) + Z_t'(\varepsilon).

Note that $Z_t(\varepsilon)$ is a compound Poisson process with jump intensity $\lambda_\varepsilon := \int \varphi_\varepsilon(\zeta)h(\zeta)\,d\zeta$ and jumps with probability density function

(2.5)  h_\varepsilon(\zeta) := \frac{\varphi_\varepsilon(\zeta)\,h(\zeta)}{\lambda_\varepsilon}.

Throughout the paper, $J := J^\varepsilon$ will represent a random variable with density $h_\varepsilon(\zeta)$. The following assumptions will also be needed in the sequel:

(C4-a) For any $u\in\mathbb{R}$ and $\varepsilon>0$ small enough (independently of $u$), the random variable $\gamma(u,J^\varepsilon)$ admits a density $\Gamma(\zeta;u) := \Gamma_\varepsilon(\zeta;u)$ in $C_b^\infty(\mathbb{R}\times\mathbb{R})$, the class of functions with bounded continuous partial derivatives of arbitrary order, such that, for every $n,m\ge 0$,

\sup_{u,\zeta} \Big|\frac{\partial^{n+m}\Gamma(\zeta;u)}{\partial\zeta^n\,\partial u^m}\Big| < \infty.
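A smooth truncation $\varphi_\varepsilon$ with the sandwich property $\mathbf{1}_{|\zeta|\ge\varepsilon} \le \varphi_\varepsilon \le \mathbf{1}_{|\zeta|\ge\varepsilon/2}$ is easy to build explicitly from the standard $C^\infty$ bump function; the construction below is one such sketch (not the paper's specific choice, which is left unspecified).

```python
import math

# One concrete smooth truncation phi_eps with
# 1_{|z| >= eps} <= phi_eps(z) <= 1_{|z| >= eps/2},
# built from the classical C^infinity bump f(u) = exp(-1/u) for u > 0.
def f(u):
    return math.exp(-1.0 / u) if u > 0 else 0.0

def smoothstep(u):
    # C^infinity transition: equals 0 for u <= 0 and 1 for u >= 1
    return f(u) / (f(u) + f(1.0 - u))

def phi_eps(z, eps):
    # ramps smoothly from 0 at |z| = eps/2 up to 1 at |z| = eps
    return smoothstep((abs(z) - eps / 2.0) / (eps / 2.0))

eps = 0.1
print(phi_eps(0.04, eps), phi_eps(0.075, eps), phi_eps(0.2, eps))
```

With this $\varphi_\varepsilon$, the split (2.4) separates $Z$ into the compound Poisson part $Z(\varepsilon)$ (Lévy measure $\varphi_\varepsilon h$, intensity $\lambda_\varepsilon$) and the small-jump part $Z'(\varepsilon)$ (Lévy measure $(1-\varphi_\varepsilon)h$), with the large-jump density $h_\varepsilon$ of (2.5) inheriting the smoothness of $\varphi_\varepsilon h$.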

(C4-b) In addition to (C4-a), we also have that, for any $z\in\mathbb{R}$, $\delta>0$, and $m\ge 0$, there exists a $\delta'>0$ such that

\sup_{u\in(z-\delta',z+\delta')} \int_{|\zeta|\ge\delta} \Big|\frac{\partial^m \Gamma(\zeta;u)}{\partial u^m}\Big|\, d\zeta < \infty.

Remark 2.2. Note that

\Gamma_\varepsilon(\zeta;u) = \frac{\partial}{\partial\zeta} \int \mathbf{1}_{\{\gamma(u;y)\le\zeta\}}\, h_\varepsilon(y)\,dy = \frac{1}{\lambda_\varepsilon}\, \frac{\partial}{\partial\zeta} \int \mathbf{1}_{\{\gamma(u;y)\le\zeta\}}\, \varphi_\varepsilon(y)h(y)\,dy.

In particular, if for each $u\in\mathbb{R}$ the function $y\mapsto\gamma(u,y)$ is strictly increasing, then

\Gamma_\varepsilon(\zeta;u) = h_\varepsilon\big(\gamma^{-1}(u;\zeta)\big)\, \Big(\frac{\partial\gamma}{\partial\zeta}\big(u,\gamma^{-1}(u;\zeta)\big)\Big)^{-1},

where $\gamma^{-1}(u;\zeta)$ is the inverse of the function $y\mapsto\gamma(u,y)$ (i.e. $\gamma(u;\gamma^{-1}(u;\zeta))=\zeta$). As a consequence, since $\gamma^{-1}(u;\zeta)\ne 0$ for any $\zeta\ne 0$, there exists an $\varepsilon_0(u)>0$ small enough such that

\lambda_\varepsilon\, \Gamma_\varepsilon(\zeta;u) = h\big(\gamma^{-1}(u;\zeta)\big)\, \Big(\frac{\partial\gamma}{\partial\zeta}\big(u;\gamma^{-1}(u;\zeta)\big)\Big)^{-1} =: a_1(u;\zeta),

for any $\varepsilon<\varepsilon_0$. Note also that for $\Gamma_\varepsilon(\zeta;u)$ to be bounded, it suffices that (2.2-ii) with $n=0$ and $\inf_{u,\zeta} \frac{\partial\gamma(u;\zeta)}{\partial\zeta} > 0$ hold true. Similarly, $\partial\Gamma_\varepsilon(\zeta;u)/\partial\zeta$ can be written as

h_\varepsilon'\big(\gamma^{-1}(u;\zeta)\big) \Big(\frac{\partial\gamma}{\partial\zeta}\big(u,\gamma^{-1}(u;\zeta)\big)\Big)^{-2} - h_\varepsilon\big(\gamma^{-1}(u;\zeta)\big) \Big(\frac{\partial\gamma}{\partial\zeta}\big(u,\gamma^{-1}(u;\zeta)\big)\Big)^{-3} \frac{\partial^2\gamma}{\partial\zeta^2}\big(u,\gamma^{-1}(u;\zeta)\big),

which can be bounded uniformly if, in addition, (2.2-ii) with $n=1$ and (2.3) hold true. One can deal similarly with higher-order derivatives of $\Gamma_\varepsilon$. This observation justifies the use of a truncation function $\varphi_\varepsilon$ to regularize the Lévy density $h$ around the origin.

We next define an important class of processes. For a collection of times $\{s_1,\ldots,s_n\}$ in $\mathbb{R}_+$, let $\{X_t(\varepsilon,\{s_1,\ldots,s_n\},x)\}_{t\ge 0}$ be the solution of the SDE

X_t(\varepsilon,\{s_1,\ldots,s_n\},x) := x + \int_0^t b\big(X_{u^-}(\varepsilon,\{s_1,\ldots,s_n\},x)\big)\,du + \int_0^t \sigma\big(X_{u^-}(\varepsilon,\{s_1,\ldots,s_n\},x)\big)\,dW_u + \sum_{0<u\le t} \gamma\big(X_{u^-}(\varepsilon,\{s_1,\ldots,s_n\},x), \Delta Z_u'(\varepsilon)\big) + \sum_{i:\,s_i\le t} \gamma\big(X_{s_i^-}(\varepsilon,\{s_1,\ldots,s_n\},x), J_i^\varepsilon\big),

where $\{J_i^\varepsilon\}_{i\ge 1}$ represents a random sample from the distribution $\varphi_\varepsilon(\zeta)h(\zeta)\,d\zeta/\lambda_\varepsilon$, independent of $Z$. Similarly, we define $\{X_t(\varepsilon,\emptyset,x)\}_{t\ge 0}$ as the solution of the SDE

(2.6)  X_t(\varepsilon,\emptyset,x) := x + \int_0^t b\big(X_{u^-}(\varepsilon,\emptyset,x)\big)\,du + \int_0^t \sigma\big(X_{u^-}(\varepsilon,\emptyset,x)\big)\,dW_u + \sum_{0<u\le t} \gamma\big(X_{u^-}(\varepsilon,\emptyset,x), \Delta Z_u'(\varepsilon)\big).

3. Probabilistic tools

Throughout this section, $C_b^n(I)$ (resp. $C_b^n$) denotes the class of functions having continuous and bounded derivatives of order $k\le n$ in an open interval $I\subset\mathbb{R}$ (resp. in $\mathbb{R}$). Also, $\|g\|_\infty = \sup_y |g(y)|$.
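The change-of-variables identity for $\Gamma_\varepsilon$ in Remark 2.2 above can be checked numerically. The sketch below uses hypothetical choices, $\gamma(u,y) = y\,(1+u^2)$ (strictly increasing in $y$) and a jump density $h_\varepsilon(y)=e^{-(y-1)}$ supported on $[1,\infty)$, and compares the formula $h_\varepsilon(\gamma^{-1}(u;\zeta))\,(\partial\gamma/\partial\zeta)^{-1}$ against a finite difference of the exact distribution function of $\gamma(u,J)$.

```python
import math

# Check of the change-of-variables density formula of Remark 2.2 for the
# hypothetical choices gamma(u, y) = y * (1 + u^2) and
# h_eps(y) = exp(-(y - 1)) on y >= 1 (illustrative only).
def h_eps(y):
    return math.exp(-(y - 1.0)) if y >= 1.0 else 0.0

def cdf_gamma(zeta, u):
    # P(gamma(u, J) <= zeta) = P(J <= zeta / (1 + u^2)) for J ~ h_eps
    yv = zeta / (1.0 + u * u)
    return 1.0 - math.exp(-(yv - 1.0)) if yv >= 1.0 else 0.0

def density_by_formula(zeta, u):
    # h_eps(gamma^{-1}(u; zeta)) / (d gamma / d y) with d gamma/d y = 1 + u^2
    return h_eps(zeta / (1.0 + u * u)) / (1.0 + u * u)

def density_by_finite_diff(zeta, u, d=1e-6):
    return (cdf_gamma(zeta + d, u) - cdf_gamma(zeta - d, u)) / (2.0 * d)

u, zeta = 0.7, 2.5
print(density_by_formula(zeta, u), density_by_finite_diff(zeta, u))
```

The two values agree to numerical precision, as the remark's formula predicts for any strictly increasing $y\mapsto\gamma(u,y)$.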

3.1. Uniform tail probability estimates. The following general result will be important in the sequel.

Proposition 3.1. Let $M$ be a Poisson random measure on $\mathbb{R}_+\times\mathbb{R}_0$ and let $\bar{M}$ be its compensated random measure. Let $Y := Y(x)$ be the solution of the SDE

Y_t = x + \int_0^t b(Y_{s^-})\,ds + \int_0^t \sigma(Y_{s^-})\,dW_s + \int_0^t \int \gamma(Y_{s^-},\zeta)\,\bar{M}(ds,d\zeta).

Assume that $b$ and $\sigma$ are uniformly bounded and that $\gamma(x,\zeta)$ is regular enough so that the second and third integrals on the right-hand side of the above equation are martingales. Furthermore, suppose that the jumps of $\{Y_t\}_{t\ge 0}$ are bounded by some constant $S$. Then there exists a constant $C(S)$, depending on $S$, such that, for all $t\le 1$,

P\Big\{\sup_{s\le t} |Y_s(x)-x| \ge pS\Big\} \le C(S)\, t^p.

Proof. Let $M_t$ be either $\int_0^t \sigma(Y_{s^-})\,dW_s$ or $\int_0^t \int \gamma(Y_{s^-},z)\,\bar{M}(ds,dz)$. It is clear that $M_t$ is a martingale with jumps bounded by $S$. Moreover, its quadratic variation satisfies $\langle M,M\rangle_t \le kt$ for some constant $k$. By [18] (or (1.3) in [17]), we have

(3.1)  P\Big\{\sup_{s\le t} |M_s| \ge C\Big\} \le 2\exp\Big[-\lambda C + \frac{\lambda^2}{S^2}\, kt\,\big(1+\exp[\lambda S]\big)\Big].

Now take $C = pS$ and $\lambda = |\log t|/S$. The rest of the proof is then clear.

As a direct corollary of the above proposition, we have the following tail probability estimate for the small-jump component $\{X_t(\varepsilon,\emptyset,x)\}_{t\ge 0}$ of $X$ introduced in (2.6).

Lemma 3.2. Fix any $\eta>0$ and a positive integer $N$. Then, under the Conditions (C1)-(C3) of Section 2, there exist $\varepsilon_0>0$ and $C := C(N,\eta) < \infty$ such that:

(1) For all $t<1$,

(3.2)  \sup_{0<\varepsilon<\varepsilon_0,\, x\in\mathbb{R}} P\big(|X_t(\varepsilon,\emptyset,x)-x| \ge \eta\big) < C\, t^N.

(2) For all $t<1$,

\sup_{0<\varepsilon<\varepsilon_0,\, x\in\mathbb{R}} \int_\eta^\infty P\big\{e^{|X_t(\varepsilon,\emptyset,x)-x|} \ge s\big\}\,ds = \sup_{0<\varepsilon<\varepsilon_0,\, x\in\mathbb{R}} \int_\eta^\infty P\big\{|X_t(\varepsilon,\emptyset,x)-x| \ge \log s\big\}\,ds < C\, t^N.

Proof. The first statement is a special case of Proposition 3.1, which can be applied in light of the boundedness conditions (C3) as well as Condition (C2), which ensure that the components

\int_0^t \sigma\big(X_{u^-}(\varepsilon,\emptyset,x)\big)\,dW_u, \qquad \int_0^t \int \gamma\big(X_{u^-}(\varepsilon,\emptyset,x),\zeta\big)\,\bar{M}'(du,d\zeta),

are martingales. Here, $\bar{M}'$ is the compensated Poisson measure of $Z'(\varepsilon)$ (see Remark 3.4 below for the notation). To prove the second statement, we keep the notation of the proof of Proposition 3.1. By (3.1), there exists a constant $C>0$ such that

\int_\eta^\infty P\big\{|X_t(\varepsilon,\emptyset,x)-x| \ge \log s\big\}\,ds \le C \int_\eta^\infty \exp\Big[-\lambda\log s + \frac{\lambda^2}{\varepsilon^2}\, kt\,\big(1+\exp[\lambda\varepsilon]\big)\Big]\,ds = \frac{C\eta}{(\lambda-1)\,\eta^{\lambda}}\, \exp\Big[\frac{\lambda^2}{\varepsilon^2}\, kt\,\big(1+\exp[\lambda\varepsilon]\big)\Big].

Now it suffices to take $\lambda = |\log t|/\varepsilon$ and $\varepsilon_0 = \log\eta/N$.

3.2. Iterated Dynkin's formula. The following well-known formula will be important in the sequel. The case $n=0$ can be proved using Itô's formula (see Section 1.5 in [3]), while the general case follows by induction.

Lemma 3.3. Let $Y := Y(x)$ be the solution of the SDE

(3.3)  dY_t = b(Y_{t^-})\,dt + \sigma(Y_{t^-})\,dW_t + \int_{|\zeta|\le 1} \gamma(Y_{t^-},\zeta)\,\bar{M}(dt,d\zeta) + \int_{|\zeta|>1} \gamma(Y_{t^-},\zeta)\,M(dt,d\zeta),
(3.4)  Y_0 = x,

and let $L$ be its infinitesimal generator, defined by

(3.5)  (Lf)(y) := \frac{\sigma^2(y)}{2}\, f''(y) + b(y)\, f'(y) + \int \big( f(y+\gamma(y,\zeta)) - f(y) - \gamma(y,\zeta)\, f'(y)\,\mathbf{1}_{\{|\zeta|\le 1\}} \big)\, \nu(d\zeta),

for any $f\in C_b^2$. Then, we have

(3.6)  E f(Y_t) = f(x) + \sum_{k=1}^n \frac{t^k}{k!}\, L^k f(x) + \frac{t^{n+1}}{n!} \int_0^1 (1-\alpha)^n\, E\big\{L^{n+1} f(Y_{\alpha t})\big\}\, d\alpha,

valid for any $n\ge 0$ and $f\in C_b^2$ such that $L^k f\in C_b^2$, for each $k=1,\ldots,n$.

Remark 3.4. Below, $Y$ is taken to be the process (2.6), which is of the form (3.3)-(3.4) with underlying driving Lévy process $\{Z_t'(\varepsilon)\}_{t\ge 0}$ (the small-jump component of $Z$ defined in (2.4)). In particular, denoting by $M'$ the jump measure of $Z'(\varepsilon)$, we have

X_t(\varepsilon,\emptyset,x) = x + \int_0^t b\big(X_{u^-}(\varepsilon,\emptyset,x)\big)\,du + \int_0^t \sigma\big(X_{u^-}(\varepsilon,\emptyset,x)\big)\,dW_u + \int_0^t \int \gamma\big(X_{u^-}(\varepsilon,\emptyset,x),\zeta\big)\, M'(du,d\zeta).

Since $(1-\varphi_\varepsilon(\zeta))\,\nu(d\zeta)$ is the Lévy measure of $Z'(\varepsilon)$, we can write $X_t(\varepsilon,\emptyset,x)$ as in (3.3)-(3.4) with $\bar\gamma(y,\zeta) = \gamma(y,\zeta)$, $\bar\sigma(y) = \sigma(y)$, and

\bar{b}(y) = b(y) + \int_{|\zeta|\le 1} \gamma(y,\zeta)\,\big(1-\varphi_\varepsilon(\zeta)\big)\,\nu(d\zeta).

After substituting these coefficient functions in (3.5) and some simplification, we get the following infinitesimal generator for $\{X_t(\varepsilon,\emptyset,x)\}_{t\ge 0}$:

(3.7)  L_\varepsilon f(y) := \frac{\sigma^2(y)}{2}\, f''(y) + b(y)\, f'(y) + \int \big( f(y+\gamma(y,\zeta)) - f(y) \big)\, \bar{h}_\varepsilon(\zeta)\,d\zeta,

where $\bar{h}_\varepsilon(\zeta) := (1-\varphi_\varepsilon(\zeta))\,h(\zeta)$.
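The mechanics of the iterated Dynkin's formula (3.6) can be checked by hand in the purely continuous special case $dY_t = b\,dt + \sigma\,dW_t$, where $L = b\,\frac{d}{dy} + \frac{\sigma^2}{2}\frac{d^2}{dy^2}$ and $E f(Y_t)$ is available in closed form for polynomial $f$. The sketch below (illustrative parameters, test function $f(y)=y^4$, $Y_0=0$) verifies that the two-term expansion leaves an $O(t^3)$ remainder.

```python
# Sanity check of the iterated Dynkin formula (3.6), n = 1, in the purely
# continuous case dY = b dt + sigma dW, Y_0 = 0, with f(y) = y^4
# (illustrative parameters; E f(Y_t) is exact for a N(b t, sigma^2 t) law).
b, sigma = 0.4, 1.3

def exact(t):
    # E (b t + sigma W_t)^4 = (b t)^4 + 6 (b t)^2 sigma^2 t + 3 sigma^4 t^2
    return (b * t)**4 + 6.0 * (b * t)**2 * sigma**2 * t + 3.0 * sigma**4 * t**2

def dynkin_two_terms(t):
    # f(0) + t (L f)(0) + (t^2/2) (L^2 f)(0):
    # L f = 4 b y^3 + 6 sigma^2 y^2 vanishes at y = 0, and (L^2 f)(0) = 6 sigma^4
    return 0.0 + t * 0.0 + 0.5 * t * t * 6.0 * sigma**4

for t in [0.1, 0.01]:
    print(t, exact(t) - dynkin_two_terms(t))
```

The printed remainder equals $6 b^2 \sigma^2 t^3 + b^4 t^4$, i.e. it shrinks at the $O(t^3)$ rate that (3.6) with $n=1$ guarantees.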

The following result gives conditions for the validity of the second-order Dynkin's formula (3.6) when $Y$ is driven by the small-jump component of $Z$ (its proof is given in Appendix D).

Lemma 3.5. Suppose that $\gamma$, $b$, and $\sigma$ satisfy the Conditions (C2)-(C3) of Section 2, and let $f$ be a bounded function that is $C_b^4$ in a neighborhood of $x$. Then, the expansion

(3.8)  E f\big(X_t(\varepsilon,\emptyset,x)\big) = f(x) + t\, L_\varepsilon f(x) + t^2 R_t(\varepsilon,x)

holds true, with $R_t(\varepsilon,x)$ such that

(3.9)  \sup_{0<t<1,\ 0<\varepsilon<\varepsilon_0} |R_t(\varepsilon,x)| \le K,

for a constant $K<\infty$ depending only on $\int(1\wedge|\zeta|)h(\zeta)\,d\zeta$, $\|f^{(k)}\mathbf{1}_{(x-\delta,x+\delta)}\|_\infty$ for some small enough $\delta>0$, $\|b^{(k)}\|_\infty$, and $\|v^{(k)}\|_\infty$ ($k=0,1,2$). Moreover, if $f\in C_b^4$, then

(3.10)  \sup_{0<t<1,\ 0<\varepsilon<\varepsilon_0,\ x\in\mathbb{R}} |R_t(\varepsilon,x)| \le K,

for some $K<\infty$.

The following small-time expansion will also be needed (see Appendix D for a proof).

Lemma 3.6. Let $s\in(0,t)$ and $f\in C_b^2$. Then,

E f\big(X_t(\varepsilon,\{s\},x)\big) = G_0(x) + s\,\hat{R}_s(x) + (t-s)\, E\,\tilde{R}_{t-s}\big(X_s(\varepsilon,\emptyset,x)\big),

where $G_0(x) := E\big[f(x+\gamma(x,J))\big]$, $\sup_{u>0,z\in\mathbb{R}} |\hat{R}_u(z)| < \infty$, and $\sup_{u>0,z\in\mathbb{R}} |\tilde{R}_u(z)| < \infty$.

4. Second order expansion for the tail distributions

In this section, we consider the small-time behavior of the tail distribution of $\{X_t(x)\}_{t\ge 0}$:

(4.1)  \bar{F}_t(x,y) := P(X_t(x) \ge x+y), \qquad (y>0).

The key idea is to use the decomposition (2.4) and to condition on the number of large jumps occurring before time $t$. Concretely, denoting by $\{N_t^\varepsilon\}_{t\ge 0}$ and $\lambda_\varepsilon := \int\varphi_\varepsilon(\zeta)h(\zeta)\,d\zeta$ the jump counting process and the jump intensity of the compound Poisson process $\{Z_t(\varepsilon)\}_{t\ge 0}$, respectively, we have

(4.2)  P(X_t(x) \ge x+y) = e^{-\lambda_\varepsilon t} \sum_{n=0}^\infty P\big(X_t(x) \ge x+y \,\big|\, N_t^\varepsilon = n\big)\, \frac{(\lambda_\varepsilon t)^n}{n!}.

The first term in (4.2) (when $n=0$) can be written as

P\big(X_t(x) \ge x+y \,\big|\, N_t^\varepsilon = 0\big) = P\big(X_t(\varepsilon,\emptyset,x) \ge x+y\big).

In light of (3.2), this term can be made $O(t^N)$ for an arbitrarily large $N\ge 1$ by taking $\varepsilon$ small enough. In order to deal with the other terms in (4.2), we use the iterated Dynkin's formula introduced in Section 3.2. The following is the main result of this section (see Appendix B for the proof).
Below, $J$ denotes a random variable with density (2.5), $J_1$ and $J_2$ denote independent copies of $J$, and $\Gamma(\zeta; x)$ denotes the density of $\gamma(x, J)$.
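The leading term $tA_1(x;y)$ of the theorem stated next can be illustrated numerically in a toy finite-activity case (our own hypothetical example: $\gamma(x,\zeta) = \zeta$ and an exponential Lévy density $h(\zeta) = \lambda\eta e^{-\eta\zeta}$, $\zeta > 0$, so no small-jump truncation is needed). Conditioning on the number of jumps gives the exact tail, which can be compared with $tA_1$:

```python
import math

# Toy illustration of the leading term P(X_t >= y) = t*A_1(y) + O(t^2)
# for a pure compound Poisson process: gamma(x, zeta) = zeta and
# h(zeta) = lam * eta * exp(-eta * zeta), zeta > 0 (finite activity).
lam, eta, y = 2.0, 1.0, 1.5

def tail_exact(t, nmax=60):
    """P(X_t >= y): condition on N_t = n; a sum of n Exp(eta) jumps is Gamma(n, eta)."""
    total = 0.0
    for n in range(1, nmax):
        # P(Gamma(n, eta) >= y) = exp(-eta*y) * sum_{k<n} (eta*y)^k / k!
        gamma_tail = math.exp(-eta * y) * sum((eta * y)**k / math.factorial(k)
                                              for k in range(n))
        total += math.exp(-lam * t) * (lam * t)**n / math.factorial(n) * gamma_tail
    return total

A1 = lam * math.exp(-eta * y)   # A_1(y) = nu({zeta : zeta >= y})
for t in (0.1, 0.01, 0.001):
    print(t, tail_exact(t) / t, A1)  # the ratio tail/t approaches A_1 as t -> 0
```

The printed ratios converge to $A_1$ linearly in $t$, consistent with the $t^2 A_2$ correction of the expansion.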

Theorem 4.1. Let $x \in \mathbb{R}$ and $y > 0$. Then, under the Conditions (C1)-(C3) of Section 2, we have
\[
(4.3)\quad \bar F_t(x,y) := P(X_t(x) \ge x+y) = t\,A_1(x;y) + t^2 A_2(x;y) + O(t^3),
\]
as $t \to 0$, where $A_1(x;y)$ and $A_2(x;y)$ admit the following representations (for $\varepsilon > 0$ small enough):
\[
A_1(x;y) := \lambda_\varepsilon H_{0,0}(x;y) = \int_{\{\gamma(x,\zeta)\ge y\}} h(\zeta)\,d\zeta,
\]
\[
(4.4)\quad A_2(x;y) := \frac{\lambda_\varepsilon}{2}\left[H_{0,1}(x;y) + H_{1,0}(x;y)\right] + \lambda_\varepsilon^2\left[\tfrac12 H^{(2)}_{0,0}(x;y) - H_{0,0}(x;y)\right],
\]
with
\[
\lambda_\varepsilon = \int \varphi_\varepsilon(\zeta)\,h(\zeta)\,d\zeta, \qquad H_{0,0}(x;y) = P\left[\gamma(x,J)\ge y\right] = \int_y^\infty \Gamma(\zeta;x)\,d\zeta,
\]
\[
H_{0,1}(x;y) = b(x)\left(\int_y^\infty \frac{\partial\Gamma(\zeta;x)}{\partial x}\,d\zeta + \Gamma(y;x)\right) + \frac{\sigma^2(x)}{2}\left(\int_y^\infty \frac{\partial^2\Gamma(\zeta;x)}{\partial x^2}\,d\zeta + 2\,\frac{\partial\Gamma(y;x)}{\partial x} - \frac{\partial\Gamma(y;x)}{\partial y}\right) + \hat H_{0,1}(x;y),
\]
\[
\hat H_{0,1}(x;y) = \int \left(P\left[\gamma(x+\gamma(x,\zeta),J) \ge y - \gamma(x,\zeta)\right] - P\left[\gamma(x,J)\ge y\right]\right) h_\varepsilon(\zeta)\,d\zeta,
\]
\[
H_{1,0}(x;y) = b(x)\,\Gamma(y;x) - \frac{\sigma^2(x)}{2}\,\frac{\partial\Gamma(y;x)}{\partial y} + \hat H_{1,0}(x;y),
\]
\[
\hat H_{1,0}(x;y) = \int\left(P\left[\gamma(x,J)\ge y-\gamma(x,\zeta)\right] - P\left[\gamma(x,J)\ge y\right]\right) h_\varepsilon(\zeta)\,d\zeta,
\]
\[
H^{(2)}_{0,0}(x;y) = P\left(\gamma(x+\gamma(x,J_2),J_1) \ge y - \gamma(x,J_2)\right).
\]

Remark 4.2. Note that if $\mathrm{supp}(\nu) \cap \{\zeta : \gamma(x,\zeta) \ge y\} = \emptyset$ (so that it is not possible to reach the level $y$ from $x$ with only one jump), then $A_1(x;y) = 0$ and $P(X_t(x) \ge x+y) = O(t^2)$ as $t \to 0$. If, in addition, it is possible to reach the level $y$ from $x$ with two jumps, then $H^{(2)}_{0,0}(x;y) > 0$ and, hence, $A_2(x;y) > 0$, implying that $P(X_t(x) \ge x+y)$ decreases at the order of $t^2$. These observations are consistent with the results in [16] and [5].

Example 4.3. In the case that the jump coefficient $\gamma(x,\zeta) \equiv \gamma(\zeta)$ does not depend on $x$, we get the following expansion for $P(X_t(x) \ge x+y)$:
\[
\lambda_\varepsilon P(\gamma(J)\ge y)\,t + \lambda_\varepsilon^2\left(\tfrac12 P(\gamma(J_1)+\gamma(J_2)\ge y) - P(\gamma(J)\ge y)\right)t^2
\]
\[
+\ \lambda_\varepsilon\left(b(x)\,\Gamma(y) - \frac{\sigma^2(x)}{2}\,\Gamma'(y) + \int\left(P[\gamma(J)\ge y-\gamma(\zeta)] - P[\gamma(J)\ge y]\right)h_\varepsilon(\zeta)\,d\zeta\right)t^2 + O(t^3).
\]
The leading term in the above expression is dominated by the jump component of the process and has a natural interpretation: if within a very short time interval there is a large positive move (say, a move by more than $y$), this move must be due to a jump. The diffusion and drift components of the process appear only in the second-order term. Denoting
\[
g(y) := -\frac{d}{dy}\,\nu\left(\{\zeta : \gamma(\zeta)\ge y\}\right) = -\frac{d}{dy}\int_{\{\gamma(\zeta)\ge y\}} h(\zeta)\,d\zeta,
\]

the so-called Lévy density of the process $\{X_t\}_{t\ge 0}$, the effect of a positive spot drift $b(x) > 0$ is to increase the probability of a large positive move by $b(x)\,g(y)\,t^2(1+o(1))$. Similarly, since typically $g'(y) < 0$ when $y > 0$, the effect of a non-zero spot volatility $\sigma(x)$ is to increase the probability of a large positive move by $-\frac{\sigma^2(x)}{2}\,g'(y)\,t^2(1+o(1))$.

5. Expansion for the transition densities

Our goal here is to obtain a second-order small-time approximation for the transition densities $\{p_t(\cdot;x)\}_{t\ge 0}$ of $\{X_t(x)\}_{t\ge 0}$. As in the previous section, the idea is to work with the expansion (4.2), by first showing that each term there is differentiable with respect to $y$, and then determining its rate of convergence to $0$ as $t \to 0$. One of the main difficulties of this approach comes from controlling the term corresponding to no large jumps. As in the case of purely diffusive processes, Malliavin calculus proves to be the key tool for this task. This analysis is presented in the following subsection, before our main result is presented in Section 5.2.

5.1. Density estimates for SDEs with bounded jumps. In this part, we analyze the term corresponding to $N_t^\varepsilon = 0$:
\[
P\left(X_t(x) \ge x+y \mid N_t^\varepsilon = 0\right) = P\left(X_t(\varepsilon,\emptyset,x) \ge x+y\right).
\]
We will prove that, for any fixed positive integer $N$ and $\eta > 0$, there exist an $\varepsilon_0 > 0$ small enough and a constant $C < \infty$ such that the density $p_t(\cdot;\varepsilon,\emptyset,x)$ of $X_t(\varepsilon,\emptyset,x)$ satisfies
\[
(5.1)\quad \sup_{|y-x|>\eta,\ \varepsilon<\varepsilon_0} p_t(y;\varepsilon,\emptyset,x) \le C\,t^N, \quad\text{for all } 0 < t \le 1.
\]
The estimate (5.1) holds true for a larger class of SDEs with bounded jumps. Concretely, throughout this section we consider the more general equation
\[
(5.2)\quad X_t^x = x + \int_0^t b(X_s^x)\,ds + \int_0^t \sigma(X_s^x)\,dW_s + \int_0^t\!\!\int \gamma(X_{s-}^x,\zeta)\,\bar M(ds,d\zeta),
\]
where, as before, $M(ds,d\zeta)$ is a Poisson random measure on $\mathbb{R}_+\times\mathbb{R}\setminus\{0\}$ with mean measure $\mu(ds,d\zeta) = ds\,\nu(d\zeta)$, and $\bar M = M - \mu$ is its compensated measure.
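A minimal Euler-type simulation sketch of an SDE of the form (5.2) with bounded jumps may help fix ideas; all coefficients below are hypothetical toy choices (the jump measure has finite intensity with jump sizes uniform on $[-r,r]$, so $\nu$ is supported in $B(0,r)$ and the compensator correction vanishes because the mean jump is zero):

```python
import math, random

random.seed(7)

# Toy Euler scheme (hypothetical coefficients) for an SDE of the form (5.2).
# Jumps: intensity LAM, sizes uniform on [-r, r], so nu lives in B(0, r).
r, LAM = 0.5, 3.0
b = lambda x: -0.5 * x                               # toy drift
sigma = lambda x: 0.2 + 0.1 * math.cos(x)            # toy diffusion coefficient
gamma = lambda x, z: z * (1.0 + 0.1 * math.sin(x))   # state-dependent jump size

def euler_path(x0, t, nsteps):
    dt = t / nsteps
    x = x0
    for _ in range(nsteps):
        # The compensator correction -dt * LAM * E[gamma(x, Z)] vanishes here,
        # since the jump-size distribution is symmetric around 0.
        dw = random.gauss(0.0, math.sqrt(dt))
        x += b(x) * dt + sigma(x) * dw
        if random.random() < LAM * dt:               # at most one jump per step
            x += gamma(x, random.uniform(-r, r))
    return x

samples = [euler_path(0.0, 0.05, 200) for _ in range(500)]
print(sum(samples) / len(samples))  # small-time sample mean stays near x0 = 0
```

This is only a sketch under the stated toy assumptions; for an asymmetric $\nu$, the compensated integral in (5.2) would require subtracting the drift term $\mathrm{dt}\cdot\Lambda\,E[\gamma(x,Z)]$ at each step.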
As before, we assume that $\nu$ admits a smooth density $h$ with respect to the Lebesgue measure on $\mathbb{R}\setminus\{0\}$; moreover, we now assume that $h$ is supported in a ball $B(0,r)$ for some $r > 0$. Note that the solution $(X_t(\varepsilon,\emptyset,x))_{t\ge 0}$ of (2.6) fits perfectly within this framework. The following assumptions on the coefficient functions of (5.2) are used in the sequel. Note that, when $\{X_t^x\}$ is taken to be $\{X_t(\varepsilon,\emptyset,x)\}$, these assumptions follow from the assumptions in Section 2.

Assumption 5.1. The coefficients $b(x)$, $\sigma(x)$ and $\gamma(x,\zeta)$ are 4 times differentiable. There exist a constant $c > 0$ and a function $\kappa \in \bigcap_{p<\infty} L^p(\nu)$ such that:
(1) $\left|\partial_x^n b(x)\right|,\ \left|\partial_x^n \sigma(x)\right|,\ \frac{1}{\kappa(\zeta)}\left|\partial_x^n \gamma(x,\zeta)\right| \le c$, for $n \le 5$.
(2) $\left|\frac{\partial^{n+k} \gamma(x,\zeta)}{\partial x^n\,\partial\zeta^k}\right| \le c$, for $1 \le n+k \le 5$ and $k \ge 1$.

Malliavin calculus is the main tool to analyze the existence and smoothness of the density of $X_t^x$. For the sake of completeness, we present some necessary preliminaries of Malliavin calculus on Wiener-Poisson spaces in Appendix A, following the approach in [6]. As described therein, there are different ways to define a Malliavin operator for such spaces. For our purposes, it suffices to consider the Malliavin operator corresponding to $\rho = 0$ (see Section A.2 for the details). The intuitive explanation of $\rho = 0$ is that, when perturbing a sample path on the Wiener-Poisson space, we only perturb the Brownian path.

Let us start by noting that Assumption 5.1 ensures that $x \mapsto X_t^x$ is a $C^2$ diffeomorphism and that $X_t^x$ has a continuous density (see, e.g., [6]). Furthermore, recalling that $\mathcal H_\infty$ is the extended domain of the Malliavin operator, one has $X_t \in \mathcal H_\infty$ and
\[
(5.3)\quad \Gamma(X_t, X_t) := U_t = \partial_x X_t^x \left\{ \int_0^t (\partial_x X_s^x)^{-1}\,\sigma^2(X_s^x)\,(\partial_x X_s^x)^{-1}\,ds \right\} \partial_x X_t^x.
\]

Remark 5.2. Under the Condition (C3-b) of Section 2, one can show that $\partial_x X_t^x$ is almost surely invertible. Indeed, one can show that (cf. [6])
\[
d\left(\partial_x X_t^x\right) = \partial_x b(X_t^x)\,\partial_x X_t^x\,dt + \partial_x\sigma(X_t^x)\,\partial_x X_t^x\,dW_t + \int \partial_x\gamma(X_{t-}^x,\zeta)\,\partial_x X_{t-}^x\,\bar M(dt,d\zeta), \qquad \partial_x X_0^x = 1.
\]
Moreover, $\partial_x X_t^x$ admits an inverse $Y_t = (\partial_x X_t^x)^{-1}$ which satisfies a similar equation
\[
dY_t = Y_t D_t\,dt + Y_t E_t\,dW_t + \int Y_{t-} F_t\,\bar M(dt,d\zeta), \qquad Y_0 = 1.
\]
Here $D_t$, $E_t$ and $F_t$ are determined by $b(x)$, $\sigma(x)$, $\gamma(x,\zeta)$ and $X_t^x$. As a consequence, together with our assumptions on $b$, $\sigma$ and $\gamma$, one has
\[
E \sup_{t\le 1} \left(\partial_x X_t^x\right)^p < \infty, \qquad E \sup_{t\le 1} \left(\partial_x X_t^x\right)^{-p} < \infty,
\]
for all $p > 1$.

The main result of this section is Theorem 5.7. The first ingredient in proving it is the following integration by parts formula (which is also the main ingredient for the existence and smoothness of the density of $X_t^x$); it is a special case of Lemma 4-14 together with the discussion of Chapter IV in [6].

Proposition 5.3.
For any $f \in C_c^\infty(\mathbb{R})$, there exists a random variable $J_t \in L^p$ for all $p \in \mathbb{N}$, such that
\[
E\,\partial_x f(X_t^x) = E\left[J_t\,U_t^{-1} f(X_t^x)\right].
\]
The following existence and regularity result for the density of a finite measure is well known (see, e.g., Theorem 5.3 in [3]).

Lemma 5.4. Let $m$ be a finite measure supported in an open set $O \subset \mathbb{R}^n$. Take any $p > n$. Suppose that there exists $g = (g_1,\dots,g_n) \in L^p(m)$ such that
\[
\int_{\mathbb{R}^n} \partial_j f\,dm = \int_{\mathbb{R}^n} f\,g_j\,dm, \qquad f \in C_c^\infty(O),\ j = 1,\dots,n.
\]
Then $m$ has a bounded density function $q \in C_b(O)$ satisfying
\[
q \le C\,\|g\|_{L^p(m)}^n\,m(O)^{1-n/p}.
\]
Here the constant $C$ depends on $n$ and $p$.
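The integration-by-parts hypothesis of Lemma 5.4 can be illustrated in one dimension with a toy measure (our own hypothetical example, not the measure $m_t^\eta$ used later): for $m$ the standard Gaussian measure one has $\int f'\,dm = \int f(y)\,y\,m(dy)$, i.e. $g(y) = y$ plays the role of the $L^p(m)$ weight.

```python
import math

# Toy check of the hypothesis of Lemma 5.4 for m = standard Gaussian on R:
#   int f'(y) m(dy) = int f(y) * y * m(dy),   i.e. g(y) = y.
phi = lambda y: math.exp(-y * y / 2.0) / math.sqrt(2.0 * math.pi)

def f(y):   # smooth bump supported in (-0.7, 1.3), centered at 0.3
    u = y - 0.3
    return math.exp(-1.0 / (1.0 - u * u)) if abs(u) < 1 else 0.0

def fprime(y, eps=1e-6):   # central difference, accurate enough here
    return (f(y + eps) - f(y - eps)) / (2.0 * eps)

def quad(fun, a=-0.7, b=1.3, n=4000):   # simple midpoint rule
    h = (b - a) / n
    return sum(fun(a + (i + 0.5) * h) for i in range(n)) * h

lhs = quad(lambda y: fprime(y) * phi(y))
rhs = quad(lambda y: f(y) * y * phi(y))
print(lhs, rhs)  # the two integrals agree up to discretization error
```

In the proof of Theorem 5.7 below, the role of $g$ is played instead by the conditional expectation $E[J_t U_t^{-1}\mid X_t^x = y]$, whose $L^p$ norm is what Lemma 5.5 controls.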

The following lemma is where we use Assumption (C3-a); it is the second ingredient in proving Theorem 5.7, and its proof is deferred to Appendix D.

Lemma 5.5. Recall $U_t = \Gamma(X_t, X_t)$. Under the Condition (C3-a) of Section 2, we have
\[
E\,U_t^{-p} \le C\,t^{-p}.
\]

The final ingredient toward our main result of this section is the proposition below, which is just a restatement of Proposition 3.1.

Proposition 5.6. Let $X_t$ be the solution to equation (5.2). Recall that, since the Lévy density $h$ is compactly supported and $\gamma(x,\zeta)$ is uniformly bounded, the jumps of $X_t$ are bounded by some constant $S$. For each $p$, there exists a constant $C(S)$ depending on $S$ such that, for all $t \le 1$,
\[
P\left\{ \sup_{s\le t} |X_s^x - x| \ge pS \right\} \le C(S)\,t^p.
\]

Finally, we can state and prove our main result of this section.

Theorem 5.7. Assume the Condition (C3) of Section 2 is satisfied. Let $\{X_t^x\}_{t\ge 0}$ be the solution to equation (5.2) and denote by $p_t(y;x)$ the density of $X_t^x$. Fix any $\eta > 0$. Then for any $N > 0$, there exists $r(\eta,N) > 0$ such that, if $\nu$ is supported in $B(0,r)$ with $r \le r(\eta,N)$, we have, for all $t \le 1$,
\[
\sup_{y:\,|x-y|\ge\eta} p_t(y;x) \le C(\eta,N)\,t^N.
\]
Proof. For a fixed $t$, define a finite measure $m_t^\eta$ on $\mathbb{R}$ by
\[
m_t^\eta(A) = P\left(\left\{ X_t^x \in A \cap \bar B^c(x,\eta) \right\}\right), \qquad A \in \mathcal B(\mathbb{R}),
\]
where $\bar B^c(x,\eta)$ denotes the complement of the closure of $B(x,\eta)$. Thus, to prove our result it suffices to prove that $m_t^\eta$ admits a density with the desired bound. Next, for any smooth function $f$ compactly supported in $\bar B^c(x,\eta)$, we have
\[
\int_{\mathbb{R}} (\partial_x f)(y)\,m_t^\eta(dy) = E\,\partial_x f(X_t^x) = E\left[J_t U_t^{-1} f(X_t^x)\right] = \int_{\mathbb{R}} E\left[J_t U_t^{-1}\mid X_t^x = y\right] f(y)\,m_t^\eta(dy),
\]
by Proposition 5.3. The rest of the proof follows from Lemmas 5.4 and 5.5 and Proposition 5.6.

5.2. Expansion for the transition density. We are ready to state our main result of this section, namely the second-order expansion for the transition densities of the process $\{X_t(x)\}_{t\ge 0}$ in terms of the density $\Gamma(\cdot;x)$ of $\gamma(x,J)$. The proof is deferred to Appendix C.

Theorem 5.8. Let $x \in \mathbb{R}$ and $y > 0$.
Then, under the Conditions (C1)-(C3) of Section 2, we have
\[
(5.4)\quad p_t(x+y;x) := \frac{\partial}{\partial y}\,P\left(X_t(x) \le x+y\right) = t\,a_1(x;y) + t^2 a_2(x;y) + O(t^3),
\]

as $t \to 0$, where $a_1(x;y)$ and $a_2(x;y)$ admit the following representations (for $\varepsilon > 0$ small enough):
\[
a_1(x;y) := -\frac{\partial A_1(x;y)}{\partial y} = \lambda_\varepsilon\,\Gamma(y;x),
\]
\[
a_2(x;y) := -\frac{\partial A_2(x;y)}{\partial y} = \frac{\lambda_\varepsilon}{2}\left[h_{0,1}(x;y) + h_{1,0}(x;y)\right] - \lambda_\varepsilon^2\left[\Gamma(y;x) - \tfrac12\,h^{(2)}_{0,0}(x;y)\right],
\]
with
\[
h_{0,1}(x;y) = b(x)\left(\frac{\partial\Gamma(y;x)}{\partial x} - \frac{\partial\Gamma(y;x)}{\partial y}\right) + \frac{\sigma^2(x)}{2}\left(\frac{\partial^2\Gamma(y;x)}{\partial x^2} - 2\,\frac{\partial^2\Gamma(y;x)}{\partial x\,\partial y} + \frac{\partial^2\Gamma(y;x)}{\partial y^2}\right) + \hat h_{0,1}(x;y),
\]
\[
\hat h_{0,1}(x;y) = \int \left\{\Gamma\left(y-\gamma(x,\zeta);\,x+\gamma(x,\zeta)\right) - \Gamma(y;x)\right\} h_\varepsilon(\zeta)\,d\zeta,
\]
\[
(5.5)\quad h_{1,0}(x;y) = -b(x)\,\frac{\partial\Gamma(y;x)}{\partial y} + \frac{\sigma^2(x)}{2}\,\frac{\partial^2\Gamma(y;x)}{\partial y^2} + \hat h_{1,0}(x;y),
\]
\[
\hat h_{1,0}(x;y) = \int\left\{\Gamma\left(y-\gamma(x,\zeta);\,x\right) - \Gamma(y;x)\right\} h_\varepsilon(\zeta)\,d\zeta,
\]
\[
h^{(2)}_{0,0}(x;y) = E\left\{\Gamma\left(y-\gamma(x,J);\,x+\gamma(x,J)\right)\right\}.
\]

6. The first order term of the option price expansion

In this section, we use our previous results to derive the leading term of the small-time expansion for the prices of out-of-the-money (OTM) European call options with strike price $K$ and expiry $T$. This can be achieved via the asymptotics of either the tail distributions or the transition density. Given that the former requires less stringent conditions on the coefficients of the SDE, we choose to employ our result for the tail distribution. Throughout this section, let $\{S_t\}_{t\ge 0}$ be the stock price process and let $X_t = \log S_t$ for each $t$. We assume that $Q$ is the option pricing measure and that under this measure the process $\{X_t\}_{t\ge 0}$ is of the form (2.1). As usual, without loss of generality we assume that the risk-free interest rate $r$ is $0$. In particular, in order for $S_t = \exp X_t$ to be a $Q$-(local) martingale, we fix
\[
b(x) := -\frac12\,\sigma^2(x) - \int \left(e^{\gamma(x,z)} - 1\right) h(z)\,dz.
\]
We also assume that $\sigma$ and $\gamma$ are such that the Conditions (C1)-(C4) of Section 2 are satisfied. By the Markov property, it will suffice to compute a small-time expansion for
\[
v_t = E(S_t - K)^+ = E\left(e^{X_t} - K\right)^+.
\]
In particular, using the well-known formula
\[
E\,U\mathbf{1}_{\{U>K\}} = K\,P\{V > \log K\} + \int_K^\infty P\{V > \log s\}\,ds,
\]
where $U$ is a positive random variable and $V := \log U$, we can write
\[
E(e^{X_t} - K)^+ = \int_K^\infty P\{S_t > s\}\,ds = S_0\int_{K/S_0}^\infty P\{X_t - x > \log s\}\,ds,
\]

where $x = X_0 = \log S_0$. Recall that
\[
(6.1)\quad P(X_t - x \ge y) = e^{-\lambda_\varepsilon t}\sum_{n=0}^\infty P\left(X_t - x \ge y \mid N_t^\varepsilon = n\right)\frac{(\lambda_\varepsilon t)^n}{n!},
\]
where $\lambda_\varepsilon := \int \varphi_\varepsilon(\zeta)\,h(\zeta)\,d\zeta$ is the jump intensity of $\{N_t^\varepsilon\}_{t\ge 0}$. We proceed as in Section 4. First, note that
\[
(6.2)\quad v_t = S_0\int_{K/S_0}^\infty P\{X_t - x > \log s\}\,ds = S_0\,e^{-\lambda_\varepsilon t}\left(I_1 + I_2 + I_3\right),
\]
where
\[
I_1 = \int_{K/S_0}^\infty P\left\{X_t - x \ge \log s \mid N_t^\varepsilon = 0\right\}ds = \int_{K/S_0}^\infty P\left\{X_t(\varepsilon,\emptyset,x) - x \ge \log s\right\}ds,
\]
\[
I_2 = \lambda_\varepsilon t \int_{K/S_0}^\infty P\left\{X_t - x \ge \log s \mid N_t^\varepsilon = 1\right\}ds,
\]
\[
I_3 = \sum_{n=2}^\infty \frac{(\lambda_\varepsilon t)^n}{n!}\int_{K/S_0}^\infty P\left\{X_t - x \ge \log s \mid N_t^\varepsilon = n\right\}ds.
\]
It is clear that $I_1/t \to 0$ as $t \to 0$ by Lemma 3.2. We show that the same is true for $I_3$, which is the content of the following lemma; its proof is given in Appendix D.

Lemma 6.1. With the above notation, we have
\[
\sup_{n\ge 2,\,t\in[0,1]}\ \frac{1}{n!}\int_{K/S_0}^\infty P\left(X_t - x \ge \log s \mid N_t^\varepsilon = n\right)ds < \infty.
\]
As a consequence, $I_3/t \to 0$ as $t \to 0$.

Note that the above lemma actually implies that $E\,e^{X_t - x} < \infty$ for all $t \in [0,1)$. We are ready to state the main result of this section.

Theorem 6.2. Let $v_t = E(S_t - K)^+$ be the price of a European call option with strike $K$. Under the Conditions (C1)-(C4) of Section 2, we have
\[
\lim_{t\to 0}\frac{1}{t}\,v_t = \int \left(S_0\,e^{\gamma(x,\zeta)} - K\right)^+ h(\zeta)\,d\zeta.
\]
Proof. We use the notation introduced in (6.2). Following a similar argument as in the proof of Lemma 6.1, one can show that
\[
(6.3)\quad \sup_{t\in[0,1]}\int_{K/S_0}^\infty P\left\{X_t - x \ge \log s \mid N_t^\varepsilon = 1\right\}ds < \infty.
\]
Also, it is clear that $I_1/t$ converges to zero as $t$ approaches zero, by Lemma 3.2. Using the latter fact, (6.3), Lemma 6.1, (6.2), and the dominated convergence theorem, we have
\[
\lim_{t\to 0}\frac{v_t}{t} = \lim_{t\to 0}\frac{S_0\,I_2}{t} = \lambda_\varepsilon S_0 \lim_{t\to 0}\int_{K/S_0}^\infty P\left\{X_t - x \ge \log s \mid N_t^\varepsilon = 1\right\}ds = \lambda_\varepsilon S_0 \int_{K/S_0}^\infty \lim_{t\to 0} P\left\{X_t - x \ge \log s \mid N_t^\varepsilon = 1\right\}ds.
\]

Next, using Theorem 4.1, it follows that
\[
\lim_{t\to 0}\frac{v_t}{t} = S_0\int_{K/S_0}^\infty A_1(x,\log s)\,ds = S_0\int_{K/S_0}^\infty \int_{\{\gamma(x,\zeta)\ge \log s\}} h(\zeta)\,d\zeta\,ds = \int\left(S_0\,e^{\gamma(x,\zeta)} - K\right)^+ h(\zeta)\,d\zeta,
\]
where we have used Fubini's theorem for the last equality.

Remark 6.3. As a special case of our result, let $\gamma(x,\zeta) = \zeta$, so that the model reduces to an exponential Lévy model. The above first-order asymptotics becomes
\[
\lim_{t\to 0}\frac{1}{t}\,v_t = \int\left(S_0\,e^{\zeta} - K\right)^+ h(\zeta)\,d\zeta.
\]
This recovers the well-known first-order asymptotic behavior for exponential Lévy models (see, e.g., [8] and [31]).

Appendix A. Preliminaries of Malliavin calculus on Wiener-Poisson spaces

In this section, we follow [6] and briefly introduce the Malliavin calculus on Wiener-Poisson spaces to the extent of our needs. All the details can be found in [6]. We first introduce, in an abstract manner, the notion of a Malliavin operator (the infinitesimal generator of the Ornstein-Uhlenbeck semigroup). Fix a probability space $(\Omega,\mathcal F,P)$. Let $C_p^2(\mathbb{R}^n)$ be the space of all $C^2$ functions on $\mathbb{R}^n$ which, together with all their partial derivatives up to the second order, have at most polynomial growth.

Definition A.1. A Malliavin operator $(L,\mathcal R)$ is a linear operator $L$ on a domain $\mathcal R \subset \bigcap_{p<\infty} L^p$, taking values in $\bigcap_{p<\infty} L^p$, such that the following conditions hold true:
(1) If $\Phi \in \mathcal R^n$ and $F \in C_p^2(\mathbb{R}^n)$, then $F(\Phi) \in \mathcal R$.
(2) $\mathcal F$ is the $\sigma$-field generated by all functions in $\mathcal R$.
(3) $L$ is symmetric in $L^2$, i.e. $E(\Phi L\Psi) = E(\Psi L\Phi)$ for all $\Phi,\Psi \in \mathcal R$.
(4) Define $\Gamma(\Phi,\Psi) = L(\Phi\Psi) - \Phi L\Psi - \Psi L\Phi$, for all $\Phi,\Psi \in \mathcal R$. Then $\Gamma$ is nonnegative on $\mathcal R \times \mathcal R$.
(5) For $\Phi = (\Phi_1,\dots,\Phi_n) \in \mathcal R^n$ and $F \in C_p^2(\mathbb{R}^n)$, one has
\[
L(F\circ\Phi) = \sum_{i=1}^n \frac{\partial F}{\partial x_i}(\Phi)\,L\Phi_i + \frac12\sum_{i,j=1}^n \frac{\partial^2 F}{\partial x_i\,\partial x_j}(\Phi)\,\Gamma(\Phi_i,\Phi_j).
\]

A.1. Malliavin operator on Wiener space.
We consider the Wiener space of continuous paths
\[
\Omega_W = \left( C([0,1],\mathbb{R}),\ (\mathcal F_t^W)_{t\le 1},\ P_W \right),
\]
where: (1) $C([0,1],\mathbb{R})$ is the space of continuous functions $[0,1] \to \mathbb{R}$; (2) $(W_t)_{t\ge 0}$ is the coordinate process defined by $W_t(f) = f(t)$, $f \in C([0,1],\mathbb{R})$; (3) $P_W$ is the Wiener measure; (4) $(\mathcal F_t^W)_{t\le 1}$ is the ($P_W$-completed) natural filtration of $(W_t)_{t\le 1}$.

Now, $(\Omega_W,\mathcal F_W,P_W)$ is the canonical 1-dimensional Wiener space. Set
\[
\mathcal S_W = \left\{ \Phi = F(W_{t_1},\dots,W_{t_n}),\ F\in C_p^2(\mathbb{R}^n),\ t_i \le 1 \right\}.
\]
For $\Phi$ given as above, define
\[
L_W\Phi = -\frac12\sum_{i=1}^n \frac{\partial F}{\partial x_i}(W_{t_1},\dots,W_{t_n})\,W_{t_i} + \frac12\sum_{i,j=1}^n \frac{\partial^2 F}{\partial x_i\,\partial x_j}(W_{t_1},\dots,W_{t_n})\,(t_i\wedge t_j).
\]
It is well known that $(L_W,\mathcal S_W)$ is a Malliavin operator. Furthermore, for $\Phi = F(W_{t_1},\dots,W_{t_n})$ and $\Psi = H(W_{s_1},\dots,W_{s_m})$,
\[
\Gamma_W(\Phi,\Psi) = \sum_{i=1}^n\sum_{j=1}^m \frac{\partial F}{\partial x_i}(W_{t_1},\dots,W_{t_n})\,\frac{\partial H}{\partial x_j}(W_{s_1},\dots,W_{s_m})\,(t_i\wedge s_j).
\]

A.2. Malliavin operator on Poisson space. In this section, we consider the canonical probability space $(\Omega_P,\mathcal F_P,P_P)$ of a prefixed Poisson random measure $M(dt,d\zeta)$ on $[0,1]\times(\mathbb{R}\setminus\{0\})$. Let $\mu(dt,d\zeta) = dt\,\nu(d\zeta)$ be the intensity measure of $M(dt,d\zeta)$ and denote by $\bar M = M - \mu$ its compensated Poisson random measure. Throughout our discussion, we also assume that $\nu$ admits a smooth density $h$ with respect to the Lebesgue measure on $\mathbb{R}$, i.e. $\nu(d\zeta) = h(\zeta)\,d\zeta$. We denote by $C_c^{b,2}([0,1]\times\mathbb{R}\setminus\{0\})$ the set of all compactly supported Borel functions $f$ on $[0,1]\times\mathbb{R}\setminus\{0\}$ that are $C^2$ in $\zeta$ on $\mathbb{R}\setminus\{0\}$, with $f$, $\partial_\zeta f$, $\partial_\zeta^2 f$ uniformly bounded. For such $f$, we write
\[
M(f) = \int f(t,\zeta)\,M(dt,d\zeta).
\]
Let $\rho: \mathbb{R}\setminus\{0\} \to [0,\infty)$ be an auxiliary function of class $C_b^1$. Set
\[
\mathcal S_P = \left\{\Phi = F(M(f_1),\dots,M(f_k));\ F\in C_p^2(\mathbb{R}^k),\ f_i\in C_c^{b,2}([0,1]\times\mathbb{R}\setminus\{0\})\right\}.
\]
Given any $\Phi \in \mathcal S_P$ as above, define
\[
L_P\Phi = \frac12\sum_{i=1}^k \frac{\partial F}{\partial x_i}(M(f_1),\dots,M(f_k))\,M\!\left(\rho\,\partial_\zeta^2 f_i + \partial_\zeta\rho\,\partial_\zeta f_i + \rho\,\frac{\partial_\zeta h}{h}\,\partial_\zeta f_i\right) + \frac12\sum_{i,j=1}^k \frac{\partial^2 F}{\partial x_i\,\partial x_j}(M(f_1),\dots,M(f_k))\,M\!\left(\rho\,\partial_\zeta f_i\,\partial_\zeta f_j\right).
\]
In the above, $\partial_\zeta^2$ stands for the Laplacian on $\mathbb{R}\setminus\{0\}$.

Proposition A.2. $(L_P,\mathcal S_P)$ defines a Malliavin operator. Moreover, for $\Phi = F(M(f_1),\dots,M(f_k))$ and $\Psi = H(M(h_1),\dots,M(h_l))$, we have
\[
\Gamma_P(\Phi,\Psi) = \sum_{i=1}^k\sum_{j=1}^l \frac{\partial F}{\partial x_i}(M(f_1),\dots,M(f_k))\,\frac{\partial H}{\partial x_j}(M(h_1),\dots,M(h_l))\,M\!\left(\rho\,\partial_\zeta f_i\,\partial_\zeta h_j\right).
\]
Proof. See Proposition 9-3 in [6].

Remark A.3. As we can see from the above definition, the Malliavin operator for the Poisson space is not uniquely defined. Indeed, each different choice of $\rho$ gives a different operator.
More remarks on this point are given at the end of this section below.

A.3. Malliavin operator on Wiener-Poisson space. Keep the notation of the previous sections. We define a probability space $(\Omega,\mathcal F,P)$ by
\[
(\Omega,\mathcal F,P) = (\Omega_W,\mathcal F_W,P_W) \otimes (\Omega_P,\mathcal F_P,P_P).
\]
Clearly $(\Omega,\mathcal F,P)$ is the canonical Wiener-Poisson space. Then define $(L,\mathcal S)$ to be the direct product of $(L_W,\mathcal S_W)$ and $(L_P,\mathcal S_P)$. More precisely,
(1) $\mathcal S = \{\Phi = F(\Phi_1,\dots,\Phi_n;\Psi_1,\dots,\Psi_m);\ \Phi_i \in \mathcal S_W,\ \Psi_j \in \mathcal S_P,\ F \in C_p^2\}$.
(2) For $\Phi$ as given in (1), and $\omega = (\omega_W,\omega_P) \in \Omega$,
\[
L\Phi(\omega) = L_W\Phi(\cdot,\omega_P)(\omega_W) + L_P\Phi(\omega_W,\cdot)(\omega_P).
\]
It can be shown that $(L,\mathcal S)$ is a Malliavin operator on $(\Omega,\mathcal F,P)$. A few properties of $(L,\mathcal S)$ are listed below.

Proposition A.4. Let $\Phi \in \mathcal S_W$ and $\Psi \in \mathcal S_P$. Then:
(1) $L\Phi = L_W\Phi$, and $\Gamma(\Phi,\Phi) = \Gamma_W(\Phi,\Phi)$.
(2) $L\Psi = L_P\Psi$, and $\Gamma(\Psi,\Psi) = \Gamma_P(\Psi,\Psi)$.
(3) $\Gamma(\Phi,\Psi) = 0$.

Finally, we extend $L$ to a larger domain that is suitable to work with. For this purpose, set
\[
\|\Phi\|_{\mathcal H_p} = \|\Phi\|_{L^p} + \|L\Phi\|_{L^p} + \left\|\Gamma(\Phi,\Phi)^{1/2}\right\|_{L^p},
\]
and denote by $\mathcal H_p$ the closure of $\mathcal S$ under $\|\cdot\|_{\mathcal H_p}$. Also set $\mathcal H_\infty = \bigcap_{p<\infty} \mathcal H_p$.

Proposition A.5. The operator $(L,\mathcal H_\infty)$ is a Malliavin operator.
Proof. This is the content of Section 8-b in [6].

Remark A.6. As before, a different choice of the auxiliary function $\rho$ in the definition of $L_P$ results in a different Malliavin operator $L$ on the Wiener-Poisson space. Hence, the extended domain $\mathcal H_\infty$ may also be different.

Remark A.7. Consider the following stochastic differential equation with regular enough coefficients:
\[
X_t^x = x + \int_0^t b(X_s^x)\,ds + \int_0^t \sigma(X_s^x)\,dW_s + \int_0^t\!\!\int \gamma(X_{s-}^x,\zeta)\,\bar M(ds,d\zeta).
\]
To make use of the power of Malliavin calculus to analyze the above SDE, one needs to impose further conditions on $\rho$ so that $X_t^x$ is in the domain $\mathcal H_\infty$. Those further conditions are generally determined by the support of the intensity measure $\nu$ and are needed to ensure that a certain approximation procedure works out. For more details on the role that $\rho$ plays, see Section 9-1 and Theorem 1-3 in [6].
It is important to note that, among the various choices of $\rho$, one can always choose $\rho = 0$, since we impose the non-degeneracy Assumption (C3-a). The intuitive explanation of $\rho = 0$ is that, when perturbing a sample path on the Wiener-Poisson space, we only perturb the Brownian path.
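As a quick consistency check of these definitions (our own illustration, not taken from [6]), the bilinear form $\Gamma_W$ reproduces the usual Malliavin energy on the Wiener factor:

```latex
% For \Phi = F(W_{t_1},\dots,W_{t_n}), writing 1_{[0,t_i]} for the indicator of [0,t_i]:
\Gamma_W(\Phi,\Phi)
  = \sum_{i,j=1}^{n} \partial_i F\,\partial_j F\,(t_i\wedge t_j)
  = \int_0^1 \Big(\sum_{i=1}^{n} \partial_i F\,\mathbf{1}_{[0,t_i]}(s)\Big)^{2} ds
  = \|D\Phi\|_{L^2[0,1]}^{2},
% since \int_0^1 \mathbf{1}_{[0,t_i]}(s)\,\mathbf{1}_{[0,t_j]}(s)\,ds = t_i \wedge t_j.
% In particular, \Gamma(W_t, W_t) = t.
```

In particular, for the solution of (5.2), this is the identity behind the expression (5.3) for $U_t = \Gamma(X_t,X_t)$: only the Brownian part contributes when $\rho = 0$.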

Appendix B. Proof of the tail distribution expansion

The proof of Theorem 4.1 is decomposed into two parts, described in the following two subsections. For future reference, we keep track of the remainder terms when applying Dynkin's formula (3.6).

B.1. The leading term. In order to determine the leading term of (4.1), we analyze the second term in (4.2), corresponding to $n = 1$ (only one large jump). By conditioning on the time of the jump (necessarily uniformly distributed on $[0,t]$), we can write
\[
(B.1)\quad P\left(X_t(x) \ge x+y \mid N_t^\varepsilon = 1\right) = \frac1t\int_0^t P\left(X_t(\varepsilon,\{s\},x) \ge x+y\right)ds.
\]
Conditioning on $\mathcal F_s$ (similarly to the procedure in (D.3) below), we can write
\[
(B.2)\quad P\left(X_t(\varepsilon,\{s\},x) \ge x+y\right) = E\left(G_{t-s}(X_s(\varepsilon,\emptyset,x))\right),
\]
where
\[
(B.3)\quad G_t(z) := G_t(z;x,y) := P\left[X_t(\varepsilon,\emptyset,z) + \gamma(z,J) \ge x+y\right].
\]
Conditioning the last expression on $X_t(\varepsilon,\emptyset,z)$, we have
\[
(B.4)\quad G_t(z) = E\,H(X_t(\varepsilon,\emptyset,z)), \quad\text{with}\quad H(w) := H(w;x,y,z) := P\left[\gamma(z,J) \ge x+y-w\right].
\]
Note that $\partial_w H(w;x,y,z) = \Gamma(x+y-w;z)$ and
\[
(B.5)\quad \partial_w^k H(w;x,y,z) = (-1)^{k-1}\,\frac{\partial^{k-1}\Gamma(x+y-w;z)}{\partial\zeta^{k-1}}.
\]
Here and throughout the proof, $\Gamma(\zeta;u)$ is the density of $\gamma(u,J)$, whose existence is ensured by the Condition (C4) in the introduction, and $\partial_\zeta^k\Gamma = \partial^k\Gamma/\partial\zeta^k$ and $\partial_u^k\Gamma = \partial^k\Gamma/\partial u^k$ denote its partial derivatives. Next, in light of (C4-a), we can apply Dynkin's formula and get
\[
(B.6)\quad G_t(z) = E\,H(X_t(\varepsilon,\emptyset,z)) = H_0(z;x,y) + H_1(z;x,y)\,t + t^2 R_t^1(z;x,y),
\]
where
\[
H_0(z;x,y) := H(z;x,y,z) = P\left[\gamma(z,J)\ge x+y-z\right],
\]
\[
H_1(z;x,y) := L_\varepsilon H(z;x,y,z) = b(z)\,\Gamma(x+y-z;z) - \frac{\sigma^2(z)}{2}\,\partial_\zeta\Gamma(x+y-z;z) + \int\left(H(z+\gamma(z,\zeta);x,y,z) - H(z;x,y,z)\right)h_\varepsilon(\zeta)\,d\zeta
\]
\[
= b(z)\,\Gamma(x+y-z;z) - \frac{\sigma^2(z)}{2}\,\partial_\zeta\Gamma(x+y-z;z) + \hat H_{1,0}(z;\,x+y-z),
\]
\[
(B.7)\quad R_t^1(z;x,y) = \int_0^1 (1-\alpha)\,E\,L_\varepsilon^2 H(X_{\alpha t}(\varepsilon,\emptyset,z);x,y,z)\,d\alpha,
\]
where $\hat H_{1,0}$ is given as in (4.4). Next, using (3.10), (C4-a), and (B.5),
\[
(B.8)\quad \sup_{x,y,z}\left|R_t^1(z;x,y)\right| \le C < \infty,
\]

for a constant $C$ depending on $\int(1\wedge\zeta^2)\,h_\varepsilon(\zeta)\,d\zeta$, $\|b^{(k)}\|_\infty$, $\|v^{(k)}\|_\infty$, and $\sup_{u,\zeta}|\partial_\zeta^k\Gamma(\zeta;u)|$, for $k = 0,\dots,4$. Plugging (B.6) into (B.2), we get
\[
(B.9)\quad P\left(X_t(\varepsilon,\{s\},x)\ge x+y\right) = E\,H_0(X_s(\varepsilon,\emptyset,x);x,y) + (t-s)\,E\,H_1(X_s(\varepsilon,\emptyset,x);x,y) + (t-s)^2\,E\,R^1_{t-s}(X_s(\varepsilon,\emptyset,x);x,y).
\]
Finally, we apply Dynkin's formula to $E\,H_0(X_s(\varepsilon,\emptyset,x);x,y)$. To this end, we show that $H_0(z;x,y)$ is $C_b^4$ in a neighborhood of $x$. Note that
\[
\frac1\delta\left(H_0(z+\delta;x,y) - H_0(z;x,y)\right) = \int_{x+y-z-\delta}^\infty \int_0^1 \frac{\partial\Gamma(\zeta;z+\delta\beta)}{\partial u}\,d\beta\,d\zeta + \frac1\delta\int_{x+y-z-\delta}^{x+y-z}\Gamma(\zeta;z)\,d\zeta,
\]
where, as before, $\Gamma(\zeta;u)$ denotes the density of $\gamma(u,J)$. Due to (C4-b), the limit of the previous expression as $\delta \to 0$ exists and we get
\[
(B.10)\quad \frac{\partial H_0(z;x,y)}{\partial z} = \int_{x+y-z}^\infty \frac{\partial\Gamma(\zeta;z)}{\partial u}\,d\zeta + \Gamma(x+y-z;z),
\]
which is locally bounded for values of $z$ in a neighborhood of $x$ because of (C4). Similarly, we can show that
\[
(B.11)\quad \partial_z^2 H_0(z;x,y) = \int_{x+y-z}^\infty \partial_u^2\Gamma(\zeta;z)\,d\zeta + 2\,\partial_u\Gamma(x+y-z;z) - \partial_\zeta\Gamma(x+y-z;z),
\]
which is also locally bounded for values of $z$ in a neighborhood of $x$ because of (C4). We can similarly proceed to show that $\partial_z^k H_0(z;x,y)$ exists and is bounded for $k = 3, 4$. Using Lemma 3.5,
\[
(B.12)\quad E\,H_0(X_s(\varepsilon,\emptyset,x);x,y) = H_{0,0}(x;y) + s\,H_{0,1}(x;y) + s^2 R_s^2(x;y),
\]
where
\[
(B.13)\quad H_{0,0}(x;y) := H_0(x;x,y) = P\left[\gamma(x,J)\ge y\right],
\]
\[
H_{0,1}(x;y) := L_\varepsilon H_0(x;x,y) = b(x)\,\frac{\partial H_0(z;x,y)}{\partial z}\Big|_{z=x} + \frac{\sigma^2(x)}{2}\,\frac{\partial^2 H_0(z;x,y)}{\partial z^2}\Big|_{z=x} + \int\left(H_0(x+\gamma(x,\zeta);x,y) - H_0(x;x,y)\right)h_\varepsilon(\zeta)\,d\zeta
\]
\[
= b(x)\,\frac{\partial H_0(z;x,y)}{\partial z}\Big|_{z=x} + \frac{\sigma^2(x)}{2}\,\frac{\partial^2 H_0(z;x,y)}{\partial z^2}\Big|_{z=x} + \hat H_{0,1}(x;y),
\]
\[
R_s^2(x;y) := \int_0^1(1-\alpha)\,E\,L_\varepsilon^2 H_{0,\delta}(X_{\alpha s}(\varepsilon,\emptyset,x);x,y)\,d\alpha + E\,\bar H_{0,\delta}(X_s(\varepsilon,\emptyset,x);x,y),
\]
where $\hat H_{0,1}(x;y)$ is defined as in (4.4), $H_{0,\delta}(z) = H_0(z)\,\psi_\delta(z)$, and $\bar H_{0,\delta}(z) = H_0(z)\left(1-\psi_\delta(z)\right)$, with $\psi_\delta$ defined as in the proof of Lemma 3.5. Note that $R_s^2(x;y) = O(1)$ as $s \to 0$. Plugging (B.12) into (B.9), we get
\[
P\left(X_t(\varepsilon,\{s\},x)\ge x+y\right) = P\left[\gamma(x,J)\ge y\right] + O(t),
\]
which can then be plugged into (B.1) to get
\[
P\left(X_t(x)\ge x+y\mid N_t^\varepsilon = 1\right) = P\left[\gamma(x,J)\ge y\right] + O(t).
\]

Finally, (4.2) can be written as
\[
P\left(X_t(x)\ge x+y\right) = e^{-\lambda_\varepsilon t}\,t\,\lambda_\varepsilon\,P\left[\gamma(x,J)\ge y\right] + O(t^2) = t\int \mathbf{1}_{\{\gamma(x,\zeta)\ge y\}}\,h(\zeta)\,d\zeta + O(t^2),
\]
where the second equality above is valid for $\varepsilon > 0$ small enough.

B.2. Second order term. In addition to (B.12), we also need to consider the leading terms of the term $E\,H_1(X_s(\varepsilon,\emptyset,x);x,y)$ in (B.9) and of the term $P(X_t(x) \ge x+y \mid N_t^\varepsilon = 2)$ in (4.2). Let us first show that $z \mapsto H_1(z;x,y)$ is $C_b^2$. Let
\[
K(\zeta;x,y,z) := P\left[\gamma(z,J)\ge x+y-z-\gamma(z,\zeta)\right] - P\left[\gamma(z,J)\ge x+y-z\right],
\]
and recall that
\[
H_1(z;x,y) = b(z)\,\Gamma(x+y-z;z) - \frac{\sigma^2(z)}{2}\,\partial_\zeta\Gamma(x+y-z;z) + \int K(\zeta;x,y,z)\,h_\varepsilon(\zeta)\,d\zeta.
\]
Obviously, the first two terms on the right-hand side of the previous expression are $C_b^2$ in light of (C3) and (C4-a). Hence, for the derivative $\frac{\partial H_1(z;x,y)}{\partial z}$ to exist, it suffices to show that $\frac{\partial K(\zeta;x,y,z)}{\partial z}$ exists and that
\[
\sup_{z,x,y}\left|\frac{\partial K(\zeta;x,y,z)}{\partial z}\right| \le C\left(\zeta^2\wedge 1\right),
\]
for any $|\zeta| < \varepsilon$. Note that
\[
\frac{\partial K(\zeta;x,y,z)}{\partial z} = \underbrace{\Gamma(x+y-z-\gamma(z,\zeta);z) - \Gamma(x+y-z;z)}_{T_1} + \underbrace{\Gamma(x+y-z-\gamma(z,\zeta);z)\,\frac{\partial\gamma(z,\zeta)}{\partial z}}_{T_2} + \underbrace{\int_{x+y-z-\gamma(z,\zeta)}^{x+y-z}\frac{\partial\Gamma(\zeta';z)}{\partial u}\,d\zeta'}_{T_3}.
\]
The following bounds are evident from (C2) and (C4-a), for $\varepsilon > 0$ small enough:
\[
\sup_{x,y,z}|T_1| \le \sup_{u,\zeta'}\left|\frac{\partial\Gamma(\zeta';u)}{\partial\zeta}\right|\,\sup_z|\gamma(z,\zeta)| \le C(1\wedge|\zeta|),
\]
\[
\sup_{x,y,z}|T_2| \le \sup_{u,\zeta'}\Gamma(\zeta';u)\,\sup_z\left|\frac{\partial\gamma(z,\zeta)}{\partial z}\right| \le C(1\wedge|\zeta|),
\]
\[
\sup_{x,y,z}|T_3| \le \sup_{u,\zeta'}\left|\frac{\partial\Gamma(\zeta';u)}{\partial u}\right|\,\sup_z|\gamma(z,\zeta)| \le C(1\wedge|\zeta|).
\]
We can similarly prove that $\frac{\partial^2 H_1(z;x,y)}{\partial z^2}$ exists and is bounded. Hence, we can apply Dynkin's formula to get
\[
(B.14)\quad E\,H_1(X_s(\varepsilon,\emptyset,x);x,y) = H_{1,0}(x;y) + s\,R_s^3(x;y),
\]