Sieve-based confidence intervals and bands for Lévy densities


Sieve-based confidence intervals and bands for Lévy densities

José E. Figueroa-López
Purdue University, Department of Statistics, West Lafayette, IN

Abstract: A Lévy process combines a Brownian motion and a pure-jump homogeneous process, such as a compound Poisson process. The estimation of the Lévy density, the infinite-dimensional parameter controlling the jump dynamics of the process, is considered under a discrete-sampling scheme. In that case, the jumps are latent variables whose statistical properties can in principle be assessed when the frequency and time horizon of observations increase to infinity at suitable rates. Nonparametric estimators for the Lévy density based on Grenander's method of sieves were proposed in [4]. In this paper, central limit theorems for these sieve estimators, both pointwise and uniform on an interval away from the origin, are obtained, leading to pointwise confidence intervals and bands for the Lévy density. In the pointwise case, we find feasible estimators which converge to s at a rate that is arbitrarily close to the rate of the minimax risk of estimation for smooth Lévy densities. We determine how frequently one needs to sample to attain the desired rate. In the case of uniform bands and discrete regular sampling, our results are consistent with the case of density estimation, achieving a rate of order arbitrarily close to log^{1/2}(n) · n^{−1/3}, where n is the number of observations. The rate is valid provided that s is smooth enough, and that the time horizon T_n and the dimension of the sieve are appropriately chosen in terms of n.

Keywords and phrases: confidence bands, confidence intervals, Lévy processes, nonparametric estimation, model selection, sieve estimators.

1. Introduction

1.1.
Motivation and some background

In the past decade, Lévy processes have received a great deal of attention, fueled by numerous applications in the area of mathematical finance, to the extent that Lévy processes have become a fundamental building block in the modeling of asset prices with jumps (see [2] for an introduction to the field). The simplest of these models postulates that the price of a commodity (say, a stock) at time t is determined by

S_t := S_0 e^{X_t},    (1.1)

It is a pleasure to thank David Mason for pointing out the KMT inequality and other important remarks. My gratitude goes to the participants of the Workshop on Infinitely Divisible Processes (CIMAT A.C., March 2009) for helpful comments.

J.E. Figueroa-López / Confidence intervals and bands for Lévy densities

where X := {X_t}_{t≥0} is a Lévy process. Even this simple extension of the classical Black-Scholes model, in which X is simply a Brownian motion with drift, is able to account for several fundamental empirical features commonly observed in time series of asset returns, such as heavy tails, high kurtosis, and asymmetry. In recent years, other Lévy-based models have been proposed, such as exponential time-changed Lévy processes (cf. [8]–[10]) and stochastic differential equations driven by multivariate Lévy processes (cf. [1], [29]). Lévy processes, as models capturing some of the most important features of returns and as first-order approximations to other more accurate models, are fundamental in developing and testing successful statistical methodologies. However, even in such parsimonious models, there are several issues in performing statistical inference by standard likelihood-based methods.

A Lévy process is the discontinuous sibling of a Brownian motion. Concretely, X = {X_t}_{t≥0} is a Lévy process if X has independent and stationary increments, its paths are right-continuous with left limits, and it has no fixed jump times. The latter condition means that, for any t > 0, P[ΔX_t ≠ 0] = 0, where ΔX_t := X_t − lim_{s↑t} X_s is the magnitude of the jump of X at time t. It can be proved that the only Lévy process with continuous paths is essentially the Brownian motion W := {W_t}_{t≥0}, up to a drift term bt; hence, the well-known Gaussian distribution of the increments of W is a byproduct of the stationarity and independence of its increments. The only deterministic Lévy process is of the form X_t := bt, for a constant b. Another distinguished type of Lévy process is a compound Poisson process, defined as

Y_t := Σ_{i=1}^{N_t} ξ_i,    (1.2)

where N is a homogeneous Poisson process and the random variables ξ_i, i ≥ 1, are mutually independent from one another, independent from N, and with common distribution ρ.
The process N dictates the jump times, which occur homogeneously across time with an average intensity of λ jumps per unit time, while the sequence {ξ_i}_{i≥1} determines the sizes of the jumps. It turns out that the most general Lévy process is the superposition of a Brownian motion with drift, σW_t + bt, a compound Poisson process, and the limit process resulting from making the jump intensity of a compensated compound Poisson process, Y_t − E[Y_t], go to infinity while simultaneously allowing jumps of smaller sizes. The latter limiting process is governed by a measure ν such that the intensity of jumps is λ_ε := ν(ε ≤ |x| < 1), the common distribution of the jump sizes is ν_ε(dx) := 1_{{ε ≤ |x| < 1}} ν(dx)/λ_ε, and the limit is taken as ε ↓ 0. For such a limit to converge to a well-defined process, it must hold that

∫_{{|x| < 1}} x² ν(dx) < ∞.
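The compound Poisson building block above is straightforward to simulate. The following sketch draws increments of the process (1.2), with intensity λ = 2 and, as a purely illustrative choice, Exp(1)-distributed jump sizes; the function names and parameters are hypothetical, not from the paper.

```python
import math
import random

def compound_poisson_increment(lam, jump_sampler, dt, rng):
    """One increment Y_{t+dt} - Y_t of a compound Poisson process:
    a Poisson(lam*dt) number of i.i.d. jumps drawn from jump_sampler."""
    # inverse-transform sampling of the Poisson jump count
    n, p, u = 0, math.exp(-lam * dt), rng.random()
    cdf = p
    while u > cdf and p > 0.0:
        n += 1
        p *= lam * dt / n
        cdf += p
    return sum(jump_sampler(rng) for _ in range(n))

rng = random.Random(12345)
# jumps with common distribution rho = Exp(1) (illustrative)
incs = [compound_poisson_increment(2.0, lambda r: r.expovariate(1.0), 0.5, rng)
        for _ in range(4000)]
mean_inc = sum(incs) / len(incs)  # near lam * dt * E[xi] = 2 * 0.5 * 1 = 1
```

The empirical mean of the increments approximates λ·dt·E[ξ_1], which is one way to sanity-check the sampler.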

This characterization of Lévy processes is called the Lévy-Itô decomposition. It says that a Lévy process X admits the decomposition

X_t = bt + σB_t + lim_{ε↓0} ∫_0^t ∫_{ε≤|x|≤1} x ( µ(dx, ds) − µ̄(dx, ds) ) + ∫_0^t ∫_{|x|>1} x µ(dx, ds),    (1.3)

where B is a standard Brownian motion and µ is an independent Poisson measure on R_+ × R\{0} with mean measure µ̄(dx, dt) := ν(dx) dt (see e.g. Sato [28]). In that case, we say that X has Lévy triplet (σ², b, ν). In summary, Lévy processes are determined by three "parameters": a nonnegative real σ², a real b, and a measure ν on R\{0} such that ∫ (1 ∧ x²) ν(dx) < ∞. The measure ν controls the jump dynamics of the process X in that, for any A ∈ B(R) whose indicator χ_A vanishes in a neighborhood of the origin,

ν(A) = (1/t) E[ Σ_{s≤t} χ_A(ΔX_s) ],

for any t > 0 (see Section 19 of [28]). Thus, ν(A) gives the average number of jumps per unit time whose magnitudes fall in the set A. A common assumption in Lévy-based financial models is that ν is determined by a function s: R\{0} → [0, ∞), called the Lévy density, as follows:

ν(A) = ∫_A s(x) dx, for all A ∈ B(R\{0}).

Intuitively, the value of s at x_0 provides information on the frequency of jumps with sizes close to x_0. In the case of the compound Poisson process (1.2), the Lévy measure is ν(dx) = λρ(dx). By allowing a general Lévy process X in (1.1), instead of just a Brownian motion with drift as in the Black-Scholes model, one can incorporate two very appealing features: sudden changes in the price dynamics and some freedom in the distribution of the log return log(S_t/S_s) = X_t − X_s. The possible distributions belong to the class of infinitely divisible distributions, a very rich class which includes most known parametric families of distributions.

1.2. The statistical problem and the methodology

We are interested in estimating, in a nonparametric fashion, the Lévy density s over a window of estimation D := [a, b] ⊂ R\{0}, based on discrete observations of the process on a finite interval [0, T].
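The interpretation of ν(A) as the expected number of jumps per unit time with sizes in A can be checked by direct simulation in the compound Poisson case, where ν(dx) = λρ(dx). The sketch below uses illustrative assumptions: λ = 3, ρ = Uniform(0, 2), and A = [0.5, 1.5), so that ν(A) = λ·ρ(A) = 1.5.

```python
import math
import random

def poisson(mu, rng):
    # inverse-transform sampling of a Poisson(mu) count
    n, p, u = 0, math.exp(-mu), rng.random()
    cdf = p
    while u > cdf and p > 0.0:
        n += 1
        p *= mu / n
        cdf += p
    return n

rng = random.Random(7)
lam, t, n_paths = 3.0, 10.0, 2000
total = 0
for _ in range(n_paths):
    # jump sizes of one path up to time t, law rho = Uniform(0, 2)
    jumps = [rng.uniform(0.0, 2.0) for _ in range(poisson(lam * t, rng))]
    total += sum(1 for xi in jumps if 0.5 <= xi < 1.5)
# Monte Carlo version of nu(A) = (1/t) E[ #jumps with size in A by time t ]
nu_A_hat = total / (n_paths * t)  # should be near lam * rho(A) = 3 * 0.5 = 1.5
```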
We recall that s can in general blow up around the origin and, hence, we consider only domains D that are separated from the origin, in the sense that D ∩ (−ε, ε) = ∅ for some ε > 0. If the whole path of the process (and hence the jumps of the process) were available, the problem would be identical to the estimation of the intensity of a non-homogeneous Poisson process on a fixed time interval, say [0, 1], based on [T] independent copies of the process. However, under discrete sampling, the times

and magnitudes of jumps are latent, unobservable variables, whose statistical properties can in principle be accurately assessed when the frequency and time horizon of observations increase to infinity. One natural issue is to determine how frequently to sample during a given time horizon [0, T]. A sensible criterion is to sample at a high enough frequency that the estimation of s based on the resulting discrete sample performs, asymptotically, as well as the best possible estimation of s that can be accomplished under continuous sampling of X. This is precisely the approach we adopt in this paper.

Nonparametric estimators for the Lévy density were proposed in [6], under continuous sampling of the process, and in [4], under discrete sampling, using the method of sieves of Grenander [9]. The results there identify the rate of convergence of the minimax risk, off the origin, for smooth Lévy densities, and show that the proposed sieve-based estimators attain such a rate, provided that the sampling frequency is high enough relative to the time horizon. In this paper, we will obtain further asymptotic properties of the estimators developed in [4]. Central limit theorems, both pointwise and uniform on an interval away from the origin, are obtained, leading to pointwise confidence intervals and bands for the Lévy density. In the pointwise case, our results show that feasible estimators exist which converge to s at a rate that is arbitrarily close to the optimal minimax rate of [4]. The optimal asymptotic rate is achieved provided that the sampling frequency and the dimension of the sieves increase at suitable rates relative to the time horizon. In the case of uniform bands and discrete regular sampling, our results are consistent with the case of density estimation, achieving a rate of order arbitrarily close to log^{1/2}(n) · n^{−1/3}, where n is the number of observations.
The rate is valid provided that the time horizon T_n and the dimension of the sieves are appropriately chosen. The fundamental theoretical property of a Lévy process behind our results is the fact that (1/t) P[X_t ≥ y] converges to ν([y, ∞)) at a rate of O(t). This result is established in Section 3.

The method of sieves was originally proposed by Grenander [9] and applied more recently by Birgé, Massart, and others (see e.g. [2] and [6]) to several classical nonparametric problems such as density estimation and regression. This approach consists of the following general steps. First, choose a family of finite-dimensional linear models of functions, called sieves, with good approximation properties. Common sieves are splines, trigonometric polynomials, or wavelets. Second, specify a distance metric d between functions, relative to which the best approximation of s in a given linear model S will be characterized. That is, the best approximation s⊥ of s on S is defined by the equation

d(s, s⊥) = inf_{p∈S} d(s, p).

Finally, devise an estimator ŝ, called the projection estimator, for the best approximation s⊥ of s in S. The sieves considered here are of the general form

S := {β_1 ϕ_1 + ⋯ + β_d ϕ_d : β_1, …, β_d ∈ R},    (1.4)

where ϕ_1, …, ϕ_d are orthonormal functions with respect to the inner product

⟨p, q⟩_D := ∫_D p(x) q(x) dx.

In the sequel, ‖·‖ := ‖·‖_D stands for the associated norm ⟨·,·⟩_D^{1/2} on L²(D, dx). We recall that, relative to the distance induced by ‖·‖, the element of S closest to s, i.e. the orthogonal projection of s on S, is given by

s⊥(x) := Σ_{j=1}^d β(ϕ_j) ϕ_j(x),    (1.5)

where β(ϕ_j) := ⟨ϕ_j, s⟩_D = ∫_D ϕ_j(x) s(x) dx. Then, under this setting, the method of sieves boils down to estimating the functionals

β(ϕ) = ∫_D ϕ(x) s(x) dx,

for certain functions ϕ. In Section 2, we propose estimators for β(ϕ) and, as a byproduct, we propose projection estimators ŝ on S. Following [4], we specialize our approach further and take regular piecewise polynomials as sieves, though similar results will hold true if we take other typical classes of sieves such as smooth splines, trigonometric polynomials, or wavelets. For future reference, let us formally define the sieves.

Definition 1.1. S_{k,m} stands for the class of functions ϕ such that, for each i = 1, …, m, there exists a polynomial q_{i,k} of degree at most k such that ϕ(x) = q_{i,k}(x) for all x in (x_{i−1}, x_i], where x_i = a + (b − a) i/m.

It is proved in [4] that, by appropriately choosing the number of classes m and a high enough sampling frequency (both choices determined as a function of the time horizon T), the resulting projection estimator on S_{k,m} attains the best possible rate of convergence, as T → ∞, among all feasible estimators (even continuous-time based ones), provided that s belongs to a class Θ of smooth functions. These results will be revised in Section 2. For now, let us recall a few points in order to motivate the results in this paper. The referred optimal rate of convergence is O(T^{−2α/(2α+1)}), where α characterizes the smoothness of the Lévy density s on the interval [a, b], in that if s is r-times differentiable on (a, b) (r = 0, 1, …)
and

|s^{(r)}(x) − s^{(r)}(y)| ≤ L |x − y|^κ,    (1.6)

for all x, y ∈ (a, b) and some L < ∞ and κ ∈ (0, 1], then the smoothness parameter of s is α := r + κ. It is proved in [4] that there exists a δ_T > 0 such that if the time span between consecutive sampling observations is at most δ_T and m_T := [T^{1/(2α+1)}], then the resulting projection estimator, denoted by ŝ_T, is such that

lim sup_{T→∞} T^{2α/(2α+1)} sup_{s∈Θ} E ‖s − ŝ_T‖² < ∞.    (1.7)
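For the piecewise-constant sieve S_{0,m} of Definition 1.1, the orthogonal projection (1.5) reduces to averaging the target function over each cell of the partition. A minimal numerical sketch follows; the test density e^{−x} on D = [1, 2] is an illustrative choice, not one from the paper.

```python
import math

def project_pw_const(s, a, b, m, grid=2000):
    """Orthogonal L2 projection of s onto S_{0,m} on [a,b]: on each cell
    the projection equals the cell average of s (midpoint-rule quadrature)."""
    cells = []
    h = (b - a) / m
    for i in range(m):
        lo = a + i * h
        step = h / grid
        avg = sum(s(lo + (j + 0.5) * step) for j in range(grid)) * step / h
        cells.append(avg)
    return cells

s = lambda x: math.exp(-x)  # a smooth (illustrative) Lévy density on D = [1, 2]
proj = project_pw_const(s, 1.0, 2.0, 4)
# first cell average should equal (e^{-1} - e^{-1.25}) / 0.25 analytically
```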

Notice that the convergence in (1.7) is in the integrated mean-square sense. A natural question is whether or not one can devise projection estimators ŝ_T on S_{k,m} such that

T^{α/(2α+1)} ( ŝ_T(x) − s(x) ) →^D σ(x) Z,    (1.8)

holds for a standard normal random variable Z, for each fixed x ∈ D. We were unable to attain (1.8) due to the fact that the bias of the estimator ŝ_T, namely E ŝ_T(x) − s(x), is just O(T^{−α/(2α+1)}). However, for any β < α/(2α+1), we can devise a projection estimator ŝ_T^β such that

T^β ( ŝ_T^β(x) − s(x) ) →^D σ(x) Z.    (1.9)

The idea is to use undersmoothing to make the effect of the bias negligible. Our results are in perfect alignment with those obtained in other standard nonparametric problems, such as density estimation or functional regression, using local nonparametric methods such as kernel estimation (see for instance Hall [20]). We were unable to find a reference where undersmoothing is used in a global nonparametric method such as the method of sieves and, hence, this could be an additional contribution of the results presented here.

A natural extension of the pointwise central limit theorems is the development of global measures of deviation, or asymptotic confidence bands, for the Lévy density. We establish these in Section 5, following ideas of the seminal work of Bickel and Rosenblatt [4]. There are some important differences, though, starting from the fact that Bickel and Rosenblatt considered kernel estimators for probability densities, while here we consider a global nonparametric method. We are able to show the uniform confidence bands for piecewise-constant and piecewise-linear regular polynomials, though we believe the result holds true for a general degree.

The paper is structured as follows. In Section 2, we revise the projection estimators and optimality results obtained in [4] as a manner of introducing some of the terminology used in the sequel.
An important small-time ergodic property of Lévy processes, which plays a fundamental role in our results, is presented in Section 3. The pointwise central limit theorems for Lévy densities are derived in Section 4. The uniform case and the resulting confidence bands are developed in Section 5. In Section 6, we propose a data-driven selection method for the sieve. Instead of deciding the dimension of the sieve from a presumed degree of smoothness of s, we propose to choose the sieve that minimizes an unbiased estimator of the risk of the projection estimator corresponding to that sieve. Since the proposed estimator of the risk will require the knowledge of all jumps of X up to time T, we replace it by a natural discrete-based proxy, where the jumps ΔX_t are replaced by the increments X_{t_k} − X_{t_{k−1}}. Section 7 illustrates the performance of the projection estimators and confidence bands using a simulation experiment in the case of a variance gamma Lévy model.

2. An overview of the estimators and some optimality results

We assume that the Lévy process {X_t}_{t≥0} is being sampled over a time horizon [0, T] at discrete times 0 = t_0 < ⋯ < t_n = T. In the sequel, we use the notation π := {t_k}_{k=0}^n and ‖π‖ := max_k (t_k − t_{k−1}), where we will sometimes drop the subscript T. The following statistics are the main building blocks of our estimation:

β̂^π(ϕ) := (1/T) Σ_{k=1}^n ϕ( X_{t_k} − X_{t_{k−1}} ).    (2.1)

In the case of a quadratic function ϕ(x) = x², Σ_{k=1}^n ϕ(X_{t_k} − X_{t_{k−1}}) is the so-called realized quadratic variation (or variance) of the process. Thus, the statistic (2.1) can be interpreted as the realized ϕ-variation of the process per unit time, based on the observations X_{t_0}, …, X_{t_n}. To explain the motivation behind the estimators (2.1), let us assume for now that the sampling observations are equally spaced in time, so that Δ_n := t_i − t_{i−1} = T/n for all i, and hence

E{β̂^π(ϕ)} = Δ_n^{−1} E ϕ(X_{Δ_n}),

Var{β̂^π(ϕ)} = (1/T) { Δ_n^{−1} E ϕ²(X_{Δ_n}) − Δ_n ( Δ_n^{−1} E ϕ(X_{Δ_n}) )² }.

It turns out that if ϕ is ν-continuous, bounded, and vanishing in a neighborhood of the origin, then

lim_{Δ→0} (1/Δ) E ϕ(X_Δ) = ∫ ϕ(x) ν(dx) = ∫ ϕ(x) s(x) dx;    (2.2)

see e.g. [28, Corollary 8.9]. Actually, (2.2) is even valid for certain unbounded functions (see e.g. [5] and [27] for more details). It is now evident that

lim E{β̂^π(ϕ)} = ∫ ϕ(x) s(x) dx, and lim Var{β̂^π(ϕ)} = 0,    (2.3)

as both T and n/T tend to ∞. The previous arguments lead us to propose

ŝ^π(x) := Σ_{j=1}^d β̂^π(ϕ_j) ϕ_j(x),    (2.4)

as a natural estimator for the orthogonal projection s⊥ defined in (1.5). In view of (2.3), ŝ^π is a consistent estimator for s⊥, in the integrated mean-square sense, as both the time horizon T and the sampling frequency n/T increase to ∞. The estimators (2.1) were proposed independently by Woerner [30] and Figueroa-López [3]. The nonparametric estimator (2.4) was proposed for the first time in [3], where the problem of model selection was also considered.
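A minimal sketch of the statistics (2.1) and of the resulting projection estimator (2.4) in the piecewise-constant case k = 0, where ŝ^π(x) reduces to a histogram of the increments. The increment data below are hypothetical, for illustration only.

```python
def beta_hat(phi, increments, T):
    """Realized phi-variation per unit time, as in (2.1)."""
    return sum(phi(d) for d in increments) / T

def s_hat_pw_const(x, increments, T, a, b, m):
    """Projection estimator (2.4) on S_{0,m}: with basis phi_i = h^{-1/2} 1_{cell_i},
    s_hat(x) reduces to (count of increments in the cell of x) / (T * h),
    where h = (b - a)/m."""
    h = (b - a) / m
    i = min(int((x - a) / h), m - 1)
    lo, hi = a + i * h, a + (i + 1) * h
    return sum(1 for d in increments if lo <= d < hi) / (T * h)

incs = [0.1, 1.2, 1.4, 2.5]  # hypothetical increments over horizon T = 2
qv_rate = beta_hat(lambda y: y * y, incs, 2.0)  # realized quadratic variation / T
est = s_hat_pw_const(1.1, incs, 2.0, a=1.0, b=2.0, m=2)
```

For x = 1.1 the active cell is [1, 1.5), which contains the two increments 1.2 and 1.4, so the estimator equals 2/(T·h).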

It is worth pointing out that ŝ^π is independent of the specific orthonormal basis of S, as it can be proved that ŝ^π is the unique solution to the minimization problem min_{f∈S} γ^π_D(f), where γ^π_D: L²(D, dx) → R is given by

γ^π_D(f) := −(2/T) Σ_{k=1}^n f( X_{t_k} − X_{t_{k−1}} ) + ∫_D f²(x) dx.    (2.5)

In the literature of model selection (see e.g. [5] and [25]), γ^π_D is called the contrast function. Some of the nonparametric properties of the projection estimators (2.4) on the class of regular piecewise polynomials S_{k,m} described in Definition 1.1 were obtained in [4]. The main result there states that there exist m_T and δ_T > 0 such that the projection estimator on S_{k,m_T}, denoted by s̃_T, converges to s at the rate O(T^{−2α/(2α+1)}), as T → ∞, provided that the mesh satisfies ‖π‖ < δ_T and that s has degree of smoothness α. Concretely, let B^α_L[a, b] be the class of Besov functions, that is, those functions satisfying (1.6) with r ∈ N and κ ∈ (0, 1] being such that α = r + κ. Below, ‖s‖_{B^α_L} is the smallest possible L in (1.6). The next theorem was shown in [4].

Theorem 2.1. Fix α > 0 and k > α. Let m_T := [T^{1/(2α+1)}] and let Θ := Θ_α(R, L) be the class of Lévy densities s such that ‖s χ_D‖ < R, and such that the restriction of s to D := [a, b] is a member of B^α_L[a, b] with ‖s‖_{B^α_L} < L. Then, for each T > 0, there exists a constant δ_T > 0 such that

lim sup_{T→∞} T^{2α/(2α+1)} sup_{s∈Θ(R,L)} E[ ‖s − s̃_T‖²_D ] < ∞,    (2.6)

where the estimator s̃_T is given by (2.1) and (2.4) with S = S_{k,m_T} and with sampling points 0 = t_0 < ⋯ < t_n = T satisfying ‖π‖ < δ_T.

It was also proved in [4] that the rate O(T^{−2α/(2α+1)}), attained by projection estimators, is the best possible, in the sense that there is no estimator ŝ that can converge to s faster than T^{−2α/(2α+1)}, for all s ∈ Θ, even if this estimator were allowed to use the whole path of X during [0, T]. Concretely, we have the following minimax result.

Theorem 2.2. Let α > 0 and let l: R → R_+ be an even, strictly convex loss function such that l(0) = 0 and l(u) exp{−ε u²} → 0 as u → ∞, for any ε > 0.
If x_0 is an interior point of the interval [a, b], then

lim inf_{T→∞} inf_{ŝ} sup_{s∈Θ} E_s[ l( T^{α/(2α+1)} ( ŝ(x_0) − s(x_0) ) ) ] > 0,    (2.7)

where Θ := Θ_α(R, L), and the infimum is over all the estimators ŝ of s based on {X_t}_{0≤t≤T}.

As a consequence, the following result is proved.

Corollary 2.3. Under the notation and conditions of Theorem 2.2, the following two limits hold:

lim inf_{T→∞} inf_{ŝ} inf_{x∈(a,b)} sup_{s∈Θ} E_s[ l( T^{α/(2α+1)} ( ŝ(x) − s(x) ) ) ] > 0,    (2.8)

lim inf_{T→∞} T^{2α/(2α+1)} inf_{ŝ} sup_{s∈Θ} E_s[ ∫_a^b ( ŝ(x) − s(x) )² dx ] > 0.    (2.9)

We conclude that there is no reasonable estimator ŝ of s capable of outperforming the rate T^{−2α/(2α+1)} uniformly on Θ: there is always an s ∈ Θ for which

T^{2α/(2α+1)} E_s[ ‖ŝ − s‖² ] > B,

for some B > 0 and for T large enough. Therefore, the estimator described in Theorem 2.1 achieves the optimal rate of convergence on Θ(R, L) from a minimax point of view.

3. A useful small-time asymptotic result

The critical span δ_T, required for the validity of Theorem 2.1, is characterized by the property that

sup_{y∈D} | Δ^{−1} P[X_Δ ≥ y] − ν([y, ∞)) | < k T^{−1},    (3.1)

for all 0 < Δ < δ_T, where k is a constant independent of Δ and T. Of course, an explicit estimate of this critical mesh is necessary for practical reasons. In the compound Poisson case, when ν(R\{0}) < ∞, it could be postulated that sampling frequently enough should suffice since, in that case, the increments of the process will retrieve the jumps up to a negligible additive error due to the increments of the Wiener process. In the case of a process with infinite jump activity, any increment will be the result not only of the increment of the Wiener process, but also of the superposition of infinitely many small jumps. In [4], the problem of assessing δ_T was addressed using an inequality for the tails of a Lévy process with small jump sizes. It was shown that if ρ > 0 is such that aρ > 1 (recall that, by assumption, the estimation window is [a, b] with a > 0), then there exist t_0 > 0 and k > 0, independent of T, such that

sup_{y∈D} | t^{−1} P[X_t ≥ y] − ν([y, ∞)) | < k T^{−1},

for all T > 0 and t < T^{−ρ}. This yields a critical mesh of the form δ_T = T^{−ρ}. We now show that it is possible to take δ_T = T^{−1}.
In the following section, where the central limit theorems are given, we will also use the following result to determine how frequently to sample relative to the number of classes of

the sieve, without explicitly using the critical mesh anymore. The proof of the following result is provided in Appendix A. Related higher-order polynomial expansions for P(X_t ≥ y) are provided in [7].

Proposition 3.1. Suppose that the Lévy density s of X is Lipschitz in an open set D_0 containing D = [a, b] ⊂ R\{0}, and that s(x) is uniformly bounded on |x| > δ, for any δ > 0. Then, there exist k > 0 and t_0 > 0 such that, for all 0 < t < t_0,

sup_{y∈D} | t^{−1} P[X_t ≥ y] − ν([y, ∞)) | < k t.    (3.2)

4. CLTs for the projection estimators

In Section 2, we saw that there exist projection estimators s̃_T on regular piecewise polynomials S = S_{k,m} that converge to s, under the integrated mean-square distance, at a rate at least as good as T^{−2α/(2α+1)} (see Theorem 2.1). Such a rate was ensured by tuning the number of classes m in the sieve, as well as the sampling frequency, to both the degree of smoothness of s and the given time horizon T. It is natural to wonder if the estimators s̃_T in Theorem 2.1 satisfy a central limit theorem of the form

T^{α/(2α+1)} ( s̃_T(x) − s(x) ) →^D σ Z, as T → ∞,

for Z ~ N(0, 1) and a constant σ. We are unable to conclude this result, due to the fact that the bias of the estimator s̃_T, namely E s̃_T(x) − s(x), is just O(T^{−α/(2α+1)}). However, we now show that for any β < α/(2α+1), there exists a projection estimator ŝ_T^β such that

T^β ( ŝ_T^β(x) − s(x) ) →^D σ Z.

The following easy lemma will be useful in the sequel.

Lemma 4.1. Suppose that ϕ has support [c, d] ⊂ R_+\{0}, where ϕ is continuous with continuous derivative. Then,

| Δ^{−1} E ϕ(X_Δ) − β(ϕ) | ≤ ( |ϕ(c)| + ∫_c^d |ϕ'(u)| du ) M_Δ([c, d]),

where β(ϕ) := ∫ ϕ(x) s(x) dx and M_Δ([c, d]) := sup_{y∈[c,d]} | Δ^{−1} P[X_Δ ≥ y] − ν([y, ∞)) |.

Proof. The result is clear from the identities

Δ^{−1} E ϕ(X_Δ) = ϕ(c) Δ^{−1} P[X_Δ ≥ c] + ∫_c^d ϕ'(u) Δ^{−1} P[X_Δ ≥ u] du,

∫ ϕ(x) ν(dx) = ϕ(c) ν([c, ∞)) + ∫_c^d ϕ'(u) ν([u, ∞)) du,

which are standard consequences of Fubini's theorem.
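The O(t) rate in (3.2) can be seen in closed form in a toy case (not the general setting of Proposition 3.1): a compound Poisson process with intensity λ and unit jumps, and no Brownian component. For y ∈ (0, 1], P[X_t ≥ y] = P(N_t ≥ 1) = 1 − e^{−λt}, while ν([y, ∞)) = λ, so the approximation error λ − (1 − e^{−λt})/t lies in (0, λ²t/2].

```python
import math

# closed-form error of the small-time approximation t^{-1} P[X_t >= y] ≈ nu([y, inf))
# for a compound Poisson process with unit jumps (illustrative: lam = 2)
lam = 2.0
ts = [0.1, 0.05, 0.025, 0.0125]
errs = [lam - (1.0 - math.exp(-lam * t)) / t for t in ts]
# the error is O(t): halving t roughly halves the error
```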

We first consider the simplest case, where k = 0; hence, the estimators are simply piecewise-constant functions. Below, for each time horizon T, we consider a normalizing constant c_T, a set π_T of sampling times, and a number of classes m_T. Contrary to the results in [4], which have been described in Section 2, we do not employ the critical mesh δ_T, and we rather specify the rate at which the sampling mesh ‖π_T‖ must converge to 0 relative to the number of classes m_T of the sieve.

Theorem 4.2. Suppose that the Lévy density s of X satisfies the conditions of Proposition 3.1. Let x ∈ (a, b) be such that s(x) > 0, and let Z ~ N(0, 1) and σ²(x) := s(x)/(b − a). Then, the projection estimator ŝ_T on S_{0,m_T} is such that

c_T ( ŝ_T(x) − s(x) ) →^D σ(x) Z,

if the following conditions hold:

(i) c_T → ∞,  (ii) c_T² m_T / T → 1,  (iii) c_T m_T ‖π_T‖ → 0,  (iv) c_T / m_T → 0.

In particular, for any fixed 0 < β < 1/3, the resulting projection estimator ŝ_T with c_T = T^β and m_T = [T^{1−2β}] is such that

T^β ( ŝ_T(x) − s(x) ) →^D σ(x) Z,

provided that ‖π_T‖ = T^{−γ} with γ > 1 − β.

Proof. We apply a simple version of the central limit theorem for independent random variables (see e.g. the corollary following Theorem 7.1.2 in [11]). Writing π_T: 0 = t_0 < ⋯ < t_n = T and Δ_i := t_i − t_{i−1}, we have

S_T := c_T ( ŝ_T(x) − E ŝ_T(x) ) = (c_T/T) Σ_i ϕ(x) { ϕ( X_{t_i} − X_{t_{i−1}} ) − E ϕ(X_{Δ_i}) },

where ϕ is of the form (b' − a')^{−1/2} 1_{[a',b')}, with [a', b') ⊂ [a, b] satisfying x ∈ [a', b') and b' − a' = (b − a)/m_T. In that case,

σ_T² := Var S_T = ( c_T² m_T / (T² (b − a)) ) Σ_i Δ_i { Δ_i^{−1} E ϕ²(X_{Δ_i}) − Δ_i ( Δ_i^{−1} E ϕ(X_{Δ_i}) )² }.    (4.1)

Next, we show that

lim_{T→∞} Δ_i^{−1} E ϕ²(X_{Δ_i}) = s(x), uniformly in i.

Indeed, from Proposition 3.1, there exist t_0 > 0 and C > 0 such that if 0 < Δ_i < t_0, then

| Δ_i^{−1} E ϕ²(X_{Δ_i}) − s(x) | ≤ (b' − a')^{−1} | Δ_i^{−1} P{ X_{Δ_i} ∈ [a', b') } − ∫_{a'}^{b'} s(y) dy | + (b' − a')^{−1} ∫_{a'}^{b'} | s(y) − s(x) | dy ≤ C Δ_i m_T / (b − a) + C (b − a) / m_T,

which converges to 0 in light of conditions (i)–(iv). Also, note that

Δ_i ( Δ_i^{−1} E ϕ(X_{Δ_i}) )² = Δ_i (b' − a') ( Δ_i^{−1} E ϕ²(X_{Δ_i}) )² → 0, as T → ∞.

In view of condition (ii), we conclude that σ_T² → s(x)/(b − a). Also, the terms of S_T vanish uniformly, since

(c_T/T) sup_i | ϕ(x) ϕ( X_{t_i} − X_{t_{i−1}} ) | ≤ c_T m_T / ( T (b − a) ) → 0, as T → ∞.

We have verified all the hypotheses required for the CLT and, hence, S_T →^D σ(x) Z. It remains to show that

lim_{T→∞} c_T { E ŝ_T(x) − s(x) } = 0.

Indeed,

c_T | E ŝ_T(x) − s(x) | ≤ (1/T) Σ_i Δ_i A_i^T,

where

A_i^T := c_T | ( Δ_i (b' − a') )^{−1} P{ X_{Δ_i} ∈ [a', b') } − s(x) |.

As before, we can show that there exist t_0 > 0 and C > 0 such that if 0 < Δ_i < t_0, then

sup_i A_i^T ≤ C c_T m_T ‖π_T‖ / (b − a) + C c_T (b − a) / m_T,

which converges to 0 by conditions (iii)–(iv). The second part of the result follows directly.

There are some issues that need to be taken care of for Theorem 4.2 to hold for a general regular sieve S_{k,m}. Remember that S_{k,m} consists of piecewise polynomials of degree at most k on each of the intervals of the partition a =: x_0 < ⋯ < x_m := b, where x_i = a + (b − a) i/m. An orthonormal basis for S_{k,m} is given by the functions

ϕ̂_{i,j}(x) := ( (2j + 1) / (x_i − x_{i−1}) )^{1/2} Q_j( (2x − x_{i−1} − x_i) / (x_i − x_{i−1}) ) 1_{[x_{i−1}, x_i)}(x),    (4.2)

for i = 1, …, m and j = 0, …, k, where Q_j is the Legendre polynomial of order j on L²([−1, 1], dx). For future reference, let us recall that

|Q_j(x)| ≤ 1, and |Q_j'(x)| ≤ Q_j'(1) = j(j + 1)/2.    (4.3)

The fact that Q_j is not constant for j > 0 will pose some issues, since in that case the relative position of x inside its corresponding interval might change greatly with m, a fact which in turn prevents the convergence of the variance (4.1) towards s(x)/(b − a). We have, however, the following result.

Lemma 4.3. Suppose that the Lévy density s of X satisfies the conditions of Proposition 3.1, and let x ∈ (a, b) be such that s(x) > 0. Define

b²_{k,m}(x) := Σ_{i=1}^m Σ_{j=0}^k (2j + 1) Q_j²( (2x − x_{i−1} − x_i) / (x_i − x_{i−1}) ) 1_{[x_{i−1}, x_i)}(x).

Then,

( c_T / b_{k,m}(x) ) ( ŝ_T(x) − E ŝ_T(x) ) →^D σ Z,

with Z ~ N(0, 1) and σ² := s(x)/(b − a), if the following conditions hold true:

(i) c_T → ∞,  (ii) c_T² m_T / T → 1,  (iii) m_T ‖π_T‖ → 0.

Proof. We use the notation of the proof of Theorem 4.2. For simplicity, we will just write b_m instead of b_{k,m}(x) when k and x are fixed. Notice that, in the present case,

S_T := (c_T / b_m) ( ŝ_T(x) − E ŝ_T(x) ) = ( c_T / (b_m T) ) Σ_i Σ_{j=0}^k ϕ_{j,T}(x) { ϕ_{j,T}( X_{t_i} − X_{t_{i−1}} ) − E ϕ_{j,T}(X_{Δ_i}) },

where ϕ_{j,T} is of the form

( (2j + 1) / (b' − a') )^{1/2} Q_j( (2 · − a' − b') / (b' − a') ) 1_{[a',b')},

with a', b' satisfying x ∈ [a', b') and b' − a' = (b − a)/m_T. In that case, σ_T² := Var S_T is given by

σ_T² := ( c_T² / (T² b_m²) ) Σ_i Σ_{j_1,j_2=0}^k ϕ_{j_1,T}(x) ϕ_{j_2,T}(x) Cov( ϕ_{j_1,T}(X_{Δ_i}), ϕ_{j_2,T}(X_{Δ_i}) ).    (4.4)

Let us analyze the above covariances, scaled by Δ_i^{−1}. First, applying Lemma 4.1, (4.3), and Proposition 3.1, there exist t_0 > 0 and K > 0 such that, whenever Δ < t_0,

| Δ^{−1} E[ ϕ_{j_1,T}(X_Δ) ϕ_{j_2,T}(X_Δ) ] − ∫ ϕ_{j_1,T}(y) ϕ_{j_2,T}(y) s(y) dy | ≤ K Δ m_T / (b − a).

Similarly, using also that ∫ |ϕ_{j,T}(y)| s(y) dy is bounded, there exist t_0 > 0 and K > 0 such that, whenever Δ < t_0,

Δ^{−1} | E ϕ_{j_1,T}(X_Δ) | | E ϕ_{j_2,T}(X_Δ) | ≤ K Δ.

Thus, we deduce that if ‖π_T‖ → 0 and m_T ‖π_T‖ → 0, then

Δ_i^{−1} Cov( ϕ_{j_1,T}(X_{Δ_i}), ϕ_{j_2,T}(X_{Δ_i}) ) = o_T(1) + ∫ ϕ_{j_1,T}(y) ϕ_{j_2,T}(y) s(y) dy,

for all j_1, j_2, and i, where o_T(1) → 0 uniformly in i, as T → ∞. Thus, in view of the fact that b_m ≥ 1, (4.3), and condition (ii), σ_T² − σ̂_T² → 0, where

σ̂_T² := ( c_T² / (T b_m²) ) Σ_{j_1,j_2=0}^k ϕ_{j_1,T}(x) ϕ_{j_2,T}(x) ∫ ϕ_{j_1,T}(y) ϕ_{j_2,T}(y) s(y) dy.

Next, the continuity of s at x, condition (ii), and the fact that the support of ϕ_{j,T} contains x and shrinks to 0, yield that

lim_{T→∞} ( c_T² / (T b_m²) ) Σ_{j_1,j_2=0}^k ϕ_{j_1,T}(x) ϕ_{j_2,T}(x) ∫ ϕ_{j_1,T}(y) ϕ_{j_2,T}(y) ( s(y) − s(x) ) dy = 0.

This implies that

lim_{T→∞} σ̂_T² = lim_{T→∞} σ_T² = s(x) / (b − a),

in view of condition (ii) and the definition of b_{k,m}. Finally, we then consider the standardized sum Z_T := σ_T^{−1} S_T. By the corollary following Theorem 7.1.2 in [11], Z_T will converge to N(0, 1) because

( c_T / (T σ_T b_m) ) sup_i | Σ_{j=0}^k ϕ_{j,T}(x) ϕ_{j,T}( X_{t_i} − X_{t_{i−1}} ) | ≤ K c_T m_T / ( T σ_T b_m (b − a) ) → 0,

as T → ∞, in view of conditions (i)–(ii) and the fact that b_m ≥ 1. This implies the result, since σ_T² → s(x)/(b − a).
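The basis (4.2) and the bounds (4.3) can be verified numerically. The sketch below hard-codes the first three Legendre polynomials Q_0, Q_1, Q_2 and checks orthonormality of the rescaled basis on one illustrative cell [1, 1.5) by a midpoint rule; cell endpoints and grid size are arbitrary choices.

```python
import math

# Legendre polynomials Q_0, Q_1, Q_2 on [-1, 1]
Q = [lambda u: 1.0, lambda u: u, lambda u: 1.5 * u * u - 0.5]

def phi(j, x, lo, hi):
    """Orthonormal basis element (4.2) on the cell [lo, hi)."""
    u = (2.0 * x - lo - hi) / (hi - lo)
    return math.sqrt((2 * j + 1) / (hi - lo)) * Q[j](u)

def inner(j1, j2, lo, hi, n=20000):
    # midpoint-rule approximation of the L2([lo,hi]) inner product
    h = (hi - lo) / n
    return sum(phi(j1, lo + (i + 0.5) * h, lo, hi) *
               phi(j2, lo + (i + 0.5) * h, lo, hi) for i in range(n)) * h

# Gram matrix should be (numerically) the 3x3 identity
gram = [[inner(j1, j2, 1.0, 1.5) for j2 in range(3)] for j1 in range(3)]
```

Note also that Q_2'(u) = 3u, so |Q_2'| attains its maximum j(j+1)/2 = 3 at u = 1, as recalled in (4.3).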

The last step towards a general CLT is to analyze the rate of convergence of the bias term.

Theorem 4.4. Suppose that the Lévy density s of X satisfies the conditions of Proposition 3.1. For r ∈ N and κ ∈ (0, 1], let α = r + κ and assume that the restriction of s to D := [a, b] is a member of B^α_L[a, b]. Suppose that conditions (i)–(ii) in Lemma 4.3 are satisfied, together with the following three conditions:

(iii) c_T m_T ‖π_T‖ → 0,  (iv) c_T m_T^{−α} → 0,  (v) k ≥ α.

Then, for any fixed x ∈ (a, b) for which s(x) > 0,

lim_{T→∞} (c_T / b_m) ( E ŝ_T(x) − s(x) ) = 0,

where ŝ_T is the projection estimator on S_{k,m_T}. Moreover, under conditions (i)–(v),

(c_T / b_m) ( ŝ_T(x) − s(x) ) →^D σ(x) Z,

with Z ~ N(0, 1) and σ²(x) := s(x)/(b − a). Also, for any fixed 0 < β < α/(2α+1), the resulting projection estimator ŝ_T with c_T = T^β and m_T = [T^{1−2β}] is such that

(T^β / b_m) ( ŝ_T(x) − s(x) ) →^D σ(x) Z,

provided that ‖π_T‖ = T^{−γ} with γ > 1 − β.

Proof. We use the same notation as in the proof of Lemma 4.3. Since the case α = 1 was considered in Theorem 4.2, we assume that α > 1. Obviously, we have

(c_T / b_m) | E ŝ_T(x) − s(x) | ≤ (1/T) Σ_i Δ_i A_i^T,

where

A_i^T := (c_T / b_m) | Σ_{j=0}^k ϕ_{j,T}(x) Δ_i^{−1} E ϕ_{j,T}(X_{Δ_i}) − s(x) |.

Then, it suffices to show that max_i A_i^T → 0, as T → ∞. Notice that

A_i^T ≤ (c_T / b_m) Σ_{j=0}^k | ϕ_{j,T}(x) | | Δ_i^{−1} E ϕ_{j,T}(X_{Δ_i}) − ∫ ϕ_{j,T}(y) s(y) dy | + (c_T / b_m) | Σ_{j=0}^k ϕ_{j,T}(x) ∫ ϕ_{j,T}(y) ( s(y) − s(x) ) dy |,

where we have used that ∫ ϕ_{j,T}(y) dy = 0 for j ≥ 1. Let us denote by A_{i,1}^T and A_{i,2}^T the first and second terms on the right-hand side of the above inequality, respectively. Using (4.3), Lemma 4.1, and Proposition 3.1, there exist K > 0 and T_0 > 0 such that, for T > T_0,

A_{i,1}^T ≤ K (c_T / b_m) Δ_i m_T / (b − a) ≤ K c_T m_T ‖π_T‖ / (b − a) → 0,

as T → ∞, due to conditions (i)–(iii). The second term is trickier. First, we remark that

Σ_{j=0}^k ϕ_{j,T}(x) ∫ ϕ_{j,T}(y) (y − x)^q dy = 0,

for q = 1, …, k. This is because the left-hand side is p⊥(x), where p⊥ is defined as the orthogonal projection of the function p(y) := (y − x)^q onto S_{k,m}. Clearly, p⊥(x) = p(x) = 0. Also, when α > 1, Taylor's theorem implies that

s(y) − s(x) = Σ_{q=1}^r ( s^{(q)}(x) / q! ) (y − x)^q + ∫_x^y ( (y − v)^{r−1} / (r − 1)! ) ( s^{(r)}(v) − s^{(r)}(x) ) dv,

where r := ⌈α⌉ − 1 is the largest integer that is strictly smaller than α. Since k ≥ α, we have that k ≥ r and

Σ_{j=0}^k ϕ_{j,T}(x) ∫ ϕ_{j,T}(y) ( s(y) − s(x) ) dy = Σ_{j=0}^k ϕ_{j,T}(x) ∫ ϕ_{j,T}(y) ∫_x^y ( (y − v)^{r−1} / (r − 1)! ) ( s^{(r)}(v) − s^{(r)}(x) ) dv dy.

Thus, applying the Cauchy-Schwarz inequality twice (once for the summation and once for the integral),

A_{i,2}^T ≤ (c_T / b_m) ( Σ_{j=0}^k ϕ_{j,T}²(x) )^{1/2} ( Σ_{j=0}^k ∫_{a'}^{b'} ϕ_{j,T}²(y) dy ∫_{a'}^{b'} ( ∫_x^y ( (y − v)^{r−1} / (r − 1)! ) | s^{(r)}(v) − s^{(r)}(x) | dv )² dy )^{1/2}.

Finally, using the Hölder condition (1.6) for s^{(r)},

A_{i,2}^T ≤ K c_T m_T^{−α} → 0.

Remark 4.5. The previous theorem will allow us to construct approximate confidence intervals for s(x). Concretely, the 100(1 − α)% interval for s(x) is approximately given by

ŝ_T(x) ± b_m ŝ_T^{1/2}(x) z_{α/2} / ( c_T (b − a)^{1/2} ),

where z_{α/2} is the upper α/2 quantile of the standard normal distribution.
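A small helper implementing the interval of Remark 4.5 at the 95% level. The choice c_T = (T/m_T)^{1/2}, which makes c_T² m_T/T = 1 exactly, and all numerical inputs below are illustrative assumptions, not values from the paper.

```python
import math

def confidence_interval(s_hat_x, b_m, T, m, a, b, z=1.96):
    """Approximate pointwise interval of Remark 4.5:
    s_hat(x) +/- b_m * sqrt(s_hat(x)) * z / (c_T * sqrt(b - a)),
    with c_T = sqrt(T/m) (an assumed normalization so that c_T^2 m / T = 1)."""
    c_T = math.sqrt(T / m)
    half = b_m * math.sqrt(s_hat_x) * z / (c_T * math.sqrt(b - a))
    return s_hat_x - half, s_hat_x + half

# hypothetical inputs: estimate 2.0 at x, b_m = 1 (k = 0), horizon T = 400, m = 4 cells
lo, hi = confidence_interval(s_hat_x=2.0, b_m=1.0, T=400.0, m=4, a=1.0, b=2.0)
```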

5. Confidence bands for the Lévy densities

In this part we address the problem of constructing confidence bands for the Lévy density based on the estimators (2.4), using the sieves of Definition 1.1. We follow ideas from the seminal paper of Bickel and Rosenblatt [4], which constructs confidence bands for probability densities based on kernel estimators. There are some important differences, though, since here we consider a global nonparametric method.

For simplicity, let us assume that the sampling times $\pi_n : t_0 = 0 < \dots < t_n = T_n$ are evenly spaced in time, so that $t_i := i\delta_n$ with $\delta_n := T_n/n$. Throughout this part we assume that the distribution of $X_t$, $F_t(x) := P(X_t \le x)$, is continuous for t > 0 small enough. It turns out this is a time-independent property; furthermore, a necessary and sufficient condition for $F_t$ to be continuous for any t is that σ ≠ 0 or ν(R) = ∞ (see [28, Theorem 27.4]).

Let $Z_n^0$ be the uniform standardized empirical process

$$Z_n^0(x) := n^{1/2}\,\{\bar F_n(x) - x\},$$

where $\bar F_n$ is the empirical distribution function of $\{F_{\delta_n}(X_{t_i} - X_{t_{i-1}})\}_{i\le n}$, which is necessarily a random sample of uniform random variables since $F_{\delta_n}$ is continuous. In that case, we remark that, a.s.,

$$Z_n^0\bigl(F_{\delta_n}(x)\bigr) = n^{1/2}\,\{F_n(x) - F_{\delta_n}(x)\}, \qquad x \in \mathbb R,$$

where $F_n$ is the empirical distribution function of $\{X_{t_i} - X_{t_{i-1}} : i = 1,\dots,n\}$.

The following transformation will be useful below (see (4.2) for the notation):

$$L(x; m, \kappa, H) = \kappa\sum_{i=1}^{m}\sum_{j=0}^{k}\hat\varphi_{i,j}(x)\Bigl\{\hat\varphi_{i,j}(x_i)\bigl(H(x_i) - H(x_{i-1})\bigr) - \int_{x_{i-1}}^{x_i}\hat\varphi'_{i,j}(u)\bigl(H(u) - H(x_{i-1})\bigr)\,du\Bigr\},$$

where H : R → R is a locally integrable function. Notice that if H is a function of bounded variation, then

$$L(x; m, \kappa, H) = \kappa\sum_{i=1}^{m}\sum_{j=0}^{k}\hat\varphi_{i,j}(x)\int_{x_{i-1}}^{x_i}\hat\varphi_{i,j}(u)\,dH(u).$$

The following estimate follows easily from (4.3):

$$\sup_{x\in[a,b]}\bigl|L(x; m, \kappa, H)\bigr| \le K\,\kappa\,m\;\omega\Bigl(H; [a,b], \frac{b-a}{m}\Bigr), \tag{5.1}$$

where K is a constant depending only on k and ω is the modulus of continuity of H, defined by

$$\omega(H; [a,b], \delta) = \sup\bigl\{|H(u) - H(v)| : u, v \in [a,b],\ |u - v| < \delta\bigr\}.$$

We first notice that the estimator (2.4) corresponding to the sieve $S_{k,m}$ of Definition 1.1 can be written as

$$\hat s_n(x) := \sum_{i=1}^{m}\sum_{j=0}^{k}\hat\beta_{\pi_n}(\hat\varphi_{i,j})\,\hat\varphi_{i,j}(x) = L\Bigl(x; m, \frac{n}{T_n}, F_n\Bigr), \tag{5.2}$$

where $\hat\varphi_{i,j}$ is the basis element in (4.2). Notice that $E\,\hat s_n(x)$ admits a similar expression, with $F_n$ replaced by $F_{\delta_n}$. In terms of $Z_n^0$, it follows that, a.s.,

$${}^{0}Y_n(x) := \hat s_n(x) - E\,\hat s_n(x) = L\Bigl(x; m, \frac{n^{1/2}}{T_n}, Z_n^0\circ F_{\delta_n}\Bigr), \tag{5.3}$$

for all x. The key idea in [4] was to approximate $Z_n^0$ by a Brownian bridge $Z^0$. In this direction, the following consequence of the Komlós, Major, and Tusnády construction [22] plays a fundamental role. Throughout, $\|Z\|_{[c,d]} := \sup_{c\le x\le d}|Z(x)|$.

Theorem 5.1. There exists a probability space (Ω, F, P), equipped with a standard Brownian motion Z, on which one can construct a version $\tilde Z_n^0$ of $Z_n^0$ such that

$$\bigl\|\tilde Z_n^0 - Z^0\bigr\|_{[0,1]} = O_p\bigl(n^{-1/2}\log n\bigr),$$

where $Z^0(x) := Z(x) - x\,Z(1)$ is the corresponding Brownian bridge.

Since we are looking for the asymptotic distribution of $\sup_x |{}^{0}Y_n(x)|$, properly scaled and centered, we can work with the process $\tilde Z_n^0$ instead of $Z_n^0$. Thus, with some abuse of notation, we drop the tilde in all the processes of Theorem 5.1. The following is an easy estimate. Abusing notation again, the process ${}^{0}Y_n$ in the following result is actually the process resulting from replacing $Z_n^0\circ F_{\delta_n}$ in (5.3) by $\tilde Z_n^0\circ F_{\delta_n}$.

Lemma 5.2. Let ${}^{1}Y_n(x) := L\bigl(x; m, \frac{n^{1/2}}{T_n}, Z^0\circ F_{\delta_n}\bigr)$. Then,

$$\bigl\|{}^{0}Y_n - {}^{1}Y_n\bigr\|_{[a,b]} = \frac{m}{T_n}\,O_p(\log n).$$

Proof. Clearly, $\omega(H; [a,b], \delta) \le 2\,\|H\|_{[a,b]}$ for any process H; thus, we get the result from (5.1) and Theorem 5.1.

As in [4], our approach is to devise successive approximations ${}^{1}Y_n, \dots, {}^{N}Y_n$ of ${}^{0}Y_n$ such that the asymptotic distribution of $\sup_{x\in[a,b]}|{}^{N}Y_n(x)|$, properly centered and scaled by constants $b_n$ and $a_n$, respectively, is easy to determine, and the error between consecutive approximations is negligible when multiplied by $a_n$.
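Theorem 5.1's coupling is a deep construction, but the process $Z_n^0$ itself is elementary to simulate: it is the uniform empirical process, and $\sup_x|Z_n^0(x)|$ is $\sqrt n$ times the Kolmogorov-Smirnov statistic of a uniform sample. The following sketch (ours, purely illustrative, with arbitrary simulation sizes) checks numerically that this supremum stays of order one, as the Brownian-bridge approximation predicts:

```python
import random

def uniform_empirical_sup(n, rng):
    """sup_x |Z_n^0(x)| with Z_n^0(x) = n^{1/2}(F_n(x) - x) for a uniform sample;
    the supremum is attained at the order statistics."""
    u = sorted(rng.random() for _ in range(n))
    d = max(max((i + 1) / n - u[i], u[i] - i / n) for i in range(n))
    return n ** 0.5 * d

rng = random.Random(1)
sups = [uniform_empirical_sup(500, rng) for _ in range(200)]
mean_sup = sum(sups) / len(sups)   # should hover near E[sup|bridge|], about 0.87
```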

Notice that, since a Brownian bridge satisfies $\{Z^0(1-x)\}_x \stackrel{D}{=} \{Z^0(x)\}_x$, we have

$$\bigl\{{}^{1}Y_n(x)\bigr\}_{x\in[a,b]} \stackrel{D}{=} \bigl\{{}^{1'}Y_n(x)\bigr\}_{x\in[a,b]},$$

where ${}^{1'}Y_n(x) := L\bigl(x; m, \frac{n^{1/2}}{T_n}, Z^0\circ \bar F_{\delta_n}\bigr)$ and $\bar F_t := 1 - F_t$. The following estimate is easy to derive.

Lemma 5.3. Suppose that the assumptions of Proposition 3.1 are satisfied. There exist constants K < ∞ and $t_0$ such that, if $T_n/n < t_0$, then ${}^{2}Y_n(x) := L\bigl(x; m, \frac{n^{1/2}}{T_n}, Z\circ\bar F_{\delta_n}\bigr)$ satisfies

$$\bigl\|{}^{1'}Y_n - {}^{2}Y_n\bigr\|_{[a,b]} \le K\,n^{-1/2}\Bigl(1 + \frac{m_n T_n}{n}\Bigr)\,|Z(1)|.$$

Proof. Clearly, ${}^{2}Y_n(x) - {}^{1'}Y_n(x) = L\bigl(x; m, \frac{n^{1/2}}{T_n}, (Z - Z^0)\circ\bar F_{\delta_n}\bigr)$, and $Z(u) - Z^0(u) = u\,Z(1)$. Thus, by (5.1),

$$\bigl\|{}^{1'}Y_n - {}^{2}Y_n\bigr\|_{[a,b]} \le K\,\frac{m\,n^{1/2}}{T_n}\,\omega\bigl(\bar F_{\delta_n}; [a,b], d_m\bigr)\,|Z(1)|,$$

where $d_m = (b-a)/m$. In view of Proposition 3.1, for n and $T_n$ such that $T_n/n < t_0$, there are constants k and k' such that

$$\bigl|\bar F_{\delta_n}(u) - \bar F_{\delta_n}(v)\bigr| \le 2k\,\delta_n^2 + 2k'\,\delta_n\,\frac{b-a}{m},$$

provided that u, v ∈ [a,b] and |v - u| < $d_m$; combining the two displays yields the lemma.

Let us now work with ${}^{2}Y_n$. Because of the self-similarity of Brownian motion, we have that

$$\bigl\{{}^{2}Y_n(x)\bigr\}_{x\in[a,b]} \stackrel{D}{=} \bigl\{{}^{3}Y_n(x)\bigr\}_{x\in[a,b]}, \qquad\text{where}\quad {}^{3}Y_n(x) := L\bigl(x; m, T_n^{-1/2}, Z(\delta_n^{-1}\bar F_{\delta_n})\bigr).$$

The following estimate results from Lévy's modulus of continuity theorem.

Lemma 5.4. Let ${}^{4}Y_n(x) := L\bigl(x; m, T_n^{-1/2}, Z(\bar\nu)\bigr)$, where $\bar\nu(x) := \int_x^\infty s(u)\,du$. If $\delta_n := T_n/n \to 0$, then, for n large enough,

$$\bigl\|{}^{3}Y_n - {}^{4}Y_n\bigr\|_{[a,b]} = m_n\,O_p\bigl(n^{-1/2}\log^{1/2} n\bigr).$$

Proof. It is not hard to see that there exists a constant K such that

$$\bigl\|{}^{3}Y_n - {}^{4}Y_n\bigr\|_{[a,b]} \le K\,T_n^{-1/2}\,m\,\sup_{x\in[a,b]}\Bigl|Z\bigl(\delta_n^{-1}\bar F_{\delta_n}(x)\bigr) - Z\bigl(\bar\nu(x)\bigr)\Bigr|.$$

By Proposition 3.1, there exist constants k > 0 and $t_0 > 0$ such that, for all $0 < \delta < t_0$,

$$\sup_{y\in D}\bigl|\delta^{-1}P[X_\delta \ge y] - \nu[y,\infty)\bigr| \le k\,\delta. \tag{5.4}$$

Thus, by Lévy's modulus of continuity theorem, there exists a constant K > 0 such that, for large enough n, with probability one,

$$\bigl\|{}^{3}Y_n - {}^{4}Y_n\bigr\|_{[a,b]} \le K\,n^{-1/2}\,m\,\log^{1/2} n.$$

We now notice that

$$\Bigl\{Z\Bigl(\int_x^\infty s(u)\,du\Bigr)\Bigr\}_{x\in[a,b]} \stackrel{D}{=} \Bigl\{\int_x^\infty s^{1/2}(u)\,dZ(u)\Bigr\}_{x\in[a,b]},$$

and hence $\{{}^{4}Y_n(x)\}_{x\in[a,b]} \stackrel{D}{=} \{{}^{5}Y_n(x)\}_{x\in[a,b]}$, where

$${}^{5}Y_n(x) := L\Bigl(x; m, T_n^{-1/2}, \int_\cdot^\infty s^{1/2}(u)\,dZ(u)\Bigr).$$

Using integration by parts, one can simplify ${}^{5}Y_n$ as follows:

$${}^{5}Y_n(x) = T_n^{-1/2}\sum_{i=1}^{m}\sum_{j=0}^{k}\hat\varphi_{i,j}(x)\int_{x_{i-1}}^{x_i}s^{1/2}(u)\,\hat\varphi_{i,j}(u)\,dZ(u).$$

For our last estimate we need some conditions on the Lévy density.

Standing assumptions. 1. s is positive and continuous on [a,b]; 2. s is differentiable in (a,b) and, moreover, the derivative of $s^{-1/2}$ is bounded in absolute value on (a,b).

Lemma 5.5. Let

$${}^{6}Y_n(x) := \frac{(b-a)^{1/2}}{T_n^{1/2}}\sum_{i=1}^{m}\sum_{j=0}^{k}\hat\varphi_{i,j}(x)\int_{x_{i-1}}^{x_i}\hat\varphi_{i,j}(u)\,dZ(u).$$

Then, there exists a random variable M such that

$$\bigl\|{}^{6}Y_n - (b-a)^{1/2}\,s^{-1/2}\,{}^{5}Y_n\bigr\|_{[a,b]} \le M\,T_n^{-1/2}.$$

Proof. Let $q(x) := s^{-1/2}(x)$ and $c := (b-a)^{1/2}$. Using integration by parts, the difference

$$H_{i,j}(x) := s^{-1/2}(x)\int_{x_{i-1}}^{x_i}s^{1/2}(u)\,\hat\varphi_{i,j}(u)\,dZ(u) - \int_{x_{i-1}}^{x_i}\hat\varphi_{i,j}(u)\,dZ(u)$$

can be rewritten in terms of the Brownian path Z, q, and q' only (no stochastic integrals). Since q and q' are bounded on [a,b], there exists a constant K such that

$$\sup_{x\in[x_{i-1},x_i]}\bigl|H_{i,j}(x)\bigr| \le K\,m^{-1/2}\sup_{u\in[x_{i-1},x_i]}|Z(u)|.$$

Thus,

$$T_n^{1/2}\,\bigl\|{}^{6}Y_n - c\,s^{-1/2}\,{}^{5}Y_n\bigr\|_{[a,b]} \le K'\,\sup_{u\in[a,b]}|Z(u)|,$$

which proves the lemma with $M := K'\sup_{u\in[a,b]}|Z(u)|$.

The last approximation ${}^{6}Y_n$ is simple enough for determining its asymptotic distribution, appropriately centered and scaled. Indeed,

$$M(T_n, m) := \sup_{x\in[a,b]}\bigl|{}^{6}Y_n(x)\bigr| \stackrel{D}{=} T_n^{-1/2}\,m^{1/2}\,\max_{1\le i\le m}\{\zeta_{i,m}\}, \tag{5.5}$$

where $\{\zeta_{i,m}\}_{i\le m}$ are independent copies of the random variable

$$\zeta := \sup_{x\in[-1,1]}\Bigl|\sum_{j=0}^{k}\sqrt{2j+1}\;Q_j(x)\,Z_j\Bigr|, \tag{5.6}$$

for i.i.d. standard normal random variables $Z_j$ (here $Q_j$ denotes the j-th Legendre polynomial). In the case k = 0, $\zeta = |Z_0|$, while $\zeta = |Z_0 + \sqrt3\,Z_1| \vee |Z_0 - \sqrt3\,Z_1|$ when k = 1.

We now proceed to determine the extreme value distribution of $M_n := \max_{i\le n}\{\zeta_{i,n}\}$ for the cases k = 0 and k = 1. First, we recall the following well-known limit:

$$\lim_{n\to\infty} n\,\bar\Phi\bigl(u_n(y)\bigr) = e^{-y}, \tag{5.7}$$

where $\bar\Phi := 1 - \Phi$ is the standard normal tail, $u_n(y) = y/a_n + b_n$, and

$$a_n = (2\log n)^{1/2}, \tag{5.8}$$

$$b_n = (2\log n)^{1/2} - \tfrac12\,(2\log n)^{-1/2}\bigl(\log\log n + \log 4\pi\bigr). \tag{5.9}$$
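The constants (5.8)-(5.9) are the classical Gumbel centering and scaling for the maximum of n i.i.d. standard normals. A quick numerical check (an illustration of ours; the simulation sizes are arbitrary):

```python
import math, random

def gumbel_constants(n):
    """a_n and b_n from (5.8)-(5.9): for M_n the max of n iid N(0,1) variables,
    a_n * (M_n - b_n) converges in law to the Gumbel distribution."""
    ln = 2.0 * math.log(n)
    a_n = math.sqrt(ln)
    b_n = math.sqrt(ln) - (math.log(math.log(n)) + math.log(4.0 * math.pi)) / (2.0 * math.sqrt(ln))
    return a_n, b_n

def gumbel_cdf(y):
    return math.exp(-math.exp(-y))

random.seed(0)
n, reps = 2000, 400
a_n, b_n = gumbel_constants(n)
# empirical P(a_n (M_n - b_n) <= 1), to be compared with gumbel_cdf(1.0) ~ 0.69
hits = sum(
    a_n * (max(random.gauss(0.0, 1.0) for _ in range(n)) - b_n) <= 1.0
    for _ in range(reps)
)
frac = hits / reps
```

The convergence in (5.7) is logarithmically slow, so the empirical fraction only loosely matches the Gumbel value for moderate n; that slow rate is exactly why the band results below carry $\log^{1/2} n$ factors.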

It follows that

$$\lim_{n\to\infty} P\Bigl(\max_{j\le n}|Z_{j}| \le \frac{y}{a_n} + b_n\Bigr) = e^{-2e^{-y}},$$

for all y, where $\{Z_j\}_{j\ge0}$ are i.i.d. N(0,1). Indeed, for large enough n, the quantity after the limit can be written as follows:

$$\bigl(1 - 2\,\bar\Phi(u_n(y))\bigr)^n = \Bigl(1 - \frac{2\,n\,\bar\Phi(u_n(y))}{n}\Bigr)^n \longrightarrow e^{-2e^{-y}}.$$

In view of (5.5), we have proved the following result.

Lemma 5.6. With constants $a_n$ and $b_n$ as in (5.8)-(5.9) and k = 0,

$$\lim_{n\to\infty} P\Bigl(T_n^{1/2}\,m_n^{-1/2}\sup_{x\in[a,b]}\bigl|{}^{6}Y_n(x)\bigr| \le \frac{y}{a_{m_n}} + b_{m_n}\Bigr) = e^{-2e^{-y}}, \tag{5.10}$$

for all y ≥ 0, $T_n > 0$, and $m_n$ such that $m_n \to \infty$.

To handle the case k = 1, we embed the problem into the theory of multivariate extreme values (see e.g. [8]). Consider independent copies $\{V_i\}_i$ of the following vector of jointly standard Gaussian variables:

$$V := \Bigl(\tfrac{\sqrt3}{2}\,Z_1 + \tfrac12\,Z_0,\;\; \tfrac{\sqrt3}{2}\,Z_1 - \tfrac12\,Z_0\Bigr). \tag{5.11}$$

One can see that

$$\Bigl\{\tfrac12\,T_n^{1/2}\,m^{-1/2}\sup_{x\in[a,b]}\bigl|{}^{6}Y_n(x)\bigr| \le u_m(y)\Bigr\} = \Bigl\{\max_{i\le m}V_i \le u_m(\mathbf y),\ \min_{i\le m}V_i \ge -u_m(\mathbf y)\Bigr\},$$

where $\mathbf y := (y, y)$, $u_m$ acts coordinatewise, and the max and min are taken coordinatewise. The following result determines the asymptotic joint behavior of the maxima and minima. The result is natural from the general theory of multivariate extreme values, and its proof is provided just for completeness.

Lemma 5.7. Let $y = (y_1, y_2)$ and $z = (z_1, z_2)$. Then,

$$\lim_{n\to\infty} P\Bigl(\max_{i\le n}V_i \le u_n(y),\ \min_{i\le n}V_i \ge -u_n(z)\Bigr) = e^{-e^{-y_1}}\,e^{-e^{-y_2}}\,e^{-e^{-z_1}}\,e^{-e^{-z_2}}, \tag{5.12}$$

where $u_n$ acts coordinatewise, $u_n(x) = x/a_n + b_n$.

Proof. Clearly, the probability in (5.12) can be written as

$$A_n := \Bigl\{P\bigl(-u_n(z_1) \le V^{(1)} \le u_n(y_1),\ -u_n(z_2) \le V^{(2)} \le u_n(y_2)\bigr)\Bigr\}^n,$$

where $V = (V^{(1)}, V^{(2)})$ is defined in (5.11). Let us introduce the following notation:

$$\bar F_n(y, z; X, Y) := P\bigl(X > u_n(y),\ Y > u_n(z)\bigr), \qquad \bar F_n(y; X) := P\bigl(X > u_n(y)\bigr),$$

where X and Y represent random variables. We recall the following results, valid for any jointly normal, non-degenerate pair (X, Y) and arbitrary y and z (see Example 5.3. in [8]):

$$\lim_{n\to\infty} n\,\bar F_n(y, z; X, Y) = 0, \qquad \lim_{n\to\infty} n\,\bar F_n(y; X) = e^{-y}.$$

The limit (5.12) follows once we notice that, by inclusion-exclusion, $A_n^{1/n}$ can be written as

$$A_n^{1/n} = 1 - \frac1n\Bigl\{n\bar F_n(z_1; -V^{(1)}) + n\bar F_n(z_2; -V^{(2)}) + n\bar F_n(y_1; V^{(1)}) + n\bar F_n(y_2; V^{(2)}) - n\bar F_n(\cdot,\cdot;\cdot,\cdot) - \dots\Bigr\},$$

where the subtracted terms are the (asymptotically negligible) joint exceedance probabilities of the pairs formed from $\pm V^{(1)}$ and $\pm V^{(2)}$.

We conclude the following result.

Lemma 5.8. With constants $a_n$ and $b_n$ as in (5.8)-(5.9) and k = 1,

$$\lim_{n\to\infty} P\Bigl(\tfrac12\,T_n^{1/2}\,m_n^{-1/2}\sup_{x\in[a,b]}\bigl|{}^{6}Y_n(x)\bigr| \le \frac{y}{a_{m_n}} + b_{m_n}\Bigr) = e^{-4e^{-y}}, \tag{5.13}$$

for all $y \in \mathbb R_+$, $T_n > 0$, and $m_n$ such that $m_n \to \infty$.

The limit behavior of $\sup_{x\in[a,b]}|{}^{6}Y_n(x)|$ for general k is still under research. Nevertheless, for k = 0 (piecewise constant estimators) and k = 1 (piecewise linear estimators), we have the following result.

Theorem 5.9. Suppose that ν(R) = ∞ or σ ≠ 0. Also, suppose that the Lévy density s exists and satisfies the conditions of Proposition 3.1 and the standing assumptions. Let $T_n = c\,n^{\alpha_1}$ and $m_n = [d\,n^{\alpha_2}]$, where $0 < \alpha_1 < 1$, $0 < \alpha_2 < \alpha_1 \wedge (1-\alpha_1)$, and c, d > 0 are constants. Then, for k = 0, 1,

$$\lim_{n\to\infty} P\Bigl(a_{m_n}\Bigl(\kappa^{-1}\,\Delta_n^{1/2}\sup_{x\in[a,b]}\bigl|s^{-1/2}(x)\,{}^{0}Y_n(x)\bigr| - b_{m_n}\Bigr) \le y\Bigr) = e^{-\kappa' e^{-y}}, \tag{5.14}$$

where $\Delta_n := T_n/m_n$, $a_n$ and $b_n$ are defined as in (5.8)-(5.9), $(\kappa, \kappa') = ((b-a)^{-1/2}, 2)$ if k = 0, and $(\kappa, \kappa') = (2(b-a)^{-1/2}, 4)$ if k = 1.

Proof. The idea is to use the following simple observation. Let $L_n$ be functionals on D[a,b] such that

$$\bigl|L_n(\omega_1) - L_n(\omega_2)\bigr| \le M_n\,\|\omega_1 - \omega_2\|_{[a,b]}, \tag{5.15}$$

and let $A_n$, $B_n$ be processes with paths in D[a,b] such that $\|A_n - B_n\| = o_p(1/M_n)$. Then, if $L_n(A_n)$ converges in distribution to F, then $L_n(B_n)$ converges to F as well. Throughout this proof,

$$L_n(\omega) := a_{m_n}\Bigl(\kappa^{-1}\sqrt{c/d}\;n^{(\alpha_1-\alpha_2)/2}\sup_{x\in[a,b]}\bigl|s^{-1/2}(x)\,\omega(x)\bigr| - b_{m_n}\Bigr),$$

which satisfies the Lipschitz condition (5.15) with $M_n = \kappa^{-1}\sqrt{c/d}\;\|s^{-1/2}\|_{[a,b]}\;a_{m_n}\,n^{(\alpha_1-\alpha_2)/2}$. From Lemma 5.5 and Lemmas 5.6-5.8, in order for (5.14) to hold with ${}^{0}Y_n$ replaced by ${}^{5}Y_n$, it suffices that

$$\lim_{n\to\infty} n^{(\alpha_1-\alpha_2)/2}\,a_{m_n}\,T_n^{-1/2} = 0,$$

which is obvious since $\alpha_2 > 0$. Hence, since ${}^{4}Y_n$ has the same law as ${}^{5}Y_n$, (5.14) holds for ${}^{4}Y_n$. In the light of Lemma 5.4, (5.14) will hold for ${}^{3}Y_n$, and hence for ${}^{2}Y_n$ as well, since

$$\lim_{n\to\infty} n^{(\alpha_1-\alpha_2)/2}\,a_{m_n}\,m_n\,n^{-1/2}\log^{1/2} n = c'\lim_{n\to\infty} n^{(\alpha_1+\alpha_2-1)/2}\log n = 0.$$

Similarly, in view of Lemma 5.3, (5.14) will hold for ${}^{1}Y_n$ as well, and hence for the tilde version of ${}^{0}Y_n$, since

$$\lim_{n\to\infty} n^{(\alpha_1-\alpha_2)/2}\,a_{m_n}\,n^{-1/2}\Bigl(1 + \frac{m_n T_n}{n}\Bigr) = 0.$$

Finally, in the light of Lemma 5.2, in order for (5.14) to hold for ${}^{0}Y_n$, it suffices that

$$\lim_{n\to\infty} n^{(\alpha_1-\alpha_2)/2}\,a_{m_n}\,\frac{m_n}{T_n}\,\log n = 0.$$

This holds since the sequence is $O\bigl(n^{(\alpha_2-\alpha_1)/2}\log^{3/2} n\bigr)$, which converges to 0.

The previous result shows that

$$a_{m_n}\Bigl(\kappa^{-1}\,\Delta_n^{1/2}\sup_{x\in[a,b]}\bigl|s^{-1/2}(x)\,\bigl(\hat s_n(x) - E\,\hat s_n(x)\bigr)\bigr| - b_{m_n}\Bigr)$$

converges to a Gumbel distribution. The final step in order to construct confidence bands will be to determine conditions under which $E\,\hat s_n$ can be replaced by s.

Corollary 5.10. Suppose that the conditions of Theorem 5.9 hold true, that the restriction of s to [a,b] is a member of $B^\alpha(L^\infty([a,b]))$, and also that

$$0 < \alpha_1 < \frac{2\alpha+1}{3\alpha+2}, \qquad \frac{\alpha_1}{1+2\alpha} < \alpha_2 < 2 - 3\alpha_1. \tag{5.16}$$

Then,

$$\lim_{n\to\infty} P\Bigl(a_{m_n}\Bigl(\kappa^{-1}\,\Delta_n^{1/2}\sup_{x\in[a,b]}\bigl|s^{-1/2}(x)\,\bigl(\hat s_n(x) - s(x)\bigr)\bigr| - b_{m_n}\Bigr) \le y\Bigr) = e^{-\kappa' e^{-y}}, \tag{5.17}$$

where we used the same notation for κ and κ' as in Theorem 5.9.

Proof. Using the same reasoning as in the proof of Theorem 4.4, it turns out that

$$\sup_{x\in[a,b]}\bigl|E\,\hat s_n(x) - s(x)\bigr| \le K\Bigl(\frac{m_n T_n}{n} + m_n^{-\alpha}\Bigr),$$

for an absolute constant K. As in the proof of Theorem 5.9, to show (5.17) it suffices that

$$\lim_{n\to\infty} a_{m_n}\,n^{(\alpha_1-\alpha_2)/2}\Bigl(\frac{m_n T_n}{n} + m_n^{-\alpha}\Bigr) = 0.$$

The previous limit holds if and only if $\frac{3\alpha_1+\alpha_2}{2} - 1 < 0$ and $\frac{\alpha_1}{1+2\alpha} < \alpha_2$. These two inequalities, in combination with the restrictions on $\alpha_1$ and $\alpha_2$ stated in Theorem 5.9, yield the conditions in (5.16).

Since $(\alpha_1 - \alpha_2)/2$ can be made arbitrarily close to $\alpha/(3\alpha+2)$ on the range of values (5.16), $a_{m_n}\,\Delta_n^{-1/2}$ can be made to vanish at a rate arbitrarily close to $\log^{1/2} n\;n^{-\alpha/(3\alpha+2)}$. In particular, if $0 < \varepsilon \ll 1$ and s is smooth enough, one can choose $m_n$ and $T_n$ such that the rate of convergence of $\hat s_n$ to s is

$$O\bigl(\log^{1/2} n\;n^{-1/3+\varepsilon}\bigr),$$

uniformly on [a,b].

Corollary 5.10 allows us to construct confidence bands for s on [a,b] based on the projection estimators $\hat s_n$ on regular piecewise linear or constant polynomials. Indeed, suppose that $y_\alpha$ is such that

$$1 - \exp\{-\kappa'\,e^{-y_\alpha}\} = \alpha.$$

Fix

$$d_n := \frac{\kappa}{2}\Bigl(\frac{y_\alpha}{a_{m_n}} + b_{m_n}\Bigr)\,\Delta_n^{-1/2}.$$

Then, as n → ∞,

$$s(x) \in \hat s_n(x) + 2d_n^2 \pm 2d_n\,\bigl(\hat s_n(x) + d_n^2\bigr)^{1/2}, \tag{5.18}$$

with 100(1-α)% confidence. The above interval is asymptotically equivalent to the following simpler interval:

$$s(x) \in \hat s_n(x) \pm \kappa\Bigl(\frac{y_\alpha}{a_{m_n}} + b_{m_n}\Bigr)\,\Delta_n^{-1/2}\,\hat s_n^{1/2}(x). \tag{5.19}$$
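For concreteness, here is how the simpler band (5.19) translates into code, with $\Delta_n = T_n/m_n$ and the κ, κ' of Theorem 5.9. This is our own sketch of the formula (the function name and interface are hypothetical), not an implementation from the paper:

```python
import math

def band_halfwidth(s_hat, a, b, m_n, T_n, alpha=0.05, k=0):
    """Simultaneous band half-width from (5.19):
    (y_alpha / a_m + b_m) * kappa * sqrt(m_n / T_n) * sqrt(s_hat(x)),
    with kappa = (b-a)^{-1/2}, kappa' = 2 for k = 0 (piecewise constant),
    and kappa = 2 (b-a)^{-1/2}, kappa' = 4 for k = 1 (piecewise linear)."""
    kappa = (b - a) ** -0.5 if k == 0 else 2.0 * (b - a) ** -0.5
    kappap = 2.0 if k == 0 else 4.0
    y = -math.log(-math.log(1.0 - alpha) / kappap)   # solves exp(-kappap e^{-y}) = 1 - alpha
    ln = 2.0 * math.log(m_n)
    a_m = math.sqrt(ln)                               # constants (5.8)-(5.9)
    b_m = math.sqrt(ln) - (math.log(math.log(m_n)) + math.log(4.0 * math.pi)) / (2.0 * math.sqrt(ln))
    delta = T_n / m_n
    return [(y / a_m + b_m) * kappa * math.sqrt(max(s, 0.0) / delta) for s in s_hat]
```

Note that the half-width inherits the pointwise order $\sqrt{m_n/(T_n(b-a))}\,\hat s_n^{1/2}(x)$ of Remark 4.5, inflated by the $\sqrt{\log m_n}$ factor coming from $b_{m_n}$.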

6. A data-driven selection method

The approach so far has been that of fixing the dimension of the sieve so that the resulting projection estimator has certain desirable asymptotic properties. In Section 2 (Theorem 2.1), we tuned the dimension m to the smoothness parameter α so that the rate of convergence of the risk of estimation is optimal. In Sections 4 and 5, the dimensions m for which the central limit theorems hold true also depend on the smoothness parameter α. Such an approach has the obvious drawback of requiring, or at least presuming, the value of α.

A typical approach for model selection consists of minimizing an unbiased estimator of the risk of estimation. This approach was developed in [3] in the context of Lévy density estimation; let us briefly discuss the findings there. The key idea comes from the following risk decomposition:

$$E\bigl[\|s - \hat s^c\|^2\bigr] = \|s\|^2 + E\bigl[-\|\hat s^c\|^2 + \mathrm{pen}^c(S)\bigr], \tag{6.1}$$

where $\hat s^c$ is as in (2.4) substituting $\hat\beta_\pi(\varphi_j)$ by its continuous-time version

$$\hat\beta^c(\varphi_j) := \frac1T\sum_{t\le T}\varphi_j(\Delta X_t), \qquad j = 1,\dots,d,$$

and $\mathrm{pen}^c(S)$ is defined by

$$\mathrm{pen}^c(S) := \frac{2}{T^2}\sum_{t\le T}\sum_{j=1}^{d}\varphi_j^2(\Delta X_t). \tag{6.2}$$

Equation (6.1) shows that the observable statistic $-\|\hat s^c\|^2 + \mathrm{pen}^c(S)$ is an unbiased estimator for the risk of estimation of the projection estimator $\hat s^c$, up to the constant $\|s\|^2$, suggesting a data-driven criterion for model selection. Concretely, given a collection of sieves $\{S_m,\ m\in\mathcal M\}$, we should choose the projection estimator $\tilde s^c := \hat s^c_{\hat m}$, where

$$\hat m \in \operatorname*{argmin}_{m\in\mathcal M}\bigl\{-\|\hat s^c_m\|^2 + \mathrm{pen}^c(S_m)\bigr\}.$$

Such an estimator $\tilde s^c$ is called a penalized projection estimator (p.p.e.), since the role of $\mathrm{pen}^c(S)$ is to penalize large linear models. In [6], it is shown that the p.p.e. $\tilde s^c$ is adaptive in the class of Besov Lévy densities of Section 2, in the sense that $\tilde s^c$ attains the optimal rate of convergence $O(T^{-2\alpha/(2\alpha+1)})$ without using the knowledge of α.

Unfortunately, the previous approach intrinsically requires continuous-time sampling of the process to determine the jumps $\Delta X_t$. However, the analysis could still be useful if one uses the natural discrete-based proxies of $\hat\beta^c$ and $\mathrm{pen}^c$, where the jumps $\Delta X_t$ are replaced by the increments $X_{t_k} - X_{t_{k-1}}$. This idea leads to the estimators $\hat s^\pi$ in (2.4) and to take

$$\mathrm{pen}^\pi(S) := \frac{2}{T^2}\sum_{k=1}^{n}\sum_{\varphi\in G}\varphi^2\bigl(X_{t_k} - X_{t_{k-1}}\bigr) \tag{6.3}$$

as the penalization term. In the light of the previous arguments, we propose a discrete-sampling model selection criterion as follows:

$$\hat m^\pi := \operatorname*{argmin}_{m\in\mathcal M}\bigl\{-\|\hat s^\pi_m\|^2 + \mathrm{pen}^\pi(S_m)\bigr\} = \operatorname*{argmin}_{m\in\mathcal M}\Bigl\{-\sum_{j=1}^{d}\bigl(\hat\beta_\pi(\varphi_j)\bigr)^2 + \mathrm{pen}^\pi(S_m)\Bigr\}, \tag{6.4}$$

where $G_m := \{\varphi_1,\dots,\varphi_d\}$ is an orthonormal basis of $S_m$, $\hat\beta_\pi$ is given by (2.1), and $\mathrm{pen}^\pi$ is given by (6.3). The resulting estimator

$$\tilde s := \hat s^\pi_{\hat m^\pi} \tag{6.5}$$

will be called the discrete-based penalized projection estimator. We hope to extend in a future work the adaptivity result of [6] to this discrete-based p.p.e. In the sequel, we illustrate the performance of these estimators for an infinite-jump-activity Lévy process of relevance in the area of mathematical finance.

7. An example: estimation of variance Gamma processes

Variance Gamma (VG) processes were proposed in [23] (see also [9] and [2]) as substitutes for the Brownian motion in the Black-Scholes model. Since their introduction, VG processes have received a great deal of attention, even in the financial industry. A variance Gamma process $X = \{X(t)\}_{t\ge0}$ is a Brownian motion with drift, time-changed by a Gamma Lévy process. Concretely,

$$X(t) = \theta\,U(t) + \sigma\,W(U(t)), \tag{7.1}$$

where $\{W(t)\}_{t\ge0}$ is a standard Brownian motion, θ ∈ R, σ > 0, and $U = \{U(t)\}_{t\ge0}$ is an independent Gamma Lévy process with density at time t given by

$$f_t(x) = \frac{x^{t/\nu - 1}\exp(-x/\nu)}{\nu^{t/\nu}\,\Gamma(t/\nu)}. \tag{7.2}$$

Notice that $E[U(t)] = t$ and $\mathrm{Var}[U(t)] = \nu t$; therefore, the random clock U has a mean rate of one and a variance rate of ν. The process X is itself a Lévy process, since Gamma processes are subordinators (see Theorem 30.1 of [28]). Moreover, it is not hard to check that X is the difference of two Gamma Lévy processes (see e.g. (2.1) of [7]):

$$\{X(t)\}_{t\ge0} \stackrel{D}{=} \{X_+(t) - X_-(t)\}_{t\ge0}, \tag{7.3}$$

where $\{X_+(t)\}_{t\ge0}$ and $\{X_-(t)\}_{t\ge0}$ are Gamma Lévy processes with respective Lévy measures

$$\nu_\pm(dx) = \frac{\alpha}{x}\,\exp\Bigl(-\frac{x}{\beta_\pm}\Bigr)\,dx, \qquad x > 0.$$

Here, α = 1/ν and

$$\beta_\pm = \sqrt{\frac{\theta^2\nu^2}{4} + \frac{\sigma^2\nu}{2}} \;\pm\; \frac{\theta\nu}{2}.$$

As a consequence of this decomposition, the Lévy density of X takes the form

$$s(x) = \begin{cases} \dfrac{\alpha}{|x|}\exp\Bigl(-\dfrac{|x|}{\beta_-}\Bigr), & x < 0, \\[1ex] \dfrac{\alpha}{x}\exp\Bigl(-\dfrac{x}{\beta_+}\Bigr), & x > 0, \end{cases} \tag{7.4}$$

where α > 0, β_- ≥ 0, and β_+ ≥ 0 (of course, β_- + β_+ > 0). In that case, α controls the overall jump activity, while β_+ and β_- take respectively charge of the intensity of large positive and negative jumps. In particular, the difference between 1/β_+ and 1/β_- determines the frequency of drops relative to rises, while their sum measures the frequency of large moves relative to small ones.

The above two characterizations provide straightforward methods to simulate a variance Gamma model. One way is to simulate the Gamma Lévy processes $\{X_+(t)\}_{0\le t\le T}$ and $\{X_-(t)\}_{0\le t\le T}$ of (7.3) using the series representation method introduced in Rosiński [26]. Another approach is to generate the random time change $\{U(t)\}_{0\le t\le T}$ of (7.1), and then construct a discrete skeleton from the increments $X(i\Delta t) - X((i-1)\Delta t)$, i ≥ 1; the increments of X are simply simulated using normal random variables with means and variances determined by the increments of U.

The performance of projection estimation for variance Gamma Lévy processes was illustrated in [4] via simulation experiments. In this part we extend that analysis to show briefly the performance of confidence bands. As in [4], we take as approximating linear models $S_m$ the span of the indicator functions $\chi_{[x_0,x_1]},\dots,\chi_{(x_{m-1},x_m]}$, where $x_0 < \dots < x_m$ is a regular partition of an interval D = [a,b], with 0 < a or b < 0. The simulation experiment we consider is actually motivated by the empirical findings of [9], based on daily returns on the S&P stock index from January 1992 to September 1994 (see their Table I).
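The second simulation recipe just described (a discrete skeleton built from Gamma-distributed increments of the clock U) takes only a few lines. The sketch below is ours, and the parameter values are arbitrary illustrations, not the estimates discussed next:

```python
import math, random

def simulate_vg(theta, sigma, nu, T, n, rng):
    """Discrete skeleton of a variance Gamma process on [0, T] at n equally
    spaced times: each increment is theta*dU + sigma*sqrt(dU)*N(0,1), where
    dU ~ Gamma(shape=dt/nu, scale=nu) is the Gamma time-change increment,
    so that E[dU] = dt and Var[dU] = nu*dt."""
    dt = T / n
    x, path = 0.0, [0.0]
    for _ in range(n):
        du = rng.gammavariate(dt / nu, nu)
        x += theta * du + sigma * math.sqrt(du) * rng.gauss(0.0, 1.0)
        path.append(x)
    return path

rng = random.Random(42)
path = simulate_vg(theta=-0.1, sigma=0.2, nu=0.02, T=1.0, n=5000, rng=rng)
increments = [b - a for a, b in zip(path, path[1:])]
```

The increments list is exactly the input that a histogram-type projection estimator of s would consume.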
Using maximum likelihood methods, the annualized estimates of the parameters for the variance Gamma model were reported to be $\hat\theta_{ML}$, $\hat\sigma^2_{ML}$, and $\hat\nu_{ML} = 0.002$, from which we obtain $\hat\alpha_{ML} = 500$ and the corresponding $\hat\beta_+^{ML}$ and $\hat\beta_-^{ML}$. Taking these parameter values, with α = 500, we simulate the Lévy process on $[0, T_{\max}]$ (in years). Then, for each n = 100,...,1000 and n = 2000,...,20000, we sample the process at n equally spaced time points during $[0, T_n]$, where $T_n := 50\,n^{\alpha_1}$. For each n, we compute the projection estimator on $S_{m_n}$ with $m_n = [5\,n^{\alpha_2}]$. Out of these estimators, we select the projection estimator via the method (6.4), and determine the corresponding confidence bands. Figures 1 and 2 show the results.


More information

Gaussian Random Field: simulation and quantification of the error

Gaussian Random Field: simulation and quantification of the error Gaussian Random Field: simulation and quantification of the error EMSE Workshop 6 November 2017 3 2 1 0-1 -2-3 80 60 40 1 Continuity Separability Proving continuity without separability 2 The stationary

More information

Finite-dimensional spaces. C n is the space of n-tuples x = (x 1,..., x n ) of complex numbers. It is a Hilbert space with the inner product

Finite-dimensional spaces. C n is the space of n-tuples x = (x 1,..., x n ) of complex numbers. It is a Hilbert space with the inner product Chapter 4 Hilbert Spaces 4.1 Inner Product Spaces Inner Product Space. A complex vector space E is called an inner product space (or a pre-hilbert space, or a unitary space) if there is a mapping (, )

More information

Optimal global rates of convergence for interpolation problems with random design

Optimal global rates of convergence for interpolation problems with random design Optimal global rates of convergence for interpolation problems with random design Michael Kohler 1 and Adam Krzyżak 2, 1 Fachbereich Mathematik, Technische Universität Darmstadt, Schlossgartenstr. 7, 64289

More information

Statistics 612: L p spaces, metrics on spaces of probabilites, and connections to estimation

Statistics 612: L p spaces, metrics on spaces of probabilites, and connections to estimation Statistics 62: L p spaces, metrics on spaces of probabilites, and connections to estimation Moulinath Banerjee December 6, 2006 L p spaces and Hilbert spaces We first formally define L p spaces. Consider

More information

Convergence of Multivariate Quantile Surfaces

Convergence of Multivariate Quantile Surfaces Convergence of Multivariate Quantile Surfaces Adil Ahidar Institut de Mathématiques de Toulouse - CERFACS August 30, 2013 Adil Ahidar (Institut de Mathématiques de Toulouse Convergence - CERFACS) of Multivariate

More information

Multiple Random Variables

Multiple Random Variables Multiple Random Variables Joint Probability Density Let X and Y be two random variables. Their joint distribution function is F ( XY x, y) P X x Y y. F XY ( ) 1, < x

More information

Nonparametric Drift Estimation for Stochastic Differential Equations

Nonparametric Drift Estimation for Stochastic Differential Equations Nonparametric Drift Estimation for Stochastic Differential Equations Gareth Roberts 1 Department of Statistics University of Warwick Brazilian Bayesian meeting, March 2010 Joint work with O. Papaspiliopoulos,

More information

7: FOURIER SERIES STEVEN HEILMAN

7: FOURIER SERIES STEVEN HEILMAN 7: FOURIER SERIES STEVE HEILMA Contents 1. Review 1 2. Introduction 1 3. Periodic Functions 2 4. Inner Products on Periodic Functions 3 5. Trigonometric Polynomials 5 6. Periodic Convolutions 7 7. Fourier

More information

Stable Process. 2. Multivariate Stable Distributions. July, 2006

Stable Process. 2. Multivariate Stable Distributions. July, 2006 Stable Process 2. Multivariate Stable Distributions July, 2006 1. Stable random vectors. 2. Characteristic functions. 3. Strictly stable and symmetric stable random vectors. 4. Sub-Gaussian random vectors.

More information

Multivariate Regression

Multivariate Regression Multivariate Regression The so-called supervised learning problem is the following: we want to approximate the random variable Y with an appropriate function of the random variables X 1,..., X p with the

More information

Discretization of SDEs: Euler Methods and Beyond

Discretization of SDEs: Euler Methods and Beyond Discretization of SDEs: Euler Methods and Beyond 09-26-2006 / PRisMa 2006 Workshop Outline Introduction 1 Introduction Motivation Stochastic Differential Equations 2 The Time Discretization of SDEs Monte-Carlo

More information

ELEMENTS OF PROBABILITY THEORY

ELEMENTS OF PROBABILITY THEORY ELEMENTS OF PROBABILITY THEORY Elements of Probability Theory A collection of subsets of a set Ω is called a σ algebra if it contains Ω and is closed under the operations of taking complements and countable

More information

ON ADDITIVE TIME-CHANGES OF FELLER PROCESSES. 1. Introduction

ON ADDITIVE TIME-CHANGES OF FELLER PROCESSES. 1. Introduction ON ADDITIVE TIME-CHANGES OF FELLER PROCESSES ALEKSANDAR MIJATOVIĆ AND MARTIJN PISTORIUS Abstract. In this note we generalise the Phillips theorem [1] on the subordination of Feller processes by Lévy subordinators

More information

Introduction to Empirical Processes and Semiparametric Inference Lecture 02: Overview Continued

Introduction to Empirical Processes and Semiparametric Inference Lecture 02: Overview Continued Introduction to Empirical Processes and Semiparametric Inference Lecture 02: Overview Continued Michael R. Kosorok, Ph.D. Professor and Chair of Biostatistics Professor of Statistics and Operations Research

More information

Nonparametric Regression. Badr Missaoui

Nonparametric Regression. Badr Missaoui Badr Missaoui Outline Kernel and local polynomial regression. Penalized regression. We are given n pairs of observations (X 1, Y 1 ),...,(X n, Y n ) where Y i = r(x i ) + ε i, i = 1,..., n and r(x) = E(Y

More information

Reproducing Kernel Hilbert Spaces Class 03, 15 February 2006 Andrea Caponnetto

Reproducing Kernel Hilbert Spaces Class 03, 15 February 2006 Andrea Caponnetto Reproducing Kernel Hilbert Spaces 9.520 Class 03, 15 February 2006 Andrea Caponnetto About this class Goal To introduce a particularly useful family of hypothesis spaces called Reproducing Kernel Hilbert

More information

The deterministic Lasso

The deterministic Lasso The deterministic Lasso Sara van de Geer Seminar für Statistik, ETH Zürich Abstract We study high-dimensional generalized linear models and empirical risk minimization using the Lasso An oracle inequality

More information

Reflected Brownian Motion

Reflected Brownian Motion Chapter 6 Reflected Brownian Motion Often we encounter Diffusions in regions with boundary. If the process can reach the boundary from the interior in finite time with positive probability we need to decide

More information

LAN property for sde s with additive fractional noise and continuous time observation

LAN property for sde s with additive fractional noise and continuous time observation LAN property for sde s with additive fractional noise and continuous time observation Eulalia Nualart (Universitat Pompeu Fabra, Barcelona) joint work with Samy Tindel (Purdue University) Vlad s 6th birthday,

More information

Maximum Likelihood Drift Estimation for Gaussian Process with Stationary Increments

Maximum Likelihood Drift Estimation for Gaussian Process with Stationary Increments Austrian Journal of Statistics April 27, Volume 46, 67 78. AJS http://www.ajs.or.at/ doi:.773/ajs.v46i3-4.672 Maximum Likelihood Drift Estimation for Gaussian Process with Stationary Increments Yuliya

More information

CALCULATION METHOD FOR NONLINEAR DYNAMIC LEAST-ABSOLUTE DEVIATIONS ESTIMATOR

CALCULATION METHOD FOR NONLINEAR DYNAMIC LEAST-ABSOLUTE DEVIATIONS ESTIMATOR J. Japan Statist. Soc. Vol. 3 No. 200 39 5 CALCULAION MEHOD FOR NONLINEAR DYNAMIC LEAS-ABSOLUE DEVIAIONS ESIMAOR Kohtaro Hitomi * and Masato Kagihara ** In a nonlinear dynamic model, the consistency and

More information

Metric Spaces and Topology

Metric Spaces and Topology Chapter 2 Metric Spaces and Topology From an engineering perspective, the most important way to construct a topology on a set is to define the topology in terms of a metric on the set. This approach underlies

More information

A tailor made nonparametric density estimate

A tailor made nonparametric density estimate A tailor made nonparametric density estimate Daniel Carando 1, Ricardo Fraiman 2 and Pablo Groisman 1 1 Universidad de Buenos Aires 2 Universidad de San Andrés School and Workshop on Probability Theory

More information

Regular Variation and Extreme Events for Stochastic Processes

Regular Variation and Extreme Events for Stochastic Processes 1 Regular Variation and Extreme Events for Stochastic Processes FILIP LINDSKOG Royal Institute of Technology, Stockholm 2005 based on joint work with Henrik Hult www.math.kth.se/ lindskog 2 Extremes for

More information

Regression and Statistical Inference

Regression and Statistical Inference Regression and Statistical Inference Walid Mnif wmnif@uwo.ca Department of Applied Mathematics The University of Western Ontario, London, Canada 1 Elements of Probability 2 Elements of Probability CDF&PDF

More information

Model selection theory: a tutorial with applications to learning

Model selection theory: a tutorial with applications to learning Model selection theory: a tutorial with applications to learning Pascal Massart Université Paris-Sud, Orsay ALT 2012, October 29 Asymptotic approach to model selection - Idea of using some penalized empirical

More information

Week 9 The Central Limit Theorem and Estimation Concepts

Week 9 The Central Limit Theorem and Estimation Concepts Week 9 and Estimation Concepts Week 9 and Estimation Concepts Week 9 Objectives 1 The Law of Large Numbers and the concept of consistency of averages are introduced. The condition of existence of the population

More information

Wiener Measure and Brownian Motion

Wiener Measure and Brownian Motion Chapter 16 Wiener Measure and Brownian Motion Diffusion of particles is a product of their apparently random motion. The density u(t, x) of diffusing particles satisfies the diffusion equation (16.1) u

More information

Chapter 9. Non-Parametric Density Function Estimation

Chapter 9. Non-Parametric Density Function Estimation 9-1 Density Estimation Version 1.1 Chapter 9 Non-Parametric Density Function Estimation 9.1. Introduction We have discussed several estimation techniques: method of moments, maximum likelihood, and least

More information

MAT 570 REAL ANALYSIS LECTURE NOTES. Contents. 1. Sets Functions Countability Axiom of choice Equivalence relations 9

MAT 570 REAL ANALYSIS LECTURE NOTES. Contents. 1. Sets Functions Countability Axiom of choice Equivalence relations 9 MAT 570 REAL ANALYSIS LECTURE NOTES PROFESSOR: JOHN QUIGG SEMESTER: FALL 204 Contents. Sets 2 2. Functions 5 3. Countability 7 4. Axiom of choice 8 5. Equivalence relations 9 6. Real numbers 9 7. Extended

More information

SYSM 6303: Quantitative Introduction to Risk and Uncertainty in Business Lecture 4: Fitting Data to Distributions

SYSM 6303: Quantitative Introduction to Risk and Uncertainty in Business Lecture 4: Fitting Data to Distributions SYSM 6303: Quantitative Introduction to Risk and Uncertainty in Business Lecture 4: Fitting Data to Distributions M. Vidyasagar Cecil & Ida Green Chair The University of Texas at Dallas Email: M.Vidyasagar@utdallas.edu

More information

Stochastic Volatility and Correction to the Heat Equation

Stochastic Volatility and Correction to the Heat Equation Stochastic Volatility and Correction to the Heat Equation Jean-Pierre Fouque, George Papanicolaou and Ronnie Sircar Abstract. From a probabilist s point of view the Twentieth Century has been a century

More information

Proof. We indicate by α, β (finite or not) the end-points of I and call

Proof. We indicate by α, β (finite or not) the end-points of I and call C.6 Continuous functions Pag. 111 Proof of Corollary 4.25 Corollary 4.25 Let f be continuous on the interval I and suppose it admits non-zero its (finite or infinite) that are different in sign for x tending

More information

Topological properties

Topological properties CHAPTER 4 Topological properties 1. Connectedness Definitions and examples Basic properties Connected components Connected versus path connected, again 2. Compactness Definition and first examples Topological

More information

Lecture 8: Information Theory and Statistics

Lecture 8: Information Theory and Statistics Lecture 8: Information Theory and Statistics Part II: Hypothesis Testing and I-Hsiang Wang Department of Electrical Engineering National Taiwan University ihwang@ntu.edu.tw December 23, 2015 1 / 50 I-Hsiang

More information

Minimax Rate of Convergence for an Estimator of the Functional Component in a Semiparametric Multivariate Partially Linear Model.

Minimax Rate of Convergence for an Estimator of the Functional Component in a Semiparametric Multivariate Partially Linear Model. Minimax Rate of Convergence for an Estimator of the Functional Component in a Semiparametric Multivariate Partially Linear Model By Michael Levine Purdue University Technical Report #14-03 Department of

More information

Goodness-of-fit tests for the cure rate in a mixture cure model

Goodness-of-fit tests for the cure rate in a mixture cure model Biometrika (217), 13, 1, pp. 1 7 Printed in Great Britain Advance Access publication on 31 July 216 Goodness-of-fit tests for the cure rate in a mixture cure model BY U.U. MÜLLER Department of Statistics,

More information

Functional Analysis. Franck Sueur Metric spaces Definitions Completeness Compactness Separability...

Functional Analysis. Franck Sueur Metric spaces Definitions Completeness Compactness Separability... Functional Analysis Franck Sueur 2018-2019 Contents 1 Metric spaces 1 1.1 Definitions........................................ 1 1.2 Completeness...................................... 3 1.3 Compactness......................................

More information

Risk-Minimality and Orthogonality of Martingales

Risk-Minimality and Orthogonality of Martingales Risk-Minimality and Orthogonality of Martingales Martin Schweizer Universität Bonn Institut für Angewandte Mathematik Wegelerstraße 6 D 53 Bonn 1 (Stochastics and Stochastics Reports 3 (199, 123 131 2

More information

1.5 Approximate Identities

1.5 Approximate Identities 38 1 The Fourier Transform on L 1 (R) which are dense subspaces of L p (R). On these domains, P : D P L p (R) and M : D M L p (R). Show, however, that P and M are unbounded even when restricted to these

More information

Hardy-Stein identity and Square functions

Hardy-Stein identity and Square functions Hardy-Stein identity and Square functions Daesung Kim (joint work with Rodrigo Bañuelos) Department of Mathematics Purdue University March 28, 217 Daesung Kim (Purdue) Hardy-Stein identity UIUC 217 1 /

More information

I forgot to mention last time: in the Ito formula for two standard processes, putting

I forgot to mention last time: in the Ito formula for two standard processes, putting I forgot to mention last time: in the Ito formula for two standard processes, putting dx t = a t dt + b t db t dy t = α t dt + β t db t, and taking f(x, y = xy, one has f x = y, f y = x, and f xx = f yy

More information

Some Aspects of Universal Portfolio

Some Aspects of Universal Portfolio 1 Some Aspects of Universal Portfolio Tomoyuki Ichiba (UC Santa Barbara) joint work with Marcel Brod (ETH Zurich) Conference on Stochastic Asymptotics & Applications Sixth Western Conference on Mathematical

More information

THE INVERSE FUNCTION THEOREM

THE INVERSE FUNCTION THEOREM THE INVERSE FUNCTION THEOREM W. PATRICK HOOPER The implicit function theorem is the following result: Theorem 1. Let f be a C 1 function from a neighborhood of a point a R n into R n. Suppose A = Df(a)

More information

Lecture 1: Entropy, convexity, and matrix scaling CSE 599S: Entropy optimality, Winter 2016 Instructor: James R. Lee Last updated: January 24, 2016

Lecture 1: Entropy, convexity, and matrix scaling CSE 599S: Entropy optimality, Winter 2016 Instructor: James R. Lee Last updated: January 24, 2016 Lecture 1: Entropy, convexity, and matrix scaling CSE 599S: Entropy optimality, Winter 2016 Instructor: James R. Lee Last updated: January 24, 2016 1 Entropy Since this course is about entropy maximization,

More information

Stochastic Differential Equations.

Stochastic Differential Equations. Chapter 3 Stochastic Differential Equations. 3.1 Existence and Uniqueness. One of the ways of constructing a Diffusion process is to solve the stochastic differential equation dx(t) = σ(t, x(t)) dβ(t)

More information

Regression: Lecture 2

Regression: Lecture 2 Regression: Lecture 2 Niels Richard Hansen April 26, 2012 Contents 1 Linear regression and least squares estimation 1 1.1 Distributional results................................ 3 2 Non-linear effects and

More information

Chapter 1. Preliminaries. The purpose of this chapter is to provide some basic background information. Linear Space. Hilbert Space.

Chapter 1. Preliminaries. The purpose of this chapter is to provide some basic background information. Linear Space. Hilbert Space. Chapter 1 Preliminaries The purpose of this chapter is to provide some basic background information. Linear Space Hilbert Space Basic Principles 1 2 Preliminaries Linear Space The notion of linear space

More information

The Skorokhod reflection problem for functions with discontinuities (contractive case)

The Skorokhod reflection problem for functions with discontinuities (contractive case) The Skorokhod reflection problem for functions with discontinuities (contractive case) TAKIS KONSTANTOPOULOS Univ. of Texas at Austin Revised March 1999 Abstract Basic properties of the Skorokhod reflection

More information

Functional Analysis. Martin Brokate. 1 Normed Spaces 2. 2 Hilbert Spaces The Principle of Uniform Boundedness 32

Functional Analysis. Martin Brokate. 1 Normed Spaces 2. 2 Hilbert Spaces The Principle of Uniform Boundedness 32 Functional Analysis Martin Brokate Contents 1 Normed Spaces 2 2 Hilbert Spaces 2 3 The Principle of Uniform Boundedness 32 4 Extension, Reflexivity, Separation 37 5 Compact subsets of C and L p 46 6 Weak

More information

Analysis Qualifying Exam

Analysis Qualifying Exam Analysis Qualifying Exam Spring 2017 Problem 1: Let f be differentiable on R. Suppose that there exists M > 0 such that f(k) M for each integer k, and f (x) M for all x R. Show that f is bounded, i.e.,

More information

Lecture 22 Girsanov s Theorem

Lecture 22 Girsanov s Theorem Lecture 22: Girsanov s Theorem of 8 Course: Theory of Probability II Term: Spring 25 Instructor: Gordan Zitkovic Lecture 22 Girsanov s Theorem An example Consider a finite Gaussian random walk X n = n

More information

Real Analysis Problems

Real Analysis Problems Real Analysis Problems Cristian E. Gutiérrez September 14, 29 1 1 CONTINUITY 1 Continuity Problem 1.1 Let r n be the sequence of rational numbers and Prove that f(x) = 1. f is continuous on the irrationals.

More information

OPTIMAL POINTWISE ADAPTIVE METHODS IN NONPARAMETRIC ESTIMATION 1

OPTIMAL POINTWISE ADAPTIVE METHODS IN NONPARAMETRIC ESTIMATION 1 The Annals of Statistics 1997, Vol. 25, No. 6, 2512 2546 OPTIMAL POINTWISE ADAPTIVE METHODS IN NONPARAMETRIC ESTIMATION 1 By O. V. Lepski and V. G. Spokoiny Humboldt University and Weierstrass Institute

More information

Lecture 17 Brownian motion as a Markov process

Lecture 17 Brownian motion as a Markov process Lecture 17: Brownian motion as a Markov process 1 of 14 Course: Theory of Probability II Term: Spring 2015 Instructor: Gordan Zitkovic Lecture 17 Brownian motion as a Markov process Brownian motion is

More information

Ergodic Theorems. Samy Tindel. Purdue University. Probability Theory 2 - MA 539. Taken from Probability: Theory and examples by R.

Ergodic Theorems. Samy Tindel. Purdue University. Probability Theory 2 - MA 539. Taken from Probability: Theory and examples by R. Ergodic Theorems Samy Tindel Purdue University Probability Theory 2 - MA 539 Taken from Probability: Theory and examples by R. Durrett Samy T. Ergodic theorems Probability Theory 1 / 92 Outline 1 Definitions

More information

Stochastic integration. P.J.C. Spreij

Stochastic integration. P.J.C. Spreij Stochastic integration P.J.C. Spreij this version: April 22, 29 Contents 1 Stochastic processes 1 1.1 General theory............................... 1 1.2 Stopping times...............................

More information

Riemann integral and volume are generalized to unbounded functions and sets. is an admissible set, and its volume is a Riemann integral, 1l E,

Riemann integral and volume are generalized to unbounded functions and sets. is an admissible set, and its volume is a Riemann integral, 1l E, Tel Aviv University, 26 Analysis-III 9 9 Improper integral 9a Introduction....................... 9 9b Positive integrands................... 9c Special functions gamma and beta......... 4 9d Change of

More information

Gaussian, Markov and stationary processes

Gaussian, Markov and stationary processes Gaussian, Markov and stationary processes Gonzalo Mateos Dept. of ECE and Goergen Institute for Data Science University of Rochester gmateosb@ece.rochester.edu http://www.ece.rochester.edu/~gmateosb/ November

More information

Does k-th Moment Exist?

Does k-th Moment Exist? Does k-th Moment Exist? Hitomi, K. 1 and Y. Nishiyama 2 1 Kyoto Institute of Technology, Japan 2 Institute of Economic Research, Kyoto University, Japan Email: hitomi@kit.ac.jp Keywords: Existence of moments,

More information