Probability approximation by Clark-Ocone covariance representation


Probability approximation by Clark-Ocone covariance representation

Nicolas Privault    Giovanni Luca Torrisi

October 19, 2013

Abstract

Based on the Stein method and a general integration by parts framework we derive various bounds on the distance between probability measures. We show that this framework can be implemented on the Poisson space by covariance identities obtained from the Clark-Ocone representation formula and derivation operators. Our approach avoids the use of the inverse of the Ornstein-Uhlenbeck operator as in the existing literature, and also applies to the Wiener space.

Keywords: Poisson space, Stein-Chen method, Malliavin calculus, Clark-Ocone formula.
Mathematics Subject Classification: 60F05, 60G57, 60H07.

1 Introduction

The Stein and Chen-Stein methods have been applied to derive bounds on distances between probability laws on the Wiener and Poisson spaces, cf. [6], [7] and [8]. The results of these papers rely on covariance representations based on the number (or Ornstein-Uhlenbeck) operator $L$ on multiple Wiener-Poisson stochastic integrals and its inverse $L^{-1}$. In particular the bound

$$d_W(F, N) \le E\big[\,|1 - \langle DF, -DL^{-1}F \rangle|\,\big] \qquad (1.1)$$

has been derived for centered functionals of a standard real-valued Brownian motion in [6], Theorem 3.1. Here $d_W$ is the Wasserstein distance, $N$ is a random variable distributed according to the standard Gaussian law, $D$ is the classical Malliavin gradient and $\langle \cdot, \cdot \rangle$ is the

Division of Mathematical Sciences, Nanyang Technological University, SPMS-MAS-05-43, 21 Nanyang Link, Singapore. nprivault@ntu.edu.sg
Istituto per le Applicazioni del Calcolo "Mauro Picone", CNR, Via dei Taurini 19, 00185 Roma, Italy. torrisi@iac.rm.cnr.it

usual inner product on $L^2(\mathbb{R}_+, \mathcal{B}(\mathbb{R}_+), \ell)$, with $\ell$ the Lebesgue measure. Although the Ornstein-Uhlenbeck operator $L$ has nice contractivity properties as well as an integral representation, it can be difficult to compute in practice as its eigenspaces are made of multiple stochastic integrals. Thus, although the Ornstein-Uhlenbeck operator applies particularly well to functionals based on multiple stochastic integrals, it is of a more delicate use in applications to functionals whose multiple stochastic integral expansion is not explicitly known. This is due to the fact that the operator $L$ is expressed as the composition of a divergence and a gradient operator, on both the Poisson and Wiener spaces.

In this paper we derive bounds on distances between probability laws using covariance representations based on the Clark-Ocone representation formula. In contrast with covariance identities based on the number operator, which rely on the divergence-gradient composition, the Clark-Ocone formula only requires the computation of a gradient and a conditional expectation. In particular, in Corollary 3.4 below we show that (1.1) can be replaced by

$$d_W(F, N) \le E\big[\,|1 - \langle DF, E[DF \mid \mathcal{F}_\cdot] \rangle|\,\big], \qquad (1.2)$$

where $F$ is a functional of a normal martingale such that $E[F] = 0$. Here, $D$ denotes a Malliavin type gradient operator having the chain rule of derivation, and $(\mathcal{F}_t)_{t \ge 0}$ is the natural filtration of the normal martingale. In case $D$ is the classical Malliavin gradient on the Wiener space, the bound (1.2) offers an alternative to (1.1). For example, if $F = I_n(f_n)$ is a multiple stochastic integral with respect to the Brownian motion and the symmetric kernels $f_n$ satisfy certain integrability conditions, the inequality (1.2) gives

$$d_W(I_n(f_n), N) \le \big|1 - n!\,\|f_n\|_{L^2(\mathbb{R}_+^n)}^2\big| + n^2 \sum_{k=0}^{n-2} (k!)^2 \binom{n-1}{k}^4 ((n-1)-k)! \int_0^\infty\!\!\int_0^\infty \big| \langle g_{n-1,k}^s,\ g_{n-1,k}^t \rangle_{L^2(\mathbb{R}_+^{2(n-1-k)})} \big|\, ds\, dt \qquad (1.3)$$

obtained by the multiplication formula for multiple Wiener integrals, where

$$g_{n-1,k}^t(\cdot) = \big(f_n(\cdot, t)\big) \circ_k \big(f_n(\cdot, t) 1_{[0,t]^{n-1}}(\cdot)\big), \qquad t \in \mathbb{R}_+,$$

and the symbol $\circ_k$ denotes the canonical symmetrization of the $L^2$ contraction over $k$ variables, denoted by $\otimes_k$. On the other hand, by Proposition 3.2 of [6] the inequality (1.1)

yields

$$d_W(I_n(f_n), N) \le \big|1 - n!\,\|f_n\|_{L^2(\mathbb{R}_+^n)}^2\big| + n^2 \sum_{k=0}^{n-2} (k!)^2 \binom{n-1}{k}^4 ((n-1)-k)!\, \|f_n \otimes_{k+1} f_n\|_{L^2(\mathbb{R}_+^{2(n-k-1)})}^2.$$

However, due to its importance, the Wiener case will be the object of a more detailed analysis in a subsequent work. Here our focus will be on the Poisson space, for which (1.2) provides an alternative to Theorem 3.1 of [7]. Several applications are considered in Section 4. This includes functionals of Poisson jump times $(T_k)_{k \ge 1}$ of the form $f(T_k)$, for which we obtain the bound

$$d_W(f(T_k), N) \le \Big\| 1 - f'(T_k) \int_0^{T_k} E[f'(T_{k-h} + t)]_{h = N_t}\, dt \Big\|_{L^1(P)},$$

cf. Proposition 4.1, and a similar result for the gamma approximation, with linear and quadratic functionals of the Poisson jump times as examples. The analogs of (1.3) for Poisson multiple stochastic integrals are treated in Proposition 4.3, and comparisons with the results of [7] are discussed.

This paper is organized as follows. In Section 2 we present a general framework for bounds on probability distances based on an abstract integration by parts formula. Next, in Section 3 we show that the conditions of this integration by parts setting can be satisfied under the existence of a Clark-Ocone type stochastic representation formula. In Section 4 we apply this general setting to a Clark-Ocone formula stated with a derivation operator on the Poisson space, and consider several examples, including multiple stochastic integrals and other functionals of jump times. In Section 5 we consider the total variation distance between a normalized compound Poisson sum and the standard Gaussian distribution. We close this section by quoting Stein's lemmas for normal and gamma approximations.

The following lemma on normal approximation can be traced back to Stein's contribution [1], see also the recent survey [5], and [6].

Lemma 1.1 Let $h : \mathbb{R} \to [0, 1]$ be a continuous function. The functional equation

$$f'(x) = x f(x) + h(x) - E[h(N)], \qquad x \in \mathbb{R},$$

has a solution $f_h \in C_b^1(\mathbb{R})$ given by

$$f_h(x) = e^{x^2/2} \int_{-\infty}^x \big(h(a) - E[h(N)]\big) e^{-a^2/2}\, da,$$

with the bounds $|f_h(x)| \le \sqrt{\pi/2}$ and $|f_h'(x)| \le 2$, $x \in \mathbb{R}$.

The next lemma on the gamma approximation can be found in e.g. Lemma 1.3-(ii) of [6]. In the sequel we denote by $\Gamma(\nu/2)$ a random variable distributed according to the gamma law with parameters $(\nu/2, 1)$, $\nu > 0$.

Lemma 1.2 Let $h : \mathbb{R} \to \mathbb{R}$ be a twice differentiable function such that $|h(x)| \le c e^{ax}$, $x > -\nu$, for some $c > 0$ and $a < 1/2$. Then, letting $\Gamma_\nu := 2\Gamma(\nu/2) - \nu$, the functional equation

$$2(x + \nu) f'(x) = x f(x) + h(x) - E[h(\Gamma_\nu)], \qquad x > -\nu, \qquad (1.4)$$

has a solution $f_h$ which is bounded and differentiable on $(-\nu, \infty)$, and such that $\|f_h\|_\infty \le \|h'\|_\infty$ and $\|f_h'\|_\infty \le \|h''\|_\infty$.

2 General results

2.1 Integration by parts

The main results of this paper will be derived under the abstract integration by parts (IBP) formula (2.1) below. Let $\mathcal{T}$ denote a subset of $C^1(\mathbb{R})$ containing the constant functions. Given $F$ and $G$ two real-valued random variables defined on a probability space $(\Omega, \mathcal{F}, P)$ and $A \in \mathcal{F}$ an event with $P(A) > 0$, we let

$$\mathrm{Cov}_A(F, G) := E[F G \mid A] - E[F \mid A]\, E[G \mid A]$$

denote the covariance of $F$ and $G$ given $A$. The following general Assumption 2.1 says that the integration by parts formula with weights $W_1$ and $W_2$ holds for a random variable $F$ given $A$ on $\mathcal{T}$.

Assumption 2.1 Given $F$ a random variable, we assume that there exist two real-valued random variables $W_1 \in L^1(P(\cdot \mid A))$ and $W_2$ such that

$$E[W_2\, \phi'(F) \mid A] = \mathrm{Cov}_A(\phi(F), W_1), \qquad (2.1)$$

for any $\phi \in \mathcal{T}$ such that $\phi(F)$, $W_1 \phi(F)$, and $W_2 \phi'(F) \in L^1(P(\cdot \mid A))$.
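As a quick numerical sanity check of Assumption 2.1 (an illustration, not taken from the paper: the choices of $F$ and $\phi$ below are ours), note that for a standard Gaussian $F$ the classical Stein identity $E[\phi'(F)] = \mathrm{Cov}(\phi(F), F)$ is exactly the IBP formula (2.1) with $A = \Omega$, $W_1 = F$ and $W_2 = 1$. A minimal Monte Carlo sketch with $\phi = \tanh$:

```python
import numpy as np

# Monte Carlo check of the IBP formula (2.1) in its simplest instance:
# F standard Gaussian, A = Omega, W1 = F, W2 = 1 (Stein's identity).
rng = np.random.default_rng(0)
F = rng.standard_normal(1_000_000)

phi = np.tanh                                # a bounded C^1 test function in T
dphi = lambda x: 1.0 / np.cosh(x) ** 2       # phi'

lhs = dphi(F).mean()                         # E[W2 * phi'(F)]
rhs = np.cov(phi(F), F)[0, 1]                # Cov(phi(F), W1)
assert abs(lhs - rhs) < 1e-2                 # equality up to Monte Carlo error
```

With $10^6$ samples the two sides agree to about three decimal places, which is the expected Monte Carlo accuracy.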

In particular, we note that if the IBP formula (2.1) with weights $W_1$ and $W_2$ holds on $\mathcal{T}$ for the random variable $F$ given $A$, then $W_1$ is centered with respect to $P(\cdot \mid A)$ if and only if we have

$$E[W_1 \phi(F) \mid A] = E[W_2 \phi'(F) \mid A], \qquad \phi \in \mathcal{T},$$

as follows by taking $\phi = 1$ identically. An implementation of this formula on the Poisson space will be provided in Section 4 via the Clark-Ocone representation formula.

2.2 Normal approximation

Total variation distance

The total variation distance between two real-valued random variables $Z_1$ and $Z_2$ with laws $P_{Z_1}$ and $P_{Z_2}$ is defined by

$$d_{TV}(Z_1, Z_2) := \sup_{C \in \mathcal{B}(\mathbb{R})} |P_{Z_1}(C) - P_{Z_2}(C)| = \sup_{C \in \mathcal{B}_b(\mathbb{R})} |P_{Z_1}(C) - P_{Z_2}(C)|,$$

where $\mathcal{B}(\mathbb{R})$ and $\mathcal{B}_b(\mathbb{R})$ stand for the families of Borel and bounded Borel subsets of $\mathbb{R}$, respectively. The following bounds on the total variation distance $d_{TV}(F \mid A,\, N)$ between the law of $F$ given $A$ and the law of $N$ hold under Assumption 2.1.

Theorem 2.1 Let $A \in \mathcal{F}$ be such that $P(A) > 0$ and assume that the IBP formula (2.1) holds for $F$ given $A$ on $\mathcal{T} = C_b^1(\mathbb{R})$. Then:

1. If $W_2 = 1$ and $W_1$ is $P(\cdot \mid A)$-centered we have

$$d_{TV}(F \mid A,\, N) \le \sqrt{\frac{\pi}{2}}\, E[\,|W_1 - F| \mid A]. \qquad (2.2)$$

2. If $W_1 = F$ is $P(\cdot \mid A)$-centered we have

$$d_{TV}(F \mid A,\, N) \le 2\, E[\,|1 - W_2| \mid A]. \qquad (2.3)$$

Proof. (1) Take $C \in \mathcal{B}_b(\mathbb{R})$ and let $a > 0$ be such that $C \subset [-a, a]$. Consider a sequence of continuous functions $h_n : \mathbb{R} \to [0, 1]$, $n \ge 1$, such that $\lim_n h_n(x) = 1_C(x)$, $\mu$-a.e., where $\mu(dx) = (dx + P_{F \mid A}(dx))|_{[-a, a]}$ (restriction to $[-a, a]$ of the sum of the Lebesgue measure and the law of $F$ given $A$), cf. [11], or Corollary 1.1 of [ ]. Lemma 1.1 and the integration by parts formula (2.1) show that for any $n \ge 1$ we have

$$E[h_n(F) 1_A] - E[h_n(N)]\, P(A) = E\big[(f_{h_n}'(F) - F f_{h_n}(F)) 1_A\big] \qquad (2.4)$$

$$= E\big[f_{h_n}(F)(W_1 - F) 1_A\big] \le \sqrt{\frac{\pi}{2}}\, E[\,|W_1 - F|\, 1_A].$$

Dividing first this inequality by $P(A) > 0$ and then taking the limit as $n$ goes to infinity, the Dominated Convergence Theorem shows that

$$|P(F \in C \mid A) - P(N \in C)| \le \sqrt{\frac{\pi}{2}}\, E[\,|W_1 - F| \mid A],$$

for any $C \in \mathcal{B}_b(\mathbb{R})$. The claim follows taking the supremum over all bounded Borel sets.

(2) By (2.4) and the integration by parts formula, for any $n \ge 1$, we have

$$|E[h_n(F) 1_A] - E[h_n(N)]\, P(A)| = |E[f_{h_n}'(F)(1 - W_2) 1_A]| \le 2\, E[\,|1 - W_2|\, 1_A].$$

The claim follows arguing exactly as in case (1) above.

Wasserstein distance

The Wasserstein distance between the laws of $Z_1$ and $Z_2$ is defined by

$$d_W(Z_1, Z_2) := \sup_{h \in \mathrm{Lip}(1)} |E[h(Z_1)] - E[h(Z_2)]|,$$

where $\mathrm{Lip}(1)$ denotes the class of real-valued Lipschitz functions with Lipschitz constant less than or equal to 1. We have the following upper bound for the Wasserstein distance between a centered random variable $F$ and $N$.

Theorem 2.2 Assume that the IBP formula (2.1) holds for $F$ given $A$ with $W_1 = F$, on the space $\mathcal{T}$ of twice differentiable functions whose first derivative is bounded by 1 and whose second derivative is bounded by 2. Then, provided $F$ is $P(\cdot \mid A)$-centered, we have

$$d_W(F \mid A,\, N) \le E[\,|1 - W_2| \mid A]. \qquad (2.5)$$

Proof. Using the bound (2.33) in [7] and the IBP formula (2.1), we have

$$d_W(F \mid A,\, N) \le \sup_{\phi \in \mathcal{T}} |E[\phi'(F) - F \phi(F) \mid A]| = \sup_{\phi \in \mathcal{T}} |E[\phi'(F)(1 - W_2) \mid A]| \le E[\,|1 - W_2| \mid A].$$
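The hypothesis of Theorem 2.2 can be realized beyond the Gaussian case through what is commonly called a Stein kernel (a standard fact, not a construction from this paper): if a centered random variable $F$ has a density $p$, then (2.1) holds with $A = \Omega$, $W_1 = F$ and $W_2 = \tau(F)$, where $\tau(x) = p(x)^{-1} \int_x^\infty y\, p(y)\, dy$. For $F$ uniform on $[-\sqrt{3}, \sqrt{3}]$ (unit variance) one gets $\tau(x) = (3 - x^2)/2$, so (2.5) bounds $d_W(F, N)$ by $E|F^2 - 1|/2$. The following illustrative sketch compares this bound with an empirical Wasserstein distance obtained by quantile coupling:

```python
import numpy as np

# Illustration of Theorem 2.2 with a Stein kernel W2 = tau(F) = (3 - F^2)/2
# for F uniform on [-sqrt(3), sqrt(3)]; the bound (2.5) is E|1 - tau(F)|.
rng = np.random.default_rng(1)
n = 1_000_000
F = rng.uniform(-np.sqrt(3.0), np.sqrt(3.0), n)

bound = np.abs(F ** 2 - 1.0).mean() / 2.0    # E|1 - W2|, close to 2/(3*sqrt(3))

# empirical Wasserstein-1 distance via the comonotone (quantile) coupling
Z = rng.standard_normal(n)
dW = np.abs(np.sort(F) - np.sort(Z)).mean()

assert dW <= bound                           # the bound (2.5) holds here
```

The exact value of the bound is $E|F^2 - 1|/2 = 2/(3\sqrt{3}) \approx 0.385$, while the empirical distance is much smaller, showing that (2.5) need not be tight.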

2.3 Gamma approximation

Here we use the distance

$$d_\mathcal{H}(Z_1, Z_2) := \sup_{h \in \mathcal{H}} |E[h(Z_1)] - E[h(Z_2)]|, \qquad (2.6)$$

where $\mathcal{H} := \{h \in C_b^2(\mathbb{R}) : \max\{\|h\|_\infty, \|h'\|_\infty, \|h''\|_\infty\} \le 1\}$. The following upper bound for the $d_\mathcal{H}$-distance between the centered random variable $F$ given $A$ and a centered gamma random variable holds under the IBP formula (2.1) of Assumption 2.1.

Theorem 2.3 Let $F$ be a $P(\cdot \mid A)$-centered, a.s. $(-\nu, \infty)$-valued random variable. Given $A \in \mathcal{F}$ such that $P(A) > 0$, assume that the IBP formula (2.1) holds for $F$ given $A$ on $\mathcal{T} = C_b^1(\mathbb{R})$ with $W_1 = F$. Then we have

$$d_\mathcal{H}(F \mid A,\, \Gamma_\nu) \le E[\,|2(F + \nu) - W_2| \mid A], \qquad (2.7)$$

where the random variable $\Gamma_\nu$ is defined in Lemma 1.2.

Proof. Let $h \in \mathcal{H}$ be arbitrarily fixed. Since $h$ is bounded above by 1, there exist $c > 0$ and $a < 1/2$ such that $|h(x)| \le c e^{ax}$, $x > -\nu$ (take $c > 1$ and $0 < a < 1/2$ so small that $1 < c e^{-a\nu}$). Let $f_h$ be the solution of (1.4), whose existence is guaranteed by Lemma 1.2. By the IBP formula (2.1) on $C_b^1(\mathbb{R})$ for the centered random variable $F$ given $A$ with $W_1 = F$, we have

$$|E[h(F) 1_A] - E[h(\Gamma_\nu)]\, P(A)| = |E[(2(F + \nu) f_h'(F) - F f_h(F)) 1_A]| = |E[(2(F + \nu) f_h'(F) - W_2 f_h'(F)) 1_A]| \le \|h''\|_\infty\, E[\,|2(F + \nu) - W_2|\, 1_A].$$

The claim follows by dividing the above inequality by $P(A) > 0$ and then taking the supremum over all functions $h \in \mathcal{H}$.

3 Integration by parts via the Clark-Ocone formula

In this section we consider an implementation of the IBP formula (2.1) of Assumption 2.1, based on the Clark-Ocone formula for a real-valued normal martingale $(M_t)_{t \ge 0}$ defined on a

probability space $(\Omega, \mathcal{F}, P)$, generating a right-continuous filtration $(\mathcal{F}_t)_{t \ge 0}$. In other words, $(M_t)_{t \ge 0}$ is a square integrable martingale with respect to the natural filtration $\mathcal{F}_t = \sigma(M_s : s \le t)$, such that

$$E[(M_t - M_s)^2 \mid \mathcal{F}_s] = t - s, \qquad 0 \le s < t,$$

and the filtration is right-continuous. Let $\ell$ be the Lebesgue measure on $\mathbb{R}_+$. In this section we assume the existence of a gradient operator

$$D : \mathrm{Dom}(D) \subset L^2(\Omega, \mathcal{F}, P) \to L^2(\Omega \times \mathbb{R}_+,\ \mathcal{F} \otimes \mathcal{B}(\mathbb{R}_+),\ P \otimes \ell)$$

with domain $\mathrm{Dom}(D)$, defined by $DF = (D_t F)_{t \ge 0}$ and satisfying the following properties:

(i) $D$ satisfies the Clark-Ocone representation formula

$$F = E[F] + \int_0^\infty E[D_t F \mid \mathcal{F}_t]\, dM_t, \qquad F \in \mathrm{Dom}(D), \qquad (3.1)$$

(ii) $D$ satisfies the chain rule of derivation

$$D_t \phi(F) = \phi'(F)\, D_t F, \qquad F \in \mathrm{Dom}(D), \qquad (3.2)$$

for all $\phi \in \mathcal{T} \subset C^1(\mathbb{R})$, cf. e.g. Section 3.6 of [1] (here $\mathcal{T}$ contains the constant functions). This condition will be satisfied in both the Wiener and Poisson settings of Section 4. In addition we will assume that for any $F \in \mathrm{Dom}(D)$ and $\phi \in \mathcal{T}$ we have $\phi(F) \in \mathrm{Dom}(D)$.

From (3.1) the gradient operator $D$ satisfies the following covariance identity, cf. e.g. [1, p. 11].

Lemma 3.1 For any $F, G \in \mathrm{Dom}(D)$ we have

$$\mathrm{Cov}(F, G) = E\Big[\int_0^\infty E[D_t F \mid \mathcal{F}_t]\, D_t G\, dt\Big]. \qquad (3.3)$$

3.1 Integration by parts

We now implement the IBP formula (2.1) for functionals in the domain of $D$, based on the Clark-Ocone representation formula (3.1). Note that IBP formulas of the form (2.1) can also be obtained by the Ornstein-Uhlenbeck semigroup, cf. e.g. Proposition 2.1 of [3].

Proposition 3.2 If $F, G \in \mathrm{Dom}(D)$ and $\phi'(F)\, \varphi_{F,G}(F) \in L^1(P)$ for any $\phi \in \mathcal{T}$, then the IBP formula (2.1) holds on $\mathcal{T}$ for $F$ with $W_1 = G$ and $W_2 = \varphi_{F,G}(F)$, i.e.

$$E[\varphi_{F,G}(F)\, \phi'(F)] = \mathrm{Cov}(\phi(F), G),$$

where $\varphi_{F,G}$ is the function

$$\varphi_{F,G}(z) := E\Big[\int_0^\infty D_t F\, E[D_t G \mid \mathcal{F}_t]\, dt \,\Big|\, F = z\Big], \qquad z \in \mathbb{R}. \qquad (3.4)$$

Proof. By Lemma 3.1 and the properties of the gradient operator, for any $\phi \in \mathcal{T}$ and $F, G \in \mathrm{Dom}(D)$, we have

$$\mathrm{Cov}(\phi(F), G) = E\Big[\int_0^\infty E[D_t G \mid \mathcal{F}_t]\, D_t \phi(F)\, dt\Big] = E\Big[\phi'(F) \int_0^\infty D_t F\, E[D_t G \mid \mathcal{F}_t]\, dt\Big] = E\Big[E\Big[\phi'(F) \int_0^\infty D_t F\, E[D_t G \mid \mathcal{F}_t]\, dt \,\Big|\, F\Big]\Big] = E[\phi'(F)\, \varphi_{F,G}(F)]. \qquad (3.5)$$

3.2 Normal and gamma approximation

We now apply Theorems 2.1 and 2.2 using the Clark-Ocone formula (3.3). For any $F \in \mathrm{Dom}(D)$, we define

$$\varphi_F(z) := \varphi_{F,F}(z) = E\Big[\int_0^\infty D_t F\, E[D_t F \mid \mathcal{F}_t]\, dt \,\Big|\, F = z\Big], \qquad z \in \mathbb{R}, \qquad (3.6)$$

and note that by Jensen's inequality

$$\|\varphi_F(F)\|_{L^1(P)} \le \|DF\|_{L^2(P \otimes \ell)}^2 = E\Big[\int_0^\infty |D_t F|^2\, dt\Big] < \infty, \qquad F \in \mathrm{Dom}(D).$$

The next proposition follows as a simple consequence of Theorems 2.1, 2.2, 2.3 and Proposition 3.2, and uses the definition (2.6) of the distance $d_\mathcal{H}$.

Proposition 3.3 For any $F \in \mathrm{Dom}(D)$ such that $E[F] = 0$, we have

$$d_{TV}(F, N) \le 2\, E[\,|1 - \varphi_F(F)|\,] \quad \text{and} \quad d_W(F, N) \le E[\,|1 - \varphi_F(F)|\,],$$

where $\varphi_F$ is defined in (3.6). If moreover $F$ is a.s. $(-\nu, \infty)$-valued then we have

$$d_\mathcal{H}(F, \Gamma_\nu) \le E[\,|2(F + \nu) - \varphi_F(F)|\,].$$
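The covariance identity (3.3) behind Proposition 3.3 can be checked numerically in the Wiener setting mentioned above (this simulation is an illustration of ours, not part of the paper's argument). For $F = G = B_1^2$, with $B$ a standard Brownian motion, one has $D_t F = 2 B_1 1_{[0,1]}(t)$ and $E[D_t F \mid \mathcal{F}_t] = 2 B_t$, so (3.3) reads $\mathrm{Var}(B_1^2) = E[\int_0^1 (2 B_t)(2 B_1)\, dt] = 2$:

```python
import numpy as np

# Monte Carlo check of the covariance identity (3.3) on the Wiener space
# for F = G = B_1^2: both sides should equal Var(B_1^2) = 2.
rng = np.random.default_rng(2)
n_paths, n_steps = 200_000, 50
dt = 1.0 / n_steps
dB = rng.standard_normal((n_paths, n_steps)) * np.sqrt(dt)
B = np.cumsum(dB, axis=1)            # discretized Brownian paths on [0, 1]
B1 = B[:, -1]

lhs = np.var(B1 ** 2)                                    # Cov(F, G)
rhs = (4.0 * B * B1[:, None]).sum(axis=1).mean() * dt    # E[int E[D_tF|F_t] D_tG dt]

assert abs(lhs - 2.0) < 0.15
assert abs(rhs - 2.0) < 0.15
```

The small residual discrepancy in the second estimate comes from the right-endpoint Riemann sum used for the time integral.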

Letting $\langle \cdot, \cdot \rangle$ denote the usual inner product on $L^2(\mathbb{R}_+)$, from Proposition 3.3 we also have the following corollary:

Corollary 3.4 For any $F \in \mathrm{Dom}(D)$ such that $E[F] = 0$, we have

$$d_{TV}(F, N) \le 2\, \big\| 1 - \langle DF, E[DF \mid \mathcal{F}_\cdot] \rangle \big\|_{L^1(P)} \le 2\, \big|1 - \|F\|_{L^2(P)}^2\big| + 2\, \big\| \langle DF, E[DF \mid \mathcal{F}_\cdot] \rangle - E[\langle DF, E[DF \mid \mathcal{F}_\cdot] \rangle] \big\|_{L^2(P)}$$

and

$$d_W(F, N) \le \big\| 1 - \langle DF, E[DF \mid \mathcal{F}_\cdot] \rangle \big\|_{L^1(P)} \le \big|1 - \|F\|_{L^2(P)}^2\big| + \big\| \langle DF, E[DF \mid \mathcal{F}_\cdot] \rangle - E[\langle DF, E[DF \mid \mathcal{F}_\cdot] \rangle] \big\|_{L^2(P)}.$$

If moreover $F$ is a.s. $(-\nu, \infty)$-valued then we have

$$d_\mathcal{H}(F, \Gamma_\nu) \le \big\| 2(F + \nu) - \langle DF, E[DF \mid \mathcal{F}_\cdot] \rangle \big\|_{L^1(P)} \le \big\| 2(F + \nu) - \|F\|_{L^2(P)}^2 \big\|_{L^2(P)} + \big\| \langle DF, E[DF \mid \mathcal{F}_\cdot] \rangle - E[\langle DF, E[DF \mid \mathcal{F}_\cdot] \rangle] \big\|_{L^2(P)}.$$

Proof. The first inequality follows by Proposition 3.3 and the Cauchy-Schwarz inequality. The second inequality follows by the triangle inequality, noticing that by the Itô isometry and the Clark-Ocone formula we have

$$E[\langle DF, E[DF \mid \mathcal{F}_\cdot] \rangle] = E\Big[\int_0^\infty D_t F\, E[D_t F \mid \mathcal{F}_t]\, dt\Big] = E\Big[\int_0^\infty \big(E[D_t F \mid \mathcal{F}_t]\big)^2\, dt\Big] = E\Big[\Big(\int_0^\infty E[D_t F \mid \mathcal{F}_t]\, dM_t\Big)^2\Big] = \|F\|_{L^2(P)}^2.$$

The counterpart of this statement for the Wasserstein and $d_\mathcal{H}$ distances is proved similarly.

In this work our main focus will be on the Poisson space, and in Section 4 we shall compare the upper bound on the Wasserstein distance with the bound obtained in [7] on the Poisson space.
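In the simplest Wiener-space instance of Corollary 3.4, $F = I_1(f)$ with $f$ deterministic, one has $D_t F = f(t) = E[D_t F \mid \mathcal{F}_t]$, so $\langle DF, E[DF \mid \mathcal{F}_\cdot] \rangle = \|f\|_{L^2(\mathbb{R}_+)}^2$ is deterministic and the $d_W$ bound reduces to $|1 - \|f\|^2|$. Since $F \sim N(0, \sigma^2)$ with $\sigma = \|f\|$, the exact Wasserstein distance $\sqrt{2/\pi}\,|\sigma - 1|$ (a standard fact about centered one-dimensional Gaussians under the quantile coupling, not computed in the paper) is available for comparison:

```python
import math

# Compare the Corollary 3.4 bound |1 - sigma^2| for F = I_1(f) ~ N(0, sigma^2)
# with the exact Wasserstein distance sqrt(2/pi) * |sigma - 1| to N(0, 1).
for sigma2 in (0.5, 0.9, 1.1, 2.0):
    bound = abs(1.0 - sigma2)
    exact = math.sqrt(2.0 / math.pi) * abs(math.sqrt(sigma2) - 1.0)
    assert exact <= bound        # |1 - s^2| = |1 - s|(1 + s) >= sqrt(2/pi)|1 - s|
```

The comparison is exact here because $|1 - \sigma^2| = |1 - \sigma|(1 + \sigma) \ge \sqrt{2/\pi}\,|1 - \sigma|$ for all $\sigma \ge 0$.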

4 Analysis on the Poisson space

In this section we apply the results of Section 3 to functionals of a standard Poisson process $(N_t)_{t \ge 0}$ with jump times $(T_k)_{k \ge 1}$ defined on an underlying probability space $(\Omega, \mathcal{F}, P)$. We let

$$\mathcal{S} = \{F = f(T_1, \ldots, T_d) : d \ge 1,\ f \in C_p^1(\mathbb{R}_+^d)\},$$

where $C_p^1(\mathbb{R}_+^d)$ denotes the space of continuously differentiable functions such that $f$ and its partial derivatives have polynomial growth, i.e. for any $i \in \{0, 1, \ldots, d\}$ there exist $\alpha_j^{(i)}$, $j = 1, \ldots, d$, such that

$$\sup_{(x_1, \ldots, x_d) \in \mathbb{R}_+^d} \frac{|\partial_i f(x_1, \ldots, x_d)|}{1 + x_1^{\alpha_1^{(i)}} \cdots x_d^{\alpha_d^{(i)}}} < \infty,$$

where $\partial_0 f := f$. Given $F = f(T_1, \ldots, T_d) \in \mathcal{S}$, we consider the gradient on the Poisson space defined as

$$D_t F = -\sum_{k=1}^d 1_{[0, T_k]}(t)\, \partial_k f(T_1, \ldots, T_d), \qquad t \ge 0 \qquad (4.1)$$

(see e.g. Definition 7.2.1 in [1], p. 56). We recall that the gradient operator $D : \mathcal{S} \subset L^2(\Omega, \mathcal{F}, P) \to L^2(\Omega \times \mathbb{R}_+, \mathcal{F} \otimes \mathcal{B}(\mathbb{R}_+), P \otimes \ell)$ is closable (see [1], p. 59). We shall continue to denote by $D$ its minimal closed extension, whose domain $\mathrm{Dom}(D)$ coincides with the completion of $\mathcal{S}$ with respect to the norm

$$\|F\|_{1,2} = \|F\|_{L^2(P)} + \|DF\|_{L^2(P \otimes \ell)}.$$

By Proposition 7.2.8 in [1] the operator $D$ satisfies the Clark-Ocone representation formula, i.e. for any $F \in \mathrm{Dom}(D)$ we have

$$F = E[F] + \int_0^\infty E[D_t F \mid \mathcal{F}_t]\, (dN_t - dt), \qquad (4.2)$$

where $(\mathcal{F}_t)_{t \ge 0}$ is the filtration generated by $(N_t)_{t \ge 0}$. We note that the gradient $D$ satisfies the chain rule on the set $\mathcal{T}$ of real-valued functions which have polynomial growth and are continuously differentiable with bounded derivative, i.e. for any $g \in \mathcal{T}$ and $F \in \mathrm{Dom}(D)$ we have $g(F) \in \mathrm{Dom}(D)$ and $Dg(F) = g'(F) DF$, cf. Lemma 6.1 in the Appendix.

Before turning to some concrete examples of Poisson functionals we note that identifying the process $E[D_t F \mid \mathcal{F}_t]$ in (4.2) amounts to finding the predictable representation of the random

variable $F$. For example, if $F = X_T$ is the terminal value of the solution $(X_t)_{t \in [0, T]}$ to the stochastic differential equation

$$X_t = X_0 + \int_0^t \sigma(s, X_s)\, (dN_s - ds), \qquad (4.3)$$

where $\sigma : [0, T] \times \Omega \to \mathbb{R}$ is a measurable function, then we immediately have $E[D_t F \mid \mathcal{F}_t] = \sigma(t, X_t)$ and Corollary 3.4 shows that e.g.

$$d_{TV}(X_T, N) \le 2\Big\| 1 - \int_0^T \sigma(t, X_t)\, D_t X_T\, dt \Big\|_{L^1(P)} \le 2\, \big|1 - \|X_T\|_{L^2(P)}^2\big| + 2\Big\| \int_0^T \big(\sigma(t, X_t) D_t X_T - E[\sigma(t, X_t) D_t X_T]\big)\, dt \Big\|_{L^2(P)}$$

provided the terminal value $X_T$ belongs to $\mathrm{Dom}(D)$ and $E[X_T] = 0$. In particular, the domain condition can be achieved under a usual Lipschitz condition on $\sigma(\cdot, x)$, $x \in \mathbb{R}$, and a usual sub-linear growth condition on $\sigma(t, \cdot)$, $t \in [0, T]$. We refer the reader to e.g. Proposition 3.2 of [4] for an explicit solution of (4.3) which is suitable for $D$-differentiation when $\sigma(t, x)$ vanishes at $t = T$, for any $x \in \mathbb{R}$.

4.1 Approximation of Poisson jump times functionals

Proposition 4.1 Let $f \in C_p^1(\mathbb{R}_+)$ be such that $E[f(T_k)] = 0$, $k \ge 1$. Then

$$d_{TV}(f(T_k), N) \le 2\Big\| 1 - f'(T_k) \int_0^{T_k} E[f'(T_{k-h} + t)]_{h = N_t}\, dt \Big\|_{L^1(P)}, \qquad (4.4)$$

and

$$d_W(f(T_k), N) \le \Big\| 1 - f'(T_k) \int_0^{T_k} E[f'(T_{k-h} + t)]_{h = N_t}\, dt \Big\|_{L^1(P)}. \qquad (4.5)$$

If moreover $f(T_k) > -\nu$ a.s. we have

$$d_\mathcal{H}(f(T_k), \Gamma_\nu) \le \Big\| 2(f(T_k) + \nu) - f'(T_k) \int_0^{T_k} E[f'(T_{k-h} + t)]_{h = N_t}\, dt \Big\|_{L^1(P)}. \qquad (4.6)$$

Proof. We have $f(T_k) \in \mathrm{Dom}(D)$ and

$$D_t f(T_k) = f'(T_k)\, D_t T_k = -f'(T_k)\, 1_{[0, T_k]}(t), \qquad t \ge 0.$$

By the formula in [1], p. 61, we have

$$E[D_t f(T_k) \mid \mathcal{F}_t] = -\int_t^\infty f'(x)\, p_{k-1-N_t}(x - t)\, dx,$$

where $p_k(t) = P(N_t = k)$. So

$$\varphi_{f(T_k)}(f(T_k)) = E\Big[\int_0^\infty D_t f(T_k)\, E[D_t f(T_k) \mid \mathcal{F}_t]\, dt \,\Big|\, f(T_k)\Big]$$

$$= E\Big[f'(T_k) \int_0^{T_k} \int_t^\infty f'(x)\, p_{k-1-N_t}(x - t)\, dx\, dt \,\Big|\, f(T_k)\Big] = E\Big[f'(T_k) \int_0^{T_k} \int_0^\infty f'(x + t)\, p_{k-1-N_t}(x)\, dx\, dt \,\Big|\, f(T_k)\Big] = E\Big[f'(T_k) \int_0^{T_k} E[f'(T_{k-h} + t)]_{h = N_t}\, dt \,\Big|\, f(T_k)\Big]. \qquad (4.7)$$

Finally, by Proposition 3.3 we deduce

$$d_{TV}(f(T_k), N) \le 2\, E[\,|1 - \varphi_{f(T_k)}(f(T_k))|\,] \le 2\Big\| 1 - f'(T_k) \int_0^{T_k} E[f'(T_{k-h} + t)]_{h = N_t}\, dt \Big\|_{L^1(P)}.$$

The inequalities concerning $d_W$ and $d_\mathcal{H}$ can be proved similarly.

Example - Linear Poisson jump times functionals

Proposition 4.1 can be applied to linear functionals of Poisson jump times. Consider first the normal approximation. Take e.g. $f(x) = (x - k)/\sqrt{k}$, i.e. $f(T_k) = (T_k - k)/\sqrt{k}$, $k \ge 1$, and note that $T_k / k$ is gamma distributed with mean 1 and variance $1/k$. All hypotheses of Proposition 4.1 are satisfied and we have

$$\Big\| 1 - f'(T_k) \int_0^{T_k} E[f'(T_{k-h} + t)]_{h = N_t}\, dt \Big\|_{L^1(P)} = \Big\| 1 - \frac{T_k}{k} \Big\|_{L^1(P)} \le \sqrt{\mathrm{Var}(T_k / k)} = \frac{1}{\sqrt{k}},$$

where the latter inequality follows by the Cauchy-Schwarz inequality. So (4.4) and (4.5) recover the classical Berry-Esséen bound.

For the gamma approximation we take e.g. $\nu := 2k$ and $f(x) = 2(x - k)$, $k \ge 1$. In such a case $\Gamma_\nu$ has the same law as $f(T_k)$ and we check that $d_\mathcal{H}(f(T_k), \Gamma_\nu) = 0$. Indeed,

$$\Big\| 2(f(T_k) + \nu) - f'(T_k) \int_0^{T_k} E[f'(T_{k-h} + t)]_{h = N_t}\, dt \Big\|_{L^1(P)} = \| 4 T_k - 4 T_k \|_{L^1(P)} = 0.$$

Example - Quadratic Poisson jump times functionals

Proposition 4.1 can also be applied to quadratic functionals of Poisson jump times. Consider first the normal approximation; take e.g. $f(x) = \alpha x^2 - \beta$, with $\alpha = 1/(2 k^{3/2})$ and $\beta = (k + 1)/(2\sqrt{k})$, $k \ge 1$.

Recall that if $X$ is gamma distributed with parameters $a$ and $b$, then $E[X^k] = (a + k - 1)(a + k - 2) \cdots (a + 1) a / b^k$, $k \ge 1$. One easily sees that all the assumptions of Proposition 4.1 are satisfied, and an explicit upper bound (4.8) on the quantity $\| 1 - f'(T_k) \int_0^{T_k} E[f'(T_{k-h} + t)]_{h = N_t}\, dt \|_{L^1(P)}$ is computed in the Appendix. This upper bound is asymptotically equivalent to a constant multiple of $1/\sqrt{k}$ as $k \to \infty$, and so we recover the Berry-Esséen bound.

4.2 Approximation of multiple Poisson stochastic integrals

We present some applications of Corollary 3.4 to Poisson functionals. For $n \ge 1$, we denote by

$$I_n(f_n) = n! \int_0^\infty \int_0^{t_n^-} \cdots \int_0^{t_2^-} f_n(t_1, \ldots, t_n)\, (dN_{t_1} - dt_1) \cdots (dN_{t_n} - dt_n)$$

the multiple Poisson stochastic integral of the symmetric function $f_n \in L^2(\mathbb{R}_+^n)$, with $I_n(f_n) = I_n(\tilde{f}_n)$ when $f_n$ is not symmetric, where $\tilde{f}_n$ denotes the symmetrization of $f_n$ in $n$ variables (see e.g. Section 6.2 in [1]). As a convention we identify $L^2(\mathbb{R}_+^0)$ with $\mathbb{R}$, and let $I_0(f_0) = f_0$, $f_0 \in L^2(\mathbb{R}_+^0)$. Moreover, we shall adopt the usual convention $\sum_{k=i}^j = 0$ if $i > j$.

Let the space $\mathcal{S}_n^{1,2}$ of weakly differentiable functions be defined as the completion of the symmetric functions $f_n \in C_c^1([0, \infty)^n)$ under the norm

$$\|f_n\|_{1,2}^2 = \|f_n\|_{L^2(\mathbb{R}_+^n)}^2 + \int_0^\infty \int_{\mathbb{R}_+^{n-1}} \int_t^\infty |\partial_1 f_n(s_1, \ldots, s_n)|^2\, ds_1\, dt\, ds_2 \cdots ds_n = \|f_n\|_{L^2(\mathbb{R}_+^n)}^2 + \int_0^\infty \int_{\mathbb{R}_+^n} |\partial_1 f_{n[t]}(s_1, \ldots, s_n)|^2\, ds_1\, dt\, ds_2 \cdots ds_n \qquad (4.9)$$

where $\partial_i f_{n[t]}(s_1, \ldots, s_n) = \partial_i f_n(s_1, \ldots, s_n)\, 1_{[t, \infty)}(s_i)$. The next lemma is proved in the Appendix, cf. Proposition 8 of [9], or [1], page 79.

Lemma 4.2 For any function $f_n \in \mathcal{S}_n^{1,2}$, symmetric in its $n$ variables, we have $I_n(f_n) \in \mathrm{Dom}(D)$ with

$$D_t I_n(f_n) = n I_{n-1}(f_n(\ast, t)) - n I_n(\partial_1 f_{n[t]}), \qquad t \in \mathbb{R}_+, \qquad (4.10)$$

and

$$\|D I_n(f_n)\|_{L^2(P \otimes \ell)}^2 = n^2 (n-1)! \int_{\mathbb{R}_+^n} |f_n(t_1, \ldots, t_n)|^2\, dt_1 \cdots dt_n + n^2\, n! \int_0^\infty \int_{\mathbb{R}_+^n} |\partial_1 f_{n[t]}(t_1, \ldots, t_n)|^2\, dt_1\, dt\, dt_2 \cdots dt_n.$$

We recall the multiplication formula for multiple Poisson stochastic integrals, cf. e.g. [1]. For symmetric functions $f_n \in L^2(\mathbb{R}_+^n)$ and $g_m \in L^2(\mathbb{R}_+^m)$, we define $f_n \otimes_k^l g_m$, $0 \le l \le k$, to be the function

$$(x_{l+1}, \ldots, x_n, y_{k+1}, \ldots, y_m) \mapsto \int_{\mathbb{R}_+^l} f_n(x_1, \ldots, x_n)\, g_m(x_1, \ldots, x_k, y_{k+1}, \ldots, y_m)\, dx_1 \cdots dx_l$$

of $n + m - k - l$ variables. We denote by $f_n \circ_k^l g_m$ the symmetrization in $n + m - k - l$ variables of $f_n \otimes_k^l g_m$, $0 \le l \le k$. Note that if $k = l$ then $\otimes_k := \otimes_k^k$ is the classical $L^2$ contraction over $k$ variables and $\circ_k := \circ_k^k$ is the canonical symmetrization of $\otimes_k$. We have

$$I_n(f_n)\, I_m(g_m) = \sum_{k=0}^{2(n \wedge m)} I_{n+m-k}(h_{n,m,k})$$

if the functions

$$h_{n,m,k} = \sum_{k/2 \le i \le k \wedge n \wedge m} i! \binom{n}{i} \binom{m}{i} \binom{i}{k-i}\, f_n \circ_i^{k-i} g_m$$

belong to $L^2(\mathbb{R}_+^{n+m-k})$, $0 \le k \le 2(n \wedge m)$. In particular, letting $1_{\{(t_1, \ldots, t_n) < t\}}$ denote the function $1_{[0,t]^n}(t_1, \ldots, t_n)$, for any symmetric function $f_n \in \mathcal{S}_n^{1,2}$, we have

$$I_{n-1}(f_n(\ast, t))\, I_{n-1}(f_n(\ast, t) 1_{\{\ast < t\}}) = \sum_{k=0}^{2(n-1)} I_{2(n-1)-k}\big(g_{n-1,n-1,k}^{(1,t)}\big) \qquad (4.11)$$

if the functions

$$g_{n-1,n-1,k}^{(1,t)} = \sum_{k/2 \le i \le k \wedge (n-1)} i! \binom{n-1}{i}^2 \binom{i}{k-i}\, f_n(\ast, t) \circ_i^{k-i} \big(f_n(\ast, t) 1_{\{\ast < t\}}\big) \qquad (4.12)$$

belong to $L^2(\mathbb{R}_+^{2(n-1)-k})$, $0 \le k \le 2(n-1)$, and

$$I_n(\partial_1 f_{n[t]})\, I_{n-1}(f_n(\ast, t) 1_{\{\ast < t\}}) = \sum_{k=0}^{2(n-1)} I_{2n-1-k}\big(g_{n,n-1,k}^{(2,t)}\big) \qquad (4.13)$$

if the functions

$$g_{n,n-1,k}^{(2,t)} = \sum_{k/2 \le i \le k \wedge (n-1)} i! \binom{n}{i} \binom{n-1}{i} \binom{i}{k-i}\, \partial_1 f_{n[t]} \circ_i^{k-i} \big(f_n(\ast, t) 1_{\{\ast < t\}}\big)$$

belong to $L^2(\mathbb{R}_+^{2n-1-k})$, $0 \le k \le 2(n-1)$. Part (2) of the next proposition proposes an alternative to the gamma bound of Theorem 2.6 of [8].

Proposition 4.3 (1) For any symmetric function $f_n \in \mathcal{S}_n^{1,2}$ such that $g_{n-1,n-1,k}^{(1,t)} \in L^2(\mathbb{R}_+^{2(n-1)-k})$, $0 \le k \le 2(n-1)$, and $g_{n,n-1,k}^{(2,t)} \in L^2(\mathbb{R}_+^{2n-1-k})$, $0 \le k \le 2(n-1)$, we have

$$d_{TV}(I_n(f_n), N) \le 2\, \big|1 - n!\,\|f_n\|_{L^2(\mathbb{R}_+^n)}^2\big| + 2 n^2 \Big( \sum_{k=0}^{2n-3} (2n-2-k)! \int_0^\infty\!\!\int_0^\infty \big| \big\langle g_{n-1,n-1,k}^{(1,t)} - g_{n,n-1,k+1}^{(2,t)},\ g_{n-1,n-1,k}^{(1,s)} - g_{n,n-1,k+1}^{(2,s)} \big\rangle_{L^2(\mathbb{R}_+^{2n-2-k})} \big|\, ds\, dt + (2n-1)! \int_0^\infty\!\!\int_0^\infty \big| \big\langle g_{n,n-1,0}^{(2,t)},\ g_{n,n-1,0}^{(2,s)} \big\rangle_{L^2(\mathbb{R}_+^{2n-1})} \big|\, ds\, dt \Big)^{1/2}$$

and

$$d_W(I_n(f_n), N) \le \big|1 - n!\,\|f_n\|_{L^2(\mathbb{R}_+^n)}^2\big| + n^2 \Big( \sum_{k=0}^{2n-3} (2n-2-k)! \int_0^\infty\!\!\int_0^\infty \big| \big\langle g_{n-1,n-1,k}^{(1,t)} - g_{n,n-1,k+1}^{(2,t)},\ g_{n-1,n-1,k}^{(1,s)} - g_{n,n-1,k+1}^{(2,s)} \big\rangle_{L^2(\mathbb{R}_+^{2n-2-k})} \big|\, ds\, dt + (2n-1)! \int_0^\infty\!\!\int_0^\infty \big| \big\langle g_{n,n-1,0}^{(2,t)},\ g_{n,n-1,0}^{(2,s)} \big\rangle_{L^2(\mathbb{R}_+^{2n-1})} \big|\, ds\, dt \Big)^{1/2}.$$

(2) If moreover $I_n(f_n)$ is a.s. $(-\nu, \infty)$-valued then we have

$$d_\mathcal{H}(I_n(f_n), \Gamma_\nu) \le \Big( (n!)^2 \|f_n\|_{L^2(\mathbb{R}_+^n)}^4 + 4\, n!\,(1 - \nu)\, \|f_n\|_{L^2(\mathbb{R}_+^n)}^2 + 4 \nu^2 \Big)^{1/2} + n^2 \Big( \sum_{k=0}^{2n-3} (2n-2-k)! \int_0^\infty\!\!\int_0^\infty \big| \big\langle g_{n-1,n-1,k}^{(1,t)} - g_{n,n-1,k+1}^{(2,t)},\ g_{n-1,n-1,k}^{(1,s)} - g_{n,n-1,k+1}^{(2,s)} \big\rangle_{L^2(\mathbb{R}_+^{2n-2-k})} \big|\, ds\, dt + (2n-1)! \int_0^\infty\!\!\int_0^\infty \big| \big\langle g_{n,n-1,0}^{(2,t)},\ g_{n,n-1,0}^{(2,s)} \big\rangle_{L^2(\mathbb{R}_+^{2n-1})} \big|\, ds\, dt \Big)^{1/2}.$$

Proof. By Lemma 4.2 we have $I_n(f_n) \in \mathrm{Dom}(D)$ and

$$D_t I_n(f_n) = n I_{n-1}(f_n(\ast, t)) - n I_n(\partial_1 f_{n[t]}), \qquad t \in \mathbb{R}_+.$$

So by Lemma 2.7.2, p. 88, of [1] and the definition of $\partial_1 f_{n[t]}$, we have

$$E[D_t I_n(f_n) \mid \mathcal{F}_t] = n I_{n-1}\big(f_n(\ast, t) 1_{\{\ast < t\}}\big) - n I_n\big(\partial_1 f_{n[t]}\, 1_{[0,t]^n}\big) = n I_{n-1}\big(f_n(\ast, t) 1_{\{\ast < t\}}\big), \qquad (4.14)$$

since $\partial_1 f_{n[t]}\, 1_{[0,t]^n} = 0$, $t \in \mathbb{R}_+$. Combining this with the multiplication formulas (4.11) and (4.13) for multiple Poisson stochastic integrals, we deduce

$$\langle D_\cdot I_n(f_n), E[D_\cdot I_n(f_n) \mid \mathcal{F}_\cdot] \rangle = n^2 \int_0^\infty I_{n-1}(f_n(\ast, t))\, I_{n-1}\big(f_n(\ast, t) 1_{\{\ast < t\}}\big)\, dt - n^2 \int_0^\infty I_n(\partial_1 f_{n[t]})\, I_{n-1}\big(f_n(\ast, t) 1_{\{\ast < t\}}\big)\, dt$$

$$= n^2 \sum_{k=0}^{2(n-1)} \int_0^\infty I_{2(n-1)-k}\big(g_{n-1,n-1,k}^{(1,t)}\big)\, dt - n^2 \sum_{k=0}^{2(n-1)} \int_0^\infty I_{2n-1-k}\big(g_{n,n-1,k}^{(2,t)}\big)\, dt$$

$$= n^2 (n-1)! \int_0^\infty \int_{[0,t]^{n-1}} |f_n(t_1, \ldots, t_{n-1}, t)|^2\, dt_1 \cdots dt_{n-1}\, dt + n^2 \sum_{k=0}^{2n-3} \int_0^\infty I_{2n-2-k}\big(g_{n-1,n-1,k}^{(1,t)} - g_{n,n-1,k+1}^{(2,t)}\big)\, dt - n^2 \int_0^\infty I_{2n-1}\big(g_{n,n-1,0}^{(2,t)}\big)\, dt,$$

where we used the identity $g_{n-1,n-1,2(n-1)}^{(1,t)} = (n-1)! \int_{[0,t]^{n-1}} |f_n(t_1, \ldots, t_{n-1}, t)|^2\, dt_1 \cdots dt_{n-1}$ and the change of index $k \mapsto k + 1$ in the second sum. Using the first equality above and the isometry formula for multiple Poisson stochastic integrals (see Proposition 2.7.1, p. 87, in [1]) we have

$$E\big[\langle D_\cdot I_n(f_n), E[D_\cdot I_n(f_n) \mid \mathcal{F}_\cdot] \rangle\big] = n^2 (n-1)! \int_0^\infty \int_{[0,t]^{n-1}} |f_n(t_1, \ldots, t_{n-1}, t)|^2\, dt_1 \cdots dt_{n-1}\, dt.$$

Hence

$$\langle D_\cdot I_n(f_n), E[D_\cdot I_n(f_n) \mid \mathcal{F}_\cdot] \rangle - E\big[\langle D_\cdot I_n(f_n), E[D_\cdot I_n(f_n) \mid \mathcal{F}_\cdot] \rangle\big]$$

$$= n^2 \sum_{k=0}^{2n-3} \int_0^\infty I_{2n-2-k}\big(g_{n-1,n-1,k}^{(1,t)} - g_{n,n-1,k+1}^{(2,t)}\big)\, dt - n^2 \int_0^\infty I_{2n-1}\big(g_{n,n-1,0}^{(2,t)}\big)\, dt.$$

We conclude by Corollary 3.4, noticing that $I_n(f_n)$ is a centered random variable (see [1]), that $\|I_n(f_n)\|_{L^2(P)}^2 = n!\,\|f_n\|_{L^2(\mathbb{R}_+^n)}^2$, that for any $\nu > 0$

$$\big\| 2(I_n(f_n) + \nu) - n!\,\|f_n\|_{L^2(\mathbb{R}_+^n)}^2 \big\|_{L^2(P)}^2 = (n!)^2 \|f_n\|_{L^2(\mathbb{R}_+^n)}^4 + 4\, n!\,(1 - \nu)\, \|f_n\|_{L^2(\mathbb{R}_+^n)}^2 + 4 \nu^2,$$

and that, expanding the square and using the orthogonality of multiple Poisson stochastic integrals of different orders together with the isometry formula,

$$E\Big[\Big( \sum_{k=0}^{2n-3} \int_0^\infty I_{2n-2-k}\big(g_{n-1,n-1,k}^{(1,t)} - g_{n,n-1,k+1}^{(2,t)}\big)\, dt - \int_0^\infty I_{2n-1}\big(g_{n,n-1,0}^{(2,t)}\big)\, dt \Big)^2\Big]$$

$$= \sum_{k=0}^{2n-3} (2n-2-k)! \int_0^\infty\!\!\int_0^\infty \big\langle g_{n-1,n-1,k}^{(1,t)} - g_{n,n-1,k+1}^{(2,t)},\ g_{n-1,n-1,k}^{(1,s)} - g_{n,n-1,k+1}^{(2,s)} \big\rangle_{L^2(\mathbb{R}_+^{2n-2-k})}\, ds\, dt + (2n-1)! \int_0^\infty\!\!\int_0^\infty \big\langle g_{n,n-1,0}^{(2,t)},\ g_{n,n-1,0}^{(2,s)} \big\rangle_{L^2(\mathbb{R}_+^{2n-1})}\, ds\, dt.$$

Single Poisson stochastic integrals

In the particular case $n = 1$, the space $\mathcal{S}_1^{1,2}$ is the completion of $C_c^1([0, \infty))$ under the norm

$$\|f\|_{1,2}^2 = \|f\|_{L^2(\mathbb{R}_+)}^2 + \int_0^\infty \|f'_{[t]}\|_{L^2(\mathbb{R}_+)}^2\, dt := \int_0^\infty |f(t)|^2\, dt + \int_0^\infty \int_t^\infty |f'(s)|^2\, ds\, dt = \int_0^\infty |f(t)|^2\, dt + \int_0^\infty s\, |f'(s)|^2\, ds,$$

where $f'_{[t]}(s) := f'(s)\, 1_{[t, \infty)}(s)$, and we have $I_1(f) \in \mathrm{Dom}(D)$ with

$$D_t I_1(f) = f(t) - I_1(1_{[t, \infty)} f'), \qquad t \in \mathbb{R}_+,$$

and

$$\|D I_1(f)\|_{L^2(P \otimes \ell)}^2 = \int_0^\infty |f(t)|^2\, dt + \int_0^\infty \int_t^\infty |f'(s)|^2\, ds\, dt.$$

The following result is a simple consequence of Proposition 4.3 for $n = 1$.

Corollary 4.4 For any $f \in \mathcal{S}_1^{1,2}$, we have

$$d_{TV}(I_1(f), N) \le 2\, \big|1 - \|f\|_{L^2(\mathbb{R}_+)}^2\big| + 2\Big( \int_0^\infty \Big( f'(t) \int_0^t f(z)\, dz \Big)^2 dt \Big)^{1/2}$$

and

$$d_W(I_1(f), N) \le \big|1 - \|f\|_{L^2(\mathbb{R}_+)}^2\big| + \Big( \int_0^\infty \Big( f'(t) \int_0^t f(z)\, dz \Big)^2 dt \Big)^{1/2}. \qquad (4.15)$$

If moreover $I_1(f)$ is a.s. $(-\nu, \infty)$-valued then we have

$$d_\mathcal{H}(I_1(f), \Gamma_\nu) \le \Big( \|f\|_{L^2(\mathbb{R}_+)}^4 + 4(1 - \nu)\|f\|_{L^2(\mathbb{R}_+)}^2 + 4 \nu^2 \Big)^{1/2} + \Big( \int_0^\infty \Big( f'(t) \int_0^t f(z)\, dz \Big)^2 dt \Big)^{1/2}.$$

Note that Corollary 3.4 of [7] states that for any $f \in L^2(\mathbb{R}_+)$,

$$d_W(I_1(f), N) \le \big|1 - \|f\|_{L^2(\mathbb{R}_+)}^2\big| + \|f\|_{L^3(\mathbb{R}_+)}^3,$$

which shows that $I_1(f_k) \to N$ in law provided $\|f_k\|_{L^2(\mathbb{R}_+)} \to 1$ and $\|f_k\|_{L^3(\mathbb{R}_+)} \to 0$ as $k$ goes to infinity. Next we consider a couple of examples for comparison with Corollary 4.4.

Example - Single Poisson stochastic integrals with specific kernels

1. Take $g_k(t) = (2/k)^{1/2} e^{-t/k}$, $t \ge 0$, $k \ge 1$. We shall show later on that $g_k \in \mathcal{S}_1^{1,2}$. We have $\|g_k\|_{L^2(\mathbb{R}_+)} = 1$ and

$$\int_0^\infty \Big( g_k'(t) \int_0^t g_k(z)\, dz \Big)^2 dt = \frac{4}{k^2} \int_0^\infty \big( e^{-2t/k} - 2 e^{-3t/k} + e^{-4t/k} \big)\, dt = \frac{1}{3k},$$

hence by Corollary 4.4 we get

$$d_W(I_1(g_k), N) \le \frac{1}{\sqrt{3k}},$$

while Corollary 3.4 of [7] yields

$$d_W(I_1(g_k), N) \le \int_0^\infty |g_k(t)|^3\, dt = \frac{2\sqrt{2}}{3\sqrt{k}} = \sqrt{\frac{8}{3}} \cdot \frac{1}{\sqrt{3k}},$$

and $8/3 > 1$.

To check that $g_k \in \mathcal{S}_1^{1,2}$ it suffices to verify that $g \in \mathcal{S}_1^{1,2}$, where $g(t) = e^{-t}$, $t \ge 0$. Let $\{\chi_k\}_{k \ge 1}$ be a sequence of functions in $C_c^1([0, \infty))$ with $|\chi_k(t)| \le 1$ for any $t \ge 0$, $\chi_k(t) = 1$ for any $t \in [0, k]$, and $\sup_{k \ge 1,\, t \ge 0} |\chi_k'(t)| < \infty$. Then one may easily see that $\{\chi_k g\}_{k \ge 1}$ is a sequence in $C_c^1([0, \infty))$ converging to $g$ in the norm $\|\cdot\|_{1,2}$.

2. Take $g_k(t) = \sqrt{2}\, (k + 1/2)^{-1/2}\, 1_{[0,\, k + 1/2]}(t) \cos(\pi t)$, $t \ge 0$, $k \ge 1$. Note that $g_k$ is continuous and piecewise differentiable (with a piecewise continuous derivative) and so $g_k$ is weakly differentiable. We shall show later on that $g_k \in \mathcal{S}_1^{1,2}$. We have

$$\|g_k\|_{L^2(\mathbb{R}_+)}^2 = \frac{2}{k + 1/2} \int_0^{k + 1/2} \cos^2(\pi t)\, dt = 1,$$

and

$$\Big( \int_0^\infty \Big( g_k'(t) \int_0^t g_k(z)\, dz \Big)^2 dt \Big)^{1/2} = \Big( \frac{4}{(k + 1/2)^2} \int_0^{k + 1/2} \sin^4(\pi t)\, dt \Big)^{1/2}$$

$$= \Big( \frac{4}{(k + 1/2)^2} \cdot \frac{3}{8}\, (k + 1/2) \Big)^{1/2} = \sqrt{\frac{3}{2k + 1}},$$

hence by Corollary 4.4 we get

$$d_W(I_1(g_k), N) \le \frac{\sqrt{3}}{\sqrt{2k + 1}},$$

whereas by Corollary 3.4 of [7] we have

$$d_W(I_1(g_k), N) \le \int_0^\infty |g_k(t)|^3\, dt = \frac{8\sqrt{2}}{3\pi}\, (k + 1/2)^{-1/2} \cdot \frac{1}{\sqrt{2}} \cdot \sqrt{2} = \frac{16}{3\pi} \cdot \frac{1}{\sqrt{2k + 1}}.$$

Note that $16/(3\pi) < \sqrt{3}$, so that in this example the bound of [7] is slightly sharper.

To check that $g_k \in \mathcal{S}_1^{1,2}$ it suffices to verify that $g \in \mathcal{S}_1^{1,2}$, where $g(t) = 1_{[0, \pi/2]}(t) \cos t$, $t \ge 0$. Let $\rho$ be a smooth probability density on $[0, \infty)$ with support in $[0, 1]$, $\rho_k(t) = k \rho(k t)$ and

$$G_k(t) := g * \rho_k(t) = \int_0^t \rho_k(t - s)\, g(s)\, ds, \qquad t \ge 0.$$

Then one may easily see that $\{G_k\}_{k \ge 1}$ is a sequence in $C_c^1([0, \infty))$ converging to $g$ in the norm $\|\cdot\|_{1,2}$.

Double Poisson stochastic integrals

For the case of double Poisson stochastic integrals we have the following corollary.

Corollary 4.5 For any symmetric function $f \in \mathcal{S}_2^{1,2}$, we have

$$d_{TV}(I_2(f), N) \le 2\, \big|1 - 2\|f\|_{L^2(\mathbb{R}_+^2)}^2\big| + 8\Big( 2 \int_{(0, \infty)^2} \Big| \int_0^\infty f(x, t) f(x, s)\, dx \int_0^{t \wedge s} f(y, t) f(y, s)\, dy \Big|\, ds\, dt + \int_{(0, \infty)^2} \int_0^{t \wedge s} f(x, t)^2 f(x, s)^2\, dx\, ds\, dt + 3! \int_{(0, \infty)^2} \int_0^\infty \int_{t \vee s}^\infty |\partial_1 f(x, y)|^2\, dx\, dy\, \Big| \int_0^{t \wedge s} f(z, t) f(z, s)\, dz \Big|\, ds\, dt \Big)^{1/2}$$

and

$$d_W(I_2(f), N) \le \big|1 - 2\|f\|_{L^2(\mathbb{R}_+^2)}^2\big| + 4\Big( 2 \int_{(0, \infty)^2} \Big| \int_0^\infty f(x, t) f(x, s)\, dx \int_0^{t \wedge s} f(y, t) f(y, s)\, dy \Big|\, ds\, dt + \int_{(0, \infty)^2} \int_0^{t \wedge s} f(x, t)^2 f(x, s)^2\, dx\, ds\, dt + 3! \int_{(0, \infty)^2} \int_0^\infty \int_{t \vee s}^\infty |\partial_1 f(x, y)|^2\, dx\, dy\, \Big| \int_0^{t \wedge s} f(z, t) f(z, s)\, dz \Big|\, ds\, dt \Big)^{1/2}.$$

If moreover $I_2(f)$ is a.s. $(-\nu, \infty)$-valued then we have

$$d_\mathcal{H}(I_2(f), \Gamma_\nu) \le \Big( 4\|f\|_{L^2(\mathbb{R}_+^2)}^4 + 8(1 - \nu)\|f\|_{L^2(\mathbb{R}_+^2)}^2 + 4 \nu^2 \Big)^{1/2} + 4\Big( 2 \int_{(0, \infty)^2} \Big| \int_0^\infty f(x, t) f(x, s)\, dx \int_0^{t \wedge s} f(y, t) f(y, s)\, dy \Big|\, ds\, dt + \int_{(0, \infty)^2} \int_0^{t \wedge s} f(x, t)^2 f(x, s)^2\, dx\, ds\, dt + 3! \int_{(0, \infty)^2} \int_0^\infty \int_{t \vee s}^\infty |\partial_1 f(x, y)|^2\, dx\, dy\, \Big| \int_0^{t \wedge s} f(z, t) f(z, s)\, dz \Big|\, ds\, dt \Big)^{1/2}.$$

Proof. It follows after a direct computation taking $n = 2$ in Proposition 4.3 and noticing that, for any symmetric $f \in \mathcal{S}_2^{1,2}$,

$$g_{1,1,0}^{(1,t)}(x, y) = f(x, t) f(y, t)\, 1_{(0,t)}(y), \qquad g_{1,1,1}^{(1,t)}(x) = f(x, t)^2\, 1_{(0,t)}(x),$$

$$g_{2,1,0}^{(2,t)}(x, y, z) = \partial_1 f(x, y)\, f(z, t)\, 1_{[t, \infty)}(x)\, 1_{(0,t)}(z), \qquad g_{1,1,2}^{(1,t)} = \int_0^t f(x, t)^2\, dx, \qquad g_{2,1,1}^{(2,t)} = g_{2,1,2}^{(2,t)} = 0.$$

5 Normal approximation of the compound Poisson distribution

In this section we present an application of formula (2.2) to the compound Poisson distribution. Let $(Z_k)_{k \ge 1}$ be a sequence of real-valued i.i.d. random variables independent of a

Poisson distributed random variable $N_n$ with parameter $n \ge 1$. We assume that $Z_1$ has moments of any order and that its distribution has a continuously differentiable density $p_{Z_1}(z)$ with respect to the Lebesgue measure, such that $\lim_{z \to \pm\infty} |z|^p\, p_{Z_1}(z) = 0$ for all $p \ge 1$. We also assume that $\frac{d}{dz} \log p_{Z_1}(z) = p_{Z_1}'(z) / p_{Z_1}(z)$ has at most polynomial growth. Consider the sequence

$$F_n := \frac{\sum_{k=1}^{N_n} Z_k - n E[Z_1]}{\sqrt{n E[Z_1^2]}}, \qquad n \ge 1.$$

It is well known that $F_n \to N$ in law as $n \to \infty$. In the following we are going to upper bound the total variation distance between $F_n$ and $N$. The following lemma applies the IBP formula of [1] to each $F_n$ given $\{N_n = m\}$, $m, n \ge 1$.

Lemma 5.1 Let $m, n \ge 1$ be fixed integers. Under the foregoing assumptions on the law of the jump amplitude $Z_1$, the IBP formula (2.1) holds on $C_b^1(\mathbb{R})$ for $F_n$ given $\{N_n = m\}$ with $P(\cdot \mid N_n = m)$-centered weight

$$W_1 = W_n^{(m)} := -\frac{\sqrt{n E[Z_1^2]}}{m} \sum_{k=1}^m q_{Z_1}(Z_k), \quad \text{where } q_{Z_1}(z) := \frac{p_{Z_1}'(z)}{p_{Z_1}(z)}\, 1_{\{p_{Z_1}(z) > 0\}},$$

and $W_2 = 1$.

Proof. See Theorem 3.1 and Section 4 in [1].

We have the following bound for the total variation distance.

Proposition 5.2 Under the foregoing assumptions on the law of the jump amplitude $Z_1$, we have

$$d_{TV}(F_n, N) \le e^{-n} + \sqrt{\frac{\pi}{2}}\, \big\| (1 - R_n)\, 1_{\{N_n \ge 1\}} \big\|_{L^2(P)}, \qquad n \ge 1,$$

where

$$R_n := -\frac{n E[Z_1^2]}{N_n} \cdot \frac{\sum_{k=1}^{N_n} q_{Z_1}(Z_k)}{\sum_{k=1}^{N_n} Z_k - n E[Z_1]}.$$

Proof. For any $n \ge 1$, we have

$$d_{TV}(F_n, N) = \sup_{C \in \mathcal{B}(\mathbb{R})} |P(F_n \in C) - P(N \in C)| = \sup_{C \in \mathcal{B}(\mathbb{R})} \Big| \sum_{m \ge 0} \big[ P(F_n \in C \mid N_n = m) - P(N \in C) \big] P(N_n = m) \Big| \le \sum_{m \ge 0} d_{TV}(F_n \mid N_n = m,\ N)\, P(N_n = m) \le e^{-n} + \sqrt{\frac{\pi}{2}} \sum_{m \ge 1} E[\,|W_n^{(m)} - F_n| \mid N_n = m]\, P(N_n = m), \qquad (5.1)$$

where the $m = 0$ term is bounded by $P(N_n = 0) = e^{-n}$ and the inequality (5.1) follows by (2.2).

$$
= e^{-n} + \sqrt{\frac{\pi}{2}}\,E\Bigl[\Bigl|1-\frac{W_n^{(N_n)}}{F_n}\Bigr|\mathbf{1}_{\{N_n\ge1\}}\Bigr]
\le e^{-n} + \sqrt{\frac{\pi}{2}}\,\Bigl\|\Bigl(1-\frac{W_n^{(N_n)}}{F_n}\Bigr)\mathbf{1}_{\{N_n\ge1\}}\Bigr\|_{L^2(P)}
= e^{-n} + \sqrt{\frac{\pi}{2}}\,\bigl\|(1-R_n)\mathbf{1}_{\{N_n\ge1\}}\bigr\|_{L^2(P)},
$$

where the inequality (5.1) follows by (2.2).

Example - Normal approximation of Poisson compound sums with Gaussian addends

Suppose that $Z_1$ is a standard Gaussian random variable. Then all the hypotheses of Proposition 5.2 are satisfied and $R_n = n/N_n$. So, for any $n\ge1$,

$$
d_{TV}(F_n,N) \le e^{-n} + \sqrt{\frac{\pi}{2}}\,\Bigl\|\Bigl(1-\frac{n}{N_n}\Bigr)\mathbf{1}_{\{N_n\ge1\}}\Bigr\|_{L^2(P)}. \tag{5.2}
$$

We have

$$
\Bigl\|\Bigl(1-\frac{n}{N_n}\Bigr)\mathbf{1}_{\{N_n\ge1\}}\Bigr\|_{L^2(P)}^2
= e^{-n}\sum_{m\ge1}\Bigl(1-\frac{n}{m}\Bigr)^2\frac{n^m}{m!}
= 1 - e^{-n} - 2n e^{-n} S_1(n) + n^2 e^{-n} S_2(n), \tag{5.3}
$$

where

$$
S_1(n) := \sum_{m\ge1}\frac{n^m}{m\,m!}
\quad\text{and}\quad
S_2(n) := \sum_{m\ge1}\frac{n^m}{m^2\,m!}.
$$

Note that these two series converge (by e.g. the ratio test), but their sum does not have a closed form. After some manipulations, one can realize that $S_1(n)$ and $S_2(n)$ have the following series expansions at $n=\infty$:

$$
S_1(n) = e^n\bigl(n^{-1}+n^{-2}+O(n^{-3})\bigr) + \log n^{-1} - i\pi - \gamma + O(n^{-7})
$$

and

$$
S_2(n) = e^n\bigl(n^{-2}+3n^{-3}+O(n^{-4})\bigr) - \tfrac12\log^2(n^{-1}) + (\gamma+i\pi)\log n^{-1} + 5\pi^2/12 - i\gamma\pi - \gamma^2/2 + O(n^{-6}),
$$

where $\gamma$ denotes the Euler-Mascheroni constant.
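Although $S_1(n)$ and $S_2(n)$ have no closed form, every quantity appearing in (5.2)-(5.3) is cheap to evaluate numerically, which gives a quick sanity check of the identity (5.3) and of the asymptotics. The sketch below is illustrative only and not part of the paper: the series truncation point, the sample size, the tested values of $n$, and the use of Knuth's product-of-uniforms Poisson sampler are ad-hoc choices.

```python
# Numerical sanity check of (5.2)-(5.3) for standard Gaussian addends.
# Illustrative sketch only: truncation points and sample sizes are ad-hoc.
import math
import random

def norm_sq(n, terms=2000):
    """|| (1 - n/N_n) 1_{N_n>=1} ||_{L^2(P)}^2, computed directly as
    e^{-n} * sum_{m>=1} (1 - n/m)^2 n^m / m!."""
    term, s = 1.0, 0.0                # term = n^m / m!, starting at m = 0
    for m in range(1, terms + 1):
        term *= n / m
        s += (1.0 - n / m) ** 2 * term
    return math.exp(-n) * s

def norm_sq_series(n, terms=2000):
    """Same quantity via (5.3): 1 - e^{-n} - 2n e^{-n} S1(n) + n^2 e^{-n} S2(n)."""
    term, s1, s2 = 1.0, 0.0, 0.0
    for m in range(1, terms + 1):
        term *= n / m
        s1 += term / m
        s2 += term / m ** 2
    return 1.0 - math.exp(-n) - 2 * n * math.exp(-n) * s1 + n ** 2 * math.exp(-n) * s2

def tv_bound(n):
    """Right-hand side of the total variation bound (5.2)."""
    return math.exp(-n) + math.sqrt(math.pi / 2.0) * math.sqrt(norm_sq(n))

def kolmogorov_estimate(n, samples=4000, seed=0):
    """Monte Carlo Kolmogorov distance between F_n and N(0,1).  Since the
    addends are standard Gaussian, given N_n = m the sum is N(0, m)."""
    rng = random.Random(seed)
    xs = []
    for _ in range(samples):
        # Poisson(n) by Knuth's product-of-uniforms method (fine for moderate n)
        L, m, p = math.exp(-n), 0, 1.0
        while True:
            p *= rng.random()
            if p <= L:
                break
            m += 1
        xs.append(rng.gauss(0.0, math.sqrt(m)) / math.sqrt(n) if m else 0.0)
    xs.sort()
    d = 0.0
    for i, x in enumerate(xs):
        F = 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
        d = max(d, abs((i + 1) / samples - F), abs(i / samples - F))
    return d
```

With these conventions, $n\,\|(1-n/N_n)\mathbf{1}_{\{N_n\ge1\}}\|_{L^2(P)}^2$ evaluates to about $1.07$ at $n=100$, consistent with $\omega(n)\to0$ below, and the simulated Kolmogorov distance stays below the bound (5.2), as it must since the Kolmogorov distance is dominated by the total variation distance.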

So

$$
n\bigl(1 - e^{-n} - 2n e^{-n} S_1(n) + n^2 e^{-n} S_2(n)\bigr) = 1 + \omega(n),
$$

where $\omega(n)$ is a suitable function converging to zero as $n\to\infty$. Therefore, combining (5.2) and (5.3) we get a Berry-Esseen type upper bound for $F_n$, which is asymptotically equivalent to $\sqrt{\pi/(2n)}$ as $n\to\infty$.

6 Appendix

Lemma 6.1 Let $g:\mathbb{R}\to\mathbb{R}$ be with polynomial growth and continuously differentiable with bounded derivative, and let $F\in\mathrm{Dom}(D)$. Then $g(F)\in\mathrm{Dom}(D)$ and $Dg(F)=g'(F)DF$.

Proof. For any $F\in\mathcal{S}$ we clearly have $g(F)\in\mathcal{S}$, and the claim is an immediate consequence of the action of $D_t$ on $\mathcal{S}$ and the chain rule for the derivative. Now, take $F\in\mathrm{Dom}(D)$. Then there exists a sequence $(F^{(n)})_{n\ge1}\subset\mathcal{S}$ such that $F^{(n)}\to F$ in $L^2(P)$ and $DF^{(n)}\to DF$ in $L^2(P\otimes\ell)$. Since the claim holds for functionals in $\mathcal{S}$, for $g:\mathbb{R}\to\mathbb{R}$ with polynomial growth and continuously differentiable with bounded derivative, we have $Dg(F^{(n)})=g'(F^{(n)})DF^{(n)}$, $n\ge1$. By the convergence in $L^2(P)$, there exists a subsequence $\{n'\}$ of $\{n\}$ such that $F^{(n')}\to F$ a.s. The claim follows if we check that $Dg(F^{(n')})\to g'(F)DF$ in $L^2(P\otimes\ell)$. By the boundedness of $g'$, for a positive constant $C>0$ we have

$$
\bigl\|Dg(F^{(n')}) - g'(F)DF\bigr\|_{L^2(P\otimes\ell)}
= \bigl\|g'(F^{(n')})DF^{(n')} - g'(F)DF\bigr\|_{L^2(P\otimes\ell)}
$$
$$
\le \bigl\|g'(F^{(n')})DF^{(n')} - g'(F^{(n')})DF\bigr\|_{L^2(P\otimes\ell)}
+ \bigl\|g'(F^{(n')})DF - g'(F)DF\bigr\|_{L^2(P\otimes\ell)}
$$
$$
\le C\,\bigl\|DF^{(n')} - DF\bigr\|_{L^2(P\otimes\ell)}
+ \Bigl(E\Bigl[\int_0^\infty\bigl|g'(F^{(n')})-g'(F)\bigr|^2\,|D_tF|^2\,dt\Bigr]\Bigr)^{1/2}.
$$

This latter quantity tends to zero as $n'\to\infty$. Indeed, the first term goes to zero since $DF^{(n')}\to DF$ in $L^2(P\otimes\ell)$; the second term goes to zero by the Dominated Convergence Theorem, since for some constant $C'>0$ we have $|g'(F^{(n')})-g'(F)|^2\,|D_tF|^2\le C'\,|D_tF|^2$, $P\otimes\ell$-a.e., $DF\in L^2(P\otimes\ell)$, and $g'(F^{(n')})\to g'(F)$ a.s. by the continuity of $g'$.
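The arguments in this appendix invoke the isometry formula for multiple Poisson stochastic integrals repeatedly. In the first chaos ($n=1$) it reads $E[I_1(f)^2]=\|f\|_{L^2(\mathbb{R}_+)}^2$, where in this setting $I_1(f)=\sum_k f(T_k)-\int_0^\infty f(t)\,dt$ is the compensated sum over the jump times $(T_k)_{k\ge1}$ of a standard (unit-intensity) Poisson process, and it can be checked by direct simulation of the jump times. The sketch below is an illustration only; the kernel $f(t)=\sin(t)\mathbf{1}_{[0,2\pi]}(t)$, the horizon, and the sample size are arbitrary choices.

```python
# Monte Carlo check of the first-chaos Poisson isometry
#     E[I_1(f)^2] = ||f||^2_{L^2(R_+)},
# with I_1(f) = sum_k f(T_k) - int_0^infty f(t) dt and (T_k) the jump times
# of a unit-intensity Poisson process.  Illustrative sketch, ad-hoc choices.
import math
import random

def mc_second_moment(f, horizon, n_samples=20000, seed=1):
    """Average of I_1(f)^2 over simulated Poisson paths.  The kernel is
    implicitly f(t) 1_{[0, horizon]}(t), so paths are truncated at `horizon`."""
    # deterministic compensator int_0^horizon f(t) dt, by the midpoint rule
    steps = 20000
    comp = sum(f((i + 0.5) * horizon / steps) for i in range(steps)) * horizon / steps
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        t, acc = 0.0, 0.0
        while True:
            t += rng.expovariate(1.0)   # i.i.d. Exp(1) inter-jump times
            if t > horizon:
                break
            acc += f(t)
        total += (acc - comp) ** 2
    return total / n_samples

# f(t) = sin(t) on [0, 2*pi]: the compensator vanishes and ||f||^2 = pi
estimate = mc_second_moment(math.sin, 2 * math.pi)
```

With these choices the Monte Carlo average of $I_1(f)^2$ should land within a few percent of $\|f\|^2=\pi$.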

Proof of Lemma 4.2. Let $f_n\in C_c^1([0,\infty)^n)$ be a symmetric function and define the sequence

$$
F^{(m)} = \sum_{k=0}^n(-1)^{n-k}\binom{n}{k}\sum_{1\le l_1\ne\cdots\ne l_k\le m}\int_{\mathbb{R}_+^{n-k}} f_n(T_{l_1},\ldots,T_{l_k},s_{k+1},\ldots,s_n)\,ds_{k+1}\cdots ds_n
$$
$$
= I_n\bigl(f_n\mathbf{1}_{[0,T_m]^n}\bigr)
+ \sum_{k=0}^n(-1)^{n-k}\binom{n}{k}\sum_{1\le l_1\ne\cdots\ne l_k\le m}\int_{\mathbb{R}_+^{n-k}\setminus[0,T_m]^{n-k}} f_n(T_{l_1},\ldots,T_{l_k},s_{k+1},\ldots,s_n)\,ds_{k+1}\cdots ds_n, \tag{6.1}
$$

$m\ge1$. We have $(F^{(m)})_{m\ge1}\subset\mathcal{S}$ and $F^{(m)}$ converges to $F=I_n(f_n)$ in $L^2(P)$. Indeed, by the isometry formula for multiple Poisson stochastic integrals (see e.g. Proposition 6.2.4 in [10]), we have

$$
E\bigl[|F - I_n(f_n\mathbf{1}_{[0,T_m]^n})|^2\bigr]
= E\bigl[\bigl(I_n\bigl(f_n(1-\mathbf{1}_{[0,T_m]^n})\bigr)\bigr)^2\bigr]
= n!\,E\Bigl[\int_{\mathbb{R}_+^n}|f_n(t_1,\ldots,t_n)|^2\bigl(1-\mathbf{1}_{[0,T_m]^n}(t_1,\ldots,t_n)\bigr)^2\,dt_1\cdots dt_n\Bigr], \tag{6.2}
$$

and this latter term tends to $0$ as $m$ goes to infinity by the Dominated Convergence Theorem. Moreover, each of the remaining terms

$$
\int_{\mathbb{R}_+^{n-k}\setminus[0,T_m]^{n-k}} f_n(T_{l_1},\ldots,T_{l_k},s_{k+1},\ldots,s_n)\,ds_{k+1}\cdots ds_n
$$

in (6.1) is bounded and tends to $0$ a.s. as $m$ goes to infinity, since $f_n$ has compact support. Next, by (4.1) we have

$$
D_t F^{(m)} = -\sum_{k=0}^n(-1)^{n-k}\binom{n}{k}\sum_{1\le l_1\ne\cdots\ne l_k\le m}\sum_{i=1}^{k}\mathbf{1}_{[0,T_{l_i}]}(t)\int_{\mathbb{R}_+^{n-k}}\partial_i f_n(T_{l_1},\ldots,T_{l_k},s_{k+1},\ldots,s_n)\,ds_{k+1}\cdots ds_n
$$
$$
= \sum_{k=0}^n(-1)^{n-k}\binom{n}{k}\sum_{1\le l_1\ne\cdots\ne l_k\le m}\int_{\mathbb{R}_+^{n-k}}\sum_{i=1}^n\partial_i f_{n[t]}(T_{l_1},\ldots,T_{l_k},s_{k+1},\ldots,s_n)\,ds_{k+1}\cdots ds_n
$$
$$
\quad - \sum_{k=0}^n(-1)^{n-k}\binom{n}{k}\sum_{1\le l_1\ne\cdots\ne l_k\le m}\int_{\mathbb{R}_+^{n-k}}\sum_{i=k+1}^n\partial_i f_{n[t]}(T_{l_1},\ldots,T_{l_k},s_{k+1},\ldots,s_n)\,ds_{k+1}\cdots ds_n
$$

and, integrating the terms with $i\ge k+1$ in the corresponding variables and using the symmetry of $f_n$,

$$
= \sum_{k=0}^n(-1)^{n-k}\binom{n}{k}\sum_{1\le l_1\ne\cdots\ne l_k\le m}\int_{\mathbb{R}_+^{n-k}}\sum_{i=1}^n\partial_i f_{n[t]}(T_{l_1},\ldots,T_{l_k},s_{k+1},\ldots,s_n)\,ds_{k+1}\cdots ds_n
$$
$$
\quad - \sum_{k=0}^{n-1}(-1)^{n-k}(n-k)\binom{n}{k}\sum_{1\le l_1\ne\cdots\ne l_k\le m}\int_{\mathbb{R}_+^{n-k-1}} f_n(T_{l_1},\ldots,T_{l_k},t,z_1,\ldots,z_{n-k-1})\,dz_1\cdots dz_{n-k-1}
$$
$$
= n I_{n-1}\bigl(f_n(*,t)\mathbf{1}_{[0,T_m]^{n-1}}\bigr) - n I_n\bigl(\widetilde\partial_1 f_{n[t]}\mathbf{1}_{[0,T_m]^n}\bigr)
$$
$$
\quad - \sum_{k=0}^n(-1)^{n-k}\binom{n}{k}\sum_{1\le l_1\ne\cdots\ne l_k\le m}\int_{\mathbb{R}_+^{n-k}\setminus[0,T_m]^{n-k}}\sum_{i=1}^n\partial_i f_{n[t]}(T_{l_1},\ldots,T_{l_k},s_{k+1},\ldots,s_n)\,ds_{k+1}\cdots ds_n \tag{6.3}
$$
$$
\quad + n\sum_{k=0}^{n-1}(-1)^{n-1-k}\binom{n-1}{k}\sum_{1\le l_1\ne\cdots\ne l_k\le m}\int_{\mathbb{R}_+^{n-1-k}\setminus[0,T_m]^{n-1-k}} f_n(T_{l_1},\ldots,T_{l_k},t,z_1,\ldots,z_{n-1-k})\,dz_1\cdots dz_{n-1-k}, \tag{6.4}
$$

$t\in\mathbb{R}_+$. To conclude we note that, as in (6.2),

$$
n I_{n-1}\bigl(f_n(*,t)\mathbf{1}_{[0,T_m]^{n-1}}\bigr) - n I_n\bigl(\widetilde\partial_1 f_{n[t]}\mathbf{1}_{[0,T_m]^n}\bigr)
$$

converges in $L^2(P\otimes\ell)$ to

$$
D_t F := n I_{n-1}\bigl(f_n(*,t)\bigr) - n I_n\bigl(\widetilde\partial_1 f_{n[t]}\bigr), \qquad t\in\mathbb{R}_+, \tag{6.5}
$$

as $m$ goes to infinity by the isometry formula for multiple Poisson stochastic integrals, and the two terms in (6.3) and (6.4) converge to $0$ since $f_n\in C_c^1([0,\infty)^n)$.

In order to complete the proof of the first part of the lemma by closability, given $f_n\in\mathcal{S}_n^1$, we choose a sequence $(f_n^{(m)})_{m\in\mathbb{N}}\subset C_c^1([0,\infty)^n)$ converging to $f_n$ for the norm (4.9) and we

define the sequence of functionals $(F^{(m)})_{m\ge1}$ in $\mathrm{Dom}(D)$ by $F^{(m)} := I_n(f_n^{(m)})$, $m\ge1$. Then we note that, by the isometry formula for multiple Poisson stochastic integrals and the convergence of $f_n^{(m)}$ to $f_n$ in $L^2(\mathbb{R}_+^n)$, we have $I_n(f_n^{(m)})\to I_n(f_n)$ in $L^2(P)$ as $m\to\infty$. Moreover, $D_tF$ defined by (4.1) and (6.5) satisfies

$$
E\Bigl[\int_0^\infty|D_tF - D_tF^{(m)}|^2\,dt\Bigr]
\le 2n^2\,E\Bigl[\int_0^\infty\bigl|I_n(\widetilde\partial_1 f_{n[t]}) - I_n(\widetilde\partial_1 f^{(m)}_{n[t]})\bigr|^2\,dt\Bigr]
+ 2n^2\,E\Bigl[\int_0^\infty\bigl|I_{n-1}(f_n(*,t)) - I_{n-1}(f_n^{(m)}(*,t))\bigr|^2\,dt\Bigr]
$$
$$
= 2n^2\,n!\int_0^\infty\int_{\mathbb{R}_+^n}\bigl|\widetilde\partial_1 f_{n[t]}(s_1,\ldots,s_n) - \widetilde\partial_1 f^{(m)}_{n[t]}(s_1,\ldots,s_n)\bigr|^2\,ds_1\,dt\,ds_2\cdots ds_n
+ 2n^2\,(n-1)!\int_{\mathbb{R}_+^n}\bigl|f_n(s_1,\ldots,s_n) - f_n^{(m)}(s_1,\ldots,s_n)\bigr|^2\,ds_1\cdots ds_n.
$$

So $DF^{(m)}$ converges to $DF$ in $L^2(P\otimes\ell)$ by the convergence of $f_n^{(m)}$ to $f_n$ with respect to the norm $\|\cdot\|_{1,2}$. Finally, using again the isometry formula for multiple Poisson stochastic integrals, we have

$$
E\Bigl[\int_0^\infty|D_tF|^2\,dt\Bigr]
= n^2\,E\Bigl[\int_0^\infty\bigl(I_{n-1}(f_n(*,t)) - I_n(\widetilde\partial_1 f_{n[t]})\bigr)^2\,dt\Bigr]
$$
$$
= n^2\,E\Bigl[\int_0^\infty\bigl(I_{n-1}(f_n(*,t))\bigr)^2\,dt\Bigr]
+ n^2\,E\Bigl[\int_0^\infty\bigl(I_n(\widetilde\partial_1 f_{n[t]})\bigr)^2\,dt\Bigr]
- 2n^2\,E\Bigl[\int_0^\infty I_{n-1}(f_n(*,t))\,I_n(\widetilde\partial_1 f_{n[t]})\,dt\Bigr]
$$
$$
= n^2\,(n-1)!\int_{\mathbb{R}_+^n}|f_n(t_1,\ldots,t_n)|^2\,dt_1\cdots dt_n
+ n^2\,n!\int_0^\infty\int_{\mathbb{R}_+^n}\bigl|\widetilde\partial_1 f_{n[t]}(t_1,\ldots,t_n)\bigr|^2\,dt_1\,dt\,dt_2\cdots dt_n,
$$

the cross term vanishing by the orthogonality of multiple integrals of different orders.

Proof of (4.8). We have

$$
f'(T_k)\int_0^{T_k}E\bigl[f'(T_{k-h}+t)\bigr]\Big|_{h=N_t}\,dt
= 4\alpha^2 T_k\int_0^{T_k}\bigl(t + E[T_{k-h}]\big|_{h=N_t}\bigr)\,dt
= 4\alpha^2 T_k\int_0^{T_k}(t + k - N_t)\,dt.
$$

Therefore

$$
4\alpha^2 T_k\int_0^{T_k}(t + k - N_t)\,dt
= 2\alpha^2 T_k^3 + 4k\alpha^2 T_k^2 - 4\alpha^2 T_k\int_0^{T_k}N_t\,dt
= 2\alpha^2 T_k^3 + 4k\alpha^2 T_k^2 - 4\alpha^2 T_k\sum_{h=1}^k(h-1)(T_h - T_{h-1}),
$$

since $N_t = h-1$ on $[T_{h-1},T_h)$, $1\le h\le k$. Consequently,

$$
\Bigl\|1 - \tfrac12 f'(T_k)\int_0^{T_k}E[f'(T_{k-h}+t)]\big|_{h=N_t}\,dt\Bigr\|_{L^1(P)}
= \Bigl\|1 - \alpha^2 T_k^3 - 2k\alpha^2 T_k^2 + 2\alpha^2 T_k\sum_{h=1}^k(h-1)(T_h-T_{h-1})\Bigr\|_{L^1(P)},
$$

and a direct computation shows that this quantity is bounded above by

$$
\Bigl\|\frac{T_k^2-(k+1)k}{k^2}\Bigr\|_{L^2(P)}
+ \Bigl\|\frac{2}{k^2}\sum_{h=1}^k(h-1)(T_h-T_{h-1}) - \frac{T_k^2}{k^2}\Bigr\|_{L^2(P)}.
$$

We shall provide an upper bound for both these addends. We have

$$
E\Bigl[\Bigl(\frac{(k+1)k - T_k^2}{k^2}\Bigr)^2\Bigr]
= \frac{1}{k^4}\operatorname{Var}(T_k^2)
= \frac{4}{k} + \frac{10}{k^2} + \frac{6}{k^3},
$$

since $E[T_k^2]=(k+1)k$. Now, consider the other term. We have

$$
\frac{2}{k^2}\sum_{h=1}^k(h-1)(T_h-T_{h-1}) - \frac{T_k^2}{k^2}
= \frac{1}{k^2}\bigl((k+1)k - T_k^2\bigr)
+ \frac{2}{k^2}\sum_{h=1}^k(h-1)(T_h-T_{h-1}-1)
+ \frac{2}{k^2}\sum_{h=1}^k(h-1) - \frac{(k+1)k}{k^2},
$$

where $\frac{2}{k^2}\sum_{h=1}^k(h-1) - \frac{(k+1)k}{k^2} = -\frac{2}{k}$, hence

$$
\Bigl\|\frac{2}{k^2}\sum_{h=1}^k(h-1)(T_h-T_{h-1}) - \frac{T_k^2}{k^2}\Bigr\|_{L^2(P)}
\le \Bigl\|\frac{T_k^2-(k+1)k}{k^2}\Bigr\|_{L^2(P)}
+ \frac{2}{k^2}\Bigl\|\sum_{h=1}^k(h-1)(T_h-T_{h-1}-1)\Bigr\|_{L^2(P)}
+ \frac{2}{k}.
$$

Note that

$$
\Bigl\|\sum_{h=1}^k(h-1)(T_h-T_{h-1}-1)\Bigr\|_{L^2(P)}^2
= \operatorname{Var}\Bigl(\sum_{h=1}^k(h-1)(T_h-T_{h-1})\Bigr)
= \sum_{h=1}^k(h-1)^2
= \frac{2k^3 - 3k^2 + k}{6},
$$

since the increments $T_h - T_{h-1}$ are i.i.d. exponential with unit mean and variance, and

$$
\Bigl\|\frac{T_k^2-(k+1)k}{2k^2}\Bigr\|_{L^2(P)}^2
= \frac{1}{4k^4}\operatorname{Var}(T_k^2)
= \frac{1}{k} + \frac{10}{4k^2} + \frac{6}{4k^3}.
$$

Collecting all these inequalities leads to (4.8).

References

[1] V. Bally, M.-P. Bavouzet-Morel, and M. Messaoud. Integration by parts formula for locally smooth laws and applications to sensitivity computations. Ann. Appl. Probab., 17(1):33-66, 2007.

[2] F. Hirsch and G. Lacombe. Elements of functional analysis, volume 192 of Graduate Texts in Mathematics. Springer-Verlag, New York, 1999.

[3] C. Houdré and N. Privault. Concentration and deviation inequalities in infinite dimensions via covariance representations. Bernoulli, 8(6):697-720, 2002.

[4] J. León, J.L. Solé, and J. Vives. A pathwise approach to backward and forward stochastic differential equations on the Poisson space. Stochastic Anal. Appl., 19(5):821-839, 2001.

[5] I. Nourdin. Lectures on Gaussian approximations with Malliavin calculus. Preprint, 2012.

[6] I. Nourdin and G. Peccati. Stein's method on Wiener chaos. Probab. Theory Related Fields, 145(1-2):75-118, 2009.

[7] G. Peccati, J.L. Solé, M.S. Taqqu, and F. Utzet. Stein's method and normal approximation of Poisson functionals. Ann. Probab., 38(2):443-478, 2010.

[8] G. Peccati and C. Thäle. Gamma limits and U-statistics on the Poisson space. Preprint, 2013.

[9] N. Privault. A calculus on Fock space and its probabilistic interpretations. Bull. Sci. Math., 123(2):97-114, 1999.

[10] N. Privault. Stochastic Analysis in Discrete and Continuous Settings, volume 1982 of Lecture Notes in Mathematics. Springer-Verlag, Berlin, 309 pp., 2009.

[11] W. Rudin. Real and Complex Analysis. McGraw-Hill, 1987.

[12] C. Stein. Estimation of the mean of a multivariate normal distribution. Ann. Statist., 9(6):1135-1151, 1981.


More information

Topics in fractional Brownian motion

Topics in fractional Brownian motion Topics in fractional Brownian motion Esko Valkeila Spring School, Jena 25.3. 2011 We plan to discuss the following items during these lectures: Fractional Brownian motion and its properties. Topics in

More information

Stochastic Processes II/ Wahrscheinlichkeitstheorie III. Lecture Notes

Stochastic Processes II/ Wahrscheinlichkeitstheorie III. Lecture Notes BMS Basic Course Stochastic Processes II/ Wahrscheinlichkeitstheorie III Michael Scheutzow Lecture Notes Technische Universität Berlin Sommersemester 218 preliminary version October 12th 218 Contents

More information

An almost sure invariance principle for additive functionals of Markov chains

An almost sure invariance principle for additive functionals of Markov chains Statistics and Probability Letters 78 2008 854 860 www.elsevier.com/locate/stapro An almost sure invariance principle for additive functionals of Markov chains F. Rassoul-Agha a, T. Seppäläinen b, a Department

More information

Stein s method, logarithmic Sobolev and transport inequalities

Stein s method, logarithmic Sobolev and transport inequalities Stein s method, logarithmic Sobolev and transport inequalities M. Ledoux University of Toulouse, France and Institut Universitaire de France Stein s method, logarithmic Sobolev and transport inequalities

More information

Mathematical Methods for Neurosciences. ENS - Master MVA Paris 6 - Master Maths-Bio ( )

Mathematical Methods for Neurosciences. ENS - Master MVA Paris 6 - Master Maths-Bio ( ) Mathematical Methods for Neurosciences. ENS - Master MVA Paris 6 - Master Maths-Bio (2014-2015) Etienne Tanré - Olivier Faugeras INRIA - Team Tosca November 26th, 2014 E. Tanré (INRIA - Team Tosca) Mathematical

More information

Gaussian Phase Transitions and Conic Intrinsic Volumes: Steining the Steiner formula

Gaussian Phase Transitions and Conic Intrinsic Volumes: Steining the Steiner formula Gaussian Phase Transitions and Conic Intrinsic Volumes: Steining the Steiner formula Larry Goldstein, University of Southern California Nourdin GIoVAnNi Peccati Luxembourg University University British

More information

STOCHASTIC DIFFERENTIAL EQUATIONS DRIVEN BY PROCESSES WITH INDEPENDENT INCREMENTS

STOCHASTIC DIFFERENTIAL EQUATIONS DRIVEN BY PROCESSES WITH INDEPENDENT INCREMENTS STOCHASTIC DIFFERENTIAL EQUATIONS DRIVEN BY PROCESSES WITH INDEPENDENT INCREMENTS DAMIR FILIPOVIĆ AND STEFAN TAPPE Abstract. This article considers infinite dimensional stochastic differential equations

More information

Metric Spaces and Topology

Metric Spaces and Topology Chapter 2 Metric Spaces and Topology From an engineering perspective, the most important way to construct a topology on a set is to define the topology in terms of a metric on the set. This approach underlies

More information

n E(X t T n = lim X s Tn = X s

n E(X t T n = lim X s Tn = X s Stochastic Calculus Example sheet - Lent 15 Michael Tehranchi Problem 1. Let X be a local martingale. Prove that X is a uniformly integrable martingale if and only X is of class D. Solution 1. If If direction:

More information

Large time behavior of reaction-diffusion equations with Bessel generators

Large time behavior of reaction-diffusion equations with Bessel generators Large time behavior of reaction-diffusion equations with Bessel generators José Alfredo López-Mimbela Nicolas Privault Abstract We investigate explosion in finite time of one-dimensional semilinear equations

More information

Calculus in Gauss Space

Calculus in Gauss Space Calculus in Gauss Space 1. The Gradient Operator The -dimensional Lebesgue space is the measurable space (E (E )) where E =[0 1) or E = R endowed with the Lebesgue measure, and the calculus of functions

More information

arxiv: v1 [math.pr] 10 Jan 2019

arxiv: v1 [math.pr] 10 Jan 2019 Gaussian lower bounds for the density via Malliavin calculus Nguyen Tien Dung arxiv:191.3248v1 [math.pr] 1 Jan 219 January 1, 219 Abstract The problem of obtaining a lower bound for the density is always

More information

The Continuity of SDE With Respect to Initial Value in the Total Variation

The Continuity of SDE With Respect to Initial Value in the Total Variation Ξ44fflΞ5» ο ffi fi $ Vol.44, No.5 2015 9" ADVANCES IN MATHEMATICS(CHINA) Sep., 2015 doi: 10.11845/sxjz.2014024b The Continuity of SDE With Respect to Initial Value in the Total Variation PENG Xuhui (1.

More information

Problem set 2 The central limit theorem.

Problem set 2 The central limit theorem. Problem set 2 The central limit theorem. Math 22a September 6, 204 Due Sept. 23 The purpose of this problem set is to walk through the proof of the central limit theorem of probability theory. Roughly

More information

Generalized Gaussian Bridges of Prediction-Invertible Processes

Generalized Gaussian Bridges of Prediction-Invertible Processes Generalized Gaussian Bridges of Prediction-Invertible Processes Tommi Sottinen 1 and Adil Yazigi University of Vaasa, Finland Modern Stochastics: Theory and Applications III September 1, 212, Kyiv, Ukraine

More information

ON THE PATHWISE UNIQUENESS OF SOLUTIONS OF STOCHASTIC DIFFERENTIAL EQUATIONS

ON THE PATHWISE UNIQUENESS OF SOLUTIONS OF STOCHASTIC DIFFERENTIAL EQUATIONS PORTUGALIAE MATHEMATICA Vol. 55 Fasc. 4 1998 ON THE PATHWISE UNIQUENESS OF SOLUTIONS OF STOCHASTIC DIFFERENTIAL EQUATIONS C. Sonoc Abstract: A sufficient condition for uniqueness of solutions of ordinary

More information

Harnack Inequalities and Applications for Stochastic Equations

Harnack Inequalities and Applications for Stochastic Equations p. 1/32 Harnack Inequalities and Applications for Stochastic Equations PhD Thesis Defense Shun-Xiang Ouyang Under the Supervision of Prof. Michael Röckner & Prof. Feng-Yu Wang March 6, 29 p. 2/32 Outline

More information

B. LAQUERRIÈRE AND N. PRIVAULT

B. LAQUERRIÈRE AND N. PRIVAULT DEVIAION INEQUALIIES FOR EXPONENIAL JUMP-DIFFUSION PROCESSES B. LAQUERRIÈRE AND N. PRIVAUL Abstract. In this note we obtain deviation inequalities for the law of exponential jump-diffusion processes at

More information

Stochastic Differential Equations.

Stochastic Differential Equations. Chapter 3 Stochastic Differential Equations. 3.1 Existence and Uniqueness. One of the ways of constructing a Diffusion process is to solve the stochastic differential equation dx(t) = σ(t, x(t)) dβ(t)

More information

In terms of measures: Exercise 1. Existence of a Gaussian process: Theorem 2. Remark 3.

In terms of measures: Exercise 1. Existence of a Gaussian process: Theorem 2. Remark 3. 1. GAUSSIAN PROCESSES A Gaussian process on a set T is a collection of random variables X =(X t ) t T on a common probability space such that for any n 1 and any t 1,...,t n T, the vector (X(t 1 ),...,X(t

More information

Brownian Motion. An Undergraduate Introduction to Financial Mathematics. J. Robert Buchanan. J. Robert Buchanan Brownian Motion

Brownian Motion. An Undergraduate Introduction to Financial Mathematics. J. Robert Buchanan. J. Robert Buchanan Brownian Motion Brownian Motion An Undergraduate Introduction to Financial Mathematics J. Robert Buchanan 2010 Background We have already seen that the limiting behavior of a discrete random walk yields a derivation of

More information

Asymptotics for posterior hazards

Asymptotics for posterior hazards Asymptotics for posterior hazards Igor Prünster University of Turin, Collegio Carlo Alberto and ICER Joint work with P. Di Biasi and G. Peccati Workshop on Limit Theorems and Applications Paris, 16th January

More information

Finite-dimensional spaces. C n is the space of n-tuples x = (x 1,..., x n ) of complex numbers. It is a Hilbert space with the inner product

Finite-dimensional spaces. C n is the space of n-tuples x = (x 1,..., x n ) of complex numbers. It is a Hilbert space with the inner product Chapter 4 Hilbert Spaces 4.1 Inner Product Spaces Inner Product Space. A complex vector space E is called an inner product space (or a pre-hilbert space, or a unitary space) if there is a mapping (, )

More information

Random Bernstein-Markov factors

Random Bernstein-Markov factors Random Bernstein-Markov factors Igor Pritsker and Koushik Ramachandran October 20, 208 Abstract For a polynomial P n of degree n, Bernstein s inequality states that P n n P n for all L p norms on the unit

More information

Monte-Carlo MMD-MA, Université Paris-Dauphine. Xiaolu Tan

Monte-Carlo MMD-MA, Université Paris-Dauphine. Xiaolu Tan Monte-Carlo MMD-MA, Université Paris-Dauphine Xiaolu Tan tan@ceremade.dauphine.fr Septembre 2015 Contents 1 Introduction 1 1.1 The principle.................................. 1 1.2 The error analysis

More information

THE INVERSE FUNCTION THEOREM

THE INVERSE FUNCTION THEOREM THE INVERSE FUNCTION THEOREM W. PATRICK HOOPER The implicit function theorem is the following result: Theorem 1. Let f be a C 1 function from a neighborhood of a point a R n into R n. Suppose A = Df(a)

More information

Notes 9 : Infinitely divisible and stable laws

Notes 9 : Infinitely divisible and stable laws Notes 9 : Infinitely divisible and stable laws Math 733 - Fall 203 Lecturer: Sebastien Roch References: [Dur0, Section 3.7, 3.8], [Shi96, Section III.6]. Infinitely divisible distributions Recall: EX 9.

More information