Normal approximation of Poisson functionals in Kolmogorov distance
Matthias Schulte

Abstract. Peccati, Solé, Taqqu, and Utzet recently combined Stein's method and Malliavin calculus to obtain a bound for the Wasserstein distance between a Poisson functional and a Gaussian random variable. Convergence in the Wasserstein distance always implies convergence in the Kolmogorov distance, at a possibly weaker rate. But there are many examples of central limit theorems having the same rate for both distances. The aim of this paper is to show this behaviour for a large class of Poisson functionals, namely so-called U-statistics of Poisson point processes. The technique used by Peccati et al. is modified to establish a similar bound for the Kolmogorov distance between a Poisson functional and a Gaussian random variable. This bound is evaluated for a U-statistic, and it is shown that the resulting expression is, up to a constant, the same as for the Wasserstein distance.

Key words: Central limit theorem, Malliavin calculus, Poisson point process, Stein's method, U-statistic, Wiener-Itô chaos expansion

MSC (2010): Primary: 60F05, 60H07; Secondary: 60G55

1 Introduction and results

Let $\eta$ be a Poisson point process over a Borel space $(X, \mathcal{X})$ with a $\sigma$-finite intensity measure $\mu$, and let $F = F(\eta)$ be a random variable depending on the Poisson point process $\eta$. In the following, we call such random variables Poisson functionals. Moreover, we assume that $F$ is square integrable (we write $F \in L^2(P_\eta)$) and satisfies $EF = 0$. By $N$ we denote a standard Gaussian random variable. In [11], Peccati, Solé, Taqqu, and Utzet derived, by a combination of Stein's method and Malliavin calculus, the upper bound

\[ d_W(F, N) \le E|1 - \langle DF, -DL^{-1}F \rangle_{L^2(\mu)}| + \int_X E\big[(D_z F)^2 \, |D_z L^{-1}F|\big] \, \mu(dz) \tag{1} \]

for the Wasserstein distance between $F$ and $N$. Here, $\langle \cdot, \cdot \rangle_{L^2(\mu)}$ stands for the inner product in $L^2(\mu)$, and the difference operator $D$ and the inverse $L^{-1}$ of the Ornstein-Uhlenbeck generator are operators from Malliavin calculus.
Karlsruhe Institute of Technology, Institute of Stochastics, D-76128 Karlsruhe, matthias.schulte[at]kit.edu

The underlying idea of these operators
is that each square integrable Poisson functional has a representation

\[ F = EF + \sum_{n=1}^{\infty} I_n(f_n), \]

where the $f_n$ are square integrable functions on $X^n$, $I_n$ stands for the $n$-th multiple Wiener-Itô integral, and the right-hand side converges in $L^2(P_\eta)$. This decomposition is called the Wiener-Itô chaos expansion, and the Malliavin operators of $F$ are defined via their chaos expansions. The operators $D_z F$ and $D_z L^{-1} F$ that occur in (1) are given by

\[ D_z F = \sum_{n=1}^{\infty} n \, I_{n-1}(f_n(z, \cdot)) \quad \text{and} \quad D_z L^{-1} F = -\sum_{n=1}^{\infty} I_{n-1}(f_n(z, \cdot)) \]

for $z \in X$. Here, $f_n(z, \cdot)$ stands for the function on $X^{n-1}$ we obtain by taking $z$ as first argument. For exact definitions including the domains and more details on the Malliavin operators we refer to Section 2.

The Wasserstein distance between two random variables $Y$ and $Z$ is defined by

\[ d_W(Y, Z) = \sup_{h \in \operatorname{Lip}(1)} |Eh(Y) - Eh(Z)|, \]

where $\operatorname{Lip}(1)$ is the set of all functions $h: \mathbb{R} \to \mathbb{R}$ with Lipschitz constant less than or equal to one. Another commonly used distance for random variables is the Kolmogorov distance

\[ d_K(Y, Z) = \sup_{s \in \mathbb{R}} |P(Y \le s) - P(Z \le s)|, \]

which is the supremum norm of the difference of the distribution functions of $Y$ and $Z$. Because of this straightforward interpretation, one is often more interested in the Kolmogorov distance than in the Wasserstein distance. For the important case that $Z$ is a standard Gaussian random variable $N$, it is known (see [2, Theorem 3.1]) that

\[ d_K(Y, N) \le 2 \sqrt{d_W(Y, N)}. \tag{2} \]

This inequality gives us a weaker rate of convergence for the Kolmogorov distance than for the Wasserstein distance. But for many classical central limit theorems, one actually has the same rate of convergence for both metrics. In order to overcome the problem that a detour via the Wasserstein distance and the inequality (2) often yields a suboptimal rate of convergence for the Kolmogorov distance, we derive a bound similar to (1) for the Kolmogorov distance by a modification of the proof in [11].

Theorem 1.1 Let $F \in L^2(P_\eta)$ with $EF = 0$ be in the domain of $D$ and let $N$ be a
standard Gaussian random variable. Then

\[ \begin{aligned} d_K(F, N) \le {} & E|1 - \langle DF, -DL^{-1}F \rangle_{L^2(\mu)}| + 2 E\langle (DF)^2, |DL^{-1}F| \rangle_{L^2(\mu)} \\ & + 2 E\langle (DF)^2, |F \, DL^{-1}F| \rangle_{L^2(\mu)} + 2 E\langle (DF)^2, |DF \, DL^{-1}F| \rangle_{L^2(\mu)} \\ & + \sup_{s \in \mathbb{R}} E\langle |D I(F > s) \, DF|, |DL^{-1}F| \rangle_{L^2(\mu)} \\ \le {} & E|1 - \langle DF, -DL^{-1}F \rangle_{L^2(\mu)}| + 2 c(F) \sqrt{E\langle (DF)^2, (DL^{-1}F)^2 \rangle_{L^2(\mu)}} \\ & + \sup_{s \in \mathbb{R}} E\langle |D I(F > s) \, DF|, |DL^{-1}F| \rangle_{L^2(\mu)} \end{aligned} \tag{3} \]

with

\[ c(F) = \big( E\langle (DF)^2, (DF)^2 \rangle_{L^2(\mu)} \big)^{1/2} + \big( E\langle DF, DF \rangle_{L^2(\mu)}^2 \big)^{1/4} \big( (EF^4)^{1/4} + 1 \big). \]

Comparing (1) and (3), one notes that both terms of the Wasserstein bound (1) also occur in (3), which means that the bound for the Kolmogorov distance is always larger. We apply Theorem 1.1 to two situations where we obtain the same rate of convergence for the Kolmogorov distance and the Wasserstein distance. At first we derive the classical Berry-Esseen inequality with the optimal rate of convergence for the normal approximation of a classical Poisson random variable. As another application of Theorem 1.1, we consider so-called U-statistics of Poisson point processes, which are defined as

\[ F = \sum_{(x_1, \ldots, x_k) \in \eta^k_{\neq}} f(x_1, \ldots, x_k), \]

where $k \in \mathbb{N}$, $f \in L^1_s(\mu^k)$, and $\eta^k_{\neq}$ is the set of all $k$-tuples of distinct points of $\eta$. In [4, 5] and [14], Lachièze-Rey and Peccati and Reitzner and Schulte used the bound (1) for the Wasserstein distance to derive central limit theorems with explicit rates of convergence for such Poisson functionals occurring in stochastic geometry and random graph theory. Now Theorem 1.1 allows us to replace the Wasserstein distance by the Kolmogorov distance without changing the rate of convergence, which means that the inequality (2) is not sharp for this class of Poisson functionals.

The main finding of this work, Theorem 1.1, is refined and proven in a different way in the subsequent paper [3] by Eichelsbacher and Thäle. In [7], Last, Peccati, and Schulte further simplify the bounds for the normal approximation of Poisson functionals in Kolmogorov distance from Theorem 1.1 and [3] and apply them to several problems, where they provide new, presumably optimal rates of convergence.
Our result for the normal approximation of U-statistics of Poisson point processes (see Theorem 4.2) is used by Reitzner, Schulte, and Thäle in [15] and [16] to derive central limit theorems with rates of convergence for the Kolmogorov distance for the total edge length of the Gilbert graph and for distances between non-intersecting Poisson k-flats, respectively.

This paper is organized in the following way. Before we prove our main result, Theorem 1.1, in Section 3, we introduce some facts from Malliavin calculus and Stein's method in Section 2. The applications of Theorem 1.1 are discussed in Section 4, and the result for U-statistics is shown in Section 5.
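Before turning to the formal machinery, it may help to see the Kolmogorov distance in action. The following Python sketch (illustrative only and not part of the paper; the intensity $t$, the sample size, and all helper names are choices of this sketch) draws standardized Poisson($t$) variables, which appear again as an example in Section 4, and evaluates the empirical Kolmogorov distance to the standard Gaussian distribution function:

```python
import math
import random

def poisson_sample(t, rng):
    """Draw a Poisson(t) variable by counting unit-rate exponential arrivals in [0, t]."""
    n, s = 0, rng.expovariate(1.0)
    while s <= t:
        n += 1
        s += rng.expovariate(1.0)
    return n

def normal_cdf(x):
    """Distribution function of a standard Gaussian random variable."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def empirical_dk(sample):
    """sup_s |F_n(s) - Phi(s)|; the supremum is attained at a jump of the empirical CDF."""
    xs = sorted(sample)
    n = len(xs)
    return max(
        max(abs((i + 1) / n - normal_cdf(x)), abs(i / n - normal_cdf(x)))
        for i, x in enumerate(xs)
    )

rng = random.Random(0)
t = 100.0
# standardized Poisson random variables (Y - t) / sqrt(t)
sample = [(poisson_sample(t, rng) - t) / math.sqrt(t) for _ in range(20000)]
dk = empirical_dk(sample)
print(dk)
```

The estimate combines the true distance (of order $t^{-1/2}$) with Monte Carlo noise of order $n^{-1/2}$ in the sample size $n$, so it should only be read as an illustration of the quantity being bounded in this paper.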
In this paper, we use the following notation. By $L^p(P_\eta)$, $p > 0$, we denote the set of random variables $Y$ depending on a Poisson point process $\eta$ such that $E|Y|^p < \infty$. Let $L^p(\mu^n)$, $p > 0$, be the set of functions $f: X^n \to \overline{\mathbb{R}} := \mathbb{R} \cup \{\pm\infty\}$ satisfying

\[ \int_{X^n} |f|^p \, d\mu^n = \int_{X^n} |f(x_1, \ldots, x_n)|^p \, \mu^n(d(x_1, \ldots, x_n)) < \infty, \]

and let $\|\cdot\|_n$ and $\langle \cdot, \cdot \rangle_{L^2(\mu^n)}$ be the norm and the inner product in $L^2(\mu^n)$, respectively. By $L^p_s(\mu^n)$ we denote the set of all functions $f \in L^p(\mu^n)$ that are symmetric, i.e. invariant under permutations of their arguments.

2 Preliminaries

Malliavin calculus for Poisson functionals. In the sequel, we briefly introduce three Malliavin operators and some of their properties that are necessary for the proofs in this paper. For more details on Malliavin calculus for Poisson functionals we refer to [8, 10, 11, 13] and the references therein.

By $I_n(\cdot)$, $n \ge 1$, we denote the $n$-th multiple Wiener-Itô integral, which is defined for all functions $f \in L^2_s(\mu^n)$ and satisfies $E I_n(f) = 0$. The multiple Wiener-Itô integrals are orthogonal in the sense that

\[ E[I_m(f) I_n(g)] = \begin{cases} n! \, \langle f, g \rangle_{L^2(\mu^n)}, & m = n, \\ 0, & m \neq n, \end{cases} \tag{4} \]

for all $f \in L^2_s(\mu^m)$, $g \in L^2_s(\mu^n)$, $m, n \ge 1$. We use the convention $I_0(c) = c$ for $c \in \mathbb{R}$. It is known (see [8] for a proof) that every Poisson functional $F \in L^2(P_\eta)$ has a unique so-called Wiener-Itô chaos expansion

\[ F = EF + \sum_{n=1}^{\infty} I_n(f_n) \tag{5} \]

with $f_n \in L^2_s(\mu^n)$, where the series converges in $L^2(P_\eta)$. In the following, we call the functions $f_n$ kernels of the Wiener-Itô chaos expansion of $F$ and say that $F$ has a chaos expansion of order $k$ if $f_n = 0$ for all $n > k$. Combining (4) and (5), we obtain

\[ \operatorname{Var} F = \sum_{n=1}^{\infty} n! \, \|f_n\|_n^2. \]

The representation (5) allows us to define the difference operator $D$, the Ornstein-Uhlenbeck generator $L$, and the Skorohod integral $\delta$ in the following way:

Definition 2.1 Let $F \in L^2(P_\eta)$ with the Wiener-Itô chaos expansion (5). If $\sum_{n=1}^{\infty} n \, n! \, \|f_n\|_n^2 < \infty$, then the random function $z \mapsto D_z F$ defined by

\[ D_z F = \sum_{n=1}^{\infty} n \, I_{n-1}(f_n(z, \cdot)) \]
is called the difference operator of $F$. For $\sum_{n=1}^{\infty} n^2 \, n! \, \|f_n\|_n^2 < \infty$ the Ornstein-Uhlenbeck generator of $F$, denoted by $LF$, is given by

\[ LF = -\sum_{n=1}^{\infty} n \, I_n(f_n). \]

Let $z \mapsto g(z)$ be a random function with a chaos expansion

\[ g(z) = g_0(z) + \sum_{n=1}^{\infty} I_n(g_n(z, \cdot)), \quad g_n(z, \cdot) \in L^2_s(\mu^n), \]

for $\mu$-almost all $z$ and $\sum_{n=0}^{\infty} (n+1)! \, \|\tilde{g}_n\|_{n+1}^2 < \infty$. Then the Skorohod integral of $g$ is the random variable $\delta(g)$ defined by

\[ \delta(g) = \sum_{n=0}^{\infty} I_{n+1}(\tilde{g}_n), \]

where $\tilde{g}_n$ is the symmetrization $\tilde{g}_n(x_1, \ldots, x_{n+1}) = \frac{1}{(n+1)!} \sum_{\sigma} g_n(x_{\sigma(1)}, \ldots, x_{\sigma(n+1)})$ over all permutations $\sigma$ of the $n+1$ variables.

We denote the domains of these operators by $\operatorname{dom} D$, $\operatorname{dom} L$, and $\operatorname{dom} \delta$. The difference operator also has the geometric interpretation

\[ D_z F = F(\eta + \delta_z) - F(\eta) \tag{6} \]

a.s. for $\mu$-almost all $z$, where $\delta_z$ stands for the Dirac measure concentrated at the point $z$, whence it is sometimes called add-one-cost operator (see Theorem 3.3 in [8]). If $F \notin \operatorname{dom} D$, we can define the difference operator by (6). If we iterate this definition and put $D^n_{x_1, \ldots, x_n} F = D_{x_n} D^{n-1}_{x_1, \ldots, x_{n-1}} F$, the kernels of the Wiener-Itô chaos expansion of $F$ in (5) are given by the formula

\[ f_n(x_1, \ldots, x_n) = \frac{1}{n!} E[D^n_{x_1, \ldots, x_n} F] = \frac{1}{n!} \, E \sum_{I \subseteq \{1, \ldots, n\}} (-1)^{n + |I|} F\Big(\eta + \sum_{i \in I} \delta_{x_i}\Big) \]

for $x_1, \ldots, x_n \in X$ (see Theorem 1.3 in [8]). For centred random variables $F \in L^2(P_\eta)$, i.e. $EF = 0$, the inverse Ornstein-Uhlenbeck generator is given by

\[ L^{-1} F = -\sum_{n=1}^{\infty} \frac{1}{n} I_n(f_n). \]

The following lemma summarizes how the operators from Malliavin calculus are related.

Lemma 2.2 a) For every $F \in \operatorname{dom} L$ it holds that $F \in \operatorname{dom} D$, $DF \in \operatorname{dom} \delta$, and

\[ \delta(-DF) = LF. \tag{7} \]
b) Let $F \in \operatorname{dom} D$ and $g \in \operatorname{dom} \delta$. Then

\[ E\langle DF, g \rangle_{L^2(\mu)} = E[F \, \delta(g)]. \tag{8} \]

For proofs we refer to [11] and [10], respectively. Equation (8) is sometimes called integration by parts formula. Because of this identity, one can see the difference operator and the Skorohod integral as dual operators.

For our applications in Section 4 we need a special integration by parts formula, where it is not required that the first Poisson functional is in $\operatorname{dom} D$. In this case, the difference operator is given by (6).

Lemma 2.3 Let $F \in L^2(P_\eta)$, $s \in \mathbb{R}$, and $g \in \operatorname{dom} \delta$ such that $g(z)$ has a Wiener-Itô chaos expansion of order $k$ for $\mu$-almost all $z$. Moreover, assume that $D_z I(F > s) \, g(z) \ge 0$ a.s. for $\mu$-almost all $z$. Then

\[ E\langle D I(F > s), g \rangle_{L^2(\mu)} = E[I(F > s) \, \delta(g)]. \]

Proof: It is easy to see that $I(F > s) \in L^2(P_\eta)$, whence it has a Wiener-Itô chaos expansion $I(F > s) = \sum_{n=0}^{\infty} I_n(h_n)$ with $h_0 = E[I(F > s)]$ and kernels $h_n \in L^2_s(\mu^n)$, $n \ge 1$, that are given by

\[ h_n(x_1, \ldots, x_n) = \frac{1}{n!} E[D^n_{x_1, \ldots, x_n} I(F > s)]. \]

For a fixed $z$ the expression $D_z I(F > s)$ is bounded, and its chaos expansion has the kernels

\[ \frac{1}{n!} E[D^n_{x_1, \ldots, x_n} D_z I(F > s)] = \frac{1}{n!} E[D^{n+1}_{z, x_1, \ldots, x_n} I(F > s)] = (n+1) \, h_{n+1}(z, x_1, \ldots, x_n). \]

Hence, we obtain the representation

\[ D_z I(F > s) = \sum_{n=1}^{\infty} n \, I_{n-1}(h_n(z, \cdot)) \]

for all $z \in X$. From Fubini's theorem and the orthogonality (4) of the multiple Wiener-Itô integrals it follows that

\[ \begin{aligned} E\langle D I(F > s), g \rangle_{L^2(\mu)} & = \int_X E[D_z I(F > s) \, g(z)] \, \mu(dz) = \int_X E\Big[ \sum_{n=1}^{\infty} n \, I_{n-1}(h_n(z, \cdot)) \, \sum_{n=0}^{k} I_n(g_n(z, \cdot)) \Big] \, \mu(dz) \\ & = \sum_{n=1}^{k+1} n! \int_X \int_{X^{n-1}} h_n(z, x_1, \ldots, x_{n-1}) \, g_{n-1}(z, x_1, \ldots, x_{n-1}) \, \mu^{n-1}(d(x_1, \ldots, x_{n-1})) \, \mu(dz). \end{aligned} \tag{9} \]
On the other hand, we have

\[ \begin{aligned} E[I(F > s) \, \delta(g)] & = E\Big[ \sum_{n=0}^{\infty} I_n(h_n) \, \sum_{n=0}^{k} I_{n+1}(\tilde{g}_n) \Big] = \sum_{n=1}^{k+1} n! \int_{X^n} h_n(x_1, \ldots, x_n) \, \tilde{g}_{n-1}(x_1, \ldots, x_n) \, \mu^n(d(x_1, \ldots, x_n)) \\ & = \sum_{n=1}^{k+1} n! \int_{X^n} h_n(x_1, \ldots, x_n) \, g_{n-1}(x_1, \ldots, x_n) \, \mu^n(d(x_1, \ldots, x_n)). \end{aligned} \tag{10} \]

Here, we use the symmetry of $h_n$ in the last step. Comparing (9) and (10) concludes the proof.

The next lemma provides an upper bound for the second moment of a Skorohod integral, which is used in Section 5.

Lemma 2.4 Let $f \in L^2(\mu^{k+1})$ be symmetric in its last $k$ arguments and let $g(z) = I_k(f(z, \cdot))$. Then

\[ E[\delta(g)^2] \le (k+1) \int_X E[I_k(f(z, \cdot))^2] \, \mu(dz). \]

Proof: By the definition of $\delta$, we obtain $\delta(g) = I_{k+1}(\tilde{f})$ with the symmetrization $\tilde{f}(x_1, \ldots, x_{k+1}) = \frac{1}{(k+1)!} \sum_{\sigma} f(x_{\sigma(1)}, \ldots, x_{\sigma(k+1)})$ as above. From the Cauchy-Schwarz inequality, it follows that $\|\tilde{f}\|_{k+1}^2 \le \|f\|_{k+1}^2$. Combining this with Fubini's theorem, we have

\[ E[\delta(g)^2] = (k+1)! \, \|\tilde{f}\|_{k+1}^2 \le (k+1)! \, \|f\|_{k+1}^2 = (k+1) \int_X E[I_k(f(z, \cdot))^2] \, \mu(dz), \]

which concludes the proof.

Stein's method. Besides Malliavin calculus, our proof of Theorem 1.1 rests upon Stein's method, which goes back to Charles Stein [17, 18] and is a powerful tool for proving limit theorems. For a detailed and more general introduction to this topic we refer to [1, 2, 18]. Very fundamental for this approach is the following lemma (see Chapter II in [18]):

Lemma 2.5 For $s \in \mathbb{R}$ the function

\[ g_s(w) = e^{\frac{w^2}{2}} \int_{-\infty}^{w} \big( I(u \in (-\infty, s]) - P(N \le s) \big) \, e^{-\frac{u^2}{2}} \, du \tag{11} \]

is a solution of the differential equation

\[ g_s'(w) - w \, g_s(w) = I(w \in (-\infty, s]) - P(N \le s) \tag{12} \]
and satisfies

\[ 0 < g_s(w) \le \frac{\sqrt{2\pi}}{4}, \quad |g_s'(w)| \le 1, \quad \text{and} \quad |w \, g_s(w)| \le 1 \tag{13} \]

for any $w \in \mathbb{R}$.

Equation (12) is usually called Stein's equation. The function $g_s$ is infinitely differentiable on $\mathbb{R} \setminus \{s\}$, but it is not differentiable in $s$. We denote the left-sided and right-sided limits of the derivatives in $s$ by $g_s^{(m)}(s-)$ and $g_s^{(m)}(s+)$, respectively. For the first derivative, a direct computation proves

\[ g_s'(s+) = -1 + g_s'(s-), \tag{14} \]

and we define $g_s'(s) := g_s'(s-)$. By replacing $w$ by a random variable $Z$ and taking the expectation in (12), one obtains

\[ E[g_s'(Z) - Z \, g_s(Z)] = P(Z \le s) - P(N \le s) \]

and, as a consequence of the definition of the Kolmogorov distance,

\[ d_K(Z, N) = \sup_{s \in \mathbb{R}} |E[g_s'(Z) - Z \, g_s(Z)]|. \tag{15} \]

The identity (15) will be our starting point in Section 3. Note furthermore that we obtain, by combining (12) and (13), the upper bound

\[ |g_s''(w)| \le \frac{\sqrt{2\pi}}{4} + |w| \tag{16} \]

for $w \in \mathbb{R} \setminus \{s\}$.

3 Proof of Theorem 1.1

By a combination of Malliavin calculus and Stein's method similar to that in [11], we derive the upper bound for the Kolmogorov distance.

Proof of Theorem 1.1: Using the identity (7) and the integration by parts formula (8), we obtain

\[ E[F \, g_s(F)] = E[L L^{-1} F \, g_s(F)] = E[\delta(-D L^{-1} F) \, g_s(F)] = E\langle -D L^{-1} F, D g_s(F) \rangle_{L^2(\mu)}. \tag{17} \]

In order to compute $D_z g_s(F)$, we fix $z \in X$ and consider the following cases:

1. $F, F + D_z F \le s$ or $F, F + D_z F > s$;
2. $F \le s < F + D_z F$;
3. $F + D_z F \le s < F$.
For $F, F + D_z F \le s$ or $F, F + D_z F > s$, it follows from (6) and a Taylor expansion that

\[ D_z g_s(F) = g_s(F + D_z F) - g_s(F) = g_s'(F) \, D_z F + \tfrac{1}{2} g_s''(\tilde{F}) (D_z F)^2 =: g_s'(F) \, D_z F + r_1(F, z, s), \]

where $\tilde{F}$ is between $F$ and $F + D_z F$. For $F \le s < F + D_z F$, we obtain by (6), Taylor expansion, and (14)

\[ \begin{aligned} D_z g_s(F) & = g_s(F + D_z F) - g_s(F) = g_s(F + D_z F) - g_s(s) + g_s(s) - g_s(F) \\ & = g_s'(s+)(F + D_z F - s) + \tfrac{1}{2} g_s''(\tilde{F}_1)(F + D_z F - s)^2 + g_s'(F)(s - F) + \tfrac{1}{2} g_s''(\tilde{F}_2)(s - F)^2 \\ & = g_s'(F) \, D_z F - (F + D_z F - s) + (g_s'(s-) - g_s'(F))(F + D_z F - s) \\ & \quad + \tfrac{1}{2} g_s''(\tilde{F}_1)(F + D_z F - s)^2 + \tfrac{1}{2} g_s''(\tilde{F}_2)(s - F)^2 \\ & = g_s'(F) \, D_z F - (F + D_z F - s) + g_s''(\tilde{F}_0)(s - F)(F + D_z F - s) \\ & \quad + \tfrac{1}{2} g_s''(\tilde{F}_1)(F + D_z F - s)^2 + \tfrac{1}{2} g_s''(\tilde{F}_2)(s - F)^2 \\ & =: g_s'(F) \, D_z F - (F + D_z F - s) + r_2(F, z, s) \end{aligned} \]

with $\tilde{F}_0, \tilde{F}_1, \tilde{F}_2 \in (F, F + D_z F)$. For $F + D_z F \le s < F$, we have analogously

\[ \begin{aligned} D_z g_s(F) & = g_s(F + D_z F) - g_s(s) + g_s(s) - g_s(F) \\ & = g_s'(s-)(F + D_z F - s) + \tfrac{1}{2} g_s''(\tilde{F}_1)(F + D_z F - s)^2 + g_s'(F)(s - F) + \tfrac{1}{2} g_s''(\tilde{F}_2)(s - F)^2 \\ & = g_s'(F) \, D_z F + (F + D_z F - s) + (g_s'(s+) - g_s'(F))(F + D_z F - s) \\ & \quad + \tfrac{1}{2} g_s''(\tilde{F}_1)(F + D_z F - s)^2 + \tfrac{1}{2} g_s''(\tilde{F}_2)(s - F)^2 \\ & =: g_s'(F) \, D_z F + (F + D_z F - s) + r_2(F, z, s) \end{aligned} \]

with $\tilde{F}_0, \tilde{F}_1, \tilde{F}_2 \in (F + D_z F, F)$. Thus, $D_z g_s(F)$ has a representation

\[ D_z g_s(F) = g_s'(F) \, D_z F + R(F, z, s), \tag{18} \]

where $R(F, z, s)$ is given by

\[ \begin{aligned} R(F, z, s) = {} & \big( I(F, F + D_z F \le s) + I(F, F + D_z F > s) \big) \, r_1(F, z, s) \\ & + \big( I(F \le s < F + D_z F) + I(F + D_z F \le s < F) \big) \big( r_2(F, z, s) - |F + D_z F - s| \big). \end{aligned} \]
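The case analysis above repeatedly evaluates the solution $g_s$ of Stein's equation. For a concrete check, $g_s$ admits the standard closed form $g_s(w) = \sqrt{2\pi} \, e^{w^2/2} \, \Phi(\min(w, s)) (1 - \Phi(\max(w, s)))$, where $\Phi$ is the standard Gaussian distribution function; this reformulation is not spelled out in the text. The following Python sketch (illustrative only; the test value of $s$ and the test points are arbitrary) verifies numerically, away from the kink at $w = s$, that this function solves $g_s'(w) - w g_s(w) = I(w \le s) - P(N \le s)$ and respects the bound $0 < g_s(w) \le \sqrt{2\pi}/4$:

```python
import math

def Phi(x):
    """Standard Gaussian distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def g(s, w):
    """Closed form of the bounded solution of Stein's equation for the test point s."""
    return math.sqrt(2.0 * math.pi) * math.exp(w * w / 2.0) \
        * Phi(min(w, s)) * (1.0 - Phi(max(w, s)))

s, h = 0.3, 1e-6
for w in [-2.0, -0.5, 0.0, 1.0, 2.5]:  # points away from the kink at w = s
    # central difference for g_s'(w), then Stein's equation residual
    lhs = (g(s, w + h) - g(s, w - h)) / (2.0 * h) - w * g(s, w)
    rhs = (1.0 if w <= s else 0.0) - Phi(s)
    assert abs(lhs - rhs) < 1e-4
    assert 0.0 < g(s, w) <= math.sqrt(2.0 * math.pi) / 4.0 + 1e-12
print("Stein equation verified for s =", s)
```

At $w = s$ itself the derivative jumps by $-1$, which is exactly the jump exploited in the case distinction of the proof.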
Combining (17) and (18) yields

\[ E[g_s'(F) - F \, g_s(F)] = E\big[ g_s'(F) - \langle g_s'(F) \, DF + R(F, \cdot, s), -D L^{-1} F \rangle_{L^2(\mu)} \big]. \]

Thus, the triangle inequality and $|g_s'(F)| \le 1$ lead to

\[ \begin{aligned} |E[g_s'(F) - F \, g_s(F)]| & \le E\big[ |g_s'(F)| \, |1 - \langle DF, -D L^{-1} F \rangle_{L^2(\mu)}| \big] + E\langle |R(F, \cdot, s)|, |D L^{-1} F| \rangle_{L^2(\mu)} \\ & \le E|1 - \langle DF, -D L^{-1} F \rangle_{L^2(\mu)}| + E\langle |R(F, \cdot, s)|, |D L^{-1} F| \rangle_{L^2(\mu)}. \end{aligned} \tag{19} \]

In $r_2(F, z, s)$, we know that $s$ is between $F$ and $F + D_z F$, so that $|F + D_z F - s| \le |D_z F|$ and $|s - F| \le |D_z F|$. The inequality (16) and the fact that $\tilde{F}_i$ is between $F$ and $F + D_z F$ allow us to bound all second derivatives in $R(F, z, s)$ by

\[ |g_s''(\tilde{F}_i)| \le \frac{\sqrt{2\pi}}{4} + |F| + |D_z F|. \]

Now it is easy to see that

\[ \begin{aligned} |R(F, z, s)| \le {} & \big( I(F, F + D_z F \le s) + I(F, F + D_z F > s) \big) \, \tfrac{1}{2} \Big( \frac{\sqrt{2\pi}}{4} + |F| + |D_z F| \Big) (D_z F)^2 \\ & + \big( I(F \le s < F + D_z F) + I(F + D_z F \le s < F) \big) \, |D_z F| \\ & + \big( I(F \le s < F + D_z F) + I(F + D_z F \le s < F) \big) \, 2 \Big( \frac{\sqrt{2\pi}}{4} + |F| + |D_z F| \Big) (D_z F)^2 \\ \le {} & 2 \Big( \frac{\sqrt{2\pi}}{4} + |F| + |D_z F| \Big) (D_z F)^2 + \big( I(F \le s < F + D_z F) + I(F + D_z F \le s < F) \big) \, |D_z F|. \end{aligned} \]

By (6), the last summand can be rewritten as

\[ \big( I(F \le s < F + D_z F) + I(F + D_z F \le s < F) \big) \, |D_z F| = |D_z I(F > s) \, D_z F|. \]

Hence, using $\sqrt{2\pi}/2 \le 2$, it follows directly that

\[ \begin{aligned} E\langle |R(F, \cdot, s)|, |D L^{-1} F| \rangle_{L^2(\mu)} \le {} & 2 E\langle (DF)^2, |D L^{-1} F| \rangle_{L^2(\mu)} + 2 E\langle (DF)^2, |F \, D L^{-1} F| \rangle_{L^2(\mu)} \\ & + 2 E\langle (DF)^2, |DF \, D L^{-1} F| \rangle_{L^2(\mu)} + E\langle |D I(F > s) \, DF|, |D L^{-1} F| \rangle_{L^2(\mu)}. \end{aligned} \]

Putting this into (19) concludes the proof of the first inequality in (3). The second bound in (3) is a direct consequence of the Cauchy-Schwarz inequality.

In [11], the right-hand side of (17) is evaluated for twice differentiable functions $f: \mathbb{R} \to \mathbb{R}$ with $\sup_{x \in \mathbb{R}} |f'(x)| \le 1$ and $\sup_{x \in \mathbb{R}} |f''(x)| \le 2$ (for the Wasserstein distance
the solutions of Stein's equation must have these properties) instead of the functions $g_s$ as defined in (11). For such a function $f$ it holds that $D_z f(F) = f'(F) \, D_z F + r(F)$ with $|r(F)| \le (D_z F)^2$. Since this representation is easier than the representation we obtain for $D_z g_s(F)$, the bound for the Wasserstein distance in (1) is shorter and easier to evaluate than the bound for the Kolmogorov distance in (3).

4 Applications of Theorem 1.1

Normal approximation of a Poisson random variable. As a first application of Theorem 1.1, we compute an upper bound for the Kolmogorov distance between a Poisson random variable $Y$ with parameter $t > 0$ and a normal distribution. In Example 3.5 in [11], the bound (1) is used to compute a bound for the Wasserstein distance, and the known optimal rate of convergence $t^{-1/2}$ is obtained.

$Y$ has the same distribution as $F_t = \sum_{x \in \eta_t} 1$, where $\eta_t$ is a Poisson point process on $[0, 1]$ with $t$ times the restriction of the Lebesgue measure to $[0, 1]$ as intensity measure $\mu_t$. In the following, we denote by $I_{n,t}(\cdot)$ the $n$-th multiple Wiener-Itô integral with respect to $\eta_t$. The representation

\[ I_{1,t}(f) = \sum_{x \in \eta_t} f(x) - \int f(x) \, \mu_t(dx) \]

for a Wiener-Itô integral of a function $f \in L^1(\mu_t) \cap L^2(\mu_t)$ and the fact that

\[ F_t = t \int_0^1 dx + \Big( \sum_{x \in \eta_t} 1 - t \int_0^1 dx \Big) \]

imply that $F_t$ has the Wiener-Itô chaos expansion $F_t = EF_t + I_{1,t}(f_1) = t + I_{1,t}(1)$. Hence, the standardized random variable

\[ G_t = \frac{F_t - EF_t}{\sqrt{\operatorname{Var} F_t}} = \frac{F_t - t}{\sqrt{t}} \]

has the chaos expansion $G_t = I_{1,t}(1)/\sqrt{t}$, and $D_z G_t = 1/\sqrt{t}$ and $D_z L^{-1} G_t = -1/\sqrt{t}$ for $z \in [0, 1]$. It is easy to see that

\[ E|1 - \langle D G_t, -D L^{-1} G_t \rangle_{L^2(\mu_t)}| = |1 - \langle 1/\sqrt{t}, 1/\sqrt{t} \rangle_{L^2(\mu_t)}| = |1 - t/t| = 0. \]

We obtain

\[ E\langle (D G_t)^2, (D L^{-1} G_t)^2 \rangle_{L^2(\mu_t)} = E\langle (D G_t)^2, (D G_t)^2 \rangle_{L^2(\mu_t)} = \frac{1}{t}, \quad E\langle D G_t, D G_t \rangle_{L^2(\mu_t)}^2 = 1, \]

and $E G_t^4 = 3 + 1/t$ by analogous computations. Since $D_z I(G_t > s) \, D_z G_t \, (-D_z L^{-1} G_t) \ge 0$ for $s \in \mathbb{R}$ and $z \in [0, 1]$, and $D_z G_t \, (-D_z L^{-1} G_t) = 1/t$ for $z \in [0, 1]$,
it follows from Lemma 2.3 and the Cauchy-Schwarz inequality that

\[ \begin{aligned} \sup_{s \in \mathbb{R}} E\langle D I(G_t > s), D G_t \, (-D L^{-1} G_t) \rangle_{L^2(\mu_t)} & = \sup_{s \in \mathbb{R}} E\big[ I(G_t > s) \, \delta\big( D G_t \, (-D L^{-1} G_t) \big) \big] \\ & \le E\big[ \delta\big( D G_t \, (-D L^{-1} G_t) \big)^2 \big]^{1/2} = \frac{1}{t} E[I_{1,t}(1)^2]^{1/2} = \frac{1}{\sqrt{t}}. \end{aligned} \]

Now Theorem 1.1 yields

\[ d_K\Big( \frac{Y - t}{\sqrt{t}}, N \Big) \le \frac{2}{\sqrt{t}} \bigg( \frac{1}{\sqrt{t}} + \Big( 3 + \frac{1}{t} \Big)^{1/4} + 1 \bigg) + \frac{1}{\sqrt{t}} \le \frac{8}{\sqrt{t}} \]

for $t \ge 1$, which is the classical Berry-Esseen inequality with the optimal rate of convergence (up to a constant).

Normal approximation of U-statistics of Poisson point processes. As a second application of Theorem 1.1, we discuss U-statistics of the form

\[ F = \sum_{(x_1, \ldots, x_k) \in \eta^k_{\neq}} f(x_1, \ldots, x_k) \]

with $k \in \mathbb{N}$ and $f \in L^1_s(\mu^k)$. Here, $\eta^k_{\neq}$ stands for the set of all $k$-tuples of distinct points of $\eta$. If the intensity measure $\mu$ is non-atomic, $\eta$ has no multiple points, and $\eta^k_{\neq}$ can be written as

\[ \eta^k_{\neq} = \big\{ (x_1, \ldots, x_k) \in \eta^k : x_i \neq x_j \text{ for } i \neq j \big\}. \]

In case that $\mu$ has atoms, one has to take into account that distinct points of $\eta$ can have the same location. We call $k$ the order of the U-statistic $F$. From now on, we always assume that $F \in L^2(P_\eta)$ and $\operatorname{Var} F > 0$.

In [14], the chaos expansions of such Poisson functionals are investigated, and the bound (1) is used to prove a central limit theorem with a rate of convergence for the Wasserstein distance. From there it is known that the kernels of the chaos expansion of a U-statistic $F$ are

\[ f_i(x_1, \ldots, x_i) = \binom{k}{i} \int_{X^{k-i}} f(x_1, \ldots, x_i, y_1, \ldots, y_{k-i}) \, \mu^{k-i}(d(y_1, \ldots, y_{k-i})) \tag{20} \]

for $i = 1, \ldots, k$ and $f_i = 0$ for $i > k$. An application of the bound (1) to such Poisson functionals yields (see Theorem 4.1 in [14])

\[ d_W\Big( \frac{F - EF}{\sqrt{\operatorname{Var} F}}, N \Big) \le k \sum_{i,j=1}^{k} \frac{\sqrt{R_{ij}}}{\operatorname{Var} F} + k^{7/2} \sum_{i=1}^{k} \frac{\sqrt{R_i}}{\operatorname{Var} F}, \tag{21} \]

where $R_{ij}$ and $R_i$ are given by

\[ R_{ij} = E\Big( \int_X I_{i-1}(f_i(z, \cdot)) \, I_{j-1}(f_j(z, \cdot)) \, \mu(dz) \Big)^2 - \Big( E \int_X I_{i-1}(f_i(z, \cdot)) \, I_{j-1}(f_j(z, \cdot)) \, \mu(dz) \Big)^2, \]

\[ R_i = \int_X E[I_{i-1}(f_i(z, \cdot))^4] \, \mu(dz) \]
for $i, j = 1, \ldots, k$. In [14], the right-hand side of (21) is bounded by a sum of deterministic integrals depending on $f$. For technical reasons it is assumed that the U-statistic $F$ is absolutely convergent, which means that the U-statistic $\bar{F}$ given by

\[ \bar{F} = \sum_{(x_1, \ldots, x_k) \in \eta^k_{\neq}} |f(x_1, \ldots, x_k)| \]

is in $L^2(P_\eta)$. The U-statistic $\bar{F}$ has a finite Wiener-Itô chaos expansion with kernels

\[ \bar{f}_i(x_1, \ldots, x_i) = \binom{k}{i} \int_{X^{k-i}} |f(x_1, \ldots, x_i, y_1, \ldots, y_{k-i})| \, \mu^{k-i}(d(y_1, \ldots, y_{k-i})) \]

for $i = 1, \ldots, k$ and $\bar{f}_i = 0$ for $i > k$.

In order to bound the right-hand side of (21) by a sum of deterministic integrals, we use the following notation. For $i, j = 1, \ldots, k$ let $\Pi_{\ge 2}(i, i, j, j)$ be the set of all partitions $\pi$ of the variables $x^{(1)}_1, \ldots, x^{(1)}_i, x^{(2)}_1, \ldots, x^{(2)}_i, x^{(3)}_1, \ldots, x^{(3)}_j, x^{(4)}_1, \ldots, x^{(4)}_j$ such that (i) two variables with the same upper index are in different blocks of $\pi$; (ii) each block of $\pi$ includes at least two variables; (iii) there are no sets $A_1, A_2 \subset \{1, 2, 3, 4\}$ with $A_1 \cup A_2 = \{1, 2, 3, 4\}$ and $A_1 \cap A_2 = \emptyset$ such that each block of $\pi$ either consists of variables with upper index in $A_1$ or of variables with upper index in $A_2$.

Let $|\pi|$ stand for the number of blocks of a partition $\pi$. For functions $g_1, g_2: X^i \to \overline{\mathbb{R}}$ and $g_3, g_4: X^j \to \overline{\mathbb{R}}$ the tensor product $g_1 \otimes g_2 \otimes g_3 \otimes g_4: X^{2i+2j} \to \overline{\mathbb{R}}$ is given by

\[ (g_1 \otimes g_2 \otimes g_3 \otimes g_4)(x^{(1)}_1, \ldots, x^{(4)}_j) = g_1(x^{(1)}_1, \ldots, x^{(1)}_i) \, g_2(x^{(2)}_1, \ldots, x^{(2)}_i) \, g_3(x^{(3)}_1, \ldots, x^{(3)}_j) \, g_4(x^{(4)}_1, \ldots, x^{(4)}_j). \]

For $\pi \in \Pi_{\ge 2}(i, i, j, j)$ we define the function $(g_1 \otimes g_2 \otimes g_3 \otimes g_4)_\pi: X^{|\pi|} \to \overline{\mathbb{R}}$ by replacing all variables that are in the same block of $\pi$ by a new common variable. Since we later integrate over all these new variables, their order does not matter. Using this notation, we define

\[ M_{ij}(f) = \sum_{\pi \in \Pi_{\ge 2}(i, i, j, j)} \int_{X^{|\pi|}} (\bar{f}_i \otimes \bar{f}_i \otimes \bar{f}_j \otimes \bar{f}_j)_\pi \, d\mu^{|\pi|} \]

for $i, j = 1, \ldots, k$. Now we can state the following upper bound for the Wasserstein distance (see Theorem 4.7 in [14]):

Proposition 4.1 Let $F \in L^2(P_\eta)$ be an absolutely convergent U-statistic of order $k$ with $\operatorname{Var} F > 0$ and let $N$ be a standard Gaussian random variable.
Then

\[ d_W\Big( \frac{F - EF}{\sqrt{\operatorname{Var} F}}, N \Big) \le 2 k^{7/2} \sum_{i,j=1}^{k} \frac{\sqrt{M_{ij}(f)}}{\operatorname{Var} F}. \tag{22} \]
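The quantities entering the bounds above are driven by the difference operator of the U-statistic, which never needs to be computed through the chaos expansion: the add-one-cost representation (6) gives it directly. A minimal Python sketch (not from the paper; the kernel $f(x, y) = I(|x - y| \le r)$, the threshold $r$, and the point configuration are arbitrary choices of this sketch) confirms the representation for an order-two U-statistic on $[0, 1]$:

```python
import random

def u_stat(points, r):
    """Order-two U-statistic with kernel f(x, y) = 1(|x - y| <= r):
    the sum runs over ordered pairs of distinct points."""
    return sum(
        1
        for i, x in enumerate(points)
        for j, y in enumerate(points)
        if i != j and abs(x - y) <= r
    )

rng = random.Random(7)
eta = [rng.random() for _ in range(60)]  # one realization with 60 points on [0, 1]
r, z = 0.05, 0.5

# add-one-cost representation (6): D_z F = F(eta + delta_z) - F(eta)
dzF = u_stat(eta + [z], r) - u_stat(eta, r)

# adding z creates the ordered pairs (z, x) and (x, z) for each point x within r of z
assert dzF == 2 * sum(1 for x in eta if abs(x - z) <= r)
print(dzF)
```

The same identity also makes the positivity of $D_z V$ for U-statistics $V$ with non-negative summands, used in Section 5, immediate: adding a point can only create new non-negative summands.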
In this situation, we can use Theorem 1.1 to prove the following bound, analogous to (22), for the Kolmogorov distance between a standardized U-statistic and a standard Gaussian random variable:

Theorem 4.2 Let $F \in L^2(P_\eta)$ be an absolutely convergent U-statistic of order $k$ with $\operatorname{Var} F > 0$ and let $N$ be a standard Gaussian random variable. Then

\[ d_K\Big( \frac{F - EF}{\sqrt{\operatorname{Var} F}}, N \Big) \le 19 k^5 \sum_{i,j=1}^{k} \frac{\sqrt{M_{ij}(f)}}{\operatorname{Var} F}. \tag{23} \]

Before we prove this theorem in Section 5, we discuss some of its consequences. Let us consider a family of Poisson point processes $\eta_t$ with $\sigma$-finite intensity measures $\mu_t$ and U-statistics

\[ F_t = \sum_{(x_1, \ldots, x_k) \in (\eta_t)^k_{\neq}} f_t(x_1, \ldots, x_k) \]

with $f_t \in L^1_s(\mu_t^k)$ and $F_t \in L^2(P_{\eta_t})$ such that

\[ \frac{\sqrt{M_{ij}(f_t)}}{\operatorname{Var} F_t} \to 0 \quad \text{as } t \to \infty \]

for all $i, j = 1, \ldots, k$. Here, we integrate with respect to $\mu_t$ in $M_{ij}(f_t)$. Comparing the right-hand sides in (22) and (23) for the U-statistics $F_t$, we see that the bounds for the Wasserstein and the Kolmogorov distance have the same rates of convergence and differ only by constants.

An important special case of the described setting is that the Poisson point process depends on a real valued intensity parameter. The following corollary deals with this situation and is the counterpart of Theorem 5.2 in [14] for the Kolmogorov distance.

Corollary 4.3 Let $\eta_t$ be a Poisson point process with intensity measure $\mu_t = t\mu$ with $t \ge 1$ and a fixed $\sigma$-finite measure $\mu$, and let $N$ be a standard Gaussian random variable. We consider U-statistics $F_t \in L^2(P_{\eta_t})$ of the form

\[ F_t = g(t) \sum_{(x_1, \ldots, x_k) \in (\eta_t)^k_{\neq}} f(x_1, \ldots, x_k) \]

with $g: (0, \infty) \to (0, \infty)$ and $f \in L^1_s(\mu^k)$ independent of $t$. Moreover, we assume that

\[ \int_X \Big( \int_{X^{k-1}} f(x, y_1, \ldots, y_{k-1}) \, \mu^{k-1}(d(y_1, \ldots, y_{k-1})) \Big)^2 \mu(dx) > 0 \]

and $M_{ij}(f) < \infty$ for $i, j = 1, \ldots, k$. Then there is a constant $C > 0$ such that

\[ d_K\Big( \frac{F_t - EF_t}{\sqrt{\operatorname{Var} F_t}}, N \Big) \le C \, t^{-1/2} \]

for all $t \ge 1$.
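In the setting of Corollary 4.3, the approximation is easy to observe empirically. The following sketch is illustrative only: the intensity $t$, the threshold $r$, and the replication count are arbitrary choices, and the variance is estimated from the replications rather than computed from the chaos expansion. It simulates the order-two U-statistic counting ordered pairs of points of $\eta_t$ at distance at most $r$ on $[0, 1]$ (here $g(t) \equiv 1$) and reports the empirical Kolmogorov distance of its standardization from the Gaussian:

```python
import math
import random

def poisson_count(t, rng):
    """Poisson(t) number of points, via unit-rate exponential interarrival times."""
    n, s = 0, rng.expovariate(1.0)
    while s <= t:
        n += 1
        s += rng.expovariate(1.0)
    return n

def pair_ustat(points, r):
    """Ordered pairs of distinct points at distance <= r (twice the unordered count)."""
    pts = sorted(points)
    total, j = 0, 0
    for i, x in enumerate(pts):
        while pts[j] < x - r:  # drop points too far to the left
            j += 1
        total += i - j         # earlier points within distance r of x
    return 2 * total

def normal_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

rng = random.Random(3)
t, r, reps = 100.0, 0.05, 1000
vals = []
for _ in range(reps):
    pts = [rng.random() for _ in range(poisson_count(t, rng))]
    vals.append(pair_ustat(pts, r))

mean = sum(vals) / reps                     # E F_t = t^2 (2r - r^2), about 975 here
var = sum((v - mean) ** 2 for v in vals) / (reps - 1)
std = sorted((v - mean) / math.sqrt(var) for v in vals)
dk = max(
    max(abs((i + 1) / reps - normal_cdf(x)), abs(i / reps - normal_cdf(x)))
    for i, x in enumerate(std)
)
print(mean, dk)
```

Repeating the experiment for larger $t$ shrinks the empirical distance, consistent with the $t^{-1/2}$ rate of the corollary, although the Monte Carlo noise of order $\mathrm{reps}^{-1/2}$ eventually dominates.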
This corollary follows from bounding $\sqrt{M_{ij}(f_t)} / \operatorname{Var} F_t$ in the same way as in the proof of [14, Theorem 5.2].

In [4], a so-called fourth moment condition for Poisson functionals with positive variance, finite Wiener-Itô chaos expansion, and non-negative kernels satisfying some integrability conditions is derived. More precisely, for such Poisson functionals it is proven that

\[ d_W\Big( \frac{F - EF}{\sqrt{\operatorname{Var} F}}, N \Big) \le C_{W,k} \sqrt{E\Big[ \Big( \frac{F - EF}{\sqrt{\operatorname{Var} F}} \Big)^4 \Big] - 3} \]

with a constant $C_{W,k} > 0$ only depending on $k$. U-statistics $F \in L^2(P_\eta)$ of the form

\[ F = \sum_{(x_1, \ldots, x_k) \in \eta^k_{\neq}} f(x_1, \ldots, x_k) \quad \text{with } f \in L^1_s(\mu^k) \text{ and } f \ge 0 \]

such that $\operatorname{Var} F > 0$ and $M_{ij}(f) < \infty$ for $i, j = 1, \ldots, k$ belong to this class and satisfy

\[ \sum_{i,j=1}^{k} \frac{M_{ij}(f)}{(\operatorname{Var} F)^2} \le E\Big[ \Big( \frac{F - EF}{\sqrt{\operatorname{Var} F}} \Big)^4 \Big] - 3. \]

Then (23) can be modified to

\[ d_K\Big( \frac{F - EF}{\sqrt{\operatorname{Var} F}}, N \Big) \le C_k \sqrt{E\Big[ \Big( \frac{F - EF}{\sqrt{\operatorname{Var} F}} \Big)^4 \Big] - 3} \]

with a constant $C_k > 0$ only depending on $k$.

5 Proof of Theorem 4.2

In our proof of Theorem 4.2, we make use of the following property of U-statistics:

Lemma 5.1 For a U-statistic $F \in L^2(P_\eta)$ of order $k$ the inverse of the Ornstein-Uhlenbeck generator has the representation

\[ \begin{aligned} L^{-1}(F - EF) = -\sum_{m=1}^{k} \frac{1}{m} \bigg( & \sum_{(x_1, \ldots, x_m) \in \eta^m_{\neq}} \int_{X^{k-m}} f(x_1, \ldots, x_m, y_1, \ldots, y_{k-m}) \, \mu^{k-m}(d(y_1, \ldots, y_{k-m})) \\ & - \int_{X^k} f(y_1, \ldots, y_k) \, \mu^k(d(y_1, \ldots, y_k)) \bigg). \end{aligned} \]

Proof: We define $\hat{f}_i: X^i \to \overline{\mathbb{R}}$ by $\hat{f}_i(x_1, \ldots, x_i) = \binom{k}{i}^{-1} f_i(x_1, \ldots, x_i)$ for $i = 1, \ldots, k$. Using this notation and the formula (20) for the kernels of a U-statistic, we obtain the chaos expansion

\[ \sum_{(x_1, \ldots, x_m) \in \eta^m_{\neq}} \int_{X^{k-m}} f(x_1, \ldots, x_m, y_1, \ldots, y_{k-m}) \, \mu^{k-m}(d(y_1, \ldots, y_{k-m})) = \int_{X^k} f(y_1, \ldots, y_k) \, \mu^k(d(y_1, \ldots, y_k)) + \sum_{i=1}^{m} \binom{m}{i} I_i(\hat{f}_i) \tag{24} \]
for $m = 1, \ldots, k$. Combining this with the identity $\sum_{m=i}^{k} \frac{1}{m} \binom{m}{i} = \frac{1}{i} \binom{k}{i}$ for binomial coefficients (which follows from $\frac{1}{m}\binom{m}{i} = \frac{1}{i}\binom{m-1}{i-1}$ and $\sum_{m=i}^{k}\binom{m-1}{i-1} = \binom{k}{i}$), we see that the right-hand side of the asserted representation has the chaos expansion

\[ -\sum_{m=1}^{k} \frac{1}{m} \sum_{i=1}^{m} \binom{m}{i} I_i(\hat{f}_i) = -\sum_{i=1}^{k} \sum_{m=i}^{k} \frac{1}{m} \binom{m}{i} I_i(\hat{f}_i) = -\sum_{i=1}^{k} \frac{1}{i} \binom{k}{i} I_i(\hat{f}_i) = -\sum_{i=1}^{k} \frac{1}{i} I_i(f_i), \]

which is the chaos expansion of $L^{-1}(F - EF)$ by definition.

In order to deal with expressions such as $R_{ij}$ and $R_i$ from the previous section, one needs to compute the expectation of products of multiple Wiener-Itô integrals. This can be done by using Proposition 6.1 in [6] (see also [19, Theorem 3.1], [13, Proposition 4.5.6], [12, Proposition 6.5.1], or [9, Theorem 3.1]). This so-called product formula gives us the Wiener-Itô chaos expansion of a product of two multiple Wiener-Itô integrals. Together with (4), one obtains that the expectation of a product of four multiple Wiener-Itô integrals is a sum of deterministic integrals depending on the integrands of the stochastic integrals and on partitions of their variables as used for the definition of $M_{ij}(f)$. By using this product formula, one can prove in a similar way as in [14, Subsection 4.2] that

\[ R_{ij} \le M_{ij}(f) \quad \text{and} \quad \int_X E[I_{i-1}(f_i(z, \cdot))^2 \, I_{j-1}(f_j(z, \cdot))^2] \, \mu(dz) \le M_{ij}(f) \tag{25} \]

for $i, j = 1, \ldots, k$. Moreover, we prepare the proof of Theorem 4.2 with the following two lemmas:

Lemma 5.2 Let $F \in L^2(P_\eta)$ be an absolutely convergent U-statistic with $M_{ij}(f) < \infty$ for $i, j = 1, \ldots, k$. Then

\[ E(F - EF)^4 \le k^2 \sum_{i,j=1}^{k} M_{ij}(f) + 3 k^2 (\operatorname{Var} F)^2. \]

Proof: Using the Wiener-Itô chaos expansion of $F$ and the Cauchy-Schwarz inequality, we obtain

\[ E(F - EF)^4 = E\Big( \sum_{i=1}^{k} I_i(f_i) \Big)^2 \Big( \sum_{j=1}^{k} I_j(f_j) \Big)^2 \le k^2 \sum_{i,j=1}^{k} E[I_i(f_i)^2 \, I_j(f_j)^2]. \]

Now the previously mentioned product formula for multiple Wiener-Itô integrals and $\operatorname{Var} F = \sum_{n=1}^{k} n! \, \|f_n\|_n^2$ yield that

\[ E[I_i(f_i)^2 \, I_j(f_j)^2] \le M_{ij}(f) + 3 \, i! \, \|f_i\|_i^2 \, j! \, \|f_j\|_j^2, \]

and summing over $i, j = 1, \ldots, k$ gives the assertion. In the first inequality, we have equality for $i = j$ and $f \ge 0$.
Lemma 5.3 Let $F \in L^2(P_\eta)$ be an absolutely convergent U-statistic with $M_{ij}(f) < \infty$ for $i, j = 1, \ldots, k$. Then

\[ \sup_{s \in \mathbb{R}} E\langle D I(F > s), DF \, (-D L^{-1}(F - EF)) \rangle_{L^2(\mu)} \le \sqrt{(2k - 1) \, E\langle (D\bar{F})^2, (D L^{-1}(\bar{F} - E\bar{F}))^2 \rangle_{L^2(\mu)}}. \]

Proof: We can write the U-statistic $F$ as

\[ F = \underbrace{\sum_{(x_1, \ldots, x_k) \in \eta^k_{\neq}} f^+(x_1, \ldots, x_k)}_{=: F^+} - \underbrace{\sum_{(x_1, \ldots, x_k) \in \eta^k_{\neq}} f^-(x_1, \ldots, x_k)}_{=: F^-} \]

with $f^+ = \max\{f, 0\}$ and $f^- = \max\{-f, 0\}$, and we have $\bar{F} = F^+ + F^-$. As a consequence of (6), we know that $D_z V \ge 0$ for a U-statistic $V$ with non-negative summands. Combining this with $f^+, f^- \ge 0$ and Lemma 5.1, we see that

\[ -D_z L^{-1}(F^+ - EF^+) \ge 0 \quad \text{and} \quad -D_z L^{-1}(F^- - EF^-) \ge 0. \]

Moreover, it holds that $D_z I(F > s) \, D_z F \ge 0$. Proposition 6.1 in [6] implies that the product $D_z F \, D_z L^{-1}(\bar{F} - E\bar{F})$ has a finite chaos expansion of order at most $2k - 2$. Together with Lemma 2.3, the Cauchy-Schwarz inequality, and Lemma 2.4, we obtain

\[ \begin{aligned} & \sup_{s \in \mathbb{R}} E\langle D I(F > s), DF \, (-D L^{-1}(F - EF)) \rangle_{L^2(\mu)} \\ & = \sup_{s \in \mathbb{R}} E\langle D I(F > s), DF \, (-D L^{-1}(F^+ - EF^+)) - DF \, (-D L^{-1}(F^- - EF^-)) \rangle_{L^2(\mu)} \\ & \le \sup_{s \in \mathbb{R}} E\langle D I(F > s), DF \, (-D L^{-1}(\bar{F} - E\bar{F})) \rangle_{L^2(\mu)} \\ & \le E\big[ \delta\big( DF \, (-D L^{-1}(\bar{F} - E\bar{F})) \big)^2 \big]^{1/2} \le \sqrt{(2k - 1) \, E\langle (DF)^2, (D L^{-1}(\bar{F} - E\bar{F}))^2 \rangle_{L^2(\mu)}}. \end{aligned} \]

Now the fact that $(D_z F)^2 \le (D_z \bar{F})^2$ concludes the proof.

Proof of Theorem 4.2: In the following, we can assume that $M_{ij}(f) < \infty$ for $i, j = 1, \ldots, k$, since otherwise (23) is obviously true. We consider the standardized random variable $G = (F - EF)/\sqrt{\operatorname{Var} F}$, whose Wiener-Itô chaos expansion has the kernels $g_i = f_i / \sqrt{\operatorname{Var} F}$ for $i = 1, \ldots, k$. In order to simplify our notation, we use the abbreviation

\[ S = \sum_{i,j=1}^{k} \frac{\sqrt{M_{ij}(f)}}{\operatorname{Var} F}. \]

Exactly as in the proof of Theorem 4.1 in [14], we obtain

\[ E|1 - \langle DG, -D L^{-1} G \rangle_{L^2(\mu)}| \le k \sum_{i,j=1}^{k} \frac{\sqrt{R_{ij}}}{\operatorname{Var} F} \le k \sum_{i,j=1}^{k} \frac{\sqrt{M_{ij}(f)}}{\operatorname{Var} F} = k S.
\]
From straightforward computations using Fubini's theorem, the Cauchy-Schwarz inequality, and (25), it follows that

\[ \begin{aligned} E\langle (DG)^2, (D L^{-1} G)^2 \rangle_{L^2(\mu)} & = \int_X E[(D_z G)^2 (D_z L^{-1} G)^2] \, \mu(dz) \\ & = \int_X E\Big[ \Big( \sum_{i=1}^{k} i \, I_{i-1}(g_i(z, \cdot)) \Big)^2 \Big( \sum_{i=1}^{k} I_{i-1}(g_i(z, \cdot)) \Big)^2 \Big] \, \mu(dz) \\ & \le k^4 \sum_{i,j=1}^{k} \int_X E[I_{i-1}(g_i(z, \cdot))^2 \, I_{j-1}(g_j(z, \cdot))^2] \, \mu(dz) \le k^4 \sum_{i,j=1}^{k} \frac{M_{ij}(f)}{(\operatorname{Var} F)^2} \le k^4 S^2, \end{aligned} \]

\[ E\langle (DG)^2, (DG)^2 \rangle_{L^2(\mu)} = \int_X E\Big[ \Big( \sum_{i=1}^{k} i \, I_{i-1}(g_i(z, \cdot)) \Big)^4 \Big] \, \mu(dz) \le k^6 \sum_{i,j=1}^{k} \frac{M_{ij}(f)}{(\operatorname{Var} F)^2} \le k^6 S^2, \]

and

\[ \begin{aligned} E\langle DG, DG \rangle_{L^2(\mu)}^2 & = E\Big( \sum_{i,j=1}^{k} i j \int_X I_{i-1}(g_i(z, \cdot)) \, I_{j-1}(g_j(z, \cdot)) \, \mu(dz) \Big)^2 \\ & \le k^2 \sum_{i,j=1}^{k} i^2 j^2 \, E\Big( \int_X I_{i-1}(g_i(z, \cdot)) \, I_{j-1}(g_j(z, \cdot)) \, \mu(dz) \Big)^2 \\ & \le k^6 \sum_{i,j=1}^{k} \frac{R_{ij}}{(\operatorname{Var} F)^2} + k^4 \sum_{i=1}^{k} \frac{(i!)^2 \, \|f_i\|_i^4}{(\operatorname{Var} F)^2} \le k^6 \sum_{i,j=1}^{k} \frac{M_{ij}(f)}{(\operatorname{Var} F)^2} + k^4 \le k^6 S^2 + k^4. \end{aligned} \]

As a consequence of Lemma 5.2, we have that

\[ E G^4 = E\Big( \frac{F - EF}{\sqrt{\operatorname{Var} F}} \Big)^4 \le k^2 \sum_{i,j=1}^{k} \frac{M_{ij}(f)}{(\operatorname{Var} F)^2} + 3 k^2 \le k^2 S^2 + 3 k^2. \]

Combining the last three inequalities, we obtain

\[ 2 \big( E\langle (DG)^2, (DG)^2 \rangle_{L^2(\mu)} \big)^{1/2} + 2 \big( E\langle DG, DG \rangle_{L^2(\mu)}^2 \big)^{1/4} \big( (E G^4)^{1/4} + 1 \big) \le 2 k^3 S + 2 \big( k^{3/2} \sqrt{S} + k \big) \big( \sqrt{k} \, \sqrt{S} + 3^{1/4} \sqrt{k} + 1 \big) \le 16 k^3 \]
19 for S. Lemma 5.3 together with a similar computation as for E (DG) 2, (DL G) 2 L 2 (µ) implies that sup E DI(G > s), DG DL G L 2 (µ) (2k )k 4 Thus, it follows from Theorem. that ( ) F EF d K, N ks + 6k 3 k 2 S + 2k 5 2 S 9k 5 S Var F M ij (f) (Var F ) 2 2k 5 2 S. for S. Otherwise, this inequality still holds since the Kolmogorov distance is at most one. In a similar way, one can also obtain an upper bound for the Kolmogorov distance between a Gaussian random variable and a finite sum of Poisson U-statistics. This class of Poisson functionals is interesting since Theorem 3.6 in [4] tells us that each Poisson functional F L 2 (P η ) of order k with kernels f i L s(µ i ) L 2 s(µ i ) for i =,..., k is a finite sum of Poisson U-statistics (and a constant). For such a Poisson functional the upper bounds for the inner products in the proof of Theorem 4.2 that depend on R ij and EI i (f i (z, )) 2 I j (f j (z, )) 2 µ(dz) for i, j =,..., k still hold. Moreover, we can compute a similar bound as in Lemma 5.3 using the representation of F as a sum of Poisson U-statistics. Together with the fourth centred moment of F, one can obtain an upper bound for the Kolmogorov distance between (F EF )/ Var F and a standard Gaussian random variable. Acknowledgement: Large parts of this paper were written during a stay at Case Western Reserve University (February to July 202) supported by the German Academic Exchange Service. The author thanks Elizabeth Meckes, Giovanni Peccati, Matthias Reitzner, and Christoph Thäle for some valuable hints and helpful discussions. References [] Chen, L., Goldstein, L., Shao, Q.: Normal Approximation by Stein s Method. Springer, Berlin (20) [2] Chen, L., Shao, Q.: Stein s method for normal approximation. An Introduction to Stein s Method, 59, Singapore University Press, Singapore (2005) [3] Eichelsbacher, P., Thäle, C.: New Berry-Esseen bounds for non-linear functionals of Poisson random measures. ariv: (203) 9
[4] Lachièze-Rey, R., Peccati, G.: Fine Gaussian fluctuations on the Poisson space, I: contractions, cumulants and geometric random graphs. Electron. J. Probab. 18, Article 32 (2013)
[5] Lachièze-Rey, R., Peccati, G.: Fine Gaussian fluctuations on the Poisson space, II: rescaled kernels, marked processes and geometric U-statistics. Stochastic Process. Appl. 123 (2013)
[6] Last, G.: Stochastic analysis for Poisson processes. arXiv preprint (2014)
[7] Last, G., Peccati, G., Schulte, M.: Normal approximation on Poisson spaces: Mehler's formula, second order Poincaré inequalities and stabilization. arXiv preprint (2014)
[8] Last, G., Penrose, M.D.: Poisson process Fock space representation, chaos expansion and covariance inequalities. Probab. Theory Related Fields 150 (2011)
[9] Last, G., Penrose, M.D., Schulte, M., Thäle, C.: Moments and central limit theorems for some multivariate Poisson functionals. Adv. in Appl. Probab. 46 (2014)
[10] Nualart, D., Vives, J.: Anticipative calculus for the Poisson process based on the Fock space. Lecture Notes in Math. 1426 (1990)
[11] Peccati, G., Solé, J.L., Taqqu, M.S., Utzet, F.: Stein's method and normal approximation of Poisson functionals. Ann. Probab. 38 (2010)
[12] Peccati, G., Taqqu, M.S.: Wiener Chaos: Moments, Cumulants and Diagram Formulae. Springer, Berlin (2011)
[13] Privault, N.: Stochastic Analysis in Discrete and Continuous Settings with Normal Martingales. Springer, Berlin (2009)
[14] Reitzner, M., Schulte, M.: Central limit theorems for U-statistics of Poisson point processes. Ann. Probab. 41 (2013)
[15] Reitzner, M., Schulte, M., Thäle, C.: Limit theory for the Gilbert graph. arXiv preprint (2013)
[16] Schulte, M., Thäle, C.: Distances between Poisson k-flats. Methodol. Comput. Appl. Probab. 16 (2014)
[17] Stein, C.: A bound for the error in the normal approximation to the distribution of a sum of dependent random variables. In: Proceedings of the Sixth Berkeley Symposium on Mathematical Statistics and Probability 2, Univ.
California Press, Berkeley (1972)
[18] Stein, C.: Approximate Computation of Expectations. Institute of Mathematical Statistics, Hayward, CA (1986)
[19] Surgailis, D.: On multiple Poisson stochastic integrals and associated Markov semigroups. Probab. Math. Statist. 38 (1984)
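To make the objects discussed above concrete, consider the simplest Poisson U-statistic of order two: take the constant kernel f ≡ 1 and intensity measure µ = t times Lebesgue measure on [0,1], so that F = Σ_{(x,y)∈η²_{≠}} f(x,y) = n(n−1), where n = η([0,1]) is Poisson distributed with mean t. A factorial-moment computation gives EF = t² and Var F = 4t³ + 2t². The sketch below is an illustration with hypothetical function names, not code from the paper; it checks these two formulas by Monte Carlo, sampling n with Knuth's product-of-uniforms Poisson generator.

```python
import math
import random

def poisson_sample(t, rng):
    # Knuth's method: count how many uniform factors it takes for the
    # running product to fall below e^{-t}; fine for moderate t, since
    # e^{-50} is still comfortably representable as a float.
    limit = math.exp(-t)
    n, prod = 0, rng.random()
    while prod > limit:
        prod *= rng.random()
        n += 1
    return n

def simulate_pair_ustatistic(t=50.0, reps=10000, seed=0):
    # F = sum of the kernel f == 1 over ordered pairs of distinct points,
    # i.e. F = n(n - 1) with n ~ Poisson(t).
    rng = random.Random(seed)
    samples = []
    for _ in range(reps):
        n = poisson_sample(t, rng)
        samples.append(n * (n - 1))
    mean = sum(samples) / reps
    var = sum((x - mean) ** 2 for x in samples) / (reps - 1)
    return mean, var

mean_hat, var_hat = simulate_pair_ustatistic()
exact_mean = 50.0 ** 2                      # E F = t^2
exact_var = 4 * 50.0 ** 3 + 2 * 50.0 ** 2   # Var F = 4 t^3 + 2 t^2
```

With t = 50 and 10 000 replications, the Monte Carlo estimates agree with the exact formulas to within a few percent; letting t grow makes the standardized variable (F − EF)/√Var F approximately standard normal, which is the kind of statement the Kolmogorov bounds above quantify.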