The Stein and Chen-Stein methods for functionals of non-symmetric Bernoulli processes

The Stein and Chen-Stein methods for functionals of non-symmetric Bernoulli processes

Nicolas Privault, Giovanni Luca Torrisi

Abstract

Based on a new multiplication formula for discrete multiple stochastic integrals with respect to non-symmetric Bernoulli random walks, we extend the results of [4] on the Gaussian approximation of symmetric Rademacher sequences to the setting of possibly non-identically distributed independent Bernoulli sequences. We also provide Poisson approximation results for these sequences, by following the method of [5]. Our arguments use covariance identities obtained from the Clark-Ocone representation formula in addition to those usually based on the inverse of the Ornstein-Uhlenbeck operator.

Keywords: Bernoulli processes; Chen-Stein method; Stein method; Malliavin calculus; Clark-Ocone representation.
Mathematics Subject Classification: 60F05, 60G57, 60H07.

1 Introduction

Malliavin calculus and the Stein method were combined for the first time for Gaussian fields in the seminal paper [3], whose results have later been extended to other settings, including Poisson processes [5], [6]. In particular, the Stein method has been applied in [4] to Rademacher sequences $(X_n)_{n\in\mathbb N}$ of independent and identically distributed Bernoulli random variables with $P(X_k=1)=P(X_k=-1)=1/2$, in order to derive bounds on distances between the probability laws of functionals of $(X_n)_{n\in\mathbb N}$ and the law $N(0,1)$ of a standard normal random variable $Z$. Those approaches exploit a covariance representation based on the number (or Ornstein-Uhlenbeck) operator $L$ and its inverse $L^{-1}$.

Division of Mathematical Sciences, Nanyang Technological University, SPMS-MAS-05-43, 21 Nanyang Link, Singapore. nprivault@ntu.edu.sg
Istituto per le Applicazioni del Calcolo "Mauro Picone", CNR, Via dei Taurini 19, 00185 Roma, Italy. torrisi@iac.rm.cnr.it

From here onwards, we denote by $C^2_b$ the set of all real-valued bounded functions with bounded derivatives up to the second order. In particular, for $h\in C^2_b$, using a chain rule proved in the symmetric case, the bound
\[ |E[h(F)]-E[h(Z)]| \le A_1\min\{4\|h\|_\infty,\|h''\|_\infty\} + \|h''\|_\infty A_2 \tag{1.1} \]
has been derived in [4] (see Theorem 3.1 therein) for centered functionals $F$ of a symmetric Bernoulli random walk $(X_n)_{n\in\mathbb N}$. Here, $(X_n)_{n\in\mathbb N}$ is built as the sequence of canonical projections on $\Omega:=\{-1,1\}^{\mathbb N}$, and
\[ A_1 = E\big[\,|1-\langle DF, DL^{-1}F\rangle_{\ell^2(\mathbb N)}|\,\big], \qquad A_2 = \frac{20}{3}\,E\big[\langle |DL^{-1}F|, |DF|^3\rangle_{\ell^2(\mathbb N)}\big], \]
where $\langle\cdot,\cdot\rangle_{\ell^2(\mathbb N)}$ is the usual inner product on $\ell^2(\mathbb N)$, and $D$ is the symmetric gradient defined as
\[ D_kF(\omega) = \tfrac12\big(F(\omega^k_+)-F(\omega^k_-)\big), \qquad k\in\mathbb N, \]
where, given $\omega=(\omega_0,\omega_1,\dots)\in\Omega$, we let
\[ \omega^k_+ = (\omega_0,\dots,\omega_{k-1},+1,\omega_{k+1},\dots) \quad\text{and}\quad \omega^k_- = (\omega_0,\dots,\omega_{k-1},-1,\omega_{k+1},\dots). \]
The above bound (1.1) can be used to control the Wasserstein distance between $N(0,1)$ and the law of $F$, as in Corollary 3.6 in [4]. In addition, the right-hand side of (1.1) yields explicit bounds in the case where $F$ is a single discrete stochastic integral (see Corollary 3.3 in [4]) or a multiple discrete stochastic integral (see Section 4 in [4]). In this latter case the derivation of explicit bounds is based on a multiplication formula proved in the symmetric case (see Proposition 2.9 in [4]).

In this paper we provide Gaussian and Poisson approximations for functionals of not necessarily symmetric Bernoulli sequences via the Stein and Chen-Stein methods, respectively. See [1] for recent related results on Gaussian approximation, without relying on a multiplication formula for discrete multiple stochastic integrals. The normal and Poisson approximations are based on suitable chain rules in Propositions 2.1 and 2.2 and on an extension to the non-symmetric case of the multiplication formula for discrete multiple stochastic integrals (see Proposition 5.1 and Section 9 for its proof). In addition to using the Ornstein-Uhlenbeck operator $L$ for covariance representations, we also derive error bounds for the normal and Poisson approximations using covariance representations based on the Clark-Ocone formula, following the argument implemented in [20]. Indeed the operator $L^{-1}$ is of a more delicate use in applications to functionals whose multiple stochastic integral expansion is not explicitly known. In contrast with covariance identities based on the number operator, which rely on the divergence-gradient composition, the Clark-Ocone formula only requires the computation of a gradient and a conditional expectation.

A bound for the Wasserstein distance between a standard Gaussian random variable and a (standardized) function of a finite sequence of independent random variables has been obtained in [4], via the construction of an auxiliary random variable which allows one to approximate the Stein equation. Although the results in our paper are restricted to the Bernoulli case, they may be applied to functionals of an infinite sequence of Bernoulli distributed random variables. A comparison between a bound in our paper and that one in [4] is given at the end of the first example of Section 4.

As far as the Gaussian approximation is concerned, using a covariance representation based on the Clark-Ocone formula, in Theorem 3.2 below we find sufficient conditions on centered functionals $F$ of a not necessarily symmetric Bernoulli random walk so that
\[ |E[h(F)]-E[h(Z)]| \le B_1\min\{4\|h\|_\infty,\|h''\|_\infty\} + \|h'\|_\infty B_2 + \|h''\|_\infty B_3 \tag{1.2} \]
for any $h\in C^2_b$, and for some positive constants $B_1, B_2, B_3>0$; similarly, using a covariance representation based on the Ornstein-Uhlenbeck operator, in Theorem 3.4 below we provide alternate sufficient conditions on centered functionals $F$ of a not necessarily symmetric Bernoulli random walk so that the bound (1.2) holds for different positive constants $C_1, C_2, C_3>0$, in place of $B_1, B_2, B_3$ respectively. In Theorem 3.6 below we show that the bound (1.2) can be used to control the Fortet-Mourier distance $d_{FM}$ between $F$ and the standard $N(0,1)$ normal random variable $Z$, i.e. we prove
\[ d_{FM}(F,Z) \le \sqrt{2(B_1+B_3)(5+E[|F|])} + B_2. \]
A similar bound holds, under alternate conditions on $F$, with the constants $B_i$ replaced by $C_i$ ($i=1,2,3$). Replacing the Stein method by the Chen-Stein method, we also show that this

approach applies to the Poisson approximation in addition to the Gaussian approximation, and treat discrete multiple stochastic integrals as examples in both cases.

This paper is organized as follows. In Section 2 we recall some elements of stochastic analysis of Bernoulli processes, including chain rules for finite difference operators. In Section 3 we present the two different upper bounds for the quantity $|E[h(F)]-E[h(Z)]|$, $h\in C^2_b$, described above and the related application to the Fortet-Mourier distance. Section 4 contains explicit first chaos bounds with application to determinantal processes, while Section 5 is concerned with bounds for the $n$th chaoses. The important case of quadratic functionals (second chaoses) is treated in a separate paragraph. In Section 6 we apply our arguments to the Poisson approximation and in Sections 7 and 8 we investigate the case of single and multiple discrete stochastic integrals. Finally, Section 9 deals with the new multiplication formula for discrete multiple stochastic integrals in the non-symmetric case, whose proof is modeled on normal martingales that are solution of a deterministic structure equation.

2 Stochastic analysis of Bernoulli processes

In this section we provide some preliminaries. The reader is directed to [8] and references therein for more insight into the stochastic analysis of Bernoulli processes. From now on we assume that the canonical projections $X_n:\Omega\to\{-1,1\}$, $\Omega=\{-1,1\}^{\mathbb N}$, are considered under the not necessarily symmetric measure $P$ given on cylinder sets by
\[ P(\{\varepsilon_0,\dots,\varepsilon_n\}\times\{-1,1\}^{\mathbb N}) = \prod_{k=0}^{n} p_k^{(1+\varepsilon_k)/2}\,q_k^{(1-\varepsilon_k)/2}, \qquad \varepsilon_k\in\{-1,1\},\ k=0,\dots,n. \]
Given $\omega=(\omega_0,\omega_1,\dots)\in\Omega$ and $\omega^k_+$, $\omega^k_-$ defined as above, for any $F:\Omega\to\mathbb R$ we consider the finite difference operator
\[ D_kF(\omega) = \sqrt{p_kq_k}\,\big(F(\omega^k_+)-F(\omega^k_-)\big), \qquad k\in\mathbb N, \]
and, denoting by $\kappa$ the counting measure on $\mathbb N$, we consider the $L^2(\Omega\times\mathbb N)=L^2(\Omega\times\mathbb N, P\otimes\kappa)$-valued operator $D$ defined for any $F:\Omega\to\mathbb R$ by $DF=(D_kF)_{k\in\mathbb N}$. Given $n\ge1$ we denote by $\ell^2(\mathbb N)^{\otimes n}=\ell^2(\mathbb N^n)$ the class of functions on $\mathbb N^n$ that are square integrable with respect to $\kappa^{\otimes n}$, and we denote by $\ell^2(\mathbb N)^{\circ n}$ the subspace of $\ell^2(\mathbb N)^{\otimes n}$ formed by functions that are symmetric in $n$ variables. The $L^2$ domain of $D$ is given by
\[ \mathrm{Dom}(D) = \{F\in L^2(\Omega): DF\in L^2(\Omega\times\mathbb N)\} = \{F\in L^2(\Omega): E[\|DF\|^2_{\ell^2(\mathbb N)}]<\infty\}. \]
We let $(Y_n)_{n\ge0}$ denote the sequence of centered and normalized random variables defined by
\[ Y_n = \frac{q_n-p_n+X_n}{2\sqrt{p_nq_n}}, \]
which satisfies the discrete structure equation
\[ Y_n^2 = 1 + \frac{q_n-p_n}{\sqrt{p_nq_n}}\,Y_n. \tag{2.1} \]
Given $f_1\in\ell^2(\mathbb N)$ we define the first order discrete stochastic integral of $f_1$ as
\[ J_1(f_1) = \sum_{k\ge0} f_1(k)\,Y_k, \]
and we let
\[ J_n(f_n) = \sum_{(i_1,\dots,i_n)\in\Delta_n} f_n(i_1,\dots,i_n)\,Y_{i_1}\cdots Y_{i_n} \]
denote the discrete multiple stochastic integral of order $n$ of $f_n$ in the subspace $\ell^2_s(\Delta_n)$ of $\ell^2(\mathbb N)^{\circ n}$ composed of symmetric kernels that vanish on diagonals, i.e. on the complement of
\[ \Delta_n = \{(k_1,\dots,k_n)\in\mathbb N^n: k_i\ne k_j,\ 1\le i<j\le n\}, \qquad n\ge1. \]
As a convention we identify $\ell^2(\mathbb N^0)$ to $\mathbb R$ and let $J_0(f_0)=f_0$, $f_0\in\mathbb R$. Hereafter, we shall refer to the set of functionals of the form $J_n(f)$ as the $n$-chaos. The multiple stochastic integrals satisfy the isometry formula
\[ E[J_n(f_n)J_m(g_m)] = \mathbf 1_{\{n=m\}}\, n!\,\langle f_n, g_m\rangle_{\ell^2_s(\Delta_n)}, \qquad f_n\in\ell^2_s(\Delta_n),\ g_m\in\ell^2_s(\Delta_m), \]
cf. e.g. Proposition 1.3.2 of [9]. The finite difference operator acts on multiple stochastic integrals as follows:
\[ D_kJ_n(f_n) = nJ_{n-1}\big(f_n(\cdot,k)\mathbf 1_{\Delta_n}(\cdot,k)\big) = nJ_{n-1}(f_n(\cdot,k)), \tag{2.2} \]
$k\in\mathbb N$, $f_n\in\ell^2_s(\Delta_n)$. Due to the chaos representation property any square integrable $F$ may be represented as $F=\sum_{n\ge0}J_n(f_n)$, $f_n\in\ell^2_s(\Delta_n)$, and so the $L^2$ domain of $D$ may be rewritten as
\[ \mathrm{Dom}(D) = \Big\{F = \sum_{n\ge0}J_n(f_n): \sum_{n\ge1} n\,n!\,\|f_n\|^2_{\ell^2(\mathbb N)^{\otimes n}} < \infty\Big\}. \]
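The normalization of $Y_k$, the structure equation (2.1) and the isometry formula are easy to test by simulation. The following minimal Python sketch (not part of the paper; the sequence length, the probabilities $p_k$ and the kernel $f$ are arbitrary choices) samples a non-symmetric Bernoulli sequence, verifies (2.1) pathwise and checks $E[J_1(f)^2]=\|f\|^2_{\ell^2(\mathbb N)}$ by Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_samples = 20, 200_000
p = rng.uniform(0.2, 0.8, size=n)          # success probabilities p_k (arbitrary)
q = 1.0 - p

# sample X_k in {-1,+1} with P(X_k = 1) = p_k, then normalize as in the text
X = np.where(rng.random((n_samples, n)) < p, 1.0, -1.0)
Y = (q - p + X) / (2.0 * np.sqrt(p * q))

# pathwise check of the structure equation Y_k^2 = 1 + ((q_k - p_k)/sqrt(p_k q_k)) Y_k
phi = (q - p) / np.sqrt(p * q)
assert np.allclose(Y**2, 1.0 + phi * Y)

# Monte Carlo check of the isometry E[J_1(f)^2] = ||f||^2 for a first-order integral
f = rng.normal(size=n)
J1 = Y @ f
print(J1.var(), np.sum(f**2))              # the two numbers should be close
```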

Next we present a chain rule for the finite difference operator that extends Proposition 2.4 in [4] from the symmetric to the non-symmetric case. This chain rule will be used later on for the normal approximation. In the following we write $F^\pm_k$ in place of $F(\omega^k_\pm)$.

Proposition 2.1 Let $F\in\mathrm{Dom}(D)$ and $f:\mathbb R\to\mathbb R$ be thrice differentiable with bounded third derivative. Assume moreover that $f(F)\in\mathrm{Dom}(D)$. Then, for any integer $k\ge0$ there exists a random variable $R^F_k$ such that
\[ D_kf(F) = f'(F)D_kF - \frac{|D_kF|^2}{4\sqrt{p_kq_k}}\,\big(f''(F^+_k)+f''(F^-_k)\big)X_k + R^F_k, \quad\text{a.s.,} \tag{2.3} \]
where
\[ |R^F_k| \le \frac{5}{3!}\,\|f'''\|_\infty\,\frac{|D_kF|^3}{p_kq_k}, \quad\text{a.s.} \tag{2.4} \]

Proof. By a standard Taylor expansion we have
\begin{align*}
D_kf(F) &= \sqrt{p_kq_k}\,(f(F^+_k)-f(F^-_k)) = \sqrt{p_kq_k}\,(f(F^+_k)-f(F)) - \sqrt{p_kq_k}\,(f(F^-_k)-f(F)) \\
&= \sqrt{p_kq_k}\,f'(F)(F^+_k-F) + \frac{\sqrt{p_kq_k}}{2}f''(F)(F^+_k-F)^2 + R^+_k - \sqrt{p_kq_k}\,f'(F)(F^-_k-F) - \frac{\sqrt{p_kq_k}}{2}f''(F)(F^-_k-F)^2 - R^-_k \\
&= f'(F)D_kF + \frac{\sqrt{p_kq_k}}{2}f''(F)\big[(F^+_k-F)^2-(F^-_k-F)^2\big] + R^+_k - R^-_k, \tag{2.5}
\end{align*}
where
\[ |R^\pm_k| \le \frac{\sqrt{p_kq_k}\,\|f'''\|_\infty}{3!}\,|F^\pm_k-F|^3. \tag{2.6} \]
By the mean value theorem we find
\[ f''(F) = \frac{f''(F^+_k)+f''(F^-_k)}{2} + \frac{f''(F)-f''(F^+_k)}{2} + \frac{f''(F)-f''(F^-_k)}{2} = \frac{f''(F^+_k)+f''(F^-_k)}{2} + \bar R_k, \]
where
\[ |\bar R_k| \le \frac{\|f'''\|_\infty}{2}\,\big(|F^+_k-F| + |F^-_k-F|\big). \]
Substituting this into (2.5) we get
\[ D_kf(F) = f'(F)D_kF + \frac{\sqrt{p_kq_k}}{4}\,\big(f''(F^+_k)+f''(F^-_k)\big)\big[(F^+_k-F)^2-(F^-_k-F)^2\big] + R^+_k - R^-_k + \tilde R_k, \tag{2.7} \]
where $\tilde R_k := \frac{\sqrt{p_kq_k}}{2}\,\bar R_k\,\big[(F^+_k-F)^2-(F^-_k-F)^2\big]$ satisfies
\[ |\tilde R_k| \le \frac{\sqrt{p_kq_k}\,\|f'''\|_\infty}{4}\,\big(|F^+_k-F|+|F^-_k-F|\big)\,\big|(F^+_k-F)^2-(F^-_k-F)^2\big| \le \frac{\sqrt{p_kq_k}\,\|f'''\|_\infty}{4}\,\big(|F^+_k-F|+|F^-_k-F|\big)\,|F^+_k-F^-_k|^2. \tag{2.8} \]
Note that
\[ F^+_k - F = (F^+_k-F)\mathbf 1_{\{X_k=-1\}} + (F^+_k-F)\mathbf 1_{\{X_k=1\}} = (F^+_k-F)\mathbf 1_{\{X_k=-1\}} = (F^+_k-F^-_k)\mathbf 1_{\{X_k=-1\}}, \tag{2.9} \]
and similarly,
\[ F^-_k - F = -(F^+_k-F^-_k)\mathbf 1_{\{X_k=1\}}. \tag{2.10} \]
Therefore we have $|F^\pm_k-F| \le |D_kF|/\sqrt{p_kq_k}$, and combining this with (2.6) and (2.8) we find
\[ |R^\pm_k| \le \frac{\|f'''\|_\infty}{3!\,p_kq_k}\,|D_kF|^3, \qquad |\tilde R_k| \le \frac{\|f'''\|_\infty}{2\,p_kq_k}\,|D_kF|^3. \tag{2.11} \]
By (2.9) and (2.10) we also have
\[ (F^+_k-F)^2 = (F^+_k-F^-_k)^2\mathbf 1_{\{X_k=-1\}} \quad\text{and}\quad (F^-_k-F)^2 = (F^+_k-F^-_k)^2\mathbf 1_{\{X_k=1\}}, \]
therefore
\[ (F^+_k-F)^2 - (F^-_k-F)^2 = (F^+_k-F^-_k)^2\big(\mathbf 1_{\{X_k=-1\}}-\mathbf 1_{\{X_k=1\}}\big) = -(F^+_k-F^-_k)^2X_k = -\frac{|D_kF|^2}{p_kq_k}\,X_k. \]
The claim follows substituting this expression into (2.7) and by using (2.11) to estimate the remainder $R^F_k := R^+_k - R^-_k + \tilde R_k$.

Now we present a chain rule for the finite difference operator which is suitable for integer-valued functionals. This chain rule will be used later on for the Poisson approximation. Given a function $f:\mathbb N\to\mathbb R$ we define the operators $\Delta f(k):=f(k+1)-f(k)$, $\Delta^2 f:=\Delta(\Delta f)$.

Proposition 2.2 Let $F\in\mathrm{Dom}(D)$ be an $\mathbb N$-valued random variable. Then, for any $f:\mathbb N\to\mathbb R$ such that $f(F)\in\mathrm{Dom}(D)$, we have
\[ D_kf(F) = \Delta f(F)\,D_kF + R^F_k, \tag{2.12} \]
where
\[ |R^F_k| \le \frac{\|\Delta^2 f\|_\infty}{2}\left( \mathbf 1_{\{X_k=-1\}}\,\frac{D_kF}{\sqrt{p_kq_k}}\Big(\frac{D_kF}{\sqrt{p_kq_k}}-1\Big) + \mathbf 1_{\{X_k=1\}}\,\frac{D_kF}{\sqrt{p_kq_k}}\Big(\frac{D_kF}{\sqrt{p_kq_k}}+1\Big) \right). \tag{2.13} \]
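The chain rule (2.3) with remainder bound (2.4) can be checked by brute force on a functional of finitely many coordinates. The Python sketch below is only an illustration under the stated assumptions: the functional and the choice $f=\sin$ are arbitrary, and the code simply evaluates both sides of (2.3) and verifies (2.4) coordinate by coordinate.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
p = rng.uniform(0.2, 0.8, n); q = 1 - p
f, df, d2f, d3f_sup = np.sin, np.cos, lambda x: -np.sin(x), 1.0   # ||f'''||_inf = 1

def functional(x):          # F(omega) as a smooth function of the signs x_0..x_{n-1}
    Y = (q - p + x) / (2*np.sqrt(p*q))
    return np.sum(Y) + 0.1*np.sum(Y[:-1]*Y[1:])

x = np.where(rng.random(n) < p, 1.0, -1.0)
for k in range(n):
    xp, xm = x.copy(), x.copy(); xp[k], xm[k] = 1.0, -1.0
    F, Fp, Fm = functional(x), functional(xp), functional(xm)
    DkF  = np.sqrt(p[k]*q[k]) * (Fp - Fm)                 # finite difference of F
    Dkf  = np.sqrt(p[k]*q[k]) * (f(Fp) - f(Fm))           # left-hand side of (2.3)
    main = df(F)*DkF - DkF**2/(4*np.sqrt(p[k]*q[k]))*(d2f(Fp) + d2f(Fm))*x[k]
    R    = Dkf - main                                     # remainder R_k^F
    assert abs(R) <= 5/6 * d3f_sup * abs(DkF)**3 / (p[k]*q[k]) + 1e-12   # bound (2.4)
```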

Proof. As shown in the proof of Theorem 3.1 in [5], for any $f:\mathbb N\to\mathbb R$ and any $k,a\in\mathbb N$,
\[ |f(k)-f(a)-\Delta f(a)(k-a)| \le \frac{\|\Delta^2 f\|_\infty}{2}\,(k-a)(k-a-1). \tag{2.14} \]
Therefore, taking first $k=F^+_k$, $a=F$ and then $k=F^-_k$, $a=F$, we deduce
\[ D_kf(F) = \sqrt{p_kq_k}\,(f(F^+_k)-f(F)) - \sqrt{p_kq_k}\,(f(F^-_k)-f(F)) = \sqrt{p_kq_k}\,\Delta f(F)(F^+_k-F) + R^{(1)}_k - \sqrt{p_kq_k}\,\Delta f(F)(F^-_k-F) + R^{(2)}_k, \]
where by (2.14), setting $R^F_k := R^{(1)}_k + R^{(2)}_k$, we have
\[ |R^F_k| \le \frac{\|\Delta^2 f\|_\infty}{2}\,\big( (F^+_k-F)(F^+_k-F-1) + (F^-_k-F)(F^-_k-F-1) \big). \]
The claim follows from (2.9) and (2.10).

Next we give two alternative covariance representation formulas. Let $(\mathcal F_n)_{n\ge-1}$ be the filtration defined by $\mathcal F_{-1}=\{\emptyset,\Omega\}$, $\mathcal F_n=\sigma\{X_0,\dots,X_n\}$, $n\ge0$. By Proposition 1.10.1 of [9], for any $F,G\in\mathrm{Dom}(D)$ with $F$ centered we have the Clark-Ocone covariance representation formula
\[ \mathrm{Cov}(F,G) = E[FG] = E\Big[\sum_{k\ge0}E[D_kF\mid\mathcal F_{k-1}]\,D_kG\Big]. \tag{2.15} \]
The second covariance representation formula involves the inverse of the Ornstein-Uhlenbeck operator. The domain $\mathrm{Dom}(L)$ of the Ornstein-Uhlenbeck operator $L:L^2(\Omega)\to L^2_0(\Omega)$, where $L^2_0(\Omega)$ denotes the subspace of $L^2(\Omega)$ composed of centered random variables, is given by
\[ \mathrm{Dom}(L) = \Big\{F=\sum_{n\ge0}J_n(f_n): \sum_{n\ge1} n^2\,n!\,\|f_n\|^2_{\ell^2(\mathbb N)^{\otimes n}} < \infty\Big\} \]
and, for any $F\in\mathrm{Dom}(L)$,
\[ LF = \sum_{n=1}^\infty nJ_n(f_n). \]
The inverse of $L$, denoted by $L^{-1}$, is defined on $L^2_0(\Omega)$ by
\[ L^{-1}F = \sum_{n=1}^\infty \frac1n J_n(f_n), \]
with the convention $L^{-1}F = L^{-1}(F-E[F])$ in case $F$ is not centered, as in e.g. [5]. Using this convention, for any $F,G\in\mathrm{Dom}(D)$ we have
\[ \mathrm{Cov}(F,G) = E[G(F-E[F])] = E\big[\langle DG, DL^{-1}F\rangle_{\ell^2(\mathbb N)}\big], \tag{2.16} \]
cf. Lemma 2.2 of [4] in the symmetric case.

For the sake of completeness, we provide an alternative expression for the covariance representation formula (2.16). Let $(P_t)_{t\ge0}$ be the semigroup associated to the Ornstein-Uhlenbeck operator $L$ (we refer the reader to Section 10 of [8] for the details). Then $P_tJ_n(f_n)=e^{-nt}J_n(f_n)$, $n\ge1$, and so for any $F=\sum_{n=0}^\infty J_n(f_n)\in\mathrm{Dom}(D)$ one has
\[ \int_0^\infty e^{-t}P_tD_kF\,dt = \sum_{n=1}^\infty n\int_0^\infty e^{-t}P_tJ_{n-1}(f_n(\cdot,k))\,dt = \sum_{n=1}^\infty n\int_0^\infty e^{-t}e^{-(n-1)t}J_{n-1}(f_n(\cdot,k))\,dt = \sum_{n=1}^\infty J_{n-1}(f_n(\cdot,k)) = D_kL^{-1}F. \]
Consequently, the covariance representation (2.16) may be rewritten as
\[ \mathrm{Cov}(F,G) = E[G(F-E[F])] = E\Big[\sum_{k=0}^\infty\int_0^\infty e^{-t}\,D_kG\,P_tD_kF\,dt\Big], \]
for any $F,G\in\mathrm{Dom}(D)$, cf. Proposition 1.10.2 of [9].

3 Normal approximation of Bernoulli functionals

In this section we present two different upper bounds for the quantity $|E[h(F)]-E[h(Z)]|$, $h\in C^2_b$. The first one is obtained by using the covariance representation formula (2.15), while the second one, obtained by using the covariance representation formula (2.16), is a strict extension of the bound given in Theorem 3.1 of [4]. Before proceeding further we recall some necessary background on the Stein method for the normal approximation and refer to [1], [10], [24], [25] and to [4] for more insight into this technique.

Stein's method for normal approximation

Let $Z$ be a standard $N(0,1)$ normal random variable and consider the so-called Stein equation associated with $h:\mathbb R\to\mathbb R$:
\[ h(x) - E[h(Z)] = f'(x) - xf(x), \qquad x\in\mathbb R. \]
We refer to part (ii) of Lemma 2.5 in [4] for the following lemma. More precisely, the estimates on the first and second derivative are proved in Lemma II.3 of [25], the estimate of the third derivative is proved in Theorem 1.1 of [6] and the alternative estimate on the first derivative may be found in [1] and [10].

Lemma 3.1 If $h\in C^2_b$, then the Stein equation has a solution $f_h$ which is thrice differentiable and such that $\|f'_h\|_\infty\le4\|h\|_\infty$, $\|f''_h\|_\infty\le2\|h'\|_\infty$ and $\|f'''_h\|_\infty\le2\|h''\|_\infty$. We also have $\|f'_h\|_\infty\le\|h''\|_\infty$.

Combining the Stein equation with this lemma, for a generic square integrable and centered random variable $F$ we have
\[ E[h(F)] - E[h(Z)] = E[f'_h(F) - Ff_h(F)]. \tag{3.1} \]
Let $(F_n)_{n\ge1}$ be a sequence of square integrable and centered random variables. If
\[ E[h(F_n)] - E[h(Z)] \to 0, \qquad \forall\, h\in C^2_b, \]
then $(F_n)_{n\ge1}$ converges to $Z$ in distribution as $n$ tends to infinity, and so an upper bound for the right-hand side of (3.1) may provide information about this normal approximation.

The results of Sections 3.1 and 3.2 below are given in terms of bounds for $|E[h(F)]-E[h(Z)]|$, for test functions in $C^2_b$, and they are applied in Section 3.3 to derive bounds for the Fortet-Mourier distance between the laws of two random variables $X$ and $Y$, which is defined by
\[ d_{FM}(X,Y) = \sup_{h\in\mathrm{FM}} |E[h(X)] - E[h(Y)]|, \tag{3.2} \]
where $\mathrm{FM}$ is the class of functions $h$ such that $\|h\|_{BL} := \|h\|_L + \|h\|_\infty \le 1$, where $\|\cdot\|_L$ denotes the standard Lipschitz semi-norm. Clearly, any $h\in\mathrm{FM}$ is Lipschitz with Lipschitz constant less than or equal to $1$, and so it is Lebesgue-a.e. differentiable with $\|h'\|_\infty\le1$. One can also show that $d_{FM}$ metrizes the convergence in distribution, see e.g. Chapter 11 in [7].
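For readers who want to experiment with the Stein equation, the sketch below uses the classical integral form of its solution, $f_h(x)=e^{x^2/2}\int_{-\infty}^x(h(y)-E[h(Z)])e^{-y^2/2}\,dy$ (standard, but not spelled out in the text), to verify numerically the first two derivative bounds of Lemma 3.1 for one particular test function $h\in C^2_b$; the choice $h=\tanh$ and the grid are arbitrary.

```python
import numpy as np
from scipy import integrate, stats

h = np.tanh                          # ||h||_inf = 1, ||h'||_inf = 1
Eh_Z = integrate.quad(lambda y: h(y)*stats.norm.pdf(y), -np.inf, np.inf)[0]

def f_h(x):
    # f_h(x) = [ int_{-inf}^x (h(y) - E[h(Z)]) phi(y) dy ] / phi(x)
    val = integrate.quad(lambda y: (h(y) - Eh_Z)*stats.norm.pdf(y), -np.inf, x)[0]
    return val / stats.norm.pdf(x)

xs = np.linspace(-4, 4, 401)
fh = np.array([f_h(x) for x in xs])
fh1 = np.gradient(fh, xs)            # numerical first derivative of f_h
fh2 = np.gradient(fh1, xs)           # numerical second derivative of f_h

print(np.max(np.abs(fh1)), "<=", 4 * 1.0)   # ||f_h'||_inf  <= 4 ||h||_inf
print(np.max(np.abs(fh2)), "<=", 2 * 1.0)   # ||f_h''||_inf <= 2 ||h'||_inf
```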

3.1 Clark-Ocone bound

Theorem 3.2 Let $F\in\mathrm{Dom}(D)$ be a centered random variable and assume that
\[ B_1 := E\Big[\Big|1-\sum_{k\ge0}E[D_kF\mid\mathcal F_{k-1}]\,D_kF\Big|\Big], \qquad B_2 := \sum_{k\ge0}\frac{|1-2p_k|}{\sqrt{p_kq_k}}\,E\big[|E[D_kF\mid\mathcal F_{k-1}]|\,|D_kF|^2\big], \tag{3.3} \]
and
\[ B_3 := \frac53\sum_{k\ge0}\frac{1}{p_kq_k}\,E\big[|E[D_kF\mid\mathcal F_{k-1}]|\,|D_kF|^3\big] \tag{3.4} \]
are finite. Then we have
\[ |E[h(F)]-E[h(Z)]| \le B_1\min\{4\|h\|_\infty,\|h''\|_\infty\} + \|h'\|_\infty B_2 + \|h''\|_\infty B_3 \tag{3.5} \]
for all $h\in C^2_b$.

Proof. Since the first derivative of $f_h$ is bounded we have that $f_h$ is Lipschitz. So $f_h(F)\in L^2(\Omega)$ and
\[ |D_kf_h(F)| = \sqrt{p_kq_k}\,|f_h(F^+_k)-f_h(F^-_k)| \le \|f'_h\|_\infty\,|D_kF|. \]
Consequently we have $E[\|Df_h(F)\|^2_{\ell^2(\mathbb N)}] \le \|f'_h\|^2_\infty E[\|DF\|^2_{\ell^2(\mathbb N)}]$ and $f_h(F)\in\mathrm{Dom}(D)$. Since $F$ is centered, by the covariance representation (2.15) and the chain rule of Proposition 2.1 we have
\begin{align*}
E[Ff_h(F)] &= E\Big[\sum_{k\ge0}E[D_kF\mid\mathcal F_{k-1}]\,D_kf_h(F)\Big] \\
&= E\Big[\sum_{k\ge0}E[D_kF\mid\mathcal F_{k-1}]\,D_kF\,f'_h(F)\Big] - E\Big[\sum_{k\ge0}E[D_kF\mid\mathcal F_{k-1}]\,\frac{|D_kF|^2}{4\sqrt{p_kq_k}}\big(f''_h(F^+_k)+f''_h(F^-_k)\big)X_k\Big] \\
&\quad + E\Big[\sum_{k\ge0}E[D_kF\mid\mathcal F_{k-1}]\,R^F_k(h)\Big]. \tag{3.6}
\end{align*}
Note that the three expectations in (3.6) are finite. The first one since $DF\in L^2(\Omega\times\mathbb N)$ and $f'_h$ is bounded; indeed, by Jensen's inequality,
\begin{align*}
E\Big[\sum_{k\ge0}|E[D_kF\mid\mathcal F_{k-1}]|\,|D_kF|\,|f'_h(F)|\Big] &\le 4\|h\|_\infty\,E\Big[\sum_{k\ge0}|E[D_kF\mid\mathcal F_{k-1}]|\,|D_kF|\Big] \\
&= 4\|h\|_\infty\,E\Big[\sum_{k\ge0}|E[D_kF\mid\mathcal F_{k-1}]|\,E[|D_kF|\mid\mathcal F_{k-1}]\Big] \\
&\le 4\|h\|_\infty\,E\Big[\sum_{k\ge0}\big(E[|D_kF|\mid\mathcal F_{k-1}]\big)^2\Big] \\
&\le 4\|h\|_\infty\,E\Big[\sum_{k\ge0}E[|D_kF|^2\mid\mathcal F_{k-1}]\Big] = 4\|h\|_\infty\sum_{k\ge0}E[|D_kF|^2] < \infty;
\end{align*}
the second relation follows from the boundedness of $f''_h$ and (3.3), while the third one follows from (2.4) and (3.4). The random variables $E[D_kF\mid\mathcal F_{k-1}]$, $D_kF$, $F^\pm_k$ are independent of $X_k$ (the first one because it is $\mathcal F_{k-1}$-measurable and the random variables $(X_k)_{k\in\mathbb N}$ are independent, the others by their definition). Therefore, the equality (3.6) reduces to
\begin{align*}
E[Ff_h(F)] &= E\Big[f'_h(F)\sum_{k\ge0}E[D_kF\mid\mathcal F_{k-1}]\,D_kF\Big] \tag{3.7} \\
&\quad + \sum_{k\ge0}\frac{1-2p_k}{4\sqrt{p_kq_k}}\,E\big[E[D_kF\mid\mathcal F_{k-1}]\,|D_kF|^2\big(f''_h(F^+_k)+f''_h(F^-_k)\big)\big] + E\Big[\sum_{k\ge0}E[D_kF\mid\mathcal F_{k-1}]\,R^F_k(h)\Big]. \tag{3.8}
\end{align*}
Inserting this expression into the right-hand side of (3.1) we deduce
\begin{align*}
|E[h(F)]-E[h(Z)]| &\le B_1\min\{4\|h\|_\infty,\|h''\|_\infty\} + \|h'\|_\infty B_2 \tag{3.9} \\
&\quad + E\Big[\sum_{k\ge0}|E[D_kF\mid\mathcal F_{k-1}]|\,|R^F_k(h)|\Big], \tag{3.10}
\end{align*}
where to get the term (3.9) we used the inequalities $\|f'_h\|_\infty\le\min\{4\|h\|_\infty,\|h''\|_\infty\}$ and $\|f''_h\|_\infty\le2\|h'\|_\infty$ (see Lemma 3.1). Using (2.4) one may easily see that the term in (3.10) is bounded above by $\|h''\|_\infty B_3$. The proof is complete.

Corollary 3.3 Let $F\in\mathrm{Dom}(D)$ be a centered random variable and assume that
\[ B_1 := \big|1-\|F\|^2_{L^2(\Omega)}\big| + \Big\|\langle D_\cdot F, E[D_\cdot F\mid\mathcal F_{\cdot-1}]\rangle_{\ell^2(\mathbb N)} - E\big[\langle D_\cdot F, E[D_\cdot F\mid\mathcal F_{\cdot-1}]\rangle_{\ell^2(\mathbb N)}\big]\Big\|_{L^2(\Omega)}, \]
\[ B_2 := \sum_{k\ge0}\frac{|1-2p_k|}{\sqrt{p_kq_k}}\,\|D_kF\|_{L^2(\Omega)}\sqrt{E[|D_kF|^4]}, \tag{3.11} \]
\[ B_3 := \frac53\sum_{k\ge0}\frac{1}{p_kq_k}\,E[|D_kF|^4] \]
are finite. Then (3.5) holds for all $h\in C^2_b$.

Proof. By the Cauchy-Schwarz and the triangular inequalities we have
\[ E\Big[\Big|1-\sum_{k\ge0}E[D_kF\mid\mathcal F_{k-1}]D_kF\Big|\Big] \le \Big\|1-\sum_{k\ge0}E[D_kF\mid\mathcal F_{k-1}]D_kF\Big\|_{L^2(\Omega)} \le \big|1-\|F\|^2_{L^2(\Omega)}\big| + \Big\|\langle D_\cdot F, E[D_\cdot F\mid\mathcal F_{\cdot-1}]\rangle_{\ell^2(\mathbb N)} - \|F\|^2_{L^2(\Omega)}\Big\|_{L^2(\Omega)}. \]
By the Clark-Ocone formula (2.15) we have
\[ E\big[\langle D_\cdot F, E[D_\cdot F\mid\mathcal F_{\cdot-1}]\rangle_{\ell^2(\mathbb N)}\big] = E\Big[\sum_{k=0}^\infty D_kF\,E[D_kF\mid\mathcal F_{k-1}]\Big] = \|F\|^2_{L^2(\Omega)}. \]
Therefore
\[ E\Big[\Big|1-\sum_{k\ge0}E[D_kF\mid\mathcal F_{k-1}]D_kF\Big|\Big] \le B_1. \]
By the Cauchy-Schwarz and Jensen inequalities we have
\[ E\big[|E[D_kF\mid\mathcal F_{k-1}]|\,|D_kF|^2\big] \le \sqrt{E[|E[D_kF\mid\mathcal F_{k-1}]|^2]}\,\sqrt{E[|D_kF|^4]} \le \sqrt{E\big[E[|D_kF|^2\mid\mathcal F_{k-1}]\big]}\,\sqrt{E[|D_kF|^4]} = \sqrt{E[|D_kF|^2]}\,\sqrt{E[|D_kF|^4]}, \]
and
\[ E\big[|E[D_kF\mid\mathcal F_{k-1}]|\,|D_kF|^3\big] \le \sqrt{E\big[|E[D_kF\mid\mathcal F_{k-1}]|^2|D_kF|^2\big]}\,\sqrt{E[|D_kF|^4]} \le \sqrt{E\big[E[|D_kF|^2\mid\mathcal F_{k-1}]\,|D_kF|^2\big]}\,\sqrt{E[|D_kF|^4]} \le \sqrt{E[|D_kF|^4]}\,\sqrt{E[|D_kF|^4]} = E[|D_kF|^4]. \]
The claim follows from Theorem 3.2.

3.2 Semigroup bound

Theorem 3.4 Let $F\in\mathrm{Dom}(D)$ be a centered random variable and let
\[ C_1 := E\big[\,|1-\langle DF, DL^{-1}F\rangle_{\ell^2(\mathbb N)}|\,\big], \]

\[ C_2 := \sum_{k\ge0}\frac{|1-2p_k|}{\sqrt{p_kq_k}}\,E\big[|D_kL^{-1}F|\,|D_kF|^2\big], \tag{3.12} \]
\[ C_3 := \frac53\sum_{k\ge0}\frac{1}{p_kq_k}\,E\big[|D_kL^{-1}F|\,|D_kF|^3\big] \tag{3.13} \]
be finite. Then for all $h\in C^2_b$ we have
\[ |E[h(F)]-E[h(Z)]| \le C_1\min\{4\|h\|_\infty,\|h''\|_\infty\} + \|h'\|_\infty C_2 + \|h''\|_\infty C_3. \tag{3.14} \]

Proof. Although the proof is similar to that of Theorem 3.2, we give the details since some points need a different justification. As in the proof of Theorem 3.2 one has $f_h(F)\in\mathrm{Dom}(D)$. Since $F$ is centered, by the covariance representation (2.16) and the chain rule of Proposition 2.1 we have
\begin{align*}
E[Ff_h(F)] &= E\Big[\sum_{k\ge0}D_kf_h(F)\,D_kL^{-1}F\Big] \\
&= E\Big[\sum_{k\ge0}D_kF\,f'_h(F)\,D_kL^{-1}F\Big] - E\Big[\sum_{k\ge0}X_k\,\frac{|D_kF|^2}{4\sqrt{p_kq_k}}\big(f''_h(F^+_k)+f''_h(F^-_k)\big)D_kL^{-1}F\Big] + E\Big[\sum_{k\ge0}D_kL^{-1}F\,R^F_k(h)\Big]. \tag{3.15}
\end{align*}
Note that the three expectations in (3.15) are finite. The first one since $DF\in L^2(\Omega\times\mathbb N)$ and $f'_h$ is bounded; indeed
\begin{align*}
E\Big[\sum_{k\ge0}|D_kL^{-1}F|\,|D_kF|\,|f'_h(F)|\Big] &\le 4\|h\|_\infty\,E\Big[\sum_{k\ge0}|D_kL^{-1}F|\,|D_kF|\Big] \le 4\|h\|_\infty\Big(E\Big[\sum_{k\ge0}|D_kL^{-1}F|^2\Big]\Big)^{1/2}\Big(E\Big[\sum_{k\ge0}|D_kF|^2\Big]\Big)^{1/2} \\
&= 4\|h\|_\infty\big(E[\|DL^{-1}F\|^2_{\ell^2(\mathbb N)}]\big)^{1/2}\big(E[\|DF\|^2_{\ell^2(\mathbb N)}]\big)^{1/2} \le 4\|h\|_\infty\,E[\|DF\|^2_{\ell^2(\mathbb N)}] < \infty,
\end{align*}
where for the latter inequality we used the relation $E[\|DL^{-1}F\|^2_{\ell^2(\mathbb N)}] \le E[\|DF\|^2_{\ell^2(\mathbb N)}]$ (see Lemma 2.3(3) in [4]); the second one due to the boundedness of $f''_h$ and (3.12); the third one due to (2.4) and (3.13). By Lemma 2.3 in [4] we have that the random variables $D_kL^{-1}F$, $D_kF$ and $F^\pm_k$ are independent of $X_k$. Therefore, the equality (3.15) reduces to
\begin{align*}
E[Ff_h(F)] &= E\Big[f'_h(F)\sum_{k\ge0}D_kF\,D_kL^{-1}F\Big] + \sum_{k\ge0}\frac{1-2p_k}{4\sqrt{p_kq_k}}\,E\big[|D_kF|^2\big(f''_h(F^+_k)+f''_h(F^-_k)\big)D_kL^{-1}F\big] \\
&\quad + E\Big[\sum_{k\ge0}R^F_k(h)\,D_kL^{-1}F\Big]. \tag{3.16}
\end{align*}
Inserting this expression into the right-hand side of (3.1) we deduce
\begin{align*}
|E[h(F)]-E[h(Z)]| &\le C_1\min\{4\|h\|_\infty,\|h''\|_\infty\} + \|h'\|_\infty C_2 \tag{3.17} \\
&\quad + E\Big[\sum_{k\ge0}|D_kL^{-1}F|\,|R^F_k(h)|\Big], \tag{3.18}
\end{align*}
where to get the term (3.17) we used the inequalities $\|f'_h\|_\infty\le\min\{4\|h\|_\infty,\|h''\|_\infty\}$ and $\|f''_h\|_\infty\le2\|h'\|_\infty$ (see Lemma 3.1). Using (2.4) one may easily see that the term in (3.18) is bounded above by $\|h''\|_\infty C_3$. The proof is complete.

Note that, formally, the upper bound (3.5) may be obtained from (3.14) by substituting the term $D_kL^{-1}F$ in the definitions of $C_1$, $C_2$, $C_3$ with $E[D_kF\mid\mathcal F_{k-1}]$, and vice versa.

Corollary 3.5 Let $F\in\mathrm{Dom}(D)$ be a centered random variable and let
\[ C_1 := \big|1-\|F\|^2_{L^2(\Omega)}\big| + \Big\|\langle DF, DL^{-1}F\rangle_{\ell^2(\mathbb N)} - E\big[\langle DF, DL^{-1}F\rangle_{\ell^2(\mathbb N)}\big]\Big\|_{L^2(\Omega)}, \]
$C_2 := B_2$, where $B_2$ is defined by (3.11), and $C_3$ defined by (3.13), be finite. Then (3.14) holds for all $h\in C^2_b$.

Proof. By the Cauchy-Schwarz and the triangular inequalities we have
\[ E\big[\,|1-\langle DF, DL^{-1}F\rangle_{\ell^2(\mathbb N)}|\,\big] \le \big\|1-\langle DF, DL^{-1}F\rangle_{\ell^2(\mathbb N)}\big\|_{L^2(\Omega)} \le \big|1-\|F\|^2_{L^2(\Omega)}\big| + \Big\|\langle DF, DL^{-1}F\rangle_{\ell^2(\mathbb N)} - \|F\|^2_{L^2(\Omega)}\Big\|_{L^2(\Omega)}. \]
By the covariance representation formula (2.16) we have $E[\langle DF, DL^{-1}F\rangle_{\ell^2(\mathbb N)}] = \|F\|^2_{L^2(\Omega)}$. Therefore
\[ E\big[\,|1-\langle DF, DL^{-1}F\rangle_{\ell^2(\mathbb N)}|\,\big] \le C_1. \]

Let $F\in\mathrm{Dom}(D)$ be of the form $F=\sum_{n\ge0}J_n(f_n)$, $f_n\in\ell^2_s(\Delta_n)$. Then
\[ D_kL^{-1}F = \sum_{n\ge1}J_{n-1}(f_n(\cdot,k)) \quad\text{and}\quad D_kF = \sum_{n\ge1}nJ_{n-1}(f_n(\cdot,k)). \]
So, by the isometry formula, we have
\[ E[|D_kL^{-1}F|^2] = E\Big[\Big(\sum_{n\ge1}J_{n-1}(f_n(\cdot,k))\Big)^2\Big] = \sum_{n\ge1}E[|J_{n-1}(f_n(\cdot,k))|^2] = \sum_{n\ge1}(n-1)!\,\|f_n(\cdot,k)\|^2_{\ell^2(\mathbb N)^{\otimes(n-1)}} \]
and
\[ E[|D_kF|^2] = E\Big[\Big(\sum_{n\ge1}nJ_{n-1}(f_n(\cdot,k))\Big)^2\Big] = \sum_{n\ge1}n^2E[|J_{n-1}(f_n(\cdot,k))|^2] = \sum_{n\ge1}n^2(n-1)!\,\|f_n(\cdot,k)\|^2_{\ell^2(\mathbb N)^{\otimes(n-1)}}. \]
So
\[ E[|D_kL^{-1}F|^2] \le E[|D_kF|^2] \tag{3.19} \]
and, by the Cauchy-Schwarz inequality, we deduce
\[ E\big[|D_kL^{-1}F|\,|D_kF|^2\big] \le \sqrt{E[|D_kL^{-1}F|^2]}\,\sqrt{E[|D_kF|^4]} \le \sqrt{E[|D_kF|^2]}\,\sqrt{E[|D_kF|^4]}. \]
The claim follows from Theorem 3.4.

3.3 Fortet-Mourier distance

In this section we provide bounds in the Fortet-Mourier distance (3.2).

Theorem 3.6 Let $F\in\mathrm{Dom}(D)$ be centered. We have:
(i) If (3.5) holds for any $h\in C^2_b$ and $B_1+B_3 \le (5+E[|F|])/4$, then
\[ d_{FM}(F,Z) \le \sqrt{2(B_1+B_3)(5+E[|F|])} + B_2. \tag{3.20} \]
(ii) If (3.14) holds for any $h\in C^2_b$ and $C_1+C_3 \le (5+E[|F|])/4$, then
\[ d_{FM}(F,Z) \le \sqrt{2(C_1+C_3)(5+E[|F|])} + C_2. \tag{3.21} \]

Proof. We only give the details for the proof of (3.20). The inequality (3.21) can be proved similarly. Take $h\in\mathrm{FM}$ and define
\[ h_t(x) = \int_{\mathbb R} h(\sqrt t\,y + \sqrt{1-t}\,x)\,\phi(y)\,dy, \qquad t\in[0,1], \]
where $\phi$ is the density of the standard $N(0,1)$ normal random variable $Z$. As in the proof of Corollary 3.6 in [4], for $0<t\le1/2$, one has $h_t\in C^2_b$ and the bounds
\[ \|h''_t\|_\infty \le 1/\sqrt t, \tag{3.22} \]
and
\[ |E[h(F)]-E[h_t(F)]| \le \frac{\sqrt t}{2}\,(2+E[|F|]), \qquad |E[h(Z)]-E[h_t(Z)]| \le \frac{3\sqrt t}{2}. \]
So
\begin{align*}
E[h(F)]-E[h(Z)] &= (E[h(F)]-E[h_t(F)]) + (E[h_t(F)]-E[h_t(Z)]) + (E[h_t(Z)]-E[h(Z)]) \\
&\le |E[h(F)]-E[h_t(F)]| + |E[h_t(F)]-E[h_t(Z)]| + |E[h_t(Z)]-E[h(Z)]| \\
&\le \frac{\sqrt t}{2}(2+E[|F|]) + \frac{3\sqrt t}{2} + B_1\min\{4\|h_t\|_\infty,\|h''_t\|_\infty\} + \|h'_t\|_\infty B_2 + \|h''_t\|_\infty B_3 \\
&\le \sqrt t\,\frac{5+E[|F|]}{2} + \frac{B_1+B_3}{\sqrt t} + B_2, \tag{3.23}
\end{align*}
where in the latter inequality we used (3.22) and that $\|h'_t\|_\infty\le1$ for all $t$. Minimizing in $t\in(0,1/2]$ the term in (3.23), we have that the optimum is attained at $t = 2(B_1+B_3)/(5+E[|F|]) \in (0,1/2]$. The conclusion follows substituting this $t$ in (3.23) and then taking the supremum over all the $h\in\mathrm{FM}$.

4 First chaos bound for the normal approximation

In this section we specialize the results of Section 3 to first order discrete stochastic integrals. As we shall see, the bounds (3.5) and (3.14) (and the corresponding assumptions) coincide on the first chaos, although they differ on $n$-chaoses, $n\ge2$.

Corollary 4.1 Assume that $\alpha=(\alpha_k)_{k\ge0}$ is in $\ell^2(\mathbb N)$,
\[ \sum_{k\ge0}\frac{|1-2p_k|}{\sqrt{p_kq_k}}\,|\alpha_k|^3 < \infty \quad\text{and}\quad \sum_{k\ge0}\frac{\alpha_k^4}{p_kq_k} < \infty. \tag{4.1} \]
Then for the first chaos $F = J_1(\alpha) = \sum_{k\ge0}\alpha_kY_k$ the bound (3.5) (which in this case coincides with the bound (3.14)) holds with
\[ B_1 = C_1 = \big|1-\|\alpha\|^2_{\ell^2(\mathbb N)}\big|, \qquad B_2 = C_2 = \sum_{k\ge0}\frac{|1-2p_k|}{\sqrt{p_kq_k}}\,|\alpha_k|^3, \]
and
\[ B_3 = C_3 = \frac53\sum_{k\ge0}\frac{\alpha_k^4}{p_kq_k}. \]

Proof. Since $\alpha\in\ell^2(\mathbb N)$ we have that $F\in L^2(\Omega)$. Moreover $F$ is centered, and since
\[ D_kF = \alpha_k\sqrt{p_kq_k}\Big(\frac{q_k-p_k+1}{2\sqrt{p_kq_k}} - \frac{q_k-p_k-1}{2\sqrt{p_kq_k}}\Big) = \alpha_k, \]
we have $F\in\mathrm{Dom}(D)$. The finiteness of the corresponding quantities $B_1$, $B_2$ and $B_3$ is guaranteed by $\alpha\in\ell^2(\mathbb N)$ and (4.1). The claim follows from e.g. Theorem 3.2.

Example

Consider the sequence of functionals $(F_n)_{n\ge1}$ defined by
\[ F_n = \frac{1}{\sqrt n}\sum_{k=0}^{n-1}Y_k. \]
Setting $\alpha_k = 1/\sqrt n$, $k=0,\dots,n-1$, and $\alpha_k=0$, $k\ge n$, we have $B_1=0$ and
\[ B_2 = \frac{1}{n^{3/2}}\sum_{k=0}^{n-1}\frac{|1-2p_k|}{\sqrt{p_kq_k}} \quad\text{and}\quad B_3 = \frac{5}{3n^2}\sum_{k=0}^{n-1}\frac{1}{p_kq_k}. \]
In the symmetric case $p_k=q_k=1/2$ we find $B_2=0$ and the bound is of order $1/n$, implying a faster rate than in the classical Berry-Esséen estimate (however here we are using $C^2_b$ test functions; cf. the comment after Corollary 3.3 in [4]).
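As a quick numerical companion to Corollary 4.1 and Theorem 3.6, the following Python sketch (with an arbitrary choice of $n$ and of a constant success probability; not part of the paper) evaluates $B_1$, $B_2$, $B_3$ from the formulas above and the resulting Fortet-Mourier bound (3.20), estimating $E[|F_n|]$ by Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
p = np.full(n, 0.3)                  # non-symmetric case, p_k = 0.3 (arbitrary)
q = 1 - p
alpha = np.full(n, 1 / np.sqrt(n))   # F_n = n^{-1/2} sum_k Y_k

B1 = abs(1 - np.sum(alpha**2))
B2 = np.sum(np.abs(1 - 2*p) * np.abs(alpha)**3 / np.sqrt(p*q))
B3 = (5/3) * np.sum(alpha**4 / (p*q))

# Monte Carlo estimate of E|F_n|, needed in the Fortet-Mourier bound (3.20)
X = np.where(rng.random((100_000, n)) < p, 1.0, -1.0)
Y = (q - p + X) / (2*np.sqrt(p*q))
F = Y @ alpha
EabsF = np.abs(F).mean()

if B1 + B3 <= (5 + EabsF) / 4:       # condition of Theorem 3.6(i)
    bound = np.sqrt(2*(B1 + B3)*(5 + EabsF)) + B2
    print(f"B1={B1:.3g}, B2={B2:.3g}, B3={B3:.3g}, d_FM bound={bound:.3g}")
```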

In the non-symmetric case $p_k=p$ and $q_k=q$, $p\ne q$, the bound is of order $n^{-1/2}$ as in the classical Berry-Esséen estimate. Indeed we have
\[ B_2 = B_2^{(n)} = \frac{|1-2p|}{\sqrt{p(1-p)}\,\sqrt n} \quad\text{and}\quad B_3 = B_3^{(n)} = \frac{5}{3\,p(1-p)\,n}, \]
hence the inequality $B_1+B_3\le(5+E[|F_n|])/4$ of Theorem 3.6 reads
\[ \frac{5}{3\,p(1-p)\,n} \le \frac54 + \frac14\,E\Big[\Big|\frac{1}{\sqrt n}\sum_{k=0}^{n-1}\frac{1-2p+X_k}{2\sqrt{p(1-p)}}\Big|\Big], \]
which holds if e.g. $n \ge \frac{4}{3p(1-p)}$. Consequently, by (3.20) it follows that for any $n \ge \frac{4}{3p(1-p)}$ we have
\begin{align*}
d_{FM}(F_n,Z) &\le \sqrt{2B_3^{(n)}\big(5+E[|F_n|]\big)} + B_2^{(n)} = \sqrt{\frac{50}{3p(1-p)n} + \frac{10}{3p(1-p)n}\,E\Big[\Big|\frac{1}{\sqrt n}\sum_{k=0}^{n-1}\frac{1-2p+X_k}{2\sqrt{p(1-p)}}\Big|\Big]} + \frac{|1-2p|}{\sqrt{p(1-p)}\,\sqrt n} \\
&\le \sqrt{\frac{50}{3p(1-p)n}} + \sqrt{\frac{10}{3p(1-p)n}\,E\Big[\Big|\frac{1}{\sqrt n}\sum_{k=0}^{n-1}\frac{1-2p+X_k}{2\sqrt{p(1-p)}}\Big|\Big]} + \frac{|1-2p|}{\sqrt{p(1-p)}\,\sqrt n}.
\end{align*}
A straightforward computation gives
\[ E\Big[\Big|\frac{1}{\sqrt n}\sum_{k=0}^{n-1}\frac{1-2p+X_k}{2\sqrt{p(1-p)}}\Big|^2\Big] = 1, \]
hence
\[ d_{FM}(F_n,Z) \le \frac{K_1(p)}{\sqrt n}, \tag{4.2} \]
where
\[ K_1(p) := \sqrt{\frac{50}{3p(1-p)}} + \sqrt{\frac{10}{3p(1-p)}} + \frac{|1-2p|}{\sqrt{p(1-p)}}. \]
In the general case, if
\[ a_n := \frac{1}{n^{3/2}}\sum_{k=0}^{n-1}\frac{|1-2p_k|}{\sqrt{p_kq_k}} \to 0 \quad\text{and}\quad b_n := \frac{1}{n^2}\sum_{k=0}^{n-1}\frac{1}{p_kq_k} \to 0, \quad\text{as } n\to\infty, \tag{4.3} \]
then $F_n\to Z$ in distribution, and the rate depends on the rate of convergence to zero of the sequences $(a_n)_{n\ge1}$ and $(b_n)_{n\ge1}$. For instance, if $p_k = (k+2)^{-\alpha}$, $0<\alpha<1$, $k\ge0$, we have
\[ p_kq_k \ge (n+1)^{-\alpha}\big(1-(1/2)^\alpha\big), \qquad k=0,\dots,n-1. \]
Consequently we have
\[ \frac{1}{n^{3/2}}\sum_{k=0}^{n-1}\frac{|1-2p_k|}{\sqrt{p_kq_k}} \le \big(1-(1/2)^\alpha\big)^{-1/2}\,(n+1)^{\alpha/2}\,n^{-1/2} \]
and
\[ \frac{1}{n^2}\sum_{k=0}^{n-1}\frac{1}{p_kq_k} \le \frac{(n+1)^\alpha}{\big(1-(1/2)^\alpha\big)\,n}, \]
which yields a bound of order $n^{-(1-\alpha)/2}$.

Finally we note that the bound (4.2) in the non-symmetric case $p_k=p$ and $q_k=q$, $p\ne q$, is consistent with the bound on the Wasserstein distance between $F_n$ and $Z$ provided by Theorem 2.2 in [4]. Indeed, letting $d_W$ denote the Wasserstein distance and $Y'$ an independent copy of $Y_1$, a simple computation shows that
\begin{align*}
d_{FM}(F_n,Z) &\le d_W(F_n,Z) \tag{4.4} \\
&\le \frac{1}{\sqrt n}\Big(\frac{2\,E[|Y_1-Y'|^4]}{(E[|Y_1-Y'|^2])^2} + E[|Y_1|^3]\Big) = \frac{1}{\sqrt n}\Big(\frac{2\,E[|Y_1-Y'|^4]}{4} + E[|Y_1|^3]\Big) = \frac{K_2(p)}{\sqrt n}, \tag{4.5}
\end{align*}
where, since
\[ E[|Y_1|^3] = \frac{1-2p+2p^2}{\sqrt{p(1-p)}} \quad\text{and}\quad E[|Y_1-Y'|^4] = \frac{2}{p(1-p)}, \]
we have
\[ K_2(p) := \frac{1}{p(1-p)} + \frac{1-2p+2p^2}{\sqrt{p(1-p)}}. \]
We note that when e.g. $p$ is small it holds $K_2(p) > K_1(p)$.

Application to determinantal processes

Let $E$ be a locally compact Hausdorff space with countable basis and $\mathcal B(E)$ the Borel $\sigma$-field. We fix a Radon measure $\lambda$ on $(E,\mathcal B(E))$. The configuration space $\Gamma_E$ is the family of non-negative $\mathbb N$-valued Radon measures on $E$. We equip $\Gamma_E$ with the topology which is generated by the functions $\Gamma_E \ni \xi \mapsto \xi(A) \in \mathbb N$, $A\in\mathcal B(E)$, where $\xi(A)$ denotes the number of points of $\xi$ in $A$. The existence and uniqueness of a determinantal process with Hermitian kernel $K$ is due to Macchi [2] and Soshnikov [22] and can be summarized as follows (we refer the reader to [3] for notions of functional analysis).

Theorem 4.2 Let $\mathcal K$ be a self-adjoint integral operator on $L^2(E,\lambda)$ with kernel $K$. Suppose that the spectrum of $\mathcal K$ is contained in $[0,1]$ and that $\mathcal K$ is locally of trace-class, i.e. for any relatively compact $\Lambda\subset E$, $\mathcal K_\Lambda = P_\Lambda\mathcal KP_\Lambda$ is of trace-class (here $P_\Lambda f = f\mathbf 1_\Lambda$ is the orthogonal projection). Then there exists a unique probability measure $\mu_K$ on $\Gamma_E$ with $n$-th correlation measure
\[ \lambda_n(dx_1,\dots,dx_n) = \det(K(x_i,x_j))_{1\le i,j\le n}\,\lambda(dx_1)\cdots\lambda(dx_n), \]
where $\det(K(x_i,x_j))_{1\le i,j\le n}$ is the determinant of the $n\times n$ matrix with $ij$-entry $K(x_i,x_j)$. The probability measure $\mu_K$ is called determinantal process with kernel $K$.

Given a relatively compact set $\Lambda\subset E$, we focus on the random variable $\xi(\Lambda)$ and recall the following basic result (see e.g. Proposition 2.2 in [2]).

Theorem 4.3 Let $\mathcal K$ be as in the statement of Theorem 4.2 and denote by $\kappa_k\in[0,1]$, $k\ge0$, the eigenvalues of $\mathcal K_\Lambda$. Under $\mu_K$ the random variable $\xi(\Lambda)$ has the same distribution as $\sum_{k\ge0}Z_k$, where $Z_0, Z_1, \dots$ are independent random variables such that $Z_k$ obeys the Bernoulli distribution with mean $\kappa_k$, i.e.
\[ Z_n = \frac{X_n+1}{2} \in \{0,1\}, \qquad n\in\mathbb N, \]
where the $X_k$'s take values in $\{-1,1\}$ and are independent with $P(X_n=1)=\kappa_n$.

The central limit theorem for the number of points on a relatively compact set of a determinantal process may be obtained in different manners, see [5], [2] and [23]. In the following we provide an alternate derivation which gives the rate of the normal approximation.

Corollary 4.4 Let $\mathcal K$ be as in the statement of Theorem 4.2 and $(\Lambda_n)_{n\ge0}\subset E$ be an increasing sequence of relatively compact sets such that
\[ \mathrm{Var}_{\mu_K}(\xi(\Lambda_n)) = \sum_{k\ge0}\kappa_k^{(n)}\big(1-\kappa_k^{(n)}\big) \to \infty, \quad\text{as } n\to\infty, \]
where $\kappa_k^{(n)}\in[0,1]$, $k\ge0$, are the eigenvalues of $\mathcal K_{\Lambda_n}$, $n\ge0$. Setting
\[ F_n = \frac{\xi(\Lambda_n)-E_{\mu_K}[\xi(\Lambda_n)]}{\sqrt{\mathrm{Var}_{\mu_K}(\xi(\Lambda_n))}}, \]
for any $h\in C^2_b$, we have
\[ |E_{\mu_K}[h(F_n)] - E[h(Z)]| \le \|h'\|_\infty B_2^{(n)} + \|h''\|_\infty B_3^{(n)}, \qquad n\ge0, \]
where
\[ B_2^{(n)} = \frac{\sum_{k\ge0}\kappa_k^{(n)}\big(1-\kappa_k^{(n)}\big)\,\big|1-2\kappa_k^{(n)}\big|}{\Big(\sum_{k\ge0}\kappa_k^{(n)}\big(1-\kappa_k^{(n)}\big)\Big)^{3/2}} \quad\text{and}\quad B_3^{(n)} = \frac{5}{3\sum_{k\ge0}\kappa_k^{(n)}\big(1-\kappa_k^{(n)}\big)}. \]
So we have a bound of order $[\mathrm{Var}_{\mu_K}(\xi(\Lambda_n))]^{-1/2}$.

Proof. For $n\ge0$, let $(Z_k^{(n)})_{k\ge0}$ be a sequence of independent $\{0,1\}$-valued random variables with $Z_k^{(n)}\sim\mathrm{Be}(\kappa_k^{(n)})$ and $(Y_k^{(n)})_{k\ge0}$ defined by
\[ Y_k^{(n)} = \frac{Z_k^{(n)}-\kappa_k^{(n)}}{\sqrt{\kappa_k^{(n)}\big(1-\kappa_k^{(n)}\big)}}. \]
By Theorem 4.3 we have
\[ \xi(\Lambda_n) \stackrel{d}{=} \sum_{k\ge0}Z_k^{(n)} = \sum_{k\ge0}\sqrt{\kappa_k^{(n)}\big(1-\kappa_k^{(n)}\big)}\,Y_k^{(n)} + \sum_{k\ge0}\kappa_k^{(n)}, \]
where $\stackrel{d}{=}$ denotes the equality in distribution. Then
\[ F_n = \frac{\xi(\Lambda_n)-E_{\mu_K}[\xi(\Lambda_n)]}{\sqrt{\mathrm{Var}_{\mu_K}(\xi(\Lambda_n))}} \stackrel{d}{=} \sum_{k\ge0}\alpha_k^{(n)}Y_k^{(n)}, \quad\text{where}\quad \alpha_k^{(n)} = \sqrt{\frac{\kappa_k^{(n)}\big(1-\kappa_k^{(n)}\big)}{\sum_{j\ge0}\kappa_j^{(n)}\big(1-\kappa_j^{(n)}\big)}}. \]
We are going to apply Corollary 4.1. Clearly, for any $n\ge0$, the sequence $(\alpha_k^{(n)})_{k\ge0}$ is in $\ell^2(\mathbb N)$. Moreover, for any $n\ge0$,
\[ \sum_{k\ge0}\frac{\big|1-2\kappa_k^{(n)}\big|}{\sqrt{\kappa_k^{(n)}\big(1-\kappa_k^{(n)}\big)}}\,\big|\alpha_k^{(n)}\big|^3 \le \big[\mathrm{Var}_{\mu_K}(\xi(\Lambda_n))\big]^{-1/2} < \infty \]
and
\[ \sum_{k\ge0}\frac{\big|\alpha_k^{(n)}\big|^4}{\kappa_k^{(n)}\big(1-\kappa_k^{(n)}\big)} = \big[\mathrm{Var}_{\mu_K}(\xi(\Lambda_n))\big]^{-1} < \infty. \]
So condition (4.1) is satisfied. Moreover, a straightforward computation gives $B_1 = B_1^{(n)} = 0$, $B_2 = B_2^{(n)}$ and $B_3 = B_3^{(n)}$, and the proof is completed.
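In practice, once the eigenvalues $\kappa_k^{(n)}$ of $\mathcal K_{\Lambda_n}$ are available, the constants of Corollary 4.4 are immediate to evaluate. The sketch below uses a placeholder spectrum (not computed from an actual kernel) purely to illustrate the formulas for $B_2^{(n)}$ and $B_3^{(n)}$.

```python
import numpy as np

kappa = np.linspace(0.99, 0.01, 200)   # placeholder eigenvalues of K restricted to Lambda_n
v = kappa * (1 - kappa)                # kappa_k (1 - kappa_k)
var = v.sum()                          # Var_{mu_K}(xi(Lambda_n))

B2 = np.sum(v * np.abs(1 - 2*kappa)) / var**1.5
B3 = 5 / (3 * var)
print(f"Var = {var:.3f},  B2 = {B2:.4f},  B3 = {B3:.4f}")
# B2 decays like Var^{-1/2} and B3 like Var^{-1}, giving the rate stated after Corollary 4.4.
```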

Example

Let $E=\mathbb C$ and $\lambda$ the standard complex Gaussian measure on $\mathbb C$, i.e.
\[ \lambda(dz) = \frac{1}{\pi}\,e^{-|z|^2}\,dz, \]
where $dz$ is the Lebesgue measure on $\mathbb C$. The Ginibre process $\mu_{\exp}$ is the determinantal process with exponential kernel $K(z,w)=e^{z\bar w}$, where $\bar z$ is the complex conjugate of $z\in\mathbb C$. Let $b(o,n)$ be the complex ball centered at the origin with radius $n$. By Theorem 1.3 in [2] we have
\[ \mathrm{Var}_{\mu_{\exp}}(\xi(b(o,n))) = \frac{n}{\pi}\int_0^{4n^2}\Big(1-\frac{x}{4n^2}\Big)^{1/2}x^{-1/2}e^{-x}\,dx \sim \frac{n}{\sqrt\pi}, \quad\text{as } n\to\infty. \]
So for the Ginibre process Corollary 4.4 provides a bound of order $n^{-1/2}$.

5 nth chaos bounds for the normal approximation

In this section we give explicit upper bounds for the constants $B_i$ and $C_i$, $i=1,2,3$, involved in (3.5) and (3.14), when $F=J_n(f_n)$, $f_n\in\ell^2_s(\Delta_n)$. Our approach is based on the multiplication formula (5.3) below, which extends formula (2.1) in [4] (see the discussion after Proposition 5.1). Given $f_n\in\ell^2_s(\Delta_n)$ and $g_m\in\ell^2_s(\Delta_m)$, the contraction $f_n\star_l^k g_m$, $0\le l\le k$, is defined to be the function of $n+m-k-l$ variables
\[ f_n\star_l^k g_m(a_{l+1},\dots,a_n,b_{k+1},\dots,b_m) := \varphi(a_{l+1})\cdots\varphi(a_k)\; f_n\otimes_l^k g_m(a_{l+1},\dots,a_n,b_{k+1},\dots,b_m), \]
where (cf. the structure equation (2.1))
\[ \varphi(n) = \frac{q_n-p_n}{\sqrt{p_nq_n}}, \qquad n\in\mathbb N, \tag{5.1} \]
and
\[ f_n\otimes_l^k g_m(a_{l+1},\dots,a_n,b_{k+1},\dots,b_m) := \sum_{a_1,\dots,a_l\in\mathbb N} f_n(a_1,\dots,a_n)\,g_m(a_1,\dots,a_k,b_{k+1},\dots,b_m) \]
is the contraction considered in [4] for the symmetric case, see p. 707 therein. By convention, we define $\varphi(a_{l+1})\cdots\varphi(a_k)=1$ if $l=k$ (even when $\varphi\equiv0$). Denote by $f_n\,\widetilde\star_l^k\,g_m$, $0\le l\le k$, the symmetrization of $f_n\star_l^k g_m$. Then, we shall consider the contraction $f_n\star_l^k g_m(a_{l+1},\dots,a_n,b_{k+1},\dots,b_m) :=$

24 = n+m l (a l+,..., a n, b +,..., b m ) f n l g m(a l+,..., a n, b +,..., b m ). (5.2) Note that in the symmetric case p n = q n = /2 we have f n g m = f n g m. However, f n l g m = 0 if l < and so f n l g m f n l g m for l <. The following multiplication formula holds. Proposition 5. We have the chaos expansion provided the functions h n,m,s := J n (f n )J m (g m ) = s 2i 2(s n m) 2(n m) ( n i! i )( m i J n+m s (h n,m,s ), (5.3) )( i s i ) f n s i i g m belong to l 2 s( n+m s ), 0 s 2(n m). Here the symbol s 2i 2(s n m) sum is taen over all the integers i in the interval [s/2, s n m]. means that the Since it is not obvious that formula (5.3) extends the product formula (2.) in [4], it is worthwhile to explain this point in detail. In the symmetric case p n = q n = /2, for any f n l 2 s( n ), g m l 2 s( m ), we have f n s i i g m = 0 if s < 2i, 0 s 2(n m). Therefore, for any fixed 0 s 2(n m) we have h n,m,s = 0 if s/2 is not an integer and ( )( ) n m h n,m,s = (s/2)! f n s/2 s/2 s/2 s/2 g m, if s/2 is an integer. Note that if s/2 is an integer, we have where we used the straightforward relation f n s/2 s/2 g m l 2 s ( n+m s ) = f n s/2 s/2 g m l 2 s ( n+m s ) f n s/2 s/2 g m l 2 ( n+m s ), f l 2 (N) n f l 2 (N) n, (5.4) being f the symmetrization of f. Therefore, by Lemma 2.4() in [4] we have f n s/2 s/2 g m l 2 s( n+m s ), and so h n,m,s l 2 s( n+m s ), for any 0 s 2(n m). By (5.3) we have J n (f n )J m (g m ) = 2(n m) J n+m s (h n,m,s ) 24

25 2(n m) ( )( ) n m = (s/2)! s/2 s/2 n m ( )( n m = r! r r r=0 which is exactly formula (2.) in [4]. We conclude this part with the following lemma. J n+m s (f n s/2 s/2 g m) ) J n+m 2r (f n r r g m ), Lemma 5.2 For any f n l 2 s( n ), g m l 2 s( m ), we have Proof. Note that q 0 f n (, q) l g m (, q) = f n l+ + g m. f n (, q) l g m (, q)(a l+,..., a n, b +,..., b m ) = ϕ(a l+ )... ϕ(a ) f n (a,..., a n, q)g m (a,..., a, b +,..., b m, q), a,...,a l N and so summing up over q N we deduce f n (, q) l g m (, q)(a l+,..., a n, b +,..., b m ) q 0 = ϕ(a l+ )... ϕ(a ) a,...,a l,q N f n (a,..., a n, q)g m (a,..., a, b +,..., b m, q) = ϕ(a l+ )... ϕ(a )f n l+ + g m(a l+,..., a n, b +,..., b m ) = f n l+ + g m(a l+,..., a n, b +,..., b m ). 5. Clar-Ocone bound By e.g. Lemma 4.6 in [8], for the nth-chaos J n (f n ), n 2, f n l 2 s( n ), we have E[J n (f n ) F ] = J n (f n [0,] n), N. Therefore where E[D J n (f n ) F ] = ne[j n (f n (, )) F ] = nj n (f n] ), (5.5) f n] ( ) := f n (, ) [0, ] n ( ). (5.6) 25

26 So by the isometric properties of discrete multiple stochastic integrals we have that the constants B i of Corollary 3.3 are equal, respectively, to B := n! f n 2 l 2 s( n) + n2 J n (f n (, )), J n (f n] ( )) l 2 (N) E[ J n (f n (, )), J n (f n] ( )) l 2 (N)] L 2 (Ω), (5.7) B 2 : = n 3 (n )! 0 2p p q f n (, ) l 2 s ( n ) E[ Jn (f n (, )) 4 ], (5.8) B 3 : = 5n4 3 0 p q E[ J n (f n (, )) 4 ]. (5.9) In the proof of the next theorem we show that these constants can be bounded above by computable quantities. Theorem 5.3 Let n 2 be fixed and let f n l 2 s( n ). Assume that for any N the functions and h () n,n,s := h () n,n,s := s 2i 2(s (n )) s 2i 2(s (n )) belong to l 2 s( 2n 2 s ), 0 s 2n 2, and that B := n! f n 2 l 2 s( n) ( 2n 3 + n 2 (2n 2 s)! 0 s {2i, 2i 2 } 2(s (n )) f n (, ) s i i f n] ( ) l 2 ( 2n 2 s ) B 2 : = n 3 (n )! 0 s {2i, 2i 2 } 2(s (n )) ( ) 2 ( ) n i i! f n (, ) s i i f n] ( ) (5.0) i s i ( ) 2 ( ) n i i! f n (, ) s i i f n (, ) (5.) i s i 0 ( ) 2 ( ) 2 ( n n i i!i 2! i i 2 s i )( i2 f n (, ) s i 2 i 2 f n] ( ) l 2 ( 2n 2 s )) /2, ( 2n 2 2p f n (, ) p q l 2 s ( n ) (2n 2 s)! ( ) 2 ( ) 2 ( n n i i!i 2! i i 2 s i )( i2 s i 2 s i 2 f n (, ) s i i f n (, ) l 2 ( 2n 2 s ) f n (, ) s i 2 i 2 f n (, ) l 2 ( 2n 2 s )) /2, ) ) 26

27 B 3 : = 5n4 3 2n 2 (2n 2 s)! s {2i, 2i 2 } 2(s (n )) 0 ( ) 2 ( ) 2 ( n n i i!i 2! i i 2 s i )( i2 s i 2 ) (5.2) p q f n (, ) s i i f n (, ) l 2 ( 2n 2 s ) f n (, ) s i 2 i 2 f n (, ) l 2 ( 2n 2 s ) (5.3) are finite. Then for all h C 2 b we have Proof. E[h(J n (f n ))] E[h(Z)] B min{4 h, h } + h B 2 + h B 3. The claim follows from Corollary 3.3 if we show that the constants B i defined by (5.7), (5.8) and (5.9) are bounded above by the constants B i defined in the statement, respectively. Step : Proof of B B. By the hypotheses on the functions h () n,n,s and the multiplication formula (5.3), we deduce J n (f n (, ))J n (f n] ) 2n 2 ( ) 2 ( ) n i = i! J 2n 2 s (f n (, ) s i i i s i s 2i 2(s (n )) = (n )!f n (, ) n n f n] ( ) 2n 3 ( ) 2 ( ) n i + i! J 2n 2 s (f n (, ) s i i i s i s 2i 2(s (n )) f n] ( )) f n] ( )). (5.4) Since the constant B in the statement is finite, we have f n (, ) s i i f n] ( ) l 2 ( 2n 2 s ) <, 0 0 s 2n 3, s 2i 2(s (n )). By (5.4) this in turn implies f n (, ) s i i f n] ( ) l 2 s ( 2n 2 s ) <, 0 0 s 2n 3, s 2i 2(s (n )), and so f n (, ) s i i f n] ( ) l 2 s( 2n 2 s ), 0 27

28 0 s 2n 3, s 2i 2(s (n )) (it is worthwhile to note that one can not use Lemma 5.2 to express the infinite sum 0 f n(, ) s i i f n] ( ) since the function of n variables f n] ( ) is not symmetric). Therefore, summing up over 0 in the equality (5.4), we get J n (f n (, )), J n (f n] ( )) l 2 (N) = (n )! 0 2n 3 + f n (, ) n n f n] ( ) s 2i 2(s (n )) ( ) 2 ( ) ( n i i! J 2n 2 s i s i 0 f n (, ) s i i f n] ( ) Taing the mean and noticing that discrete multiple stochastic integrals are centered, we have and so E[ J n (f n (, )), J n (f n] ( )) l 2 (N)] = (n )! 0 f n (, ) n n f n] ( ), ). J n (f n (, )), J n (f n] ( )) l 2 (N) E[ J n (f n (, )), J n (f n] ( )) l 2 (N) 2n 3 ( ) 2 ( ) ( ) n i = i! J 2n 2 s f n (, ) s i i f n] ( ). i s i 0 s 2i 2(s (n )) By means of the orthogonality and isometric properties of discrete multiple stochastic integrals, we have 2n 3 ( ) 2 ( ) ( ) E n i i! J 2n 2 s f n (, ) s i i f n] ( ) 2 i s i s 2i 2(s (n )) 0 2n 3 ( ) 2 ( ) ( ) = E n i i! J 2n 2 s f n (, ) s i i f n] ( ) 2 i s i s 2i 2(s (n )) 0 [( 0,2n 3 ( ) 2 ( ) ( )) n i + E i! J 2n 2 s f n (, ) s i i f n] ( ) i s i s s 2 s 2i 2(s (n )) 0 ( ( ) 2 ( ) ( ))] n i i! J 2n 2 s2 f n (, ) s 2 i i f n] ( ) i s 2 i s 2 2i 2(s 2 (n )) 0 2n 3 ( ) 2 ( ) ( ) = E n i i! J 2n 2 s f n (, ) s i i f n] ( ) 2 i s i s 2i 2(s (n )) 0 [ 2n 3 ( ) 2 ( ) ( ) 2 ( ) n i n i2 = E i! i 2! i s i i 2 s i 2 s {2i, 2i 2 } 2(s (n )) 28

29 J 2n 2 s ( 0 f n (, ) s i i f n] ( ) ) J 2n 2 s ( 0 f n (, ) s i 2 i 2 f n] ( ) )] = 2n 3 (2n 2 s)! 0 f n (, ) s i i s {2i, 2i 2 } 2(s (n )) f n] ( ), 0 ( ) 2 ( ) 2 ( n n i i!i 2! i i 2 s i )( i2 s i 2 f n (, ) s i 2 i 2 f n] ( ) l 2 s ( 2n 2 s ). (5.5) ) By the above relations and (5.7), we deduce B = n! f n 2 l 2 s( n) ( 2n 3 + n 2 (2n 2 s)! 0 f n (, ) s i i s {2i, 2i 2 } 2(s (n )) f n] ( ), 0 Now, note that by the Cauchy-Schwarz inequality ( ) 2 ( ) 2 ( n n i i!i 2! i i 2 s i )( i2 s i 2 f n (, ) s i 2 i 2 f n] ( ) l 2 s ( 2n 2 s )) /2. (5.6) f, g l 2 (N) n f l 2 (N) n g l 2 (N) n, for any f, g l2 (N) n. (5.7) ) By this relation, (5.4) and (5.6) we easily get B B. Step 2: Proof of B i B i, i = 2, 3. By the hypotheses on the functions multiplication formula (5.3), we deduce h () n,n,s and the J n (f n (, )) 2 = 2n 2 s 2i 2(s (n )) By a similar computation as for (5.5), we have E[ J n (f n (, )) 4 ] 2n 2 = E = s 2i 2(s (n )) 2n 2 (2n 2 s)! ( ) 2 ( ) n i i! J 2n 2 s (f n (, ) s i i i s i f n (, )). ( ) 2 ( ) 2 n i ( i! J 2n 2 s fn (, ) s i i f n (, ) ) i s i s {2i, 2i 2 } 2(s (n )) ( ) 2 ( ) 2 ( n n i i!i 2 i i 2 s i )( i2 s i 2 f n (, ) s i i f n (, ), f n (, ) s i 2 i 2 f n (, ) l 2 s ( 2n 2 s ), (5.8) and so by (5.8) and (5.9) we deduce B 2 = n 3 (n )! 0 2p p q f n (, ) l 2 s ( n ) 29 ( 2n 2 (2n 2 s)! )

30 s {2i, 2i 2 } 2(s (n )) ( ) 2 ( ) 2 ( n n i i!i 2! i i 2 s i )( i2 s i 2 ) ) /2 f n (, ) s i i f n (, ), f n (, ) s i 2 i 2 f n (, ) l 2 s ( 2n 2 s ) (5.9) and B 3 = 5n4 3 2n 2 (2n 2 s)! s {2i, 2i 2 } 2(s (n )) 0 ( ) 2 ( ) 2 ( n n i i!i 2! i i 2 s i )( i2 s i 2 p q f n (, ) s i i f n (, ), f n (, ) s i 2 i 2 f n (, ) l 2 s ( 2n 2 s ). (5.20) ) The claim follows from the above equalities and relations (5.7) and (5.4). 5.2 Semigroup bound For the nth-chaos J n (f n ), n 2, f n l 2 s( n ), we have D L J n (f n ) = n D J n (f n ) = J n (f n (, )) (5.2) and the constants C i of Corollary 3.5 are equal, respectively, to C := n! f n 2 l 2 s( n) + n J n (f n (, )), J n (f n (, )) l 2 (N) E[ J n (f n (, )), J n (f n (, )) l 2 (N)] L 2 (Ω), (5.22) C 2 : = B 2, where B 2 is defined by (5.8) C 3 : = B 3 n, where B 3 is defined by (5.9). In the next theorem we show that these constants can be bounded above by computable quantities. 30

31 Theorem 5.4 Let n 2 be fixed and let f n l 2 s( n ). Assume that for any N the () functions h n,n,s defined by (5.) belong to l 2 s( 2n 2 s ), 0 s 2n 2, and that C := n! f n 2 l 2 s( n) + n ( 2n 3 (2n 2 s)! s {2i, 2i 2 } 2(s (n )) ( ) 2 ( ) 2 ( n n i i!i 2! i i 2 s i f n s i + i + f n l 2 ( 2n 2 s ) f n s i 2+ i 2 + f n l 2 ( 2n 2 s )) /2, )( i2 s i 2 ) C 2 := B 2, where B 2 is defined by (5.2), and C 3 := B 3 /n where B 3 is defined by (5.3), are finite. Then for all h C 2 b we have E[h(J n (f n ))] E[h(Z)] C min{4 h, h } + h C 2 + h C 3. Proof. The claim follows from Corollary 3.5 if we show that the constant C defined by (5.22) is bounded above by the constant C defined in the statement (for the bounds C i C i, i = 2, 3, see Step 2 of the proof of Theorem 5.3). Along a similar computation as in the Step of the proof of Theorem 5.3, we have J n (f n (, )), J n (f n (, )) l 2 (N) = (n )! 0 2n 3 + f n (, ) n n f n (, ) s 2i 2(s (n )) = (n )!f n n n f n 2n 3 + s 2i 2(s (n )) ( ) 2 ( ) ( n i i! J 2n 2 s i s i 0 f n (, ) s i i f n (, ) ( ) 2 ( ) n i ( ) i! J 2n 2 s fn s i+ i+ f n, (5.23) i s i where the latter equality follows from Lemma 5.2. By a similar computation as for (5.5), we have ) J n (f n (, )) 2 l 2 (N) E[ J n (f n (, )) 2 l 2 (N) ] 2 L 2 (Ω) 2n 3 ( ) 2 ( ) 2 = E n i i! J 2n 2 s (f n s i+ i+ f n ) i s i = s 2i 2(s (n )) 2n 3 (2n 2 s)! s {2i, 2i 2 } 2(s (n )) 3 ( ) 2 ( n n i!i 2! i i 2 ) 2

32 ( i s i )( i2 s i 2 By this relation and (5.22) we deduce C = n! f n 2 l 2 s( n) + n ( 2n 3 (2n 2 s)! ) f n s i + i + f n, f n s i 2+ i 2 + f n l 2 s ( 2n 2 s ). (5.24) s {2i, 2i 2 } 2(s (n )) ( ) 2 ( ) 2 ( n n i i!i 2! i i 2 s i f n s i + i + f n, f n s i 2+ i 2 + f n l 2 s ( 2n 2 s )) /2. )( i2 s i 2 ) By this equality and (5.7) and (5.4) we finally have C C. Connection with Theorem 4. in [4] In this subsection we refine a little the bound given by Theorem 5.4 in order to strictly extend the bound provided by Theorem 4. in [4]. For the nth chaos J n (f n ), n 2, f n l 2 s( n ), we have that the constants C i of Theorem 3.4 are equal, respectively, to ] C := E [ n J n (f n (, )) 2l 2(N), (5.25) C 2 := n 2 0 2p p q E[ J n (f n (, )) 3 ], (5.26) and C 3 := B 3 n, where B 3 is defined by (5.9) In the next theorem we show that these constants can be bounded above by computable quantities. Theorem 5.5 Let n 2 be fixed and let f n l 2 s( n ). Assume that for any N the h () functions n,n,s defined by (5.) belong to l 2 s( 2n 2 s ), 0 s 2n 2, and that ( C := n! f n 2 l 2 s( n) 2 2n 3 + n 2 (2n 2 s)! s {2i, 2i 2 } 2(s (n )) ( ) 2 ( ) 2 ( n n i i!i 2! i i 2 s i f n s i + i + f n l 2 ( 2n 2 s ) f n s i 2+ i 2 + f n l 2 ( 2n 2 s )) /2, )( i2 s i 2 ) 32

33 C 2 := B 2 /n, where B 2 is defined by (5.2) and C 3 := B 3 /n, where B 3 is defined by (5.3) are finite. Then for all h C 2 b we have E[h(J n (f n ))] E[h(Z)] C min{4 h, h } + h C 2 + h C 3. Proof. The claim follows from Theorem 3.4 if we show that the constants C i, i =, 2, defined by (5.25) and (5.26) are bounded above by the constants C i, i =, 2, defined in the statement, respectively (for the bound C 3 C 3 see Step 2 of the proof of Theorem 5.3). Step : Proof of C C. By the Cauchy-Schwarz inequality, (5.23) and (5.24) we have ] /2 C E [ n J n (f n (, )) 2l 2(N) 2 ( = n! f n 2 l 2 s( n) 2 2n 3 + n 2 E ( n! f n 2 l 2 s( n) 2 s 2i 2(s (n )) 2n 3 + n 2 (2n 2 s)! ( ) 2 ( ) 2 n i i! J 2n 2 s (f n s i+ i+ f n ) i s i s {2i, 2i 2 } 2(s (n )) ( ) 2 ( ) 2 ( n n i i!i 2! i i 2 s i f n s i + i + f n, f n s i 2+ i 2 + f n l 2 s ( 2n 2 s )) /2. ) /2 )( i2 s i 2 ) The claim follows from Relations (5.7) and (5.4). Step 2: Proof of C 2 C 2. By the Cauchy-Schwarz inequality we have E[ J n (f n (, )) 3 ] (E[ J n (f n (, )) 2 ]) /2 (E[ J n (f n (, )) 4 ]) /2. By the isometry for discrete multiple stochastic integrals we have J n (f n (, )) L 2 (Ω) = (n )! f n (, ) l 2 s ( n ). By the above relations and (5.8) we have C 2 B 2 /n, where B 2 is defined by (5.9). The claim follows from (5.7) and (5.4). Since it is not obvious that the above theorem extends Theorem 4. in [4], it is worthwhile to explain this point in detail. Tae f n l 2 () s( n ), n 2, and let h n,n,s be defined by (5.). In the symmetric case p = q = /2, by the same arguments as those one after 33

34 the statement of Proposition 5. we have that, for any fixed 0 s 2(n ) and N, h () n,n,s = 0 if s/2 is not an integer and h () n,n,s = (s/2)! otherwise. If s/2 is an integer we also have ( n s/2 ) 2 f n (, ) s/2 s/2 f n(, ) f n (, ) s/2 s/2 f n(, ) l 2 s ( 2n 2 s ) f n (, ) s/2 s/2 f n(, ) l 2 ( 2n 2 s ) = f n (, ) s/2 s/2 f n(, ) l 2 ( 2n 2 s ) f n (, ) 2 l 2 s( n ) <, where the latter relation follows from Lemma 2.4() in [4]. So h () n,n,s l 2 s( 2n 2 s ). In the symmetric case, by the definition of the contraction, for 0 s 2n 3 and s 2i 2(s (n )), we have and f n s i+ i+ f n l 2 ( 2n 2 s ) = 0, if s < 2i f n s i+ i+ f n l 2 ( 2n 2 s ) = f n i+ i+ f n l 2 ( 2n 2 2i ) f n 2 l 2 s( n), if s = 2i where the latter relation follows from Lemma 2.4() in [4]. Consequently, the constant C in the statement of Theorem 5.5 is finite and reduces to 2(n 2) ( s ) ( ) 2 4 n C = ( n! f n 2l2s( n) 2 + n 2 {s/2 N}(2n 2 s)! 2! s/2 f n s/2+ s/2+ f n 2 l 2 ( 2n 2 s )) /2, ( [ n ( ) ] 2 2 /2 n = n! f n 2 l 2 s( n) 2 + n 2 (2n 2s)! (s )! f n s s f n 2 l s 2 ( 2n 2s )). s= As far as the constant C 2 in the statement of Theorem 5.5 is concerned, in the symmetric case one clearly has C 2 = 0. 34

35 Finally, consider the constant C 3 in the statement of Theorem 5.5. The following bound holds: C n 2 3 n3 (2n 2 s)! = 20n3 3 = 20n3 3 = 20n3 3 s {2i, 2i 2 } 2(s (n )) 0 2n 2 ( ) 2 ( ) 2 ( n n i i!i 2! i i 2 s i )( i2 s i 2 f n (, ) s i i f n (, ) l 2 (N) 2n 2 s f n(, ) s i 2 i 2 f n (, ) l 2 (N) 2n 2 s {s/2 N}(2n 2 s)! ( s ) ( ) 2 4 n 2! s/2 0 [ n ( ) ] 2 2 n (2n 2s)! (s )! s 0 [ n ( n (2n 2s)! (s )! s s= s= f n (, ) s/2 s/2 f n(, ) 2 l 2 (N) 2n 2 s ) 2 ] 2 f n s s ) f n (, ) s s f n (, ) 2 l 2 (N) 2n 2s f n 2 l 2 (N) 2n 2s+, where the latter equality follows from Lemma 2.4(2) (relation (2.4)) in [4] and the constant C 3 is finite again by Lemma 2.4() in in [4]. We recovered the bound provided by Theorem 4. in [4]. 5.3 Convergence to the normal distribution The next theorems follow by Theorems 5.3 and 5.4, respectively. Theorem 5.6 Let n 2 be fixed and let F m = J n (f m ), m, be a sequence of discrete multiple stochastic integrals such that f m l 2 s( n ), for any N the functions h () n,n,s () and h n,n,s defined by (5.0) and (5.) with f m in place of f n belong to l 2 s( 2n 2 s ), 0 s 2n 2, 0 f m (, ) s i i f m] ( ) l 2 ( 2n 2 s ) 0, 0 n! f m 2 l s( n), as m (5.27) as m, for any 0 s 2n 3 and s 2i 2(s (n )) (5.28) 2p p q f m (, ) l 2 s ( n ) 35

36 and 0 f m (, ) s i i f m (, ) l 2 ( 2n 2 s ) f m (, ) s i 2 i 2 f m (, ) l 2 ( 2n 2 s ) 0, as m, for any 0 s 2n 2 and s {2i, 2i 2 } 2(s (n )) (5.29) f m (, ) s i i p q f m (, ) l 2 ( 2n 2 s ) f m (, ) s i 2 i 2 f m (, ) l 2 ( 2n 2 s ) 0, as m, for any 0 s 2n 2 and s {2i, 2i 2 } 2(s (n )). (5.30) Then F m Law N(0, ). Theorem 5.7 Let n 2 be fixed and let F m = J n (f m ), m, be a sequence of discrete multiple stochastic integrals such that f m l 2 s( n ), for any N the functions defined by (5.) with f m in place of f n belong to l 2 s( 2n 2 s ), 0 s 2n 2, f m s i+ i+ f m l 2 ( 2n 2 s ) 0, h () n,n,s as m, for any 0 s 2n 3 and s 2i 2(s (n )) (5.3) and (5.27), (5.29) and (5.30) hold. Then Connection with Proposition 4.3 in [4] F m Law N(0, ). In this paragraph we explain the connection between Theorem 5.7, specialized in the symmetric case, and Proposition 4.3 in [4]. Tae f m l 2 () s( n ), m, n 2, and let h n,n,s be defined by (5.) with f m in place of f n. We already checed (after the proof of Theorem 5.5) () that, in the symmetric case, one has h n,n,s l 2 s( 2n 2 s ), N, 0 s 2n 2. Note that assumption (5.27) is explicitly required in Proposition 4.3 of [4] and, for p = q = /2, assumption (5.29) is automatically satisfied. In the symmetric case, conditions (5.30) and (5.3) read and f m (, ) i i f m (, ) 2 l 2 ( 2n 2 2i ) 0, as m, for any 0 i n (5.32) 0 f m i+ i+ f m l 2 ( 2n 2 2i ) 0, as m, for any 0 i n 2 (5.33) 36

37 respectively. We are going to chec that assumption (4.44) of Proposition 4.3 in [4], i.e. f m r r f m l 2 (N) 2n 2r 0, as m, for any r n (5.34) implies (5.32) and (5.33). Clearly, (5.34) is equivalent to (5.33). Moreover, for any 0 i n, by Lemma 2.4(2) (relation (2.4)) and Lemma 2.4(3) in [4], we have 0 f m (, ) i i f m (, ) 2 l 2 ( 2n 2 2i ) 0 f m (, ) i i f m (, ) 2 l 2 (N) 2n 2 2i f m n n f m l 2 (N) f m 2 l 2 s( n) f m n n f m l 2 (N) 2 f m 2 l 2 s( n). So combining (5.27) and condition (5.34) with r = n we have (5.32). Quadratic functionals In the next proposition we apply Theorem 5.7 with n = 2. In comparison with Proposition 4.3 of [4] we require an additional l 4 condition in the non-symmetric case. Proposition 5.8 Assume that there exists some ε > 0 such that and consider a sequence f m l 2 s( 2 ) such that a) lim m f m 2 l 2 s( 2 ) = /2, b) lim m f m f m l 2 (N) 2 = 0, c) lim m f m l 4 s ( 2 ) 0. Then J 2 (f m ) Law N(0, ) as m. Proof. 0 < ε < p < ε, N, (5.35) We need to satisfy Conditions (5.27), (5.29), (5.30) and (5.3), in addition to an integrability chec which is postponed to the end of this proof. First we note that (5.27) is Condition a) above. Next we note that under (5.35), Conditions (5.29), (5.30) and (5.3) read f m (, ) l 2 s ( ) f m (, ) 0 0 f m (, ) l 2 s ( 2 ) 0, 0 f m (, ) l 2 s ( ) f m (, ) 0 f m (, ) l 2 s ( ) 0, 0 37

38 f m (, ) l 2 s ( ) f m (, ) f m (, ) l 2 s ( 0 ) 0, 0 f m (, ) 0 0 f m (, ) 2 l 2 s( 2 ) 0, 0 f m (, ) 0 f m (, ) 2 l 2 s( ) 0, 0 f m (, ) f m (, ) 2 l 2 s( 0 ) 0, 0 and f m f m l 2 ( 2 ), f m 2 f m l 2 ( ) 0, as m. Using again (5.35), we have that the above conditions are implied by f m (, ) l 2 s ( ) f m (, ) 0 0 f m (, ) l 2 s ( 2 ) 0, (5.36) 0 f m (, ) l 2 s ( ) f m (, ) 0 f m (, ) l 2 s ( ) 0, (5.37) 0 f m (, ) l 2 s ( ) f m (, ) f m (, ) l 2 s ( 0 ) 0, (5.38) 0 f m (, ) 0 0 f m (, ) 2 l 2 s( 2 ) 0, (5.39) 0 f m (, ) 0 f m (, ) 2 l 2 s( ) 0, (5.40) 0 f m (, ) f m (, ) 2 l 2 s( 0 ) 0, (5.4) 0 and f m f m l 2 ( 2 ), f m 2 f m l 2 ( ) 0, (5.42) as m. Now by Lemma 2.4(3) of [4] we have f m 2 f m l 2 ( ) f m f m l 2 (N) 2, hence (5.42) is implied by Condition b) above. Next, by Lemma 2.4(2)-(3) in [4] we have f m (, ) 0 0 f m (, ) 2 l 2 s( 2 ) f m 2 f m l 2 (N) f m 2 l 2 (N) 2 0 f m f m l 2 (N) 2 f m 2 l 2 (N) 2, (5.43) 38


More information

1 Basic Combinatorics

1 Basic Combinatorics 1 Basic Combinatorics 1.1 Sets and sequences Sets. A set is an unordered collection of distinct objects. The objects are called elements of the set. We use braces to denote a set, for example, the set

More information

1 Directional Derivatives and Differentiability

1 Directional Derivatives and Differentiability Wednesday, January 18, 2012 1 Directional Derivatives and Differentiability Let E R N, let f : E R and let x 0 E. Given a direction v R N, let L be the line through x 0 in the direction v, that is, L :=

More information

Chapter 2 Metric Spaces

Chapter 2 Metric Spaces Chapter 2 Metric Spaces The purpose of this chapter is to present a summary of some basic properties of metric and topological spaces that play an important role in the main body of the book. 2.1 Metrics

More information

MATH 426, TOPOLOGY. p 1.

MATH 426, TOPOLOGY. p 1. MATH 426, TOPOLOGY THE p-norms In this document we assume an extended real line, where is an element greater than all real numbers; the interval notation [1, ] will be used to mean [1, ) { }. 1. THE p

More information

3 (Due ). Let A X consist of points (x, y) such that either x or y is a rational number. Is A measurable? What is its Lebesgue measure?

3 (Due ). Let A X consist of points (x, y) such that either x or y is a rational number. Is A measurable? What is its Lebesgue measure? MA 645-4A (Real Analysis), Dr. Chernov Homework assignment 1 (Due ). Show that the open disk x 2 + y 2 < 1 is a countable union of planar elementary sets. Show that the closed disk x 2 + y 2 1 is a countable

More information

16 1 Basic Facts from Functional Analysis and Banach Lattices

16 1 Basic Facts from Functional Analysis and Banach Lattices 16 1 Basic Facts from Functional Analysis and Banach Lattices 1.2.3 Banach Steinhaus Theorem Another fundamental theorem of functional analysis is the Banach Steinhaus theorem, or the Uniform Boundedness

More information

Notions such as convergent sequence and Cauchy sequence make sense for any metric space. Convergent Sequences are Cauchy

Notions such as convergent sequence and Cauchy sequence make sense for any metric space. Convergent Sequences are Cauchy Banach Spaces These notes provide an introduction to Banach spaces, which are complete normed vector spaces. For the purposes of these notes, all vector spaces are assumed to be over the real numbers.

More information

Analysis-3 lecture schemes

Analysis-3 lecture schemes Analysis-3 lecture schemes (with Homeworks) 1 Csörgő István November, 2015 1 A jegyzet az ELTE Informatikai Kar 2015. évi Jegyzetpályázatának támogatásával készült Contents 1. Lesson 1 4 1.1. The Space

More information

Stein s method on Wiener chaos

Stein s method on Wiener chaos Stein s method on Wiener chaos by Ivan Nourdin and Giovanni Peccati University of Paris VI Revised version: May 10, 2008 Abstract: We combine Malliavin calculus with Stein s method, in order to derive

More information

P-adic Functions - Part 1

P-adic Functions - Part 1 P-adic Functions - Part 1 Nicolae Ciocan 22.11.2011 1 Locally constant functions Motivation: Another big difference between p-adic analysis and real analysis is the existence of nontrivial locally constant

More information

Introduction to Topology

Introduction to Topology Introduction to Topology Randall R. Holmes Auburn University Typeset by AMS-TEX Chapter 1. Metric Spaces 1. Definition and Examples. As the course progresses we will need to review some basic notions about

More information

Lecture 22 Girsanov s Theorem

Lecture 22 Girsanov s Theorem Lecture 22: Girsanov s Theorem of 8 Course: Theory of Probability II Term: Spring 25 Instructor: Gordan Zitkovic Lecture 22 Girsanov s Theorem An example Consider a finite Gaussian random walk X n = n

More information

Gaussian Random Fields

Gaussian Random Fields Gaussian Random Fields Mini-Course by Prof. Voijkan Jaksic Vincent Larochelle, Alexandre Tomberg May 9, 009 Review Defnition.. Let, F, P ) be a probability space. Random variables {X,..., X n } are called

More information

LECTURE 1: SOURCES OF ERRORS MATHEMATICAL TOOLS A PRIORI ERROR ESTIMATES. Sergey Korotov,

LECTURE 1: SOURCES OF ERRORS MATHEMATICAL TOOLS A PRIORI ERROR ESTIMATES. Sergey Korotov, LECTURE 1: SOURCES OF ERRORS MATHEMATICAL TOOLS A PRIORI ERROR ESTIMATES Sergey Korotov, Institute of Mathematics Helsinki University of Technology, Finland Academy of Finland 1 Main Problem in Mathematical

More information

Stein s Method: Distributional Approximation and Concentration of Measure

Stein s Method: Distributional Approximation and Concentration of Measure Stein s Method: Distributional Approximation and Concentration of Measure Larry Goldstein University of Southern California 36 th Midwest Probability Colloquium, 2014 Stein s method for Distributional

More information

ON THE PROBABILITY APPROXIMATION OF SPATIAL

ON THE PROBABILITY APPROXIMATION OF SPATIAL ON THE PROBABILITY APPROIMATION OF SPATIAL POINT PROCESSES Giovanni Luca Torrisi Consiglio Nazionale delle Ricerche Giovanni Luca Torrisi (CNR) Approximation of point processes May 2015 1 / 33 POINT PROCESSES

More information

COMPLETE METRIC SPACES AND THE CONTRACTION MAPPING THEOREM

COMPLETE METRIC SPACES AND THE CONTRACTION MAPPING THEOREM COMPLETE METRIC SPACES AND THE CONTRACTION MAPPING THEOREM A metric space (M, d) is a set M with a metric d(x, y), x, y M that has the properties d(x, y) = d(y, x), x, y M d(x, y) d(x, z) + d(z, y), x,

More information

GENERALIZED COVARIATION FOR BANACH SPACE VALUED PROCESSES, ITÔ FORMULA AND APPLICATIONS

GENERALIZED COVARIATION FOR BANACH SPACE VALUED PROCESSES, ITÔ FORMULA AND APPLICATIONS Di Girolami, C. and Russo, F. Osaka J. Math. 51 (214), 729 783 GENERALIZED COVARIATION FOR BANACH SPACE VALUED PROCESSES, ITÔ FORMULA AND APPLICATIONS CRISTINA DI GIROLAMI and FRANCESCO RUSSO (Received

More information

Functional Analysis Exercise Class

Functional Analysis Exercise Class Functional Analysis Exercise Class Week 9 November 13 November Deadline to hand in the homeworks: your exercise class on week 16 November 20 November Exercises (1) Show that if T B(X, Y ) and S B(Y, Z)

More information

SEPARABILITY AND COMPLETENESS FOR THE WASSERSTEIN DISTANCE

SEPARABILITY AND COMPLETENESS FOR THE WASSERSTEIN DISTANCE SEPARABILITY AND COMPLETENESS FOR THE WASSERSTEIN DISTANCE FRANÇOIS BOLLEY Abstract. In this note we prove in an elementary way that the Wasserstein distances, which play a basic role in optimal transportation

More information

Mathematical Methods for Physics and Engineering

Mathematical Methods for Physics and Engineering Mathematical Methods for Physics and Engineering Lecture notes for PDEs Sergei V. Shabanov Department of Mathematics, University of Florida, Gainesville, FL 32611 USA CHAPTER 1 The integration theory

More information

HILBERT SPACES AND THE RADON-NIKODYM THEOREM. where the bar in the first equation denotes complex conjugation. In either case, for any x V define

HILBERT SPACES AND THE RADON-NIKODYM THEOREM. where the bar in the first equation denotes complex conjugation. In either case, for any x V define HILBERT SPACES AND THE RADON-NIKODYM THEOREM STEVEN P. LALLEY 1. DEFINITIONS Definition 1. A real inner product space is a real vector space V together with a symmetric, bilinear, positive-definite mapping,

More information

AN ELEMENTARY PROOF OF THE SPECTRAL RADIUS FORMULA FOR MATRICES

AN ELEMENTARY PROOF OF THE SPECTRAL RADIUS FORMULA FOR MATRICES AN ELEMENTARY PROOF OF THE SPECTRAL RADIUS FORMULA FOR MATRICES JOEL A. TROPP Abstract. We present an elementary proof that the spectral radius of a matrix A may be obtained using the formula ρ(a) lim

More information

Measurable Choice Functions

Measurable Choice Functions (January 19, 2013) Measurable Choice Functions Paul Garrett garrett@math.umn.edu http://www.math.umn.edu/ garrett/ [This document is http://www.math.umn.edu/ garrett/m/fun/choice functions.pdf] This note

More information

LECTURE 15: COMPLETENESS AND CONVEXITY

LECTURE 15: COMPLETENESS AND CONVEXITY LECTURE 15: COMPLETENESS AND CONVEXITY 1. The Hopf-Rinow Theorem Recall that a Riemannian manifold (M, g) is called geodesically complete if the maximal defining interval of any geodesic is R. On the other

More information

Malliavin Calculus: Analysis on Gaussian spaces

Malliavin Calculus: Analysis on Gaussian spaces Malliavin Calculus: Analysis on Gaussian spaces Josef Teichmann ETH Zürich Oxford 2011 Isonormal Gaussian process A Gaussian space is a (complete) probability space together with a Hilbert space of centered

More information

CHAPTER VIII HILBERT SPACES

CHAPTER VIII HILBERT SPACES CHAPTER VIII HILBERT SPACES DEFINITION Let X and Y be two complex vector spaces. A map T : X Y is called a conjugate-linear transformation if it is a reallinear transformation from X into Y, and if T (λx)

More information

Your first day at work MATH 806 (Fall 2015)

Your first day at work MATH 806 (Fall 2015) Your first day at work MATH 806 (Fall 2015) 1. Let X be a set (with no particular algebraic structure). A function d : X X R is called a metric on X (and then X is called a metric space) when d satisfies

More information

SPECTRAL THEOREM FOR COMPACT SELF-ADJOINT OPERATORS

SPECTRAL THEOREM FOR COMPACT SELF-ADJOINT OPERATORS SPECTRAL THEOREM FOR COMPACT SELF-ADJOINT OPERATORS G. RAMESH Contents Introduction 1 1. Bounded Operators 1 1.3. Examples 3 2. Compact Operators 5 2.1. Properties 6 3. The Spectral Theorem 9 3.3. Self-adjoint

More information

EQUIVALENCE OF TOPOLOGIES AND BOREL FIELDS FOR COUNTABLY-HILBERT SPACES

EQUIVALENCE OF TOPOLOGIES AND BOREL FIELDS FOR COUNTABLY-HILBERT SPACES EQUIVALENCE OF TOPOLOGIES AND BOREL FIELDS FOR COUNTABLY-HILBERT SPACES JEREMY J. BECNEL Abstract. We examine the main topologies wea, strong, and inductive placed on the dual of a countably-normed space

More information

C.6 Adjoints for Operators on Hilbert Spaces

C.6 Adjoints for Operators on Hilbert Spaces C.6 Adjoints for Operators on Hilbert Spaces 317 Additional Problems C.11. Let E R be measurable. Given 1 p and a measurable weight function w: E (0, ), the weighted L p space L p s (R) consists of all

More information

Exercise Solutions to Functional Analysis

Exercise Solutions to Functional Analysis Exercise Solutions to Functional Analysis Note: References refer to M. Schechter, Principles of Functional Analysis Exersize that. Let φ,..., φ n be an orthonormal set in a Hilbert space H. Show n f n

More information

Chapter 8 Integral Operators

Chapter 8 Integral Operators Chapter 8 Integral Operators In our development of metrics, norms, inner products, and operator theory in Chapters 1 7 we only tangentially considered topics that involved the use of Lebesgue measure,

More information

The Wiener Itô Chaos Expansion

The Wiener Itô Chaos Expansion 1 The Wiener Itô Chaos Expansion The celebrated Wiener Itô chaos expansion is fundamental in stochastic analysis. In particular, it plays a crucial role in the Malliavin calculus as it is presented in

More information

The Dirichlet s P rinciple. In this lecture we discuss an alternative formulation of the Dirichlet problem for the Laplace equation:

The Dirichlet s P rinciple. In this lecture we discuss an alternative formulation of the Dirichlet problem for the Laplace equation: Oct. 1 The Dirichlet s P rinciple In this lecture we discuss an alternative formulation of the Dirichlet problem for the Laplace equation: 1. Dirichlet s Principle. u = in, u = g on. ( 1 ) If we multiply

More information

Analysis Finite and Infinite Sets The Real Numbers The Cantor Set

Analysis Finite and Infinite Sets The Real Numbers The Cantor Set Analysis Finite and Infinite Sets Definition. An initial segment is {n N n n 0 }. Definition. A finite set can be put into one-to-one correspondence with an initial segment. The empty set is also considered

More information

2 (Bonus). Let A X consist of points (x, y) such that either x or y is a rational number. Is A measurable? What is its Lebesgue measure?

2 (Bonus). Let A X consist of points (x, y) such that either x or y is a rational number. Is A measurable? What is its Lebesgue measure? MA 645-4A (Real Analysis), Dr. Chernov Homework assignment 1 (Due 9/5). Prove that every countable set A is measurable and µ(a) = 0. 2 (Bonus). Let A consist of points (x, y) such that either x or y is

More information

I teach myself... Hilbert spaces

I teach myself... Hilbert spaces I teach myself... Hilbert spaces by F.J.Sayas, for MATH 806 November 4, 2015 This document will be growing with the semester. Every in red is for you to justify. Even if we start with the basic definition

More information

Hilbert Spaces. Hilbert space is a vector space with some extra structure. We start with formal (axiomatic) definition of a vector space.

Hilbert Spaces. Hilbert space is a vector space with some extra structure. We start with formal (axiomatic) definition of a vector space. Hilbert Spaces Hilbert space is a vector space with some extra structure. We start with formal (axiomatic) definition of a vector space. Vector Space. Vector space, ν, over the field of complex numbers,

More information

µ X (A) = P ( X 1 (A) )

µ X (A) = P ( X 1 (A) ) 1 STOCHASTIC PROCESSES This appendix provides a very basic introduction to the language of probability theory and stochastic processes. We assume the reader is familiar with the general measure and integration

More information

11.10a Taylor and Maclaurin Series

11.10a Taylor and Maclaurin Series 11.10a 1 11.10a Taylor and Maclaurin Series Let y = f(x) be a differentiable function at x = a. In first semester calculus we saw that (1) f(x) f(a)+f (a)(x a), for all x near a The right-hand side of

More information

Lectures 2 3 : Wigner s semicircle law

Lectures 2 3 : Wigner s semicircle law Fall 009 MATH 833 Random Matrices B. Való Lectures 3 : Wigner s semicircle law Notes prepared by: M. Koyama As we set up last wee, let M n = [X ij ] n i,j=1 be a symmetric n n matrix with Random entries

More information

Optimal Berry-Esseen bounds on the Poisson space

Optimal Berry-Esseen bounds on the Poisson space Optimal Berry-Esseen bounds on the Poisson space Ehsan Azmoodeh Unité de Recherche en Mathématiques, Luxembourg University ehsan.azmoodeh@uni.lu Giovanni Peccati Unité de Recherche en Mathématiques, Luxembourg

More information

A Concise Course on Stochastic Partial Differential Equations

A Concise Course on Stochastic Partial Differential Equations A Concise Course on Stochastic Partial Differential Equations Michael Röckner Reference: C. Prevot, M. Röckner: Springer LN in Math. 1905, Berlin (2007) And see the references therein for the original

More information

arxiv:math/ v2 [math.pr] 9 Mar 2007

arxiv:math/ v2 [math.pr] 9 Mar 2007 arxiv:math/0703240v2 [math.pr] 9 Mar 2007 Central limit theorems for multiple stochastic integrals and Malliavin calculus D. Nualart and S. Ortiz-Latorre November 2, 2018 Abstract We give a new characterization

More information

Mathematics Department Stanford University Math 61CM/DM Inner products

Mathematics Department Stanford University Math 61CM/DM Inner products Mathematics Department Stanford University Math 61CM/DM Inner products Recall the definition of an inner product space; see Appendix A.8 of the textbook. Definition 1 An inner product space V is a vector

More information

Metric spaces and metrizability

Metric spaces and metrizability 1 Motivation Metric spaces and metrizability By this point in the course, this section should not need much in the way of motivation. From the very beginning, we have talked about R n usual and how relatively

More information

Categories and Quantum Informatics: Hilbert spaces

Categories and Quantum Informatics: Hilbert spaces Categories and Quantum Informatics: Hilbert spaces Chris Heunen Spring 2018 We introduce our main example category Hilb by recalling in some detail the mathematical formalism that underlies quantum theory:

More information

Stochastic Processes II/ Wahrscheinlichkeitstheorie III. Lecture Notes

Stochastic Processes II/ Wahrscheinlichkeitstheorie III. Lecture Notes BMS Basic Course Stochastic Processes II/ Wahrscheinlichkeitstheorie III Michael Scheutzow Lecture Notes Technische Universität Berlin Sommersemester 218 preliminary version October 12th 218 Contents

More information

Functional Analysis I

Functional Analysis I Functional Analysis I Course Notes by Stefan Richter Transcribed and Annotated by Gregory Zitelli Polar Decomposition Definition. An operator W B(H) is called a partial isometry if W x = X for all x (ker

More information

Lecture Notes in Advanced Calculus 1 (80315) Raz Kupferman Institute of Mathematics The Hebrew University

Lecture Notes in Advanced Calculus 1 (80315) Raz Kupferman Institute of Mathematics The Hebrew University Lecture Notes in Advanced Calculus 1 (80315) Raz Kupferman Institute of Mathematics The Hebrew University February 7, 2007 2 Contents 1 Metric Spaces 1 1.1 Basic definitions...........................

More information

PROBLEMS. (b) (Polarization Identity) Show that in any inner product space

PROBLEMS. (b) (Polarization Identity) Show that in any inner product space 1 Professor Carl Cowen Math 54600 Fall 09 PROBLEMS 1. (Geometry in Inner Product Spaces) (a) (Parallelogram Law) Show that in any inner product space x + y 2 + x y 2 = 2( x 2 + y 2 ). (b) (Polarization

More information

Invariant measures for iterated function systems

Invariant measures for iterated function systems ANNALES POLONICI MATHEMATICI LXXV.1(2000) Invariant measures for iterated function systems by Tomasz Szarek (Katowice and Rzeszów) Abstract. A new criterion for the existence of an invariant distribution

More information

Lecture Notes on PDEs

Lecture Notes on PDEs Lecture Notes on PDEs Alberto Bressan February 26, 2012 1 Elliptic equations Let IR n be a bounded open set Given measurable functions a ij, b i, c : IR, consider the linear, second order differential

More information

Your first day at work MATH 806 (Fall 2015)

Your first day at work MATH 806 (Fall 2015) Your first day at work MATH 806 (Fall 2015) 1. Let X be a set (with no particular algebraic structure). A function d : X X R is called a metric on X (and then X is called a metric space) when d satisfies

More information

Stanford Mathematics Department Math 205A Lecture Supplement #4 Borel Regular & Radon Measures

Stanford Mathematics Department Math 205A Lecture Supplement #4 Borel Regular & Radon Measures 2 1 Borel Regular Measures We now state and prove an important regularity property of Borel regular outer measures: Stanford Mathematics Department Math 205A Lecture Supplement #4 Borel Regular & Radon

More information

3. (a) What is a simple function? What is an integrable function? How is f dµ defined? Define it first

3. (a) What is a simple function? What is an integrable function? How is f dµ defined? Define it first Math 632/6321: Theory of Functions of a Real Variable Sample Preinary Exam Questions 1. Let (, M, µ) be a measure space. (a) Prove that if µ() < and if 1 p < q

More information

CHAPTER 6. Differentiation

CHAPTER 6. Differentiation CHPTER 6 Differentiation The generalization from elementary calculus of differentiation in measure theory is less obvious than that of integration, and the methods of treating it are somewhat involved.

More information

at time t, in dimension d. The index i varies in a countable set I. We call configuration the family, denoted generically by Φ: U (x i (t) x j (t))

at time t, in dimension d. The index i varies in a countable set I. We call configuration the family, denoted generically by Φ: U (x i (t) x j (t)) Notations In this chapter we investigate infinite systems of interacting particles subject to Newtonian dynamics Each particle is characterized by its position an velocity x i t, v i t R d R d at time

More information

Some Background Material

Some Background Material Chapter 1 Some Background Material In the first chapter, we present a quick review of elementary - but important - material as a way of dipping our toes in the water. This chapter also introduces important

More information

L p Spaces and Convexity

L p Spaces and Convexity L p Spaces and Convexity These notes largely follow the treatments in Royden, Real Analysis, and Rudin, Real & Complex Analysis. 1. Convex functions Let I R be an interval. For I open, we say a function

More information

08a. Operators on Hilbert spaces. 1. Boundedness, continuity, operator norms

08a. Operators on Hilbert spaces. 1. Boundedness, continuity, operator norms (February 24, 2017) 08a. Operators on Hilbert spaces Paul Garrett garrett@math.umn.edu http://www.math.umn.edu/ garrett/ [This document is http://www.math.umn.edu/ garrett/m/real/notes 2016-17/08a-ops

More information

Problem Set 5: Solutions Math 201A: Fall 2016

Problem Set 5: Solutions Math 201A: Fall 2016 Problem Set 5: s Math 21A: Fall 216 Problem 1. Define f : [1, ) [1, ) by f(x) = x + 1/x. Show that f(x) f(y) < x y for all x, y [1, ) with x y, but f has no fixed point. Why doesn t this example contradict

More information

Kernel Method: Data Analysis with Positive Definite Kernels

Kernel Method: Data Analysis with Positive Definite Kernels Kernel Method: Data Analysis with Positive Definite Kernels 2. Positive Definite Kernel and Reproducing Kernel Hilbert Space Kenji Fukumizu The Institute of Statistical Mathematics. Graduate University

More information

Regularity and compactness for the DiPerna Lions flow

Regularity and compactness for the DiPerna Lions flow Regularity and compactness for the DiPerna Lions flow Gianluca Crippa 1 and Camillo De Lellis 2 1 Scuola Normale Superiore, Piazza dei Cavalieri 7, 56126 Pisa, Italy. g.crippa@sns.it 2 Institut für Mathematik,

More information

October 25, 2013 INNER PRODUCT SPACES

October 25, 2013 INNER PRODUCT SPACES October 25, 2013 INNER PRODUCT SPACES RODICA D. COSTIN Contents 1. Inner product 2 1.1. Inner product 2 1.2. Inner product spaces 4 2. Orthogonal bases 5 2.1. Existence of an orthogonal basis 7 2.2. Orthogonal

More information

Topological properties

Topological properties CHAPTER 4 Topological properties 1. Connectedness Definitions and examples Basic properties Connected components Connected versus path connected, again 2. Compactness Definition and first examples Topological

More information

4 Sums of Independent Random Variables

4 Sums of Independent Random Variables 4 Sums of Independent Random Variables Standing Assumptions: Assume throughout this section that (,F,P) is a fixed probability space and that X 1, X 2, X 3,... are independent real-valued random variables

More information

9 Brownian Motion: Construction

9 Brownian Motion: Construction 9 Brownian Motion: Construction 9.1 Definition and Heuristics The central limit theorem states that the standard Gaussian distribution arises as the weak limit of the rescaled partial sums S n / p n of

More information

B. Appendix B. Topological vector spaces

B. Appendix B. Topological vector spaces B.1 B. Appendix B. Topological vector spaces B.1. Fréchet spaces. In this appendix we go through the definition of Fréchet spaces and their inductive limits, such as they are used for definitions of function

More information

PROBABILITY: LIMIT THEOREMS II, SPRING HOMEWORK PROBLEMS

PROBABILITY: LIMIT THEOREMS II, SPRING HOMEWORK PROBLEMS PROBABILITY: LIMIT THEOREMS II, SPRING 218. HOMEWORK PROBLEMS PROF. YURI BAKHTIN Instructions. You are allowed to work on solutions in groups, but you are required to write up solutions on your own. Please

More information

MATH MEASURE THEORY AND FOURIER ANALYSIS. Contents

MATH MEASURE THEORY AND FOURIER ANALYSIS. Contents MATH 3969 - MEASURE THEORY AND FOURIER ANALYSIS ANDREW TULLOCH Contents 1. Measure Theory 2 1.1. Properties of Measures 3 1.2. Constructing σ-algebras and measures 3 1.3. Properties of the Lebesgue measure

More information

7 Complete metric spaces and function spaces

7 Complete metric spaces and function spaces 7 Complete metric spaces and function spaces 7.1 Completeness Let (X, d) be a metric space. Definition 7.1. A sequence (x n ) n N in X is a Cauchy sequence if for any ɛ > 0, there is N N such that n, m

More information

Stein s method and weak convergence on Wiener space

Stein s method and weak convergence on Wiener space Stein s method and weak convergence on Wiener space Giovanni PECCATI (LSTA Paris VI) January 14, 2008 Main subject: two joint papers with I. Nourdin (Paris VI) Stein s method on Wiener chaos (ArXiv, December

More information