The Stein and Chen-Stein methods for functionals of non-symmetric Bernoulli processes


The Stein and Chen-Stein methods for functionals of non-symmetric Bernoulli processes

Nicolas Privault, Giovanni Luca Torrisi

Abstract. Based on a new multiplication formula for discrete multiple stochastic integrals with respect to non-symmetric Bernoulli random walks, we extend the results of [14] on the Gaussian approximation of symmetric Rademacher sequences to the setting of possibly non-identically distributed independent Bernoulli sequences. We also provide Poisson approximation results for these sequences, by following the method of [15]. Our arguments use covariance identities obtained from the Clark-Ocone representation formula in addition to those usually based on the inverse of the Ornstein-Uhlenbeck operator.

Keywords: Bernoulli processes; Chen-Stein method; Stein method; Malliavin calculus; Clark-Ocone representation.

Mathematics Subject Classification: 60F05, 60G57, 60H07.

1 Introduction

Malliavin calculus and the Stein method were combined for the first time for Gaussian fields in the seminal paper [13], whose results have later been extended to other settings, including Poisson processes [15], [16]. In particular, the Stein method has been applied in [14] to Rademacher sequences (X_n)_{n∈N} of independent and identically distributed Bernoulli random variables with P(X_k = 1) = P(X_k = −1) = 1/2, in order to derive bounds on distances between the probability laws of functionals of (X_n)_{n∈N} and the law N(0,1) of a standard normal random variable Z. Those approaches exploit a covariance representation based on the number (or Ornstein-Uhlenbeck) operator L and its inverse L^{−1}.

Division of Mathematical Sciences, Nanyang Technological University, SPMS-MAS-05-43, 21 Nanyang Link, Singapore 637371. e-mail: nprivault@ntu.edu.sg, http://www.ntu.edu.sg/home/nprivault/
Istituto per le Applicazioni del Calcolo "Mauro Picone", CNR, Via dei Taurini 19, 00185 Roma, Italy. e-mail: torrisi@iac.rm.cnr.it, http://www.iac.rm.cnr.it/~torrisi/

From here onwards, we denote by C²_b the set of all real-valued bounded functions with bounded derivatives up to the second order. In particular, for h ∈ C²_b, using a chain rule proved in the symmetric case, the bound

  |E[h(F)] − E[h(Z)]| ≤ A_1 min{4‖h‖_∞, ‖h″‖_∞} + ‖h″‖_∞ A_2   (1.1)

has been derived in [14] (see Theorem 3.1 therein) for centered functionals F of a symmetric Bernoulli random walk (X_n)_{n∈N}. Here, (X_n)_{n∈N} is built as the sequence of canonical projections on Ω := {−1, 1}^N, and

  A_1 = E[|1 − ⟨DF, DL^{−1}F⟩_{ℓ²(N)}|],   A_2 = (20/3) E[⟨|DL^{−1}F|, |DF|³⟩_{ℓ²(N)}],

where ⟨·,·⟩_{ℓ²(N)} is the usual inner product on ℓ²(N), and D is the symmetric gradient defined as

  D_k F(ω) = (1/2)(F(ω^k_+) − F(ω^k_−)),   k ∈ N,

where, given ω = (ω_0, ω_1, ...) ∈ Ω, we let

  ω^k_+ = (ω_0, ..., ω_{k−1}, +1, ω_{k+1}, ...)   and   ω^k_− = (ω_0, ..., ω_{k−1}, −1, ω_{k+1}, ...).

The above bound (1.1) can be used to control the Wasserstein distance between N(0,1) and the law of F, as in Corollary 3.6 in [14]. In addition, the right-hand side of (1.1) yields explicit bounds in the case where F is a single discrete stochastic integral (see Corollary 3.3 in [14]) or a multiple discrete stochastic integral (see Section 4 in [14]). In this latter case the derivation of explicit bounds is based on a multiplication formula proved in the symmetric case (see Proposition 2.9 in [14]).

In this paper we provide Gaussian and Poisson approximations for functionals of not-necessarily symmetric Bernoulli sequences via the Stein and Chen-Stein methods, respectively. See [11] for recent related results on Gaussian approximation which do not rely on a multiplication formula for discrete multiple stochastic integrals. The normal and Poisson approximations are based on suitable chain rules in Propositions 2.1 and 2.2, and on an extension to the non-symmetric case of the multiplication formula

for discrete multiple stochastic integrals (see Proposition 5.1 and Section 9 for its proof). In addition to using the Ornstein-Uhlenbeck operator L for covariance representations, we also derive error bounds for the normal and Poisson approximations using covariance representations based on the Clark-Ocone formula, following the argument implemented in [20]. Indeed, the operator L is of a more delicate use in applications to functionals whose multiple stochastic integral expansion is not explicitly known. In contrast with covariance identities based on the number operator, which rely on the divergence-gradient composition, the Clark-Ocone formula only requires the computation of a gradient and a conditional expectation.

A bound for the Wasserstein distance between a standard Gaussian random variable and a (standardized) function of a finite sequence of independent random variables has been obtained in [4], via the construction of an auxiliary random variable which allows one to approximate the Stein equation. Although the results in our paper are restricted to the Bernoulli case, they may be applied to functionals of an infinite sequence of Bernoulli distributed random variables. A comparison between a bound in our paper and the one in [4] is given at the end of the first example of Section 4.

As far as the Gaussian approximation is concerned, using a covariance representation based on the Clark-Ocone formula, in Theorem 3.2 below we find sufficient conditions on centered functionals F of a not necessarily symmetric Bernoulli random walk so that

  |E[h(F)] − E[h(Z)]| ≤ B_1 min{4‖h‖_∞, ‖h″‖_∞} + ‖h′‖_∞ B_2 + ‖h″‖_∞ B_3   (1.2)

for any h ∈ C²_b, and for some positive constants B_1, B_2, B_3 > 0; similarly, using a covariance representation based on the Ornstein-Uhlenbeck operator, in Theorem 3.4 below we provide alternate sufficient conditions on centered functionals F of a not necessarily symmetric Bernoulli random walk so that the bound (1.2) holds with different positive constants C_1, C_2, C_3 > 0 in place of B_1, B_2, B_3, respectively. In Theorem 3.6 below we show that the bound (1.2) can be used to control the Fortet-Mourier distance d_FM between F and the standard normal random variable Z, i.e. we prove

  d_FM(F, Z) ≤ √(2(B_1 + B_3)(5 + E[|F|])) + B_2.

A similar bound holds, under alternate conditions on F, with the constants B_i replaced by C_i (i = 1, 2, 3). Replacing the Stein method by the Chen-Stein method, we also show that this

approach applies to the Poisson approximation in addition to the Gaussian approximation, and we treat discrete multiple stochastic integrals as examples in both cases.

This paper is organized as follows. In Section 2 we recall some elements of the stochastic analysis of Bernoulli processes, including chain rules for finite difference operators. In Section 3 we present the two different upper bounds for the quantity |E[h(F)] − E[h(Z)]|, h ∈ C²_b, described above, and the related application to the Fortet-Mourier distance. Section 4 contains explicit first chaos bounds with an application to determinantal processes, while Section 5 is concerned with bounds for the n-th chaoses. The important case of quadratic functionals (second chaoses) is treated in a separate paragraph. In Section 6 we apply our arguments to the Poisson approximation, and in Sections 7 and 8 we investigate the case of single and multiple discrete stochastic integrals. Finally, Section 9 deals with the new multiplication formula for discrete multiple stochastic integrals in the non-symmetric case, whose proof is modeled on normal martingales that are solutions of a deterministic structure equation.

2 Stochastic analysis of Bernoulli processes

In this section we provide some preliminaries. The reader is directed to [18] and the references therein for more insight into the stochastic analysis of Bernoulli processes. From now on we assume that the canonical projections X_n : Ω → {−1, 1}, Ω = {−1, 1}^N, are considered under the not necessarily symmetric measure P given on cylinder sets by

  P({ε_0, ..., ε_n} × {−1, 1}^N) = Π_{k=0}^n p_k^{(1+ε_k)/2} q_k^{(1−ε_k)/2},   ε_k ∈ {−1, 1},  k = 0, ..., n,

where p_k ∈ (0, 1) and q_k := 1 − p_k, k ∈ N. Given ω = (ω_0, ω_1, ...) ∈ Ω and ω^k_+, ω^k_− defined as above, for any F : Ω → R we consider the finite difference operator

  D_k F(ω) = √(p_k q_k) (F(ω^k_+) − F(ω^k_−)),   k ∈ N,

and, denoting by κ the counting measure on N, we consider the L²(Ω × N) = L²(Ω × N, P ⊗ κ)-valued operator D defined, for any F : Ω → R, by DF = (D_k F)_{k∈N}.

Given n ≥ 1, we denote by ℓ²(N)^{⊗n} = ℓ²(N^n) the class of functions on N^n that are square integrable with respect to κ^{⊗n}, and by ℓ²(N)^{∘n} the subspace of ℓ²(N)^{⊗n} formed by functions that are symmetric

in n variables. The L² domain of D is given by

  Dom(D) = {F ∈ L²(Ω) : DF ∈ L²(Ω × N)} = {F ∈ L²(Ω) : E[‖DF‖²_{ℓ²(N)}] < ∞}.

We let (Y_n)_{n≥0} denote the sequence of centered and normalized random variables defined by

  Y_n = (q_n − p_n + X_n)/(2√(p_n q_n)),

which satisfies the discrete structure equation

  Y_n² = 1 + ((q_n − p_n)/√(p_n q_n)) Y_n.   (2.1)

Given f_1 ∈ ℓ²(N), we define the first order discrete stochastic integral of f_1 as

  J_1(f_1) = Σ_{k≥0} f_1(k) Y_k,

and we let

  J_n(f_n) = Σ_{(i_1,...,i_n)∈Δ_n} f_n(i_1, ..., i_n) Y_{i_1} ··· Y_{i_n}

denote the discrete multiple stochastic integral of order n of f_n in the subspace ℓ²_s(Δ_n) of ℓ²(N)^{∘n} composed of symmetric kernels that vanish on diagonals, i.e. on the complement of

  Δ_n = {(k_1, ..., k_n) ∈ N^n : k_i ≠ k_j, 1 ≤ i < j ≤ n},   n ≥ 1.

As a convention we identify ℓ²(N⁰) with R and let J_0(f_0) = f_0, f_0 ∈ R. Hereafter, we shall refer to the set of functionals of the form J_n(f) as the n-th chaos. The multiple stochastic integrals satisfy the isometry formula

  E[J_n(f_n) J_m(g_m)] = 1_{{n=m}} n! ⟨f_n, g_m⟩_{ℓ²_s(Δ_n)},   f_n ∈ ℓ²_s(Δ_n),  g_m ∈ ℓ²_s(Δ_m),

cf. e.g. Proposition 1.3.2 of [19]. The finite difference operator acts on multiple stochastic integrals as follows:

  D_k J_n(f_n) = n J_{n−1}(f_n(∗, k) 1_{Δ_n}(∗, k)) = n J_{n−1}(f_n(∗, k)),   k ∈ N,  f_n ∈ ℓ²_s(Δ_n).   (2.2)

Due to the chaos representation property, any square integrable F may be represented as F = Σ_{n≥0} J_n(f_n), f_n ∈ ℓ²_s(Δ_n), and so the L² domain of D may be rewritten as

  Dom(D) = { F = Σ_{n≥0} J_n(f_n) : Σ_{n≥1} n n! ‖f_n‖²_{ℓ²(N)^{⊗n}} < ∞ }.
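As a quick numerical illustration (a standalone Python sketch, not part of the paper; the value of p below is an arbitrary choice), one can check that Y_n is indeed centered and normalized, and that the discrete structure equation above holds pathwise:

```python
import math
import random

def sample_Y(p, rng):
    # X takes values in {-1, +1} with P(X = 1) = p, and q = 1 - p
    q = 1.0 - p
    X = 1 if rng.random() < p else -1
    return (q - p + X) / (2.0 * math.sqrt(p * q))

p = 0.3
q = 1.0 - p

# Y takes the value sqrt(q/p) with probability p and -sqrt(p/q)
# with probability q, hence E[Y] = 0 and E[Y^2] = 1.
y_plus, y_minus = math.sqrt(q / p), -math.sqrt(p / q)
assert abs(p * y_plus + q * y_minus) < 1e-12               # E[Y] = 0
assert abs(p * y_plus**2 + q * y_minus**2 - 1.0) < 1e-12   # E[Y^2] = 1

# The structure equation Y^2 = 1 + ((q - p)/sqrt(p q)) Y holds for
# both possible values of Y, i.e. pathwise.
c = (q - p) / math.sqrt(p * q)
for y in (y_plus, y_minus):
    assert abs(y**2 - (1.0 + c * y)) < 1e-12

# sample_Y draws from this two-point law, e.g.:
rng = random.Random(0)
assert sample_Y(p, rng) in (y_plus, y_minus)
```

In the symmetric case p = q = 1/2 the coefficient c vanishes and the structure equation reduces to Y² = 1, i.e. the Rademacher case.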

Next we present a chain rule for the finite difference operator that extends Proposition 2.4 in [14] from the symmetric to the non-symmetric case. This chain rule will be used later on for the normal approximation. In the following we write F^k_± in place of F(ω^k_±).

Proposition 2.1 Let F ∈ Dom(D) and let f : R → R be thrice differentiable with bounded third derivative. Assume moreover that f(F) ∈ Dom(D). Then, for any integer k ≥ 0, there exists a random variable R^F_k such that

  D_k f(F) = f′(F) D_k F − (D_k F²/(4√(p_k q_k))) (f″(F^k_+) + f″(F^k_−)) X_k + R^F_k,  a.s.,   (2.3)

where

  |R^F_k| ≤ (5/3!) ‖f‴‖_∞ |D_k F|³/(p_k q_k),  a.s.   (2.4)

Proof. By a standard Taylor expansion we have

  D_k f(F) = √(p_k q_k)(f(F^k_+) − f(F^k_−))
    = √(p_k q_k)(f(F^k_+) − f(F)) − √(p_k q_k)(f(F^k_−) − f(F))
    = √(p_k q_k) f′(F)(F^k_+ − F) + (√(p_k q_k)/2) f″(F)(F^k_+ − F)² + R^k_+
      − √(p_k q_k) f′(F)(F^k_− − F) − (√(p_k q_k)/2) f″(F)(F^k_− − F)² − R^k_−
    = f′(F) D_k F + (√(p_k q_k)/2) f″(F)[(F^k_+ − F)² − (F^k_− − F)²] + R^k_+ − R^k_−,   (2.5)

where

  |R^k_±| ≤ √(p_k q_k) (‖f‴‖_∞/3!) |F^k_± − F|³.   (2.6)

By the mean value theorem we find

  f″(F) = (f″(F^k_+) + f″(F^k_−))/2 + (f″(F) − f″(F^k_+))/2 + (f″(F) − f″(F^k_−))/2
    = (f″(F^k_+) + f″(F^k_−))/2 + R̄_k,

where

  |R̄_k| ≤ (‖f‴‖_∞/2)(|F^k_+ − F| + |F^k_− − F|).

Substituting this into (2.5) we get

  D_k f(F) = f′(F) D_k F + (√(p_k q_k)/4)(f″(F^k_+) + f″(F^k_−))[(F^k_+ − F)² − (F^k_− − F)²] + R^k_+ − R^k_− + R̃_k,   (2.7)

where

  |R̃_k| ≤ (√(p_k q_k) ‖f‴‖_∞/4)(|F^k_+ − F| + |F^k_− − F|) |(F^k_+ − F)² − (F^k_− − F)²|.

Note that

  |R̃_k| ≤ (√(p_k q_k) ‖f‴‖_∞/4)(|F^k_+ − F| + |F^k_− − F|) |F^k_+ − F^k_−|².   (2.8)

Indeed,

  F^k_+ − F = (F^k_+ − F) 1_{X_k=−1} + (F^k_+ − F) 1_{X_k=1} = (F^k_+ − F) 1_{X_k=−1} = (F^k_+ − F^k_−) 1_{X_k=−1},   (2.9)

and similarly,

  F^k_− − F = −(F^k_+ − F^k_−) 1_{X_k=1}.   (2.10)

Therefore we have |F^k_± − F| ≤ |D_k F|/√(p_k q_k), and combining this with (2.6) and (2.8) we find

  |R^k_±| ≤ ‖f‴‖_∞ |D_k F|³/(3! p_k q_k),   |R̃_k| ≤ ‖f‴‖_∞ |D_k F|³/(2 p_k q_k).   (2.11)

By (2.9) and (2.10) we also have

  (F^k_+ − F)² = (F^k_+ − F^k_−)² 1_{X_k=−1}  and  (F^k_− − F)² = (F^k_+ − F^k_−)² 1_{X_k=1},

therefore

  (F^k_+ − F)² − (F^k_− − F)² = (F^k_+ − F^k_−)²(1_{X_k=−1} − 1_{X_k=1}) = −(F^k_+ − F^k_−)² X_k = −(D_k F²/(p_k q_k)) X_k.

The claim follows substituting this expression into (2.7) and using (2.11) to estimate the remainder.  □

Now we present a chain rule for the finite difference operator which is suitable for integer-valued functionals. This chain rule will be used later on for the Poisson approximation. Given a function f : N → R, we define the operators

  Δf(k) := f(k+1) − f(k),   Δ²f := Δ(Δf).

Proposition 2.2 Let F ∈ Dom(D) be an N-valued random variable. Then, for any f : N → R such that f(F) ∈ Dom(D), we have

  D_k f(F) = Δf(F) D_k F + R^F_k,   (2.12)

where

  |R^F_k| ≤ (‖Δ²f‖_∞/2) ( 1_{X_k=−1} (D_k F/√(p_k q_k))((D_k F/√(p_k q_k)) − 1) + 1_{X_k=1} (D_k F/√(p_k q_k))((D_k F/√(p_k q_k)) + 1) ).   (2.13)

Proof. As shown in the proof of Theorem 3.1 in [15], for any f : N → R and any ℓ, a ∈ N,

  |f(ℓ) − f(a) − Δf(a)(ℓ − a)| ≤ (‖Δ²f‖_∞/2) |(ℓ − a)(ℓ − a − 1)|.   (2.14)

Therefore, taking first ℓ = F^k_+, a = F and then ℓ = F^k_−, a = F, we deduce

  D_k f(F) = √(p_k q_k)(f(F^k_+) − f(F)) − √(p_k q_k)(f(F^k_−) − f(F))
    = √(p_k q_k) Δf(F)(F^k_+ − F) + R_k(1) − √(p_k q_k) Δf(F)(F^k_− − F) − R_k(2)
    = Δf(F) D_k F + R^F_k,

where, by (2.14), setting R^F_k := R_k(1) − R_k(2), we have

  |R^F_k| ≤ √(p_k q_k) (‖Δ²f‖_∞/2) ( |(F^k_+ − F)(F^k_+ − F − 1)| + |(F^k_− − F)(F^k_− − F − 1)| ).

The claim follows from (2.9) and (2.10), together with the bound √(p_k q_k) ≤ 1.  □

Next we give two alternative covariance representation formulas. Let (F_n)_{n≥−1} be the filtration defined by

  F_{−1} = {∅, Ω},   F_n = σ{X_0, ..., X_n},  n ≥ 0.

By Proposition 1.10.1 of [19], for any F, G ∈ Dom(D) with F centered we have the Clark-Ocone covariance representation formula

  Cov(F, G) = E[FG] = E[ Σ_{k≥0} E[D_k F | F_{k−1}] D_k G ].   (2.15)

The second covariance representation formula involves the inverse of the Ornstein-Uhlenbeck operator. The domain Dom(L) of the Ornstein-Uhlenbeck operator L : L²(Ω) → L²_0(Ω), where L²_0(Ω) denotes the subspace of L²(Ω) composed of centered random variables, is given by

  Dom(L) = { F = Σ_{n≥0} J_n(f_n) : Σ_{n≥1} n² n! ‖f_n‖²_{ℓ²(N)^{⊗n}} < ∞ }

and, for any F ∈ Dom(L),

  LF = Σ_{n≥1} n J_n(f_n).

The inverse of L, denoted by L^{−1}, is defined on L²_0(Ω) by

  L^{−1}F = Σ_{n≥1} (1/n) J_n(f_n),

with the convention L^{−1}F = L^{−1}(F − E[F]) in case F is not centered, as in e.g. [15]. Using this convention, for any F, G ∈ Dom(D) we have

  Cov(F, G) = E[G(F − E[F])] = E[⟨DG, DL^{−1}F⟩_{ℓ²(N)}],   (2.16)

cf. Lemma 2.2 of [14] in the symmetric case. For the sake of completeness, we provide an alternative expression for the covariance representation formula (2.16). Let (P_t)_{t≥0} be the semigroup associated to the Ornstein-Uhlenbeck operator L (we refer the reader to Section 10 of [18] for the details). Then P_t J_n(f_n) = e^{−nt} J_n(f_n), n ≥ 1, and so for any F = Σ_{n≥0} J_n(f_n) ∈ Dom(D) one has

  ∫_0^∞ e^{−t} P_t D_k F dt = Σ_{n≥1} n ∫_0^∞ e^{−t} P_t J_{n−1}(f_n(∗, k)) dt
    = Σ_{n≥1} n ∫_0^∞ e^{−t} e^{−(n−1)t} J_{n−1}(f_n(∗, k)) dt
    = Σ_{n≥1} J_{n−1}(f_n(∗, k))
    = D_k L^{−1}F.

Consequently, the covariance representation (2.16) may be rewritten as

  Cov(F, G) = E[G(F − E[F])] = E[ Σ_{k=0}^∞ ∫_0^∞ e^{−t} D_k G P_t D_k F dt ],

for any F, G ∈ Dom(D), cf. Proposition 1.10.2 of [19].

3 Normal approximation of Bernoulli functionals

In this section we present two different upper bounds for the quantity |E[h(F)] − E[h(Z)]|, h ∈ C²_b. The first one is obtained by using the covariance representation formula (2.15), while the second one, obtained by using the covariance representation formula (2.16), is a strict extension of the bound given in Theorem 3.1 of [14]. Before proceeding further we recall some necessary background on the Stein method for the normal approximation, and refer to [1], [10], [24], [25] and to [14] for more insight into this technique.
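Before turning to the Stein method, the Clark-Ocone covariance representation above can be sanity-checked by exhaustive enumeration on a toy example with two Bernoulli coordinates. The following is a standalone Python sketch, not part of the paper; the tables F and G are arbitrary illustrative functionals:

```python
import math
from itertools import product

p = [0.3, 0.6]                      # P(X_k = 1); illustrative values
q = [1 - x for x in p]

def prob(w):
    # probability of the outcome w = (w0, w1) in {-1, 1}^2
    return math.prod(p[k] if w[k] == 1 else q[k] for k in range(2))

# Two arbitrary functionals F, G given by their tables on {-1, 1}^2
F = {(-1, -1): 0.2, (-1, 1): 1.5, (1, -1): -0.7, (1, 1): 2.0}
G = {(-1, -1): 1.0, (-1, 1): -2.0, (1, -1): 0.5, (1, 1): 0.3}

def D(k, H, w):
    # finite difference operator D_k H = sqrt(p_k q_k)(H(w^k_+) - H(w^k_-))
    wp = tuple(1 if j == k else w[j] for j in range(2))
    wm = tuple(-1 if j == k else w[j] for j in range(2))
    return math.sqrt(p[k] * q[k]) * (H[wp] - H[wm])

def E(fun):
    return sum(prob(w) * fun(w) for w in product((-1, 1), repeat=2))

def cond_DF(k, H, w):
    # E[D_k H | F_{k-1}]: average D_k H over outcomes agreeing with w
    # on the first k coordinates (for k = 0 this is the plain mean)
    num = den = 0.0
    for v in product((-1, 1), repeat=2):
        if v[:k] == w[:k]:
            num += prob(v) * D(k, H, v)
            den += prob(v)
    return num / den

EF = E(lambda w: F[w])
cov = E(lambda w: (F[w] - EF) * G[w])
clark_ocone = E(lambda w: sum(cond_DF(k, F, w) * D(k, G, w) for k in range(2)))
assert abs(cov - clark_ocone) < 1e-12
```

The predictability of the integrand E[D_k F | F_{k−1}] is what makes the identity hold; replacing it by D_k F itself would not give the covariance in general.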

Stein's method for normal approximation

Let Z be a standard N(0,1) normal random variable and consider the so-called Stein equation associated with h : R → R:

  h(x) − E[h(Z)] = f′(x) − x f(x),   x ∈ R.

We refer to part (ii) of Lemma 2.5 in [14] for the following lemma. More precisely, the estimates on the first and second derivatives are proved in Lemma II.3 of [25], the estimate on the third derivative is proved in Theorem 1.1 of [6], and the alternative estimate on the first derivative may be found in [1] and [10].

Lemma 3.1 If h ∈ C²_b, then the Stein equation has a solution f_h which is thrice differentiable and such that ‖f′_h‖_∞ ≤ 4‖h‖_∞, ‖f″_h‖_∞ ≤ 2‖h′‖_∞ and ‖f‴_h‖_∞ ≤ 2‖h″‖_∞. We also have ‖f′_h‖_∞ ≤ ‖h″‖_∞.

Combining the Stein equation with this lemma, for a generic square integrable and centered random variable F we have

  E[h(F)] − E[h(Z)] = E[f′_h(F) − F f_h(F)].   (3.1)

Let (F_n)_{n≥1} be a sequence of square integrable and centered random variables. If

  E[h(F_n)] − E[h(Z)] → 0,   h ∈ C²_b,

then (F_n)_{n≥1} converges to Z in distribution as n tends to infinity, and so an upper bound for the right-hand side of (3.1) may provide information about this normal approximation.

The results of Sections 3.1 and 3.2 below are given in terms of bounds on |E[h(F)] − E[h(Z)]| for test functions in C²_b, and they are applied in Section 3.3 to derive bounds for the Fortet-Mourier distance between the laws of two random variables X and Y, which is defined by

  d_FM(X, Y) = sup_{h∈FM} |E[h(X)] − E[h(Y)]|,   (3.2)

where FM is the class of functions h such that ‖h‖_BL ≤ 1, with ‖h‖_BL = ‖h‖_L + ‖h‖_∞ and ‖·‖_L the standard Lipschitz semi-norm. Clearly, any h ∈ FM is Lipschitz with Lipschitz constant less than or equal to 1, and so it is Lebesgue-a.e. differentiable with ‖h′‖_∞ ≤ 1. One can also show that d_FM metrizes convergence in distribution, see e.g. Chapter 11 in [7].
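The Stein equation above admits the standard explicit solution f_h(x) = e^{x²/2} ∫_{−∞}^x (h(t) − E[h(Z)]) e^{−t²/2} dt (this formula is classical, not taken from the excerpt above). The following standalone Python sketch checks the equation numerically with crude midpoint quadrature; the test function h is an arbitrary bounded choice:

```python
import math

def h(x):                      # a smooth bounded test function
    return math.tanh(x)

def midpoint_quad(f, a, b, n=20000):
    # composite midpoint rule on [a, b]
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) * dx for i in range(n))

phi = lambda t: math.exp(-t * t / 2) / math.sqrt(2 * math.pi)
Eh = midpoint_quad(lambda t: h(t) * phi(t), -10.0, 10.0)

def f(x):
    # standard solution of f'(x) - x f(x) = h(x) - E[h(Z)]
    integral = midpoint_quad(lambda t: (h(t) - Eh) * math.exp(-t * t / 2),
                             -10.0, x)
    return math.exp(x * x / 2) * integral

# Check the Stein equation at a few points via a central finite difference
for x in (-1.0, 0.0, 0.7):
    eps = 1e-3
    fprime = (f(x + eps) - f(x - eps)) / (2 * eps)
    assert abs((fprime - x * f(x)) - (h(x) - Eh)) < 1e-3
```

The derivative bounds of Lemma 3.1 are what turn the identity (3.1) into quantitative approximation bounds.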

3.1 Clark-Ocone bound

Theorem 3.2 Let F ∈ Dom(D) be a centered random variable and assume that

  B_1 := E[|1 − Σ_{k≥0} E[D_k F | F_{k−1}] D_k F|],
  B_2 := Σ_{k≥0} (|1 − 2p_k|/√(p_k q_k)) E[|E[D_k F | F_{k−1}]| |D_k F|²],   (3.3)
  B_3 := (5/3) Σ_{k≥0} (1/(p_k q_k)) E[|E[D_k F | F_{k−1}]| |D_k F|³]   (3.4)

are finite. Then we have

  |E[h(F)] − E[h(Z)]| ≤ B_1 min{4‖h‖_∞, ‖h″‖_∞} + ‖h′‖_∞ B_2 + ‖h″‖_∞ B_3   (3.5)

for all h ∈ C²_b.

Proof. Since the first derivative of f_h is bounded, f_h is Lipschitz, so f_h(F) ∈ L²(Ω) and

  |D_k f_h(F)| = √(p_k q_k)|f_h(F^k_+) − f_h(F^k_−)| ≤ ‖f′_h‖_∞ |D_k F|.

Consequently we have E[‖Df_h(F)‖²_{ℓ²(N)}] ≤ ‖f′_h‖²_∞ E[‖DF‖²_{ℓ²(N)}] and f_h(F) ∈ Dom(D). Since F is centered, by the covariance representation (2.15) and the chain rule of Proposition 2.1 we have

  E[F f_h(F)] = E[ Σ_{k≥0} E[D_k F | F_{k−1}] D_k f_h(F) ]
    = E[ Σ_{k≥0} E[D_k F | F_{k−1}] D_k F f′_h(F) ]
      − E[ Σ_{k≥0} E[D_k F | F_{k−1}] (D_k F²/(4√(p_k q_k))) (f″_h(F^k_+) + f″_h(F^k_−)) X_k ]
      + E[ Σ_{k≥0} E[D_k F | F_{k−1}] R^F_k(h) ].   (3.6)

Note that the three expectations in (3.6) are finite. The first one is finite since DF ∈ L²(Ω × N) and f′_h is bounded; indeed, by Jensen's inequality,

  E[ Σ_{k≥0} |E[D_k F | F_{k−1}]| |D_k F| |f′_h(F)| ] ≤ 4‖h‖_∞ E[ Σ_{k≥0} |E[D_k F | F_{k−1}]| |D_k F| ]

    = 4‖h‖_∞ E[ Σ_{k≥0} |E[D_k F | F_{k−1}]| E[|D_k F| | F_{k−1}] ]
    ≤ 4‖h‖_∞ E[ Σ_{k≥0} (E[|D_k F| | F_{k−1}])² ]
    ≤ 4‖h‖_∞ E[ Σ_{k≥0} E[|D_k F|² | F_{k−1}] ]
    = 4‖h‖_∞ Σ_{k≥0} E[|D_k F|²] < ∞;

the second relation follows from the boundedness of f″_h and (3.3), while the third one follows from (2.4) and (3.4). The random variables E[D_k F | F_{k−1}], D_k F, F^k_± are independent of X_k (the first one because it is F_{k−1}-measurable and the random variables (X_k)_{k∈N} are independent, the others by their definition). Therefore, the equality (3.6) reduces to

  E[F f_h(F)] = E[ f′_h(F) Σ_{k≥0} E[D_k F | F_{k−1}] D_k F ]   (3.7)
    + Σ_{k≥0} ((1 − 2p_k)/(4√(p_k q_k))) E[ E[D_k F | F_{k−1}] D_k F² (f″_h(F^k_+) + f″_h(F^k_−)) ]
    + E[ Σ_{k≥0} E[D_k F | F_{k−1}] R^F_k(h) ].   (3.8)

Inserting this expression into the right-hand side of (3.1) we deduce

  |E[h(F)] − E[h(Z)]| ≤ B_1 min{4‖h‖_∞, ‖h″‖_∞} + ‖h′‖_∞ B_2   (3.9)
    + E[ Σ_{k≥0} |E[D_k F | F_{k−1}]| |R^F_k(h)| ],   (3.10)

where to get the term (3.9) we used the inequalities ‖f′_h‖_∞ ≤ min{4‖h‖_∞, ‖h″‖_∞} and ‖f″_h‖_∞ ≤ 2‖h′‖_∞ (see Lemma 3.1). Using (2.4) one may easily see that the term in (3.10) is bounded above by ‖h″‖_∞ B_3. The proof is complete.  □

Corollary 3.3 Let F ∈ Dom(D) be a centered random variable and assume that

  B_1 := |1 − ‖F‖²_{L²(Ω)}| + ‖ Σ_{k≥0} D_k F E[D_k F | F_{k−1}] − E[Σ_{k≥0} D_k F E[D_k F | F_{k−1}]] ‖_{L²(Ω)},
  B_2 := Σ_{k≥0} (|1 − 2p_k|/√(p_k q_k)) ‖D_k F‖_{L²(Ω)} √(E[|D_k F|⁴]),   (3.11)

  B_3 := (5/3) Σ_{k≥0} (1/(p_k q_k)) E[|D_k F|⁴]

are finite. Then (3.5) holds for all h ∈ C²_b.

Proof. By the Cauchy-Schwarz and triangular inequalities we have

  E[|1 − Σ_{k≥0} E[D_k F | F_{k−1}] D_k F|] ≤ ‖1 − Σ_{k≥0} E[D_k F | F_{k−1}] D_k F‖_{L²(Ω)}
    ≤ |1 − ‖F‖²_{L²(Ω)}| + ‖ Σ_{k≥0} D_k F E[D_k F | F_{k−1}] − E[Σ_{k≥0} D_k F E[D_k F | F_{k−1}]] ‖_{L²(Ω)}.

By the Clark-Ocone formula (2.15) we have

  E[ Σ_{k≥0} D_k F E[D_k F | F_{k−1}] ] = ‖F‖²_{L²(Ω)}.

Therefore

  E[|1 − Σ_{k≥0} E[D_k F | F_{k−1}] D_k F|] ≤ B_1.

By the Cauchy-Schwarz and Jensen inequalities we have

  E[|E[D_k F | F_{k−1}]| |D_k F|²] ≤ √(E[E[D_k F | F_{k−1}]²]) √(E[|D_k F|⁴])
    ≤ √(E[E[|D_k F|² | F_{k−1}]]) √(E[|D_k F|⁴]) = ‖D_k F‖_{L²(Ω)} √(E[|D_k F|⁴])

and

  E[|E[D_k F | F_{k−1}]| |D_k F|³] ≤ √(E[E[D_k F | F_{k−1}]² |D_k F|²]) √(E[|D_k F|⁴])
    ≤ √(E[E[|D_k F|² | F_{k−1}] |D_k F|²]) √(E[|D_k F|⁴]) ≤ √(E[|D_k F|⁴]) √(E[|D_k F|⁴]) = E[|D_k F|⁴].

The claim follows from Theorem 3.2.  □

3.2 Semigroup bound

Theorem 3.4 Let F ∈ Dom(D) be a centered random variable and let

  C_1 := E[|1 − ⟨DF, DL^{−1}F⟩_{ℓ²(N)}|],

  C_2 := Σ_{k≥0} (|1 − 2p_k|/√(p_k q_k)) E[|D_k L^{−1}F| |D_k F|²],   (3.12)
  C_3 := (5/3) Σ_{k≥0} (1/(p_k q_k)) E[|D_k L^{−1}F| |D_k F|³]   (3.13)

be finite. Then for all h ∈ C²_b we have

  |E[h(F)] − E[h(Z)]| ≤ C_1 min{4‖h‖_∞, ‖h″‖_∞} + ‖h′‖_∞ C_2 + ‖h″‖_∞ C_3.   (3.14)

Proof. Although the proof is similar to that of Theorem 3.2, we give the details since some points need a different justification. As in the proof of Theorem 3.2 one has f_h(F) ∈ Dom(D). Since F is centered, by the covariance representation (2.16) and the chain rule of Proposition 2.1 we have

  E[F f_h(F)] = E[ Σ_{k≥0} D_k f_h(F) D_k L^{−1}F ]
    = E[ Σ_{k≥0} D_k F f′_h(F) D_k L^{−1}F ]
      − E[ Σ_{k≥0} (D_k F²/(4√(p_k q_k))) X_k (f″_h(F^k_+) + f″_h(F^k_−)) D_k L^{−1}F ]
      + E[ Σ_{k≥0} D_k L^{−1}F R^F_k(h) ].   (3.15)

Note that the three expectations in (3.15) are finite. The first one is finite since DF ∈ L²(Ω × N) and f′_h is bounded; indeed

  E[ Σ_{k≥0} |D_k L^{−1}F| |D_k F| |f′_h(F)| ] ≤ 4‖h‖_∞ E[ Σ_{k≥0} |D_k L^{−1}F| |D_k F| ]
    ≤ 4‖h‖_∞ ( E[ Σ_{k≥0} |D_k L^{−1}F|² ] )^{1/2} ( E[ Σ_{k≥0} |D_k F|² ] )^{1/2}
    = 4‖h‖_∞ ( E[‖DL^{−1}F‖²_{ℓ²(N)}] )^{1/2} ( E[‖DF‖²_{ℓ²(N)}] )^{1/2}
    ≤ 4‖h‖_∞ E[‖DF‖²_{ℓ²(N)}] < ∞,

where for the latter inequality we used the relation E[‖DL^{−1}F‖²_{ℓ²(N)}] ≤ E[‖DF‖²_{ℓ²(N)}] (see Lemma 2.3(3) in [14]); the second one is finite due to the boundedness of f″_h and (3.12); the third one due to (2.4) and (3.13). By Lemma 2.3 in [14] we have that the random variables

D_k L^{−1}F, D_k F and F^k_± are independent of X_k. Therefore, the equality (3.15) reduces to

  E[F f_h(F)] = E[ f′_h(F) Σ_{k≥0} D_k F D_k L^{−1}F ]
    + Σ_{k≥0} ((1 − 2p_k)/(4√(p_k q_k))) E[ D_k F² (f″_h(F^k_+) + f″_h(F^k_−)) D_k L^{−1}F ]
    + E[ Σ_{k≥0} R^F_k(h) D_k L^{−1}F ].   (3.16)

Inserting this expression into the right-hand side of (3.1) we deduce

  |E[h(F)] − E[h(Z)]| ≤ C_1 min{4‖h‖_∞, ‖h″‖_∞} + ‖h′‖_∞ C_2   (3.17)
    + E[ Σ_{k≥0} |D_k L^{−1}F| |R^F_k(h)| ],   (3.18)

where to get the term (3.17) we used the inequalities ‖f′_h‖_∞ ≤ min{4‖h‖_∞, ‖h″‖_∞} and ‖f″_h‖_∞ ≤ 2‖h′‖_∞ (see Lemma 3.1). Using (2.4) one may easily see that the term in (3.18) is bounded above by ‖h″‖_∞ C_3. The proof is complete.  □

Note that, formally, the upper bound (3.5) may be obtained from (3.14) by substituting the term D_k L^{−1}F in the definitions of C_1, C_2, C_3 with E[D_k F | F_{k−1}], and vice versa.

Corollary 3.5 Let F ∈ Dom(D) be a centered random variable and let

  C_1 := |1 − ‖F‖²_{L²(Ω)}| + ‖ ⟨DF, DL^{−1}F⟩_{ℓ²(N)} − E[⟨DF, DL^{−1}F⟩_{ℓ²(N)}] ‖_{L²(Ω)},
  C_2 := B_2, where B_2 is defined by (3.11), and C_3 defined by (3.13)

be finite. Then (3.14) holds for all h ∈ C²_b.

Proof. By the Cauchy-Schwarz and triangular inequalities we have

  E[|1 − ⟨DF, DL^{−1}F⟩_{ℓ²(N)}|] ≤ ‖1 − ⟨DF, DL^{−1}F⟩_{ℓ²(N)}‖_{L²(Ω)}
    ≤ |1 − ‖F‖²_{L²(Ω)}| + ‖ ⟨DF, DL^{−1}F⟩_{ℓ²(N)} − E[⟨DF, DL^{−1}F⟩_{ℓ²(N)}] ‖_{L²(Ω)}.

By the covariance representation formula (2.16) we have E[⟨DF, DL^{−1}F⟩_{ℓ²(N)}] = ‖F‖²_{L²(Ω)}. Therefore

  E[|1 − ⟨DF, DL^{−1}F⟩_{ℓ²(N)}|] ≤ C_1.

Let F ∈ Dom(D) be of the form F = Σ_{n≥0} J_n(f_n), f_n ∈ ℓ²_s(Δ_n). Then

  D_k L^{−1}F = Σ_{n≥1} J_{n−1}(f_n(∗, k))  and  D_k F = Σ_{n≥1} n J_{n−1}(f_n(∗, k)).

So, by the isometry formula, we have

  E[|D_k L^{−1}F|²] = E[| Σ_{n≥1} J_{n−1}(f_n(∗, k)) |²] = Σ_{n≥1} E[|J_{n−1}(f_n(∗, k))|²] = Σ_{n≥1} (n−1)! ‖f_n(∗, k)‖²_{ℓ²(N)^{⊗(n−1)}}

and

  E[|D_k F|²] = E[| Σ_{n≥1} n J_{n−1}(f_n(∗, k)) |²] = Σ_{n≥1} n² E[|J_{n−1}(f_n(∗, k))|²] = Σ_{n≥1} n² (n−1)! ‖f_n(∗, k)‖²_{ℓ²(N)^{⊗(n−1)}}.

So

  E[|D_k L^{−1}F|²] ≤ E[|D_k F|²]   (3.19)

and, by the Cauchy-Schwarz inequality, we deduce

  E[|D_k L^{−1}F| |D_k F|²] ≤ √(E[|D_k L^{−1}F|²]) √(E[|D_k F|⁴]) ≤ √(E[|D_k F|²]) √(E[|D_k F|⁴]).

The claim follows from Theorem 3.4.  □

3.3 Fortet-Mourier distance

In this section we provide bounds in the Fortet-Mourier distance (3.2).

Theorem 3.6 Let F ∈ Dom(D) be centered. We have:

(i) If (3.5) holds for any h ∈ C²_b and B_1 + B_3 ≤ (5 + E[|F|])/4, then

  d_FM(F, Z) ≤ √(2(B_1 + B_3)(5 + E[|F|])) + B_2.   (3.20)

(ii) If (3.14) holds for any h ∈ C²_b and C_1 + C_3 ≤ (5 + E[|F|])/4, then

  d_FM(F, Z) ≤ √(2(C_1 + C_3)(5 + E[|F|])) + C_2.   (3.21)

Proof. We only give the details for the proof of (3.20); the inequality (3.21) can be proved similarly. Take h ∈ FM and define

  h_t(x) = ∫_R h(√(1−t) y + √t x) φ(y) dy,   t ∈ [0, 1],

where φ is the density of the standard N(0,1) normal random variable Z. As in the proof of Corollary 3.6 in [14], for 0 < t ≤ 1/2 one has h_t ∈ C²_b and the bounds

  ‖h″_t‖_∞ ≤ 1/√t,   (3.22)

  |E[h(F)] − E[h_t(F)]| ≤ (√t/2)(2 + E[|F|]),   |E[h(Z)] − E[h_t(Z)]| ≤ (3/2)√t.

So

  |E[h(F)] − E[h(Z)]| ≤ |E[h(F)] − E[h_t(F)]| + |E[h_t(F)] − E[h_t(Z)]| + |E[h_t(Z)] − E[h(Z)]|
    ≤ (√t/2)(2 + E[|F|]) + B_1 min{4‖h_t‖_∞, ‖h″_t‖_∞} + ‖h′_t‖_∞ B_2 + ‖h″_t‖_∞ B_3 + (3/2)√t
    ≤ √t (5 + E[|F|])/2 + (B_1 + B_3)/√t + B_2,   (3.23)

where in the latter inequality we used (3.22) and the fact that ‖h′_t‖_∞ ≤ 1 for all t. Minimizing over t ∈ (0, 1/2] the term in (3.23), the optimum is attained at t = 2(B_1 + B_3)/(5 + E[|F|]) ∈ (0, 1/2]. The conclusion follows substituting this value of t in (3.23) and then taking the supremum over all h ∈ FM.  □

4 First chaos bound for the normal approximation

In this section we specialize the results of Section 3 to first order discrete stochastic integrals. As we shall see, the bounds (3.5) and (3.14) (and the corresponding assumptions) coincide on the first chaos, although they differ on the n-th chaoses, n ≥ 2.

Corollary 4.1 Assume that α = (α_k)_{k≥0} is in ℓ²(N), and that

  Σ_{k≥0} (|1 − 2p_k|/√(p_k q_k)) |α_k|³ < ∞  and  Σ_{k≥0} (1/(p_k q_k)) α_k⁴ < ∞.   (4.1)

Then for the first chaos F = J_1(α) = Σ_{k≥0} α_k Y_k the bound (3.5) (which in this case coincides with the bound (3.14)) holds with

  B_1 = C_1 = |1 − Σ_{k≥0} α_k²|,   B_2 = C_2 = Σ_{k≥0} (|1 − 2p_k|/√(p_k q_k)) |α_k|³,
  B_3 = C_3 = (5/3) Σ_{k≥0} (1/(p_k q_k)) α_k⁴.

Proof. Since α ∈ ℓ²(N) we have that F ∈ L²(Ω). Moreover F is centered and, since

  D_k F = α_k √(p_k q_k) ( (q_k − p_k + 1)/(2√(p_k q_k)) − (q_k − p_k − 1)/(2√(p_k q_k)) ) = α_k,

we have F ∈ Dom(D). The finiteness of the corresponding quantities B_1, B_2 and B_3 is guaranteed by α ∈ ℓ²(N) and (4.1). The claim follows from e.g. Theorem 3.2.  □

Example Consider the sequence of functionals (F_n)_{n≥1} defined by

  F_n = (1/√n) Σ_{k=0}^{n−1} Y_k.

Setting α_k = 1/√n, k = 0, ..., n−1, and α_k = 0, k ≥ n, we have B_1 = 0 and

  B_2 = (1/n^{3/2}) Σ_{k=0}^{n−1} |1 − 2p_k|/√(p_k q_k)  and  B_3 = (5/(3n²)) Σ_{k=0}^{n−1} 1/(p_k q_k).

In the symmetric case p_k = q_k = 1/2 we find B_2 = 0 and the bound is of order 1/n, implying a faster rate than in the classical Berry-Esséen estimate (however, here we are using C²_b test functions; cf. the comment after Corollary 3.3 in [14]).

In the non-symmetric case p_k = p and q_k = q := 1 − p, p ≠ q, the bound is of order n^{−1/2}, as in the classical Berry-Esséen estimate. Indeed we have

  B_2 = B_2^{(n)} = |1 − 2p|/(√(p(1−p)) √n)  and  B_3 = B_3^{(n)} = 5/(3 p(1−p) n),

hence the inequality B_1 + B_3 ≤ (5 + E[|F_n|])/4 of Theorem 3.6 reads

  5/(3 p(1−p) n) ≤ (1/4) ( 5 + E[ | (1/√n) Σ_{k=0}^{n−1} (1 − 2p + X_k)/(2√(p(1−p))) | ] ),

which holds if e.g. n ≥ 4/(3 p(1−p)). Consequently, by (3.20) it follows that for any n ≥ 4/(3 p(1−p)) we have

  d_FM(F_n, Z) ≤ √( 2 B_3^{(n)} (5 + E[|F_n|]) ) + B_2^{(n)}
    = √( 50/(3 p(1−p) n) + (10/(3 p(1−p) n)) E[ | (1/√n) Σ_{k=0}^{n−1} (1 − 2p + X_k)/(2√(p(1−p))) | ] ) + B_2^{(n)}.

A straightforward computation gives

  E[ | (1/√n) Σ_{k=0}^{n−1} (1 − 2p + X_k)/(2√(p(1−p))) |² ] = 1,

hence

  d_FM(F_n, Z) ≤ K_1(p)/√n,   (4.2)

where

  K_1(p) := (2√5 + |1 − 2p|)/√(p(1−p)).

In the general case, if

  a_n := (1/n^{3/2}) Σ_{k=0}^{n−1} |1 − 2p_k|/√(p_k q_k) → 0  and  b_n := (1/n²) Σ_{k=0}^{n−1} 1/(p_k q_k) → 0,  as n → ∞,   (4.3)

then F_n → Z in distribution, and the rate depends on the rate of convergence to zero of the sequences (a_n)_n and (b_n)_n. For instance, if p_k = (k+2)^{−α}, 0 < α < 1, k ≥ 0, we have

  p_k q_k ≥ (n+1)^{−α} (1 − (1/2)^α),   k = 0, ..., n−1.

Consequently we have

  (1/n^{3/2}) Σ_{k=0}^{n−1} |1 − 2p_k|/√(p_k q_k) ≤ (1 − (1/2)^α)^{−1/2} (n+1)^{α/2} n^{−3/2} Σ_{k=0}^{n−1} |1 − 2p_k|

and
$$\frac{1}{n^2}\sum_{k=0}^{n-1}\frac{1}{p_k q_k}\ \le\ \frac{(n+1)^{\alpha}}{\bigl(1-(1/2)^{\alpha}\bigr)\,n},$$
which yields a bound of order $n^{-(1-\alpha)/2}$.

Finally we note that the bound (4.2) in the non-symmetric case $p_k = p$ and $q_k = q = 1-p$, $p\ne 1/2$, is consistent with the bound on the Wasserstein distance between $F_n$ and $Z$ provided by Theorem 2.2 in [4]. Indeed, letting $d_W$ denote the Wasserstein distance and $Y_1'$ an independent copy of $Y_1$, a simple computation shows that
$$d_{FM}(F_n,Z)\ \le\ d_W(F_n,Z)\tag{4.4}$$
$$\le\ \frac{2}{\sqrt n}\left(\frac{\mathrm{E}[|Y_1-Y_1'|^4]}{(\mathrm{E}[|Y_1-Y_1'|^2])^2}+\mathrm{E}[|Y_1|^3]\right) =\ \frac{2}{\sqrt n}\left(\frac{\mathrm{E}[|Y_1-Y_1'|^4]}{4}+\mathrm{E}[|Y_1|^3]\right) =\ \frac{K_2(p)}{\sqrt n},\tag{4.5}$$
where
$$K_2(p) := \frac{1}{p(1-p)}+\frac{2\bigl(1-2p(1-p)\bigr)}{\sqrt{p(1-p)}},$$
since
$$\mathrm{E}[|Y_1|^3] = \frac{1-2p(1-p)}{\sqrt{p(1-p)}} \qquad\text{and}\qquad \mathrm{E}[|Y_1-Y_1'|^4] = \frac{2}{p(1-p)}.$$
We note that when e.g. $p$ is small it holds $K_2(p) > K_1(p)$.

Application to determinantal processes

Let $E$ be a locally compact Hausdorff space with countable basis and $\mathcal{B}(E)$ the Borel $\sigma$-field of $E$. We fix a Radon measure $\lambda$ on $(E,\mathcal{B}(E))$. The configuration space $\Gamma_E$ is the family of nonnegative $\mathbb{N}$-valued Radon measures on $E$. We equip $\Gamma_E$ with the topology generated by the maps $\Gamma_E\ni\xi\mapsto\xi(A)\in\mathbb{N}$, $A\in\mathcal{B}(E)$, where $\xi(A)$ denotes the number of points of $\xi$ in $A$. The existence and uniqueness of a determinantal process with Hermitian kernel $K$ are due to Macchi [2] and Soshnikov [22], and can be summarized as follows (we refer the reader to [3] for the notions of functional analysis involved).

Theorem 4.2 Let $K$ be a self-adjoint integral operator on $L^2(E,\lambda)$ with kernel $K$. Suppose that the spectrum of $K$ is contained in $[0,1]$ and that $K$ is locally of trace-class, i.e. for any relatively compact $\Lambda\subset E$, $K_\Lambda := P_\Lambda K P_\Lambda$ is of trace-class (here $P_\Lambda f := f\mathbf{1}_\Lambda$ is the orthogonal projection). Then there exists a unique probability measure $\mu_K$ on $\Gamma_E$ with $n$-th correlation measure
$$\lambda_n(dx_1,\dots,dx_n) = \det\bigl(K(x_i,x_j)\bigr)_{1\le i,j\le n}\,\lambda(dx_1)\cdots\lambda(dx_n),$$
where $\det(K(x_i,x_j))_{1\le i,j\le n}$ is the determinant of the $n\times n$ matrix with $ij$-entry $K(x_i,x_j)$. The probability measure $\mu_K$ is called the determinantal process with kernel $K$.

Given a relatively compact set $\Lambda\subset E$, we focus on the random variable $\xi(\Lambda)$ and recall the following basic result (see e.g. Proposition 2.2 in [2]).

Theorem 4.3 Let $K$ be as in the statement of Theorem 4.2 and denote by $\kappa_k\in[0,1]$, $k\ge 0$, the eigenvalues of $K_\Lambda$. Under $\mu_K$ the random variable $\xi(\Lambda)$ has the same distribution as $\sum_{k\ge 0} Z_k$, where $Z_0, Z_1,\dots$ are independent random variables such that $Z_k$ obeys the Bernoulli distribution with mean $\kappa_k$, i.e.
$$Z_n = \frac{X_n+1}{2}\in\{0,1\},\qquad n\in\mathbb{N},$$
where the $X_n$'s take values in $\{-1,1\}$ and are independent with $P(X_n = 1) = \kappa_n$.

The central limit theorem for the number of points of a determinantal process in a relatively compact set may be obtained in different manners, see [5], [2] and [23]. In the following we provide an alternative derivation which also gives the rate of the normal approximation.

Corollary 4.4 Let $K$ be as in the statement of Theorem 4.2 and let $(\Lambda_n)_{n\ge 0}\subset E$ be an increasing sequence of relatively compact sets such that
$$\mathrm{Var}_{\mu_K}(\xi(\Lambda_n)) = \sum_{k\ge 0}\kappa_k^{(n)}\bigl(1-\kappa_k^{(n)}\bigr)\ \longrightarrow\ \infty,\qquad\text{as } n\to\infty,$$
where $\kappa_k^{(n)}\in[0,1]$, $k\ge 0$, are the eigenvalues of $K_{\Lambda_n}$, $n\ge 0$. Setting
$$F_n = \frac{\xi(\Lambda_n)-\mathrm{E}_{\mu_K}[\xi(\Lambda_n)]}{\sqrt{\mathrm{Var}_{\mu_K}(\xi(\Lambda_n))}},$$
for any $h\in C^2_b$ we have
$$\bigl|\mathrm{E}_{\mu_K}[h(F_n)]-\mathrm{E}[h(Z)]\bigr|\ \le\ \|h'\|_\infty\,B_2^{(n)}+\|h''\|_\infty\,B_3^{(n)},\qquad n\ge 0,$$

where
$$B_2^{(n)} = \frac{\displaystyle\sum_{k\ge 0}\kappa_k^{(n)}\bigl(1-\kappa_k^{(n)}\bigr)\,\bigl|1-2\kappa_k^{(n)}\bigr|}{\Bigl(\displaystyle\sum_{k\ge 0}\kappa_k^{(n)}\bigl(1-\kappa_k^{(n)}\bigr)\Bigr)^{3/2}}\ \le\ \Bigl(\sum_{k\ge 0}\kappa_k^{(n)}\bigl(1-\kappa_k^{(n)}\bigr)\Bigr)^{-1/2}$$
and
$$B_3^{(n)} = \frac{5}{3}\,\frac{\displaystyle\sum_{k\ge 0}\kappa_k^{(n)}\bigl(1-\kappa_k^{(n)}\bigr)}{\Bigl(\displaystyle\sum_{k\ge 0}\kappa_k^{(n)}\bigl(1-\kappa_k^{(n)}\bigr)\Bigr)^{2}} = \frac{5}{3\,\mathrm{Var}_{\mu_K}(\xi(\Lambda_n))}.$$
So we have a bound of order $[\mathrm{Var}_{\mu_K}(\xi(\Lambda_n))]^{-1/2}$.

Proof. For $n\ge 0$, let $(Z_k^{(n)})_{k\ge 0}$ be a sequence of independent $\{0,1\}$-valued random variables with $Z_k^{(n)}\sim \mathrm{Be}(\kappa_k^{(n)})$ and $(Y_k^{(n)})_{k\ge 0}$ defined by
$$Y_k^{(n)} = \frac{Z_k^{(n)}-\kappa_k^{(n)}}{\sqrt{\kappa_k^{(n)}\bigl(1-\kappa_k^{(n)}\bigr)}}.$$
By Theorem 4.3 we have
$$\xi(\Lambda_n)\ \stackrel{d}{=}\ \sum_{k\ge 0} Z_k^{(n)} = \sum_{k\ge 0}\sqrt{\kappa_k^{(n)}\bigl(1-\kappa_k^{(n)}\bigr)}\,Y_k^{(n)}+\sum_{k\ge 0}\kappa_k^{(n)},$$
where $\stackrel{d}{=}$ denotes equality in distribution. Then
$$F_n = \frac{\xi(\Lambda_n)-\mathrm{E}_{\mu_K}[\xi(\Lambda_n)]}{\sqrt{\mathrm{Var}_{\mu_K}(\xi(\Lambda_n))}}\ \stackrel{d}{=}\ \sum_{k\ge 0}\alpha_k^{(n)}Y_k^{(n)},\qquad\text{where}\quad \alpha_k^{(n)} = \sqrt{\frac{\kappa_k^{(n)}\bigl(1-\kappa_k^{(n)}\bigr)}{\sum_{j\ge 0}\kappa_j^{(n)}\bigl(1-\kappa_j^{(n)}\bigr)}}.$$
We are going to apply Corollary 4.1. Clearly, for any $n\ge 0$, the sequence $(\alpha_k^{(n)})_{k\ge 0}$ is in $l^2(\mathbb{N})$. Moreover, for any $n\ge 0$,
$$\sum_{k\ge 0}\frac{\bigl|1-2\kappa_k^{(n)}\bigr|}{\sqrt{\kappa_k^{(n)}\bigl(1-\kappa_k^{(n)}\bigr)}}\,\bigl|\alpha_k^{(n)}\bigr|^3\ \le\ \frac{1}{\sqrt{\mathrm{Var}_{\mu_K}(\xi(\Lambda_n))}}\ <\ \infty$$
and
$$\sum_{k\ge 0}\frac{\bigl(\alpha_k^{(n)}\bigr)^4}{\kappa_k^{(n)}\bigl(1-\kappa_k^{(n)}\bigr)} = \frac{1}{\mathrm{Var}_{\mu_K}(\xi(\Lambda_n))}\ <\ \infty.$$
So condition (4.1) is satisfied. Moreover, a straightforward computation gives $B_1 = B_1^{(n)} = 0$, $B_2 = B_2^{(n)}$ and $B_3 = B_3^{(n)}$, and the proof is completed. $\square$
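Theorem 4.3 and the proof above reduce everything to independent Bernoulli variables, so the quantities involved are easy to check numerically. A minimal sketch with a hypothetical eigenvalue sequence (illustrative values, not the spectrum of an actual kernel); it verifies that the coefficients $\alpha_k^{(n)}$ are $l^2$-normalized, so that $B_1 = 0$, and that $B_2^{(n)}$ obeys its eigenvalue-free bound:

```python
import math

kappa = [0.9, 0.7, 0.5, 0.3, 0.2, 0.05]     # hypothetical eigenvalues of K_{Lambda_n}
mean = sum(kappa)                           # E[xi(Lambda_n)] = sum_k kappa_k
var = sum(k * (1.0 - k) for k in kappa)     # Var(xi(Lambda_n)) = sum_k kappa_k (1 - kappa_k)

# Counts of a determinantal process are sub-Poissonian: variance never exceeds mean,
# since kappa_k (1 - kappa_k) <= kappa_k.
assert var <= mean

# Coefficients of the first chaos representing F_n in the proof of Corollary 4.4:
alpha = [math.sqrt(k * (1.0 - k) / var) for k in kappa]
B1 = abs(1.0 - sum(a * a for a in alpha))   # vanishes by construction

# B2^{(n)} as stated, and its bound Var^{-1/2}, which follows from |1 - 2 kappa| <= 1:
B2 = sum(k * (1.0 - k) * abs(1.0 - 2.0 * k) for k in kappa) / var ** 1.5
```

The inequality `B2 <= var ** -0.5` is exactly the eigenvalue-free estimate that yields the order $[\mathrm{Var}_{\mu_K}(\xi(\Lambda_n))]^{-1/2}$ in the corollary.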

Example Let $E = \mathbb{C}$ and let $\lambda$ be the standard complex Gaussian measure on $\mathbb{C}$, i.e.
$$\lambda(dz) = \frac{1}{\pi}\,e^{-|z|^2}\,dz,$$
where $dz$ is the Lebesgue measure on $\mathbb{C}$. The Ginibre process $\mu_{\exp}$ is the determinantal process with exponential kernel $K(z,w) = e^{z\bar w}$, where $\bar z$ is the complex conjugate of $z\in\mathbb{C}$. Let $b(o,n)$ be the complex ball centered at the origin with radius $n$. By Theorem 1.3 in [2] we have
$$\mathrm{Var}_{\mu_{\exp}}(\xi(b(o,n))) = \frac{n}{\pi}\int_0^{4n^2}\bigl(1-x/(4n^2)\bigr)^{1/2}\,x^{-1/2}\,e^{-x}\,dx\ \sim\ \frac{n}{\sqrt\pi},\qquad\text{as } n\to\infty.$$
So for the Ginibre process Corollary 4.4 provides a bound of order $n^{-1/2}$.

5 nth chaos bounds for the normal approximation

In this section we give explicit upper bounds for the constants $B_i$ and $C_i$, $i=1,2,3$, involved in (3.5) and (3.4), when $F = J_n(f_n)$, $f_n\in l^2_s(\Delta_n)$. Our approach is based on the multiplication formula (5.3) below, which extends formula (2.1) in [4] (see the discussion after Proposition 5.1).

Given $f_n\in l^2_s(\Delta_n)$ and $g_m\in l^2_s(\Delta_m)$, the contraction $f_n\star_l^{\,r} g_m$, $0\le r\le l\le m\wedge n$, is defined to be the function of $n+m-r-l$ variables
$$f_n\star_l^{\,r} g_m(a_{r+1},\dots,a_n,b_{l+1},\dots,b_m) := \varphi(a_{r+1})\cdots\varphi(a_l)\;f_n\otimes_l^{\,r} g_m(a_{r+1},\dots,a_n,b_{l+1},\dots,b_m),$$
where (cf. the structure equation (2.1))
$$\varphi(n) = \frac{q_n-p_n}{\sqrt{p_n q_n}},\qquad n\in\mathbb{N},\tag{5.1}$$
and
$$f_n\otimes_l^{\,r} g_m(a_{r+1},\dots,a_n,b_{l+1},\dots,b_m) := \sum_{a_1,\dots,a_r\in\mathbb{N}} f_n(a_1,\dots,a_n)\,g_m(a_1,\dots,a_l,b_{l+1},\dots,b_m)$$
is the contraction considered in [4] for the symmetric case, see p. 1707 therein. By convention, we define $\varphi(a_{r+1})\cdots\varphi(a_l) := 1$ if $r = l$ (even when $\varphi\equiv 0$). Denote by $f_n\,\tilde\star{}_l^{\,r}\,g_m$, $0\le r\le l\le m\wedge n$, the symmetrization of $f_n\star_l^{\,r} g_m$.
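For kernels with finite support the contraction $f_n\star_l^{\,r} g_m$ can be computed by brute force. The sketch below follows the definition as read here (sum over the first $r$ identified arguments, $\varphi$-weights on the remaining $l-r$ identified arguments); the function and variable names are illustrative.

```python
from itertools import product

def star_contraction(f, n, g, m, l, r, phi, support):
    """Sketch of the weighted contraction f *_l^r g (0 <= r <= l <= min(n, m)):
    the first l arguments of f and g are identified, the first r of those are
    summed out, and each remaining identified argument a is weighted by phi(a).
    Kernels are dicts mapping index tuples to floats, supported on `support`."""
    out = {}
    for a_free in product(support, repeat=n - r):      # a_{r+1}, ..., a_n
        shared = a_free[: l - r]                       # a_{r+1}, ..., a_l
        weight = 1.0                                   # convention: empty product = 1
        for a in shared:
            weight *= phi(a)
        for b_free in product(support, repeat=m - l):  # b_{l+1}, ..., b_m
            s = sum(f.get(a_sum + a_free, 0.0) * g.get(a_sum + shared + b_free, 0.0)
                    for a_sum in product(support, repeat=r))
            val = weight * s
            if val != 0.0:
                out[a_free + b_free] = val
    return out

# Symmetric case p = q = 1/2, i.e. phi identically 0:
phi0 = lambda k: 0.0
S = (0, 1, 2)
f = {(0,): 1.0, (1,): 2.0}
g = {(0,): 3.0, (1,): -1.0}
full = star_contraction(f, 1, g, 1, 1, 1, phi0, S)  # l = r = 1: the scalar <f, g>
half = star_contraction(f, 1, g, 1, 1, 0, phi0, S)  # l = 1, r = 0: killed by phi = 0
```

Here `full` is `{(): 1.0}` (the inner product $1\cdot 3 + 2\cdot(-1)$), while `half` is empty, matching the remark that contractions with $r < l$ vanish in the symmetric case.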

Then, we shall consider the contraction
$$f_n\,\hat\star{}_l^{\,r}\,g_m(a_{r+1},\dots,a_n,b_{l+1},\dots,b_m) := \mathbf{1}_{\Delta_{n+m-r-l}}(a_{r+1},\dots,a_n,b_{l+1},\dots,b_m)\;f_n\,\tilde\star{}_l^{\,r}\,g_m(a_{r+1},\dots,a_n,b_{l+1},\dots,b_m).\tag{5.2}$$
Note that in the symmetric case $p_n = q_n = 1/2$ we have $f_n\star_l^{\,l} g_m = f_n\otimes_l^{\,l} g_m$. However, $f_n\star_l^{\,r} g_m = 0$ if $r < l$, and so $f_n\star_l^{\,r} g_m \ne f_n\otimes_l^{\,r} g_m$ for $r < l$. The following multiplication formula holds.

Proposition 5.1 We have the chaos expansion
$$J_n(f_n)\,J_m(g_m) = \sum_{s=0}^{2(n\wedge m)} J_{n+m-s}(h_{n,m,s}),\tag{5.3}$$
provided the functions
$$h_{n,m,s} := \sum_{s\le 2i\le 2(s\wedge n\wedge m)} i!\binom{n}{i}\binom{m}{i}\binom{i}{s-i}\,f_n\,\hat\star{}_i^{\,s-i}\,g_m$$
belong to $l^2_s(\Delta_{n+m-s})$, $0\le s\le 2(n\wedge m)$. Here the symbol $\sum_{s\le 2i\le 2(s\wedge n\wedge m)}$ means that the sum is taken over all the integers $i$ in the interval $[s/2,\,s\wedge n\wedge m]$.

Since it is not obvious that formula (5.3) extends the product formula (2.1) in [4], it is worthwhile to explain this point in detail. In the symmetric case $p_n = q_n = 1/2$, for any $f_n\in l^2_s(\Delta_n)$, $g_m\in l^2_s(\Delta_m)$, we have $f_n\,\hat\star{}_i^{\,s-i}\,g_m = 0$ if $s < 2i$, $0\le s\le 2(n\wedge m)$. Therefore, for any fixed $0\le s\le 2(n\wedge m)$ we have $h_{n,m,s} = 0$ if $s/2$ is not an integer, and
$$h_{n,m,s} = (s/2)!\binom{n}{s/2}\binom{m}{s/2}\,f_n\,\hat\star{}_{s/2}^{\,s/2}\,g_m$$
if $s/2$ is an integer. Note that if $s/2$ is an integer, we have
$$\|f_n\,\hat\star{}_{s/2}^{\,s/2}\,g_m\|_{l^2(\Delta_{n+m-s})}\ \le\ \|f_n\,\tilde\star{}_{s/2}^{\,s/2}\,g_m\|_{l^2(\mathbb{N})^{\otimes(n+m-s)}}\ \le\ \|f_n\otimes_{s/2}^{\,s/2} g_m\|_{l^2(\mathbb{N})^{\otimes(n+m-s)}},$$
where we used the straightforward relation
$$\|\tilde f\|_{l^2(\mathbb{N})^{\otimes n}}\ \le\ \|f\|_{l^2(\mathbb{N})^{\otimes n}},\tag{5.4}$$
$\tilde f$ being the symmetrization of $f$. Therefore, by Lemma 2.4(1) in [4] we have $f_n\otimes_{s/2}^{\,s/2} g_m\in l^2(\mathbb{N})^{\otimes(n+m-s)}$, and so $h_{n,m,s}\in l^2_s(\Delta_{n+m-s})$, for any $0\le s\le 2(n\wedge m)$. By (5.3) we have
$$J_n(f_n)\,J_m(g_m) = \sum_{s=0}^{2(n\wedge m)} J_{n+m-s}(h_{n,m,s})$$
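The $\varphi$-weights appearing in the contractions originate from the structure relation $Y_k^2 = 1+\varphi(k)Y_k$, which is what produces the extra terms of (5.3) compared with the symmetric product formula. A quick numerical check, writing $Y_k$ in terms of $X_k\in\{-1,1\}$ with the normalization used in the Example of Section 4 (an assumption made explicit here):

```python
import math

def structure_gap(p):
    """Check Y^2 = 1 + phi * Y pointwise for X in {-1, +1}, where
    Y = (X + 1 - 2p) / (2 sqrt(p q)), phi = (q - p) / sqrt(p q), q = 1 - p."""
    q = 1.0 - p
    phi = (q - p) / math.sqrt(p * q)
    gap = 0.0
    for x in (-1.0, 1.0):
        y = (x + 1.0 - 2.0 * p) / (2.0 * math.sqrt(p * q))
        gap = max(gap, abs(y * y - (1.0 + phi * y)))
    return gap

worst = max(structure_gap(p) for p in (0.1, 0.3, 0.5, 0.7))
```

At $p = 1/2$ the weight $\varphi$ vanishes, which is why all contractions with $r < l$ drop out and (5.3) collapses to the symmetric formula of [4].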

$$= \sum_{\substack{s=0\\ s/2\in\mathbb{N}}}^{2(n\wedge m)} (s/2)!\binom{n}{s/2}\binom{m}{s/2}\,J_{n+m-s}\bigl(f_n\,\hat\star{}_{s/2}^{\,s/2}\,g_m\bigr) = \sum_{r=0}^{n\wedge m} r!\binom{n}{r}\binom{m}{r}\,J_{n+m-2r}\bigl(f_n\,\hat\star{}_r^{\,r}\,g_m\bigr),$$
which is exactly formula (2.1) in [4]. We conclude this part with the following lemma.

Lemma 5.2 For any $f_n\in l^2_s(\Delta_n)$, $g_m\in l^2_s(\Delta_m)$, we have
$$\sum_{q\ge 0} f_n(*,q)\star_l^{\,r} g_m(*,q) = f_n\star_{l+1}^{\,r+1} g_m.$$
Proof. Note that
$$f_n(*,q)\star_l^{\,r} g_m(*,q)(a_{r+1},\dots,a_{n-1},b_{l+1},\dots,b_{m-1})$$
$$= \varphi(a_{r+1})\cdots\varphi(a_l)\sum_{a_1,\dots,a_r\in\mathbb{N}} f_n(a_1,\dots,a_{n-1},q)\,g_m(a_1,\dots,a_l,b_{l+1},\dots,b_{m-1},q),$$
and so, summing over $q\in\mathbb{N}$, we deduce
$$\sum_{q\ge 0} f_n(*,q)\star_l^{\,r} g_m(*,q)(a_{r+1},\dots,a_{n-1},b_{l+1},\dots,b_{m-1})$$
$$= \varphi(a_{r+1})\cdots\varphi(a_l)\sum_{a_1,\dots,a_r,q\in\mathbb{N}} f_n(a_1,\dots,a_{n-1},q)\,g_m(a_1,\dots,a_l,b_{l+1},\dots,b_{m-1},q)$$
$$= \varphi(a_{r+1})\cdots\varphi(a_l)\,f_n\otimes_{l+1}^{\,r+1} g_m(a_{r+1},\dots,a_{n-1},b_{l+1},\dots,b_{m-1})$$
$$= f_n\star_{l+1}^{\,r+1} g_m(a_{r+1},\dots,a_{n-1},b_{l+1},\dots,b_{m-1}).\qquad\square$$

5.1 Clark-Ocone bound

By e.g. Lemma 4.6 in [8], for the nth chaos $J_n(f_n)$, $n\ge 2$, $f_n\in l^2_s(\Delta_n)$, we have
$$\mathrm{E}[J_n(f_n)\,|\,\mathcal{F}_k] = J_n\bigl(f_n\mathbf{1}_{[0,k]^n}\bigr),\qquad k\in\mathbb{N}.$$
Therefore
$$\mathrm{E}[D_k J_n(f_n)\,|\,\mathcal{F}_{k-1}] = n\,\mathrm{E}[J_{n-1}(f_n(*,k))\,|\,\mathcal{F}_{k-1}] = n\,J_{n-1}(f_{n]}(*,k)),\tag{5.5}$$
where
$$f_{n]}(*,k) := f_n(*,k)\,\mathbf{1}_{[0,k-1]^{n-1}}(*).\tag{5.6}$$
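Lemma 5.2 can be sanity-checked on finitely supported symmetric kernels. With $l = r = 0$ (so no $\varphi$-weights enter), the identity reads $\sum_q f(\cdot,q)\star_0^{\,0} g(\cdot,q) = f\star_1^{\,1} g$, i.e. for kernels of two variables, $\sum_q f(a,q)g(b,q) = \sum_c f(c,a)g(c,b)$. A minimal check (the kernels and support are illustrative):

```python
from itertools import product

S = range(4)  # finite support standing in for N
# Symmetric kernels vanishing on the diagonal, i.e. elements of l^2_s(Delta_2):
f = {(i, j): 0.1 * (i + j) for i, j in product(S, S) if i != j}
g = {(i, j): 0.2 * (i * j + 1) for i, j in product(S, S) if i != j}

max_gap = 0.0
for a, b in product(S, S):
    lhs = sum(f.get((a, q), 0.0) * g.get((b, q), 0.0) for q in S)  # sum_q f(.,q) *_0^0 g(.,q)
    rhs = sum(f.get((c, a), 0.0) * g.get((c, b), 0.0) for c in S)  # (f *_1^1 g)(a, b)
    max_gap = max(max_gap, abs(lhs - rhs))
```

The two sides agree term by term because the kernels are symmetric, which is exactly the role of the assumption $f_n\in l^2_s(\Delta_n)$ in the lemma.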

So by the isometric properties of discrete multiple stochastic integrals we have that the constants B i of Corollary 3.3 are equal, respectively, to B := n! f n 2 l 2 s( n) + n2 J n (f n (, )), J n (f n] ( )) l 2 (N) E[ J n (f n (, )), J n (f n] ( )) l 2 (N)] L 2 (Ω), (5.7) B 2 : = n 3 (n )! 0 2p p q f n (, ) l 2 s ( n ) E[ Jn (f n (, )) 4 ], (5.8) B 3 : = 5n4 3 0 p q E[ J n (f n (, )) 4 ]. (5.9) In the proof of the next theorem we show that these constants can be bounded above by computable quantities. Theorem 5.3 Let n 2 be fixed and let f n l 2 s( n ). Assume that for any N the functions and h () n,n,s := h () n,n,s := s 2i 2(s (n )) s 2i 2(s (n )) belong to l 2 s( 2n 2 s ), 0 s 2n 2, and that B := n! f n 2 l 2 s( n) ( 2n 3 + n 2 (2n 2 s)! 0 s {2i, 2i 2 } 2(s (n )) f n (, ) s i i f n] ( ) l 2 ( 2n 2 s ) B 2 : = n 3 (n )! 0 s {2i, 2i 2 } 2(s (n )) ( ) 2 ( ) n i i! f n (, ) s i i f n] ( ) (5.0) i s i ( ) 2 ( ) n i i! f n (, ) s i i f n (, ) (5.) i s i 0 ( ) 2 ( ) 2 ( n n i i!i 2! i i 2 s i )( i2 f n (, ) s i 2 i 2 f n] ( ) l 2 ( 2n 2 s )) /2, ( 2n 2 2p f n (, ) p q l 2 s ( n ) (2n 2 s)! ( ) 2 ( ) 2 ( n n i i!i 2! i i 2 s i )( i2 s i 2 s i 2 f n (, ) s i i f n (, ) l 2 ( 2n 2 s ) f n (, ) s i 2 i 2 f n (, ) l 2 ( 2n 2 s )) /2, ) ) 26

B 3 : = 5n4 3 2n 2 (2n 2 s)! s {2i, 2i 2 } 2(s (n )) 0 ( ) 2 ( ) 2 ( n n i i!i 2! i i 2 s i )( i2 s i 2 ) (5.2) p q f n (, ) s i i f n (, ) l 2 ( 2n 2 s ) f n (, ) s i 2 i 2 f n (, ) l 2 ( 2n 2 s ) (5.3) are finite. Then for all h C 2 b we have Proof. E[h(J n (f n ))] E[h(Z)] B min{4 h, h } + h B 2 + h B 3. The claim follows from Corollary 3.3 if we show that the constants B i defined by (5.7), (5.8) and (5.9) are bounded above by the constants B i defined in the statement, respectively. Step : Proof of B B. By the hypotheses on the functions h () n,n,s and the multiplication formula (5.3), we deduce J n (f n (, ))J n (f n] ) 2n 2 ( ) 2 ( ) n i = i! J 2n 2 s (f n (, ) s i i i s i s 2i 2(s (n )) = (n )!f n (, ) n n f n] ( ) 2n 3 ( ) 2 ( ) n i + i! J 2n 2 s (f n (, ) s i i i s i s 2i 2(s (n )) f n] ( )) f n] ( )). (5.4) Since the constant B in the statement is finite, we have f n (, ) s i i f n] ( ) l 2 ( 2n 2 s ) <, 0 0 s 2n 3, s 2i 2(s (n )). By (5.4) this in turn implies f n (, ) s i i f n] ( ) l 2 s ( 2n 2 s ) <, 0 0 s 2n 3, s 2i 2(s (n )), and so f n (, ) s i i f n] ( ) l 2 s( 2n 2 s ), 0 27

0 s 2n 3, s 2i 2(s (n )) (it is worthwhile to note that one can not use Lemma 5.2 to express the infinite sum 0 f n(, ) s i i f n] ( ) since the function of n variables f n] ( ) is not symmetric). Therefore, summing up over 0 in the equality (5.4), we get J n (f n (, )), J n (f n] ( )) l 2 (N) = (n )! 0 2n 3 + f n (, ) n n f n] ( ) s 2i 2(s (n )) ( ) 2 ( ) ( n i i! J 2n 2 s i s i 0 f n (, ) s i i f n] ( ) Taing the mean and noticing that discrete multiple stochastic integrals are centered, we have and so E[ J n (f n (, )), J n (f n] ( )) l 2 (N)] = (n )! 0 f n (, ) n n f n] ( ), ). J n (f n (, )), J n (f n] ( )) l 2 (N) E[ J n (f n (, )), J n (f n] ( )) l 2 (N) 2n 3 ( ) 2 ( ) ( ) n i = i! J 2n 2 s f n (, ) s i i f n] ( ). i s i 0 s 2i 2(s (n )) By means of the orthogonality and isometric properties of discrete multiple stochastic integrals, we have 2n 3 ( ) 2 ( ) ( ) E n i i! J 2n 2 s f n (, ) s i i f n] ( ) 2 i s i s 2i 2(s (n )) 0 2n 3 ( ) 2 ( ) ( ) = E n i i! J 2n 2 s f n (, ) s i i f n] ( ) 2 i s i s 2i 2(s (n )) 0 [( 0,2n 3 ( ) 2 ( ) ( )) n i + E i! J 2n 2 s f n (, ) s i i f n] ( ) i s i s s 2 s 2i 2(s (n )) 0 ( ( ) 2 ( ) ( ))] n i i! J 2n 2 s2 f n (, ) s 2 i i f n] ( ) i s 2 i s 2 2i 2(s 2 (n )) 0 2n 3 ( ) 2 ( ) ( ) = E n i i! J 2n 2 s f n (, ) s i i f n] ( ) 2 i s i s 2i 2(s (n )) 0 [ 2n 3 ( ) 2 ( ) ( ) 2 ( ) n i n i2 = E i! i 2! i s i i 2 s i 2 s {2i, 2i 2 } 2(s (n )) 28

J 2n 2 s ( 0 f n (, ) s i i f n] ( ) ) J 2n 2 s ( 0 f n (, ) s i 2 i 2 f n] ( ) )] = 2n 3 (2n 2 s)! 0 f n (, ) s i i s {2i, 2i 2 } 2(s (n )) f n] ( ), 0 ( ) 2 ( ) 2 ( n n i i!i 2! i i 2 s i )( i2 s i 2 f n (, ) s i 2 i 2 f n] ( ) l 2 s ( 2n 2 s ). (5.5) ) By the above relations and (5.7), we deduce B = n! f n 2 l 2 s( n) ( 2n 3 + n 2 (2n 2 s)! 0 f n (, ) s i i s {2i, 2i 2 } 2(s (n )) f n] ( ), 0 Now, note that by the Cauchy-Schwarz inequality ( ) 2 ( ) 2 ( n n i i!i 2! i i 2 s i )( i2 s i 2 f n (, ) s i 2 i 2 f n] ( ) l 2 s ( 2n 2 s )) /2. (5.6) f, g l 2 (N) n f l 2 (N) n g l 2 (N) n, for any f, g l2 (N) n. (5.7) ) By this relation, (5.4) and (5.6) we easily get B B. Step 2: Proof of B i B i, i = 2, 3. By the hypotheses on the functions multiplication formula (5.3), we deduce h () n,n,s and the J n (f n (, )) 2 = 2n 2 s 2i 2(s (n )) By a similar computation as for (5.5), we have E[ J n (f n (, )) 4 ] 2n 2 = E = s 2i 2(s (n )) 2n 2 (2n 2 s)! ( ) 2 ( ) n i i! J 2n 2 s (f n (, ) s i i i s i f n (, )). ( ) 2 ( ) 2 n i ( i! J 2n 2 s fn (, ) s i i f n (, ) ) i s i s {2i, 2i 2 } 2(s (n )) ( ) 2 ( ) 2 ( n n i i!i 2 i i 2 s i )( i2 s i 2 f n (, ) s i i f n (, ), f n (, ) s i 2 i 2 f n (, ) l 2 s ( 2n 2 s ), (5.8) and so by (5.8) and (5.9) we deduce B 2 = n 3 (n )! 0 2p p q f n (, ) l 2 s ( n ) 29 ( 2n 2 (2n 2 s)! )

s {2i, 2i 2 } 2(s (n )) ( ) 2 ( ) 2 ( n n i i!i 2! i i 2 s i )( i2 s i 2 ) ) /2 f n (, ) s i i f n (, ), f n (, ) s i 2 i 2 f n (, ) l 2 s ( 2n 2 s ) (5.9) and B 3 = 5n4 3 2n 2 (2n 2 s)! s {2i, 2i 2 } 2(s (n )) 0 ( ) 2 ( ) 2 ( n n i i!i 2! i i 2 s i )( i2 s i 2 p q f n (, ) s i i f n (, ), f n (, ) s i 2 i 2 f n (, ) l 2 s ( 2n 2 s ). (5.20) ) The claim follows from the above equalities and relations (5.7) and (5.4). 5.2 Semigroup bound For the nth-chaos J n (f n ), n 2, f n l 2 s( n ), we have D L J n (f n ) = n D J n (f n ) = J n (f n (, )) (5.2) and the constants C i of Corollary 3.5 are equal, respectively, to C := n! f n 2 l 2 s( n) + n J n (f n (, )), J n (f n (, )) l 2 (N) E[ J n (f n (, )), J n (f n (, )) l 2 (N)] L 2 (Ω), (5.22) C 2 : = B 2, where B 2 is defined by (5.8) C 3 : = B 3 n, where B 3 is defined by (5.9). In the next theorem we show that these constants can be bounded above by computable quantities. 30

Theorem 5.4 Let n 2 be fixed and let f n l 2 s( n ). Assume that for any N the () functions h n,n,s defined by (5.) belong to l 2 s( 2n 2 s ), 0 s 2n 2, and that C := n! f n 2 l 2 s( n) + n ( 2n 3 (2n 2 s)! s {2i, 2i 2 } 2(s (n )) ( ) 2 ( ) 2 ( n n i i!i 2! i i 2 s i f n s i + i + f n l 2 ( 2n 2 s ) f n s i 2+ i 2 + f n l 2 ( 2n 2 s )) /2, )( i2 s i 2 ) C 2 := B 2, where B 2 is defined by (5.2), and C 3 := B 3 /n where B 3 is defined by (5.3), are finite. Then for all h C 2 b we have E[h(J n (f n ))] E[h(Z)] C min{4 h, h } + h C 2 + h C 3. Proof. The claim follows from Corollary 3.5 if we show that the constant C defined by (5.22) is bounded above by the constant C defined in the statement (for the bounds C i C i, i = 2, 3, see Step 2 of the proof of Theorem 5.3). Along a similar computation as in the Step of the proof of Theorem 5.3, we have J n (f n (, )), J n (f n (, )) l 2 (N) = (n )! 0 2n 3 + f n (, ) n n f n (, ) s 2i 2(s (n )) = (n )!f n n n f n 2n 3 + s 2i 2(s (n )) ( ) 2 ( ) ( n i i! J 2n 2 s i s i 0 f n (, ) s i i f n (, ) ( ) 2 ( ) n i ( ) i! J 2n 2 s fn s i+ i+ f n, (5.23) i s i where the latter equality follows from Lemma 5.2. By a similar computation as for (5.5), we have ) J n (f n (, )) 2 l 2 (N) E[ J n (f n (, )) 2 l 2 (N) ] 2 L 2 (Ω) 2n 3 ( ) 2 ( ) 2 = E n i i! J 2n 2 s (f n s i+ i+ f n ) i s i = s 2i 2(s (n )) 2n 3 (2n 2 s)! s {2i, 2i 2 } 2(s (n )) 3 ( ) 2 ( n n i!i 2! i i 2 ) 2

( i s i )( i2 s i 2 By this relation and (5.22) we deduce C = n! f n 2 l 2 s( n) + n ( 2n 3 (2n 2 s)! ) f n s i + i + f n, f n s i 2+ i 2 + f n l 2 s ( 2n 2 s ). (5.24) s {2i, 2i 2 } 2(s (n )) ( ) 2 ( ) 2 ( n n i i!i 2! i i 2 s i f n s i + i + f n, f n s i 2+ i 2 + f n l 2 s ( 2n 2 s )) /2. )( i2 s i 2 ) By this equality and (5.7) and (5.4) we finally have C C. Connection with Theorem 4. in [4] In this subsection we refine a little the bound given by Theorem 5.4 in order to strictly extend the bound provided by Theorem 4. in [4]. For the nth chaos J n (f n ), n 2, f n l 2 s( n ), we have that the constants C i of Theorem 3.4 are equal, respectively, to ] C := E [ n J n (f n (, )) 2l 2(N), (5.25) C 2 := n 2 0 2p p q E[ J n (f n (, )) 3 ], (5.26) and C 3 := B 3 n, where B 3 is defined by (5.9) In the next theorem we show that these constants can be bounded above by computable quantities. Theorem 5.5 Let n 2 be fixed and let f n l 2 s( n ). Assume that for any N the h () functions n,n,s defined by (5.) belong to l 2 s( 2n 2 s ), 0 s 2n 2, and that ( C := n! f n 2 l 2 s( n) 2 2n 3 + n 2 (2n 2 s)! s {2i, 2i 2 } 2(s (n )) ( ) 2 ( ) 2 ( n n i i!i 2! i i 2 s i f n s i + i + f n l 2 ( 2n 2 s ) f n s i 2+ i 2 + f n l 2 ( 2n 2 s )) /2, )( i2 s i 2 ) 32

C 2 := B 2 /n, where B 2 is defined by (5.2) and C 3 := B 3 /n, where B 3 is defined by (5.3) are finite. Then for all h C 2 b we have E[h(J n (f n ))] E[h(Z)] C min{4 h, h } + h C 2 + h C 3. Proof. The claim follows from Theorem 3.4 if we show that the constants C i, i =, 2, defined by (5.25) and (5.26) are bounded above by the constants C i, i =, 2, defined in the statement, respectively (for the bound C 3 C 3 see Step 2 of the proof of Theorem 5.3). Step : Proof of C C. By the Cauchy-Schwarz inequality, (5.23) and (5.24) we have ] /2 C E [ n J n (f n (, )) 2l 2(N) 2 ( = n! f n 2 l 2 s( n) 2 2n 3 + n 2 E ( n! f n 2 l 2 s( n) 2 s 2i 2(s (n )) 2n 3 + n 2 (2n 2 s)! ( ) 2 ( ) 2 n i i! J 2n 2 s (f n s i+ i+ f n ) i s i s {2i, 2i 2 } 2(s (n )) ( ) 2 ( ) 2 ( n n i i!i 2! i i 2 s i f n s i + i + f n, f n s i 2+ i 2 + f n l 2 s ( 2n 2 s )) /2. ) /2 )( i2 s i 2 ) The claim follows from Relations (5.7) and (5.4). Step 2: Proof of C 2 C 2. By the Cauchy-Schwarz inequality we have E[ J n (f n (, )) 3 ] (E[ J n (f n (, )) 2 ]) /2 (E[ J n (f n (, )) 4 ]) /2. By the isometry for discrete multiple stochastic integrals we have J n (f n (, )) L 2 (Ω) = (n )! f n (, ) l 2 s ( n ). By the above relations and (5.8) we have C 2 B 2 /n, where B 2 is defined by (5.9). The claim follows from (5.7) and (5.4). Since it is not obvious that the above theorem extends Theorem 4. in [4], it is worthwhile to explain this point in detail. Tae f n l 2 () s( n ), n 2, and let h n,n,s be defined by (5.). In the symmetric case p = q = /2, by the same arguments as those one after 33

the statement of Proposition 5. we have that, for any fixed 0 s 2(n ) and N, h () n,n,s = 0 if s/2 is not an integer and h () n,n,s = (s/2)! otherwise. If s/2 is an integer we also have ( n s/2 ) 2 f n (, ) s/2 s/2 f n(, ) f n (, ) s/2 s/2 f n(, ) l 2 s ( 2n 2 s ) f n (, ) s/2 s/2 f n(, ) l 2 ( 2n 2 s ) = f n (, ) s/2 s/2 f n(, ) l 2 ( 2n 2 s ) f n (, ) 2 l 2 s( n ) <, where the latter relation follows from Lemma 2.4() in [4]. So h () n,n,s l 2 s( 2n 2 s ). In the symmetric case, by the definition of the contraction, for 0 s 2n 3 and s 2i 2(s (n )), we have and f n s i+ i+ f n l 2 ( 2n 2 s ) = 0, if s < 2i f n s i+ i+ f n l 2 ( 2n 2 s ) = f n i+ i+ f n l 2 ( 2n 2 2i ) f n 2 l 2 s( n), if s = 2i where the latter relation follows from Lemma 2.4() in [4]. Consequently, the constant C in the statement of Theorem 5.5 is finite and reduces to 2(n 2) ( s ) ( ) 2 4 n C = ( n! f n 2l2s( n) 2 + n 2 {s/2 N}(2n 2 s)! 2! s/2 f n s/2+ s/2+ f n 2 l 2 ( 2n 2 s )) /2, ( [ n ( ) ] 2 2 /2 n = n! f n 2 l 2 s( n) 2 + n 2 (2n 2s)! (s )! f n s s f n 2 l s 2 ( 2n 2s )). s= As far as the constant C 2 in the statement of Theorem 5.5 is concerned, in the symmetric case one clearly has C 2 = 0. 34

Finally, consider the constant C 3 in the statement of Theorem 5.5. The following bound holds: C 3 20 2n 2 3 n3 (2n 2 s)! = 20n3 3 = 20n3 3 = 20n3 3 s {2i, 2i 2 } 2(s (n )) 0 2n 2 ( ) 2 ( ) 2 ( n n i i!i 2! i i 2 s i )( i2 s i 2 f n (, ) s i i f n (, ) l 2 (N) 2n 2 s f n(, ) s i 2 i 2 f n (, ) l 2 (N) 2n 2 s {s/2 N}(2n 2 s)! ( s ) ( ) 2 4 n 2! s/2 0 [ n ( ) ] 2 2 n (2n 2s)! (s )! s 0 [ n ( n (2n 2s)! (s )! s s= s= f n (, ) s/2 s/2 f n(, ) 2 l 2 (N) 2n 2 s ) 2 ] 2 f n s s ) f n (, ) s s f n (, ) 2 l 2 (N) 2n 2s f n 2 l 2 (N) 2n 2s+, where the latter equality follows from Lemma 2.4(2) (relation (2.4)) in [4] and the constant C 3 is finite again by Lemma 2.4() in in [4]. We recovered the bound provided by Theorem 4. in [4]. 5.3 Convergence to the normal distribution The next theorems follow by Theorems 5.3 and 5.4, respectively. Theorem 5.6 Let n 2 be fixed and let F m = J n (f m ), m, be a sequence of discrete multiple stochastic integrals such that f m l 2 s( n ), for any N the functions h () n,n,s () and h n,n,s defined by (5.0) and (5.) with f m in place of f n belong to l 2 s( 2n 2 s ), 0 s 2n 2, 0 f m (, ) s i i f m] ( ) l 2 ( 2n 2 s ) 0, 0 n! f m 2 l s( n), as m (5.27) as m, for any 0 s 2n 3 and s 2i 2(s (n )) (5.28) 2p p q f m (, ) l 2 s ( n ) 35

and 0 f m (, ) s i i f m (, ) l 2 ( 2n 2 s ) f m (, ) s i 2 i 2 f m (, ) l 2 ( 2n 2 s ) 0, as m, for any 0 s 2n 2 and s {2i, 2i 2 } 2(s (n )) (5.29) f m (, ) s i i p q f m (, ) l 2 ( 2n 2 s ) f m (, ) s i 2 i 2 f m (, ) l 2 ( 2n 2 s ) 0, as m, for any 0 s 2n 2 and s {2i, 2i 2 } 2(s (n )). (5.30) Then F m Law N(0, ). Theorem 5.7 Let n 2 be fixed and let F m = J n (f m ), m, be a sequence of discrete multiple stochastic integrals such that f m l 2 s( n ), for any N the functions defined by (5.) with f m in place of f n belong to l 2 s( 2n 2 s ), 0 s 2n 2, f m s i+ i+ f m l 2 ( 2n 2 s ) 0, h () n,n,s as m, for any 0 s 2n 3 and s 2i 2(s (n )) (5.3) and (5.27), (5.29) and (5.30) hold. Then Connection with Proposition 4.3 in [4] F m Law N(0, ). In this paragraph we explain the connection between Theorem 5.7, specialized in the symmetric case, and Proposition 4.3 in [4]. Tae f m l 2 () s( n ), m, n 2, and let h n,n,s be defined by (5.) with f m in place of f n. We already checed (after the proof of Theorem 5.5) () that, in the symmetric case, one has h n,n,s l 2 s( 2n 2 s ), N, 0 s 2n 2. Note that assumption (5.27) is explicitly required in Proposition 4.3 of [4] and, for p = q = /2, assumption (5.29) is automatically satisfied. In the symmetric case, conditions (5.30) and (5.3) read and f m (, ) i i f m (, ) 2 l 2 ( 2n 2 2i ) 0, as m, for any 0 i n (5.32) 0 f m i+ i+ f m l 2 ( 2n 2 2i ) 0, as m, for any 0 i n 2 (5.33) 36

respectively. We are going to chec that assumption (4.44) of Proposition 4.3 in [4], i.e. f m r r f m l 2 (N) 2n 2r 0, as m, for any r n (5.34) implies (5.32) and (5.33). Clearly, (5.34) is equivalent to (5.33). Moreover, for any 0 i n, by Lemma 2.4(2) (relation (2.4)) and Lemma 2.4(3) in [4], we have 0 f m (, ) i i f m (, ) 2 l 2 ( 2n 2 2i ) 0 f m (, ) i i f m (, ) 2 l 2 (N) 2n 2 2i f m n n f m l 2 (N) f m 2 l 2 s( n) f m n n f m l 2 (N) 2 f m 2 l 2 s( n). So combining (5.27) and condition (5.34) with r = n we have (5.32). Quadratic functionals In the next proposition we apply Theorem 5.7 with n = 2. In comparison with Proposition 4.3 of [4] we require an additional l 4 condition in the non-symmetric case. Proposition 5.8 Assume that there exists some ε > 0 such that and consider a sequence f m l 2 s( 2 ) such that a) lim m f m 2 l 2 s( 2 ) = /2, b) lim m f m f m l 2 (N) 2 = 0, c) lim m f m l 4 s ( 2 ) 0. Then J 2 (f m ) Law N(0, ) as m. Proof. 0 < ε < p < ε, N, (5.35) We need to satisfy Conditions (5.27), (5.29), (5.30) and (5.3), in addition to an integrability chec which is postponed to the end of this proof. First we note that (5.27) is Condition a) above. Next we note that under (5.35), Conditions (5.29), (5.30) and (5.3) read f m (, ) l 2 s ( ) f m (, ) 0 0 f m (, ) l 2 s ( 2 ) 0, 0 f m (, ) l 2 s ( ) f m (, ) 0 f m (, ) l 2 s ( ) 0, 0 37

f m (, ) l 2 s ( ) f m (, ) f m (, ) l 2 s ( 0 ) 0, 0 f m (, ) 0 0 f m (, ) 2 l 2 s( 2 ) 0, 0 f m (, ) 0 f m (, ) 2 l 2 s( ) 0, 0 f m (, ) f m (, ) 2 l 2 s( 0 ) 0, 0 and f m f m l 2 ( 2 ), f m 2 f m l 2 ( ) 0, as m. Using again (5.35), we have that the above conditions are implied by f m (, ) l 2 s ( ) f m (, ) 0 0 f m (, ) l 2 s ( 2 ) 0, (5.36) 0 f m (, ) l 2 s ( ) f m (, ) 0 f m (, ) l 2 s ( ) 0, (5.37) 0 f m (, ) l 2 s ( ) f m (, ) f m (, ) l 2 s ( 0 ) 0, (5.38) 0 f m (, ) 0 0 f m (, ) 2 l 2 s( 2 ) 0, (5.39) 0 f m (, ) 0 f m (, ) 2 l 2 s( ) 0, (5.40) 0 f m (, ) f m (, ) 2 l 2 s( 0 ) 0, (5.4) 0 and f m f m l 2 ( 2 ), f m 2 f m l 2 ( ) 0, (5.42) as m. Now by Lemma 2.4(3) of [4] we have f m 2 f m l 2 ( ) f m f m l 2 (N) 2, hence (5.42) is implied by Condition b) above. Next, by Lemma 2.4(2)-(3) in [4] we have f m (, ) 0 0 f m (, ) 2 l 2 s( 2 ) f m 2 f m l 2 (N) f m 2 l 2 (N) 2 0 f m f m l 2 (N) 2 f m 2 l 2 (N) 2, (5.43) 38