Independence of some multiple Poisson stochastic integrals with variable-sign kernels


Nicolas Privault
Division of Mathematical Sciences
School of Physical and Mathematical Sciences
Nanyang Technological University
SPMS-MAS, 21 Nanyang Link
Singapore 637371

October 26, 2016

Abstract

It is known that the multiple Poisson stochastic integrals $I_n(f_n)$, $I_m(g_m)$ of two symmetric constant-sign functions are independent if and only if $f_n$ and $g_m$ satisfy a disjoint support condition. In this paper we present some examples in which this property extends to some variable-sign functions.

Key words: Multiple stochastic integrals, Poisson process, independence.
Mathematics Subject Classification: 60G57, 60G30, 60G55, 60H07.

1 Introduction

The independence of two Gaussian random variables can be characterized by the vanishing of their covariance, and this property has been extended to multiple Wiener integrals in [9], [10], by stating that the multiple integrals $I_n(f_n)$, $I_m(g_m)$ with respect to Brownian motion are independent if and only if the contraction
$$\int_0^\infty f_n(x_1,\ldots,x_{n-1},z)\, g_m(y_1,\ldots,y_{m-1},z)\, dz = 0, \qquad x_1,\ldots,x_{n-1},\, y_1,\ldots,y_{m-1}\in\mathbb{R}_+,$$
vanishes, where $f_n\in L^2(\mathbb{R}_+)^{\circ n}$, $g_m\in L^2(\mathbb{R}_+)^{\circ m}$, and $L^2(\mathbb{R}_+)^{\circ n}$ denotes the subspace of $L^2(\mathbb{R}_+)^{\otimes n} = L^2(\mathbb{R}_+^n)$ made of symmetric functions.

A proof of necessity has been provided in [2] using standard probabilistic arguments, while the original proof of [9], [10] relied on the Malliavin calculus. In addition, $I_n(f_n)$ is independent of $I_m(g_m)$ if and only if
$$E\big[(I_n(f_n))^2 (I_m(g_m))^2\big] = E\big[(I_n(f_n))^2\big]\, E\big[(I_m(g_m))^2\big], \qquad (1.1)$$
cf. Corollary 5.2 of [7]. Independence criteria for multiple stochastic integrals with respect to other Gaussian processes have been obtained in [1].

Coming back to the case of single integrals $I_1(f)$ with respect to Brownian motion, when $X\simeq\mathcal{N}(0,s)$ and $Y\simeq\mathcal{N}(0,t-s)$ are two independent centered Gaussian random variables with equal variances $s = t-s$, it is well known that $X-Y\simeq I_1(f)$, with $f = 1_{[0,s]}-1_{[s,t]}$, is independent of $X+Y\simeq I_1(g)$, with $g = 1_{[0,s]}+1_{[s,t]}$, since
$$\langle f,g\rangle = \langle 1_{[0,s]}-1_{[s,t]},\ 1_{[0,s]}+1_{[s,t]}\rangle = s-(t-s) = 0.$$
The situation is completely different when $X\simeq\mathcal{P}(s)$ and $Y\simeq\mathcal{P}(t-s)$ are independent centered Poisson random variables with means $s$ and $t-s$, in which case $X-Y$ is no longer independent of $X+Y$. Although we may also write $X-Y\simeq I_1(f) = I_1(1_{[0,s]}-1_{[s,t]})$ and $X+Y\simeq I_1(g) = I_1(1_{[0,s]}+1_{[s,t]})$, where $I_1(f)$ is the compensated Poisson integral of $f$, independence can occur only when $fg = 0$, as follows from Proposition 3.1 below.
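This contrast can be checked numerically through the moment identity (1.1). The following Monte Carlo sketch is an illustration only (it is not part of the paper and assumes NumPy; the intensity lam and the sample size are arbitrary choices): for the Gaussian pair the gap $E[U^2V^2]-E[U^2]\,E[V^2]$ with $U = X-Y$, $V = X+Y$ vanishes, while in the centered Poisson case with common mean $\lambda$ one can check that it equals $2\lambda$.

```python
# Minimal Monte Carlo sketch (illustration only, not from the paper): compare the
# moment gap E[U^2 V^2] - E[U^2] E[V^2] of (1.1) for U = X - Y, V = X + Y in the
# Gaussian and in the centered Poisson case.  "lam" and "n_samples" are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
lam, n_samples = 2.0, 10**6

def moment_gap(u, v):
    """Estimate E[u^2 v^2] - E[u^2] E[v^2]; it vanishes when u and v are independent."""
    return np.mean(u**2 * v**2) - np.mean(u**2) * np.mean(v**2)

# Gaussian case: X, Y ~ N(0, lam), independent, with equal variances.
x = rng.normal(0.0, np.sqrt(lam), n_samples)
y = rng.normal(0.0, np.sqrt(lam), n_samples)
print("Gaussian gap:", moment_gap(x - y, x + y))   # approximately 0

# Centered Poisson case: X, Y ~ Poisson(lam) - lam, independent.
x = rng.poisson(lam, n_samples) - lam
y = rng.poisson(lam, n_samples) - lam
print("Poisson gap:", moment_gap(x - y, x + y))    # approximately 2 * lam, not 0
```

In the Poisson case $X-Y$ and $X+Y$ remain uncorrelated, but the fourth-moment identity (1.1), and hence independence, fails.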

When $I_n(f_n)$ and $I_m(g_m)$ are multiple Poisson stochastic integrals on $\mathbb{R}_+$, the condition
$$f_n\star_1^0 g_m(x_1,\ldots,x_n,y_1,\ldots,y_{m-1}) := f_n(x_1,\ldots,x_n)\, g_m(y_1,\ldots,y_{m-1},x_n) = 0, \qquad (1.2)$$
$x_1,\ldots,x_n,\, y_1,\ldots,y_{m-1}\in\mathbb{R}_+$, is sufficient for the independence of $I_n(f_n)$ and $I_m(g_m)$ on the Poisson space, due to the independence of increments of the Poisson process, since by symmetry of these functions the supports of $f_n$ and $g_m$ can be respectively contained in two Borel sets of the form $A^n\subset\mathbb{R}_+^n$ and $B^m\subset\mathbb{R}_+^m$ with $A\cap B = \emptyset$, cf. [5].

On the other hand, it has been claimed in [4], [5] that (1.2) is also a necessary condition for the independence of $I_n(f_n)$ and $I_m(g_m)$ on the Poisson space. A similar result was stated independently in [8]. However, that necessity claim was proved only when the functions $f_n$ and $g_m$ have constant signs. More precisely, the necessity condition therein is based on the relation (1.1) which, unlike on the Wiener space, turns out to be weaker than the independence of $I_n(f_n)$ and $I_m(g_m)$ when $f_n$ and $g_m$ have variable signs, as was shown in [7] by a counterexample for $n = 2$ and $m = 1$, cf. also Remark 3.4 below.

In this note we present some examples in which (1.2) holds as a necessary and sufficient condition for the independence of $I_n(f_n)$, $I_m(g_m)$ when $f_n$ and $g_m$ are allowed to have variable signs. Namely we treat the following cases:
- tensor powers: $f_n$ and $g_m$ take the form $f_n = f^{\otimes n}$ and $g_m = g^{\otimes m}$,
- integrals of first and multiple orders: $f_n\in L^2(\mathbb{R}_+)^{\circ n}$ and $g_1$ has constant sign,
- integrals of first and second orders: $f_2 = f\circ g$, without sign restrictions on $f_2$ and $g_1$,
for which we show that $f_n\star_1^0 g_m = 0$ is necessary and sufficient for independence. The question whether (1.2) is a necessary condition for independence is still open in full generality.

In Section 2 we recall the necessary conditions for independence obtained in [4], [5]. The examples of multiple integrals with variable-sign kernels for which the disjoint support condition is necessary and sufficient are given in Section 3. Finally, in Section 4 we consider conditional expectations of multiple stochastic integrals.

The results of this paper are stated for a standard Poisson process on the half line $\mathbb{R}_+$, however they can be extended without difficulty to Poisson measures on metric spaces.

2 Necessary condition for independence

We start by recalling some results of [4]. Given $f_n\in L^2(\mathbb{R}_+)^{\circ n}$ a symmetric square-integrable function of $n$ variables on $\mathbb{R}_+^n$, the multiple stochastic integral of $f_n$ with respect to the standard Poisson process $(N_t)_{t\in\mathbb{R}_+}$ is defined as
$$I_n(f_n) = n!\int_0^\infty\int_0^{t_n}\cdots\int_0^{t_2} f_n(t_1,\ldots,t_n)\, d(N_{t_1}-t_1)\cdots d(N_{t_n}-t_n). \qquad (2.1)$$
We will need the multiplication formula
$$I_n(f_n)\, I_m(g_m) = \sum_{s=0}^{2(n\wedge m)} I_{n+m-s}(h_{n,m,s}), \qquad (2.2)$$
cf. e.g. Proposition 4.5.6 of [6], provided
$$h_{n,m,s} := \sum_{s\leq 2k\leq 2(s\wedge n\wedge m)} k!\binom{n}{k}\binom{m}{k}\binom{k}{s-k}\, f_n\star_k^{s-k} g_m$$
is in $L^2(\mathbb{R}_+^{\,n+m-s})$, $0\leq s\leq 2(n\wedge m)$, where $f_n\,\tilde\star_k^l\, g_m$, $0\leq l\leq k$, denotes the symmetrization in $n+m-k-l$ variables of the function
$$f_n\star_k^l g_m(x_{l+1},\ldots,x_n,y_{k+1},\ldots,y_m) := \int_{\mathbb{R}_+^l} f_n(x_1,\ldots,x_n)\, g_m(x_1,\ldots,x_k,y_{k+1},\ldots,y_m)\, dx_1\cdots dx_l,$$
for $f_n\in L^2(\mathbb{R}_+)^{\circ n}$ and $g_m\in L^2(\mathbb{R}_+)^{\circ m}$ such that $f_n\star_k^l g_m\in L^2(\mathbb{R}_+^{\,n+m-k-l})$, $0\leq l\leq k\leq n\wedge m$.
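As a quick check on this notation, consider (2.2) with $n = m = 1$ and $f, g\in L^2(\mathbb{R}_+)$ (a worked special case, not an additional result): the only contractions involved are $f\star_0^0 g = f\otimes g$, $f\star_1^0 g = fg$ and $f\star_1^1 g = \langle f,g\rangle$, so that
$$I_1(f)\, I_1(g) = I_2(f\otimes g) + I_1(fg) + \langle f,g\rangle_{L^2(\mathbb{R}_+)}.$$
This is the classical product formula for single compensated Poisson integrals; the middle term $I_1(fg)$ is absent on the Wiener space, which explains why the orthogonality $\langle f,g\rangle = 0$ alone does not yield independence in the Poisson case.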

The following proposition shows that $f_n\,\tilde\star_1^0\, g_m = 0$ is a necessary condition for the independence of $I_n(f_n)$ and $I_m(g_m)$.

Proposition 2.1 Let $f_n\in L^2(\mathbb{R}_+)^{\circ n}$ and $g_m\in L^2(\mathbb{R}_+)^{\circ m}$ such that
$$E\big[(I_n(f_n))^2 (I_m(g_m))^2\big] = E\big[(I_n(f_n))^2\big]\, E\big[(I_m(g_m))^2\big].$$
Then we have $f_n\,\tilde\star_1^0\, g_m = 0$.

Proof. ([4]) If $I_n(f_n)$ and $I_m(g_m)$ are independent, then $I_n(f_n)I_m(g_m)\in L^2(\Omega)$ and, using the isometry formula
$$E[I_n(f_n)\, I_m(g_m)] = n!\,\langle f_n, g_m\rangle_{L^2(\mathbb{R}_+)^{\circ n}}\, 1_{\{n=m\}}, \qquad (2.3)$$
for $f_n\in L^2(\mathbb{R}_+)^{\circ n}$, $g_m\in L^2(\mathbb{R}_+)^{\circ m}$, we get
$$(n+m)!\,\|f_n\,\tilde\otimes\, g_m\|^2_{L^2(\mathbb{R}_+)^{\otimes(n+m)}} \geq n!\,m!\,\|f_n\otimes g_m\|^2_{L^2(\mathbb{R}_+)^{\otimes(n+m)}} = n!\,m!\,\|f_n\|^2_{L^2(\mathbb{R}_+)^{\otimes n}}\,\|g_m\|^2_{L^2(\mathbb{R}_+)^{\otimes m}}$$
$$= E\big[I_n(f_n)^2\big]\, E\big[I_m(g_m)^2\big] = E\big[(I_n(f_n)\, I_m(g_m))^2\big] = \sum_{r=0}^{2(n\wedge m)} (n+m-r)!\,\|\tilde h_{n,m,r}\|^2_{L^2(\mathbb{R}_+)^{\otimes(n+m-r)}}$$
$$\geq (n+m)!\,\|\tilde h_{n,m,0}\|^2_{L^2(\mathbb{R}_+)^{\otimes(n+m)}} + (n+m-1)!\,\|\tilde h_{n,m,1}\|^2_{L^2(\mathbb{R}_+)^{\otimes(n+m-1)}}$$
$$\geq (n+m)!\,\|f_n\,\tilde\otimes\, g_m\|^2_{L^2(\mathbb{R}_+)^{\otimes(n+m)}} + nm\,(n+m-1)!\,\|f_n\,\tilde\star_1^0\, g_m\|^2_{L^2(\mathbb{R}_+)^{\otimes(n+m-1)}}$$
from (2.2), which implies $f_n\,\tilde\star_1^0\, g_m = 0$. □

It follows from Proposition 2.1 that if $I_n(f_n)$ and $I_m(g_m)$ are independent then we have $f_n\,\tilde\star_1^0\, g_m = 0$. However, $f_n\,\tilde\star_1^0\, g_m = 0$ is in general only a necessary and not a sufficient condition for $f_n\star_1^0 g_m$ to vanish, since
$$\langle f_n\,\tilde\star_1^0\, g_m,\ f_n\,\tilde\star_1^0\, g_m\rangle \leq \langle f_n\star_1^0 g_m,\ f_n\star_1^0 g_m\rangle,$$
which follows from the fact that $f_n\,\tilde\star_1^0\, g_m$ is the symmetrization of $f_n\star_1^0 g_m$ in $n+m-1$ variables.

A counterexample for which $f_n\,\tilde\star_1^0\, g_m = f_n\star_1^1 g_m = 0$ while $f_n\star_1^0 g_m\neq 0$ can be found in [7], Example 5.3, as
$$f_2(s,t) = 1_{\{st<0\}}\big(1_{[-1,-1/2]}(s) - 1_{[-1/2,1/2]}(s) + 1_{[1/2,1]}(s)\big)\big(1_{[-1,-1/2]}(t) - 1_{[-1/2,1/2]}(t) + 1_{[1/2,1]}(t)\big) \qquad (2.4)$$
and $g_1(t) = 1_{[-1,0]}(t) - 1_{(0,1]}(t)$. In this example the integrals $I_2(f_2)$ and $I_1(g_1)$ are not independent, however (1.1) holds, and therefore (1.1) is strictly weaker than (1.2) and cannot characterize the independence of $I_n(f_n)$ and $I_m(g_m)$.

In the case $n = 2$ and $m = 1$ we are able to show that (1.1) entails $f_2\star_1^1 g_1 = 0$ in addition to $f_2\,\tilde\star_1^0\, g_1 = 0$.

Proposition 2.2 Let $f_2\in L^2(\mathbb{R}_+)^{\circ 2}$ and $g_1\in L^2(\mathbb{R}_+)$ such that
$$E\big[(I_2(f_2))^2 (I_1(g_1))^2\big] = E\big[(I_2(f_2))^2\big]\, E\big[(I_1(g_1))^2\big].$$
Then we have $f_2\,\tilde\star_1^0\, g_1 = 0$ and $f_2\star_1^1 g_1 = 0$.

Proof. By the same argument as in the proof of Proposition 2.1 we find $\tilde h_{2,1,1} = 2\, f_2\,\tilde\star_1^0\, g_1 = 0$ and $h_{2,1,2} := 2\, f_2\star_1^1 g_1 = 0$. □

When $f_2 = f\circ g$ and $g_1 = h$, Proposition 2.2 yields
$$(f\circ g)\,\tilde\star_1^0\, h = \frac14\big(f\otimes(gh) + (gh)\otimes f + g\otimes(fh) + (fh)\otimes g\big) = 0 \qquad (2.5)$$
and
$$(f\circ g)\star_1^1 h = \frac12\big(f\,\langle g,h\rangle + g\,\langle f,h\rangle\big) = 0, \qquad (2.6)$$
hence
$$\langle g,h\rangle = \langle f,h\rangle = 0, \qquad (2.7)$$
provided $f$ and $g$ are nonzero.
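For concreteness, one can verify directly (this is a straightforward worked check, not an additional result) that Example (2.4) satisfies the two necessary conditions of Proposition 2.2 while violating (1.2). Writing $\varphi = 1_{[-1,-1/2]} - 1_{[-1/2,1/2]} + 1_{[1/2,1]}$, so that $f_2(s,t) = 1_{\{st<0\}}\,\varphi(s)\,\varphi(t)$, one has $g_1(s)+g_1(t) = 0$ whenever $st<0$, hence
$$f_2\,\tilde\star_1^0\, g_1(s,t) = \frac12\, f_2(s,t)\,\big(g_1(s)+g_1(t)\big) = 0,$$
while
$$f_2\star_1^1 g_1(s) = \varphi(s)\int_{-1}^1 1_{\{st<0\}}\,\varphi(t)\, g_1(t)\, dt = 0$$
since $\int_{-1}^0\varphi(t)\,dt = \int_0^1\varphi(t)\,dt = 0$. Nevertheless $f_2\star_1^0 g_1(s,t) = f_2(s,t)\, g_1(s)$ does not vanish identically, so the disjoint support condition (1.2) fails although (1.1) holds.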

3 Variable-sign kernels

In this section we consider examples of kernels $f_n\in L^2(\mathbb{R}_+)^{\circ n}$, $g_m\in L^2(\mathbb{R}_+)^{\circ m}$ for which
$$f_n\,\tilde\star_1^0\, g_m = 0 \ \Longrightarrow\ f_n\star_1^0 g_m = 0,$$
in which case the condition
$$f_n\star_1^0 g_m = f_n(x_1,\ldots,x_{n-1},z)\, g_m(y_1,\ldots,y_{m-1},z) = 0, \qquad x_1,\ldots,x_{n-1},\, y_1,\ldots,y_{m-1},\, z\in\mathbb{R}_+,$$
becomes both necessary and sufficient for the independence of $I_n(f_n)$ and $I_m(g_m)$, and is equivalent to (1.1), which also becomes necessary and sufficient. We note that
$$\langle f_n\,\tilde\star_1^0\, g_m,\ f_n\,\tilde\star_1^0\, g_m\rangle = \frac{1}{(n+m-1)!}\sum_{\sigma\in\Sigma_{n+m-1}}\int_{\mathbb{R}_+^{n+m-1}} (f_n\star_1^0 g_m)(x_1,\ldots,x_{n+m-1})\,(f_n\star_1^0 g_m)(x_{\sigma_1},\ldots,x_{\sigma_{n+m-1}})\, dx_1\cdots dx_{n+m-1} \qquad (3.1)$$
$$= \frac{(n-1)!\,(m-1)!}{(n+m-1)!}\,\langle f_n\star_1^0 g_m,\ f_n\star_1^0 g_m\rangle + \frac{1}{(n+m-1)!}\sum_{\sigma\in\Theta_{n,m}}\int_{\mathbb{R}_+^{n+m-1}} (f_n\star_1^0 g_m)(x_1,\ldots,x_{n+m-1})\,(f_n\star_1^0 g_m)(x_{\sigma_1},\ldots,x_{\sigma_{n+m-1}})\, dx_1\cdots dx_{n+m-1},$$
where
$$\Theta_{n,m} = \big\{\sigma\in\Sigma_{n+m-1}\ :\ \sigma(\{1,\ldots,n-1\})\neq\{1,\ldots,n-1\}\ \text{or}\ \sigma(\{n+1,\ldots,n+m-1\})\neq\{n+1,\ldots,n+m-1\}\big\},$$
and our method of proof will rely on this decomposition to show that $f_n\star_1^0 g_m = 0$.
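For instance, in the case $n = 2$, $m = 1$ used repeatedly below (a worked special case of (3.1)), $\Sigma_2$ contains only the identity and the transposition, and $\Theta_{2,1}$ reduces to the transposition, so that (3.1) reads
$$\langle f_2\,\tilde\star_1^0\, g_1,\ f_2\,\tilde\star_1^0\, g_1\rangle = \frac12\,\langle f_2\star_1^0 g_1,\ f_2\star_1^0 g_1\rangle + \frac12\int_{\mathbb{R}_+^2} (f_2\star_1^0 g_1)(x_1,x_2)\,(f_2\star_1^0 g_1)(x_2,x_1)\, dx_1\, dx_2,$$
where, by the symmetry of $f_2$, the exchanged term equals $\langle f_2\star_2^0 f_2,\ g_1\otimes g_1\rangle$; this recovers the case $n = 2$ of formula (3.2) below.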

Tensor powers

In the next proposition we obtain a necessary and sufficient condition for the independence of integrals of (symmetric) tensor powers.

Proposition 3.1 Let $f, g\in L^2(\mathbb{R}_+)$. Then $I_n(f^{\otimes n})$ is independent of $I_m(g^{\otimes m})$ if and only if $f^{\otimes n}\star_1^0 g^{\otimes m} = 0$, which is equivalent to $fg = 0$.

Proof. We have $f^{\otimes n}\,\tilde\star_1^0\, g^{\otimes m} = \widetilde{(fg)\otimes f^{\otimes(n-1)}\otimes g^{\otimes(m-1)}}$, hence $f^{\otimes n}\,\tilde\star_1^0\, g^{\otimes m} = 0$ is equivalent to $fg = 0$ and to
$$f^{\otimes n}\star_1^0 g^{\otimes m} = f^{\otimes(n-1)}\otimes(fg)\otimes g^{\otimes(m-1)} = 0. \qquad\square$$

In particular, $I_1(f)$ is independent of $I_1(g)$ if and only if $fg = 0$, for all $f, g\in L^2(\mathbb{R}_+)$.

Integrals of first and multiple orders

Here we consider the case of $I_n(f_n)$ and $I_1(g_1)$ when only $g_1$ has constant sign.

Proposition 3.2 Let $f_n\in L^2(\mathbb{R}_+)^{\circ n}$, and assume that $g_1\in L^2(\mathbb{R}_+)$ has constant sign. Then $I_n(f_n)$ is independent of $I_1(g_1)$ if and only if $f_n\star_1^0 g_1 = 0$, i.e.
$$f_n(x_1,\ldots,x_n)\, g_1(x_1) = 0, \qquad x_1,\ldots,x_n\in\mathbb{R}_+.$$

Proof. We have
$$f_n\,\tilde\star_1^0\, g_1(x_1,\ldots,x_n) = \frac1n\, f_n(x_1,\ldots,x_n)\sum_{k=1}^n g_1(x_k),$$
and, as in (3.1),
$$\langle f_n\,\tilde\star_1^0\, g_1,\ f_n\,\tilde\star_1^0\, g_1\rangle = \frac1{n^2}\int_{\mathbb{R}_+^n} f_n^2(x_1,\ldots,x_n)\sum_{i=1}^n\sum_{j=1}^n g_1(x_i)\, g_1(x_j)\, dx_1\cdots dx_n$$
$$= \frac1n\int_{\mathbb{R}_+^n} f_n^2(x_1,\ldots,x_n)\, g_1(x_1)^2\, dx_1\cdots dx_n + \frac{n-1}{n}\int_{\mathbb{R}_+^n} f_n^2(x_1,\ldots,x_n)\, g_1(x_1)\, g_1(x_2)\, dx_1\cdots dx_n$$
$$= \frac1n\,\langle f_n\star_1^0 g_1,\ f_n\star_1^0 g_1\rangle + \frac{n-1}{n}\,\langle f_n\star_n^{n-2} f_n,\ g_1\otimes g_1\rangle, \qquad (3.2)$$
hence $f_n\,\tilde\star_1^0\, g_1 = 0$ implies $f_n\star_1^0 g_1 = 0$, since $f_n\star_n^{n-2} f_n\geq 0$ and $g_1\otimes g_1\geq 0$ because $g_1$ has constant sign. □
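The tensor-power case can also be probed numerically. The sketch below is an illustration only (not part of the paper); it assumes NumPy and uses the identity $I_2(1_A^{\otimes 2}) = I_1(1_A)^2 - I_1(1_A) - |A|$, which follows from (2.2), to simulate a second-order integral from Poisson counts. With $f = 1_A$ and $g = 1_B$, the moment gap of (1.1) between $I_2(f^{\otimes 2})$ and $I_1(g)$ vanishes when $A\cap B = \emptyset$ (so that $fg = 0$), and not when $B = A$; the measures of $A$, $B$ and the sample size are arbitrary choices.

```python
# Numerical illustration of Proposition 3.1 with n = 2, m = 1 (not from the paper):
# I_2(1_A ⊗ 1_A) and I_1(1_B) are independent when A and B are disjoint (fg = 0),
# and dependent when B = A.  Assumes NumPy.
import numpy as np

rng = np.random.default_rng(1)
a = b = 1.0                       # Lebesgue measures |A| and |B| (arbitrary)
n_samples = 10**6

N_A = rng.poisson(a, n_samples)   # number of Poisson points falling in A
N_B = rng.poisson(b, n_samples)   # number of Poisson points falling in B, disjoint from A

I1_A = N_A - a                    # I_1(1_A)
I1_B = N_B - b                    # I_1(1_B)
I2_AA = I1_A**2 - I1_A - a        # I_2(1_A ⊗ 1_A), via the product formula (2.2)

def moment_gap(u, v):
    """E[u^2 v^2] - E[u^2] E[v^2]; zero when u and v are independent."""
    return np.mean(u**2 * v**2) - np.mean(u**2) * np.mean(v**2)

print("disjoint supports:", moment_gap(I2_AA, I1_B))   # approximately 0
print("equal supports:   ", moment_gap(I2_AA, I1_A))   # clearly nonzero
```

Since the kernels in this sketch have constant sign, (1.1) is equivalent to independence here, so the moment gap is a legitimate proxy.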

Integrals of first and second orders

In the next result the necessary and sufficient condition for independence of multiple stochastic integrals is obtained without any sign assumption on the integrands, in the first and second order case. In this section we work in the tensor case $f_2 = f\circ g$, which does not include Example (2.4); see (3.5) below for an example in the tensor case.

Proposition 3.3 Let $f, g, h\in L^2(\mathbb{R}_+)$. The double Poisson stochastic integral
$$I_2(f\circ g) = 2\int_0^\infty\int_0^t (f\circ g)(s,t)\, d(N_s-s)\, d(N_t-t)$$
is independent of the single integral
$$I_1(h) = \int_0^\infty h(t)\, d(N_t-t)$$
if and only if $(f\circ g)\star_1^0 h = 0$, which is equivalent to $fh = gh = 0$.

Proof. We assume for simplicity that $\langle f,f\rangle = \langle g,g\rangle = 1$. If $I_2(f\circ g)$ is independent of $I_1(h)$, Proposition 2.1 shows that $(f\circ g)\,\tilde\star_1^0\, h = 0$, and the conclusion follows from Lemma 3.5 in case $\langle f^2,h\rangle\,\langle g^2,h\rangle\geq 0$. Next, if
$$\langle f^2,h\rangle\,\langle g^2,h\rangle < 0,$$
Lemma 3.6 below shows that
$$h = \lambda\, 1_{\{f\neq 0\}} - \lambda\, 1_{\{g\neq 0\}}, \qquad (3.3)$$
where
$$\lambda = \langle f^2,h^2\rangle^{1/2} = \langle g^2,h^2\rangle^{1/2}. \qquad (3.4)$$
In addition we have $fg = 0$ by Lemma 3.6, and by Proposition 2.2, (2.6) and (3.3) we get
$$\int_0^\infty f(x)\, dx = \int_0^\infty g(x)\, dx = 0.$$

Hence, letting $\alpha_- = \int_0^\infty 1_{\{g(x)\neq 0\}}\, dx$ and $\alpha_+ = \int_0^\infty 1_{\{f(x)\neq 0\}}\, dx$, which are both finite since $h\in L^2(\mathbb{R}_+)$, we obtain, using that $I_2(f\circ g) = I_1(f)\, I_1(g)$ by (2.2) since $fg = 0$ and $\langle f,g\rangle = 0$,
$$E\big[(I_2(f\circ g))^2\, 1_{\{I_1(h)=\lambda n\}}\big] = E\big[I_1(f)^2\, I_1(g)^2\, 1_{\{I_1(h)=\lambda n\}}\big]$$
$$= e^{-\alpha_--\alpha_+}\sum_{k=0\vee(-n)}^{\infty}\frac{1}{(2k+n)!}\int_{\mathbb{R}_+^k}\Big(\sum_{i=1}^{k}f(x_i)\Big)^{2}dx_1\cdots dx_k\int_{\mathbb{R}_+^{k+n}}\Big(\sum_{i=1}^{n+k}g(y_i)\Big)^{2}dy_1\cdots dy_{k+n}$$
$$= e^{-\alpha_--\alpha_+}\sum_{k=0\vee(-n)}^{\infty}\frac{1}{(2k+n)!}\int_{\mathbb{R}_+^k}\sum_{i=1}^{k}f^{2}(x_i)\, dx_1\cdots dx_k\int_{\mathbb{R}_+^{k+n}}\sum_{i=1}^{n+k}g^{2}(y_i)\, dy_1\cdots dy_{k+n}$$
$$= e^{-\alpha_--\alpha_+}\sum_{k=0\vee(-n)}^{\infty}\frac{k(k+n)}{(2k+n)!}.$$
On the other hand, being the difference of two Poisson random variables, $\lambda^{-1}I_1(h)$ has the Skellam distribution
$$P(I_1(h)=\lambda n) = e^{-\alpha_--\alpha_+}\sum_{k=0\vee(-n)}^{\infty}\frac{\alpha_+^{n+k}\,\alpha_-^{k}}{k!\,(n+k)!} = e^{-\alpha_--\alpha_+}\Big(\frac{\alpha_+}{\alpha_-}\Big)^{n/2}I_n\big(2\sqrt{\alpha_+\alpha_-}\big), \qquad n\in\mathbb{Z},$$
where
$$I_n(x) = \sum_{k=0}^{\infty}\frac{(x/2)^{n+2k}}{k!\,(n+k)!}, \qquad x>0,$$
is the modified Bessel function of the first kind with parameter $n$. It follows that
$$E\big[(I_2(f\circ g))^2\ \big|\ I_1(h)=\lambda n\big] = \frac{\displaystyle\sum_{k=0\vee(-n)}^{\infty}\frac{k(k+n)}{(2k+n)!}}{\Big(\dfrac{\alpha_+}{\alpha_-}\Big)^{n/2}I_n\big(2\sqrt{\alpha_+\alpha_-}\big)},$$
which cannot be constant in $n$ because the Bessel function $I_n(x)$ is not separable in its variables $x$ and $n$. Therefore the independence of $I_2(f\circ g)$ with $I_1(h)$ imposes $\lambda = 0$, which concludes the proof by contradiction with (3.4). □
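The mechanism of the above proof can be seen numerically in the simplest tensor situation $f = 1_A$, $g = 1_B$, $h = 1_A - 1_B$ with disjoint $A$ and $B$, which is the setting of (3.5) below: here $I_2(f\circ g) = I_1(1_A)\, I_1(1_B)$ by (2.2), since $1_A 1_B = 0$ and $\langle 1_A, 1_B\rangle = 0$, and conditionally on the value of $I_1(h)$ its second moment is not constant. The sketch below is an illustration only, not part of the paper; it assumes NumPy, and the measures of $A$ and $B$ and the sample size are arbitrary choices.

```python
# Illustration (not from the paper) of the non-independence mechanism in the proof
# above, in the disjoint-support case f = 1_A, g = 1_B, h = 1_A - 1_B of (3.5):
# the conditional second moment of I_2(f∘g) = I_1(1_A) I_1(1_B) given I_1(h)
# depends on the conditioning value.  Assumes NumPy; a, b, n_samples are arbitrary.
import numpy as np

rng = np.random.default_rng(2)
a, b, n_samples = 2.0, 2.0, 10**6

N_A = rng.poisson(a, n_samples)
N_B = rng.poisson(b, n_samples)
I2 = (N_A - a) * (N_B - b)        # I_2(1_A ∘ 1_B), since A and B are disjoint
diff = N_A - N_B                  # I_1(h) up to the constant shift a - b

for d in (-2, 0, 2):
    mask = (diff == d)
    print(f"N_A - N_B = {d:+d}:  E[I_2^2 | .] ≈ {np.mean(I2[mask]**2):.3f}")
```

The three conditional estimates differ, which is precisely the failure of independence exploited at the end of the proof above.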

From the above proof we note again that, although the independence of $I_2(f\circ g)$ and $I_1(h)$ is equivalent to $(f\circ g)\star_1^0 h = 0$, in general the statement $(f\circ g)\,\tilde\star_1^0\, h = 0$ does not imply $(f\circ g)\star_1^0 h = 0$, as shown by the following example: $A, B\subset\mathbb{R}_+$ with $A\cap B = \emptyset$, where we have
$$f = 1_A, \qquad g = 1_B, \qquad h = 1_A - 1_B, \qquad (3.5)$$
$$(f\circ g)\star_1^0 h = (1_A\circ 1_B)\star_1^0(1_A-1_B) = \frac12\big(1_B\otimes 1_A - 1_A\otimes 1_B\big),$$
and $(f\circ g)\,\tilde\star_1^0\, h = 0$, while $(f\circ g)\star_1^0 h\neq 0$, i.e. (1.2) does not hold. In this case the independence of $I_2(f\circ g)$ and $I_1(h)$ does not hold, as in Example (2.4) above.

Remark 3.4 Unlike in the Wiener case, Condition (1.1) is in general not sufficient for the independence of $I_n(f_n)$ and $I_m(g_m)$, as shown by the example (2.4). Therefore the search for a necessary and sufficient condition for independence in the general case has to involve more complex criteria than (1.1).

The next lemma has been used in the proof of Proposition 3.3.

Lemma 3.5 Let $f, g, h\in L^2(\mathbb{R}_+)$ such that $(f\circ g)\,\tilde\star_1^0\, h = 0$ and $\langle f^2,h\rangle\,\langle g^2,h\rangle\geq 0$. Then we have $(f\circ g)\star_1^0 h = 0$, and $fh = gh = 0$.

Proof. For simplicity and without loss of generality, we assume that $\langle f,f\rangle = \langle g,g\rangle = 1$. By Proposition 2.2 we have $(f\circ g)\,\tilde\star_1^0\, h = 0$, and from (3.2) or (2.5) we find
$$0 = \langle (f\circ g)\,\tilde\star_1^0\, h,\ (f\circ g)\,\tilde\star_1^0\, h\rangle = \frac12\,\langle (f\circ g)\star_1^0 h,\ (f\circ g)\star_1^0 h\rangle + \frac12\,\langle (f\circ g)\star_2^0 (f\circ g),\ h\otimes h\rangle$$
$$= \frac12\,\langle (f\circ g)\star_1^0 h,\ (f\circ g)\star_1^0 h\rangle + \frac14\,\langle fg,h\rangle^2 + \frac14\,\langle f^2,h\rangle\,\langle g^2,h\rangle.$$
This shows that $(f\circ g)\star_1^0 h = 0$, provided $\langle f^2,h\rangle\,\langle g^2,h\rangle\geq 0$, and we conclude by Lemma 3.7 below. □

The next lemma has also been used in the proof of Proposition 3.3.

Lemma 3.6 Let $f, g, h\in L^2(\mathbb{R}_+)$ such that $(f\circ g)\,\tilde\star_1^0\, h = 0$ and
$$\langle g^2,h\rangle \neq \langle f^2,h\rangle. \qquad (3.6)$$
Then we have $fg = 0$ and
$$h = \lambda\, 1_{\{f\neq 0\}} - \lambda\, 1_{\{g\neq 0\}}$$
for some $\lambda$ such that
$$\lambda^2 = \frac{\langle f^2,h^2\rangle}{\langle f,f\rangle} = \frac{\langle g^2,h^2\rangle}{\langle g,g\rangle}.$$
In particular it holds that $\langle g^2,h\rangle\,\langle f^2,h\rangle < 0$.

Proof. For simplicity of exposition we assume that $\langle f,f\rangle = \langle g,g\rangle = 1$. The condition $(f\circ g)\,\tilde\star_1^0\, h = 0$ implies
$$\langle (f\circ g)\,\tilde\star_1^0\, h,\ f\otimes f\rangle = \frac12\big(\langle f,gh\rangle + \langle f^2,h\rangle\,\langle f,g\rangle\big) = 0,$$
$$\langle (f\circ g)\,\tilde\star_1^0\, h,\ g\otimes g\rangle = \frac12\big(\langle f,gh\rangle + \langle g^2,h\rangle\,\langle f,g\rangle\big) = 0,$$
and since $\langle g^2,h\rangle\neq\langle f^2,h\rangle$ this gives $\langle f,g\rangle = \langle f,gh\rangle = 0$, hence
$$2\,\langle (f\circ g)\,\tilde\star_1^0\, h,\ (f\circ g)\,\tilde\star_1^0\, h\rangle = \frac12\,\langle f^2,h\rangle\,\langle g^2,h\rangle + \langle (f\circ g)\star_1^0 h,\ (f\circ g)\star_1^0 h\rangle$$
$$= \frac12\,\langle f^2,h\rangle\,\langle g^2,h\rangle + \frac14\big(\langle f,f\rangle\,\langle g^2,h^2\rangle + \langle g,g\rangle\,\langle f^2,h^2\rangle\big)$$
$$\geq -\frac12\,\langle f^2,h^2\rangle^{1/2}\,\langle g^2,h^2\rangle^{1/2} + \frac14\big(\langle g^2,h^2\rangle + \langle f^2,h^2\rangle\big) \qquad (3.7)$$
$$= \frac14\big(\langle f^2,h^2\rangle^{1/2} - \langle g^2,h^2\rangle^{1/2}\big)^2,$$
which shows that $\langle (f\circ g)\,\tilde\star_1^0\, h,\ (f\circ g)\,\tilde\star_1^0\, h\rangle = 0$ implies $\langle f^2,h^2\rangle = \langle g^2,h^2\rangle$, and, by the equality in (3.7), $fh = \lambda f$ and $gh = -\lambda g$ for some $\lambda\in\mathbb{R}$ such that
$$\lambda = \langle f^2,h^2\rangle^{1/2} = \langle g^2,h^2\rangle^{1/2}.$$
Hence we have
$$h = \lambda\, 1_{\{f\neq 0\}} - \lambda\, 1_{\{g\neq 0\}} \qquad (3.8)$$
and
$$-\lambda^2\, fg = h^2\, fg,$$
which imply that $fg = 0$ a.e. on $\{h^2\neq 0\}$, while $\{h^2 = 0\} = \{f = h = 0\}\cup\{fg\neq 0\}$, hence $fg = 0$. Finally we note that $\lambda\neq 0$ by (3.6) and (3.8). □

The next lemma has been used in the proof of Lemma 3.5.

Lemma 3.7 For any $f, g, h\in L^2(\mathbb{R}_+)$, the condition $(f\circ g)\star_1^0 h = 0$ is equivalent to $fh = gh = 0$.

Proof. Assuming again that $\langle f,f\rangle = \langle g,g\rangle = 1$, we note that since
$$(f\circ g)\star_1^0 h = \frac12\big(f\otimes(gh) + g\otimes(fh)\big), \qquad (3.9)$$
we have
$$\langle (f\circ g)\star_1^0 h,\ (f\circ g)\star_1^0 h\rangle = \frac14\big(\langle g^2,h^2\rangle + \langle f^2,h^2\rangle + 2\,\langle f,g\rangle\,\langle fg,h^2\rangle\big)$$
$$\geq \frac14\big(\langle g^2,h^2\rangle + \langle f^2,h^2\rangle - 2\,|\langle fg,h^2\rangle|\big) \qquad (3.10)$$
$$\geq \frac14\big(\langle g^2,h^2\rangle + \langle f^2,h^2\rangle - 2\,\langle g^2,h^2\rangle^{1/2}\,\langle f^2,h^2\rangle^{1/2}\big)$$
$$= \frac14\big(\langle f^2,h^2\rangle^{1/2} - \langle g^2,h^2\rangle^{1/2}\big)^2.$$
Assuming that $(f\circ g)\star_1^0 h = 0$, if $\langle fg,h^2\rangle\neq 0$ then the equality in (3.10) implies $f = g$ and $fh = gh = 0$ by (3.9). On the other hand, if $\langle fg,h^2\rangle = 0$ we have
$$0 = \langle (f\circ g)\star_1^0 h,\ (f\circ g)\star_1^0 h\rangle = \frac14\big(\langle g^2,h^2\rangle + \langle f^2,h^2\rangle\big),$$
hence $fh = gh = 0$. This shows that $fh = gh = 0$ in all cases, and $(f\circ g)\star_1^0 h = 0$ is equivalent to $fh = gh = 0$. □

4 Conditional expectations

In this section we present some complements on conditional expectations. Next is an adaptation of Proposition 3 in [3] to the Poisson case.

Proposition 4.1 Let $f_n\in L^2(\mathbb{R}_+)^{\circ n}$, $n\geq 1$, and let $A$ be a Borel set of $\mathbb{R}_+$ with finite measure. We have
$$E\big[I_n(f_n)\ \big|\ I_n(1_A^{\otimes n})\big] = E\big[I_n(f_n)\ \big|\ I_1(1_A)\big] = \frac{\langle f_n,\, 1_A^{\otimes n}\rangle}{\langle 1_A, 1_A\rangle^{n}}\, I_n(1_A^{\otimes n}),$$
which is a polynomial in $I_1(1_A)$.

Proof. For any $k\geq 1$, by (2.2) we have
$$I_1(1_A)^k = \sum_{l=0}^k \alpha_l\, I_l(1_A^{\otimes l}), \qquad (4.1)$$
where $\alpha_0,\ldots,\alpha_k\in(0,\infty)$, hence
$$E\big[I_n(f_n)\,(I_1(1_A))^k\big] = \alpha_n\, 1_{\{k\geq n\}}\, n!\,\langle f_n,\, 1_A^{\otimes n}\rangle = \frac{\langle f_n,\, 1_A^{\otimes n}\rangle}{\langle 1_A, 1_A\rangle^{n}}\, E\big[(I_1(1_A))^k\, I_n(1_A^{\otimes n})\big],$$
which shows that
$$E\big[I_n(f_n)\ \big|\ I_1(1_A)\big] = \frac{\langle f_n,\, 1_A^{\otimes n}\rangle}{\langle 1_A, 1_A\rangle^{n}}\, I_n(1_A^{\otimes n}).$$
By (2.2) we also have
$$I_n(1_A^{\otimes n}) = \sum_{l=0}^n \beta_l\, I_l(1_A^{\otimes l}),$$
where $\beta_0,\ldots,\beta_n\in\mathbb{R}$, which implies $\sigma(I_n(1_A^{\otimes n}))\subset\sigma(I_1(1_A))$, and
$$E\big[I_n(f_n)\ \big|\ I_n(1_A^{\otimes n})\big] = E\Big[E\big[I_n(f_n)\ \big|\ I_1(1_A)\big]\ \Big|\ I_n(1_A^{\otimes n})\Big] = \frac{\langle f_n,\, 1_A^{\otimes n}\rangle}{\langle 1_A, 1_A\rangle^{n}}\, E\Big[E\big[I_n(1_A^{\otimes n})\ \big|\ I_1(1_A)\big]\ \Big|\ I_n(1_A^{\otimes n})\Big] = \frac{\langle f_n,\, 1_A^{\otimes n}\rangle}{\langle 1_A, 1_A\rangle^{n}}\, I_n(1_A^{\otimes n}). \qquad\square$$
When $n = 1$ the above result says that
$$E\big[I_1(f_1)\ \big|\ I_1(1_A)\big] = \frac{\langle f_1, 1_A\rangle}{\langle 1_A, 1_A\rangle}\, I_1(1_A), \qquad (4.2)$$
which follows by a direct orthogonal projection argument. In particular, when $X\simeq\mathcal{P}(s)$ and $Y\simeq\mathcal{P}(t-s)$ are independent centered Poisson random variables with means $s$ and $t-s$, so that $X\simeq I_1(1_{[0,s]})$ and $Y\simeq I_1(1_{[s,t]})$, taking $f_1 = 1_{[0,s]}$ and $A = [0,t]$, Relation (4.2) follows from the fact that $X+s$ has a binomial distribution with parameter $s/t$ given $X+Y+t$.
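As a quick illustration of (4.1) (a worked special case, not an additional result), take $k = 2$: by (2.2) with $f = g = 1_A$,
$$I_1(1_A)^2 = I_2(1_A^{\otimes 2}) + I_1(1_A) + |A|,$$
so that $\alpha_0 = |A|$, $\alpha_1 = \alpha_2 = 1$, and conversely
$$I_2(1_A^{\otimes 2}) = I_1(1_A)^2 - I_1(1_A) - |A|$$
is indeed a polynomial in $I_1(1_A)$, as used in the proof above.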

Finally we note that, contrary to the Wiener case and Proposition 2 of [3], the conditional expectation of an odd order integral given an even order integral is not zero in general; indeed, $E[I_3(f_3)\,(I_2(g_2))^2]$ does not vanish in general, by the multiplication formula (2.2).

References

[1] S. Chivoret and A. Amirdjanova. Product formula and independence criterion for multiple Huang-Cambanis integrals. Statist. Probab. Lett., 76(18):2037-2042, 2006.

[2] O. Kallenberg. On an independence criterion for multiple Wiener integrals. Ann. Probab., 19(2):483-485, 1991.

[3] D. Nualart, A.S. Üstünel, and M. Zakai. On the moments of a multiple Wiener-Itô integral and the space induced by the polynomials of the integral. Stochastics, 25(4):233-240, 1988.

[4] N. Privault. On the independence of multiple stochastic integrals with respect to a class of martingales. C. R. Acad. Sci. Paris Sér. I Math., 323:515-520, 1996.

[5] N. Privault. Independence of a class of multiple stochastic integrals. In R. Dalang, M. Dozzi, and F. Russo, editors, Seminar on Stochastic Analysis, Random Fields and Applications (Ascona, 1996), volume 45 of Progress in Probability, pages 249-259. Birkhäuser, Basel, 1999.

[6] N. Privault. Stochastic Analysis in Discrete and Continuous Settings, volume 1982 of Lecture Notes in Mathematics. Springer-Verlag, Berlin, 2009.

[7] J. Rosiński and G. Samorodnitsky. Product formula, tails and independence of multiple stable integrals. In Advances in Stochastic Inequalities (Atlanta, GA, 1997), volume 234 of Contemp. Math., pages 169-194. Amer. Math. Soc., Providence, RI, 1999.

[8] C. Tudor. A product formula for multiple Poisson-Itô integrals. Rev. Roumaine Math. Pures Appl., 42(3-4):339-345, 1997.

[9] A.S. Üstünel and M. Zakai. On independence and conditioning on Wiener space. Ann. Probab., 17(4):1441-1453, 1989.

[10] A.S. Üstünel and M. Zakai. On the structure of independence on Wiener space. J. Funct. Anal., 90(1):113-137, 1990.