Invariance of Poisson measures under random transformations


Nicolas Privault
Division of Mathematical Sciences
School of Physical and Mathematical Sciences
Nanyang Technological University
SPMS-MAS, 21 Nanyang Link
Singapore

February 10, 2011

Abstract

We prove that Poisson measures are invariant under (random) intensity preserving transformations whose finite difference gradient satisfies a cyclic vanishing condition. The proof relies on moment identities of independent interest for adapted and anticipating Poisson stochastic integrals, and is inspired by the method of Üstünel and Zakai (Probab. Theory Relat. Fields 103, 1995) on the Wiener space, although the corresponding algebra is more complex than in the Wiener case. The examples of application include transformations conditioned by random sets such as the convex hull of a Poisson random measure.

Key words: Poisson measures, random transformations, invariance, Skorohod integral, moment identities.

Mathematics Subject Classification: 60G57, 60G30, 60G55, 60H07, 28D05, 28C20, 37A05.

1 Introduction

Poisson random measures on metric spaces are known to be quasi-invariant under deterministic transformations satisfying suitable conditions, cf. e.g. [24], [20].

For Poisson processes on the real line this quasi-invariance property also holds under adapted transformations, cf. e.g. [4], [10]. The quasi-invariance of Poisson measures on the real line with respect to anticipative transformations has been studied in [13], and in the general case of metric spaces in [1]. In the Wiener case, random non-adapted transformations of Brownian motion have been considered by several authors using the Malliavin calculus, cf. [23] and references therein. On the other hand, the invariance property of the law of stochastic processes has important applications, for example to the construction of identically distributed samples of antithetic random variables that can be used for variance reduction in the Monte Carlo method, cf. e.g. Section 4.5 of [3]. Invariance results for the Wiener measure under quasi-nilpotent random isometries have been obtained in [22], [21] by means of the Malliavin calculus, based on the duality between gradient and divergence operators on the Wiener space. In comparison with invariance results, quasi-invariance in the anticipative case usually requires more smoothness on the considered transformation. Somewhat surprisingly, the invariance of Poisson measures under non-adapted transformations does not seem to have been the object of many studies to date.

The classical invariance theorem for Poisson measures states that given a deterministic transformation $\tau : X \to Y$ between measure spaces $(X, \sigma)$ and $(Y, \mu)$ sending $\sigma$ to $\mu$, the corresponding transformation on point processes maps the Poisson distribution $\pi_\sigma$ with intensity $\sigma(dx)$ on $X$ to the Poisson distribution $\pi_\mu$ with intensity $\mu(dy)$ on $Y$. As a simple deterministic example, in the case of Poisson jump times $(T_k)_{k \ge 1}$ on the half line $X = Y = \mathbb{R}_+$ with $\sigma(dx) = \mu(dx) = dx/x$, the homothetic transformation $\tau(x) = rx$ leaves $\pi_\sigma$ invariant for all fixed $r > 0$. However, the random transformation of the Poisson process jump times according to the mapping $\tau(x) = x/T_1$ does not yield a Poisson process, since the first jump time of the transformed point process is constantly equal to $1$.
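The two introductory examples above lend themselves to a quick numerical check. The following sketch (not part of the paper; the sampling window, seed and variable names are choices made here for illustration) samples a Poisson random measure with intensity $dx/x$, verifies that counts of the scaled points in a fixed window keep the expected Poisson mean and variance, and confirms that dividing standard Poisson jump times by $T_1$ pins the first transformed point at $1$.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_log_intensity(a, b):
    """Sample a Poisson random measure on [a, b] with intensity dx/x."""
    mass = np.log(b / a)                        # sigma([a, b]) = log(b/a)
    n = rng.poisson(mass)
    return a * (b / a) ** rng.random(n)         # inverse-CDF sampling of density 1/(x log(b/a))

# 1) Invariance under the homothety x -> r*x: counts of the scaled points in a
#    window [c, d] should remain Poisson with mean log(d/c).
r, c, d = 3.0, 1.0, 10.0
counts_orig, counts_scaled = [], []
for _ in range(20000):
    pts = sample_log_intensity(1e-3, 1e4)       # window chosen large enough to cover [c/r, d]
    counts_orig.append(np.sum((pts >= c) & (pts <= d)))
    counts_scaled.append(np.sum((r * pts >= c) & (r * pts <= d)))
print("mean/var, original:", np.mean(counts_orig), np.var(counts_orig))
print("mean/var, scaled:  ", np.mean(counts_scaled), np.var(counts_scaled))
print("target log(d/c):   ", np.log(d / c))

# 2) Non-invariance of tau(x) = x / T_1 for a standard Poisson process:
#    the first transformed jump time is identically equal to 1.
jumps = np.cumsum(rng.exponential(size=10))     # jump times of a rate-1 Poisson process
print("first transformed jump:", (jumps / jumps[0])[0])
```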

In this paper we obtain sufficient conditions for the invariance of random transformations $\tau : \Omega \times X \to Y$ of Poisson random measures on metric spaces $X$, $Y$. Here the almost sure isometry condition on $\mathbb{R}^d$ assumed in the Gaussian case will be replaced by a pointwise condition on the preservation of intensity measures, and the quasi-nilpotence hypothesis will be replaced by a cyclic condition on the finite difference gradient of the transformation, cf. Relation (3.7) below. In particular, this condition is satisfied by predictable transformations of Poisson measures, as noted in Example 1 of Section 4.

In the case of the Wiener space $W = C_0(\mathbb{R}_+; \mathbb{R}^d)$ one considers almost surely defined random isometries $R(\omega) : L^2(\mathbb{R}_+; \mathbb{R}^d) \to L^2(\mathbb{R}_+; \mathbb{R}^d)$, $\omega \in W$, given by
$$R(\omega) h(t) = U(\omega, t) h(t),$$
where $U(\omega, t) : \mathbb{R}^d \to \mathbb{R}^d$, $t \in \mathbb{R}_+$, is a random process of isometries of $\mathbb{R}^d$. The Gaussian character of the measure transformation induced by $R$ is then given by checking for the Gaussianity of the (anticipative) Wiener-Skorohod integral $\delta(Rh)$ of $Rh$, for all $h \in L^2(\mathbb{R}_+; \mathbb{R}^d)$. In the Poisson case we consider random isometries $R(\omega) : L^2_\mu(Y) \to L^2_\sigma(X)$ given by
$$R(\omega) h(x) = h(\tau(\omega, x)),$$
where $\tau(\omega, \cdot) : (X, \sigma) \to (Y, \mu)$ is a random transformation that maps $\sigma(dx)$ to $\mu(dy)$ for all $\omega \in \Omega$. Here, the Poisson character of the measure transformation induced by $R$ is obtained by showing that the Poisson-Skorohod integral $\delta_\sigma(Rh)$ of $Rh$ has the same distribution under $\pi_\sigma$ as the compensated Poisson stochastic integral $\delta_\mu(h)$ of $h$ under $\pi_\mu$, for all $h \in C_c(Y)$. For this we will use the Malliavin calculus under Poisson measures, which relies on a finite difference gradient $D$ and a divergence operator $\delta$ that extends the Poisson stochastic integral. Our results and proofs are to some extent inspired by the treatment of the Wiener case in [22], see [15] for a recent simplified proof on the Wiener space. However, the use of finite difference operators instead of derivation operators as in the continuous case makes the proofs and arguments more complex from an algebraic point of view.

As in the Wiener case, we will characterize probability measures via their moments. Recall that the moment $E_\lambda[Z^n]$ of order $n$ of a Poisson random variable $Z$ with intensity $\lambda$ can be written as
$$E_\lambda[Z^n] = T_n(\lambda),$$
where $T_n(\lambda)$ is the Touchard polynomial of order $n$, defined by $T_0(\lambda) = 1$ and the recurrence relation
$$T_{n+1}(\lambda) = \lambda \sum_{k=0}^{n} \binom{n}{k} T_k(\lambda), \qquad n \ge 0, \qquad (1.1)$$
also called the exponential polynomials, cf. e.g. [6]. Replacing the Touchard polynomial $T_n(\lambda)$ by its centered version $\tilde{T}_n(\lambda)$, defined by $\tilde{T}_0(\lambda) = 1$ and
$$\tilde{T}_{n+1}(\lambda) = \lambda \sum_{k=0}^{n-1} \binom{n}{k} \tilde{T}_k(\lambda), \qquad n \ge 0, \qquad (1.2)$$
yields the moments of the centered Poisson random variable with intensity $\lambda > 0$ as
$$\tilde{T}_n(\lambda) = E_\lambda\big[ (Z - \lambda)^n \big], \qquad n \ge 0.$$
Our characterization of Poisson measures will use recurrence relations similar to (1.2), cf. (2.12) below, and identities for the moments of compensated Poisson stochastic integrals which are another motivation for this paper, cf. Theorem 5.1 below.

The paper is organized as follows. The main results (Corollary 3.2 and Theorem 3.3) on the invariance of Poisson measures are stated in Section 3, after recalling the definition of the finite difference gradient $D$ and the Skorohod integral operator $\delta$ under Poisson measures in Section 2. Section 4 contains examples of transformations satisfying the required conditions, which include the classical adapted case and transformations acting inside the convex hull generated by Poisson random measures, given the positions of the extremal vertices. Section 5 contains the moment identities for Poisson stochastic integrals of all orders that are used in this paper, cf. Theorem 5.1. In Section 6 we prove the main results of Section 3, based on the lemmas on moment identities established in Section 5. In the appendix (Section 7) we prove some combinatorial results that are needed in the proofs. Some of the results of this paper have been presented in [14].
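As a side check (not part of the paper), the recurrences (1.1) and (1.2) can be evaluated directly and compared with the moments of a Poisson random variable; the function names below are ad hoc.

```python
from math import comb, exp

def touchard(n, lam):
    """T_0(lam), ..., T_n(lam) via the recurrence (1.1)."""
    T = [1.0]
    for m in range(n):
        T.append(lam * sum(comb(m, k) * T[k] for k in range(m + 1)))
    return T

def centered_touchard(n, lam):
    """Centered Touchard polynomials via the recurrence (1.2)."""
    T = [1.0]
    for m in range(n):
        T.append(lam * sum(comb(m, k) * T[k] for k in range(m)))   # sum stops at k = m - 1
    return T

def poisson_moment(n, lam, centered=False, kmax=80):
    """E[Z^n] or E[(Z - lam)^n] by direct summation of the Poisson(lam) pmf."""
    p, total = exp(-lam), 0.0
    for k in range(kmax):
        total += ((k - lam) if centered else k) ** n * p
        p *= lam / (k + 1)
    return total

lam, n = 2.5, 6
print(touchard(n, lam)[n], poisson_moment(n, lam))                  # ordinary moments agree
print(centered_touchard(n, lam)[n], poisson_moment(n, lam, True))   # central moments agree
```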

2 Poisson measures and finite difference operators

In this section we recall the construction of Poisson measures, finite difference operators and Poisson-Skorohod integrals, cf. e.g. [11] and [16], Chapter 6, for reviews. We also introduce some other operators that will be needed in the sequel, cf. Definition 2.5 below.

Let $X$ be a $\sigma$-compact metric space with Borel $\sigma$-algebra $\mathcal{B}(X)$ and a $\sigma$-finite diffuse measure $\sigma$. Let $\Omega$ denote the configuration space on $X$, i.e. the space of at most countable and locally finite subsets of $X$, defined as
$$\Omega = \big\{ \omega = (x_i)_{i=1}^{N} \subset X : \ x_i \neq x_j \ \text{for } i \neq j, \ N \in \mathbb{N} \cup \{\infty\} \big\}.$$
Each element $\omega$ of $\Omega$ is identified with the Radon point measure
$$\omega = \sum_{i=1}^{\omega(X)} \epsilon_{x_i},$$
where $\epsilon_x$ denotes the Dirac measure at $x \in X$ and $\omega(X) \in \mathbb{N} \cup \{\infty\}$ denotes the cardinality of $\omega$. The Poisson random measure $N(\omega, dx)$ is defined by
$$N(\omega, dx) = \omega(dx) = \sum_{k=1}^{\omega(X)} \epsilon_{x_k}(dx), \qquad \omega \in \Omega. \qquad (2.1)$$
The Poisson probability measure $\pi_\sigma$ can be characterized as the only probability measure on $\Omega$ under which, for all compact disjoint subsets $A_1, \dots, A_n$ of $X$, $n \ge 1$, the mapping $\omega \mapsto (\omega(A_1), \dots, \omega(A_n))$ is a vector of independent Poisson distributed random variables on $\mathbb{N}$ with respective intensities $\sigma(A_1), \dots, \sigma(A_n)$.
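A minimal simulation sketch of this characterization (all concrete choices here, such as $X = [0,1]^2$ with $\sigma$ a multiple of Lebesgue measure, the seed and the test sets, are assumptions made for illustration and are not from the paper): counts on disjoint sets should be independent Poisson random variables with parameters $\sigma(A_i)$.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_configuration(total_mass=4.0):
    """One Poisson configuration on [0,1]^2 with intensity total_mass * Lebesgue."""
    n = rng.poisson(total_mass)
    return rng.random((n, 2))                  # n i.i.d. uniform points

in_A1 = lambda p: p[:, 0] < 0.5                # left half:  sigma(A1) = 2.0
in_A2 = lambda p: p[:, 0] >= 0.5               # right half: sigma(A2) = 2.0

c1, c2 = [], []
for _ in range(50000):
    pts = sample_configuration()
    c1.append(int(np.sum(in_A1(pts))))
    c2.append(int(np.sum(in_A2(pts))))
c1, c2 = np.array(c1), np.array(c2)
print("means:     ", c1.mean(), c2.mean())     # both close to 2.0
print("variances: ", c1.var(), c2.var())       # both close to 2.0 (Poisson counts)
print("covariance:", np.cov(c1, c2)[0, 1])     # close to 0 (independence on disjoint sets)
```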

The Poisson measure $\pi_\sigma$ is also characterized by its Fourier transform
$$\psi_\sigma(f) = E_\sigma\Big[ \exp\Big( i \int_X f(x) \, (\omega(dx) - \sigma(dx)) \Big) \Big], \qquad f \in L^2_\sigma(X),$$
where $E_\sigma$ denotes expectation under $\pi_\sigma$, which satisfies
$$\psi_\sigma(f) = \exp\Big( \int_X \big( e^{i f(x)} - i f(x) - 1 \big) \, \sigma(dx) \Big), \qquad f \in L^2_\sigma(X), \qquad (2.2)$$
where the compensated Poisson stochastic integral $\int_X f(x) (\omega(dx) - \sigma(dx))$ is defined by the isometry
$$E_\sigma\Big[ \Big( \int_X f(x) \, (\omega(dx) - \sigma(dx)) \Big)^2 \Big] = \int_X f(x)^2 \, \sigma(dx), \qquad f \in L^2_\sigma(X). \qquad (2.3)$$

We refer to [8], [9], [12] for the following definition.

Definition 2.1 Let $D$ denote the finite difference gradient defined as
$$D_x F(\omega) = \varepsilon^+_x F(\omega) - F(\omega), \qquad \omega \in \Omega, \quad x \in X, \qquad (2.4)$$
for any random variable $F : \Omega \to \mathbb{R}$, where
$$\varepsilon^+_x F(\omega) = F(\omega \cup \{x\}), \qquad \omega \in \Omega, \quad x \in X.$$
The operator $D$ is continuous on the space $\mathbb{D}_{2,1}$ defined by the norm
$$\|F\|_{2,1}^2 = \|F\|_{L^2(\Omega, \pi_\sigma)}^2 + \|DF\|_{L^2(\Omega \times X, \pi_\sigma \otimes \sigma)}^2, \qquad F \in \mathbb{D}_{2,1}.$$

We refer to Corollary 1 of [12] for the following definition.

Definition 2.2 The Skorohod integral operator $\delta_\sigma$ is defined on any measurable process $u : \Omega \times X \to \mathbb{R}$ by the expression
$$\delta_\sigma(u) = \int_X u_t(\omega \setminus \{t\}) \, (\omega(dt) - \sigma(dt)), \qquad (2.5)$$
provided $E_\sigma\big[ \int_X |u(\omega, t)| \, \sigma(dt) \big] < \infty$.

Relation (2.5) between $\delta_\sigma$ and the Poisson stochastic integral will be used to characterize the distribution of the perturbed configuration points. Note that if $D_t u_t = 0$, $t \in X$, and in particular when applying (2.5) to a deterministic function $u \in L^1_\sigma(X)$, we have
$$\delta_\sigma(u) = \int_X u(t) \, (\omega(dt) - \sigma(dt)), \qquad (2.6)$$
i.e. $\delta_\sigma(u)$ coincides with the compensated Poisson-Stieltjes integral of $u$.
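A toy illustration of Definition 2.1 and of the isometry (2.3) for a deterministic integrand (a sketch under assumptions made here: $X = [0,1]$, $\sigma = \lambda \times$ Lebesgue measure and $f(x) = x$; none of these choices come from the paper).

```python
import numpy as np

rng = np.random.default_rng(2)
lam = 3.0                                       # sigma(dx) = lam * dx on [0, 1]

def sample_omega():
    return np.sort(rng.random(rng.poisson(lam)))

# Finite difference gradient D_x F(omega) = F(omega with a point added at x) - F(omega),
# for the counting functional F(omega) = omega([0, 1/2]); here D_x F = 1_{[0,1/2]}(x).
F = lambda omega: np.sum(omega <= 0.5)
omega = sample_omega()
for x in (0.2, 0.8):
    print("D_x F at x =", x, ":", F(np.append(omega, x)) - F(omega))

# Compensated Poisson integral of f(x) = x, cf. (2.6):
#   delta_sigma(f) = sum_{x in omega} f(x) - int_0^1 f(x) lam dx,
# with E[delta_sigma(f)] = 0 and E[delta_sigma(f)^2] = int_0^1 f(x)^2 lam dx = lam / 3.
vals = []
for _ in range(100000):
    omega = sample_omega()
    vals.append(np.sum(omega) - lam * 0.5)      # int_0^1 x lam dx = lam / 2
vals = np.array(vals)
print("mean:", vals.mean(), " second moment:", np.mean(vals ** 2), " target:", lam / 3)
```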

In addition, if $X = \mathbb{R}_+$ and $\sigma(dt) = \lambda_t \, dt$, we have
$$\delta_\sigma(u) = \int_0^\infty u_t \, (dN_t - \lambda_t \, dt) \qquad (2.7)$$
for all square-integrable predictable processes $(u_t)_{t \in \mathbb{R}_+}$, where $N_t = \omega([0, t])$, $t \in \mathbb{R}_+$, is a Poisson process with intensity $\lambda_t > 0$, cf. e.g. the Example page 518 of [12].

The next proposition can be obtained from Corollaries 1 and 5 in [12].

Proposition 2.3 The operators $D$ and $\delta_\sigma$ are closable and satisfy the duality relation
$$E_\sigma\big[ \langle DF, u \rangle_{L^2_\sigma(X)} \big] = E_\sigma\big[ F \, \delta_\sigma(u) \big] \qquad (2.8)$$
on their $L^2$ domains $\mathrm{Dom}(\delta_\sigma) \subset L^2(\Omega \times X, \pi_\sigma \otimes \sigma)$ and $\mathrm{Dom}(D) = \mathbb{D}_{2,1} \subset L^2(\Omega, \pi_\sigma)$, under the Poisson measure $\pi_\sigma$ with intensity $\sigma$. The operator $\delta_\sigma$ is continuous on the space $\mathbb{L}_{2,1} \subset \mathrm{Dom}(\delta_\sigma)$ defined by the norm
$$\|u\|_{2,1}^2 = E_\sigma\Big[ \int_X |u_t|^2 \, \sigma(dt) \Big] + E_\sigma\Big[ \int_X \int_X |D_s u_t|^2 \, \sigma(ds) \sigma(dt) \Big],$$
and for any $u \in \mathbb{L}_{2,1}$ we have the Skorohod isometry
$$E_\sigma\big[ \delta_\sigma(u)^2 \big] = E_\sigma\big[ \|u\|_{L^2_\sigma(X)}^2 \big] + E_\sigma\Big[ \int_X \int_X D_s u_t \, D_t u_s \, \sigma(ds) \sigma(dt) \Big], \qquad (2.9)$$
cf. Corollary 4 of [12].

In addition, from (2.5) we have the commutation relation
$$\varepsilon^+_t \delta_\sigma(u) = \delta_\sigma(\varepsilon^+_t u) + u_t, \qquad t \in X, \qquad (2.10)$$
provided $D_t u \in \mathbb{L}_{2,1}$, $t \in X$.
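The Skorohod isometry (2.9) can also be probed numerically on a genuinely anticipating integrand. The following Monte Carlo sketch uses a toy model chosen here (not from the paper): $X = [0,1]$, $\sigma = \lambda\,dx$, and $u_t(\omega) = t\,\omega(B)$ with $B = [0, 1/2]$, for which $\delta_\sigma(u)$ is computed directly from (2.5) and $D_s u_t = t\,\mathbf{1}_B(s)$, so that the correction term in (2.9) equals $\big(\int_B t\,\sigma(dt)\big)^2$.

```python
import numpy as np

rng = np.random.default_rng(3)
lam, B_right = 2.0, 0.5

def skorohod_integral(omega):
    """delta_sigma(u) for u_t(omega) = t * omega(B), computed pathwise from (2.5)."""
    nB = np.sum(omega <= B_right)                         # omega(B)
    pathwise = np.sum(omega * (nB - (omega <= B_right)))  # sum over points t of u_t(omega \ {t})
    compensator = nB * lam * 0.5                          # int_0^1 t * omega(B) * lam dt
    return pathwise - compensator

samples = []
for _ in range(200000):
    omega = rng.random(rng.poisson(lam))
    samples.append(skorohod_integral(omega))
samples = np.array(samples)

sigmaB = lam * B_right                                    # sigma(B) = 1
lhs = np.mean(samples ** 2)
rhs = (sigmaB + sigmaB ** 2) * lam / 3 + (lam * B_right ** 2 / 2) ** 2
print("E[delta(u)] (MC):  ", samples.mean())              # close to 0 by duality (2.8)
print("E[delta(u)^2] (MC):", lhs, "  right-hand side of (2.9):", rhs)
```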

The moment identities for Poisson stochastic integrals proved in this paper rely on the decomposition stated in the following lemma.

Lemma 2.4 Let $u \in \mathbb{L}_{2,1}$ be such that $\delta_\sigma(u)^n \in \mathbb{D}_{2,1}$, $D_t u \in \mathbb{L}_{2,1}$, $\sigma(dt)$-a.e., and
$$E_\sigma\Big[ \int_X |u_t|^{n-k+1} |\delta_\sigma(\varepsilon^+_t u)|^k \, \sigma(dt) \Big] < \infty, \qquad E_\sigma\Big[ \int_X |\delta_\sigma(u)|^k |u_t|^{n-k+1} \, \sigma(dt) \Big] < \infty, \qquad 0 \le k \le n.$$
Then we have
$$E_\sigma\big[ \delta_\sigma(u)^{n+1} \big] = \sum_{k=0}^{n-1} \binom{n}{k} E_\sigma\Big[ \delta_\sigma(u)^k \int_X u_t^{n-k+1} \, \sigma(dt) \Big] + \sum_{k=1}^{n} \binom{n}{k} E_\sigma\Big[ \int_X u_t^{n-k+1} \big( \delta_\sigma(\varepsilon^+_t u)^k - \delta_\sigma(u)^k \big) \, \sigma(dt) \Big]$$
for all $n \ge 1$.

Proof. Applying the duality relation (2.8) with $F = \delta_\sigma(u)^n$ and then the commutation relation (2.10), we have
$$E_\sigma\big[ \delta_\sigma(u)^{n+1} \big] = E_\sigma\Big[ \int_X u_t \, D_t \delta_\sigma(u)^n \, \sigma(dt) \Big] = E_\sigma\Big[ \int_X u_t \big( (\varepsilon^+_t \delta_\sigma(u))^n - \delta_\sigma(u)^n \big) \, \sigma(dt) \Big] = E_\sigma\Big[ \int_X u_t \big( (u_t + \delta_\sigma(\varepsilon^+_t u))^n - \delta_\sigma(u)^n \big) \, \sigma(dt) \Big],$$
and expanding $(u_t + \delta_\sigma(\varepsilon^+_t u))^n$ by the binomial formula and regrouping terms gives
$$E_\sigma\big[ \delta_\sigma(u)^{n+1} \big] = \sum_{k=0}^{n-1} \binom{n}{k} E_\sigma\Big[ \int_X u_t^{n-k+1} \delta_\sigma(u)^k \, \sigma(dt) \Big] + \sum_{k=1}^{n} \binom{n}{k} E_\sigma\Big[ \int_X u_t^{n-k+1} \big( \delta_\sigma(\varepsilon^+_t u)^k - \delta_\sigma(u)^k \big) \, \sigma(dt) \Big].$$

From Relation (2.6) and Lemma 2.4 we find that the moments of the compensated Poisson stochastic integral $\int_X f(t) (\omega(dt) - \sigma(dt))$ of $f \in \bigcap_{p=2}^{N+1} L^p_\sigma(X)$ satisfy the recurrence identity
$$E_\sigma\Big[ \Big( \int_X f(t) \, (\omega(dt) - \sigma(dt)) \Big)^{n+1} \Big] = \sum_{k=0}^{n-1} \binom{n}{k} \int_X f^{n-k+1}(t) \, \sigma(dt) \, E_\sigma\Big[ \Big( \int_X f(t) \, (\omega(dt) - \sigma(dt)) \Big)^{k} \Big], \qquad (2.11)$$
for $n = 0, \dots, N$, which is analogous to Relation (1.2) for the centered Touchard polynomials and coincides with (2.3) for $n = 1$.
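The recurrence (2.11) can be cross-checked against the classical cumulant description of compensated Poisson integrals, whose cumulants are $\kappa_1 = 0$ and $\kappa_m = \int_X f^m \, d\sigma$ for $m \ge 2$. The short script below (a sketch; the cumulant route and the choice $f(x) = x$ on $[0,2]$ with $\sigma$ the Lebesgue measure are assumptions made here, not taken from the paper) recovers the moments from the cumulants and verifies that they satisfy (2.11).

```python
from math import comb

# c[m] = int f^m dsigma for f(x) = x on [0, 2] with sigma = Lebesgue measure
c = {m: 2.0 ** (m + 1) / (m + 1) for m in range(1, 12)}

def moments_from_cumulants(N):
    """Moments of the compensated integral from kappa_1 = 0, kappa_m = c[m] (m >= 2)."""
    kappa = {1: 0.0, **{m: c[m] for m in range(2, N + 1)}}
    mom = [1.0]
    for n in range(1, N + 1):
        mom.append(sum(comb(n - 1, k - 1) * kappa[k] * mom[n - k] for k in range(1, n + 1)))
    return mom

N = 8
mom = moments_from_cumulants(N)
for n in range(N):
    rhs = sum(comb(n, k) * c[n - k + 1] * mom[k] for k in range(n))   # right-hand side of (2.11)
    print(n + 1, mom[n + 1], rhs)                                     # the two columns agree
```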

The Skorohod isometry (2.9) shows that $\delta_\sigma$ is continuous on $\mathbb{L}_{2,1}$, and that the moment of order two of $\delta_\sigma(u)$ satisfies
$$E_\sigma\big[ \delta_\sigma(u)^2 \big] = E_\sigma\big[ \|u\|_{L^2_\sigma(X)}^2 \big],$$
provided
$$\int_X \int_X D_s u_t \, D_t u_s \, \sigma(ds) \sigma(dt) = 0,$$
as in the Wiener case [22]. This condition is satisfied when $D_t u_s \, D_s u_t = 0$, $s, t \in X$, i.e. when $u$ is adapted in the sense of e.g. [18], Definition 4, or predictable when $X = \mathbb{R}_+$. The computation of moments of higher orders turns out to be more technical, cf. Theorem 5.1 below, and will be used to characterize the Poisson distribution.

From (2.11), in order for $\delta_\sigma(u) \in L^{n+1}(\Omega)$ to have the same moments as the compensated Poisson integral of $f \in \bigcap_{p=2}^{n+1} L^p_\sigma(X)$, it should satisfy the recurrence relation
$$E_\sigma\big[ \delta_\sigma(u)^{n+1} \big] = \sum_{k=0}^{n-1} \binom{n}{k} \int_X f^{n-k+1}(t) \, \sigma(dt) \, E_\sigma\big[ \delta_\sigma(u)^{k} \big], \qquad n \ge 0, \qquad (2.12)$$
which is an extension of Relation (2.11) to the moments of compensated Poisson stochastic integrals, and characterizes their distribution by Carleman's condition [5] when
$$\sup_{p \ge 1} \|f\|_{L^p_\sigma(X)} < \infty.$$

In order to simplify the presentation of moment identities for the Skorohod integral $\delta_\sigma$ it will be convenient to use the following symbolic notation in the sequel.

Definition 2.5 For any measurable process $u : \Omega \times X \to \mathbb{R}$, let
$$\nabla_{s_0} \cdots \nabla_{s_j} \prod_{p=0}^{n} u_{s_p} = \sum_{\substack{\Theta_0 \cup \cdots \cup \Theta_n = \{s_0, s_1, \dots, s_j\} \\ s_0 \notin \Theta_0, \dots, s_j \notin \Theta_j}} D_{\Theta_0} u_{s_0} \cdots D_{\Theta_n} u_{s_n}, \qquad (2.13)$$

for $s_0, \dots, s_n \in X$, $0 \le j \le n$, where
$$D_\Theta := \prod_{s_i \in \Theta} D_{s_i} \quad \text{when } \Theta \subset \{s_0, s_1, \dots, s_j\}.$$
Note that the sum in (2.13) includes empty sets. For example we have
$$\nabla_{s_0} \prod_{p=0}^{n} u_{s_p} = u_{s_0} \sum_{\Theta_1 \cup \cdots \cup \Theta_n = \{s_0\}} D_{\Theta_1} u_{s_1} \cdots D_{\Theta_n} u_{s_n} = u_{s_0} D_{s_0} \prod_{p=1}^{n} u_{s_p},$$
and $\nabla_{s_0} u_{s_0} = 0$. The use of this notation allows us to rewrite the Skorohod isometry (2.9) as
$$E_\sigma\big[ \delta_\sigma(u)^2 \big] = E_\sigma\Big[ \int_X u_s^2 \, \sigma(ds) \Big] + E_\sigma\Big[ \int_{X^2} \nabla_s \nabla_t (u_s u_t) \, \sigma(ds) \sigma(dt) \Big],$$
since by definition we have
$$\nabla_s \nabla_t (u_s u_t) = D_s u_t \, D_t u_s, \qquad s, t \in X.$$
As a consequence of Theorem 5.1 and Relation (6.1) of Proposition 6.1 below, the third moment of $\delta_\sigma(u)$ is given by
$$E_\sigma\big[ \delta_\sigma(u)^3 \big] = E_\sigma\Big[ \int_X u_s^3 \, \sigma(ds) \Big] + 3 E_\sigma\Big[ \delta_\sigma(u) \int_X u_s^2 \, \sigma(ds) \Big] + 3 E_\sigma\Big[ \int_{X^2} \nabla_{s_1} \nabla_{s_2} (u_{s_1} u_{s_2}^2) \, \sigma(ds_1) \sigma(ds_2) \Big] + E_\sigma\Big[ \int_{X^3} \nabla_{s_1} \nabla_{s_2} \nabla_{s_3} (u_{s_1} u_{s_2} u_{s_3}) \, \sigma(ds_1) \sigma(ds_2) \sigma(ds_3) \Big],$$
cf. (5.4) and (6.2) below, which reduces to
$$E_\sigma\big[ \delta_\sigma(u)^3 \big] = E_\sigma\Big[ \int_X u_s^3 \, \sigma(ds) \Big] + 3 E_\sigma\Big[ \delta_\sigma(u) \int_X u_s^2 \, \sigma(ds) \Big]$$
when $u$ satisfies the cyclic conditions
$$D_{t_1} u_{t_2} \, D_{t_2} u_{t_1} = 0 \qquad \text{and} \qquad D_{t_1} u_{t_2} \, D_{t_2} u_{t_3} \, D_{t_3} u_{t_1} = 0, \qquad t_1, \dots, t_3 \in X,$$
of Lemma 7.2 in the appendix, which shows that (2.13) vanishes; see also (6.4) below for moments of higher orders. When $X = \mathbb{R}_+$, Condition (7.2) is satisfied in particular when $u$ is predictable with respect to the standard Poisson process filtration.

3 Main results

The main results of this paper are stated in this section under the form of Corollary 3.2 and Theorem 3.3. Let $(Y, \mu)$ denote another measure space with associated configuration space $\Omega^Y$ and $\sigma$-finite diffuse intensity measure $\mu(dy)$. Given an everywhere defined measurable random mapping
$$\tau : \Omega \times X \to Y, \qquad (3.1)$$
indexed by $X$, let $\tau_*(\omega)$, $\omega \in \Omega$, denote the image measure of $\omega$ by $\tau$, i.e.
$$\tau_* : \Omega \to \Omega^Y \qquad (3.2)$$
maps
$$\omega = \sum_{i=1}^{\omega(X)} \epsilon_{x_i} \in \Omega \qquad \text{to} \qquad \tau_*(\omega) = \sum_{i=1}^{\omega(X)} \epsilon_{\tau(\omega, x_i)} \in \Omega^Y.$$
In other terms, the random mapping $\tau_* : \Omega \to \Omega^Y$ shifts each configuration point $x \in \omega$ according to $x \mapsto \tau(\omega, x)$, and in the sequel we will be interested in finding conditions for $\tau_* : \Omega \to \Omega^Y$ to map $\pi_\sigma$ to $\pi_\mu$. This question is well known to have an affirmative answer when the transformation $\tau : X \to Y$ is deterministic and maps $\sigma$ to $\mu$, as can be checked from the Lévy-Khintchine representation (2.2) of the characteristic function of $\pi_\sigma$. In the random case we will use the moment identity of the next Proposition 3.1, which is a direct application of Proposition 6.2 below with $u = Rh$. We apply the convention that $\sum_{i=1}^{0} l_i = 0$, so that $\big\{ l_0, \ l_1 \ge 0 : \sum_{i=1}^{0} l_i = 0 \big\}$ is an arbitrary singleton.

Proposition 3.1 Let $N \ge 0$ and let $R(\omega) : L^p_\mu(Y) \to L^p_\sigma(X)$, $\omega \in \Omega$, be a random isometry for all $p = 1, \dots, N+1$. Then for all $h \in \bigcap_{p=1}^{N+1} L^p_\mu(Y)$ such that $Rh \in \mathbb{L}_{2,1}$ is bounded and
$$E_\sigma\Big[ \int_{X^{a+1}} \Big| \nabla_{s_0} \cdots \nabla_{s_a} \prod_{p=0}^{a} (Rh)^{l_p}(s_p) \Big| \, \sigma(ds_0) \cdots \sigma(ds_a) \Big] < \infty, \qquad (3.3)$$

12 l l a N + 1, l 0,..., l a 1, a 0, we have δ σ (Rh L n+1 (Ω, π σ and n 1 ( n E σ δ σ (Rh n+1 h n k+1 (yµ(dye σ δσ (Rh k k + n a n a0 j0 ba ( b qj+1 Y k0 Y l 0 + +l an b l 0,...,l a 0 l a+1,...,l b 0 h 1+lq (yµ(dy ( a C l 0,n L j a,b E σ j+1 s0 sj ( j (Rh( 1+lp dσ j+1 (s j n 0,..., N, where dσ j+1 (s j σ(ds 0 σ(ds j, L a (l 1,..., l a, and C l 0,n L a,a+c ( 1c ( n l 0 c r q 1 (c q 0r c+1 < <r 0 a+c+1 q0 pr q+1 +1 (c q ( l1 + + l p + p + q 1. l l p 1 + p + q 1, (3.4 As a consequence of Proposition 3.1, if in addition R(ω : L p µ(y L p σ( satisfies the condition ( j t0 tj (Rh(ω, t p lp σ(dt 0 σ(dt j 0, (3.5 j+1 π σ (ω-a.s. for all l l j N + 1, l 0 1,..., l j 1, j 1,..., N, then we have n 1 ( n E σ δ σ (Rh n+1 h n k+1 (yµ(dye σ δσ (Rh k, (3.6 k k0 Y n 0,..., N, i.e. the moments of δ σ (Rh satisfy the extended recurrence relation (2.11 of the Touchard type. Hence Proposition 3.1 and Lemma 7.2 yield the next corollary in which the sufficient condition (3.7 is a strengthened version of the Wiener space condition trace(drh n 0 of Theorem 2.1 in 22. Corollary 3.2 Let R : L p µ(y L p σ( be a random isometry for all p 1,. Assume that h L p µ(y is such that sup h L p µ (Y <, and that Rh satisfies (3.3 p 1 and the cyclic condition D t1 Rh(t 2 D tk Rh(t 1 0, t 1,..., t k, (3.7 12

$\pi_\sigma \otimes \sigma^{\otimes k}$-a.e. for all $k \ge 2$. Then, under $\pi_\sigma$, $\delta_\sigma(Rh)$ has the same distribution as the compensated Poisson integral $\delta_\mu(h)$ of $h$ under $\pi_\mu$.

Proof. Lemma 7.2 below shows that Condition (3.5) holds under (3.7), since
$$D_s (Rh)^l(t) = \varepsilon^+_s (Rh)^l(t) - (Rh)^l(t) = \sum_{k=1}^{l} \binom{l}{k} (Rh)^{l-k}(t) \, \big( D_s (Rh)(t) \big)^k, \qquad s, t \in X, \quad l \ge 1,$$
hence by Proposition 3.1, Relation (3.6) holds for all $n \ge 1$, and this shows by induction from (2.12) that under $\pi_\sigma$, $\delta_\sigma(Rh)$ has the same moments as $\delta_\mu(h)$ under $\pi_\mu$. In addition, since $\sup_{p \ge 1} \|h\|_{L^p_\mu(Y)} < \infty$, Relation (3.6) also shows by induction that the moments of $\delta_\sigma(Rh)$ satisfy the bound $E_\sigma[ |\delta_\sigma(Rh)|^n ] \le (Cn)^n$ for some $C > 0$ and all $n \ge 1$, hence they characterize its distribution by the Carleman condition
$$\sum_{n \ge 1} \big( E_\sigma\big[ \delta_\sigma(Rh)^{2n} \big] \big)^{-1/(2n)} = +\infty,$$
cf. [5] and page 59 of [19].

We will apply Corollary 3.2 to the random isometry $R : L^p_\mu(Y) \to L^p_\sigma(X)$ given by $Rh = h \circ \tau$, $h \in L^p_\mu(Y)$, where $\tau : \Omega \times X \to Y$ is the random transformation (3.1) of configuration points considered at the beginning of this section. As a consequence we obtain the following invariance result for Poisson measures when $(X, \sigma) = (Y, \mu)$.

Theorem 3.3 Let $\tau : \Omega \times X \to Y$ be a random transformation such that $\tau(\omega, \cdot) : X \to Y$ maps $\sigma$ to $\mu$ for all $\omega \in \Omega$, i.e.
$$\tau(\omega, \cdot)_* \sigma = \mu, \qquad \omega \in \Omega,$$
and satisfying the cyclic condition
$$D_{t_1} \tau(\omega, t_2) \cdots D_{t_k} \tau(\omega, t_1) = 0, \qquad \omega \in \Omega, \quad t_1, \dots, t_k \in X, \qquad (3.8)$$
for all $k \ge 1$. Then $\tau_* : \Omega \to \Omega^Y$ maps $\pi_\sigma$ to $\pi_\mu$, i.e. $\tau_* \pi_\sigma = \pi_\mu$ is the Poisson measure with intensity $\mu(dy)$ on $Y$.

Proof. We first show that, under $\pi_\sigma$, $\delta_\sigma(h \circ \tau)$ has the same distribution as the compensated Poisson integral $\delta_\mu(h)$ of $h$ under $\pi_\mu$, for all $h \in C_c(Y)$. Let $(K_r)_{r \ge 1}$ denote an increasing family of compact subsets of $X$ such that $\bigcup_{r \ge 1} K_r = X$, and let $\tau_r : \Omega \times X \to Y$ be defined for $r \ge 1$ by
$$\tau_r(\omega, x) = \tau(\omega \cap K_r, x), \qquad x \in X, \quad \omega \in \Omega.$$
Letting $R_r h = h \circ \tau_r$ defines a random isometry $R_r : L^p_\mu(Y) \to L^p_\sigma(X)$ for all $p \ge 1$, which satisfies the assumptions of Corollary 3.2. Indeed we have
$$D_s R_r h(t) = D_s h(\tau_r(\omega, t)) = \mathbf{1}_{K_r}(s) \big( h(\tau_r(\omega, t) + D_s \tau_r(\omega, t)) - h(\tau_r(\omega, t)) \big) = \mathbf{1}_{K_r}(s) \big( h(\tau_r(\omega \cup \{s\}, t)) - h(\tau_r(\omega, t)) \big), \qquad s, t \in X,$$
hence (3.8) implies that Condition (3.7) holds, and Corollary 3.2 shows that we have
$$E_\sigma\big[ e^{i \lambda \delta_\sigma(h \circ \tau_r)} \big] = E_\mu\big[ e^{i \lambda \delta_\mu(h)} \big] \qquad (3.9)$$
for all $\lambda \in \mathbb{R}$. Next we note that Condition (3.8) implies that
$$D_t \tau_r(\omega, t) = 0, \qquad \omega \in \Omega, \quad t \in X, \qquad (3.10)$$
i.e. $\tau_r(\omega, t)$ does not depend on the presence or absence of a point in $\omega$ at $t$, and in particular,
$$\tau_r(\omega, t) = \tau_r(\omega \cup \{t\}, t), \quad t \notin \omega, \qquad \text{and} \qquad \tau_r(\omega, t) = \tau_r(\omega \setminus \{t\}, t), \quad t \in \omega.$$
Hence by (2.6) we have
$$\delta_\mu(h) \circ \tau_{r*} = \int_Y h(y) \, (\tau_{r*}\omega(dy) - \mu(dy)) = \int_X h(\tau_r(\omega, x)) \, (\omega(dx) - \sigma(dx)) = \int_X h(\tau_r(\omega \setminus \{x\}, x)) \, (\omega(dx) - \sigma(dx))$$

$$= \delta_\sigma(h \circ \tau_r),$$
and by (3.9) we get
$$E_\sigma\Big[ \exp\Big( i \int_Y h(y) \, (\tau_{r*}\omega(dy) - \mu(dy)) \Big) \Big] = E_\sigma\big[ e^{i \delta_\mu(h) \circ \tau_{r*}} \big] = E_\sigma\big[ e^{i \delta_\sigma(h \circ \tau_r)} \big] = E_\mu\big[ e^{i \delta_\mu(h)} \big] = E_\mu\Big[ \exp\Big( i \int_Y h(y) \, (\omega(dy) - \mu(dy)) \Big) \Big].$$
Next, letting $r$ go to infinity we get
$$E_\sigma\Big[ \exp\Big( i \int_Y h(y) \, (\tau_*\omega(dy) - \mu(dy)) \Big) \Big] = E_\mu\Big[ \exp\Big( i \int_Y h(y) \, (\omega(dy) - \mu(dy)) \Big) \Big]$$
for all $h \in C_c(Y)$, hence the conclusion.

In Theorem 3.3 above the identity (3.8) is interpreted for $k \ge 2$ by stating that for all $\omega \in \Omega$ and $t_1, \dots, t_k \in X$, the $k$-tuples
$$\big( \tau(\omega \cup \{t_1\}, t_2), \tau(\omega \cup \{t_2\}, t_3), \dots, \tau(\omega \cup \{t_{k-1}\}, t_k), \tau(\omega \cup \{t_k\}, t_1) \big) \quad \text{and} \quad \big( \tau(\omega, t_2), \tau(\omega, t_3), \dots, \tau(\omega, t_k), \tau(\omega, t_1) \big)$$
coincide on at least one component $n^{o}\, i \in \{1, \dots, k\}$ in $Y^k$, i.e. $D_{t_i} \tau(\omega, t_{i+1 \bmod k}) = 0$.

4 Examples

In this section we consider some examples of transformations satisfying the hypotheses of Section 3, in the case $X = Y$ for $\sigma$-finite measures $\sigma$ and $\mu$. Using various binary relations on $X$ we consider successively the adapted case, and transformations that are conditioned by a random set such as the convex hull of a Poisson random measure. Such results are consistent with the fact that given the position of its extremal vertices, a Poisson random measure remains Poisson within its convex hull, cf. the unpublished manuscript [7]; see also [25] for a related use of stopping sets.

1. First, we remark that if $X$ is endowed with a total binary relation $\preceq$ and if $\tau : \Omega \times X \to Y$ is (backward) predictable in the sense that
$$x \preceq y \Longrightarrow D_x \tau(\omega, y) = 0, \qquad (4.1)$$
i.e.
$$\tau(\omega \cup \{x\}, y) = \tau(\omega, y), \qquad x \preceq y, \qquad (4.2)$$
then the cyclic Condition (3.8) is satisfied, i.e. we have
$$D_{x_1} \tau(\omega, x_2) \cdots D_{x_k} \tau(\omega, x_1) = 0, \qquad x_1, \dots, x_k \in X, \quad \omega \in \Omega, \qquad (4.3)$$
for all $k \ge 1$. Indeed, for all $x_1, \dots, x_k$ there exists $i \in \{1, \dots, k\}$ such that $x_i \preceq x_j$ for all $1 \le j \le k$, hence $D_{x_i} \tau(\omega, x_j) = 0$, $1 \le j \le k$, by the predictability condition (4.1), hence (4.3) holds. Consequently, $\tau_* : \Omega \to \Omega^Y$ maps $\pi_\sigma$ to $\pi_\mu$ by Theorem 3.3, provided $\tau(\omega, \cdot) : X \to Y$ maps $\sigma$ to $\mu$ for all $\omega \in \Omega$.

Such binary relations on $X$ can be defined via an increasing family $(C_\lambda)_{\lambda \in \mathbb{R}}$ of subsets whose union is $X$, and such that for all $x \neq y$ there exist $\lambda_x, \lambda_y \in \mathbb{R}$ with $x \in C_{\lambda_x} \setminus C_{\lambda_y}$ and $y \in C_{\lambda_y}$, or $y \in C_{\lambda_y} \setminus C_{\lambda_x}$ and $x \in C_{\lambda_x}$, which is equivalent to $y \preceq x$ or $x \preceq y$, respectively. This framework includes the classical adaptedness condition when $X$ has the form $\mathbb{R}_+ \times Y$. For example, if $X$ and $Y$ are of the form $X = Y = \mathbb{R}_+ \times Z$, consider the filtration $(\mathcal{F}_t)_{t \in \mathbb{R}_+}$, where $\mathcal{F}_t$ is generated by
$$\big\{ \omega \mapsto \omega([0, s] \times A) : \ 0 \le s < t, \ A \in \mathcal{B}_c(Z) \big\},$$
where $\mathcal{B}_c(Z)$ denotes the compact Borel subsets of $Z$. In this case it is well-known that $\omega \mapsto \tau_*\omega$ is Poisson distributed with intensity $\mu$ under $\pi_\sigma$, provided $\tau(\omega, \cdot) : \mathbb{R}_+ \times Z \to \mathbb{R}_+ \times Z$ is predictable in the sense that $\omega \mapsto \tau(\omega, (s, z))$ is $\mathcal{F}_t$-measurable for all $0 \le s \le t$, $z \in Z$, cf. e.g. [2]. Here, Condition (4.1) holds for the partial order
$$(s, x) \preceq (t, y) \iff s \ge t, \qquad (4.4)$$
on $\mathbb{R}_+ \times Z$, by taking $C_\lambda = [\lambda, \infty) \times Z$, $\lambda \in \mathbb{R}_+$, and the cyclic Condition (3.8) is satisfied when $\tau(\omega, \cdot) : \mathbb{R}_+ \times Z \to \mathbb{R}_+ \times Z$ is predictable in the sense of (4.1).
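A simulation sketch of this adapted case (a toy example built here, not taken from the paper: $X = [0, 3]$ with $\sigma$ the Lebesgue measure, and $\tau(\omega, t) = t$ on $[0, 1]$ while the points of $(1, 3]$ are rotated modulo $2$ by an amount $c\,\omega([0,1])$ depending only on the earlier points). For each $\omega$ this map preserves Lebesgue measure and satisfies the predictability condition (4.1), so Theorem 3.3 predicts that the transformed configuration is again a standard Poisson process.

```python
import numpy as np

rng = np.random.default_rng(4)
c = 0.37                                        # arbitrary rotation step

def tau(points):
    """Predictable, measure-preserving transformation of a configuration on [0, 3]."""
    early = points[points <= 1.0]
    late = points[points > 1.0]
    shift = c * len(early)                      # depends only on the points in [0, 1]
    rotated = 1.0 + (late - 1.0 + shift) % 2.0  # measure-preserving rotation of (1, 3]
    return np.concatenate([early, rotated])

n_early, n_window = [], []
for _ in range(100000):
    omega = 3.0 * rng.random(rng.poisson(3.0))  # rate-1 Poisson process on [0, 3]
    image = tau(omega)
    n_early.append(np.sum(image <= 1.0))
    n_window.append(np.sum((image > 1.0) & (image <= 1.7)))
n_early, n_window = np.array(n_early), np.array(n_window)

# Under invariance, counts in (1, 1.7] remain Poisson(0.7) and independent of [0, 1].
print("mean/var of transformed counts in (1, 1.7]:", n_window.mean(), n_window.var())
print("covariance with counts in [0, 1]:", np.cov(n_early, n_window)[0, 1])
```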

Next, we consider other examples in which the binary relation is configuration dependent. This includes in particular transformations of Poisson measures within their convex hull, given the positions of the extremal vertices.

2. Let $B(0, 1)$ denote the closed unit ball in $\mathbb{R}^d$. For all $\omega \in \Omega$, let $C(\omega)$ denote the convex hull of $\omega$ in $\mathbb{R}^d$ with interior $\mathring{C}(\omega)$, and let
$$\omega^e = \omega \cap \big( C(\omega) \setminus \mathring{C}(\omega) \big)$$
denote the extremal vertices of $C(\omega)$. Consider a measurable mapping $\tau : \Omega \times X \to X$ such that for all $\omega \in \Omega$, $\tau(\omega, \cdot)$ is measure preserving, maps $\mathring{C}(\omega)$ to $\mathring{C}(\omega)$, and satisfies
$$\tau(\omega, x) = \begin{cases} \tau(\omega^e, x), & x \in \mathring{C}(\omega), \\ x, & x \in X \setminus \mathring{C}(\omega), \end{cases} \qquad (4.5)$$
i.e. the points of $\mathring{C}(\omega)$ are shifted by $\tau(\omega, \cdot)$ depending on the positions of the extremal vertices of the convex hull of $\omega$, which are left invariant by $\tau(\omega, \cdot)$. The next figure shows an example of a transformation that modifies only the interior of the convex hull generated by the random measure, in which the number of points is taken to be finite for simplicity of illustration.

Next we prove the invariance of such transformations as a consequence of Theorem 3.3. This invariance property is related to the intuitive fact that given the positions of the extreme vertices, the distribution of the inside points remains Poisson when they are shifted according to the data of the vertices, cf. e.g. [7]. Here we consider the binary relation $\preceq_\omega$ given by
$$x \preceq_\omega y \iff x \in \mathring{C}(\omega \cup \{y\}), \qquad \omega \in \Omega, \quad x, y \in X.$$
The relation $\preceq_\omega$ is clearly reflexive, and it is transitive since $x \preceq_\omega y$ and $y \preceq_\omega z$ implies $x \in \mathring{C}(\omega \cup \{y\}) \subset \mathring{C}(\omega \cup \{z\})$, hence $x \preceq_\omega z$. Note that $\preceq_\omega$ is also total on $\mathring{C}(\omega)$, and it is an order relation on $X \setminus \mathring{C}(\omega)$ since it is also antisymmetric on that set, i.e. if $x, y \notin \mathring{C}(\omega)$ then $x \preceq_\omega y$ and $y \preceq_\omega x$ means $x \in \mathring{C}(\omega \cup \{y\})$ and $y \in \mathring{C}(\omega \cup \{x\})$, which implies $x = y$. We will need the following lemma.

Lemma 4.1 For all $x, y \in X$ and $\omega \in \Omega$ we have
$$x \preceq_\omega y \Longrightarrow D_x \tau(\omega, y) = 0, \qquad (4.6)$$
and
$$x \not\preceq_\omega y \Longrightarrow D_y \tau(\omega, x) = 0. \qquad (4.7)$$

Proof. Let $x, y \in X$ and $\omega \in \Omega$. First, if $x \not\preceq_\omega y$ then we have $x \notin \mathring{C}(\omega \cup \{y\})$, hence $\tau(\omega \cup \{y\}, x) = \tau(\omega, x) = x$ by (4.5). Next, if $x \preceq_\omega y$, i.e. $x \in \mathring{C}(\omega \cup \{y\})$, we can distinguish two cases:
a) $x \in \mathring{C}(\omega)$. In this case we have $\mathring{C}(\omega \cup \{x\}) = \mathring{C}(\omega)$, hence $\tau(\omega \cup \{x\}, y) = \tau(\omega, y)$ for all $y \in X$.
b) $x \in \mathring{C}(\omega \cup \{y\}) \setminus \mathring{C}(\omega)$. If $y \in \mathring{C}(\omega \cup \{x\})$ then $x = y \notin \mathring{C}(\omega \cup \{x\})$, hence $\tau(\omega \cup \{x\}, y) = \tau(\omega, y)$. On the other hand, if $y \notin \mathring{C}(\omega \cup \{x\})$ then $y \not\preceq_\omega x$ and $\tau(\omega \cup \{x\}, y) = \tau(\omega, y) = y$ as above.

We conclude that $D_x \tau(\omega, y) = 0$ in both cases.

Let us now show that $\tau : \Omega \times X \to X$ satisfies the cyclic condition (3.8). Let $t_1, \dots, t_k \in X$. First, if $t_i \in \mathring{C}(\omega)$ for some $i \in \{1, \dots, k\}$, then for all $j = 1, \dots, k$ we have $t_i \preceq_\omega t_j$, and by Lemma 4.1 we get $D_{t_i} \tau(\omega, t_j) = 0$, thus (3.8) holds, and we may assume that $t_i \notin \mathring{C}(\omega)$ for all $i = 1, \dots, k$. In this case, if $t_{i+1 \bmod k} \not\preceq_\omega t_i$ for some $i = 1, \dots, k$, then by Lemma 4.1 we have $D_{t_i} \tau(\omega, t_{i+1 \bmod k}) = 0$, which shows that (3.8) holds. Finally, if $t_1 \preceq_\omega t_k \preceq_\omega \cdots \preceq_\omega t_2 \preceq_\omega t_1$, then by transitivity of $\preceq_\omega$ we have $t_1 \preceq_\omega t_k \preceq_\omega t_1$, which implies $t_1 = t_k \notin \mathring{C}(\omega)$ by antisymmetry on $X \setminus \mathring{C}(\omega)$, hence $D_{t_k} \tau(\omega, t_1) = 0$, and $\tau : \Omega \times X \to X$ satisfies the cyclic Condition (3.8) for all $k \ge 2$. Hence $\tau$ satisfies the hypotheses of Theorem 3.3, and $\tau_* \pi_\sigma = \pi_\mu$ provided $\tau(\omega, \cdot) : X \to Y$ maps $\sigma$ to $\mu$ for all $\omega \in \Omega$.

5 Moment identities for stochastic integrals

In this section we prove a moment identity for Poisson stochastic integrals of arbitrary orders in Theorem 5.1, whose application will be to prove Proposition 3.1. More precisely, given a random variable $F : \Omega \to \mathbb{R}$ and a measurable process $u : \Omega \times X \to \mathbb{R}$, we aim at decomposing $E_\sigma[ \delta_\sigma(u)^n F ]$ in terms of the gradient $D$, while removing all occurrences of $\delta_\sigma$ using the integration by parts formula (2.8). In Theorem 5.1 and in the rest of this section we will use the notation
$$\varepsilon^+_{s_b} = \varepsilon^+_{s_1} \cdots \varepsilon^+_{s_b}, \qquad s_b = (s_1, \dots, s_b) \in X^b, \quad b \ge 1.$$
Moreover, by saying that $u : \Omega \times X \to \mathbb{R}$ has compact support in $X$ we mean that there exists a compact subset $K$ of $X$ such that $u(\omega, x) = 0$ for all $\omega \in \Omega$ and $x \in X \setminus K$.

20 Theorem 5.1 Let F : Ω IR be a bounded random variable and let u : Ω IR be a bounded process with compact support in. For all n 0 we have E σ δ σ (u n F n a0 ba n ( 1 b a C La,bE σ ε + s a F b l 1 + +l an b l 1,...,l a 0 l a+1,...,l b 0 where dσ b (s b σ(ds 1 σ(ds b, L a (l 1,..., l a, and C La,a+c 0r c+1 < <r 0 a+c+1 c q0 r q+q c 1 pr q+1 +q c+1 b ε + s a\ dσ b (s b, (5.1 ( l1 + + l p + p + q 1. (5.2 l l p 1 + p + q 1 Before turning to the proof of Theorem 5.1 we consider some examples. 1. For n 2 and F 1, Theorem 5.1 recovers the Skorohod isometry (2.9 as follows: E σ δσ (u 2 E σ u s1 u s2 σ(ds 1 σ(ds 2 a 0, b 2 2 2E σ u s1 (I + D s1 u s2 σ(ds 1 σ(ds 2 a 1, b E σ u s1 2 σ(ds 1 a 1, b 1 + E σ (I + D s1 u s2 (I + D s2 u s1 σ(ds 1 σ(ds 2 a 2, b 2 2 E σ u s 2 σ(ds + E σ s1 s2 (u s1 u s2 σ(ds 1 σ(ds 2. ( For n 3 and F 1, Theorem 5.1 yields the following third moment identity: E σ δσ (u 3 E σ u 3 s 1 σ(ds 1 a 1, b 1 3E σ u 2 s 1 (I + D s1 u s2 σ(ds 1 σ(ds 2 a 1, b E σ (I + D s2 u s1 (I + D s1 u 2 s 2 σ(ds 1 σ(ds 2 a 2, b 2 2 E σ u s1 u s2 u s3 σ(ds 1 σ(ds 2 σ(ds 3 a 0, b E σ u s1 (I + D s1 u s3 (I + D s1 u s2 σ(ds 1 σ(ds 2 σ(ds 3 a 1, b 3 3 3E σ (I + D s1 (I + D s2 u s3 (I + D s1 u s2 (I + D s2 u s1 3 σ(ds 1 σ(ds 2 σ(ds 3 a 2, b 3 20

$$+\ E_\sigma\Big[ \int_{X^3} (I + D_{s_1})(I + D_{s_2}) u_{s_3} \, (I + D_{s_1})(I + D_{s_3}) u_{s_2} \, (I + D_{s_2})(I + D_{s_3}) u_{s_1} \, \sigma(ds_1) \sigma(ds_2) \sigma(ds_3) \Big] \qquad [a = 3, \ b = 3]$$
$$= E_\sigma\Big[ \int_X u_{s_1}^3 \, \sigma(ds_1) \Big] + 3 E_\sigma\Big[ \int_{X^2} u_{s_1} D_{s_1} u_{s_2}^2 \, \sigma(ds_1) \sigma(ds_2) \Big] + 3 E_\sigma\Big[ \int_{X^2} \nabla_{s_1} \nabla_{s_2} (u_{s_1} u_{s_2}^2) \, \sigma(ds_1) \sigma(ds_2) \Big] + E_\sigma\Big[ \int_{X^3} \nabla_{s_1} \nabla_{s_2} \nabla_{s_3} (u_{s_1} u_{s_2} u_{s_3}) \, \sigma(ds_1) \sigma(ds_2) \sigma(ds_3) \Big]. \qquad (5.4)$$

3. Noting that $C_{L_a,c}$ defined in (5.2) represents the number of partitions of a set of $l_1 + \cdots + l_a + a + c$ elements into $a$ subsets of lengths $1 + l_1, \dots, 1 + l_a$ and $c$ singletons, we find that when $F = 1$ and $u = \mathbf{1}_A$ is a deterministic indicator function, Theorem 5.1 reads
$$E_\sigma\big[ Z_\lambda^n \big] = \sum_{a=0}^{n} \lambda^a \sum_{c=0}^{a} (-1)^c \binom{n}{c} S(n - c, a - c)$$
for $Z_\lambda = \delta(\mathbf{1}_A) = \omega(A) - \sigma(A)$ a compensated Poisson random variable with intensity $\lambda = \sigma(A)$, where $S(n, c)$ denotes the Stirling number of the second kind, i.e. the number of ways to partition a set of $n$ objects into $c$ non-empty subsets. This coincides with the moment formula
$$E_\lambda\big[ (Z - \lambda)^n \big] = \sum_{a=0}^{n} \lambda^a S_2(n, a),$$
where $S_2(n, a)$ denotes the number of partitions of a set of size $n$ into $a$ non-singleton subsets, which can be obtained from the sequence $(0, \lambda, \lambda, \dots)$ of cumulants of the compensated Poisson distribution, through the combinatorial identity
$$S_2(n, a) = \sum_{c=0}^{a} (-1)^c \binom{n}{c} S(n - c, a - c), \qquad 0 \le a \le n,$$
which is the binomial dual of
$$S(m, n) = \sum_{k=0}^{n} \binom{m}{k} S_2(m - k, n - k),$$
cf. [17] for details.

The proof of Theorem 5.1 will be done by induction based on the following lemma.
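Before turning to Lemma 5.2, here is a small numerical check (a sketch; the recurrences used below for $S(n,k)$ and $S_2(n,a)$ are the standard ones and are not stated in the paper) of the combinatorial identity above and of the central moment formula $E_\lambda[(Z - \lambda)^n] = \sum_a \lambda^a S_2(n, a)$.

```python
from math import comb, exp
from functools import lru_cache

@lru_cache(None)
def S(n, k):
    """Stirling numbers of the second kind."""
    if n == 0:
        return 1 if k == 0 else 0
    if k == 0:
        return 0
    return k * S(n - 1, k) + S(n - 1, k - 1)

@lru_cache(None)
def S2(n, a):
    """Partitions of an n-element set into a blocks, each of size at least 2."""
    if n == 0:
        return 1 if a == 0 else 0
    if n == 1 or a == 0:
        return 0
    return a * S2(n - 1, a) + (n - 1) * S2(n - 2, a - 1)

n, lam = 7, 1.8
for a in range(n + 1):      # identity S_2(n, a) = sum_c (-1)^c C(n, c) S(n - c, a - c)
    alt = sum((-1) ** c * comb(n, c) * S(n - c, a - c) for c in range(a + 1))
    assert alt == S2(n, a)

moment_formula = sum(lam ** a * S2(n, a) for a in range(n + 1))
p, direct = exp(-lam), 0.0  # direct summation of the Poisson pmf
for k in range(80):
    direct += (k - lam) ** n * p
    p *= lam / (k + 1)
print(moment_formula, direct)   # the two values agree
```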

22 Lemma 5.2 Let G : Ω IR be a bounded random variable and let u : Ω IR be a bounded process with compact support in. For all n 0 we have d E σ δ σ (u n G c Kd E σ ε + s d G ε + s d \ u k p 1 k p dσ d (s d d 0k d < <k 0 n 1 d n 0k d < <k 0 n 0 d n d 1 c Kd E σ δ σ (ε + s d 1 u kd 1 1 ε + s d 1 (u sd G ε + s d 1 \ u k p 1 k p dσ d (s d d d 1 ( kp 1 where c Kd, K d (k 0,..., k d IN d+1. Proof. k p+1 (5.5 The formula clearly holds when n 0, while when n 1, the first summation in (5.5 actually starts from d 1. The proof follows by application to l n 1 or l n of the following identity: E σ δ σ (u n G (5.6 0k l+1 < <k 0 n c Kl+1 E σ l+1 δ σ (ε + s l u k l 1 ε + s l+1 Gε + s l u sl+1 ε + s l+1 + l d1 0k d < <k 0 n l+1 d1 0k d < <k 0 n A l + c Kd E σ d ε + s d G d l ε + s l+1 \ u k p 1 k p dσ l+1 (s l+1 ε + s d \ u k p 1 k p dσ d (s d d 1 c Kd E σ δ σ (ε + s d 1 u kd 1 1 ε + s d 1 (u sd G ε + s d 1 \ u k p 1 k p dσ d (s d d l l+1 B d C d, (5.7 d1 d1 which will be proved by induction on l 0,..., n. First, note that (5.6 holds for l 0 as by (2.4 and (2.8 we have E σ δ σ (u n G E σ u s1 D s1 (δ σ (u n 1 Gσ(ds 1 E σ u s1 ε + s 1 δ σ (u n 1 ε + s 1 Gσ(ds 1 E σ G u s1 δ σ (u n 1 σ(ds 1 E σ u s1 (u s1 + δ σ (ε + s 1 u n 1 ε + s 1 Gσ(ds 1 E σ G u s1 δ σ (u n 1 σ(ds 1 22,

23 A 0 C 1, which also proves the lemma in case n 1. Next, when n 2, for l 0,..., n 1, using the duality formula (2.8 and the relations ε + s l+1 δ σ (ε + s l u ε + s l u sl+1 + δ σ (ε + s l+1 u, cf. (2.10, and D sl+2 ε + s l+2 I, we rewrite the first term in (5.6 as A l 0k l+1 < <k 0 n c Kl+1 E σ l+1 ε + s l+1 Gε + s l u sl+1 (ε + s l u sl+1 + δ σ (ε + s l+1 u 0 k l+1 < <k 0 n 1 k l+1 < <k 0 n + kl 1 l c Kl+1 E σ l+1 δ σ (ε + s l+1 u k l+1 ε + s l+1 Gε + s l u k l k l+1 s l+1 ε + s l+1 \ u k p 1 k p dσ l+1 (s l+1 l ε + s l+1 \ u k p 1 k p dσ l+1 (s l+1 l+1 c Kl+1 E σ δ σ (ε + s l+1 u k l+1 ε + s l+1 G ε + s l+1 \ u k p 1 k p dσ l+1 (s l+1 l+1 0k l+1 < <k 0 n 1 k l+1 < <k 0 n l+1 c Kl+1 E σ ε + s l+1 G ε + s l+1 \ u k p 1 k p dσ l+1 (s l+1 l+1 c Kl+1 l+1 E σ ε + s l+1 u sl+2 D sl+2 (δ σ (ε + s l+1 u kl+1 1 ε + s l+1 G ε + s l+1 \ u k p 1 k p dσ l+2 (s l+2 l+2 + 0k l+1 < <k 0 n 1 k l+1 < <k 0 n l+1 c Kl+1 E σ ε + s l+1 G ε + s l+1 \ u k p 1 k p dσ l+1 (s l+1 l+1 c Kl+1 ( l+1 E σ ε + s l+1 u sl+2 ε + s l+2 δ σ (ε + s l+1 u kl+1 1 ε + s l+1 G ε + s l+1 \ u k p 1 k p dσ l+2 (s l+2 l k l+1 < <k 0 n 0k l+1 < <k 0 n A l+1 + B l+1 C l+2, l+1 c Kl+1 E σ δ σ (ε + s l+1 u kl+1 1 ε + s l+1 (u sl+2 G ε + s l+1 \ u k p 1 k p dσ l+2 (s l+2 l+2 l+1 c Kl+1 E σ ε + s l+1 G ε + s l+1 \ u k p 1 k p dσ l+1 (s l+1 l+1 23

24 which proves (5.6 by induction on l 1,..., n 1, as n 1 n 1 E σ δ σ (u n G A 0 C 1 C 1 + A d A d+1 C 1 + B d+1 C d+2 d0 d0 n n+1 B d C d. d1 d1 Proof of Theorem 5.1. We check that in (5.1, all terms with a 0 and 0 b n 1 vanish, hence in particular the formula also holds when n 0. When n 1 the proof of (5.1 is obtained by application to c n or c n + 1 of the following identity: n c E σ δ σ (u n F ( 1 c c 1 + b0 ( 1 b n b a0 c 1 D c + E b, b0 a0 l 1 + +l a+1 n c a l 1,...,l a+1 0 E σ a+c δ σ (ε + s a u l a+1 ε + s a F l 1 + +lan b a l 1,...,la 0 C La+1,a+c (5.8 a+c C La,a+bE σ a+b ε + s a F ε + s a a+b ε + s a ε + s a\ dσ a+c (s a+c ε + s a\ dσ a+b (s a+b which will be proved by induction on c 1,..., n + 1. First, we note that since ( l1 + + l p + p 1 C La,a, l l p 1 + p 1 the identity (5.8 holds for c 1 from Lemma 5.2. Next, for all c 1,..., n 1, applying Lemma 5.2 with n l a+1 and G ε + s a F a+c ε + s a ε + s a\ and fixing s 1,..., s a+c, we rewrite the first term in (5.8 using (5.2 and the change of index as n c D c ( 1 c a0 k d p p + m m p, 0 p d, l 1 + +l a+1 n c a l 1,...,l a+1 0 C La+1,a+c 24

25 E σ a+c δ σ (ε + s a u l a+1 ε + s a F n c ( 1 c a0 l 1 + +l a+1 n c a l 1,...,l a+1 0 E σ a+c+d ε + s a+d F n c ( 1 c a0 a+c C La+1,a+c a+c+d qa+d+1 l 1 + +l a+1 n c a l 1,...,l a+1 0 ε + s a l a+1 d0 ε + s a+d C La+1,a+c E σ a+c+d δ σ (ε + s a+d 1 u m d ε + s a+d 1 F n c ( 1 c a 0 l 1 + +l a n c a n c +( 1 c+1 E c + D c+1, l 1,...,l a 0 a 0 l 1 + +l a +1 n c a 1 E σ a +c+1 ε + s a\ dσ a+c (s a+c m 1 + +m d l a+1 d m 1,...,m d 0 l a+1 d1 ε + s a+d \ m 1 + +m d l a+1 d m 1,...,m d 0 a+c+d qa+d ε + s a+d 1 \ ε + s a+d 1 a+c+d 1 ka+c+1 a C La,a +ce σ ε + s F +c a a +c l 1,...,l a +1 0 under the changes of indices qa +1 d a+d ka+1 ( m1 + + m p + p 1 m m p 1 + p 1 d ε + s a+d \ u 1+m k a s k dσ a+c+d (s a+c+d ( m1 + + m p + p 1 m m p 1 + p 1 ε + s a+d 1 \ u 1+m k a c s k dσ a+c+d (s a+c+d a ε + s a +c ε + s a \ u 1+l p dσ a +c (s a +c (5.9 C La +1,a +c+1 (5.10 a δ σ (ε + s a ul a +1 ε + F +c+1 sa qa +1 ε + s u a a s q l l a l l a + m m d, a a + d, in (5.9 when d 0,..., l a+1, and ε + s a \ u 1+l p dσ a +c+1 (s a +c+1 l l a +1 l l a + m m d, a + 1 a + d, in (5.10 when d 1,..., l a+1. Noting that in (5.10, the summation on a actually 25

26 ends at a n c 1 when c < n. We conclude the proof by induction, as E σ δ σ (u n F D 1 D n+1 + E 0 E 0 + n D b D b+1 b1 n E b, b0 and by the change of indices (a, b (a, b a in ( Recursive moment identities The main results of this section are Propositions 6.1 and 6.2. Their proofs are stated using Lemma 2.4 above and Proposition 6.3 below, and they are used to prove the main results of Section 3. In the next theorem we use the notation s of Definition 2.5 and let sj s0 sj, s j (s 0,..., s j, and dσ b+1 (s b σ(ds 0 σ(ds b, s b (s 0,..., s b, 0 j b. Proposition 6.1 Let N 0 and let u IL 2,1 be bounded with u and E σ b+1 s0 sj ( b u lp dσ b+1 (s b N+1 <, L (Ω, L p σ( l l a N + 1, l 0,..., l a 1, 0 j a b N. Then for all n 0,..., N we have δ σ (u L n+1 (Ω, π σ and n 1 ( n E σ δ σ (u n+1 E σ δ σ (u k k + n a n a0 j0 ba k0 l 0 + +lan b l 0,...,la 0 ( a C l 0,n L j E a,b σ u n k+1 t σ(dt b+1 sj ( b dσ b+1 (s b (6.1, where C l 0,n L a,b ( 1b a ( n l 0 C La,b, and C La,b is defined in (

27 Proof. When u : Ω IR is a bounded process with compact support in this result is a direct consequence of Lemma 2.4 and Proposition 6.3 below applied with n k and l 0 + k b n b. We conclude the proof by induction and a limiting argument, as follows. Let (K r r 1 denote an increasing family of compact subsets of such that K r. The family of processes u x (r (ω : u x 1 Kr (x, r 1, converges r 1 in IL 2,1 to u as r goes to infinity, hence δ σ (u (r converges to δ(u in L 2 (Ω, π σ as r goes to infinity. Clearly the result holds for N 0 by applying the formula to the process u (r which is bounded with compact support by letting r go to infinity. Next, letting N 0 and assuming that δ σ (u L n+1 (Ω, π σ and that (6.1 holds for all n 0,..., N, we note that for all even integer m {2,..., N + 1} we have the bound E σ δ σ (u m m 1 + a0 a j0 m 1 ba m 2 k0 ( m 1 E σ δσ (u m 2 k/(m 2 k u t m k σ(dt ( ( a b C l 0,m 1 L j a,b E σ s j b+1 l 0 + +lam b 1 l 0,...,la 0 dσ b+1 (s b which, applied to u (r (ω, allows us to extend (6.1 to the order N + 1 by uniform integrability after taking the limit as r goes to infinity. Let us consider some particular cases of Proposition 6.1. For n 1, Relation (6.1 reads E σ δσ (u 2 E σ u s 2 σ(ds E σ s1 (u s1 u s2 σ(ds 1 σ(ds 2 a 0, b 1, j E σ s1 (u s1 u s2 σ(ds 1 σ(ds 2 a 1, b 1, j E σ s1 s2 (u s1 u s2 σ(ds 1 σ(ds 2 a 1, b 1, j 1 2 E σ u s 2 σ(ds + E σ s1 s2 (u s1 u s2 σ(ds 1 σ(ds 2, 2 which coincides with (5.3. On the other hand for n 2 Relation (6.1 yields the third moment E σ δσ (u 3 E σ u 3 sσ(ds + 2E σ δ(u u 2 sσ(ds, 27

28 2E σ s0 (u 2 s 0 u s1 σ(ds 0 σ(ds 1 a 0, b 1, j E σ s0 (u s0 u s1 u s2 σ(ds 0 σ(ds 1 a 0, b 2, j E σ s0 (u s0 u 2 s 1 σ(ds 0 σ(ds 1 + 2E σ s0 (u 2 s 0 u s1 σ(ds 0 σ(ds 1 a 1, b 1, j E σ s0 s1 (u s0 u 2 s 0 σ(ds 0 σ(ds 1 a 1, b 1, j 1 2 E σ s0 (u s0 u s1 u s2 σ(ds 0 σ(ds 1 a 1, b 2, j 0 2 E σ s0 s1 (u s0 u s1 u s2 σ(ds 0 σ(ds 1 σ(ds 2 a 1, b 2, j E σ s0 (u s0 u s1 u s2 σ(ds 0 σ(ds 1 σ(ds 2 a 2, b 2, j E σ s0 s1 (u s0 u s1 u s2 σ(ds 0 σ(ds 1 σ(ds 2 a 2, b 2, j E σ s0 s1 s2 (u s0 u s1 u s2 σ(ds 0 σ(ds 1 σ(ds 2 a 2, b 2, j 2 3 E σ u 3 sσ(ds + 2E σ δ(u u 2 sσ(ds + E σ u s0 D s0 u 2 s 1 σ(ds 0 σ(ds E σ s0 s1 (u s0 u 2 s 1 σ(ds 0 σ(ds 1 + E σ s0 s1 s2 (u s0 u s1 u s2 σ(ds 0 σ(ds 1 σ(ds 2, 2 3 (6.2 which recovers (5.4 by the duality relation (2.8. As a consequence of Proposition 6.1 and Lemma 7.2 in the appendix, when the process u satisfies the cyclic condition a0 j0 D t1 u t2 (ω D tk u t1 (ω 0, ω Ω, t 1,..., t k, (6.3 k 2, Relation (6.1 becomes n 1 ( n E σ δ σ (u n+1 E σ δ σ (u k k k0 a (b 1 n n ( a + C l 0,n L j E a,b σ ba l 0 + +lan b l 0,...,la 0 ut n k+1 σ(dt b+1 sj ( b dσ b+1 (s b (6.4 i.e. the last two terms of (6.2 vanish when n 2. In case IR + Z, Condition (6.3 is satisfied when u iredictable, by the same argument as the one leading to (4.3., 28

29 The next Proposition follows from Proposition 6.1 and is used to prove Proposition 3.1. Proposition 6.2 Let N 0 and let u IL 2,1 be a bounded process such that u N+1 L (Ω, L p σ( and the integral u n t σ(dt is deterministic, for all n 1,..., N+ 1, and E σ a+1 s0 sa ( u lp dσ a+1 (s a <, l l a N + 1, l 0,..., l a 1, 0 a N + 1. Then for all n 0,..., N we have δ σ (u L n+1 (Ω, π σ and n 1 ( n E σ δ σ (u n+1 k k0 + ( a j 0 j a b n l 0 + +lan b l 0,...,la 0 l a+1,...,l b 0 u n k+1 t σ(dte σ δσ (u k C l 0,n L a,b E σ j+1 sj j dσ j+1 (s j b qj+1 u 1+lq t σ(dt. Proof. We apply Proposition 6.1 after integrating in s j+1,..., s a and using (2.13. Consequently if u : Ω that IR satisfy the hypotheses of Proposition 6.2 and is such j+1 s0 sj ( j u lp dσ j+1 (s j 0, (6.5 π σ -a.s., for all l l j N + 1, l 0 1,..., l j 1, j 1,..., N, or simply the cyclic condition D t0 u t1 (ω D tj u t0 (ω 0, ω Ω, t 1,..., t j, j 1,..., N, cf. Lemma 7.2 below, then we have n 1 ( n E σ δ σ (u n+1 u n k+1 t σ(dte σ δσ (u k, n 0,..., N, k k0 i.e. the moments of δ σ (u satisfy the same recurrence relation (5.8 as the moments of compensated Poisson integrals. The next proposition is used to prove Proposition 6.1 with the help of Lemma

30 Proposition 6.3 Let u : Ω IR and v : Ω IR be bounded processes with compact support in. For all k 0 we have E σ v s δ σ (ε + s u k σ(ds E σ δ σ (u k v s σ(ds + k a a0 j0 ( k a j ba ( 1 b a l 1 + +lak b l 1,...,la 0 where C La,b is defined in (5.2. Proof. C La,bE σ b+1 sj ( v s0 b dσ b+1 (s b Thiroof is an application of Theorem 5.1 with F v s. Using Proposition 7.1 below and the expansion j (I + si i0 j l0 0 i 0 < <i l j si0 sil, we have, up to the symmetrization due to the integral in σ(ds 0 σ(ds a and the summation on l 1,..., l a, ( b ε + s a v s0 (I + D s0 ε + s a ε + s a\ ( ( ( (I + si v s0 (I + D s0 i1 ( ( (I + si i1 v s0 b ( ( ( + (I + si v s0 D s0 i1 ( ( (I + si + i1 a j0 ( a j ε + s a v s0 b v s0 s1 sj ( ε + s a b b v s0 D s0 ( ε + s a\ + a j0 b b ( ( a s0 sj j hence by Theorem 5.1 applied to G v s with fixed s we have E σ v s δ σ ((I + D s u k σ(ds 30 v s0 b,,

31 k k a ( 1 b a a0 b0 l 1 + +lak b l 1,...,la 0 E σ b+1 ε + s a v s0 (I + D s0 k k a ( 1 b a a0 b0 + k a a1 j0 ( a j l 1 + +lak b l 1,...,la 0 k a ( 1 b a b0 E σ b+1 s0 sj ( E σ δ σ (u k + k a a1 j0 ( a j k a v s σ(ds ( 1 b a b0 C La,b ( b ε + s a C La,bE σ b+1 ε + s a v s0 v s0 l 1 + +lak b l 1,...,la 0 b l 1 + +lak b l 1,...,la 0 where we identified E σ δ σ (u k C La,b v s σ(ds to (6.6 on the last step, by another application of Theorem 5.1 to F v sσ(ds. ε + s a\ dσ b+1 (s b b ε + s a dσ b+1 (s b C La,bE σ b+1 sj ( v s0 b ε + s a\ dσ b+1 (s b (6.6 dσ b+1 (s b, 7 Appendix In this appendix we state some combinatorial results that have been used above. Proposition 7.1 Let u : Ω IR be a measurable process. For all 0 j, p n we have the relation ( n j n ε + s j \ u sp (I + si u sp, (7.1 i0 for mutually different s n (s 0,..., s n. 31

32 Proof. We will prove Relation (7.1 for all n 0 by induction on j {0, 1,..., n}. Clearly for j 0 the relation holds since u s0 n n ε + s 0 u sp u s0 u sp + u s0 n u s0 u sp + (I + s0 Ξ 1 Ξ n{s 0 } Ξ 0 Ξn{s 0 } s 0 / Ξ 0 n u sp. D Ξ1 u s1 D Ξn u sn D Ξ0u s0 D Ξnu sn Next, assuming that (7.1 holds at the rank j {0, 1,..., n 1} and taking {s 0,..., s n } mutually different we have n n ε + s j+1 \ u sp (I + 1 {p j+1} D sj+1 Ξ 0 Ξn {s j+1 } s j+1 / Ξ j+1 Ξ 0 Ξn {s j+1 } s j+1 / Ξ j+1 Ξ 0 Ξn {s j+1 } s j+1 / Ξ j+1 j (I + D si u sp i0 i p n j (I + D si D Ξp u sp i0 i p ( j n (I + si D Ξp u sp i0 Θ 0 Θn{s 0,s 1,...,s j } s 0 / Θ 0,...,s j / Θ j + Ξ 0 Ξn{s j+1 } s j+1 / Ξ j+1 ( j n (I + si i0 ( j+1 Θ 0 Θn{s 0,s 1,...,s j } s 0 / Θ 0,...,s j / Θ j D Θ0u s0 D Θnu sn Θ 0 Θn{s 0,s 1,...,s j } s 0 / Θ 0,...,s j / Θ j ( j u sp + sj+1 i0 n (I + si u sp. i0 32 D Θ0D Ξ0u s0 D ΘnD Ξnu sn D Ξ0D Θ0u s0 D ΞnD Θnu sn (I + si n u sp

Finally, in the next lemma, which is used to prove Corollary 3.2, we show that Relation (6.5) is satisfied provided $D_s u_t(\omega)$ satisfies the cyclic condition (7.2).

Lemma 7.2 Let $N \ge 1$ and assume that $u : \Omega \times X \to \mathbb{R}$ satisfies the cyclic condition
$$D_{t_0} u_{t_1}(\omega) \cdots D_{t_j} u_{t_0}(\omega) = 0, \qquad \omega \in \Omega, \quad t_0, t_1, \dots, t_j \in X, \qquad (7.2)$$
for $j = 1, \dots, N$. Then we have
$$\nabla_{t_0} \cdots \nabla_{t_j} \big( u_{t_0}(\omega) \cdots u_{t_j}(\omega) \big) = 0, \qquad \omega \in \Omega, \quad t_0, t_1, \dots, t_j \in X,$$
for $j = 1, \dots, N$.

Proof. By Definition 2.5 we have
$$\nabla_{t_0} \cdots \nabla_{t_j} \prod_{p=0}^{j} u_{t_p} = \sum_{\substack{\Theta_0 \cup \cdots \cup \Theta_j = \{t_0, t_1, \dots, t_j\} \\ t_0 \notin \Theta_0, \dots, t_j \notin \Theta_j}} D_{\Theta_0} u_{t_0} \cdots D_{\Theta_j} u_{t_j}, \qquad (7.3)$$
$t_0, \dots, t_j \in X$, $j = 2, \dots, N$. Without loss of generality we may assume that $t_0, t_1, \dots, t_j$ are not equal to each other and that $\Theta_0, \dots, \Theta_j \neq \emptyset$ and $\Theta_k \neq \Theta_l$, $0 \le k \neq l \le j$, in the above sum. In this case we can construct a sequence $(k_1, \dots, k_i)$ by choosing $t_0 \neq t_{k_1} \in \Theta_0$, $t_{k_2} \in \Theta_{k_1}, \dots, t_{k_{i-1}} \in \Theta_{k_{i-2}}$, until $t_{k_i} = t_0 \in \Theta_{k_{i-1}}$ for some $i \in \{2, \dots, j\}$, since $\Theta_0, \dots, \Theta_j \neq \emptyset$ and $\Theta_0 \cup \cdots \cup \Theta_j = \{t_0, t_1, \dots, t_j\}$. Hence by (7.2) we have
$$D_{t_{k_1}} u_{t_0} \, D_{t_{k_2}} u_{t_{k_1}} \cdots D_{t_{k_{i-1}}} u_{t_{k_{i-2}}} \, D_{t_0} u_{t_{k_{i-1}}} = 0,$$
which implies
$$D_{\Theta_0} u_{t_0} \, D_{\Theta_{k_1}} u_{t_{k_1}} \cdots D_{\Theta_{k_{i-1}}} u_{t_{k_{i-1}}} = 0,$$
since $(t_{k_1}, t_{k_2}, \dots, t_{k_{i-1}}, t_0) \in \Theta_0 \times \Theta_{k_1} \times \cdots \times \Theta_{k_{i-1}}$, hence
$$D_{\Theta_0} u_{t_0} \, D_{\Theta_1} u_{t_1} \cdots D_{\Theta_j} u_{t_j} = 0,$$
and (7.3) vanishes.

Again, in the case $X = \mathbb{R}_+$, Condition (7.2) holds in particular when either $D_s u_t = 0$, $0 \le s \le t$, as in (4.2), resp. $D_t u_s = 0$, $0 \le s \le t$, which is the case when $u$ is backward, resp. forward, predictable.

References

[1] S. Albeverio and N. V. Smorodina. A distributional approach to multiple stochastic integrals and transformations of the Poisson measure. Acta Appl. Math., 94(1):1-19.

[2] K. Bichteler. Stochastic integration with jumps, volume 89 of Encyclopedia of Mathematics and its Applications. Cambridge University Press, Cambridge.

[3] P. Brandimarte. Numerical methods in finance and economics. Statistics in Practice. Wiley-Interscience, John Wiley & Sons, Hoboken, NJ, second edition.

[4] P. Brémaud. Point processes and queues. Martingale dynamics. Springer Series in Statistics. Springer-Verlag, New York.

[5] T. Carleman. Les fonctions quasi analytiques. Gauthier-Villars, Paris.

[6] C. A. Charalambides. Enumerative combinatorics. CRC Press Series on Discrete Mathematics and its Applications. Chapman & Hall/CRC, Boca Raton, FL.

[7] Y. Davydov and S. Nagaev. On the convex hulls of point processes. Manuscript.

[8] A. Dermoune, P. Krée, and L. Wu. Calcul stochastique non adapté par rapport à la mesure aléatoire de Poisson. In Séminaire de Probabilités XXII, volume 1321 of Lecture Notes in Math., Springer, Berlin.

[9] Y. Ito. Generalized Poisson functionals. Probab. Theory Related Fields, 77:1-28.

[10] J. Jacod and A. N. Shiryaev. Limit theorems for stochastic processes, volume 288 of Grundlehren der Mathematischen Wissenschaften. Springer-Verlag, Berlin, second edition.

[11] G. Di Nunno, B. Øksendal, and F. Proske. Malliavin Calculus for Lévy Processes with Applications to Finance. Universitext. Springer-Verlag, Berlin.

[12] J. Picard. Formules de dualité sur l'espace de Poisson. Ann. Inst. H. Poincaré Probab. Statist., 32(4).

[13] N. Privault. Girsanov theorem for anticipative shifts on Poisson space. Probab. Theory Related Fields, 104:61-76.

[14] N. Privault. Moment identities for Poisson-Skorohod integrals and application to measure invariance. C. R. Math. Acad. Sci. Paris, 347.

[15] N. Privault. Moment identities for Skorohod integrals on the Wiener space and applications. Electron. Commun. Probab., 14 (electronic).

[16] N. Privault. Stochastic Analysis in Discrete and Continuous Settings, volume 1982 of Lecture Notes in Mathematics. Springer-Verlag, Berlin.

[17] N. Privault. Generalized Bell polynomials and the combinatorics of Poisson central moments. Electron. J. Combin., 18(1), Research Paper 54.

[18] N. Privault and J. L. Wu. Poisson stochastic integration in Hilbert spaces. Ann. Math. Blaise Pascal, 6(2):41-61.

[19] J. A. Shohat and J. D. Tamarkin. The Problem of Moments. American Mathematical Society Mathematical Surveys, vol. II. American Mathematical Society, New York.

[20] Y. Takahashi. Absolute continuity of Poisson random fields. Publ. RIMS Kyoto University, 26.

[21] A. S. Üstünel and M. Zakai. Analyse de rotations aléatoires sur l'espace de Wiener. C. R. Acad. Sci. Paris Sér. I Math., 319(10).

[22] A. S. Üstünel and M. Zakai. Random rotations of the Wiener path. Probab. Theory Relat. Fields, 103(3), 1995.

[23] A. S. Üstünel and M. Zakai. Transformation of measure on Wiener space. Springer Monographs in Mathematics. Springer-Verlag, Berlin.

[24] A. M. Veršik, I. M. Gel'fand, and M. I. Graev. Representations of the group of diffeomorphisms. Uspehi Mat. Nauk, 30(6(186)):1-50.

[25] S. Zuyev. Stopping sets: gamma-type results and hitting properties. Adv. in Appl. Probab., 31(2).


More information

On Reflecting Brownian Motion with Drift

On Reflecting Brownian Motion with Drift Proc. Symp. Stoch. Syst. Osaka, 25), ISCIE Kyoto, 26, 1-5) On Reflecting Brownian Motion with Drift Goran Peskir This version: 12 June 26 First version: 1 September 25 Research Report No. 3, 25, Probability

More information

A REPRESENTATION FOR THE KANTOROVICH RUBINSTEIN DISTANCE DEFINED BY THE CAMERON MARTIN NORM OF A GAUSSIAN MEASURE ON A BANACH SPACE

A REPRESENTATION FOR THE KANTOROVICH RUBINSTEIN DISTANCE DEFINED BY THE CAMERON MARTIN NORM OF A GAUSSIAN MEASURE ON A BANACH SPACE Theory of Stochastic Processes Vol. 21 (37), no. 2, 2016, pp. 84 90 G. V. RIABOV A REPRESENTATION FOR THE KANTOROVICH RUBINSTEIN DISTANCE DEFINED BY THE CAMERON MARTIN NORM OF A GAUSSIAN MEASURE ON A BANACH

More information

Brownian motion. Samy Tindel. Purdue University. Probability Theory 2 - MA 539

Brownian motion. Samy Tindel. Purdue University. Probability Theory 2 - MA 539 Brownian motion Samy Tindel Purdue University Probability Theory 2 - MA 539 Mostly taken from Brownian Motion and Stochastic Calculus by I. Karatzas and S. Shreve Samy T. Brownian motion Probability Theory

More information

An Infinitesimal Approach to Stochastic Analysis on Abstract Wiener Spaces

An Infinitesimal Approach to Stochastic Analysis on Abstract Wiener Spaces An Infinitesimal Approach to Stochastic Analysis on Abstract Wiener Spaces Dissertation zur Erlangung des akademischen Grades eines Doktors der Naturwissenschaften an der Fakultät für Mathematik, Informatik

More information

Functional Analysis. Franck Sueur Metric spaces Definitions Completeness Compactness Separability...

Functional Analysis. Franck Sueur Metric spaces Definitions Completeness Compactness Separability... Functional Analysis Franck Sueur 2018-2019 Contents 1 Metric spaces 1 1.1 Definitions........................................ 1 1.2 Completeness...................................... 3 1.3 Compactness......................................

More information

Two-boundary lattice paths and parking functions

Two-boundary lattice paths and parking functions Two-boundary lattice paths and parking functions Joseph PS Kung 1, Xinyu Sun 2, and Catherine Yan 3,4 1 Department of Mathematics, University of North Texas, Denton, TX 76203 2,3 Department of Mathematics

More information

Brownian Motion. 1 Definition Brownian Motion Wiener measure... 3

Brownian Motion. 1 Definition Brownian Motion Wiener measure... 3 Brownian Motion Contents 1 Definition 2 1.1 Brownian Motion................................. 2 1.2 Wiener measure.................................. 3 2 Construction 4 2.1 Gaussian process.................................

More information

PROBABILITY: LIMIT THEOREMS II, SPRING HOMEWORK PROBLEMS

PROBABILITY: LIMIT THEOREMS II, SPRING HOMEWORK PROBLEMS PROBABILITY: LIMIT THEOREMS II, SPRING 218. HOMEWORK PROBLEMS PROF. YURI BAKHTIN Instructions. You are allowed to work on solutions in groups, but you are required to write up solutions on your own. Please

More information

-variation of the divergence integral w.r.t. fbm with Hurst parameter H < 1 2

-variation of the divergence integral w.r.t. fbm with Hurst parameter H < 1 2 /4 On the -variation of the divergence integral w.r.t. fbm with urst parameter < 2 EL ASSAN ESSAKY joint work with : David Nualart Cadi Ayyad University Poly-disciplinary Faculty, Safi Colloque Franco-Maghrébin

More information

Measure theory and probability

Measure theory and probability Chapter 1 Measure theory and probability Aim and contents This chapter contains a number of exercises, aimed at familiarizing the reader with some important measure theoretic concepts, such as: Monotone

More information

Stochastic Volatility and Correction to the Heat Equation

Stochastic Volatility and Correction to the Heat Equation Stochastic Volatility and Correction to the Heat Equation Jean-Pierre Fouque, George Papanicolaou and Ronnie Sircar Abstract. From a probabilist s point of view the Twentieth Century has been a century

More information

Stochastic Processes. Winter Term Paolo Di Tella Technische Universität Dresden Institut für Stochastik

Stochastic Processes. Winter Term Paolo Di Tella Technische Universität Dresden Institut für Stochastik Stochastic Processes Winter Term 2016-2017 Paolo Di Tella Technische Universität Dresden Institut für Stochastik Contents 1 Preliminaries 5 1.1 Uniform integrability.............................. 5 1.2

More information

Some SDEs with distributional drift Part I : General calculus. Flandoli, Franco; Russo, Francesco; Wolf, Jochen

Some SDEs with distributional drift Part I : General calculus. Flandoli, Franco; Russo, Francesco; Wolf, Jochen Title Author(s) Some SDEs with distributional drift Part I : General calculus Flandoli, Franco; Russo, Francesco; Wolf, Jochen Citation Osaka Journal of Mathematics. 4() P.493-P.54 Issue Date 3-6 Text

More information

LARGE DEVIATIONS OF TYPICAL LINEAR FUNCTIONALS ON A CONVEX BODY WITH UNCONDITIONAL BASIS. S. G. Bobkov and F. L. Nazarov. September 25, 2011

LARGE DEVIATIONS OF TYPICAL LINEAR FUNCTIONALS ON A CONVEX BODY WITH UNCONDITIONAL BASIS. S. G. Bobkov and F. L. Nazarov. September 25, 2011 LARGE DEVIATIONS OF TYPICAL LINEAR FUNCTIONALS ON A CONVEX BODY WITH UNCONDITIONAL BASIS S. G. Bobkov and F. L. Nazarov September 25, 20 Abstract We study large deviations of linear functionals on an isotropic

More information

Stochastic integration. P.J.C. Spreij

Stochastic integration. P.J.C. Spreij Stochastic integration P.J.C. Spreij this version: April 22, 29 Contents 1 Stochastic processes 1 1.1 General theory............................... 1 1.2 Stopping times...............................

More information

Quasi-invariant measures on the path space of a diffusion

Quasi-invariant measures on the path space of a diffusion Quasi-invariant measures on the path space of a diffusion Denis Bell 1 Department of Mathematics, University of North Florida, 4567 St. Johns Bluff Road South, Jacksonville, FL 32224, U. S. A. email: dbell@unf.edu,

More information

SUMMARY OF RESULTS ON PATH SPACES AND CONVERGENCE IN DISTRIBUTION FOR STOCHASTIC PROCESSES

SUMMARY OF RESULTS ON PATH SPACES AND CONVERGENCE IN DISTRIBUTION FOR STOCHASTIC PROCESSES SUMMARY OF RESULTS ON PATH SPACES AND CONVERGENCE IN DISTRIBUTION FOR STOCHASTIC PROCESSES RUTH J. WILLIAMS October 2, 2017 Department of Mathematics, University of California, San Diego, 9500 Gilman Drive,

More information

LOCAL TIMES OF RANKED CONTINUOUS SEMIMARTINGALES

LOCAL TIMES OF RANKED CONTINUOUS SEMIMARTINGALES LOCAL TIMES OF RANKED CONTINUOUS SEMIMARTINGALES ADRIAN D. BANNER INTECH One Palmer Square Princeton, NJ 8542, USA adrian@enhanced.com RAOUF GHOMRASNI Fakultät II, Institut für Mathematik Sekr. MA 7-5,

More information

Set-Indexed Processes with Independent Increments

Set-Indexed Processes with Independent Increments Set-Indexed Processes with Independent Increments R.M. Balan May 13, 2002 Abstract Set-indexed process with independent increments are described by convolution systems ; the construction of such a process

More information

1. Stochastic Processes and filtrations

1. Stochastic Processes and filtrations 1. Stochastic Processes and 1. Stoch. pr., A stochastic process (X t ) t T is a collection of random variables on (Ω, F) with values in a measurable space (S, S), i.e., for all t, In our case X t : Ω S

More information

Solving the Poisson Disorder Problem

Solving the Poisson Disorder Problem Advances in Finance and Stochastics: Essays in Honour of Dieter Sondermann, Springer-Verlag, 22, (295-32) Research Report No. 49, 2, Dept. Theoret. Statist. Aarhus Solving the Poisson Disorder Problem

More information

Pathwise volatility in a long-memory pricing model: estimation and asymptotic behavior

Pathwise volatility in a long-memory pricing model: estimation and asymptotic behavior Pathwise volatility in a long-memory pricing model: estimation and asymptotic behavior Ehsan Azmoodeh University of Vaasa Finland 7th General AMaMeF and Swissquote Conference September 7 1, 215 Outline

More information

BALANCING GAUSSIAN VECTORS. 1. Introduction

BALANCING GAUSSIAN VECTORS. 1. Introduction BALANCING GAUSSIAN VECTORS KEVIN P. COSTELLO Abstract. Let x 1,... x n be independent normally distributed vectors on R d. We determine the distribution function of the minimum norm of the 2 n vectors

More information

Fock factorizations, and decompositions of the L 2 spaces over general Lévy processes

Fock factorizations, and decompositions of the L 2 spaces over general Lévy processes Fock factorizations, and decompositions of the L 2 spaces over general Lévy processes A. M. Vershik N. V. Tsilevich Abstract We explicitly construct and study an isometry between the spaces of square integrable

More information

A RECONSTRUCTION FORMULA FOR BAND LIMITED FUNCTIONS IN L 2 (R d )

A RECONSTRUCTION FORMULA FOR BAND LIMITED FUNCTIONS IN L 2 (R d ) PROCEEDINGS OF THE AMERICAN MATHEMATICAL SOCIETY Volume 127, Number 12, Pages 3593 3600 S 0002-9939(99)04938-2 Article electronically published on May 6, 1999 A RECONSTRUCTION FORMULA FOR AND LIMITED FUNCTIONS

More information

Random Bernstein-Markov factors

Random Bernstein-Markov factors Random Bernstein-Markov factors Igor Pritsker and Koushik Ramachandran October 20, 208 Abstract For a polynomial P n of degree n, Bernstein s inequality states that P n n P n for all L p norms on the unit

More information

Unfolding the Skorohod reflection of a semimartingale

Unfolding the Skorohod reflection of a semimartingale Unfolding the Skorohod reflection of a semimartingale Vilmos Prokaj To cite this version: Vilmos Prokaj. Unfolding the Skorohod reflection of a semimartingale. Statistics and Probability Letters, Elsevier,

More information

ON SECOND ORDER DERIVATIVES OF CONVEX FUNCTIONS ON INFINITE DIMENSIONAL SPACES WITH MEASURES

ON SECOND ORDER DERIVATIVES OF CONVEX FUNCTIONS ON INFINITE DIMENSIONAL SPACES WITH MEASURES ON SECOND ORDER DERIVATIVES OF CONVEX FUNCTIONS ON INFINITE DIMENSIONAL SPACES WITH MEASURES VLADIMIR I. BOGACHEV AND BEN GOLDYS Abstract. We consider convex functions on infinite dimensional spaces equipped

More information

Minimal Sufficient Conditions for a Primal Optimizer in Nonsmooth Utility Maximization

Minimal Sufficient Conditions for a Primal Optimizer in Nonsmooth Utility Maximization Finance and Stochastics manuscript No. (will be inserted by the editor) Minimal Sufficient Conditions for a Primal Optimizer in Nonsmooth Utility Maximization Nicholas Westray Harry Zheng. Received: date

More information

Stochastic Processes II/ Wahrscheinlichkeitstheorie III. Lecture Notes

Stochastic Processes II/ Wahrscheinlichkeitstheorie III. Lecture Notes BMS Basic Course Stochastic Processes II/ Wahrscheinlichkeitstheorie III Michael Scheutzow Lecture Notes Technische Universität Berlin Sommersemester 218 preliminary version October 12th 218 Contents

More information

An Almost Sure Approximation for the Predictable Process in the Doob Meyer Decomposition Theorem

An Almost Sure Approximation for the Predictable Process in the Doob Meyer Decomposition Theorem An Almost Sure Approximation for the Predictable Process in the Doob Meyer Decomposition heorem Adam Jakubowski Nicolaus Copernicus University, Faculty of Mathematics and Computer Science, ul. Chopina

More information

Itô s formula. Samy Tindel. Purdue University. Probability Theory 2 - MA 539

Itô s formula. Samy Tindel. Purdue University. Probability Theory 2 - MA 539 Itô s formula Samy Tindel Purdue University Probability Theory 2 - MA 539 Mostly taken from Brownian Motion and Stochastic Calculus by I. Karatzas and S. Shreve Samy T. Itô s formula Probability Theory

More information

ON THE BOUNDEDNESS BEHAVIOR OF THE SPECTRAL FACTORIZATION IN THE WIENER ALGEBRA FOR FIR DATA

ON THE BOUNDEDNESS BEHAVIOR OF THE SPECTRAL FACTORIZATION IN THE WIENER ALGEBRA FOR FIR DATA ON THE BOUNDEDNESS BEHAVIOR OF THE SPECTRAL FACTORIZATION IN THE WIENER ALGEBRA FOR FIR DATA Holger Boche and Volker Pohl Technische Universität Berlin, Heinrich Hertz Chair for Mobile Communications Werner-von-Siemens

More information

1 Introduction. 2 Measure theoretic definitions

1 Introduction. 2 Measure theoretic definitions 1 Introduction These notes aim to recall some basic definitions needed for dealing with random variables. Sections to 5 follow mostly the presentation given in chapter two of [1]. Measure theoretic definitions

More information

On the constant in the reverse Brunn-Minkowski inequality for p-convex balls.

On the constant in the reverse Brunn-Minkowski inequality for p-convex balls. On the constant in the reverse Brunn-Minkowski inequality for p-convex balls. A.E. Litvak Abstract This note is devoted to the study of the dependence on p of the constant in the reverse Brunn-Minkowski

More information

On chaos representation and orthogonal polynomials for the doubly stochastic Poisson process

On chaos representation and orthogonal polynomials for the doubly stochastic Poisson process dept. of math./cma univ. of oslo pure mathematics No 1 ISSN 86 2439 March 212 On chaos representation and orthogonal polynomials for the doubly stochastic Poisson process Giulia Di Nunno and Steffen Sjursen

More information

STATISTICS 385: STOCHASTIC CALCULUS HOMEWORK ASSIGNMENT 4 DUE NOVEMBER 23, = (2n 1)(2n 3) 3 1.

STATISTICS 385: STOCHASTIC CALCULUS HOMEWORK ASSIGNMENT 4 DUE NOVEMBER 23, = (2n 1)(2n 3) 3 1. STATISTICS 385: STOCHASTIC CALCULUS HOMEWORK ASSIGNMENT 4 DUE NOVEMBER 23, 26 Problem Normal Moments (A) Use the Itô formula and Brownian scaling to check that the even moments of the normal distribution

More information

OPTIMAL SOLUTIONS TO STOCHASTIC DIFFERENTIAL INCLUSIONS

OPTIMAL SOLUTIONS TO STOCHASTIC DIFFERENTIAL INCLUSIONS APPLICATIONES MATHEMATICAE 29,4 (22), pp. 387 398 Mariusz Michta (Zielona Góra) OPTIMAL SOLUTIONS TO STOCHASTIC DIFFERENTIAL INCLUSIONS Abstract. A martingale problem approach is used first to analyze

More information

Probability approximation by Clark-Ocone covariance representation

Probability approximation by Clark-Ocone covariance representation Probability approximation by Clark-Ocone covariance representation Nicolas Privault Giovanni Luca Torrisi October 19, 13 Abstract Based on the Stein method and a general integration by parts framework

More information

The Hopf argument. Yves Coudene. IRMAR, Université Rennes 1, campus beaulieu, bat Rennes cedex, France

The Hopf argument. Yves Coudene. IRMAR, Université Rennes 1, campus beaulieu, bat Rennes cedex, France The Hopf argument Yves Coudene IRMAR, Université Rennes, campus beaulieu, bat.23 35042 Rennes cedex, France yves.coudene@univ-rennes.fr slightly updated from the published version in Journal of Modern

More information

1 Math 241A-B Homework Problem List for F2015 and W2016

1 Math 241A-B Homework Problem List for F2015 and W2016 1 Math 241A-B Homework Problem List for F2015 W2016 1.1 Homework 1. Due Wednesday, October 7, 2015 Notation 1.1 Let U be any set, g be a positive function on U, Y be a normed space. For any f : U Y let

More information

Performance Evaluation of Generalized Polynomial Chaos

Performance Evaluation of Generalized Polynomial Chaos Performance Evaluation of Generalized Polynomial Chaos Dongbin Xiu, Didier Lucor, C.-H. Su, and George Em Karniadakis 1 Division of Applied Mathematics, Brown University, Providence, RI 02912, USA, gk@dam.brown.edu

More information

The Knaster problem and the geometry of high-dimensional cubes

The Knaster problem and the geometry of high-dimensional cubes The Knaster problem and the geometry of high-dimensional cubes B. S. Kashin (Moscow) S. J. Szarek (Paris & Cleveland) Abstract We study questions of the following type: Given positive semi-definite matrix

More information

Ito Formula for Stochastic Integrals w.r.t. Compensated Poisson Random Measures on Separable Banach Spaces. B. Rüdiger, G. Ziglio. no.

Ito Formula for Stochastic Integrals w.r.t. Compensated Poisson Random Measures on Separable Banach Spaces. B. Rüdiger, G. Ziglio. no. Ito Formula for Stochastic Integrals w.r.t. Compensated Poisson Random Measures on Separable Banach Spaces B. Rüdiger, G. Ziglio no. 187 Diese rbeit ist mit Unterstützung des von der Deutschen Forschungsgemeinschaft

More information

On an Effective Solution of the Optimal Stopping Problem for Random Walks

On an Effective Solution of the Optimal Stopping Problem for Random Walks QUANTITATIVE FINANCE RESEARCH CENTRE QUANTITATIVE FINANCE RESEARCH CENTRE Research Paper 131 September 2004 On an Effective Solution of the Optimal Stopping Problem for Random Walks Alexander Novikov and

More information

Chapter One. The Calderón-Zygmund Theory I: Ellipticity

Chapter One. The Calderón-Zygmund Theory I: Ellipticity Chapter One The Calderón-Zygmund Theory I: Ellipticity Our story begins with a classical situation: convolution with homogeneous, Calderón- Zygmund ( kernels on R n. Let S n 1 R n denote the unit sphere

More information

arxiv: v2 [math.pr] 22 Aug 2009

arxiv: v2 [math.pr] 22 Aug 2009 On the structure of Gaussian random variables arxiv:97.25v2 [math.pr] 22 Aug 29 Ciprian A. Tudor SAMOS/MATISSE, Centre d Economie de La Sorbonne, Université de Panthéon-Sorbonne Paris, 9, rue de Tolbiac,

More information

ELEMENTS OF PROBABILITY THEORY

ELEMENTS OF PROBABILITY THEORY ELEMENTS OF PROBABILITY THEORY Elements of Probability Theory A collection of subsets of a set Ω is called a σ algebra if it contains Ω and is closed under the operations of taking complements and countable

More information

The Lebesgue Integral

The Lebesgue Integral The Lebesgue Integral Brent Nelson In these notes we give an introduction to the Lebesgue integral, assuming only a knowledge of metric spaces and the iemann integral. For more details see [1, Chapters

More information

(2m)-TH MEAN BEHAVIOR OF SOLUTIONS OF STOCHASTIC DIFFERENTIAL EQUATIONS UNDER PARAMETRIC PERTURBATIONS

(2m)-TH MEAN BEHAVIOR OF SOLUTIONS OF STOCHASTIC DIFFERENTIAL EQUATIONS UNDER PARAMETRIC PERTURBATIONS (2m)-TH MEAN BEHAVIOR OF SOLUTIONS OF STOCHASTIC DIFFERENTIAL EQUATIONS UNDER PARAMETRIC PERTURBATIONS Svetlana Janković and Miljana Jovanović Faculty of Science, Department of Mathematics, University

More information

The Cameron-Martin-Girsanov (CMG) Theorem

The Cameron-Martin-Girsanov (CMG) Theorem The Cameron-Martin-Girsanov (CMG) Theorem There are many versions of the CMG Theorem. In some sense, there are many CMG Theorems. The first version appeared in ] in 944. Here we present a standard version,

More information

Stability and Sensitivity of the Capacity in Continuous Channels. Malcolm Egan

Stability and Sensitivity of the Capacity in Continuous Channels. Malcolm Egan Stability and Sensitivity of the Capacity in Continuous Channels Malcolm Egan Univ. Lyon, INSA Lyon, INRIA 2019 European School of Information Theory April 18, 2019 1 / 40 Capacity of Additive Noise Models

More information

Introduction to Infinite Dimensional Stochastic Analysis

Introduction to Infinite Dimensional Stochastic Analysis Introduction to Infinite Dimensional Stochastic Analysis By Zhi yuan Huang Department of Mathematics, Huazhong University of Science and Technology, Wuhan P. R. China and Jia an Yan Institute of Applied

More information

Properties of delta functions of a class of observables on white noise functionals

Properties of delta functions of a class of observables on white noise functionals J. Math. Anal. Appl. 39 7) 93 9 www.elsevier.com/locate/jmaa Properties of delta functions of a class of observables on white noise functionals aishi Wang School of Mathematics and Information Science,

More information

Deviation Measures and Normals of Convex Bodies

Deviation Measures and Normals of Convex Bodies Beiträge zur Algebra und Geometrie Contributions to Algebra Geometry Volume 45 (2004), No. 1, 155-167. Deviation Measures Normals of Convex Bodies Dedicated to Professor August Florian on the occasion

More information