Mixing Properties of Multivariate Infinitely Divisible Random Fields


Riccardo Passeggeri · Almut E. D. Veraart

Received: 27 September 2017 / Revised: 7 September 2018
© The Author(s) 2018

Abstract In this work we present different results concerning mixing properties of multivariate infinitely divisible (ID) stationary random fields. First, we derive some necessary and sufficient conditions for mixing of stationary ID multivariate random fields in terms of their spectral representation. Second, we prove that (linear combinations of independent) mixed moving average fields are mixing. Further, using a simple modification of the proofs of our results, we are able to obtain weak mixing versions of our results. Finally, we prove the equivalence of ergodicity and weak mixing for multivariate ID stationary random fields.

Keywords Multivariate random field · Infinitely divisible · Mixed moving average · Lévy process · Mixing · Weak mixing · Ergodicity

Mathematics Subject Classification (2010) 60E07 · 37A25 · 60G60 · 62M40 · 60G10

Riccardo Passeggeri: riccardo.passeggeri4@imperial.ac.uk
Almut E. D. Veraart: a.veraart@imperial.ac.uk
Department of Mathematics, Imperial College London, Exhibition Road, London SW7 2AZ, UK

1 Introduction

In 1970, in his fundamental work [11], Maruyama provided pivotal results for infinitely divisible (ID) processes. Among them, he proved that under certain conditions, known afterwards as the Maruyama conditions, these processes are mixing (see Theorem 6 of [11]). After him, various authors contributed to this line of research; see for example Gross [7] and Kokoszka and Taqqu [9]. In 1996, Rosinski and Zak extended Maruyama's results by proving that a stationary ID process (X_t)_{t∈R} is mixing if and only if lim_{t→∞} E[e^{i(X_t − X_0)}] = E[e^{iX_0}] E[e^{−iX_0}], provided the Lévy measure of X_0 has no atoms in 2πZ. More recently, Fuchs and Stelzer [6] extended some of the main results of Rosinski and Zak to the multivariate case. Parallel to this line of research, new developments have been obtained for ergodic and weak mixing properties of infinitely divisible random fields. In particular, see Roy [15] and [16] for Poissonian ID random fields, and Roy [17], Roy and Samorodnitsky [18] and [22] for α-stable univariate random fields.

In the present work we fill an important gap by extending the results of Maruyama [11], Rosinski and Zak [13,14], and Fuchs and Stelzer [6] to the multivariate random field case. First, this is crucial for applications, since many of them consider a multidimensional domain composed of both spatial and temporal components (and not just temporal ones). This is typically the case for many physical systems, like turbulence (e.g. [1,2]), and in econometrics (e.g. models based on panel data). Second, with the present work we also close the gap between the two lines of research presented above by focusing on the more general case of multivariate stationary ID random fields.

On the modelling/application level, we prove that multivariate mixed moving average fields are mixing. This is a relevant result since Lévy-driven moving average fields are extensively used in many applications throughout different disciplines, like brain imaging [8], tumour growth [3] and turbulence [2,3], among many others. Moreover, we discuss conditions which ensure that a multivariate random field is weakly mixing. In particular, we show that the proofs of the results obtained for the mixing case can be slightly modified to obtain similar results for the weak mixing case. Finally, we prove that a multivariate stationary ID random field is weakly mixing if and only if it is ergodic.

The present work is structured as follows. In Sect. 2, we discuss some preliminaries on mixing and derive the mixing conditions for multivariate ID stationary random fields. In addition, we study some extensions and other related results. In Sect. 3, we prove that (sums of independent) mixed moving averages (MMA) are mixing, including MMA with an extended subordinated basis. In Sect. 4, we obtain weak mixing versions of the results obtained in Sect. 2 and we prove the equivalence between ergodicity and weak mixing for stationary ID random fields. In order to simplify the exposition, we have placed the long proofs in the appendices.

2 Preliminaries and Results on Mixing Conditions

In this section we analyse mixing conditions for stationary infinitely divisible random fields. We work with the probability space (Ω, F, P) and the measurable space (R^d, B(R^d)), where B(R^d) is the Borel σ-algebra on the vector space R^d. We write L(X_t) for the distribution, or law, of the random variable X_t. Now, let (θ_t)_{t∈R^l} be a measure-preserving R^l-action on (Ω, F, P). Consider the random field X_t(ω) = X_0(θ_t(ω)), t ∈ R^l. The random field (X_t)_{t∈R^l} defined in this way is stationary and, conversely, any stationary measurable random field can be expressed in this form. Further, we have, with a slight abuse of notation, θ_v(B) := {θ_v(ω) : ω ∈ B} = {ω' : X_0(ω') = X_v(ω) for ω ∈ B}. We denote by ‖·‖ the supremum norm.
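For intuition, a standard concrete instance of such an action (a minimal illustration added here, under the usual identification of Ω with a space of paths ω : R^l → R^d and X_0(ω) = ω(0)) is the group of shifts

\[ (\theta_t \omega)(s) := \omega(s + t), \qquad s, t \in R^l, \]

so that X_t(ω) = X_0(θ_t ω) = ω(t), and stationarity of (X_t)_{t∈R^l} is exactly the statement that each θ_t preserves the law of the path.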

With this construction, (X_t)_{t∈R^l} is mixing if and only if (see Wang, Roy and Stoev [22], equation (4.4))

\[ \lim_{n\to\infty} P\big(A \cap \theta_{t_n}(B)\big) = P(A)\,P(B), \qquad (1) \]

for all A, B ∈ σ_X and all (t_n)_{n∈N} ∈ T, where σ_X := σ({X_t : t ∈ R^l}) is the σ-algebra generated by (X_t)_{t∈R^l} and T := {(t_n)_{n∈N} ⊂ R^l : lim_{n→∞} ‖t_n‖ = ∞}.

The following characterisation is based on the characteristic function of (X_t)_{t∈R^l} (see [22], equation (A.6)): (X_t)_{t∈R^l} is mixing if and only if

\[ \lim_{n\to\infty} E\Big[\exp\Big(i\sum_{j=1}^{r}\beta_j X_{s_j} + i\sum_{k=1}^{q}\gamma_k X_{p_k+t_n}\Big)\Big] = E\Big[\exp\Big(i\sum_{j=1}^{r}\beta_j X_{s_j}\Big)\Big]\, E\Big[\exp\Big(i\sum_{k=1}^{q}\gamma_k X_{p_k}\Big)\Big], \qquad (2) \]

for all r, q ∈ N, β_j, γ_k ∈ R, s_j, p_k ∈ R^l and (t_n)_{n∈N} ∈ T.

Further, for a multivariate (or R^d-valued) random field, we have the following definition based on the characteristic function of (X_t)_{t∈R^l}.

Definition 2.1 Let (X_t)_{t∈R^l} be an R^d-valued stationary random field. Then (X_t)_{t∈R^l} is said to be mixing if for all λ = (s_1,…,s_m), μ = (p_1,…,p_m) ∈ R^{ml} and θ_1, θ_2 ∈ R^{md}

\[ \lim_{n\to\infty} E\big[\exp\big(i\langle\theta_1, X_\lambda\rangle + i\langle\theta_2, X_{\mu_n}\rangle\big)\big] = E\big[\exp(i\langle\theta_1, X_\lambda\rangle)\big]\, E\big[\exp(i\langle\theta_2, X_\mu\rangle)\big], \qquad (3) \]

where X_λ := (X_{s_1},…,X_{s_m}) ∈ R^{md} and μ_n = (p_1 + t_n,…,p_m + t_n), and where (t_n)_{n∈N} is any sequence in T.

Further, we recall the definition of an infinitely divisible random field.

Definition 2.2 An R^d-valued random field (X_t)_{t∈R^l} (or its distribution) is said to be infinitely divisible if for every (X_{t_1},…,X_{t_k}), where k ∈ N, and for every n ∈ N there exist i.i.d. random vectors Y_i^{(n,k)}, i = 1,…,n, in R^{dk} (possibly defined on a different probability space) such that

\[ (X_{t_1},…,X_{t_k}) \overset{d}{=} Y_1^{(n,k)} + \cdots + Y_n^{(n,k)}. \]

It is straightforward to see that the above definition is equivalent to the following one. An R^d-valued random field (X_t)_{t∈R^l} (or its distribution) is said to be infinitely divisible if for every finite-dimensional distribution of (X_t)_{t∈R^l}, namely F_{t_1,…,t_k}(x_1,…,x_k) := P(X_{t_1} < x_1,…,X_{t_k} < x_k), where k ∈ N, and for every n ∈ N, there exists a probability measure μ_{n,k} on R^{dk} such that

\[ F_{t_1,…,t_k}(x_1,…,x_k) = \underbrace{\mu_{n,k} * \cdots * \mu_{n,k}}_{n\ \text{times}} = \mu_{n,k}^{*n}. \]
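As a quick numerical illustration of Definition 2.1 (and of the componentwise criterion (4) in Theorem 2.3 below), the following sketch simulates a bivariate Gaussian AR(1) field on Z (so l = 1, d = 2), which is ID and mixing, and checks empirically that E[e^{i(X_{t}^{(j)} − X_0^{(k)})}] approaches E[e^{iX_0^{(j)}}] E[e^{−iX_0^{(k)}}] as the lag grows. The model, sample sizes and lags are illustrative assumptions and not part of the paper.

```python
# Illustrative only: empirical check of the characteristic-function mixing
# criterion for a bivariate Gaussian AR(1) field (l = 1, d = 2).
import numpy as np

rng = np.random.default_rng(0)
n_paths, burn_in = 20_000, 300
lags = (1, 5, 20, 100)
A = np.array([[0.6, 0.2],
              [0.1, 0.7]])                      # AR(1) matrix, spectral radius < 1

# Run the recursion across many independent paths, keeping only the states
# at time `burn_in` (approximately stationary) and at the requested lags.
x = np.zeros((n_paths, 2))
snapshots = {}
for t in range(1, burn_in + max(lags) + 1):
    x = x @ A.T + rng.standard_normal((n_paths, 2))
    if t - burn_in == 0 or t - burn_in in lags:
        snapshots[t - burn_in] = x.copy()

ecf = lambda z: np.mean(np.exp(1j * z))          # empirical characteristic function at 1

j, k = 0, 1
rhs = ecf(snapshots[0][:, j]) * ecf(-snapshots[0][:, k])   # E[e^{iX_0^(j)}] E[e^{-iX_0^(k)}]
for lag in lags:
    lhs = ecf(snapshots[lag][:, j] - snapshots[0][:, k])   # E[e^{i(X_lag^(j) - X_0^(k))}]
    print(f"lag = {lag:3d}   |lhs - rhs| = {abs(lhs - rhs):.4f}")
```

The printed discrepancy shrinks to the Monte Carlo noise level as the lag increases, in line with criterion (4).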

We are now ready to state our first result.

Theorem 2.3 Let (X_t)_{t∈R^l}, with l ∈ N, be an R^d-valued strictly stationary infinitely divisible random field such that Q_0, the Lévy measure of L(X_0), satisfies Q_0({x = (x_1,…,x_d) ∈ R^d : ∃ j ∈ {1,…,d}, x_j ∈ 2πZ}) = 0. Then (X_t)_{t∈R^l} is mixing if and only if

\[ \lim_{n\to\infty} E\big[e^{i(X_{t_n}^{(j)} - X_0^{(k)})}\big] = E\big[e^{iX_0^{(j)}}\big]\, E\big[e^{-iX_0^{(k)}}\big], \qquad (4) \]

for any j, k = 1,…,d and for any sequence (t_n)_{n∈N} ∈ T.

The above theorem relies on the following result, which is the multivariate random field extension of the Maruyama conditions (see Theorem 6 of [11]).

Theorem 2.4 Let (X_t)_{t∈R^l} be an R^d-valued strictly stationary infinitely divisible random field. Then (X_t)_{t∈R^l} is mixing if and only if

(MM1) the covariance matrix function Σ(t_n) of the Gaussian part of (X_t)_{t∈R^l} tends to 0, as n → ∞,
(MM2′) lim_{n→∞} Q_{t_n}(‖x‖ ‖y‖ > δ) = 0 for any δ > 0, where Q_{t_n} is the Lévy measure of L(X_0, X_{t_n}) on (R^{2d}, B(R^{2d})),

where (t_n)_{n∈N} is any sequence in T.

Notice that the above conditions are fewer than the Maruyama conditions. This is because we use the following lemma, which is a multivariate random field extension of the corresponding lemma in [11] and of Lemma 2.2 of [6].

Lemma 2.5 Assume that lim_{n→∞} Q_{t_n}(‖x‖ ‖y‖ > δ) = 0 holds for any δ > 0, where Q_{t_n} is the Lévy measure of L(X_0, X_{t_n}) on (R^{2d}, B(R^{2d})), and (t_n)_{n∈N} ⊂ R^l. Then one has

\[ \lim_{n\to\infty} \int_{0 < \|x\|^2 + \|y\|^2 \le 1} \|x\|\, \|y\|\, Q_{t_n}(dx, dy) = 0. \]

2.1 Related Results and Extensions

In this section, we present different results which follow from, are related to, or extend the theorems presented in the previous section. The first result is a corollary which follows immediately from Theorem 2.3 and states that a multivariate random field is mixing if and only if its components are pairwise mixing.

Corollary 2.6 An R^d-valued strictly stationary ID random field X = (X_t)_{t∈R^l} with Q_0({x = (x_1,…,x_d) ∈ R^d : ∃ j ∈ {1,…,d}, x_j ∈ 2πZ}) = 0 is mixing if and only if the bivariate random fields (X^{(j)}, X^{(k)}), j, k ∈ {1,…,d}, j < k, are all mixing.

Proof It follows immediately from Theorem 2.3.
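As an aside that may help with intuition for the atom condition on Q_0 (an illustration added here, not part of the original argument): take d = l = 1 and X_t := 2πN for all t ∈ R, where N is a Poisson random variable with mean 1. Then (X_t)_{t∈R} is stationary and ID, its Lévy measure Q_0 = δ_{2π} has an atom in 2πZ, and

\[ E\big[e^{i(X_{t_n} - X_0)}\big] = 1 = E\big[e^{iX_0}\big]\, E\big[e^{-iX_0}\big], \]

so criterion (4) holds trivially even though the field, being constant in t and non-deterministic, is not mixing (indeed not even ergodic). This is why Theorem 2.3 excludes such atoms; Theorem 2.11 below addresses the remaining cases by rescaling.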

The following corollary is a generalisation of Corollary 2.5 of [6].

Corollary 2.7 Let (X_t)_{t∈R^l} be an R^d-valued strictly stationary ID random field. Then, with the previous notation, (X_t)_{t∈R^l} is mixing if and only if

\[ \lim_{n\to\infty}\Big\{ \|\Sigma(t_n)\| + \int_{R^{2d}} \min(1, \|x\|\,\|y\|)\, Q_{t_n}(dx, dy) \Big\} = 0, \qquad (5) \]

for any (t_n)_{n∈N} ∈ T.

Proof We can follow the argument of [6]. To this end, note that if we assume that (5) holds, then conditions (MM1) and (MM2′) hold and, thus, Theorem 2.4 implies that (X_t)_{t∈R^l} is mixing. For the other direction, assume that (X_t)_{t∈R^l} is mixing; then by Theorem 2.4 condition (MM1) holds. Furthermore, for every δ > 0 with Q^{jk}(∂K_δ) = 0 and any j, k = 1,…,d (cf. (21)),

\[ Q^{(jk)}_{t_n}\big|_{K_\delta^c} \Rightarrow Q^{jk}\big|_{K_\delta^c}, \quad \text{as } n \to \infty, \qquad (6) \]

for any (t_n)_{n∈N} ∈ T, where the symbol ⇒ means convergence in the weak topology. In addition, we know that the Lévy measures Q^{jk} are concentrated on the axes of R^2. Now consider a δ > 0 such that conditions (22) and (6) hold. Then we have

\[ \limsup_{n\to\infty} \int_{R^2} \min(1, |xy|)\, Q^{(jk)}_{t_n}(dx, dy) \le \epsilon + \limsup_{n\to\infty} \int_{B_\delta^c} \min(1, |xy|)\, Q^{(jk)}_{t_n}(dx, dy) = \epsilon. \]

Letting ε → 0 we obtain that

\[ \limsup_{n\to\infty} \int_{R^2} \min(1, |xy|)\, Q^{(jk)}_{t_n}(dx, dy) = 0 \]

for any j, k = 1,…,d. Finally,

\[ \int_{R^{2d}} \min\Big(1, \sum_{k=1}^{d}\sum_{j=1}^{d} |x_k y_j|\Big)\, Q_{t_n}(dx, dy) \le \sum_{j,k=1}^{d} \int_{R^{2d}} \min(1, |x_k y_j|)\, Q_{t_n}(dx, dy) = \sum_{j,k=1}^{d} \int_{R^2} \min(1, |xy|)\, Q^{(jk)}_{t_n}(dx, dy) \to 0, \]

as n → ∞. Therefore, this implies that

\[ \lim_{n\to\infty} \int_{R^{2d}} \min(1, \|x\|\,\|y\|)\, Q_{t_n}(dx, dy) = 0, \]

for any (t_n)_{n∈N} ∈ T; hence we obtain that condition (5) is satisfied.
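As a quick sanity check of condition (5) (an added illustration, not part of the original text): for a purely Gaussian strictly stationary field the Lévy measures Q_{t_n} vanish, so (5) reduces to

\[ \lim_{n\to\infty} \|\Sigma(t_n)\| = 0 \quad \text{for every } (t_n)_{n\in N} \in T, \]

i.e. the field is mixing if and only if the covariance between the field at time 0 and at t_n vanishes as ‖t_n‖ → ∞, which recovers the classical characterisation of mixing for stationary Gaussian fields.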

The next two results are reformulations of Theorem 2.3. However, the first requires a short preliminary introduction, which will be useful for Sect. 4 as well. Recall that the codifference τ(X_1, X_2) of an ID real bivariate random vector (X_1, X_2) is defined as follows:

\[ \tau(X_1, X_2) := \log E\big[e^{i(X_1 - X_2)}\big] - \log E\big[e^{iX_1}\big] - \log E\big[e^{-iX_2}\big], \]

where log is the distinguished logarithm as defined in [19], p. 33. Following [6], we recall that the autocodifference function of an R^d-valued strictly stationary ID process (X_t)_{t∈R} is defined as τ(t) = (τ^{(jk)}(t))_{j,k=1,…,d} with τ^{(jk)}(t) := τ(X_0^{(k)}, X_t^{(j)}). For an R^d-valued strictly stationary ID random field (X_t)_{t∈R^l}, the autocodifference field τ(t) is defined as τ(t) = (τ^{(jk)}(t))_{j,k=1,…,d} with τ^{(jk)}(t) := τ(X_0^{(k)}, X_t^{(j)}), where t ∈ R^l.

Corollary 2.8 Let (X_t)_{t∈R^l}, with l ∈ N, be an R^d-valued strictly stationary infinitely divisible random field such that Q_0, the Lévy measure of L(X_0), satisfies Q_0({x = (x_1,…,x_d) ∈ R^d : ∃ j ∈ {1,…,d}, x_j ∈ 2πZ}) = 0. Then (X_t)_{t∈R^l} is mixing if and only if τ(t_n) → 0 as n → ∞ for any sequence (t_n)_{n∈N} ∈ T.

Proof It follows immediately from Theorem 2.3.

Corollary 2.9 Let (X_t)_{t∈R^l}, with l ∈ N, be an R^d-valued strictly stationary infinitely divisible random field such that Q_0, the Lévy measure of L(X_0), satisfies Q_0({x = (x_1,…,x_d) ∈ R^d : ∃ j ∈ {1,…,d}, x_j ∈ 2πZ}) = 0. Then (X_t)_{t∈R^l} is mixing if and only if

\[ \lim_{\|t\|\to\infty} E\big[e^{i(X_t^{(j)} - X_0^{(k)})}\big] = E\big[e^{iX_0^{(j)}}\big]\, E\big[e^{-iX_0^{(k)}}\big], \qquad (7) \]

for any j, k = 1,…,d, where ‖·‖ is any norm on R^l (e.g. the sup or the Euclidean norm) and t ∈ R^l.

Proof ⇒: Assume that (X_t)_{t∈R^l} is mixing. Then by Theorem 2.3 we know that

\[ \lim_{n\to\infty} E\big[e^{i(X_{t_n}^{(j)} - X_0^{(k)})}\big] = E\big[e^{iX_0^{(j)}}\big]\, E\big[e^{-iX_0^{(k)}}\big] \qquad (8) \]

holds for any j, k = 1,…,d and for any sequence (t_n)_{n∈N} ∈ T. Now consider the following simple result. Let M_1 = (A_1, d_1) and M_2 = (A_2, d_2) be two metric spaces, let S ⊂ A_1 be an open set of M_1, and let f be a mapping defined on S. Then lim_{x→c} f(x) = l if and only if for any sequence (x_n)_{n∈N} of points in S such that x_n ≠ c for all n ∈ N and lim_{n→∞} x_n = c, we have lim_{n→∞} f(x_n) = l. From this result and from the fact that we are considering any sequence such that lim_{n→∞} ‖t_n‖ = ∞, we obtain Eq. (7).

⇐: Assume that (7) holds. Then (8) holds by the result stated above. Now consider the set E := {(t_n)_{n∈N} ⊂ R^l : lim_{n→∞} ‖t_n‖_a = ∞, where ‖·‖_a is any norm on R^l}. Notice that any sequence (t_n)_{n∈N} ∈ T belongs to E and vice versa, because on the finite-dimensional vector space R^l any norm ‖·‖_a is equivalent to any other norm ‖·‖_b. Hence, we obtain that Eq. (8) holds for any (t_n)_{n∈N} ∈ E, and by applying Theorem 2.3 we obtain that (X_t)_{t∈R^l} is mixing.

Remark 2.10 It is possible to see that the extension carried out in the above corollary can be applied to all our results that hold for any sequence (t_n)_{n∈N} ∈ T.

The next result is a multivariate and random field extension of Theorem 2 of Rosinski and Zak [13], and it will help us to generalise Theorem 2.3.

Theorem 2.11 Let (X_t)_{t∈R^l} be an R^d-valued stationary ID random field such that Q_0, the Lévy measure of X_0, satisfies Q_0({x = (x_1,…,x_d) ∈ R^d : ∃ j ∈ {1,…,d}, x_j ∈ 2πZ}) ≠ 0; in other words, Q_0 has atoms in this set. Let Z = {z = (z_1,…,z_d) ∈ R^d : z_j = 2πk/y_j, j ∈ {1,…,d}, where k ∈ Z and y = (y_1,…,y_d) is an atom of Q_0}. Then (X_t)_{t∈R^l} is mixing if and only if, for some a = (a_1,…,a_d) ∈ R^d \ Z with a_p ≠ 0 for p = 1,…,d,

\[ \lim_{n\to\infty} E\big[e^{i(a_j X_{t_n}^{(j)} - a_k X_0^{(k)})}\big] = E\big[e^{i a_j X_0^{(j)}}\big]\, E\big[e^{-i a_k X_0^{(k)}}\big], \]

for any j, k = 1,…,d and for any sequence (t_n)_{n∈N} ∈ T.

Proof Consider an element a ∈ R^d \ Z with a_p ≠ 0 for p = 1,…,d. We know that the set of atoms of any σ-finite measure is a countable set (the proof is straightforward) and that any Lévy measure is σ-finite. Hence, the set of atoms of Q_0 is countable, which implies that Z is countable. This implies that our a exists. Now, let

\[ M_a := \begin{pmatrix} a_1 & & \\ & \ddots & \\ & & a_d \end{pmatrix}. \]

Notice that M_a is an invertible d × d matrix and X_t is a d-dimensional column vector. We have that (X_t)_{t∈R^l} is mixing if and only if (M_a X_t)_{t∈R^l} is mixing. This is because, by looking at Definition 2.1, it is enough to show that for every m ∈ N, λ = (s_1,…,s_m) ∈ R^{ml} and θ = (θ_1,…,θ_m) ∈ R^{md} we have ⟨θ, M_a X_λ⟩ = ⟨θ', X_λ⟩, where M_a X_λ := (M_a X_{s_1},…,M_a X_{s_m}) and θ' ∈ R^{md}. Notice that for m = 1 we have M_a X_λ := M_a X_t, t ∈ R^l. Indeed, we have

\[ \langle \theta, M_a X_\lambda \rangle = \sum_{k=1}^{m}\sum_{j=1}^{d} a_j X_{s_k}^{(j)} \theta_{jk} = \langle M_a \theta, X_\lambda \rangle =: \langle \theta', X_\lambda \rangle, \]

where M_a θ := (M_a θ_1,…,M_a θ_m) ∈ R^{md}. Now, the Lévy measure Q_0^a of M_a X_0 is given by Q_0^a(·) = Q_0(M_a^{-1}(·)) (see Proposition 11.10 of [19]). Since a ∉ Z, Q_0^a has no atoms in the set {x = (x_1,…,x_d) ∈ R^d : ∃ j ∈ {1,…,d}, x_j ∈ 2πZ}. This is because

\[ Q_0^a\big(\{x = (x_1,…,x_d) \in R^d : \exists\, j \in \{1,…,d\},\ x_j \in 2\pi Z\}\big) = Q_0\big(\{x = (x_1,…,x_d) \in R^d : \exists\, j \in \{1,…,d\},\ x_j \in 2\pi Z/a_j\}\big); \]

since a ∉ Z, we have a_j ≠ 2πk/y_j, j ∈ {1,…,d}, for any k ∈ Z and any atom y of Q_0, hence the above equals

\[ Q_0\big(\{x = (x_1,…,x_d) \in R^d : \exists\, j \in \{1,…,d\} \text{ such that } x_j = y_j, \text{ where } y \text{ is an atom of } Q_0\}\big) = 0. \]

The equality to zero comes from the fact that the set considered has no intersection with the set of atoms of the measure Q_0. Finally, by using Theorem 2.3 the proof is complete.

From this result we have the following generalisation of Theorem 2.3.

Corollary 2.12 Let (X_t)_{t∈R^l} be an R^q-valued stationary ID random field. Then (X_t)_{t∈R^l} is mixing if and only if L(X_{t_n} − X_0') ⇒ L(X_0 − X_0') for any sequence (t_n)_{n∈N} ∈ T, where X_0' denotes an independent copy of X_0.

Proof This result is an immediate consequence of Theorem 2.3, Corollary 2.6 and Theorem 2.11.

We end this section with a simple general result which will also be useful for the next section.

Proposition 2.13 Let (X_t)_{t∈R^l} be a linear combination of independent, stationary, ID and mixing random fields. In other words, let r ∈ N and let (X_t)_{t∈R^l} =_d (Σ_{k=1}^{r} Y_t^k)_{t∈R^l}, where (Y_t^k)_{t∈R^l}, k = 1,…,r, are independent R^q-valued stationary, ID and mixing random fields. Then (X_t)_{t∈R^l} is stationary, ID and mixing.

3 Mixed Moving Average Fields

In this section we will focus on a specific random field: the mixed moving average (MMA) random field. Before introducing this random field we need to recall the definition of an R^d-valued Lévy basis and the related integration theory. Lévy bases are also called infinitely divisible independently scattered random measures in the literature.

In the following, let S be a non-empty topological space, B(S) be the Borel σ-field on S and π be some probability measure on (S, B(S)). We denote by B_0(S × R^l) the collection of all Borel sets in S × R^l with finite π ⊗ λ_l-measure, where λ_l denotes the l-dimensional Lebesgue measure.

Definition 3.1 A d-dimensional Lévy basis on S × R^l is an R^d-valued random measure Λ = {Λ(B) : B ∈ B_0(S × R^l)} satisfying:

(i) the distribution of Λ(B) is infinitely divisible for all B ∈ B_0(S × R^l),
(ii) for arbitrary n ∈ N and pairwise disjoint sets B_1,…,B_n ∈ B_0(S × R^l), the random variables Λ(B_1),…,Λ(B_n) are independent,
(iii) for any pairwise disjoint sets B_1, B_2,… ∈ B_0(S × R^l) with ∪_{n∈N} B_n ∈ B_0(S × R^l) we have, almost surely, Λ(∪_{n∈N} B_n) = Σ_{n∈N} Λ(B_n).

Throughout this section, we shall restrict ourselves to time-homogeneous and factorisable Lévy bases, i.e. Lévy bases with characteristic function given by

\[ E\big[e^{i\langle\theta, \Lambda(B)\rangle}\big] = e^{\psi(\theta)\,\Pi(B)}, \qquad (9) \]

for all θ ∈ R^d and B ∈ B_0(S × R^l), where Π = π ⊗ λ_l is the product measure of the probability measure π on S and the Lebesgue measure λ_l on R^l, and

\[ \psi(\theta) = i\langle\gamma, \theta\rangle - \frac{1}{2}\langle\theta, \Sigma\theta\rangle + \int_{R^d}\big(e^{i\langle\theta, x\rangle} - 1 - i\langle\theta, x\rangle\,1_{[0,1]}(\|x\|)\big)\, Q(dx) \]

is the cumulant transform of an ID distribution with characteristic triplet (γ, Σ, Q). We note that the quadruple (γ, Σ, Q, π) determines the distribution of the Lévy basis completely, and therefore it is called the generating quadruple.

Now, we provide an extension of Theorem 3.2 of [6], which does not need a proof since it is a combination of Theorem 3.2 of [6] and Theorem 2.7 of [12]. It concerns the existence of integrals with respect to a Lévy basis.

Remark 3.2 In this section we consider an R^q-valued random field, since d is used for the R^d-valued Lévy basis, and we denote by M_{q×d}(R) the collection of q × d matrices over the field R.

Theorem 3.3 Let Λ be an R^d-valued Lévy basis with characteristic function of the form (9) and let f : S × R^l → M_{q×d}(R) be a measurable function. Then f is Λ-integrable as a limit in probability in the sense of Rajput and Rosinski [12] if and only if

\[ \int_S \int_{R^l} \Big\| f(A, s)\gamma + \int_{R^d} f(A, s)x\big(1_{[0,1]}(\| f(A, s)x\|) - 1_{[0,1]}(\|x\|)\big)\, Q(dx) \Big\|\, ds\, \pi(dA) < \infty, \]

\[ \int_S \int_{R^l} \big\| f(A, s)\, \Sigma\, f(A, s)^{*} \big\|\, ds\, \pi(dA) < \infty, \quad \text{and} \quad \int_S \int_{R^l} \int_{R^d} \min\big(1, \| f(A, s)x\|^2\big)\, Q(dx)\, ds\, \pi(dA) < \infty. \]

If f is Λ-integrable, the distribution of ∫_S ∫_{R^l} f(A, s) Λ(dA, ds) is infinitely divisible with characteristic triplet (γ_int, Σ_int, ν_int) given by

\[ \gamma_{int} = \int_S \int_{R^l} \Big( f(A, s)\gamma + \int_{R^d} f(A, s)x\big(1_{[0,1]}(\| f(A, s)x\|) - 1_{[0,1]}(\|x\|)\big)\, Q(dx) \Big)\, ds\, \pi(dA), \]

\[ \Sigma_{int} = \int_S \int_{R^l} f(A, s)\, \Sigma\, f(A, s)^{*}\, ds\, \pi(dA), \quad \text{and} \quad \nu_{int}(B) = \int_S \int_{R^l} \int_{R^d} 1_B\big( f(A, s)x\big)\, Q(dx)\, ds\, \pi(dA), \]

for all Borel sets B ⊂ R^q \ {0}.

Proof This theorem is a specific representation of Theorem 3.2 of [6] and Theorem 2.7 of [12].

Let us now introduce the main object of interest of this section: the mixed moving average random field.

Definition 3.4 (Mixed Moving Average Random Field) Let Λ be an R^d-valued Lévy basis on S × R^l and let f : S × R^l → M_{q×d}(R) be a measurable function. If the random field

\[ X_t := \int_S \int_{R^l} f(A, t - s)\, \Lambda(dA, ds) \]

exists in the sense of Theorem 3.3 for all t ∈ R^l, it is called a q-dimensional mixed moving average random field (MMA random field for short). The function f is said to be its kernel function.

MMA random fields have been discussed in Surgailis et al. [20] and Veraart [21]. Note that an MMA random field is an ID and strictly stationary random field. The following lemma is a direct application of Corollary 2.7 to our MMA random field setting.

Lemma 3.5 Let (X_t)_{t∈R^l} =_d (∫_S ∫_{R^l} f(A, t − s) Λ(dA, ds))_{t∈R^l} be an MMA random field, where Λ is an R^d-valued Lévy basis on S × R^l with generating quadruple (γ, Σ, Q, π) and f : S × R^l → M_{q×d}(R) is a measurable function. Then (X_t)_{t∈R^l} is mixing if and only if

\[ \lim_{n\to\infty} \Big\{ \Big\| \int_S\int_{R^l} f(A, -s)\, \Sigma\, f(A, t_n - s)^{*}\, ds\, \pi(dA) \Big\| + \int_S\int_{R^l}\int_{R^d} \min\big(1, \| f(A, -s)x\|\, \| f(A, t_n - s)x\|\big)\, Q(dx)\, ds\, \pi(dA) \Big\} = 0, \]

for any (t_n)_{n∈N} ∈ T.

The following theorem is the main result of this section, while the subsequent corollary is an extension of it.

Theorem 3.6 Let (X_t)_{t∈R^l} =_d (∫_S ∫_{R^l} f(A, t − s) Λ(dA, ds))_{t∈R^l} be an MMA random field, where Λ is an R^d-valued Lévy basis on S × R^l with generating quadruple (γ, Σ, Q, π) and f : S × R^l → M_{q×d}(R) is a measurable function. Then (X_t)_{t∈R^l} is mixing.
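To make Definition 3.4 and Theorem 3.6 concrete, the following sketch discretises a simple univariate moving average X_t ≈ Σ_s f(t − s) ΔΛ_s driven by a compound Poisson Lévy basis (so S is a single point and the mixing measure π is trivial). The kernel, intensity and grid sizes are arbitrary illustrative assumptions, and the discretised sum only approximates the stochastic integral; its empirical autocovariance decays with the lag, in line with the mixing property asserted by Theorem 3.6.

```python
# Illustrative discretisation of a univariate Lévy-driven moving average
#   X_t = \int f(t - s) \Lambda(ds),
# with a compound Poisson Lévy basis; all concrete choices are assumptions.
import numpy as np

rng = np.random.default_rng(1)
dt, n_grid = 0.05, 4000                      # grid spacing and number of grid cells
s = np.arange(n_grid) * dt

# Lévy basis increments on grid cells: Poisson number of N(0,1) jumps per cell,
# with jump intensity lam per unit length.
lam = 2.0
n_jumps = rng.poisson(lam * dt, size=n_grid)
dL = np.array([rng.standard_normal(n).sum() for n in n_jumps])

f = lambda u: np.exp(-u) * (u >= 0)          # causal exponential kernel (assumption)
kernel = f(s[s <= 10.0]) * dt                # truncate where the kernel is negligible

X = np.convolve(dL, kernel, mode="valid")    # X_t ≈ sum_s f(t - s) dL_s

# Empirical autocovariance at a few lags (in grid units): it should decay,
# consistent with the field being mixing.
Xc = X - X.mean()
for lag in (0, 20, 100, 400):
    acov = np.mean(Xc[: len(Xc) - lag] * Xc[lag:])
    print(f"lag = {lag * dt:5.1f}   empirical autocovariance = {acov:+.4f}")
```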

From the above result and Proposition 2.13, we have the following corollary.

Corollary 3.7 Sums of independent MMA random fields are stationary, ID and mixing random fields.

Proof This corollary is an immediate consequence of Theorem 3.6 and Proposition 2.13.

Remark 3.8 The above corollary holds for any linear combination of independent MMA random fields, including MMAs with different Lévy bases and different parameter spaces S.

3.1 Meta-Times and Subordination

In this section we give a brief introduction to the concepts of meta-times and subordination, and present a result which is a corollary of Theorem 3.6. First, we recall the definition of a homogeneous Lévy sheet (see [4], Definition 2.1). Let Δ_a^b F denote the increment of a function F over an interval (a, b] ⊂ R^k_+ and let a ≤ b indicate a_i ≤ b_i for i = 1,…,k (see [4]), where R^m_+ = {x ∈ R^m : x_i ≥ 0, i = 1,…,m} and m ∈ N. Let k, l ∈ N.

Definition 3.9 Let X = {X_t : t ∈ R^k_+} be a family of random vectors in R^d. We say that X is a homogeneous Lévy sheet on R^k_+ if: X_t = 0 a.s. for all t ∈ {t ∈ R^k_+ : t_j = 0 for some j ∈ {1,…,k}}; Δ_{a_1}^{b_1} X,…,Δ_{a_n}^{b_n} X are independent whenever n ≥ 2 and (a_1, b_1],…,(a_n, b_n] ⊂ R^k_+ are disjoint; X is continuous in probability; Δ_{a+t}^{b+t} X =_d Δ_a^b X for all a, b, t ∈ R^k_+ with a ≤ b; and all sample paths of X are lamp.

The concept of lamp (i.e. limits along monotone paths) is the analogue of càdlàg in the multiparameter setting. If X = {X_t : t ∈ R^k_+} is a homogeneous Lévy sheet, then L(Δ_a^b X) ∈ ID(R^d).

Now let X = {X_t : t ∈ R^k_+} be an R^d-valued homogeneous Lévy sheet on R^k_+ and Λ_X = {Λ_X(A) : A ∈ B(R^k_+)} be the homogeneous Lévy basis induced by X, namely Λ_X([0, t]) = X_t a.s. for all t ∈ R^k_+. Let T = {T_t : t ∈ R^k_+} be an R_+-valued homogeneous Lévy sheet and Λ_T = {Λ_T(A) : A ∈ B(R^k_+)} be the non-negative homogeneous Lévy basis induced by T. Define F_T = σ(Λ_T(A) : A ∈ B_b(R^k_+)) to be the σ-field generated by Λ_T. Then there exists an (F_T ⊗ B(R^k_+), B(R^k))-measurable mapping φ_T : Ω × R^k_+ → R^k such that for all ω ∈ Ω and A ∈ B_b(R^k_+), the set T(A)(ω), given by T(A)(ω) = {x ∈ R^k_+ : φ_T(ω, x) ∈ A}, is bounded and Λ_T(A)(ω) = Leb(T(A)(ω)). For each ω, T(·)(ω) is called a meta-time associated with Λ_T(·)(ω). Let M = {M(A) : A ∈ B_b(R^k_+)} be defined as

\[ M(A)(\omega) = \Lambda_X\big(T(A)\big)(\omega), \]

for all A ∈ B_b(R^k_+). We say that M appears by extended subordination of Λ_X by Λ_T (or of X by T). Then, by Theorem 5.1 of [4], M is a homogeneous Lévy basis. Therefore, we have the following corollary of Theorem 3.6.

Corollary 3.10 Let X = {X_t : t ∈ R^k} be an R^d-valued homogeneous Lévy sheet on R^k and Λ_X = {Λ_X(A) : A ∈ B(R^k)} be the homogeneous Lévy basis induced by X. Let T = {T_t : t ∈ R^k} be an R_+-valued homogeneous Lévy sheet and Λ_T = {Λ_T(A) : A ∈ B(R^k)} be the non-negative homogeneous Lévy basis induced by T. Let M = {M(A) : A ∈ B_b(R^k)} be an extended subordination of Λ_X by Λ_T. Let (Y_t)_{t∈R^l} =_d (∫_{R^{k-l}} ∫_{R^l} f(B, t − s) M(dB, ds))_{t∈R^l}, where f : R^{k−l} × R^l → M_{q×d}(R) is a measurable function. Then (Y_t)_{t∈R^l} is mixing.

Proof It is sufficient to notice that the framework introduced above holds for the case R^k and not just for R^k_+ (see [4]), and that M is an R^d-valued homogeneous Lévy basis on R^k. Then by using Theorem 3.6 we obtain the result.

4 Weak Mixing and Ergodicity

In this section, we will first show how to modify our results to obtain weak mixing versions of the results presented before, and then prove that, for stationary ID random fields, ergodicity and weak mixing are equivalent. We start with the definition of a density one set and of weak mixing for stationary random fields.

Definition 4.1 A set E ⊂ R^l is said to have density zero in R^l with respect to the Lebesgue measure λ if

\[ \lim_{T\to\infty} \frac{1}{(2T)^l} \int_{(-T,T]^l} 1_E(x)\, \lambda(dx) = 0. \]

A set D ⊂ R^l is said to have density one in R^l if R^l \ D has density zero in R^l. The class of all sequences in D that converge to infinity will be denoted by

\[ T_D := \big\{ (t_n)_{n\in N} \subset D : \lim_{n\to\infty} \|t_n\| = \infty \big\}. \]

Definition 4.2 Consider the random field X_t(ω) = X_0(θ_t(ω)), t ∈ R^l, where {θ_t}_{t∈R^l} is a measure-preserving R^l-action. Let σ_X be the σ-algebra generated by the field (X_t)_{t∈R^l}. We say that (X_t)_{t∈R^l} is weakly mixing if there exists a density one set D such that

\[ \lim_{n\to\infty} P\big(A \cap \theta_{t_n}(B)\big) = P(A)\,P(B), \]

for all A, B ∈ σ_X and all (t_n)_{n∈N} ∈ T_D.
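A small numerical illustration of Definition 4.1 (the construction below is an arbitrary example added here, not from the paper): for l = 1, the set E = ∪_{m≥1} [m², m² + 1] has density zero, so D = R \ E has density one, and a bounded non-negative function that equals 1 on E and 0 elsewhere converges to 0 along every sequence in T_D, even though it does not converge to 0 along sequences running through E. The sketch estimates the vanishing normalised Lebesgue measure of E ∩ (−T, T].

```python
# Illustration of Definition 4.1 (l = 1): E = union over m >= 1 of [m^2, m^2 + 1]
# has density zero; we estimate (2T)^{-1} * Leb(E ∩ (-T, T]) for growing T.
import numpy as np

def indicator_E(x):
    """1 on E = union over m >= 1 of [m^2, m^2 + 1], else 0."""
    x = np.asarray(x, dtype=float)
    m = np.floor(np.sqrt(np.clip(x, 0.0, None)))            # m with m^2 <= x (for x >= 0)
    return ((x >= 1.0) & (x <= m**2 + 1.0)).astype(float)

rng = np.random.default_rng(0)
for T in (10.0, 100.0, 1_000.0, 10_000.0):
    grid = rng.uniform(-T, T, 200_000)                       # Monte Carlo points in (-T, T]
    density = indicator_E(grid).mean()                       # ≈ (2T)^{-1} ∫_{(-T,T]} 1_E dλ
    print(f"T = {T:8.0f}   estimated density of E in (-T, T] = {density:.4f}")
```

The estimated density behaves like 1/(2√T) and tends to 0, so E has density zero and D has density one.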

We are now ready to state the weak mixing version of Theorem 2.3.

Theorem 4.3 Let (X_t)_{t∈R^l}, with l ∈ N, be an R^d-valued strictly stationary infinitely divisible random field such that Q_0, the Lévy measure of L(X_0), satisfies Q_0({x = (x_1,…,x_d) ∈ R^d : ∃ j ∈ {1,…,d}, x_j ∈ 2πZ}) = 0. Then (X_t)_{t∈R^l} is weakly mixing if and only if there exists a density one set D ⊂ R^l such that

\[ \lim_{n\to\infty} E\big[e^{i(X_{t_n}^{(j)} - X_0^{(k)})}\big] = E\big[e^{iX_0^{(j)}}\big]\, E\big[e^{-iX_0^{(k)}}\big], \]

for any j, k = 1,…,d and for any sequence (t_n)_{n∈N} ∈ T_D.

Proof It is possible to see that the argument used in the first part of the proof of Theorem 2.3 applies and holds also for the case (t_n)_{n∈N} ∈ T_D. Moreover, using Theorem 4.4, the proof is complete.

Theorem 4.4 Let (X_t)_{t∈R^l} be an R^d-valued strictly stationary infinitely divisible random field. Then (X_t)_{t∈R^l} is weakly mixing if and only if there exists a density one set D ⊂ R^l such that

(MM1) the covariance matrix function Σ(t_n) of the Gaussian part of (X_t)_{t∈R^l} tends to 0, as n → ∞,
(MM2′) lim_{n→∞} Q_{t_n}(‖x‖ ‖y‖ > δ) = 0 for any δ > 0, where Q_{t_n} is the Lévy measure of L(X_0, X_{t_n}) on (R^{2d}, B(R^{2d})),

where (t_n)_{n∈N} is any sequence in T_D.

Proof It is possible to see that the arguments used in the proof of Theorem 2.4 go through for the case (t_n)_{n∈N} ∈ T_D.

Remark 4.5 It is possible to obtain weak mixing versions also of the following results stated previously: Corollaries 2.6, 2.7, 2.8, 2.9, Theorem 2.11, Corollary 2.12 and Proposition 2.13. The proofs of these results are omitted because they follow exactly the same arguments used in the proofs of their respective counterparts. The only difference is that we have (t_n)_{n∈N} ∈ T_D instead of (t_n)_{n∈N} ∈ T, but this does not trigger any change in the arguments used in the proofs of these results. Among these results, we have the following corollary, which is the weak mixing version of Corollary 2.7 and will be useful for our next result: the equivalence between weak mixing and ergodicity for stationary ID random fields.

Corollary 4.6 Let (X_t)_{t∈R^l} be an R^d-valued strictly stationary ID random field. Then, with the previous notation, (X_t)_{t∈R^l} is weakly mixing if and only if there exists a density one set D ⊂ R^l such that

\[ \lim_{n\to\infty}\Big\{ \|\Sigma(t_n)\| + \int_{R^{2d}} \min(1, \|x\|\,\|y\|)\, Q_{t_n}(dx, dy) \Big\} = 0, \]

for any (t_n)_{n∈N} ∈ T_D.

Proof See the discussion in Remark 4.5.
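Before the preliminary results needed for the equivalence between ergodicity and weak mixing, here is a short illustrative numerical aside (added by us, not part of the paper) on the codifference introduced in Sect. 2.1 and used again below: for a centred jointly Gaussian pair the distinguished logarithms are explicit and τ(X_1, X_2) reduces to Cov(X_1, X_2). The sketch checks this empirically; the covariance matrix and sample size are arbitrary assumptions.

```python
# Empirical check (illustrative): for a centred jointly Gaussian pair,
# tau(X1, X2) = log E[e^{i(X1 - X2)}] - log E[e^{iX1}] - log E[e^{-iX2}]
# equals Cov(X1, X2).
import numpy as np

rng = np.random.default_rng(2)
cov = np.array([[2.0, 0.7],
                [0.7, 1.0]])
X = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=500_000)
x1, x2 = X[:, 0], X[:, 1]

ecf = lambda z: np.mean(np.exp(1j * z))          # empirical characteristic function at 1
tau = np.log(ecf(x1 - x2)) - np.log(ecf(x1)) - np.log(ecf(-x2))

print(f"empirical codifference : {tau.real:+.4f}")
print(f"covariance Cov(X1, X2) : {cov[0, 1]:+.4f}")
```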

In order to prove the equivalence between ergodicity and weak mixing for ID stationary random fields, we will need various preliminary results, some of which have already been proven in the literature. We start with two known results.

Lemma 4.7 (Lemma 4.3 of the latest arXiv version (i.e. v3) of [22]) Let f : R^l → R be non-negative and bounded. A necessary and sufficient condition for

\[ \lim_{T\to\infty} \frac{1}{(2T)^l} \int_{(-T,T]^l} f(t)\, dt = 0 \]

is that there exists a subset D of density one in R^l such that lim_{n→∞} f(t_n) = 0, for any (t_n)_{n∈N} ∈ T_D.

Lemma 4.8 (Theorem 1 of [5]) Let μ be a finite measure on R^l. Then

\[ \lim_{T\to\infty} \frac{1}{(2T)^l} \int_{(-T,T]^l} \hat\mu(t)\, dt = \mu(\{0\}), \]

where μ̂ denotes the Fourier transform of μ.

The following lemma is an adaptation to our framework of Lemma 3 of [14].

Lemma 4.9 Let (X_t)_{t∈R^l} be an R^d-valued stationary ID random field and let Q_t^{jk} be the Lévy measure of L(X_0^{(j)}, X_t^{(k)}). Then for every δ > 0 and j, k = 1,…,d, the family of finite measures (Q_t^{jk}|_{K_δ^c})_{t∈R^l} is weakly relatively compact and

\[ \lim_{\delta\to 0} \sup_{t\in R^l} \int_{K_\delta} |xy|\, Q_t^{jk}(dx, dy) = 0, \qquad (10) \]

where K_δ = {(x, y) : x^2 + y^2 ≤ δ^2}.

Proof This result comes directly from the proof of Lemma 3 of [14].

Now we will investigate the autocodifference matrix of the R^d-valued stationary ID random field (X_t)_{t∈R^l}, which was introduced in Sect. 2.1. Consider τ(t_n) = (τ^{(jk)}(t_n))_{j,k=1,…,d} with

\[ \tau^{(jk)}(t_n) := \tau\big(X_0^{(k)}, X_{t_n}^{(j)}\big) = \log E\big[e^{i(X_0^{(k)} - X_{t_n}^{(j)})}\big] - \log E\big[e^{iX_0^{(k)}}\big] - \log E\big[e^{-iX_{t_n}^{(j)}}\big] = \sigma_{t_n}^{jk} + \int_{R^2} (e^{ix} - 1)(e^{-iy} - 1)\, Q_{t_n}^{jk}(dx, dy), \qquad (11) \]

where Γ_{t_n}^{jk} is the covariance matrix of the Gaussian part of (X_0^{(k)}, X_{t_n}^{(j)}) and is given by

\[ \Gamma_{t_n}^{jk} = \begin{pmatrix} \sigma_0^{jk} & \sigma_{t_n}^{jk} \\ \sigma_{t_n}^{jk} & \sigma_0^{jk} \end{pmatrix}. \]

Proposition 4.10 Let (X_t)_{t∈R^l} be an R^d-valued ID random field (not necessarily stationary). Then for any j, k = 1,…,d, the function

\[ R^l \times R^l \ni (s, t) \mapsto \tau\big(X_s^{(j)}, X_t^{(k)}\big) \in C \]

is non-negative definite.

Proof We argue as in Proposition 2 of [14]. Without loss of generality let t ≥ s with t, s ∈ R^l. As seen above, we have

\[ \tau\big(X_s^{(k)}, X_t^{(j)}\big) = \sigma_{t-s}^{jk} + \int_{R^2} (e^{ix} - 1)(e^{-iy} - 1)\, Q_{st}^{jk}(dx, dy). \qquad (12) \]

Since (s, t) ↦ σ_{t−s}^{jk} is non-negative definite because it is a covariance function, it remains to show that the second term on the right-hand side of (12) is non-negative definite. However, this is a consequence of Lemma 4 in [14].

We can now state and later prove (see Appendix C) the second main theorem of this section, which states the equivalence between ergodicity and weak mixing for ID stationary random fields.

Theorem 4.11 Let l, d ∈ N. Let (X_t)_{t∈R^l} be an R^d-valued stationary ID random field. Then (X_t)_{t∈R^l} is ergodic if and only if it is weakly mixing.

5 Conclusion

In this work we derived different results concerning ergodicity and mixing properties of multivariate stationary infinitely divisible random fields. A possible future direction consists of the investigation of statistical implications of the results presented in this paper. For example, for multivariate stochastic processes, showing that mixed moving average (MMA) processes are mixing implies that the corresponding moment-based estimators (like the generalised method of moments (GMM)) are consistent (see [6]). However, it is not clear whether a similar result holds in the random field case. Other possible directions would be to extend the present results to the case of random fields on manifolds or on infinite-dimensional vector spaces. However, the literature there is not as developed as for the R^l-case and requires further work.

Acknowledgements RP would like to thank the Centre for Doctoral Training in Mathematics of Planet Earth and the Grantham Institute for providing funding for this research. Funding was provided by the Engineering and Physical Sciences Research Council (Grant No. ).

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Appendix A: Proofs of Sect. 2

Proof of Theorem 2.3 This proof is an extension to the random field case of the proof of Theorem 2.1 of [6].

⇒: Assume (X_t)_{t∈R^l} to be mixing. This implies

\[ \lim_{n\to\infty} E\big[\exp\big(i\langle\theta_1, X_0\rangle + i\langle\theta_2, X_{t_n}\rangle\big)\big] = E\big[\exp(i\langle\theta_1, X_0\rangle)\big]\, E\big[\exp(i\langle\theta_2, X_0\rangle)\big], \]

for any θ_1, θ_2 ∈ R^d. Now, setting (θ_1, θ_2) = (−e_k, e_j), j, k = 1,…,d, with e_j the j-th unit vector in R^d, Eq. (4) is satisfied.

⇐: Assume that Eq. (4) holds for every j, k = 1,…,d; then we have

\[ \lim_{n\to\infty} E\big[e^{i(X_{t_n}^{(j)} + X_0^{(k)})}\big] = E\big[e^{iX_0^{(j)}}\big]\, E\big[e^{iX_0^{(k)}}\big], \qquad (13) \]

for every j, k = 1,…,d. This is true because of the following reasoning, in which we extend the proof of Rosinski and Zak [13], Theorem 1, to the multivariate random field case. Assume that Eq. (4) holds. We will initially prove that for every complex-valued Y ∈ L^2(Ω, F, P)

\[ \lim_{n\to\infty} E\big[e^{iX_{t_n}^{(j)}}\, \bar Y\big] = E\big[e^{iX_0^{(j)}}\big]\, E[\bar Y], \qquad (14) \]

for j = 1,…,d. It is possible to see that Eq. (14) holds for Y ∈ H := lin{1, e^{iX_t^{(j)}} : t ∈ R^l} := {Z ∈ L^2(Ω, F, P) : Z = a_0 + Σ_{i=1}^{n} a_i e^{iX_{t_i}^{(j)}}, t_i ∈ R^l, n ∈ N}. Consider now the L^2-closure of H and call it H̄. Then by a standard density argument Eq. (14) is true for any Y ∈ H̄. Now, consider any complex-valued Y ∈ L^2(Ω, F, P). We can write Y = Y_1 + Y_2, where Y_1 ∈ H̄ and Y_2 ∈ H̄^⊥ := {W ∈ L^2(Ω, F, P) : E[Z W̄] = 0 for all Z ∈ H̄}, where E[Z W̄] denotes the inner product (usually written as ⟨Z, W⟩) from L^2(Ω, F, P) × L^2(Ω, F, P) to C. Notice that we are using the conjugate because this is how the inner product over a complex space is defined. Further, notice that we can write Y = Y_1 + Y_2 since the L^2-space endowed with that inner product is a Hilbert space. Since E[e^{iX_{t_n}} Ȳ_2] = 0 for every t_n ∈ R^l and E[Ȳ_2] = 0 by definition of H̄^⊥, we get that

\[ \lim_{n\to\infty} E\big[e^{iX_{t_n}}\, \bar Y\big] = \lim_{n\to\infty} E\big[e^{iX_{t_n}}\, \bar Y_1\big] = E\big[e^{iX_0}\big]\, E[\bar Y_1] = E\big[e^{iX_0}\big]\, E[\bar Y]. \]

Hence we have Eq. (14). Putting now Y = e^{−iX_0^{(k)}} in (14), for k = 1,…,d, we obtain

\[ \lim_{n\to\infty} E\big[e^{i(X_{t_n}^{(j)} + X_0^{(k)})}\big] = E\big[e^{iX_0^{(j)}}\big]\, E\big[e^{iX_0^{(k)}}\big], \qquad (15) \]

which is Eq. (13). We now prove that Eqs. (4) and (13) imply the multidimensional Maruyama conditions:

(MM1) the covariance matrix function Σ(t_n) of the Gaussian part of (X_t)_{t∈R^l} tends to 0, as n → ∞, where (t_n)_{n∈N} is any sequence in T;
(MM2) lim_{n→∞} Q_{t_n}(‖x‖‖y‖ > δ) = 0 and lim_{n→∞} ∫_{0<‖x‖²+‖y‖²≤1} ‖x‖‖y‖ Q_{t_n}(dx, dy) = 0 for any δ > 0, where Q_{t_n} is the Lévy measure of L(X_0, X_{t_n}) on (R^{2d}, B(R^{2d})).

Actually, we will not prove (MM2) but will prove instead the following condition:

(MM2′) lim_{n→∞} Q_{t_n}(‖x‖‖y‖ > δ) = 0 for any δ > 0, where Q_{t_n} is the Lévy measure of L(X_0, X_{t_n}) on (R^{2d}, B(R^{2d})).

This is because in Lemma 2.5 we prove that (MM2′) implies (MM2).

Regarding (MM1), we have the following. Since (X_0, X_{t_n}) has a 2d-dimensional ID distribution, its characteristic function can be written, using the Lévy-Khintchine formulation, for every θ = (θ_1, θ_2) ∈ R^d × R^d, as (see Theorem 8.1 of [19] and the proof of Theorem 2.1 of [6])

\[ E\big[e^{i\langle\theta_1, X_0\rangle + i\langle\theta_2, X_{t_n}\rangle}\big] = \exp\Big\{ i\Big\langle \begin{pmatrix}\theta_1\\ \theta_2\end{pmatrix}, \begin{pmatrix}\alpha_1\\ \alpha_2\end{pmatrix} \Big\rangle - \frac{1}{2}\Big\langle \begin{pmatrix}\theta_1\\ \theta_2\end{pmatrix}, \Gamma(t_n) \begin{pmatrix}\theta_1\\ \theta_2\end{pmatrix}\Big\rangle + \int_{R^{2d}} \big( e^{i\langle\theta_1, x\rangle + i\langle\theta_2, y\rangle} - 1 - i\langle\theta_1, x\rangle 1_{[0,1]}(\|x\|) - i\langle\theta_2, y\rangle 1_{[0,1]}(\|y\|) \big)\, Q_{t_n}(dx, dy) \Big\}. \qquad (16) \]

By substituting (−e_k, e_j), (0, e_j) and (−e_k, 0) for (θ_1, θ_2) in (16), we get the description of Eq. (4) in terms of the covariance matrix function of the Gaussian part of (X_{t_n}^{(j)}, X_0^{(k)}) and the Lévy measure Q_{t_n}, namely

\[ \lim_{n\to\infty} \frac{E\big[e^{i(X_{t_n}^{(j)} - X_0^{(k)})}\big]}{E\big[e^{iX_0^{(j)}}\big]\, E\big[e^{-iX_0^{(k)}}\big]} = \lim_{n\to\infty} \exp\Big\{ \sigma_{jk}(t_n) + \int_{R^{2d}} \big( e^{i(y^{(j)} - x^{(k)})} - e^{iy^{(j)}} - e^{-ix^{(k)}} + 1 \big)\, Q_{t_n}(dx, dy) \Big\} = 1, \]

for arbitrary j, k = 1,…,d, where σ_{jk}(t_n) is the (k, j)-th element of Σ(t_n). By using the identity Real(e^{i(−x+y)} − e^{−ix} − e^{iy} + 1) = (cos x − 1)(cos y − 1) + sin x sin y and taking the logarithm on both sides, we get

\[ \lim_{n\to\infty} \Big[ \sigma_{jk}(t_n) + \int_{R^{2d}} \big( (\cos x^{(k)} - 1)(\cos y^{(j)} - 1) + \sin x^{(k)} \sin y^{(j)} \big)\, Q_{t_n}(dx, dy) \Big] = 0, \qquad (17) \]

for any j, k = 1,…,d. Applying the same argument to E[e^{i(X_{t_n}^{(j)} + X_0^{(k)})}] / (E[e^{iX_0^{(j)}}] E[e^{iX_0^{(k)}}]) we obtain

\[ \lim_{n\to\infty} \Big[ -\sigma_{jk}(t_n) + \int_{R^{2d}} \big( (\cos x^{(k)} - 1)(\cos y^{(j)} - 1) - \sin x^{(k)} \sin y^{(j)} \big)\, Q_{t_n}(dx, dy) \Big] = 0, \qquad (18) \]

for any j, k = 1,…,d. Putting together Eqs. (17) and (18), and using the consistency of the Lévy measures (see Proposition 11.10 of [19] and the proof of Theorem 2.1 of [6]),

\[ \lim_{n\to\infty} \int_{R^{2d}} (\cos x^{(k)} - 1)(\cos y^{(j)} - 1)\, Q_{t_n}(dx, dy) = \lim_{n\to\infty} \int_{R^2} (\cos x - 1)(\cos y - 1)\, Q_{t_n}^{(jk)}(dx, dy) = 0, \qquad (19) \]

for any j, k = 1,…,d, where Q_{t_n}^{(jk)} denotes the Lévy measure of L(X_0^{(k)}, X_{t_n}^{(j)}) on (R^2, B(R^2)).

Now, for fixed j, k ∈ {1,…,d}, the family of measures {L(X_0^{(k)}, X_{t_n}^{(j)})}_{n∈N} is tight. To see this, note that by letting K_r = {(x, y) ∈ R^d × R^d : ‖x‖^2 + ‖y‖^2 ≤ r^2} and using the stationarity of (X_t)_{t∈R^l} we have

\[ P\big((X_0, X_{t_n}) \notin K_r\big) \le P\Big(\|X_0\| > \tfrac{\sqrt 2}{2} r\Big) + P\Big(\|X_{t_n}\| > \tfrac{\sqrt 2}{2} r\Big) = 2 P\Big(\|X_0\| > \tfrac{\sqrt 2}{2} r\Big), \]

hence lim_{r→∞} sup_{t_n∈R^l} P((X_0, X_{t_n}) ∉ K_r) = 0. Since {L(X_0, X_{t_n})}_{n∈N} is tight, {L(X_0^{(k)}, X_{t_n}^{(j)})}_{n∈N} is tight. Notice that we are proving more than necessary, because it is sufficient to prove the above limit for sup_{n∈N}. Thus, by Prokhorov's theorem we have that the family {L(X_0^{(k)}, X_{t_n}^{(j)})}_{n∈N} is sequentially compact (in the topology of weak convergence). Choose any sequence (τ_n)_{n∈N} ∈ T and let F^{jk} be a cluster (or accumulation) point of the family {L(X_0^{(k)}, X_{τ_n}^{(j)})}_{n∈N}. Then, using Lemma 7.8 on page 34 of Sato's book [19], we have that F^{jk} is an ID distribution on R^2 with some Lévy measure Q^{jk}. Moreover, let (t_m)_{m∈N} be a subsequence of (τ_n)_{n∈N} such that

\[ L\big(X_0^{(k)}, X_{t_m}^{(j)}\big) \Rightarrow F^{jk}, \quad \text{as } m \to \infty. \qquad (20) \]

Notice that (t_m)_{m∈N} ∈ T as well. We know that F^{jk} exists by Prokhorov's theorem on Euclidean spaces. Then, for every δ > 0 with Q^{jk}(∂K_δ) = 0,

\[ Q_{t_m}^{(jk)}\big|_{K_\delta^c} \Rightarrow Q^{jk}\big|_{K_\delta^c}, \quad \text{as } m \to \infty. \qquad (21) \]

Since (cos x − 1)(cos y − 1) ≥ 0 and using Eqs. (19) and (21), we deduce that

\[ \int_{K_\delta^c} (\cos x - 1)(\cos y - 1)\, Q^{jk}(dx, dy) = \lim_{m\to\infty} \int_{K_\delta^c} (\cos x - 1)(\cos y - 1)\, Q_{t_m}^{(jk)}(dx, dy) \le \lim_{m\to\infty} \int_{R^2} (\cos x - 1)(\cos y - 1)\, Q_{t_m}^{(jk)}(dx, dy) = 0. \]

Every Lévy measure Q^{jk} is therefore concentrated on the set of straight lines {(x, y) ∈ R^2 : x ∈ 2πZ or y ∈ 2πZ}. This is because only on these lines is the integrand (cos x − 1)(cos y − 1) zero, otherwise it is positive, and because δ can be taken arbitrarily small. By the stationarity of the process and (21), the projections of Q^{jk} onto the first and the second axis coincide with Q_0^{(k)} and Q_0^{(j)}, respectively, on the complement of every neighbourhood of zero (because (21) holds on the complement of every neighbourhood of zero). Recall the assumption on Q_0, i.e. Q_0({x = (x_1,…,x_d) ∈ R^d : ∃ j ∈ {1,…,d}, x_j ∈ 2πZ}) = 0. Hence, we have for every a ∈ Z, a ≠ 0,

\[ Q^{jk}\big(\{2\pi a\} \times R\big) = Q_0^{(k)}\big(\{2\pi a\}\big) = Q_0\big(R \times \cdots \times R \times \{2\pi a\} \times R \times \cdots \times R\big) \le Q_0\big(\{x \in R^d : \exists\, l \in \{1,…,d\},\ x_l \in 2\pi Z\}\big) = 0, \]

and similarly Q^{jk}(R × {2πa}) = 0. Therefore, Q^{jk}, j, k = 1,…,d, is concentrated on the axes of R^2 and on each of them coincides with Q_0^{(k)} and Q_0^{(j)}, respectively.

It is important to stress the main ideas of the above argument. First, we showed that Q^{jk} is concentrated only on {(x, y) : x ∈ 2πZ or y ∈ 2πZ}, and then, using the assumption of our theorem, we showed that the measure Q^{jk} is nonzero only when x = 0 or y = 0. Further, the stationarity of the process ensures that when we project Q^{jk} onto the axes the projections coincide with Q_0^{(k)} and Q_0^{(j)}, avoiding the case of having Q_0^{(k)} on one axis and Q_{t_m}^{(j)} on the other.

Now observe that, for every t ∈ R^l, using the consistency of the Lévy measures,

\[ \int_{K_\delta} |xy|\, Q_t^{(jk)}(dx, dy) \le \frac{1}{2}\int_{K_\delta} (x^2 + y^2)\, Q_t^{(jk)}(dx, dy) \le \frac{1}{2}\int_{|x|\le\delta} x^2\, Q_0^{(k)}(dx) + \frac{1}{2}\int_{|y|\le\delta} y^2\, Q_0^{(j)}(dy) \le \int_{\|x\|\le\delta} \|x\|^2\, Q_0(dx) < \epsilon, \qquad (22) \]

for any positive ε and any j, k = 1,…,d, provided δ is small enough. This implies, by (22), that for every j, k = 1,…,d and all m ∈ N,

\[ \Big| \int_{K_\delta} \sin x \sin y\, Q_{t_m}^{(jk)}(dx, dy) \Big| \le \int_{K_\delta} |xy|\, Q_{t_m}^{(jk)}(dx, dy) < \epsilon, \]

for sufficiently small δ > 0. Since Q^{jk} is concentrated on the axes of R^2, we have ∫_{K_δ^c} sin x sin y Q^{jk}(dx, dy) = 0, and using (21) we get lim_{m→∞} ∫_{K_δ^c} sin x sin y Q_{t_m}^{(jk)}(dx, dy) = 0. Thus,

\[ \lim_{m\to\infty} \int_{R^2} \sin x \sin y\, Q_{t_m}^{(jk)}(dx, dy) = 0, \qquad (23) \]

for every j, k = 1,…,d. Combining Eqs. (17), (19) and (23), we obtain that σ_{jk}(t_m) → 0 as m → ∞ for all j, k = 1,…,d. Since (t_m)_{m∈N} is a subsequence of an arbitrary sequence (τ_n)_{n∈N} ∈ T, it follows that σ_{jk}(t_n) → 0 as n → ∞ and thus Σ(t_n) → 0 as n → ∞, for any (t_n)_{n∈N} ∈ T. Here we have used the fact that if a sequence has the property that any subsequence has a further subsequence converging to the same limit, then the sequence converges to that limit. Hence, condition (MM1) follows.

To prove (MM2′), observe that, for any m ∈ N,

\[ Q_{t_m}\big(\|x\|\,\|y\| > \delta\big) \le \sum_{j,k=1}^{d} Q_{t_m}^{(jk)}\Big( |x^{(k)}\, y^{(j)}| > \frac{\delta}{d} \Big). \]

In view of (21) we also get

\[ \limsup_{m\to\infty} Q_{t_m}^{(jk)}\Big( |x^{(k)}\, y^{(j)}| > \frac{\delta}{d} \Big) \le Q^{jk}\Big( |x\, y| \ge \frac{\delta}{d} \Big) = 0, \]

for any δ > 0 and j, k = 1,…,d, since Q^{jk} is concentrated on the axes. Hence, lim_{m→∞} Q_{t_m}(‖x‖‖y‖ > δ) = 0 for any δ > 0, and together with the fact that (t_m)_{m∈N} is a subsequence of an arbitrary sequence (τ_n)_{n∈N} ∈ T we obtain condition (MM2′). Now, by Theorem 2.4 the proof is complete.

Proof of Theorem 2.4 This proof is an extension to the random field case of the proof of Theorem 2.3 of [6].

⇒: We have shown in the proof of Theorem 2.3 that mixing implies conditions (MM1) and (MM2′). In particular, mixing implies formula (4), which implies (MM1) and (MM2′).

⇐: For the other direction we have the following. Assume that (MM1) and (MM2′) hold. Then by Lemma 2.5 condition (MM2) holds. We need to prove that the process is mixing, i.e. for all m ∈ N, λ = (s_1,…,s_m), μ = (p_1,…,p_m) ∈ R^{ml} and θ_1, θ_2 ∈ R^{md},

\[ \lim_{n\to\infty} E\big[\exp\big(i\langle\theta_1, X_\lambda\rangle + i\langle\theta_2, X_{\mu_n}\rangle\big)\big] = E\big[\exp(i\langle\theta_1, X_\lambda\rangle)\big]\, E\big[\exp(i\langle\theta_2, X_\mu\rangle)\big], \qquad (24) \]

where X_λ := (X_{s_1},…,X_{s_m}) ∈ R^{md} and μ_n = (p_1 + t_n,…,p_m + t_n), and where (t_n)_{n∈N} is any sequence in T.

The family of R^{2md}-valued distributions of the ID random vectors (X_λ, X_{μ_n}), denoted {L(X_λ, X_{μ_n})}_{n∈N}, is tight. Indeed, let K_a be the 2md-dimensional ball around the origin with radius 2ma; then by stationarity of the process (X_t)_{t∈R^l} we have

\[ P\big((X_\lambda, X_{\mu_n}) \notin K_a\big) \le \sum_{j=1}^{m} P(\|X_{s_j}\| \ge a) + \sum_{j=1}^{m} P(\|X_{p_j + t_n}\| \ge a) \le 2m\, P(\|X_0\| \ge a), \]

hence lim_{a→∞} sup_{λ,μ,t_n} P((X_λ, X_{μ_n}) ∉ K_a) = 0. Again, notice that we are proving more than necessary, because it is sufficient to prove the above limit for sup_{λ,μ,n}. Let (α_1, Σ_1, Q_1) and (α_2, Σ_2, Q_2) be the characteristic triplets of L(X_λ) and L(X_μ), respectively. Suppose F with characteristic triplet (α, R, Q) is a cluster point of the distributions of (X_λ, X_{μ_n}). As in the proof of the previous theorem, we have that F is the limit as r → ∞ of the distributions of (X_λ, X_{μ_r}), where μ_r = (p_1 + t_r,…,p_m + t_r) and (t_r) is a subsequence of (t_n). Let (α_r, Σ_r, Q_r) be the characteristic triplets of (X_λ, X_{μ_r}). By Lemma 7.8 of [19], F is an ID distribution on R^{2md}. We denote by Φ_r(θ_1, θ_2) the characteristic function of L(X_λ, X_{μ_r}) at the point (θ_1, θ_2) ∈ R^{md} × R^{md}. The logarithm of Φ_r(θ_1, θ_2) can be written (see the proof of Theorem 2.3 above and of Theorem 2.3 of [6]) as

\[ \log \Phi_r(\theta_1, \theta_2) = i\Big\langle \begin{pmatrix}\theta_1\\ \theta_2\end{pmatrix}, \begin{pmatrix}\alpha_r^1\\ \alpha_r^2\end{pmatrix} \Big\rangle - \frac{1}{2}\Big\langle \begin{pmatrix}\theta_1\\ \theta_2\end{pmatrix}, \Sigma_r \begin{pmatrix}\theta_1\\ \theta_2\end{pmatrix}\Big\rangle + \int_{\{\|x\|<\delta,\,\|y\|<\delta\}} g(x,y)\, Q_r(dx, dy) + \int_{\{\|x\|\ge\delta \text{ or } \|y\|\ge\delta\}} g(x,y)\, Q_r(dx, dy) =: I_1 + I_2 + I_3 + I_4, \]

where g(x, y) := e^{i⟨θ_1,x⟩ + i⟨θ_2,y⟩} − 1 − i⟨θ_1, x⟩ 1_{[0,1]}(‖x‖) − i⟨θ_2, y⟩ 1_{[0,1]}(‖y‖).

We need to prove that log Φ_r(θ_1, θ_2) → log Φ_1(θ_1) + log Φ_2(θ_2) as r → ∞ for all θ_1, θ_2 ∈ R^{md}, where Φ_1 and Φ_2 are the characteristic functions of X_λ and X_μ, respectively. It is possible to see immediately that I_1 = i⟨α_1, θ_1⟩ + i⟨α_2, θ_2⟩, just by setting α_r = (α_r^1, α_r^2) = (α_1, α_2). Further, by condition (MM1), I_2 converges to −½⟨θ_1, Σ_1 θ_1⟩ − ½⟨θ_2, Σ_2 θ_2⟩ as r → ∞, where Σ_1 and Σ_2 are the md-dimensional covariance matrices of X_λ and X_μ, respectively. For I_4, we have

\[ \lim_{r\to\infty} I_4 = \int_{\{\|x\|\ge\delta \text{ or } \|y\|\ge\delta\}} g(x,y)\, Q(dx, dy) = \int_{\{\|x\|\ge\delta\}} \big( e^{i\langle\theta_1,x\rangle} - 1 - i\langle\theta_1, x\rangle 1_{[0,1]}(\|x\|) \big)\, Q_1(dx) + \int_{\{\|y\|\ge\delta\}} \big( e^{i\langle\theta_2,y\rangle} - 1 - i\langle\theta_2, y\rangle 1_{[0,1]}(\|y\|) \big)\, Q_2(dy). \]

This is because of the identity

\[ \exp\Big(\sum_{i=1}^{m} a_i + \sum_{i=1}^{m} b_i\Big) - 1 = \Big(\exp\Big(\sum_{i=1}^{m} a_i\Big) - 1\Big) + \Big(\exp\Big(\sum_{i=1}^{m} b_i\Big) - 1\Big), \]

which is valid when a_i b_j = 0 for any 1 ≤ i, j ≤ m, and because, letting x = (x^{(1)},…,x^{(m)}) ∈ (R^d)^m and y = (y^{(1)},…,y^{(m)}) ∈ (R^d)^m, we have

\[ Q(\|x\|\,\|y\| > \delta) \le \liminf_{r\to\infty} Q_r(\|x\|\,\|y\| > \delta) \le \liminf_{r\to\infty} \sum_{j,i=1}^{m} Q_{0,\,t_r + p_i - s_j}\Big( \|x^{(j)}\|\,\|y^{(i)}\| > \frac{\delta}{m} \Big) \overset{(MM2')}{=} 0, \]

for any δ > 0, which shows in particular that Q(‖x‖‖y‖ > 0) = 0. Analogously to x and y, we denote by θ_1^{(j)} and θ_2^{(j)} the j-th R^d-component of θ_1 and θ_2, respectively. Concerning I_3, consider the multivariate Taylor expansion of g(x, y) with respect to the variable (x, y) at the point (x_0, y_0) = (0, 0). For any δ > 0 small enough, we obtain

\[ I_3 = -\frac{1}{2} \int_{\{\|x\|<\delta,\,\|y\|<\delta\}} \Big[ \sum_{j=1}^{m} \langle\theta_1^{(j)}, x^{(j)}\rangle^2 + 2\sum_{j,k=1}^{m} \langle\theta_1^{(j)}, x^{(j)}\rangle \langle\theta_2^{(k)}, y^{(k)}\rangle + \sum_{j=1}^{m} \langle\theta_2^{(j)}, y^{(j)}\rangle^2 \Big]\, Q_r(dx, dy) + R, \]

where R is the remainder, which in integral form is given by

\[ R = \int_{\{\|x\|<\delta,\,\|y\|<\delta\}} \int_0^1 \frac{(1-t)^2}{2}\, D^3_{x,y}\big(g(tx, ty)\big)\, dt\, Q_r(dx, dy), \]

and so

\[ 6|R| = \Big| \int_{\{\|x\|<\delta,\,\|y\|<\delta\}} \int_0^1 3(1-t)^2 \big(\langle\theta_1, tx\rangle + \langle\theta_2, ty\rangle\big)^3 e^{i\langle\theta_1,tx\rangle + i\langle\theta_2,ty\rangle}\, dt\, Q_r(dx, dy) \Big| \le \int_{\{\|x\|<\delta,\,\|y\|<\delta\}} \big|\langle\theta_1, x\rangle + \langle\theta_2, y\rangle\big|^3\, Q_r(dx, dy) \]
\[ \le \Big\| \begin{pmatrix}\theta_1\\ \theta_2\end{pmatrix} \Big\|^3 \int_{\{\|x\|<\delta,\,\|y\|<\delta\}} \Big\| \begin{pmatrix}x\\ y\end{pmatrix} \Big\|^3\, Q_r(dx, dy) \le \Big\| \begin{pmatrix}\theta_1\\ \theta_2\end{pmatrix} \Big\|^3\, 2\delta \Big( \int_{\{0<\|x\|<\delta\}} \|x\|^2\, Q_1(dx) + \int_{\{0<\|y\|<\delta\}} \|y\|^2\, Q_2(dy) \Big), \]

and thus 6|R| < ε for any positive ε if δ is sufficiently small. Notice that our estimates are sharper than the ones of [6] because we work with the explicit integral form of the remainder. Moreover, we obtain for every j, k = 1,…,m and any δ small enough that

\[ \Big| \int_{\{\|x\|<\delta,\,\|y\|<\delta\}} \langle\theta_1^{(j)}, x^{(j)}\rangle \langle\theta_2^{(k)}, y^{(k)}\rangle\, Q_r(dx, dy) \Big| \le \|\theta_1^{(j)}\|\, \|\theta_2^{(k)}\| \int_{\{0<\|x^{(j)}\|^2 + \|y^{(k)}\|^2 \le 2\delta^2\}} \|x^{(j)}\|\, \|y^{(k)}\|\, Q_{0,\,t_r + p_k - s_j}\big(dx^{(j)}, dy^{(k)}\big) \]
\[ \le \|\theta_1^{(j)}\|\, \|\theta_2^{(k)}\| \int_{\{0<\|x^{(j)}\|^2 + \|y^{(k)}\|^2 \le 1\}} \|x^{(j)}\|\, \|y^{(k)}\|\, Q_{0,\,t_r + p_k - s_j}\big(dx^{(j)}, dy^{(k)}\big) \to 0, \quad \text{as } r \to \infty, \]

by using condition (MM2). Finally, we have

\[ \Big| -\frac{1}{2}\int_{\{\|x\|<\delta,\,\|y\|<\delta\}} \langle\theta_1, x\rangle^2\, Q_r(dx, dy) - \int_{\{0<\|x\|<\delta\}} \big( e^{i\langle\theta_1,x\rangle} - 1 - i\langle\theta_1, x\rangle 1_{[0,1]}(\|x\|) \big)\, Q_1(dx) \Big| \le J_1 + J_2. \]

For J_1, we have

\[ J_1 = \Big| \frac{1}{2}\int_{\{0<\|x\|<\delta,\,\|y\|\ge\delta\}} \langle\theta_1, x\rangle^2\, Q_r(dx, dy) \Big| \le \frac{\|\theta_1\|^2}{2} \int_{\{0<\|x\|<\delta\}} \|x\|^2\, Q_1(dx). \]

For J_2, we have

\[ J_2 = \Big| \int_{\{0<\|x\|<\delta\}} \Big( \frac{1}{2}\langle\theta_1, x\rangle^2 + e^{i\langle\theta_1,x\rangle} - 1 - i\langle\theta_1, x\rangle 1_{[0,1]}(\|x\|) \Big)\, Q_1(dx) \Big|, \]

and, by using the multivariate Taylor expansion and noticing that in this case the expansion is only in the variable x, we obtain

\[ J_2 \le \|\theta_1\|^3\, \delta \int_{\{0<\|x\|<\delta\}} \|x\|^2\, Q_1(dx). \]

Similar arguments apply to the second summand of the first term of I_3. Combining all the different results, we get

\[ \lim_{r\to\infty} \log \Phi_r(\theta_1, \theta_2) = \log \Phi_1(\theta_1) + \log \Phi_2(\theta_2), \qquad \theta_1, \theta_2 \in R^{md}, \]

and consequently we obtain the desired result in (24), which concludes the proof.

Proof of Lemma 2.5 Fix ε > 0, put B_δ = {(x, y) ∈ R^d × R^d : ‖x‖^2 + ‖y‖^2 ≤ δ^2} and R_δ = {(x, y) ∈ R^d × R^d : δ^2 < ‖x‖^2 + ‖y‖^2 ≤ 1}. Then we get

\[ \int_{\{0 < \|x\|^2 + \|y\|^2 \le 1\}} \|x\|\,\|y\|\, Q_{t_n}(dx, dy) = \int_{B_\delta} \|x\|\,\|y\|\, Q_{t_n}(dx, dy) + \int_{R_\delta} \|x\|\,\|y\|\, Q_{t_n}(dx, dy) := P_1 + P_2. \]

We will estimate the terms P_1 and P_2 separately. Using the stationarity of Q_{t_n} (due to the stationarity of (X_t)_{t∈R^l}) and the consistency of the Lévy measures, we get

\[ P_1 \le \frac{1}{2}\int_{B_\delta} \|x\|^2\, Q_{t_n}(dx, dy) + \frac{1}{2}\int_{B_\delta} \|y\|^2\, Q_{t_n}(dx, dy) \le \frac{1}{2}\int_{\{\|x\|^2 \le \delta^2,\, y\in R^d\}} \|x\|^2\, Q_{t_n}(dx, dy) + \frac{1}{2}\int_{\{\|y\|^2 \le \delta^2,\, x\in R^d\}} \|y\|^2\, Q_{t_n}(dx, dy) = \int_{\|x\|\le\delta} \|x\|^2\, Q_0(dx). \]

Here Q_0 is the Lévy measure of X_0. Thus, for some appropriately small δ we have

\[ P_1 = \int_{B_\delta} \|x\|\,\|y\|\, Q_{t_n}(dx, dy) \le \frac{\epsilon}{2}. \qquad (25) \]

For the second term, set c = min{δ^2, ε/(8q)} with q = Q_0(‖x‖^2 > δ^2/2) < ∞. Then, for C = R_δ ∩ {‖x‖‖y‖ > c}, we obtain

\[ P_2 = \int_C \|x\|\,\|y\|\, Q_{t_n}(dx, dy) + \int_{R_\delta\setminus C} \|x\|\,\|y\|\, Q_{t_n}(dx, dy) \le \frac{1}{2}\, Q_{t_n}(C) + \frac{\epsilon}{8q}\, Q_{t_n}(R_\delta\setminus C) \]
\[ \le \frac{1}{2}\, Q_{t_n}(\|x\|\,\|y\| > c) + \frac{\epsilon}{8q}\, Q_{t_n}\Big( \{(x,y): \|x\|^2 > \tfrac{\delta^2}{2}\} \cup \{(x,y): \|y\|^2 > \tfrac{\delta^2}{2}\} \Big) \le \frac{1}{2}\, Q_{t_n}(\|x\|\,\|y\| > c) + \frac{\epsilon}{4q}\, Q_{t_n}\big( \|x\|^2 > \tfrac{\delta^2}{2} \big) = \frac{1}{2}\, Q_{t_n}(\|x\|\,\|y\| > c) + \frac{\epsilon}{4}. \]

For n large enough we have Q_{t_n}(‖x‖‖y‖ > c) < ε/2 and therefore

\[ P_2 = \int_{R_\delta} \|x\|\,\|y\|\, Q_{t_n}(dx, dy) < \frac{\epsilon}{2}. \qquad (26) \]

Finally, combining (25) and (26), and letting ε → 0, we obtain the result of the lemma.

Proof of Proposition 2.13 By independence of the random fields (Y_t^k)_{t∈R^l}, k = 1,…,r, the Lévy-Khintchine representation of (X_t)_{t∈R^l} can be written as the product of the Lévy-Khintchine representations of the (Y_t^k)_{t∈R^l}, k = 1,…,r. In other words, for any n ∈ N and θ_j ∈ R^d, j = 1,…,n, we have

\[ E\Big[\exp\Big(i\sum_{j=1}^{n} \langle\theta_j, X_{t_j}\rangle\Big)\Big] = E\Big[\exp\Big(i\sum_{j=1}^{n} \Big\langle\theta_j, \sum_{k=1}^{r} Y_{t_j}^k\Big\rangle\Big)\Big] = \prod_{k=1}^{r} E\Big[\exp\Big(i\sum_{j=1}^{n} \langle\theta_j, Y_{t_j}^k\rangle\Big)\Big]. \]

Now, (X_t)_{t∈R^l} is stationary since, for any h ∈ R^l,

\[ E\Big[\exp\Big(i\sum_{j=1}^{n} \langle\theta_j, Y_{t_j}^k\rangle\Big)\Big] = E\Big[\exp\Big(i\sum_{j=1}^{n} \langle\theta_j, Y_{t_j+h}^k\rangle\Big)\Big], \]

because (Y_t^k)_{t∈R^l} is stationary, for any k = 1,…,r. Moreover, (X_t)_{t∈R^l} is ID since it is a sum of independent ID random fields. To show that (X_t)_{t∈R^l} is mixing we proceed as follows. Consider any sequence (t_n)_{n∈N} ∈ T and consider the joint random field (X_0, X_{t_n}). First, notice that the covariance function of the Gaussian part of (X_t)_{t∈R^l}, call it Σ_X(t_n), is given by the sum of the covariance functions of the (Y_t^k)_{t∈R^l}, call them Σ_{Y^k}(t_n), k = 1,…,r. Moreover, notice also that the Lévy measure in the Lévy-Khintchine formula of the law L(X_{t_n}, X_0), call it Q_{X,t_n}, is given by the sum of the Lévy measures in the Lévy-Khintchine formulae of the laws L(Y_{t_n}^k, Y_0^k), call them Q_{Y^k,t_n}, k = 1,…,r. It is possible to see this in formulae:

\[ E\big[\exp\big(i\langle\theta_1, X_{t_n}\rangle + i\langle\theta_2, X_0\rangle\big)\big] = E\Big[\exp\Big(i\Big\langle\theta_1, \sum_{k=1}^{r} Y_{t_n}^k\Big\rangle + i\Big\langle\theta_2, \sum_{k=1}^{r} Y_0^k\Big\rangle\Big)\Big] = \prod_{k=1}^{r} E\big[\exp\big(i\langle\theta_1, Y_{t_n}^k\rangle + i\langle\theta_2, Y_0^k\rangle\big)\big] \]
\[ = \prod_{k=1}^{r} \exp\Big\{ i\Big\langle \begin{pmatrix}\theta_1\\ \theta_2\end{pmatrix}, \begin{pmatrix}\alpha_1^k\\ \alpha_2^k\end{pmatrix}\Big\rangle - \frac{1}{2}\Big\langle \begin{pmatrix}\theta_1\\ \theta_2\end{pmatrix}, \Gamma^k(t_n)\begin{pmatrix}\theta_1\\ \theta_2\end{pmatrix}\Big\rangle + \int_{R^{2d}} \big( e^{i\langle\theta_1,x\rangle + i\langle\theta_2,y\rangle} - 1 - i\langle\theta_1, x\rangle 1_{[0,1]}(\|x\|) - i\langle\theta_2, y\rangle 1_{[0,1]}(\|y\|) \big)\, Q_{Y^k, t_n}(dx, dy) \Big\} \]
\[ = \exp\Big\{ i\Big\langle \begin{pmatrix}\theta_1\\ \theta_2\end{pmatrix}, \begin{pmatrix}\sum_{k=1}^{r}\alpha_1^k\\ \sum_{k=1}^{r}\alpha_2^k\end{pmatrix}\Big\rangle - \frac{1}{2}\Big\langle \begin{pmatrix}\theta_1\\ \theta_2\end{pmatrix}, \sum_{k=1}^{r}\Gamma^k(t_n)\begin{pmatrix}\theta_1\\ \theta_2\end{pmatrix}\Big\rangle + \int_{R^{2d}} \big( e^{i\langle\theta_1,x\rangle + i\langle\theta_2,y\rangle} - 1 - i\langle\theta_1, x\rangle 1_{[0,1]}(\|x\|) - i\langle\theta_2, y\rangle 1_{[0,1]}(\|y\|) \big)\, \sum_{k=1}^{r} Q_{Y^k, t_n}(dx, dy) \Big\} \]
\[ = \exp\Big\{ i\Big\langle \begin{pmatrix}\theta_1\\ \theta_2\end{pmatrix}, \begin{pmatrix}\alpha_1^X\\ \alpha_2^X\end{pmatrix}\Big\rangle - \frac{1}{2}\Big\langle \begin{pmatrix}\theta_1\\ \theta_2\end{pmatrix}, \Gamma^X(t_n)\begin{pmatrix}\theta_1\\ \theta_2\end{pmatrix}\Big\rangle + \int_{R^{2d}} \big( e^{i\langle\theta_1,x\rangle + i\langle\theta_2,y\rangle} - 1 - i\langle\theta_1, x\rangle 1_{[0,1]}(\|x\|) - i\langle\theta_2, y\rangle 1_{[0,1]}(\|y\|) \big)\, Q_{X, t_n}(dx, dy) \Big\}, \]

where

\[ \Gamma^k(t_n) = \begin{pmatrix} \Sigma_{Y^k}(0) & \Sigma_{Y^k}(t_n) \\ \Sigma_{Y^k}(t_n) & \Sigma_{Y^k}(0) \end{pmatrix}. \]

In order to prove that (X_t)_{t∈R^l} is mixing, we need to show that conditions (MM1) and (MM2′) hold. However, these conditions hold for each Σ_{Y^k}(t_n) and Q_{Y^k,t_n}, k = 1,…,r. Moreover, since both the covariance matrix function Σ_X(t_n) of the Gaussian part of (X_t)_{t∈R^l} and its Lévy measure Q_{X,t_n} are sums of the Σ_{Y^k}(t_n) and Q_{Y^k,t_n}, respectively, for k = 1,…,r, these conditions hold for them as well. Hence, (X_t)_{t∈R^l} is mixing.

Appendix B: Proofs of Sect. 3

Proof of Lemma 3.5 The argument of the proof of Lemma 3.4 of [6] can be extended to our setting. To this end, notice that we can write

\[ \begin{pmatrix} X_0 \\ X_{t_n} \end{pmatrix} = \int_S\int_{R^l} \begin{pmatrix} f(A, -s) \\ f(A, t_n - s) \end{pmatrix} \Lambda(dA, ds), \qquad t_n \in R^l. \]

From this and from Theorem 3.3, it is possible to compute the covariance matrix function of the Gaussian part of (X_t)_{t∈R^l} as

\[ \Gamma(t_n) = \begin{pmatrix} \Sigma(0) & \Sigma(t_n) \\ \Sigma(t_n)^{*} & \Sigma(0) \end{pmatrix}, \quad \text{where} \quad \Sigma(t_n) = \int_S\int_{R^l} f(A, -s)\, \Sigma\, f(A, t_n - s)^{*}\, ds\, \pi(dA), \quad t_n \in R^l. \]

The Lévy measure Q_{t_n} of L(X_0, X_{t_n}) is given by

\[ Q_{t_n}(B) = \int_S\int_{R^l}\int_{R^d} 1_B\big( f(A, -s)x,\ f(A, t_n - s)x \big)\, Q(dx)\, ds\, \pi(dA), \]

for all Borel sets B ⊂ R^{2q} \ {0}, using again Theorem 3.3. Therefore, given this explicit representation of Q_{t_n}, we have that

\[ \int_{R^{2q}} \min(1, \|x\|\,\|y\|)\, Q_{t_n}(dx, dy) = \int_S\int_{R^l}\int_{R^d} \min\big(1, \| f(A, -s)x\|\, \| f(A, t_n - s)x\|\big)\, Q(dx)\, ds\, \pi(dA). \]

Now, by using Corollary 2.7, we complete the proof.

Proof of Theorem 3.6 We generalise the arguments given in the proof of Theorem 3.5 of [6]. Following Lemma 3.5, in order to prove this theorem it is sufficient to show that

\[ \Sigma(t_n) = \int_S\int_{R^l} f(A, -s)\, \Sigma\, f(A, t_n - s)^{*}\, ds\, \pi(dA) \to 0 \]

and

\[ \int_{R^{2q}} \min(1, \|x\|\,\|y\|)\, Q_{t_n}(dx, dy) = \int_S\int_{R^l}\int_{R^d} \min\big(1, \| f(A, -s)x\|\, \| f(A, t_n - s)x\|\big)\, Q(dx)\, ds\, \pi(dA) \to 0, \]

for any (t_n)_{n∈N} ∈ T. First, we concentrate on proving that Σ(t_n) → 0. Note that by the existence of the MMA field we have (see Theorem 3.3)

\[ \int_S\int_{R^l} \big\| f(A, t - s)\, \Sigma^{1/2} \big\|^2\, ds\, \pi(dA) < \infty, \]

for any t ∈ R^l, where Σ^{1/2} denotes the unique square root of Σ. Therefore, for any t ∈ R^l, the function g_t : S × R^l → R, (A, s) ↦ ‖f(A, t − s) Σ^{1/2}‖ is an element of L^2(S × R^l, B(S × R^l), π ⊗ λ_l; R). The fact that the measure π ⊗ λ_l is σ-finite implies that every L^2-function can be approximated (in the L^2-norm) by an elementary function in


More information

of the set A. Note that the cross-section A ω :={t R + : (t,ω) A} is empty when ω/ π A. It would be impossible to have (ψ(ω), ω) A for such an ω.

of the set A. Note that the cross-section A ω :={t R + : (t,ω) A} is empty when ω/ π A. It would be impossible to have (ψ(ω), ω) A for such an ω. AN-1 Analytic sets For a discrete-time process {X n } adapted to a filtration {F n : n N}, the prime example of a stopping time is τ = inf{n N : X n B}, the first time the process enters some Borel set

More information

Empirical Processes: General Weak Convergence Theory

Empirical Processes: General Weak Convergence Theory Empirical Processes: General Weak Convergence Theory Moulinath Banerjee May 18, 2010 1 Extended Weak Convergence The lack of measurability of the empirical process with respect to the sigma-field generated

More information

7 Complete metric spaces and function spaces

7 Complete metric spaces and function spaces 7 Complete metric spaces and function spaces 7.1 Completeness Let (X, d) be a metric space. Definition 7.1. A sequence (x n ) n N in X is a Cauchy sequence if for any ɛ > 0, there is N N such that n, m

More information

Estimates for probabilities of independent events and infinite series

Estimates for probabilities of independent events and infinite series Estimates for probabilities of independent events and infinite series Jürgen Grahl and Shahar evo September 9, 06 arxiv:609.0894v [math.pr] 8 Sep 06 Abstract This paper deals with finite or infinite sequences

More information

Analysis-3 lecture schemes

Analysis-3 lecture schemes Analysis-3 lecture schemes (with Homeworks) 1 Csörgő István November, 2015 1 A jegyzet az ELTE Informatikai Kar 2015. évi Jegyzetpályázatának támogatásával készült Contents 1. Lesson 1 4 1.1. The Space

More information

2 Topology of a Metric Space

2 Topology of a Metric Space 2 Topology of a Metric Space The real number system has two types of properties. The first type are algebraic properties, dealing with addition, multiplication and so on. The other type, called topological

More information

= F (b) F (a) F (x i ) F (x i+1 ). a x 0 x 1 x n b i

= F (b) F (a) F (x i ) F (x i+1 ). a x 0 x 1 x n b i Real Analysis Problem 1. If F : R R is a monotone function, show that F T V ([a,b]) = F (b) F (a) for any interval [a, b], and that F has bounded variation on R if and only if it is bounded. Here F T V

More information

Real Analysis Math 131AH Rudin, Chapter #1. Dominique Abdi

Real Analysis Math 131AH Rudin, Chapter #1. Dominique Abdi Real Analysis Math 3AH Rudin, Chapter # Dominique Abdi.. If r is rational (r 0) and x is irrational, prove that r + x and rx are irrational. Solution. Assume the contrary, that r+x and rx are rational.

More information

Topology. Xiaolong Han. Department of Mathematics, California State University, Northridge, CA 91330, USA address:

Topology. Xiaolong Han. Department of Mathematics, California State University, Northridge, CA 91330, USA  address: Topology Xiaolong Han Department of Mathematics, California State University, Northridge, CA 91330, USA E-mail address: Xiaolong.Han@csun.edu Remark. You are entitled to a reward of 1 point toward a homework

More information

THE STONE-WEIERSTRASS THEOREM AND ITS APPLICATIONS TO L 2 SPACES

THE STONE-WEIERSTRASS THEOREM AND ITS APPLICATIONS TO L 2 SPACES THE STONE-WEIERSTRASS THEOREM AND ITS APPLICATIONS TO L 2 SPACES PHILIP GADDY Abstract. Throughout the course of this paper, we will first prove the Stone- Weierstrass Theroem, after providing some initial

More information

Filtrations, Markov Processes and Martingales. Lectures on Lévy Processes and Stochastic Calculus, Braunschweig, Lecture 3: The Lévy-Itô Decomposition

Filtrations, Markov Processes and Martingales. Lectures on Lévy Processes and Stochastic Calculus, Braunschweig, Lecture 3: The Lévy-Itô Decomposition Filtrations, Markov Processes and Martingales Lectures on Lévy Processes and Stochastic Calculus, Braunschweig, Lecture 3: The Lévy-Itô Decomposition David pplebaum Probability and Statistics Department,

More information

9 Brownian Motion: Construction

9 Brownian Motion: Construction 9 Brownian Motion: Construction 9.1 Definition and Heuristics The central limit theorem states that the standard Gaussian distribution arises as the weak limit of the rescaled partial sums S n / p n of

More information

M311 Functions of Several Variables. CHAPTER 1. Continuity CHAPTER 2. The Bolzano Weierstrass Theorem and Compact Sets CHAPTER 3.

M311 Functions of Several Variables. CHAPTER 1. Continuity CHAPTER 2. The Bolzano Weierstrass Theorem and Compact Sets CHAPTER 3. M311 Functions of Several Variables 2006 CHAPTER 1. Continuity CHAPTER 2. The Bolzano Weierstrass Theorem and Compact Sets CHAPTER 3. Differentiability 1 2 CHAPTER 1. Continuity If (a, b) R 2 then we write

More information

Lecture Characterization of Infinitely Divisible Distributions

Lecture Characterization of Infinitely Divisible Distributions Lecture 10 1 Characterization of Infinitely Divisible Distributions We have shown that a distribution µ is infinitely divisible if and only if it is the weak limit of S n := X n,1 + + X n,n for a uniformly

More information

Fuchsian groups. 2.1 Definitions and discreteness

Fuchsian groups. 2.1 Definitions and discreteness 2 Fuchsian groups In the previous chapter we introduced and studied the elements of Mob(H), which are the real Moebius transformations. In this chapter we focus the attention of special subgroups of this

More information

Course 311: Michaelmas Term 2005 Part III: Topics in Commutative Algebra

Course 311: Michaelmas Term 2005 Part III: Topics in Commutative Algebra Course 311: Michaelmas Term 2005 Part III: Topics in Commutative Algebra D. R. Wilkins Contents 3 Topics in Commutative Algebra 2 3.1 Rings and Fields......................... 2 3.2 Ideals...............................

More information

Asymptotics for posterior hazards

Asymptotics for posterior hazards Asymptotics for posterior hazards Pierpaolo De Blasi University of Turin 10th August 2007, BNR Workshop, Isaac Newton Intitute, Cambridge, UK Joint work with Giovanni Peccati (Université Paris VI) and

More information

Integration on Measure Spaces

Integration on Measure Spaces Chapter 3 Integration on Measure Spaces In this chapter we introduce the general notion of a measure on a space X, define the class of measurable functions, and define the integral, first on a class of

More information

Part V. 17 Introduction: What are measures and why measurable sets. Lebesgue Integration Theory

Part V. 17 Introduction: What are measures and why measurable sets. Lebesgue Integration Theory Part V 7 Introduction: What are measures and why measurable sets Lebesgue Integration Theory Definition 7. (Preliminary). A measure on a set is a function :2 [ ] such that. () = 2. If { } = is a finite

More information

II - REAL ANALYSIS. This property gives us a way to extend the notion of content to finite unions of rectangles: we define

II - REAL ANALYSIS. This property gives us a way to extend the notion of content to finite unions of rectangles: we define 1 Measures 1.1 Jordan content in R N II - REAL ANALYSIS Let I be an interval in R. Then its 1-content is defined as c 1 (I) := b a if I is bounded with endpoints a, b. If I is unbounded, we define c 1

More information

Topological properties

Topological properties CHAPTER 4 Topological properties 1. Connectedness Definitions and examples Basic properties Connected components Connected versus path connected, again 2. Compactness Definition and first examples Topological

More information

Metric Spaces and Topology

Metric Spaces and Topology Chapter 2 Metric Spaces and Topology From an engineering perspective, the most important way to construct a topology on a set is to define the topology in terms of a metric on the set. This approach underlies

More information

MAT 570 REAL ANALYSIS LECTURE NOTES. Contents. 1. Sets Functions Countability Axiom of choice Equivalence relations 9

MAT 570 REAL ANALYSIS LECTURE NOTES. Contents. 1. Sets Functions Countability Axiom of choice Equivalence relations 9 MAT 570 REAL ANALYSIS LECTURE NOTES PROFESSOR: JOHN QUIGG SEMESTER: FALL 204 Contents. Sets 2 2. Functions 5 3. Countability 7 4. Axiom of choice 8 5. Equivalence relations 9 6. Real numbers 9 7. Extended

More information

2 (Bonus). Let A X consist of points (x, y) such that either x or y is a rational number. Is A measurable? What is its Lebesgue measure?

2 (Bonus). Let A X consist of points (x, y) such that either x or y is a rational number. Is A measurable? What is its Lebesgue measure? MA 645-4A (Real Analysis), Dr. Chernov Homework assignment 1 (Due 9/5). Prove that every countable set A is measurable and µ(a) = 0. 2 (Bonus). Let A consist of points (x, y) such that either x or y is

More information

4 Countability axioms

4 Countability axioms 4 COUNTABILITY AXIOMS 4 Countability axioms Definition 4.1. Let X be a topological space X is said to be first countable if for any x X, there is a countable basis for the neighborhoods of x. X is said

More information

Lecture 5. If we interpret the index n 0 as time, then a Markov chain simply requires that the future depends only on the present and not on the past.

Lecture 5. If we interpret the index n 0 as time, then a Markov chain simply requires that the future depends only on the present and not on the past. 1 Markov chain: definition Lecture 5 Definition 1.1 Markov chain] A sequence of random variables (X n ) n 0 taking values in a measurable state space (S, S) is called a (discrete time) Markov chain, if

More information

PROBABILITY: LIMIT THEOREMS II, SPRING HOMEWORK PROBLEMS

PROBABILITY: LIMIT THEOREMS II, SPRING HOMEWORK PROBLEMS PROBABILITY: LIMIT THEOREMS II, SPRING 15. HOMEWORK PROBLEMS PROF. YURI BAKHTIN Instructions. You are allowed to work on solutions in groups, but you are required to write up solutions on your own. Please

More information

Brownian motion. Samy Tindel. Purdue University. Probability Theory 2 - MA 539

Brownian motion. Samy Tindel. Purdue University. Probability Theory 2 - MA 539 Brownian motion Samy Tindel Purdue University Probability Theory 2 - MA 539 Mostly taken from Brownian Motion and Stochastic Calculus by I. Karatzas and S. Shreve Samy T. Brownian motion Probability Theory

More information

CHAPTER 7. Connectedness

CHAPTER 7. Connectedness CHAPTER 7 Connectedness 7.1. Connected topological spaces Definition 7.1. A topological space (X, T X ) is said to be connected if there is no continuous surjection f : X {0, 1} where the two point set

More information

1.1. MEASURES AND INTEGRALS

1.1. MEASURES AND INTEGRALS CHAPTER 1: MEASURE THEORY In this chapter we define the notion of measure µ on a space, construct integrals on this space, and establish their basic properties under limits. The measure µ(e) will be defined

More information

Measures. 1 Introduction. These preliminary lecture notes are partly based on textbooks by Athreya and Lahiri, Capinski and Kopp, and Folland.

Measures. 1 Introduction. These preliminary lecture notes are partly based on textbooks by Athreya and Lahiri, Capinski and Kopp, and Folland. Measures These preliminary lecture notes are partly based on textbooks by Athreya and Lahiri, Capinski and Kopp, and Folland. 1 Introduction Our motivation for studying measure theory is to lay a foundation

More information

Section 2: Classes of Sets

Section 2: Classes of Sets Section 2: Classes of Sets Notation: If A, B are subsets of X, then A \ B denotes the set difference, A \ B = {x A : x B}. A B denotes the symmetric difference. A B = (A \ B) (B \ A) = (A B) \ (A B). Remarks

More information

STAT 7032 Probability Spring Wlodek Bryc

STAT 7032 Probability Spring Wlodek Bryc STAT 7032 Probability Spring 2018 Wlodek Bryc Created: Friday, Jan 2, 2014 Revised for Spring 2018 Printed: January 9, 2018 File: Grad-Prob-2018.TEX Department of Mathematical Sciences, University of Cincinnati,

More information

Math212a1413 The Lebesgue integral.

Math212a1413 The Lebesgue integral. Math212a1413 The Lebesgue integral. October 28, 2014 Simple functions. In what follows, (X, F, m) is a space with a σ-field of sets, and m a measure on F. The purpose of today s lecture is to develop the

More information

Finite-dimensional spaces. C n is the space of n-tuples x = (x 1,..., x n ) of complex numbers. It is a Hilbert space with the inner product

Finite-dimensional spaces. C n is the space of n-tuples x = (x 1,..., x n ) of complex numbers. It is a Hilbert space with the inner product Chapter 4 Hilbert Spaces 4.1 Inner Product Spaces Inner Product Space. A complex vector space E is called an inner product space (or a pre-hilbert space, or a unitary space) if there is a mapping (, )

More information

1 Topology Definition of a topology Basis (Base) of a topology The subspace topology & the product topology on X Y 3

1 Topology Definition of a topology Basis (Base) of a topology The subspace topology & the product topology on X Y 3 Index Page 1 Topology 2 1.1 Definition of a topology 2 1.2 Basis (Base) of a topology 2 1.3 The subspace topology & the product topology on X Y 3 1.4 Basic topology concepts: limit points, closed sets,

More information

Real Analysis Notes. Thomas Goller

Real Analysis Notes. Thomas Goller Real Analysis Notes Thomas Goller September 4, 2011 Contents 1 Abstract Measure Spaces 2 1.1 Basic Definitions........................... 2 1.2 Measurable Functions........................ 2 1.3 Integration..............................

More information

I forgot to mention last time: in the Ito formula for two standard processes, putting

I forgot to mention last time: in the Ito formula for two standard processes, putting I forgot to mention last time: in the Ito formula for two standard processes, putting dx t = a t dt + b t db t dy t = α t dt + β t db t, and taking f(x, y = xy, one has f x = y, f y = x, and f xx = f yy

More information

Functional Analysis. Franck Sueur Metric spaces Definitions Completeness Compactness Separability...

Functional Analysis. Franck Sueur Metric spaces Definitions Completeness Compactness Separability... Functional Analysis Franck Sueur 2018-2019 Contents 1 Metric spaces 1 1.1 Definitions........................................ 1 1.2 Completeness...................................... 3 1.3 Compactness......................................

More information

REVIEW OF ESSENTIAL MATH 346 TOPICS

REVIEW OF ESSENTIAL MATH 346 TOPICS REVIEW OF ESSENTIAL MATH 346 TOPICS 1. AXIOMATIC STRUCTURE OF R Doğan Çömez The real number system is a complete ordered field, i.e., it is a set R which is endowed with addition and multiplication operations

More information

A SHORT INTRODUCTION TO BANACH LATTICES AND

A SHORT INTRODUCTION TO BANACH LATTICES AND CHAPTER A SHORT INTRODUCTION TO BANACH LATTICES AND POSITIVE OPERATORS In tis capter we give a brief introduction to Banac lattices and positive operators. Most results of tis capter can be found, e.g.,

More information

Mathematics for Economists

Mathematics for Economists Mathematics for Economists Victor Filipe Sao Paulo School of Economics FGV Metric Spaces: Basic Definitions Victor Filipe (EESP/FGV) Mathematics for Economists Jan.-Feb. 2017 1 / 34 Definitions and Examples

More information

5 Birkhoff s Ergodic Theorem

5 Birkhoff s Ergodic Theorem 5 Birkhoff s Ergodic Theorem Birkhoff s Ergodic Theorem extends the validity of Kolmogorov s strong law to the class of stationary sequences of random variables. Stationary sequences occur naturally even

More information

In terms of measures: Exercise 1. Existence of a Gaussian process: Theorem 2. Remark 3.

In terms of measures: Exercise 1. Existence of a Gaussian process: Theorem 2. Remark 3. 1. GAUSSIAN PROCESSES A Gaussian process on a set T is a collection of random variables X =(X t ) t T on a common probability space such that for any n 1 and any t 1,...,t n T, the vector (X(t 1 ),...,X(t

More information

NATIONAL BOARD FOR HIGHER MATHEMATICS. Research Scholarships Screening Test. Saturday, January 20, Time Allowed: 150 Minutes Maximum Marks: 40

NATIONAL BOARD FOR HIGHER MATHEMATICS. Research Scholarships Screening Test. Saturday, January 20, Time Allowed: 150 Minutes Maximum Marks: 40 NATIONAL BOARD FOR HIGHER MATHEMATICS Research Scholarships Screening Test Saturday, January 2, 218 Time Allowed: 15 Minutes Maximum Marks: 4 Please read, carefully, the instructions that follow. INSTRUCTIONS

More information

Chapter 1. Measure Spaces. 1.1 Algebras and σ algebras of sets Notation and preliminaries

Chapter 1. Measure Spaces. 1.1 Algebras and σ algebras of sets Notation and preliminaries Chapter 1 Measure Spaces 1.1 Algebras and σ algebras of sets 1.1.1 Notation and preliminaries We shall denote by X a nonempty set, by P(X) the set of all parts (i.e., subsets) of X, and by the empty set.

More information

MATH MEASURE THEORY AND FOURIER ANALYSIS. Contents

MATH MEASURE THEORY AND FOURIER ANALYSIS. Contents MATH 3969 - MEASURE THEORY AND FOURIER ANALYSIS ANDREW TULLOCH Contents 1. Measure Theory 2 1.1. Properties of Measures 3 1.2. Constructing σ-algebras and measures 3 1.3. Properties of the Lebesgue measure

More information

PROBLEMS. (b) (Polarization Identity) Show that in any inner product space

PROBLEMS. (b) (Polarization Identity) Show that in any inner product space 1 Professor Carl Cowen Math 54600 Fall 09 PROBLEMS 1. (Geometry in Inner Product Spaces) (a) (Parallelogram Law) Show that in any inner product space x + y 2 + x y 2 = 2( x 2 + y 2 ). (b) (Polarization

More information

Introduction to Topology

Introduction to Topology Introduction to Topology Randall R. Holmes Auburn University Typeset by AMS-TEX Chapter 1. Metric Spaces 1. Definition and Examples. As the course progresses we will need to review some basic notions about

More information

SPECTRAL THEOREM FOR SYMMETRIC OPERATORS WITH COMPACT RESOLVENT

SPECTRAL THEOREM FOR SYMMETRIC OPERATORS WITH COMPACT RESOLVENT SPECTRAL THEOREM FOR SYMMETRIC OPERATORS WITH COMPACT RESOLVENT Abstract. These are the letcure notes prepared for the workshop on Functional Analysis and Operator Algebras to be held at NIT-Karnataka,

More information

Analysis Qualifying Exam

Analysis Qualifying Exam Analysis Qualifying Exam Spring 2017 Problem 1: Let f be differentiable on R. Suppose that there exists M > 0 such that f(k) M for each integer k, and f (x) M for all x R. Show that f is bounded, i.e.,

More information

Gaussian vectors and central limit theorem

Gaussian vectors and central limit theorem Gaussian vectors and central limit theorem Samy Tindel Purdue University Probability Theory 2 - MA 539 Samy T. Gaussian vectors & CLT Probability Theory 1 / 86 Outline 1 Real Gaussian random variables

More information

Lecture 9. d N(0, 1). Now we fix n and think of a SRW on [0,1]. We take the k th step at time k n. and our increments are ± 1

Lecture 9. d N(0, 1). Now we fix n and think of a SRW on [0,1]. We take the k th step at time k n. and our increments are ± 1 Random Walks and Brownian Motion Tel Aviv University Spring 011 Lecture date: May 0, 011 Lecture 9 Instructor: Ron Peled Scribe: Jonathan Hermon In today s lecture we present the Brownian motion (BM).

More information

Lebesgue Measure on R n

Lebesgue Measure on R n CHAPTER 2 Lebesgue Measure on R n Our goal is to construct a notion of the volume, or Lebesgue measure, of rather general subsets of R n that reduces to the usual volume of elementary geometrical sets

More information

CHAPTER 6. Differentiation

CHAPTER 6. Differentiation CHPTER 6 Differentiation The generalization from elementary calculus of differentiation in measure theory is less obvious than that of integration, and the methods of treating it are somewhat involved.

More information

Linear Ordinary Differential Equations

Linear Ordinary Differential Equations MTH.B402; Sect. 1 20180703) 2 Linear Ordinary Differential Equations Preliminaries: Matrix Norms. Denote by M n R) the set of n n matrix with real components, which can be identified the vector space R

More information

1. Stochastic Processes and filtrations

1. Stochastic Processes and filtrations 1. Stochastic Processes and 1. Stoch. pr., A stochastic process (X t ) t T is a collection of random variables on (Ω, F) with values in a measurable space (S, S), i.e., for all t, In our case X t : Ω S

More information

LEBESGUE MEASURE AND L2 SPACE. Contents 1. Measure Spaces 1 2. Lebesgue Integration 2 3. L 2 Space 4 Acknowledgments 9 References 9

LEBESGUE MEASURE AND L2 SPACE. Contents 1. Measure Spaces 1 2. Lebesgue Integration 2 3. L 2 Space 4 Acknowledgments 9 References 9 LBSGU MASUR AND L2 SPAC. ANNI WANG Abstract. This paper begins with an introduction to measure spaces and the Lebesgue theory of measure and integration. Several important theorems regarding the Lebesgue

More information

MATH 722, COMPLEX ANALYSIS, SPRING 2009 PART 5

MATH 722, COMPLEX ANALYSIS, SPRING 2009 PART 5 MATH 722, COMPLEX ANALYSIS, SPRING 2009 PART 5.. The Arzela-Ascoli Theorem.. The Riemann mapping theorem Let X be a metric space, and let F be a family of continuous complex-valued functions on X. We have

More information

for all subintervals I J. If the same is true for the dyadic subintervals I D J only, we will write ϕ BMO d (J). In fact, the following is true

for all subintervals I J. If the same is true for the dyadic subintervals I D J only, we will write ϕ BMO d (J). In fact, the following is true 3 ohn Nirenberg inequality, Part I A function ϕ L () belongs to the space BMO() if sup ϕ(s) ϕ I I I < for all subintervals I If the same is true for the dyadic subintervals I D only, we will write ϕ BMO

More information

On Kusuoka Representation of Law Invariant Risk Measures

On Kusuoka Representation of Law Invariant Risk Measures MATHEMATICS OF OPERATIONS RESEARCH Vol. 38, No. 1, February 213, pp. 142 152 ISSN 364-765X (print) ISSN 1526-5471 (online) http://dx.doi.org/1.1287/moor.112.563 213 INFORMS On Kusuoka Representation of

More information

MH 7500 THEOREMS. (iii) A = A; (iv) A B = A B. Theorem 5. If {A α : α Λ} is any collection of subsets of a space X, then

MH 7500 THEOREMS. (iii) A = A; (iv) A B = A B. Theorem 5. If {A α : α Λ} is any collection of subsets of a space X, then MH 7500 THEOREMS Definition. A topological space is an ordered pair (X, T ), where X is a set and T is a collection of subsets of X such that (i) T and X T ; (ii) U V T whenever U, V T ; (iii) U T whenever

More information

Universal convex coverings

Universal convex coverings Bull. London Math. Soc. 41 (2009) 987 992 C 2009 London Mathematical Society doi:10.1112/blms/bdp076 Universal convex coverings Roland Bacher Abstract In every dimension d 1, we establish the existence

More information

Problem Set 2: Solutions Math 201A: Fall 2016

Problem Set 2: Solutions Math 201A: Fall 2016 Problem Set 2: s Math 201A: Fall 2016 Problem 1. (a) Prove that a closed subset of a complete metric space is complete. (b) Prove that a closed subset of a compact metric space is compact. (c) Prove that

More information

Riemann integral and volume are generalized to unbounded functions and sets. is an admissible set, and its volume is a Riemann integral, 1l E,

Riemann integral and volume are generalized to unbounded functions and sets. is an admissible set, and its volume is a Riemann integral, 1l E, Tel Aviv University, 26 Analysis-III 9 9 Improper integral 9a Introduction....................... 9 9b Positive integrands................... 9c Special functions gamma and beta......... 4 9d Change of

More information

Stochastic Processes. Winter Term Paolo Di Tella Technische Universität Dresden Institut für Stochastik

Stochastic Processes. Winter Term Paolo Di Tella Technische Universität Dresden Institut für Stochastik Stochastic Processes Winter Term 2016-2017 Paolo Di Tella Technische Universität Dresden Institut für Stochastik Contents 1 Preliminaries 5 1.1 Uniform integrability.............................. 5 1.2

More information

Solutions to Tutorial 8 (Week 9)

Solutions to Tutorial 8 (Week 9) The University of Sydney School of Mathematics and Statistics Solutions to Tutorial 8 (Week 9) MATH3961: Metric Spaces (Advanced) Semester 1, 2018 Web Page: http://www.maths.usyd.edu.au/u/ug/sm/math3961/

More information

1. If 1, ω, ω 2, -----, ω 9 are the 10 th roots of unity, then (1 + ω) (1 + ω 2 ) (1 + ω 9 ) is A) 1 B) 1 C) 10 D) 0

1. If 1, ω, ω 2, -----, ω 9 are the 10 th roots of unity, then (1 + ω) (1 + ω 2 ) (1 + ω 9 ) is A) 1 B) 1 C) 10 D) 0 4 INUTES. If, ω, ω, -----, ω 9 are the th roots of unity, then ( + ω) ( + ω ) ----- ( + ω 9 ) is B) D) 5. i If - i = a + ib, then a =, b = B) a =, b = a =, b = D) a =, b= 3. Find the integral values for

More information

The Hopf argument. Yves Coudene. IRMAR, Université Rennes 1, campus beaulieu, bat Rennes cedex, France

The Hopf argument. Yves Coudene. IRMAR, Université Rennes 1, campus beaulieu, bat Rennes cedex, France The Hopf argument Yves Coudene IRMAR, Université Rennes, campus beaulieu, bat.23 35042 Rennes cedex, France yves.coudene@univ-rennes.fr slightly updated from the published version in Journal of Modern

More information

On non-expansivity of topical functions by a new pseudo-metric

On non-expansivity of topical functions by a new pseudo-metric https://doi.org/10.1007/s40065-018-0222-8 Arabian Journal of Mathematics H. Barsam H. Mohebi On non-expansivity of topical functions by a new pseudo-metric Received: 9 March 2018 / Accepted: 10 September

More information

GAUSSIAN PROCESSES; KOLMOGOROV-CHENTSOV THEOREM

GAUSSIAN PROCESSES; KOLMOGOROV-CHENTSOV THEOREM GAUSSIAN PROCESSES; KOLMOGOROV-CHENTSOV THEOREM STEVEN P. LALLEY 1. GAUSSIAN PROCESSES: DEFINITIONS AND EXAMPLES Definition 1.1. A standard (one-dimensional) Wiener process (also called Brownian motion)

More information

Metric Spaces. Exercises Fall 2017 Lecturer: Viveka Erlandsson. Written by M.van den Berg

Metric Spaces. Exercises Fall 2017 Lecturer: Viveka Erlandsson. Written by M.van den Berg Metric Spaces Exercises Fall 2017 Lecturer: Viveka Erlandsson Written by M.van den Berg School of Mathematics University of Bristol BS8 1TW Bristol, UK 1 Exercises. 1. Let X be a non-empty set, and suppose

More information

Lecture 12. F o s, (1.1) F t := s>t

Lecture 12. F o s, (1.1) F t := s>t Lecture 12 1 Brownian motion: the Markov property Let C := C(0, ), R) be the space of continuous functions mapping from 0, ) to R, in which a Brownian motion (B t ) t 0 almost surely takes its value. Let

More information

Appendix B Convex analysis

Appendix B Convex analysis This version: 28/02/2014 Appendix B Convex analysis In this appendix we review a few basic notions of convexity and related notions that will be important for us at various times. B.1 The Hausdorff distance

More information