Stable Lévy motion with values in the Skorokhod space: construction and approximation


Stable Lévy motion with values in the Skorokhod space: construction and approximation

arXiv: v1 [math.PR] 6 Sep 2018

Raluca M. Balan*        Becem Saidani†

September 5, 2018

Abstract

In this article, we introduce an infinite-dimensional analogue of the α-stable Lévy motion, defined as a Lévy process Z = {Z(t)}_{t≥0} with values in the space D of càdlàg functions on [0,1], equipped with Skorokhod's J_1 topology. For each t ≥ 0, Z(t) is an α-stable process with sample paths in D, denoted by {Z(t,s)}_{s∈[0,1]}. Intuitively, Z(t,s) gives the value of the process Z at time t and location s in space. This process is closely related to the concept of regular variation for random elements in D introduced in [9] and [13]. We give a construction of Z based on a Poisson random measure, and we show that Z has a modification whose sample paths are càdlàg functions on [0,∞) with values in D. Finally, we prove a functional limit theorem which identifies the distribution of this modification as the limit of the partial sum sequence {S_n(t) = ∑_{i=1}^{[nt]} X_i}_{t≥0}, suitably normalized and centered, associated to a sequence (X_i)_{i≥1} of i.i.d. regularly varying elements in D.

MSC 2010: Primary 60F17; Secondary 60G51, 60G52

Keywords: functional limit theorems; Skorokhod space; Lévy processes; regular variation

Contents

1 Introduction 2
2 Càdlàg functions with values in D
  2.1 The space D([0,1]; D)
  2.2 The space D([0,∞); D)

*Corresponding author. Department of Mathematics and Statistics, University of Ottawa, 585 King Edward Avenue, Ottawa, ON, K1N 6N5, Canada. E-mail address: rbalan@uottawa.ca. Research supported by a grant from the Natural Sciences and Engineering Research Council of Canada.
†Department of Mathematics and Statistics, University of Ottawa, 585 King Edward Avenue, Ottawa, ON, K1N 6N5, Canada. E-mail address: bsaid053@uottawa.ca

3 Construction: proof of Theorem 1.3
  3.1 The compound Poisson building blocks
  3.2 Construction in the case α < 1
  3.3 Construction in the case α > 1
4 Approximation: proof of Theorem 1.5
  4.1 Point processes on Polish spaces
  4.2 Continuity of summation functional
  4.3 Convergence of truncated sums
  4.4 Approximation in the case α < 1
  4.5 Approximation in the case α > 1
5 Simulations 36
A Some auxiliary results 39
B The α-stable Lévy sheet 41
C A result about Brownian motion 43

1 Introduction

Regularly varying random variables play an important role in probability theory, being used as models for heavy-tailed observations (observations which may assume extreme values with high probability). In many applications, one is often interested in the sum of such variables. For instance, if X_i denotes the number of internet transactions performed on a secure website on day i, it might be of interest to study the total number ∑_{i=1}^n X_i of transactions performed on this website in n days. If (X_i)_{i≥1} are independent and identically distributed (i.i.d.) regularly varying random variables, then, with suitable normalization and centering, the partial sum process {S_n(t) = ∑_{i=1}^{[nt]} X_i}_{t≥0} converges as n → ∞ to the α-stable Lévy motion, a process which plays the same central role for heavy-tailed observations as the Brownian motion for observations with finite variance.

With the rapid advancement of technology, data is no longer observed at fixed moments of time, but continuously over a fixed interval in time or space (which we may identify with the interval [0,1]). If this measurement is expected to exhibit a sudden drop or increase over this fixed interval, then an appropriate model for it could be a random element in an infinite-dimensional space, such as the Skorokhod space D = D([0,1]) of càdlàg functions on [0,1] (i.e. right-continuous functions with left limits).
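To make the scalar normalization concrete, here is a small simulation sketch (our own illustration, not from the paper): for i.i.d. Pareto(α) variables with α < 1, a standard example of regular variation, the partial sums S_n grow like n^{1/α}, so S_n/n^{1/α} remains stochastically bounded, which is exactly the scaling under which the α-stable limit appears.

```python
import random

# Hedged sketch: Pareto(alpha) with P(X > x) = x^{-alpha}, x >= 1, is a
# regularly varying law of index alpha; the choice alpha = 0.7 is arbitrary.

def pareto(alpha, rng):
    """One Pareto(alpha) draw via inversion of the tail function."""
    return rng.random() ** (-1.0 / alpha)

def normalized_partial_sum(n, alpha, rng):
    """S_n / a_n with the stable normalization a_n = n^{1/alpha}."""
    s = sum(pareto(alpha, rng) for _ in range(n))
    return s / n ** (1.0 / alpha)

rng = random.Random(12345)
alpha = 0.7  # alpha < 1, so no centering is needed
for n in [10**2, 10**3, 10**4]:
    vals = sorted(normalized_partial_sum(n, alpha, rng) for _ in range(200))
    # the empirical median of S_n / n^{1/alpha} stabilizes as n grows
    print(n, round(vals[len(vals) // 2], 2))
```

Dividing by n instead of n^{1/α} would send the ratio to infinity, since E(X) = ∞ when α < 1.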
For instance, if the number of internet transactions is observed continuously during the 24-hour duration of the day (identified with the interval [0,1]) and X_i(s) is the number recorded at time s of day i, then we may assume that X_i = {X_i(s)}_{s∈[0,1]} is a process with càdlàg sample paths. Another example is when X_i(s) represents the energy produced by a wind turbine on day i at location s of a large wind farm situated on the ocean shore, modeled by the interval [0,1]. In these examples, we are interested in studying the behaviour of the partial sum process {∑_{i=1}^n X_i(s); s ∈ [0,1]}, which gives the full information about the total number of transactions (or the total amount of energy) for n days, at each time s during the 24-hour period (or at each location s on the shore).

The goal of this article is to study the macroscopic limit (as time gets large) of partial sum sequences such as those appearing in the previous examples, associated to i.i.d. regularly varying elements in D. It turns out that this limit is an interesting object in itself, which deserves special attention and will be called a D-valued α-stable Lévy motion, by analogy with its R^d-valued counterpart. Our methods were deeply inspired by Resnick's beautiful presentation of the construction of the classical α-stable Lévy motion with values in R^d, and of its approximation by partial sums of i.i.d. regularly varying vectors, given in [21]. Our aim is to extend these results to the infinite-dimensional setting, using the concept of regular variation for random elements in D introduced in [9], and developed further in [13]. More precisely, our goals are: (i) to construct a Lévy process {Z(t)}_{t≥0} with values in D, whose marginal Z(t) = {Z(t,s)}_{s∈[0,1]} is a càdlàg α-stable process (with a specified distribution); (ii) to show that this process has a modification whose sample paths are càdlàg functions from [0,∞) to D (where D is endowed with Skorokhod's J_1-topology); and (iii) to identify this modification as the limit as n → ∞ of the partial sum process {S_n(t) = ∑_{i=1}^{[nt]} X_i}_{t≥0} associated to i.i.d. regularly varying random elements (X_i)_{i≥1} in D. We believe that this Lévy process is a natural infinite-dimensional analogue of the α-stable Lévy motion with values in R^d, with which it shares several properties, like independence and stationarity of increments, self-similarity, and α-stable marginal distributions. We should emphasize that the D-valued Lévy motion constructed in the present article is more general than the two-parameter α-stable Lévy sheet introduced in [19] (see Appendix B).
Before we introduce the definition of a Lévy process with values in D, we need to recall some basic facts about the space D. We denote by ‖·‖ the supremum norm on D, given by ‖x‖ = sup_{s∈[0,1]} |x(s)|, and by S_D = {x ∈ D; ‖x‖ = 1} the unit sphere in D. With this norm, D is a Banach space, but it is not separable. For this reason, the theory of random elements in separable Banach spaces (as presented for instance in [17]) or the functional limit theorems mentioned in Section 5 of [26] cannot be applied to D. We endow D with Skorokhod's J_1-topology, introduced in [25]. There are two equivalent distances which induce this topology. We denote by d^0_{J_1} the distance given by (12.16) of [5], under which D is a Polish space (i.e. a complete separable metric space). Note that a function x ∈ D has a countable set of discontinuities, which we denote by Disc(x). We let D be the Borel σ-field on D. Since D coincides with the σ-field generated by the projections π_s : D → R, s ∈ [0,1], given by π_s(x) = x(s), a function X : Ω → D defined on a probability space (Ω, F, P) is a random element in D if X(s) is F-measurable for any s ∈ [0,1]. For any s_1, ..., s_m ∈ [0,1], the projection π_{s_1,...,s_m} : D → R^m is defined by π_{s_1,...,s_m}(x) = (x(s_1), ..., x(s_m)). We refer to [4, 5] for more details. The analogue of the polar-coordinate transformation is the map T : D_0 → (0,∞) × S_D given by T(x) = (‖x‖, x/‖x‖), where D_0 = D \ {0}. Let ν_α be the measure on (0,∞] given by:

  ν_α(dr) = α r^{−α−1} 1_{(0,∞)}(r) dr.  (1)
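The polar-coordinate map T can be illustrated on a path represented by its values on a finite grid of [0,1]; the following minimal sketch (our own, with an arbitrary step function as example) is not from the paper.

```python
import numpy as np

def polar(x):
    """T(x) = (r, z) with r = ||x|| the sup-norm and z = x/r, so ||z|| = 1."""
    r = np.max(np.abs(x))
    if r == 0.0:
        raise ValueError("T is only defined on D \\ {0}")
    return r, x / r

grid = np.linspace(0.0, 1.0, 101)
x = np.where(grid < 0.4, 1.0, -3.0)   # step function with a jump at s = 0.4
r, z = polar(x)
print(r)                   # sup-norm of x
print(np.max(np.abs(z)))   # z lies on the unit sphere S_D
```

Since r · z recovers x, the pair (r, z) carries the same information as the path itself, which is what makes the product form c ν_α × Γ_1 of the limiting measure convenient.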

Definition 1.1. Let ν be a measure on (D, D) such that ν({0}) = 0 and

  ν̄ := ν ∘ T^{−1} = c ν_α × Γ_1  (2)

for some c > 0, α ∈ (0,2), α ≠ 1, and a probability measure Γ_1 on S_D. A collection {Z(t)}_{t≥0} of random elements in D, defined on a probability space (Ω, F, P), is a D-valued α-stable Lévy motion (corresponding to ν) if:
(i) Z(0) = 0 a.s.;
(ii) Z(t_2) − Z(t_1), ..., Z(t_K) − Z(t_{K−1}) are independent, for any 0 ≤ t_1 < ... < t_K, K ≥ 3;
(iii) Z(t_2) − Z(t_1) =_d Z(t_2 − t_1) for any 0 ≤ t_1 < t_2, where =_d means equality in distribution;
(iv) for any t > 0, Z(t) = {Z(t,s)}_{s∈[0,1]} is an α-stable process (with sample paths in D) such that for any s_1, ..., s_m ∈ [0,1] and for any u = (u_1, ..., u_m) ∈ R^m,

  E(e^{i u_1 Z(t,s_1) + ... + i u_m Z(t,s_m)}) = exp{ t ∫_{R^m} (e^{i u·y} − 1) μ_{s_1,...,s_m}(dy) } if α < 1,  (3)

  E(e^{i u_1 Z(t,s_1) + ... + i u_m Z(t,s_m)}) = exp{ t ∫_{R^m} (e^{i u·y} − 1 − i u·y) μ_{s_1,...,s_m}(dy) } if α > 1,  (4)

where y = (y_1, ..., y_m), u·y = ∑_{i=1}^m u_i y_i, and μ_{s_1,...,s_m} = ν ∘ π^{−1}_{s_1,...,s_m}.

From this definition, it follows that Z(t,s) has an α-stable S_α(t^{1/α} σ_s, β_s, 0)-distribution, for some constants σ_s > 0 and β_s ∈ [−1,1] depending on s (see Proposition 3.4 below). Note that property (2) implies that ∫_{D_0} (‖x‖^2 ∧ 1) ν(dx) < ∞, by a change of variables.

Remark 1.2. The authors of [8] considered α-stable Lévy processes {Z(t)}_{t≥0} with values in a normed cone K with a sub-invariant norm. By definition, these processes have independent and stationary StαS increments, where StαS stands for strictly α-stable. If α < 1, a D-valued α-stable Lévy motion (in the sense of Definition 1.1) is an α-stable Lévy process on the cone K = D, and therefore has the series representation given by Theorem 3.10 of [8]. (Note that the space D equipped with d^0_{J_1} is a normed cone, as specified by Definition 2.6 of [8], and the sup-norm is sub-invariant, as defined by relation (2.9) of [8], i.e. d^0_{J_1}(x + h, x) ≤ ‖h‖ for any x, h ∈ D.)
If we denote by m_{t_1,...,t_n} the law of (Z(t_1), ..., Z(t_n)) on (D^n, D^n), then by properties (i)–(iii), the family {m_{t_1,...,t_n}} of these laws is consistent in the sense of Kolmogorov (see Theorem 3.7 of [18] for a statement of Kolmogorov's consistency theorem for random elements in a Polish space). But it is not obvious how to ensure that property (iv) also holds, i.e. it is not clear how to construct a càdlàg process {Z(t,s)}_{s∈[0,1]} with finite-dimensional distributions specified by (3) and (4). Our first main result will tackle precisely this problem. Moreover, we will show that the process {Z(t)}_{t≥0} has a modification {Z̃(t)}_{t≥0} with sample paths in D([0,∞); D), where D([0,∞); D) is the set of functions x : [0,∞) → D which are right-continuous and have left limits with respect to J_1. We introduce the following assumptions on the probability measure Γ_1.

Assumption A. For any s ∈ [0,1], Γ_1({z ∈ S_D; z(s) = 0}) = 0.

Assumption B. For any s ∈ [0,1], Γ_1({z ∈ S_D; s ∈ Disc(z)}) = 0.

We will prove the following result.

Theorem 1.3. Suppose that Assumption A holds.
a) For any measure ν on (D, D) such that ν({0}) = 0 and (2) holds, there exists a D-valued α-stable Lévy motion {Z(t)}_{t≥0} (corresponding to the measure ν).
b) If α > 1, suppose that Assumption B holds. Then, there exists a collection {Z̃(t)}_{t≥0} of random elements in D such that P(Z̃(t) = Z(t)) = 1 for any t ≥ 0, and the map t ↦ Z̃(t) is in D([0,∞); D) with probability 1.

We now turn to our second result, the approximation theorem. Before speaking about regular variation on D, we need to recall some classical notions. A non-negative random variable X is regularly varying of index α (for some α > 0) if its tail function F̄(x) = P(X > x) is so (hence the name). A useful characterization of this property is expressed in terms of the vague convergence nP(X/a_n ∈ ·) →_v ν_α of Radon measures on the space (0,∞], for some sequence (a_n)_{n≥1} ⊂ R_+ with a_n → ∞. This property can be extended to higher dimensions. A random vector X in R^d is regularly varying if nP(X/a_n ∈ ·) →_v μ on R^d_0 = [−∞,∞]^d \ {0}, for a non-null Radon measure μ on R^d_0 with μ(R^d_0 \ R^d) = 0 and a sequence (a_n)_{n≥1} ⊂ R_+ with a_n → ∞; or equivalently,

  nP((|X|/a_n, X/|X|) ∈ ·) →_v c ν_α × Γ on (0,∞] × S_d,  (5)

for some α > 0, c > 0 and a probability measure Γ on the unit sphere S_d = {x ∈ R^d; |x| = 1} with respect to the Euclidean norm |·| on R^d. We refer to [20, 21] for more details. Briefly speaking, the regular variation of a random element in R^d reduces to the vague convergence of a sequence of Radon measures on the space R^d_0, defined by removing 0 and adding the ∞-hyperplanes. In the case of random elements in D, there is no natural analogue of an ∞-hyperplane. To avoid this problem, the authors of [9, 13] considered D̄_0 = (0,∞] × S_D. Another problem is the fact that vague convergence is defined only for Radon measures on locally compact spaces with countable basis, and D̄_0 is not locally compact. This problem is solved by using the concept of ŵ-convergence (defined in Section 4.1 below).
Note that D̄_0 is a Polish space equipped with the distance d_{D̄_0} given by:

  d_{D̄_0}((r,z), (r',z')) = |1/r − 1/r'| ∨ d^0_{J_1}(z, z'),  (6)

for any (r,z), (r',z') ∈ D̄_0, with the convention 1/∞ = 0. With this distance, a set of the form (ε,∞] × S_D is bounded in D̄_0. This fact plays an important role in this article. Since ‖·‖ is J_1-continuous on D, T is a homeomorphism. Similarly to [22] (but unlike [13, 7]), we prefer not to identify D̄_0 with (0,∞) × S_D. Therefore, we will not say that D_0 is a subset of D̄_0. We are now ready to give the definition of regular variation on D.

Definition 1.4. A random element X = {X(s)}_{s∈[0,1]} in D is regularly varying (and we write X ∈ RV({a_n}, ν̄, D̄_0)) if there exist a sequence (a_n)_{n≥1} ⊂ R_+ with a_n → ∞ and a non-null boundedly finite measure ν̄ on D̄_0 with ν̄(D̄_0 \ T(D_0)) = 0 such that

  nP((‖X‖/a_n, X/‖X‖) ∈ ·) →_ŵ ν̄ on D̄_0.

In this case, we say that ν̄ is the limiting measure of X. Since ν̄ is non-null, there exists a_0 > 0 such that ν̄((a_0,∞) × S_D) > 0. Without loss of generality, we assume that a_0 = 1. We let c = ν̄((1,∞) × S_D). By Remark 3 of [13], the measure ν̄ in Definition 1.4 has the following property: there exists α > 0 such that ν̄(aA) = a^{−α} ν̄(A) for any a > 0 and A ∈ B(D̄_0), where aA = {(ar, z); (r,z) ∈ A}. We say that α is the index of X. In Lemma A.1 (Appendix A), we prove that the measure ν̄ in Definition 1.4 must be the product measure:

  ν̄ = c ν_α × Γ_1,  (7)

where Γ_1 is a probability measure on S_D (called the spectral measure of X), given by

  Γ_1(S) = ν̄((1,∞) × S)/c for all S ∈ B(S_D).  (8)

Here we let B(D̄_0) and B(S_D) be the classes of Borel sets of D̄_0, respectively S_D. If X ∈ RV({a_n}, ν̄, D̄_0), then ‖X‖ is regularly varying of index α: for any ε > 0,

  nP(‖X‖ > a_n ε) → c ε^{−α}, as n → ∞,  (9)

with the same constant c > 0 as above. From this we infer that if α > 1, then E‖X‖ < ∞, and hence E|X(s)| < ∞ for all s ∈ [0,1]. In this case, we define E[X] = {E[X(s)]}_{s∈[0,1]}. In [22], it is proved that if S_n = ∑_{i=1}^n X_i, where (X_i)_{i≥1} are i.i.d. random elements in D with X_1 ∈ RV({a_n}, ν̄, D̄_0) and α > 1, then (1/a_n)(S_n − E(S_n)) →_d N in D, where E(S_n) = {E(S_n(s))}_{s∈[0,1]} and N = {N(s)}_{s∈[0,1]} is an α-stable process with sample paths in D (whose distribution is completely identified). We are now ready to state our second main result, which is an extension of Theorem 1.1 of [22] to functional convergence. We let D([0,∞); D) be the set of càdlàg functions on [0,∞) with values in D, equipped with the Skorokhod distance d_{∞,D} (described in Section 2 below).

Theorem 1.5. Let X, (X_i)_{i≥1} be i.i.d. random elements in D such that X ∈ RV({a_n}, ν̄, D̄_0). Let α be the index of X and Γ_1 be the spectral measure of X. Suppose that α ∈ (0,2), α ≠ 1, and Γ_1 satisfies Assumptions A and B. For any n ≥ 1 and t ≥ 0, let S_n(t) = {S_n(t,s)}_{s∈[0,1]}, where S_n(t,s) = a_n^{−1} ∑_{i=1}^{[nt]} X_i(s) for s ∈ [0,1].
Let {Z̃(t)}_{t≥0} be the process constructed in Theorem 1.3.b), which may not be defined on the same probability space as the sequence (X_i)_{i≥1}.
a) If α < 1, then S_n(·) →_d Z̃(·) in D([0,∞); D).
b) If α > 1, let S̄_n(t) = S_n(t) − E[S_n(t)], where E[S_n(t)] = {E[S_n(t,s)]}_{s∈[0,1]}. If

  lim_{ε↓0} limsup_{n→∞} P( max_{k≤[nT]} ‖ ∑_{i=1}^k ( X_i 1_{{‖X_i‖ ≤ a_n ε}} − E[X_i 1_{{‖X_i‖ ≤ a_n ε}}] ) ‖ > a_n δ ) = 0  (10)

for any δ > 0 and T > 0, then S̄_n(·) →_d Z̃(·) in D([0,∞); D).

Assumption B is the same as Condition A-(i) of [22], whereas (10) is a stronger form of Condition A-(ii) of [22], which is needed for the functional convergence.

We use the following notation. If (X_n)_{n≥1} and X are random elements in a metric space (E, d), we write X_n →_d X if (X_n)_n converges in distribution to X, and X_n →_p X if P(d(X_n, X) > ε) → 0 for all ε > 0.

This article is organized as follows. In Section 2 we introduce the spaces D([0,1]; D) and D([0,∞); D), and we study the weak convergence and tightness of probability measures on these spaces. In Sections 3 and 4 we give the proofs of Theorems 1.3 and 1.5, respectively. Some auxiliary results are included in Appendix A.

2 Càdlàg functions with values in D

In this section, we introduce the spaces D([0,1]; D) and D([0,∞); D) of càdlàg functions defined on [0,1], respectively [0,∞), with values in D. These spaces are equipped with the Skorokhod distance introduced in [27]. We examine briefly the weak convergence of probability measures on these spaces, a topic which is developed at length in the companion paper [1].

2.1 The space D([0,1]; D)

In this subsection, we introduce the space D([0,1]; D) and discuss some of its properties. We begin by recalling some well-known facts about the classical Skorokhod space D. We refer the reader to [4, 5] for more details. The Skorokhod distance d_{J_1} on D is defined as follows: for any x, y ∈ D,

  d_{J_1}(x, y) = inf_{λ∈Λ} { ‖λ − e‖ ∨ ‖x − y∘λ‖ },

where Λ is the set of strictly increasing continuous functions from [0,1] onto [0,1] and e is the identity function on [0,1]. The space D equipped with the distance d_{J_1} is separable, but it is not complete. There exists another distance d^0_{J_1} on D, which is equivalent to d_{J_1}, under which D is complete and separable. This distance is given by (see (12.16) of [5]):

  d^0_{J_1}(x, y) = inf_{λ∈Λ} { ‖λ‖^0 ∨ ‖x − y∘λ‖ },  (11)

for any x, y ∈ D, where

  ‖λ‖^0 = sup_{s<s'} | log( (λ(s') − λ(s)) / (s' − s) ) |.

Note that

  d_{J_1}(x, 0) = d^0_{J_1}(x, 0) = ‖x‖ for any x ∈ D.  (12)

By relation (12.17) of [5],

  sup_{s∈[0,1]} |λ(s) − s| ≤ e^{‖λ‖^0} − 1 for all λ ∈ Λ.  (13)
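The infimum over time changes in (11) can be probed numerically. The following rough sketch (our own illustration, not from the paper) searches over single-knot piecewise-linear time changes only, so it produces an upper bound for d_{J_1}; it shows how two indicator paths with nearby jumps are J_1-close even though their uniform distance equals 1.

```python
import numpy as np

def eval_step(jump, s):
    """Indicator path 1_{[jump, 1]} on a grid (small tolerance for rounding)."""
    return (s >= jump - 1e-9).astype(float)

def j1_upper_bound(jump_x, jump_y, grid_size=2001):
    """Upper bound for d_{J_1} between 1_{[jump_x,1]} and 1_{[jump_y,1]},
    minimizing max(||lambda - e||, ||x - y(lambda)||) over piecewise-linear
    time changes lambda with a single knot mapping 0.5 to c."""
    s = np.linspace(0.0, 1.0, grid_size)
    x = eval_step(jump_x, s)
    best = np.inf
    for c in np.linspace(0.01, 0.99, 99):
        lam = np.where(s <= 0.5, s * (c / 0.5), c + (s - 0.5) * (1 - c) / 0.5)
        cost = max(np.max(np.abs(lam - s)),
                   np.max(np.abs(x - eval_step(jump_y, lam))))
        best = min(best, cost)
    return best

# jumps at 0.5 and 0.6: uniform distance is 1, but the J_1 distance is about
# 0.1, achieved by a time change carrying 0.5 to 0.6
print(j1_upper_bound(0.5, 0.6))
```

Any admissible λ must move 0.5 to 0.6 to align the jumps, forcing ‖λ − e‖ ≥ 0.1, so here the upper bound is in fact the exact value of d_{J_1}.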

Taking λ = e in (11), we obtain:

  d^0_{J_1}(x, y) ≤ ‖x − y‖ for all x, y ∈ D.  (14)

For functions (x_n)_{n≥1} and x in D, we write x_n →_{J_1} x if d^0_{J_1}(x_n, x) → 0. For any δ ∈ (0,1), we consider the following modulus of continuity of a function x ∈ D:

  w'(x, δ) = sup_{s_1 ≤ s ≤ s_2, s_2 − s_1 ≤ δ} ( |x(s) − x(s_1)| ∧ |x(s_2) − x(s)| ).  (15)

We denote by D([0,1]; D) the set of functions x : [0,1] → D which are right-continuous and have left limits with respect to J_1. We denote by x(t−) the left limit of x at t ∈ (0,1]. If x ∈ D([0,1]; D), we let x(t,s) = x(t)(s) for any t ∈ [0,1] and s ∈ [0,1]. We let d_D be the Skorokhod distance on D([0,1]; D), given by relation (2.1) of [27]:

  d_D(x, y) = inf_{λ∈Λ} { ‖λ − e‖ ∨ ρ_D(x, y∘λ) },  (16)

where ρ_D is the uniform distance on D([0,1]; D) defined by:

  ρ_D(x, y) = sup_{t∈[0,1]} d^0_{J_1}(x(t), y(t)).  (17)

Hence, d_D(x_n, x) → 0 if and only if there exists a sequence (λ_n)_{n≥1} ⊂ Λ such that

  sup_{t∈[0,1]} |λ_n(t) − t| → 0 and sup_{t∈[0,1]} d^0_{J_1}(x_n(λ_n(t)), x(t)) → 0.

We denote by ‖·‖_D the super-uniform norm on D([0,1]; D) given by:

  ‖x‖_D = sup_{t∈[0,1]} ‖x(t)‖.

(By the discussion in small print on page 122 of [5], the set {x(t); t ∈ [0,1]} is relatively compact in (D, J_1), and hence ‖x‖_D < ∞ by Theorem 12.3 of [5].) By relation (12), it follows that for any x ∈ D([0,1]; D),

  d_D(x, 0) = ρ_D(x, 0) = ‖x‖_D.  (18)

Note that for any x, y ∈ D([0,1]; D), we have:

  d_D(x, y) ≤ ρ_D(x, y) ≤ ‖x − y‖_D.  (19)

The space D([0,1]; D) equipped with d_D is separable, but it is not complete. Similarly to the distance d^0_{J_1} on D, we consider another distance d^0_D on D([0,1]; D), given by:

  d^0_D(x, y) = inf_{λ∈Λ} { ‖λ‖^0 ∨ ρ_D(x, y∘λ) }.  (20)

The following result is similar to Theorems 12.1 and 12.2 of [5]. See also Theorem 2.6 of [27].

Theorem 2.1. The metrics d_D and d^0_D are equivalent. The space D([0,1]; D) is separable under d_D and d^0_D, and is complete under d^0_D.

Similarly to (15), for any x ∈ D([0,1]; D) and δ ∈ (0,1), we consider the following modulus of continuity:

  w'_D(x, δ) = sup_{t_1 ≤ t ≤ t_2, t_2 − t_1 ≤ δ} ( d^0_{J_1}(x(t), x(t_1)) ∧ d^0_{J_1}(x(t_2), x(t)) ).

The following result will be used in the proof of Theorem 3.14 below.

Lemma 2.2. For any x, y ∈ D([0,1]; D), we have: w'_D(x + y, δ) ≤ w'_D(x, δ) + 2‖y‖_D.

Proof: Let t_1 ≤ t ≤ t_2 be such that t_2 − t_1 ≤ δ. By the triangle inequality and (14),

  d^0_{J_1}(x(t) + y(t), x(t_1) + y(t_1)) ≤ d^0_{J_1}(x(t) + y(t), x(t)) + d^0_{J_1}(x(t), x(t_1)) + d^0_{J_1}(x(t_1), x(t_1) + y(t_1))
  ≤ ‖y(t)‖ + d^0_{J_1}(x(t), x(t_1)) + ‖y(t_1)‖ ≤ d^0_{J_1}(x(t), x(t_1)) + 2‖y‖_D.

Similarly, d^0_{J_1}(x(t) + y(t), x(t_2) + y(t_2)) ≤ d^0_{J_1}(x(t), x(t_2)) + 2‖y‖_D. If a_1, a_2, b_1, b_2, c ∈ R are such that a_i ≤ b_i + c for i = 1, 2, then it is easy to see that a_1 ∧ a_2 ≤ (b_1 ∧ b_2) + c. It follows that

  d^0_{J_1}(x(t) + y(t), x(t_1) + y(t_1)) ∧ d^0_{J_1}(x(t) + y(t), x(t_2) + y(t_2))

is less than

  d^0_{J_1}(x(t), x(t_1)) ∧ d^0_{J_1}(x(t), x(t_2)) + 2‖y‖_D ≤ w'_D(x, δ) + 2‖y‖_D.

The conclusion follows taking the supremum over all t_1 ≤ t ≤ t_2 such that t_2 − t_1 ≤ δ. □

The following result shows that the super-uniform norm is continuous on D([0,1]; D). Its proof is given in [1].

Lemma 2.3. If (x_n)_{n≥1} and x are functions in D([0,1]; D) such that d_D(x_n, x) → 0 as n → ∞, then ‖x_n‖_D → ‖x‖_D as n → ∞.

We conclude this subsection with a brief discussion about finite-dimensional sets in D([0,1]; D), and tightness of probability measures on this space. Let D_D be the Borel σ-field of D([0,1]; D) with respect to d_D. It can be shown that D_D coincides with the σ-field generated by the projections {π^D_t; t ∈ [0,1]}, where π^D_t : D([0,1]; D) → D is given by π^D_t(x) = x(t). We equip D with the J_1-topology and D([0,1]; D) with the distance d_D. Then the projections π^D_0 and π^D_1 are continuous everywhere, whereas for t ∈ (0,1), π^D_t is continuous at x if and only if x is continuous at t.
If P is a probability measure on D([0,1]; D), we let T_P be the set of t ∈ [0,1] such that π^D_t is continuous almost everywhere with respect to P. The set T_P has a countable complement, and hence is dense in [0,1]. For fixed t_1, ..., t_k ∈ [0,1], we consider the projection π^D_{t_1,...,t_k} : D([0,1]; D) → D^k given by π^D_{t_1,...,t_k}(x) = (x(t_1), ..., x(t_k)). If (P_n)_{n≥1} and P are probability measures on D([0,1]; D) such that P_n →_w P, then the following marginal convergence holds for all t_1, ..., t_k ∈ T_P:

  P_n ∘ (π^D_{t_1,...,t_k})^{−1} →_w P ∘ (π^D_{t_1,...,t_k})^{−1} in (D^k, J^k_1),  (21)

where J^k_1 is the product of J_1-topologies. The following result will be used in the proof of Theorem 3.14 below, being the analogue of Theorem 15.3 of [4] for the space D([0,1]; D). Its proof is given in [1].

Theorem 2.4. A sequence (P_n)_{n≥1} of probability measures on D([0,1]; D) is tight if and only if it satisfies the following three conditions:
(i) lim_{a→∞} limsup_{n→∞} P_n({x; ‖x‖_D > a}) = 0;
(ii) for any η > 0 and ρ > 0, there exist δ ∈ (0,1) and n_0 ≥ 1 such that for all n ≥ n_0,
  a) P_n({x; w'(x(t), δ) > η for some t ∈ [0,1]}) < ρ
  b) P_n({x; |x(t, δ) − x(t, 0)| > η for some t ∈ [0,1]}) < ρ
  c) P_n({x; |x(t, 1−) − x(t, 1−δ)| > η for some t ∈ [0,1]}) < ρ;
(iii) for any η > 0 and ρ > 0, there exist δ ∈ (0,1) and n_0 ≥ 1 such that for all n ≥ n_0,
  a) P_n({x; w'_D(x, δ) > η}) < ρ
  b) P_n({x; d^0_{J_1}(x(δ), x(0)) > η}) < ρ
  c) P_n({x; d^0_{J_1}(x(1−), x(1−δ)) > η}) < ρ.

2.2 The space D([0,∞); D)

In this subsection, we introduce the space D([0,∞); D) and we list some of its properties. For any fixed T > 0, we let D([0,T]; D) be the set of functions x : [0,T] → D which are right-continuous and have left limits with respect to J_1. Let Λ_T be the set of strictly increasing continuous functions from [0,T] onto itself. Similarly to the case T = 1, we define the Skorokhod distance on D([0,T]; D) by:

  d_{T,D}(x, y) = inf_{λ∈Λ_T} { ‖λ − e‖_T ∨ ρ_{T,D}(x, y∘λ) },  (22)

where ‖·‖_T is the supremum norm on [0,T], e is the identity function on [0,T], and ρ_{T,D} is the uniform distance on D([0,T]; D) given by:

  ρ_{T,D}(x, y) = sup_{t∈[0,T]} d^0_{J_1}(x(t), y(t)).  (23)

We denote by ‖·‖_{T,D} the super-uniform norm on D([0,T]; D) given by: ‖x‖_{T,D} = sup_{t∈[0,T]} ‖x(t)‖. For any x, y ∈ D([0,T]; D), we have

  d_{T,D}(x, y) ≤ ρ_{T,D}(x, y) ≤ ‖x − y‖_{T,D}.  (24)

The Skorokhod distance on the space D([0,∞); D) is given by (see (2.2) of [27]):

  d_{∞,D}(x, y) = ∫_0^∞ e^{−t} ( d_{t,D}(r_t(x), r_t(y)) ∧ 1 ) dt,  (25)

where r_t(x) is the restriction to [0,t] of the function x ∈ D([0,∞); D). By Theorem 2.6 of [27], D([0,∞); D) equipped with the distance d_{∞,D} is a Polish space. Its Borel σ-field D_{∞,D} coincides (by Lemma 2.7 of [27]) with the σ-field generated by the projections {π^D_t; t ≥ 0}, where π^D_t : D([0,∞); D) → D is given by π^D_t(x) = x(t). Similarly to page 174 of [5], if (P_n)_{n≥1} and P are probability measures on D([0,∞); D) such that P_n →_w P, then the marginal convergence (21) holds for all t_1, ..., t_k ∈ T_P, where the set T_P (defined as in Section 2.1 above) has a countable complement. In fact, P_n →_w P if and only if P_n ∘ r_t^{−1} →_w P ∘ r_t^{−1} for any t ∈ T_P (see also Theorem 2.8 of [27]).

3 Construction: proof of Theorem 1.3

In this section, we give the construction of an α-stable Lévy motion Z = {Z(t)}_{t≥0} with values in D, and we show that this process has a modification with sample paths in the space of càdlàg functions from [0,∞) to D. We follow the method described in Section 5.5 of [21]. For each t ≥ 0, Z(t) is a random element in D which we denote by {Z(t,s)}_{s∈[0,1]}, that is, Z(t,s) = Z(t)(s). Intuitively, the process Z evolves in time and space: Z(t,s) gives the value of this process at time t ≥ 0 and location s ∈ [0,1] in space.

3.1 The compound Poisson building blocks

In this subsection, we introduce the building blocks of the construction, and we examine their properties. Let N = ∑_{i≥1} δ_{(T_i, R_i, W_i)} be a Poisson random measure on [0,∞) × D̄_0 of intensity Leb × ν̄, defined on a complete probability space (Ω, F, P), where Leb is the Lebesgue measure and ν̄ is given by (2) on (0,∞) × S_D and ν̄({∞} × S_D) = 0. (Refer to Definition 4.1 below for the definition of a Poisson random measure.) By an extension of Proposition 5.3 of [21] to point processes on Polish spaces, we can represent the points (T_i, R_i, W_i) as follows: {(T_i, R_i)}_{i≥1} are the points of a Poisson random measure on [0,∞) × (0,∞] of intensity Leb × cν_α, and (W_i)_{i≥1} is an independent sequence of i.i.d. random elements in S_D with law Γ_1.
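The representation just described can be simulated directly once the radii are restricted to (ε, ∞), since only finitely many points remain. The sketch below is our own illustration (with c = 1 and a toy spectral measure Γ_1 putting equal mass on the two constant paths z = 1 and z = −1); it is not code from the paper.

```python
import numpy as np

def large_jump_points(T, alpha, eps, rng):
    """Points (T_i, R_i, W_i) of the Poisson random measure with R_i > eps
    on the time window [0, T], for c = 1:
      - their number is Poisson with mean T * nu_alpha((eps, inf)) = T * eps^{-alpha},
      - the times T_i are uniform on [0, T],
      - the radii R_i are Pareto(alpha) above eps (nu_alpha conditioned on r > eps),
      - the 'angles' are +/-1, i.e. W_i(s) = +/-1 for all s (toy Gamma_1)."""
    K = rng.poisson(T * eps ** (-alpha))
    times = rng.uniform(0.0, T, size=K)
    radii = eps * rng.random(K) ** (-1.0 / alpha)
    signs = rng.choice([-1.0, 1.0], size=K)
    return times, radii, signs

rng = np.random.default_rng(7)
times, radii, signs = large_jump_points(T=2.0, alpha=0.8, eps=0.1, rng=rng)
# the truncated sum over jumps with R_i > eps up to time t (constant in s here)
t = 1.0
z = np.sum(radii * signs * (times <= t))
print(len(times), round(float(z), 3))
```

Letting ε ↓ 0 brings in the small jumps, which is exactly what the decomposition into the blocks Z_j below organizes.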
Let (ε_j)_{j≥0} be a sequence of real numbers such that ε_j ↓ 0 and ε_0 = 1. Let I_j = (ε_j, ε_{j−1}] for j ≥ 1 and I_0 = (1,∞). We fix t ≥ 0 and s ∈ [0,1]. For any j ≥ 0, we let

  Z_j(t,s) = ∫_{[0,t] × I_j × S_D} r z(s) N(du, dr, dz) = ∑_{T_i ≤ t} R_i W_i(s) 1_{{R_i ∈ I_j}}.  (26)

Note that for any j ≥ 0 and s ∈ [0,1], Z_j(0,s) = 0.

Lemma 3.1. a) Z_j(t,s) is well-defined and F-measurable for any j ≥ 0, t ≥ 0, s ∈ [0,1].
b) For any t ≥ 0 and j ≥ 0, the process Z_j(t) = {Z_j(t,s)}_{s∈[0,1]} has all sample paths in D, with left limit at a point s ∈ (0,1] given by

  Z_j(t, s−) = ∫_{[0,t] × I_j × S_D} r z(s−) N(du, dr, dz) = ∑_{T_i ≤ t} R_i W_i(s−) 1_{{R_i ∈ I_j}}.

Proof: a) Z_j(t,s) is well-defined since [0,t] × I_j × S_D is a bounded set in [0,∞) × D̄_0 (due to definition (6) of the metric d_{D̄_0} on D̄_0), and the sum in (26) contains finitely many terms. Z_j(t,s) is F-measurable since N is a point process and the map μ ↦ μ(π̄_s) = ∫_{(0,∞)×S_D} r z(s) μ(dr, dz) is M_p([0,∞) × D̄_0)-measurable, where π̄_s(r,z) = r z(s) (see Section 4.1 below for the definition of a point process).
b) This follows by the dominated convergence theorem, whose application is justified by the fact that ∫_{[0,t]×I_j×S_D} r N(du, dr, dz) < ∞. □

To investigate the finite-dimensional distributions of the process Z_j(t) corresponding to points s_1, ..., s_m ∈ [0,1], we consider the function π̄_{s_1,...,s_m} : (0,∞) × S_D → R^m given by:

  π̄_{s_1,...,s_m}(r, z) = (r z(s_1), ..., r z(s_m)).

Note that π̄_{s_1,...,s_m} ∘ T = π_{s_1,...,s_m}.

Lemma 3.2. For any j ≥ 0, t ≥ 0 and s_1, ..., s_m ∈ [0,1], the vector (Z_j(t,s_1), ..., Z_j(t,s_m)) has a compound Poisson distribution in R^m with characteristic function:

  E(e^{i ∑_{k=1}^m u_k Z_j(t,s_k)}) = exp{ t ∫_{I_j × S_D} (e^{i u_1 r z(s_1) + ... + i u_m r z(s_m)} − 1) ν̄(dr, dz) },

for any (u_1, ..., u_m) ∈ R^m. Letting φ(s) = ∫_{S_D} z(s) Γ_1(dz) and ψ(s) = ∫_{S_D} z(s)^2 Γ_1(dz) for any s ∈ [0,1], we have for j ≥ 1:

  E(Z_j(t,s)) = t ∫_{I_j×S_D} r z(s) ν̄(dr, dz) = c t φ(s) ∫_{I_j} r ν_α(dr),
  Var(Z_j(t,s)) = t ∫_{I_j×S_D} (r z(s))^2 ν̄(dr, dz) = c t ψ(s) ∫_{I_j} r^2 ν_α(dr).

Proof: We represent the restriction of N to [0,t] × I_j × S_D as N|_{[0,t]×I_j×S_D} =_d ∑_{i=1}^K δ_{(τ_i, J_i, W_i)}, where K is a Poisson random variable of mean t ν̄(I_j × S_D), (τ_i)_{i≥1} are i.i.d. uniformly distributed on [0,t], (J_i)_{i≥1} are i.i.d. on I_j with law ν_α/ν_α(I_j), (W_i)_{i≥1} are i.i.d. on S_D with law Γ_1, and K, (τ_i)_{i≥1}, (J_i)_{i≥1}, (W_i)_{i≥1} are independent. Hence, (Z_j(t,s_1), ..., Z_j(t,s_m)) =_d ∑_{i=1}^K J_i Y_i with Y_i = (W_i(s_1), ..., W_i(s_m)). The result follows since (J_i Y_i)_{i≥1} are i.i.d. vectors in R^m with law

  (1/ν̄(I_j × S_D)) ν̄|_{I_j×S_D} ∘ π̄^{−1}_{s_1,...,s_m},

where ν̄|_{I_j×S_D} is the restriction of ν̄ to I_j × S_D. □
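The compound Poisson representation in the proof of Lemma 3.2 is easy to simulate and to check against the mean formula. The sketch below is our own illustration, with c = 1 and the toy spectral measure Γ_1 = δ_{z≡1} (so W_i(s) = 1, φ ≡ 1, and Z_j(t,s) does not depend on s).

```python
import numpy as np

def sample_radii(k, alpha, lo, hi, rng):
    """k draws from nu_alpha restricted to I_j = (lo, hi], by inverting the
    normalized CDF F(r) = (lo^{-a} - r^{-a}) / (lo^{-a} - hi^{-a})."""
    u = rng.random(k)
    return ((1 - u) * lo ** (-alpha) + u * hi ** (-alpha)) ** (-1.0 / alpha)

def compound_poisson_Zj(t, alpha, lo, hi, rng):
    """One sample of Z_j(t, s): Poisson number of points, i.i.d. radii."""
    mass = t * (lo ** (-alpha) - hi ** (-alpha))  # t * nu_alpha(I_j), c = 1
    K = rng.poisson(mass)
    return sample_radii(K, alpha, lo, hi, rng).sum()

rng = np.random.default_rng(0)
t, alpha, lo, hi = 1.0, 1.5, 0.5, 1.0   # I_j = (0.5, 1]
samples = np.array([compound_poisson_Zj(t, alpha, lo, hi, rng)
                    for _ in range(20000)])

# Lemma 3.2 (with c = phi = 1): E Z_j(t,s) = t * int_{I_j} r nu_alpha(dr)
#   = t * alpha/(1-alpha) * (hi^{1-alpha} - lo^{1-alpha})
mean_theory = t * alpha / (1 - alpha) * (hi ** (1 - alpha) - lo ** (1 - alpha))
print(round(float(samples.mean()), 3), round(mean_theory, 3))
```

The empirical mean should agree with the closed form up to Monte Carlo error; the same construction with decreasing intervals I_j is what the series (27)-(28) below sums over.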
The previous result shows that for j ≥ 1, Z_j(t,s) has finite mean and finite variance, while Z_0(t,s) has infinite variance (since α < 2), but has finite mean if α > 1. Note that

  ∑_{j≥1} Var(Z_j(t,s)) = c t ψ(s) ∫_{(0,1]} r^2 ν_α(dr) < ∞.

Moreover, the variables {Z_j(t,s)}_{j≥0} are independent, since the intervals (I_j)_{j≥0} are disjoint. Hence, by Kolmogorov's convergence criterion (see e.g. Theorem 22.6 of [3]), for any t > 0 and s ∈ [0,1],

  ∑_{j≥1} ( Z_j(t,s) − E(Z_j(t,s)) ) converges a.s.

We denote by Ω_{t,s} the event that this series converges, with P(Ω_{t,s}) = 1. If α < 1, then ∑_{j≥1} E(Z_j(t,s)) = c t φ(s) ∫_0^1 r ν_α(dr) is finite, whereas if α > 1, then E(Z_0(t,s)) = c t φ(s) ∫_1^∞ r ν_α(dr) is finite. For any t ≥ 0 and s ∈ [0,1] fixed, on the event Ω_{t,s} we define

  Z(t,s) = ∑_{j≥0} Z_j(t,s) if α < 1,  (27)

  Z(t,s) = ∑_{j≥0} ( Z_j(t,s) − E(Z_j(t,s)) ) if α > 1.  (28)

On the event Ω^c_{t,s}, we let Z(t,s) = x_0, for an arbitrary x_0 ∈ D, in both cases α < 1 and α > 1. Note that Z(0,s) = 0 for all s ∈ [0,1]. For any s_1, ..., s_m ∈ [0,1], we consider the following measure on R^m:

  μ_{s_1,...,s_m} = ν ∘ π^{−1}_{s_1,...,s_m} = ν̄ ∘ π̄^{−1}_{s_1,...,s_m}.  (29)

The next result identifies some essential properties of the measures μ_{s_1,...,s_m}. Assumption A is needed only to guarantee that μ_{s_1,...,s_m}({0}) = 0.

Lemma 3.3. Suppose that Assumption A holds.
a) For any s_1, ..., s_m ∈ [0,1], μ_{s_1,...,s_m} is a Lévy measure on R^m, i.e. μ_{s_1,...,s_m}({0}) = 0 and

  ∫_{R^m} (|y|^2 ∧ 1) μ_{s_1,...,s_m}(dy) < ∞.

b) For any s_1, ..., s_m ∈ [0,1], for any h > 0 and for any Borel set A ⊂ R^m, μ_{s_1,...,s_m}(hA) = h^{−α} μ_{s_1,...,s_m}(A).
c) For any s ∈ [0,1], the measure μ_s is given by

  μ_s(dy) = ( c^+_s α y^{−α−1} 1_{(0,∞)}(y) + c^−_s α (−y)^{−α−1} 1_{(−∞,0)}(y) ) dy,

where c^+_s = μ_s((1,∞)) and c^−_s = μ_s((−∞,−1)).

Proof: a) By Assumption A, μ_{s_1,...,s_m}({0}) = ν̄({(r,z); r z(s_1) = ... = r z(s_m) = 0}) = 0, using the convention ∞ · 0 = 0. The second property follows because

  ∫_{|y|≤1} |y|^2 μ_{s_1,...,s_m}(dy) = c ∫_{S_D} ∫_0^{(∑_{i=1}^m z(s_i)^2)^{−1/2}} r^2 ν_α(dr) ∑_{i=1}^m z(s_i)^2 Γ_1(dz)
  = c (α/(2−α)) ∫_{S_D} ( ∑_{i=1}^m z(s_i)^2 )^{α/2} Γ_1(dz) ≤ c (α/(2−α)) m^{α/2}, and

  ∫_{|y|>1} μ_{s_1,...,s_m}(dy) = c ∫_{S_D} ∫_{(∑_{i=1}^m z(s_i)^2)^{−1/2}}^∞ ν_α(dr) Γ_1(dz) = c ∫_{S_D} ( ∑_{i=1}^m z(s_i)^2 )^{α/2} Γ_1(dz).

b) By Fubini's theorem and the scaling property of ν_α, it can be proved that ν̄ has the following scaling property: for any h > 0 and H ∈ B(D̄_0), ν̄(hH) = h^{−α} ν̄(H), where hH = {(hr, z); (r,z) ∈ H}. For any h > 0 and A ∈ B(R^m), we have

  μ_{s_1,...,s_m}(hA) = ν̄({(r,z); (r z(s_1), ..., r z(s_m)) ∈ hA}) = ν̄(hH),

where H = {(r,z); (r z(s_1), ..., r z(s_m)) ∈ A} = π̄^{−1}_{s_1,...,s_m}(A). The conclusion follows from the scaling property of ν̄ mentioned above.
c) This is an immediate consequence of the scaling property in b). □

We denote by S_α(σ, β, μ) the α-stable distribution given by Definition 1.1.6 of [23], and

  C_α^{−1} = (Γ(2−α)/(1−α)) cos(πα/2).  (30)

Based on the previous lemma, we obtain the following result.

Proposition 3.4. For any t > 0, the process Z(t) = {Z(t,s)}_{s∈[0,1]} given by (27) and (28) is α-stable, with finite-dimensional distributions given by (3) and (4). In particular, for any t > 0 and s ∈ [0,1], Z(t,s) has a S_α(t^{1/α} σ_s, β_s, 0) distribution with parameters

  σ_s = ( C_α^{−1} (c^+_s + c^−_s) )^{1/α} and β_s = (c^+_s − c^−_s)/(c^+_s + c^−_s),  (31)

where c^+_s and c^−_s are given in Lemma 3.3.c). Moreover, Z(t,s_k) →_d Z(t,s) as k → ∞, for any s ∈ [0,1] and for any sequence (s_k)_{k≥1} with s_k ↓ s and s_k ≠ s for all k ≥ 1.

Proof: Case 1: α < 1. By Lemma 3.2 and the independence of {Z_j(t,s)}_{j≥0}, it follows that the characteristic function of the variable Z(t,s) is given by:

  E(e^{iuZ(t,s)}) = exp{ t ∫_{D̄_0} (e^{iurz(s)} − 1) ν̄(dr, dz) } = exp{ t ∫_R (e^{iuy} − 1) μ_s(dy) }, u ∈ R.

The fact that Z(t,s) has a S_α(t^{1/α} σ_s, β_s, 0) distribution follows essentially from the calculations on page 568 of [11], using the form of the measure μ_s given in Lemma 3.3.c). Similarly, it can be seen that for any s_1, ..., s_m ∈ [0,1], (Z(t,s_1), ..., Z(t,s_m)) has characteristic function given by (3). The fact that (Z(t,s_1), ..., Z(t,s_m)) has an α-stable distribution follows by Theorem 14.3 of [24], using the scaling property of the measure μ_{s_1,...,s_m} given in Lemma 3.3.b). The last statement follows from the fact that E(e^{iuZ(t,s_k)}) → E(e^{iuZ(t,s)}).
To see this, note that lim_{k→∞} z(s_k) = z(s) for any z ∈ S_D. By the dominated convergence theorem,

∫_{(0,∞)×S_D} (e^{iurz(s_k)} − 1) ν(dr, dz) → ∫_{(0,∞)×S_D} (e^{iurz(s)} − 1) ν(dr, dz), as k → ∞.

The application of this theorem is justified using the inequalities |e^{iurz(s)} − 1| ≤ |urz(s)| if r ≤ 1 and |e^{iurz(s)} − 1| ≤ 2 if r > 1.
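The constants in (30)-(31) are explicit enough to evaluate directly. Below is a hedged numerical sketch; the tail weights c_plus, c_minus are hypothetical stand-ins for c_s^+ and c_s^-, and the scale is read as σ_s^α = C_α^{−1}(c_s^+ + c_s^−), following the parametrization of [23] (the exponent α is implicit in (31) as printed).

```python
import math

def C(alpha):
    # the constant (30): C_alpha = (1 - alpha) / (Gamma(2 - alpha) * cos(pi*alpha/2))
    return (1 - alpha) / (math.gamma(2 - alpha) * math.cos(math.pi * alpha / 2))

def stable_params(alpha, c_plus, c_minus):
    # scale and skewness of S_alpha(sigma, beta, 0), read from the tail weights
    sigma = ((c_plus + c_minus) / C(alpha)) ** (1.0 / alpha)
    beta = (c_plus - c_minus) / (c_plus + c_minus)
    return sigma, beta

print(C(1.5))                           # ~ 0.39894 (equals 1/sqrt(2*pi) at alpha = 3/2)
sigma, beta = stable_params(1.5, 2.0, 1.0)
print(sigma, beta)                      # ~ 3.84, ~ 0.3333
```

For α = 3/2 the constant C_α collapses to 1/√(2π), which gives a convenient spot check.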

15 Case 2: α > 1. This is similar to Case 1, except that we now have centering constants. In this case, the characteristic function of Zt, s) is given by E e iuzt,s)) { } = exp t e iurzs) 1 iurzs))νdr, dz), u R. D 0 The last statement follows from the fact that Ee iuzt,sk) ) Ee iuzt,s) ), since e iurzsk) 1 iurzs))νdr, dz) e iurzs) 1 iurzs))νdr, dz). D 0 D 0 The application of the dominated convergence theorem is justified using the inequalities e iurzs) 1 iurzs) 1 2 urzs) 2 if r 1 and e iurzs) 1 iurzs) 2 urzs) if r > 1. We denote by D u [0, ); D) the set of functions x : [0, ) D which are rightcontinuous and have left limits with respect to the uniform norm on D. Clearly, D u [0, ); D) is a subset of D[0, ); D). Lemma 3.5. For any j 0, the process {Z j t)} t 0 has all sample paths in D u [0, ); D), with left limit at t > 0 given by Z j t ) = {Z j t, s)} s [0,1], where Z j t, s) = rzs)ndu, dr, dz). [0,t) I j S D Proof: We first show that the map t Z j t) is right-continuous in D, ). Let t 0 be arbitrary and t n ) n 1 such that t n t and t n t for all n 1. Then Z j t n ) Z j t) = sup rzs)ndu, dr, dz) s [0,1] t,t n] I j S D rndu, dr, dz), t,t n] I j S D and the last integral converges to 0 as n by the dominated convergence theorem. Next, we show that the map t Z j t) has left limit Z j t ) in D, ). Let t > 0 be arbitrary and t n ) n 1 such that t n t and t n t for all n 1. Then Z j t ) Z j t n ) = sup rzs)ndu, dr, dz) s [0,1] t n,t) I j S D rndu, dr, dz), t n,t) I j S D and the last integral converges to 0 as n by the dominated convergence theorem. For any ε > 0, t 0 and s [0, 1], we let Z ε) t, s) = rzs)ndu, dr, dz) = R i W i s)1 {Ri ε, )}. 32) [0,t] ε, ) S D T i t Using this notation, we have: Z ε k) t, s) = k Z j t, s), for all k 0. 33) j=0 Remark 3.6. 
Similarly to Lemma 3.1 and Lemma 3.5 (for j = 0), it can be proved that the process Z^{(ε)}(t) = {Z^{(ε)}(t, s)}_{s∈[0,1]} has all sample paths in D for any t ≥ 0, and the process Z^{(ε)} = {Z^{(ε)}(t)}_{t≥0} has all sample paths in D_u([0, ∞); D).
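The truncated process (32) is a finite sum over Poisson atoms, so it can be simulated directly. A minimal sketch, assuming c = 1, a one-dimensional radial measure ν_α, and hypothetical unit-sup-norm step profiles W_i(s) = 1_{[a_i, 1]}(s) in place of generic elements of S_D:

```python
import random

random.seed(1)
alpha, eps, horizon = 1.5, 1.0, 10.0

# atoms (T_i, R_i, a_i) of N restricted to [0, horizon] x (eps, inf):
# arrivals T_i at rate nu_alpha((eps, inf)) = eps**-alpha, Pareto radii R_i > eps
atoms = []
t = 0.0
while True:
    t += random.expovariate(eps ** -alpha)
    if t > horizon:
        break
    r = eps * (1.0 - random.random()) ** (-1.0 / alpha)  # (1-U) in (0,1] avoids 0
    a = random.random()                                   # profile W_i = 1_{[a_i, 1]}
    atoms.append((t, r, a))

def Z(t, s):
    # Z^(eps)(t, s) = sum_{T_i <= t} R_i W_i(s) 1{R_i > eps}, cf. (32)
    return sum(r for (ti, r, a) in atoms if ti <= t and s >= a)

incr = Z(7.0, 0.5) - Z(3.0, 0.5)
direct = sum(r for (ti, r, a) in atoms if 3.0 < ti <= 7.0 and 0.5 >= a)
print(abs(incr - direct) < 1e-9)  # True
```

The last check mirrors the independent-increments structure: the increment of Z^{(ε)} over (t_1, t_2] involves only the atoms with T_i in that interval.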

16 3.2 Construction in the case α < 1 In this subsection, we give the proof Theorem 1.3 in the case α < 1. In particular, property 35) below will be used in the proof of the approximation result Theorem 1.5.a)). Our first result shows that for any t > 0 fixed, the process Zt) given by 27) has a càdlàg modification which can be obtained as an almost sure limit with respect to the uniform norm. Recall that {Xs)} s [0,1] is a modification of {Y s)} s [0,1] if P Xs) = Y s)) = 1 for all s [0, 1]. Lemma 3.7. If α < 1, then for any t 0, there exists a random element Zt) = {Zt, s)} s [0,1] in D such that P Zt, s) = Zt, s)) = 1 for all s [0, 1], and lim k Zε k) t) Zt) = 0 a.s. Proof: For t = 0, we define Z0, s) = 0 for all s [0, 1]. We consider the case t > 0. By 26), Z j t) i 1 R i1 {Ri I j }1 {Ti t} = [0,t] I j S D rndu, dr, dz). Since α < 1, it follows that E Z j t) E rndu, dr, dz) = t rνdr, dz) <, j 1 j 1 [0,t] I j S D 0,1] S D which implies that j 1 Z jt) < a.s. We denote by Ω t the event that this series converges, with P Ω t ) = 1. On the event Ω t, the sequence {Z εk) t) = k j=0 Z jt)} k 0 is Cauchy in D, ), and we denote its limit by Zt). On the event Ω c t, we let Zt) = x 0. By Lemma 3.1.a), Zt, s) is F-measurable for any s [0, 1]. Hence, Zt) is a random element in D. On the event Ω t,s Ω t, Zt, s) Z εk) t, s) = j k+1 Z jt, s), and hence Z ε k) t, s) Zt, s) j k+1 Z j t, s) j k+1 Z j t) 0. On the other hand, on the event Ω t, Z εk) t, s) Zt, s) for any s [0, 1]. uniqueness of the limit, Zt, s) = Zt, s) on the event Ω t,s Ω t. By the The following result proves Theorem 1.3.a) in the case α < 1. Theorem 3.8. If α < 1, the process {Zt)} t 0 defined in Lemma 3.7 is a D-valued α- stable Lévy motion corresponding to ν). This process is 1/α)-self-similar, i.e. {Zct)} t 0 d = c 1/α {Zt)} t 0 for any c > 0, 34) where d = denotes equality of finite-dimensional distributions. Proof: We first show that the process {Zt)} t 0 satisfies properties i)-iv) given in Definition 1.1. 
Property i) is clear. To verify property ii), we apply Lemma A.3 (Appendix A) to the space S = D equipped with d^0_{J_1}. By Lemma 3.7, for i = 2, …, K,

X_k^{(i)} := Z^{(ε_k)}(t_i) − Z^{(ε_k)}(t_{i−1}) → X^{(i)} := Z(t_i) − Z(t_{i−1}) a.s. as k → ∞,

in (D, ‖·‖), and hence also in (D, J_1). The variables X_k^{(2)}, …, X_k^{(K)} are independent for any k, since X_k^{(i)}

17 is F N t i 1,t i -measurable and the σ-fields F N t i 1,t i, i = 2,..., K are independent. Here F N s,t is the σ-field generated by Na, b] B) for any s < a < b t and B BD 0 ). It follows that X 2),..., X K) are independent. For property iii), we have to show that vectors X := Zt 2, s 1 ) Zt 1, s 1 ),... Zt 2, s 1 ) Zt 1, s m )) and Y := Zt 2 t 1, s 1 ),..., Zt 2 t 1, s m )) have the same distribution, for any s 1,..., s m [0, 1]. By 27) and Lemma 3.7, on the event Ω t1,s Ω t2,s Ω t1 Ω t2, Zt 2, s) Zt 1, s) = Zt 2, s) Zt 1, s) = j 0 Zj t 2, s) Z j t 1, s) ). As in the proof of Proposition 3.4, it follows that the characteristic function of X is { } Ee iu X ) = exp t 2 t 1 ) e iu y 1)µ s1,...,s m dy), u R m, R m which is the same as the characteristic function of Y. Hence X d = Y. Finally, property iv) was shown in Proposition 3.4 for Zt), and remains valid for its modification Zt). To prove relation 34), we have to show that {Zct)} d t 0 = {c 1/α Zt)} t 0 for any c > 0. Since both processes have stationary and independent increments, it is enough to show that Zct) = d c 1/α Zt) for any t > 0, i.e. vectors U = Zct, s 1 ),..., Zct, s m )) and V = c 1/α Zt, s 1 ),..., Zt, s m )) have the same distribution, for any s 1,..., s m [0, 1] and t > 0. Let h c y) = c 1/α y for y R m. By the scaling property of the measure µ s1,...,s m given in Lemma 3.3.b), µ s1,...,s m h 1 c A)) = µ s1,...,s m c 1/α A) = cµ s1,...,s m A), for any Borel set A R m. Therefore, the characteristic function of V is { } { } Ee iu V ) = exp t e iu y 1)µ s1,...,s m h 1 c )dy) = exp ct e iu y 1)µ s1,...,s m dy) R m R m for any u R m, which is the same as the characteristic function of U. Hence U d = V. The following result proves Theorem 1.3.b) in the case α < 1. Theorem 3.9. If α < 1 and {Zt)} t 0 is the process defined in Lemma 3.7, then there exists a collection { Zt)} t 0 of random elements in D, such that P Zt) = Zt)) = 1 for all t 0, and for any T > 0, sup Z εk) t) Zt) 0 a.s. as k. 
(35), with the supremum over t ∈ [0, T]. Moreover, the map t ↦ Z̃(t) is in D_u([0, ∞); D) a.s.

Proof: For any T > 0, we denote by D_u([0, T]; D) the set of functions x : [0, T] → D which are right-continuous and have left limits with respect to the norm ‖·‖ on D. Note that D_u([0, T]; D) is a Banach space with respect to the super-uniform norm ‖·‖_{T,D}.
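The series in Lemma 3.7 converges absolutely because ∫_{(0,1]} r ν_α(dr) < ∞ exactly when α < 1; for α > 1 the small jumps have infinite total first moment, which is why centering is needed in that case. A hedged numerical illustration (c = 1, with the hypothetical lower cutoff 10^{-8} standing in for 0):

```python
import math

def small_jump_mean(alpha, lo=1e-8, n=100000):
    # midpoint Riemann sum of r * alpha * r**(-alpha-1) = alpha * r**(-alpha)
    # over (lo, 1], computed on a log scale (substitution r = e^u)
    h = -math.log(lo) / n
    return sum(alpha * math.exp((1 - alpha) * (math.log(lo) + (i + 0.5) * h))
               for i in range(n)) * h

m_sub = small_jump_mean(0.5)    # ~ alpha/(1 - alpha) = 1 for alpha = 1/2
m_super = small_jump_mean(1.5)  # blows up as the cutoff lo -> 0
print(m_sub, m_super)
```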

18 Using the same idea as in the proof of Theorem 5.4 of [21], we will show that there exists an event Ω of probability 1, on which we can say that for any T > 0, {Z ε k) )} k 1 is a Cauchy sequence in D u [0, T ]; D), 36) where D u [0, T ]; D) is equipped with the norm T,D. We denote by { Zt)} t [0,T ] the limit of this sequence in D u [0, T ]; D) on the event Ω). Relation 35) then holds by definition. Since T > 0 is arbitrary, Zω, t) is a well-defined element in D for any t 0 and ω Ω. For ω Ω, we let Zω, t) = y 0 for any t 0, where y 0 D is arbitrary. For any ω Ω and t 0, Zω, t) D and we denote Zω, t, s) := Zω, t)s) for any s [0, 1]. Clearly, Zt, s) is F-measurable for any s [0, 1], being the a.s. limit of the sequence {Z ε k) t, s)} k 1 This proves that Zt) is a random element in D, for any t 0. By Lemma A.2 with S = D equipped with the uniform norm), the map t Zt) lies in D u [0, ); D) on the event Ω). From relation 35) and Lemma 3.7, we infer that Zt) Zt) = 0 a.s. for any t > 0. It remains to prove 36). For this, it suffices to prove that for any δ > 0, lim lim P max K L K<k L Zε k) Z εk) T,D > δ) = 0. 37) Let δ > 0 be arbitrary. For any K < k L, t > 0 and s [0, 1], Z εk) t, s) Z εk) t, s) = rzs)ndu, dr, dz) = R i W i s)1 {εk <R i ε K }, [0,T ] ε k,e K ] S D T i t and hence Z εk) t) Z εk) t) R i 1 {εk <R i ε K } = rndu, dr, dz). T i t [0,t] ε k,ε K ] S D Taking the supremum over t [0, T ] followed by the maximum over k with K < k L, we obtain: max K<k L Zε k) Z εk) T,D rndu, dr, dz). [0,T ] ε L,ε K ] S D By Markov s inequality, P max K<k L Zε k) Z εk) T,D > δ) 1 ) δ E rndu, dr, dz) [0,T ] ε L,ε K ] S D = T rνdr, dz) = T rν α dr) 0 as K, L, δ ε L,ε K ] S D δ ε L,ε K ] using the fact that ε L,1] rν αdr) 1 0 rν αdr) <, as L. This proves 37). 18
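The Cauchy estimate behind (37) is explicit: for α < 1, T ∫_{(ε_L, ε_K]} r ν_α(dr) = Tc (α/(1−α)) (ε_K^{1−α} − ε_L^{1−α}). A hedged arithmetic check with the hypothetical choices c = 1, T = 1, ε_k = 2^{−k}:

```python
alpha = 0.7  # any alpha in (0, 1)

def cauchy_bound(K, L):
    # T * int_{(eps_L, eps_K]} r * alpha * r**(-alpha-1) dr with eps_k = 2**-k, K < L
    a = alpha / (1 - alpha)
    return a * ((2.0 ** -K) ** (1 - alpha) - (2.0 ** -L) ** (1 - alpha))

vals = [cauchy_bound(K, 2 * K) for K in (5, 10, 20)]
print(vals)  # decreasing to 0 as K grows
```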

19 3.3 Construction in the case α > 1 In this subsection, we give the proof of Theorem 1.3 in the case α > 1. In particular, property 56) below will be used in the proof of approximation result Theorem 1.5.b)). In this case, for any ε > 0, E[Z ε) t, s)] = ctϕs) rν ε α dr) is finite, and we denote Z ε) t, s) = Z ε) t, s) E[Z ε) t, s)], where Z ε) t, s) is given by 32). By 33), it follows that Z ε k) t, s) = k j=0 Z j t, s) E Z j t, s) )). 38) Remark For any probability measure Q on D, D), there exists a càdlàg process {Y s)} s [0,1], defined on a probability space Ω, F, P ), whose law under P is Q. This is simply because we may take Ω, F, P ) = D, D, Q) and Y s) = π s for all s [0, 1]. This fact will be used in the proof of Lemma 3.11 below. The next result is the analogue of Lemma 3.7 for the case α > 1. The crucial elements of its proof are: i) tightness of the sequence {Z ε k) t)}k 1 in D, proved in [22]; and ii) the improved version of Itô-Nisio theorem for random elements in D, given in [2]. The original version of Itô-Nisio theorem in D can be found in [15].) Recall that in the case α > 1, the process Zt) = {Zt, s)} s [0,1] is given by 28). Lemma For any t 0, there exists a random element Zt) = {Zt, s)} s [0,1] in D such that P Zt, s) = Zt, s)) = 1 for all s [0, 1], and In particular, E Zt, s) ) = 0 for all s [0, 1] and t > 0. lim k Zε k) t) Zt) = 0 a.s. 39) Proof: For t = 0, we define Z0, s) = 0 for all s [0, 1]. We will assume for simplicity that t = 1, the case of arbitrary t > 0 being similar. To simplify the notation, in this proof we denote Z ε k) = {Z ε k ) s) = Z ε k ) 1, s)}s [0,1] and Z = {Zs) = Z1, s)} s [0,1]. From the last part of the proof of Theorem 2.12 of [22], we know that Z ε k) )k 1 is tight in D, J 1 ). By Prohorov s theorem, Z ε k) )k 1 is relatively compact in D, J 1 ). Hence, there exists a subsequence N Z + and a probability measure Q on D, D) such that P Z ε k) ) 1 w Q as k, k N. 
By Remark 3.10, let Y be a random element in D with law Q, defined on a probability space Ω, F, P ). Then, Z ε k) d Y in D, J 1 ) as k, k N, which implies that Z ε k) s1 ),..., Z ε k) sm )) d Y s 1 ),..., Y s m )), 40) as k, k N, for any s 1,..., s m T, where T = {s 0, 1); P s DiscY )) = 0} {0, 1} is dense in [0, 1] see p.124 of [4]). By 28) and 38), Zs) = lim k Z ε k) s) a.s. for any s [0, 1]. 41) 19

20 By 40) and the uniqueness of the limit, it follows that for any s 1,..., s m T, Zs 1 ),..., Zs m )) d = Y s 1 ),..., Y s m )). Consider now another subsequence N Z + such that P Z ε k) ) 1 w Q as k, k N, for a probability measure Q on D, D). Let Y be a random element in D with law Q, defined on a probability space Ω, F, P ). Let T = {s 0, 1); P s DiscY )) = 0} {0, 1}. The same argument as above shows that for any s 1,..., s m T Zs 1 ),..., Zs m )) d = Y s 1 ),..., Y s m )). Hence, Y s 1 ),..., Y s m )) d = Y s 1 ),..., Y s m )) for any s 1,..., s m T T. Since T T is dense in [0, 1] and contains 1, by Theorem 12.5 of [5], we conclude that Q = Q. This shows that any subsequence of {P Z ε k) ) 1 } k which converges weakly, in fact converges weakly to Q. Therefore, P Z ε k) ) 1 w Q as k, and relation 40) holds as k not only along the subsequence N ). Note that Z ε k) s) = k j=0 Zj 1, s) EZ j 1, s)) ) and {X j = Z j 1, ) EZ j 1, ))} j 0 are random elements in D by Lemma 3.1), which are independent and have mean zero. The existence of a càdlàg process {Zs)} s [0,1] such that lim k Z ε k) Z = 0 a.s. will follow by Theorem 2.1.iii) of [2]. Relation 2.1) of [2] holds, due to 40). We only have to prove that { Y s) } s [0,1] is uniformly integrable, which is equivalent to { Zs) } s [0,1] being uniformly integrable. This will follow from the fact that: sup E Zs) p < for any 1 < p < α. 42) s [0,1] To prove 42), recall from Proposition 3.4 that Zs) has a S α σ s, β s, 0)-distribution. By Property of [23], E Zs) p = σsc p α,βs p)) p, where c α,βs p)) p = c p 1 + βs 2 tan 2 απ ) p/2α p cos 2 α arctan β s tan απ )) 2 c p 1 + tan 2 απ ) p/2α for all s [0, 1], 2 and c p > 0 is a constant depending only on p. The form of the constant c α,β p) plays an important roles in the argument above. This constant was computed in [12].) 
Note that for any s ∈ [0, 1],

σ_s^α = C_α^{−1}(c_s^+ + c_s^−) = C_α^{−1} μ_s({y ∈ R; |y| > 1}) = C_α^{−1} ν({(r, z) ∈ (0, ∞) × S_D; r|z(s)| > 1}) ≤ C_α^{−1} ν((1, ∞) × S_D) = C_α^{−1} c ν_α((1, ∞)) < ∞,

where for the last equality we used definition (2) of ν. Relation (42) follows.

The following result proves Theorem 1.3.a) in the case α > 1.

Theorem 3.12. If α ∈ (1, 2), the process {Z(t)}_{t≥0} defined in Lemma 3.11 is a D-valued α-stable Lévy motion (corresponding to ν). This process is (1/α)-self-similar, i.e. it satisfies (34). Moreover, for any t ≥ 0 and for any monotone sequence (t_k)_{k≥0} with t_k → t,

lim_{k→∞} ‖Z(t_k) − Z(t)‖ = 0 a.s. (43)
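The self-similarity claim rests on the scaling property of Lemma 3.3.b). A hedged numerical check in one dimension (m = 1), using the density of Lemma 3.3.c) with hypothetical weights c_s^+ = 2, c_s^- = 1 and half-lines A = (a, ∞), for which μ_s(hA) = h^{−α} μ_s(A):

```python
import math

alpha, c_plus = 1.5, 2.0

def mu_tail(a, upper=1e6, n=200000):
    # midpoint Riemann sum of c_plus * alpha * y**(-alpha-1) over (a, upper),
    # on a log scale (y = e^u), approximating mu_s((a, inf)) = c_plus * a**-alpha
    lo, hi = math.log(a), math.log(upper)
    h = (hi - lo) / n
    return sum(c_plus * alpha * math.exp(-alpha * (lo + (i + 0.5) * h))
               for i in range(n)) * h

m1 = mu_tail(1.0)
m3 = mu_tail(3.0)
ratio = m3 / m1
print(ratio)  # ~ 3**-alpha = 3**-1.5 ~ 0.1925
```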

Proof: The first two sentences are proved exactly as in the case α < 1, with obvious modifications in the form of the characteristic functions, due to centering. We only have to prove the last sentence. For this, we apply again Theorem 2.1.(iii) of [2] with E = R. For any i ≥ 1, let X_i = Z(t_{i−1}) − Z(t_i). By property ii) in Definition 1.1, (X_i)_{i≥1} are independent random elements in D (with zero mean). Let S_k = Σ_{i=1}^k X_i = Z(t_0) − Z(t_k) for all k ≥ 1, and Y = Z(t_0) − Z(t). We first show that for any s_1, …, s_m ∈ [0, 1], (S_k(s_1), …, S_k(s_m)) →_d (Y(s_1), …, Y(s_m)) as k → ∞. To see this, note that (S_k(s_1), …, S_k(s_m)) =_d (Z(t_0 − t_k, s_1), …, Z(t_0 − t_k, s_m)) by property iii) in Definition 1.1 (stationarity of the increments). It is now clear that we have the following convergence of the characteristic functions: for any u = (u_1, …, u_m) ∈ R^m,

E(e^{i(u_1 S_k(s_1) + … + u_m S_k(s_m))}) = exp{ (t_0 − t_k) ∫_{R^m} (e^{i u·y} − 1 − i u·y) μ_{s_1,…,s_m}(dy) }
→ exp{ (t_0 − t) ∫_{R^m} (e^{i u·y} − 1 − i u·y) μ_{s_1,…,s_m}(dy) } = E(e^{i(u_1 Y(s_1) + … + u_m Y(s_m))}),

as k → ∞. It remains to show that {|Y(s)|}_{s∈[0,1]} is uniformly integrable, which is equivalent to saying that {|Z(t_0 − t, s)|}_{s∈[0,1]} is uniformly integrable, by the stationarity of the increments. By the self-similarity of {Z(t)}_{t≥0}, Z(t_0 − t, s) =_d (t_0 − t)^{1/α} Z(1, s) for all s ∈ [0, 1]. Using (42) and the fact that Z(1, s) = Z̃(1, s) a.s. for any s ∈ [0, 1], it follows that for any 1 < p < α, sup_{s∈[0,1]} E|Z(t_0 − t, s)|^p = (t_0 − t)^{p/α} sup_{s∈[0,1]} E|Z(1, s)|^p < ∞. (Recall that in (42) we used the notation Z(s) = Z̃(1, s).) Hence, {|Z(t_0 − t, s)|}_{s∈[0,1]} is uniformly integrable. By Theorem 2.1.(iii) of [2], it follows that S_k → Z(t_0) − Z(t) a.s. in (D, ‖·‖) as k → ∞, which is the same as Z(t_k) → Z(t) a.s. in (D, ‖·‖) as k → ∞.

The following preliminary result will be used in the proof of tightness of (Z^{(ε_k)})_{k≥1}.

Lemma 3.13. For any ε > 0 and T > 0, E‖Z^{(ε)}‖_{T,D} ≤ Tc (α/(α−1)) ε^{1−α}.

Proof: By definition, for any t ∈ [0, T] and s ∈ [0, 1], we have

|Z^{(ε)}(t, s)| ≤ ∫_{[0,t]×(ε,∞)×S_D} r|z(s)| N(du, dr, dz) ≤ ∫ r N(du, dr, dz) =: Y.
Hence ‖Z^{(ε)}‖_{T,D} ≤ Y (the last integral above being over [0, T] × (ε, ∞) × S_D) and

E‖Z^{(ε)}‖_{T,D} ≤ E(Y) = T ∫_{(ε,∞)×S_D} r ν(dr, dz) = Tc (α/(α−1)) ε^{1−α}.

The next result plays a crucial role in the proof of Theorem 1.3.b) in the case α > 1. Its proof uses some results related to sums of i.i.d. regularly varying random elements in D, which are given in Section 4.5 below.
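Lemma 3.13 feeds directly into the Markov-inequality bound used for (44) below. A hedged arithmetic sketch, assuming the hypothetical values c = 1 and T = 1:

```python
alpha, eps = 1.5, 0.1

# Lemma 3.13 bound on the first moment of the truncated process (c = 1, T = 1)
mean_bound = alpha / (alpha - 1) * eps ** (1 - alpha)

def tail_bound(A):
    # Markov: P(||Z^(eps)||_{T,D} > A) <= mean_bound / A (capped at 1)
    return min(1.0, mean_bound / A)

print(mean_bound)         # ~ 9.4868 for alpha = 1.5, eps = 0.1
print(tail_bound(200.0))  # ~ 0.0474
```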

22 Theorem If Assumption B holds, then Z ε k) )k 1 is tight in D[0, ); D). Proof: It is enough to prove that Z ε k) )k 1 is tight in D[0, T ]; D) for any T > 0. Without loss of generality, we assume that T = 1. Let P k be the law of Z ε k). We verify that Pk ) k 1 satisfies conditions i)-iii) of Theorem 2.4. To prove this, we argue as in the last part of the proof of Theorem 2.12 of [22]. For condition i), it suffices to show that the following two relations hold: lim P A Zε 0) D > A) = 0 for all ε 0 > 0 44) lim sup P Z ε) ε 0) Z D > η) = 0 for all η > 0. 45) ε 0 0 0<ε<ε 0 To see this, let η > 0 and ρ > 0 be arbitrary. By 45) and the fact that ε k 0, there exist ε 0 0, 1) and k 0 such that P Z ε k) Z ε 0 ) D > η) < ρ/2 for any k k 0. By 44), there exists A 0 > 0 such that P Z ε 0 ) D > A 0 ) < ρ/2. Let a 0 = η + A 0. Then, for all k k 0, P Z ε k) D > a 0 ) P Z ε k) Z ε 0 ) D > η) + P Z ε 0 ) D > A 0 ) < ρ. This proves that condition i) holds. To prove 44), let ε 0 > 0 be arbitrary. For any A > 2 EZ ε 0) ) D, P Z ε 0) D > A) P Z ε0) D > A/2) 2 A EZε 0) ) D 2 A T c α α 1 ε1 α using Markov inequality and Lemma Relation 44) follows letting A. To prove 45), we use an indirect argument. Consider a sequence X i ) i 1 of i.i.d. regularly varying elements in D as given by Definition 1.4) with limiting measure ν given by 2). Let S n ε) be given by relation 62) below. Similarly to Theorem 4.13 below which is based on the fact that the probability measure Γ 1 satisfies Assumptions B), it can be proved that for any 0 < ε < ε 0, S n ε) S ε 0) n ES n ε) S ε 0) n ) d Z ε) Z ε 0) 0, in D[0, 1]; D), 46) where D[0, 1]; D) is equipped with distance d D. For any t > 0 and s [0, 1], we define S n <ε t, s) = 1 a n [nt] i=1 X i s)1 { Xi a nε}. Then S n ε) = S n S n <ε. Hence, S n ε) S ε 0) n = S <ε 0 n S <ε and relation 46) becomes: S <ε 0 n S <ε n ES <ε 0 n S <ε n ) d Z ε) Z ε 0) n in D[0, 1]; D). 
Since D is d D -continuous see Lemma 2.3), by the continuous mapping theorem, we have: S <ε 0 n S <ε n ES <ε 0 n S <ε n ) D d Z ε) Z ε 0) D as n. Let η > 0 be arbitrary. By Portmanteau theorem, P Z ε) Z ε 0) D > η) lim inf P n S<ε 0 n S n <ε ES <ε 0 n S n <ε ) D > η) lim sup P S <ε 0 n ES <ε 0 n ) D > η/2) + P S n <ε ES n <ε ) D > η/2). n 22

23 We take the supremum over all ε 0, ε 0 ), followed by the limit as ε 0 0. We obtain that lim ε0 0 sup 0<ε<ε0 P Z ε) Z ε 0) D > η) is less than lim lim sup P S <ε 0 n ES <ε 0 n ) D > η/2) + lim sup lim sup P S n <ε ES n <ε ) D > η/2). ε 0 0 n ε0 0 0<ε<ε 0 n Since S <ε n = S n S ε) n, both these terms are zero, by relation 63) below with T = 1). This concludes the proof of 45). We prove that P k ) k 1 satisfies condition ii) of Theorem 2.4. Let η > 0 and ρ > 0 be arbitrary. It suffices to show that there exist δ 0, 1) and ε 0 > 0 such that for all ε 0, ε 0 ), a) b) P w Z ε) t, δ) > η for some t [0, 1]) < ρ P Z ε) t, δ) Z ε) t, 0) > η for some t [0, 1]) < ρ c) P Z ε) t, 1 ) Z ε) t, 1 δ) > η for some t [0, 1]) < ρ. By 45), there exists ε 0 > 0 such that 47) P Z ε) Z ε 0) D > η/4) < ρ/2 for all ε 0, ε 0 ). 48) Since D[0, 1]; D) endowed with d 0 D is separable and complete see Theorem 2.1), by Theorem 1.3 of [5], the single probability measure P Z ε 0) ) 1 is tight. Hence, by condition ii) of Theorem 2.4, there exists δ 0, 1) such that Using the fact that P w Z ε 0) t, δ) > η/2 for some t [0, 1]) < ρ/2 49) P Z ε 0) t, δ) Z ε 0 ) t, 0) > η/2 for some t [0, 1]) < ρ/2 50) P Z ε 0) t, 1 ) Z ε 0 ) t, 1 δ) > η/2 for some t [0, 1]) < ρ/2. 51) w x + y, δ) w x, δ) + 2 y for all x, y D, we infer that w Z ε) t), δ) w Z ε 0) t), δ)+2 Z ε) Z ε 0 ) D, and hence P w Z ε) t), δ) > η for some t [0, 1]) is smaller than P w Z ε 0) t), δ) > η/2 for some t [0, 1]) + P Z ε) Z ε 0 ) D > η/4). Part a) of 47) follows from 48) and 49). Similarly, part b) of 47) follows from 48) and 50), using the fact that Z ε) t, δ) Z ε) t, 0) Z ε 0) t, δ) Z ε 0 ) t, 0) + 2 Z ε) Z ε 0 ) D, whereas part c) of 47) follows from 48) and 51), since Z ε) t, 1 ) Z ε) t, 1 δ) Z ε 0) t, 1 ) Z ε 0 ) t, 1 δ) + 2 Z ε) Z ε 0 ) D. 23
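The elementary inequality w(x + y, δ) ≤ w(x, δ) + 2‖y‖ invoked above can be checked on a discrete proxy, with the modulus realized as the maximal oscillation over windows of fixed width on a grid (the sample functions below are hypothetical):

```python
def osc(x, width):
    # maximal oscillation of the grid function x over windows of `width` points,
    # a discrete proxy for the cadlag modulus of continuity
    return max(max(x[i:i + width]) - min(x[i:i + width])
               for i in range(len(x) - width + 1))

x = [0.0, 1.0, 1.0, 3.0, 2.5, 2.5, 4.0, 0.5]
y = [0.2, -0.1, 0.3, 0.0, -0.4, 0.1, -0.2, 0.3]
xy = [a + b for a, b in zip(x, y)]
lhs = osc(xy, 3)
rhs = osc(x, 3) + 2 * max(abs(b) for b in y)
print(lhs <= rhs)  # True: osc(x+y) <= osc(x) + osc(y) <= osc(x) + 2*||y||
```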

24 It remains to prove that P k ) k 1 satisfies condition iii) of Theorem 2.4. Let η > 0 and ρ > 0 be arbitrary. Note that Z ε) 0) = 0. We will show that there exist δ 0, 1) and ε 0 > 0 such that for all ε 0, ε 0 ), a) P w D Zε), δ) > η) < ρ b) P Z ε) δ) > η) < ρ c) P 52) d 0 ε) ε) ) ) J 1 Z 1 ), Z 1 δ) > 3η/2 < ρ. Let ε 0 be such that 48) holds. Using again the fact that P Z ε 0) ) 1 is tight, but invoking this time condition iii) of Theorem 2.4, we infer that there exists δ 0, 1) such that P w DZ ε 0), δ) > η/2) < ρ/2 53) P Z ε 0) δ) > η/2) < ρ/2 54) P d 0 J 1 Z ε 0 ) 1 ) Z ε 0 ) 1 δ) ) > η/2 ) < ρ/2. 55) By Lemma 2.2, P w D Zε), δ) > η) P w D Zε 0), δ) > η/2) + P 2 Z ε) Z ε 0 ) D > η/2) < ρ. Part a) of 52) follows using 53) and 48). Part b) of 52) follows using 54) and 48), since Z ε) δ) Z ε 0) δ) + Z ε) Z ε 0 ) D. To see that part c) of 52) holds, note that by the triangular inequality, d 0 J 1 Z ε) 1 ), Z ε) 1 δ) ) is smaller than d 0 J 1 Z ε) 1 ), Z ε 0 ) 1 ) ) + d 0 J1 Z ε 0 ) 1 ), Z ε 0 ) 1 δ) ) + d 0 J1 Z ε 0 ) 1 δ), Z ε) 1 δ) ). We treat separately these three terms. For the second term, we use 55). For the last term, we use 48), since this term is bounded by Z ε 0) 1 δ) Z ε) 1 δ) which is smaller than Z ε 0) Z ε) D. For the first term, we also use 55), since this term is bounded by Z ε) 1 ) Z ε 0) 1 ) which is smaller than Z ε 0 ) Z ε) D. To see this, note that by Remark 3.6, Z ε) 1 ) = lim δ 0 Z ε) 1 δ) in D, ) and Z ε 0) 1 ) = limδ 0 Z ε 0) 1 δ) in D, ), and hence Z ε) 1 ) Z ε 0) 1 ) = lim δ 0 Z ε) 1 δ) Z ε 0) 1 δ) Z ε) Z ε 0 ) D. The following result proves Theorem 1.3.b) in the case α > 1. Theorem If α 1, 2) and Assumption B holds, then there exists a collection { Zt)} t 0 of random elements in D such that P Zt) = Zt)) = 1 for all t 0, the map t Zt) is in D[0, ); D), and Z ε k) ) d Z ) in D[0, ); D) 56) as k, k N, for a subsequence N Z +, where D[0, ); D) is equipped with the Skorohod distance d,d given by 25). 24

25 Proof: Step 1. By Theorem 3.14, there exists a subsequence N Z + such that Z ε k) ) d Y ) in D[0, ); D), 57) as k, k N, where Y is a random element in D[0, ); D), defined on a probability space Ω, F, P ). We prove that for any t 1,..., t n 0, Zt 1 ),..., Zt n )) d = Y t 1 ),..., Y t n )) in D n. 58) To see this, note that 57) implies that Z ε k) t1 ),..., Z ε k) tn )) d Y t 1 ),..., Y t n )) in D n, J1 n ), for any t 1,..., t n T Y = T P Y 1 see 21)). On the other hand, by 39), Z ε k) t1 ),..., Z ε k) tn )) p Zt 1 ),..., Zt n )) in D n, J1 n ) for any t 1,..., t n 0. By the uniqueness of the limit, 58) holds for any t 1,..., t n T Y. To see that 58) holds for arbitrary t 1,..., t n 0, we proceed by approximation. Since T Y is dense in [0, ), for any i = 1,..., n, there exists a monotone sequence t k i ) k T Y such that t k i t i as k. By 43), Zt k 1),..., Zt k p n)) Zt 1 ),..., Zt n )) in D n, J1 n ) as k. Since Y has all sample paths in D[0, ); D), Y t k 1),..., Y t k n)) Y t 1 ),..., Y t n )) in D n, J1 n ) as k. Relation 58) follows again by the uniqueness of the limit. Step 2. Relation 58) shows that processes {Zt)} t 0 and {Y t)} t 0 have the same finite-dimensional distributions. The process {Y t)} t 0 has sample paths in D[0, ); D), which is a Borel space being a Polish space). By Lemma 3.24 of [16], there exists a process { Zt)} t 0 defined on the same probability space Ω, F, P ), whose sample paths are in D[0, ); D), such that P Zt) = Zt)) = 1 for all t 0. In particular, { Zt)} t 0 has the same finite-dimensional distributions as {Zt)} t 0, hence also as {Y t)} t 0. Since finite-dimensional distributions uniquely determine the law, it follows that the random elements Z ) = { Zt)} t 0 and Y ) = {Y t)} t 0 have the same law in D[0, ); D). Relation 56) follows from 57). 
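The point-process machinery formalized in Section 4 rests on the Poisson random measure N on [0, ∞) × (0, ∞) × S_D with intensity Leb ⊗ ν used throughout the construction above. A minimal hedged simulation that ignores the S_D coordinate (hypothetical choices c = 1, α = 3/2) and checks the mean number of atoms in a bounded set:

```python
import math, random

random.seed(2)
alpha = 1.5

def poisson(lam):
    # Knuth's multiplication method, adequate for small lam
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def sample_counts(a, b, reps=4000):
    # empirical mean of N([0,1] x (a, b]): nu_alpha((1, inf)) = 1, so N has
    # Poisson(1) atoms (U_i, R_i), U_i uniform on [0,1], R_i Pareto(alpha)
    total = 0
    for _ in range(reps):
        n = poisson(1.0)
        atoms = [(random.random(), (1.0 - random.random()) ** (-1.0 / alpha))
                 for _ in range(n)]
        total += sum(1 for (u, r) in atoms if a < r <= b)
    return total / reps

est = sample_counts(1.0, 2.0)
print(est)  # close to nu_alpha((1,2]) = 1 - 2**-1.5 ~ 0.646
```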
4 Approximation: proof of Theorem 1.5 In this section, we show that the α-stable Lévy process with values in D constructed in the Section 3 can be obtained as the limit in distribution) of the partial sum sequence associated with i.i.d. regularly varying elements in D, with suitable normalization and centering. This result can be viewed as an extension of the stable functional central limit theorem see e.g. Theorem 7.1 of [21]) to the case of random elements in D. The proof of this result uses the method of point process convergence, instead of the classical method based on finite dimensional convergence and tightness. A similar method was used in [22] for fixed time t = 1. We extend the arguments of [22] to include the time variable t > Point processes on Polish spaces In this subsection, we review some basic concepts related to point processes on a Polish space, following [6]. Similar concepts are considered in [20, 21] for point processes on an LCCB space i.e. a locally compact space with countable basis). 25

26 Let E, d) be a Polish space i.e. a complete separable metric space) and E its Borel σ-field. A measure µ on E is boundedly finite if µa) < for all bounded sets A E. Recall that a set A is bounded if it is contained in an open ball.) We denote by M + E) the set of all boundedly finite measures on E, and by M p E) its subset consisting of point or counting) measures, i.e. Z + -valued measures, where Z + = {0, 1, 2,..., }. A measure µ M p E) can be represented as µ = i 1 δ x i for some x i ) i 1 E, where δ x is the Dirac measure at x. In this case, x i ) i 1 are called the atoms or points) of µ. A measure µ = i 1 δ x i M p E) is simple if µ{x}) 1 for all x E, i.e. x i ) i 1 are distinct. The set M + E) is equipped with the topology of ŵ-convergence: µ ŵ n µ on E if µ n A) µa) for any bounded set A E with µ A) = 0. By Proposition A.2.6.II of [6], this is equivalent to µ n f) µf) for any f ĈE), where µf) = fdµ E and ĈE) is the set of bounded continuous functions f : E R which vanish outside a bounded set. We denote by M + E) and M p E) the Borel σ-fields of M + E), respectively M p E). By Proposition 9.1.IV of [6], M + E) and M p E) are Polish spaces, and M + E) and M p E) are generated by the functions M + E) µ µa), A E, respectively M p E) µ µa), A E. A point process on E is a function N : Ω M p E) defined on a probability space Ω, F, P ), which is F/ M p E)-measurable, i.e. NA) : Ω Z + is F-measurable for any A E. The law P N 1 of N is uniquely determined by the Laplace functional L N f) = Ee Nf) ), for all measurable functions f : E [0, ) with bounded support. We say that a sequence N n ) n 1 of point processes on E converges in distribution to the point process N on E and we write N d n N in M p E), if P Nn 1 ) n 1 converges weakly to P N 1 as probability measures on M p E). By Proposition 11.1.VIII of [6], this is equivalent to L Nn f) L N f) for all continuous functions f : E R vanishing outside a bounded set. Definition 4.1. Let ν M + E) be arbitrary. 
A point process N on E is called a Poisson random measure on E of intensity ν, if for any bounded set A E, NA) has a Poisson distribution with mean νa), and for any bounded disjoint sets A 1,..., A n E, NA 1 ),..., NA n ) are independent. The Laplace functional of a Poisson random measure N of intensity ν on E is: { } L N f) = exp 1 e fx) )νdx), 59) E for all bounded measurable functions f : E [0, ) with bounded support. The following result plays a crucial role in this article. It is an extension of Proposition 3.21 of [20] to point processes on Polish spaces, with which shares the same proof based on Laplace functionals). Recall that a random element in E is a function X : Ω E defined on a probability space Ω, F, P ), which is F/E-measurable. Proposition 4.2. Let E be a Polish space and ν M + E) be arbitrary. For any n 1, let X i,n ) i 1 be i.i.d. random elements in E and N n = i 1 δ i/n,x i,n ). Let N be a Poisson 26

27 random measure on [0, ) E of intensity Leb ν, where Leb is the Lebesgue measure. Then N d n N in M p [0, ) E) if and only if np X 1,n ) ŵ ν on E. We conclude this section with few words about finite measures. We denote by M f E) the set of finite measures on E, equipped with the topology if weak convergence: µ n w µ if µ n A) µa) for any set A E with µ A) = 0. Finally, we denote by M p,f E) the set of finite point measures on E, equipped also with the topology of weak convergence. 4.2 Continuity of summation functional In this subsection, we establish the continuity of the truncated summation functional defined on the set of point measures on [0, ) D 0. This will constitute an important step in the proof of our main result. The proofs contained in this subsection are extensions of those of [22] to point measures whose atoms include also a time variable. We endow the spaces [0, ) D 0 and [0, T ] D with the product topologies, D being equipped with Skorohod s J 1 -topology. For fixed T > 0 and ε > 0, we define Ψ : M p [0, ) D 0 ) M p,f [0, T ] D) by: Ψm) = m [0,T ] ε, ) SD ψ 1 where m [0,T ] ε, ) SD denotes the restriction of m to [0, T ] ε, ) S D, and the function ψ : [0, ) ε, ) S D [0, T ] D is given by ψt, r, z) = t, rz). Note that Ψm) is a finite measure since [0, T ] ε, ) S D is a bounded set. The application of the function Ψ has a double effect on a measure m: it removes the atoms t i, r i, z i ) of m whose second coordinate r i is less than ε or is, and transforms the remaining atoms using the inverse polar-coordinate map r, z) rz, while leaving the first coordinate t i of these atoms unchanged provided that t i T ). More precisely, if m = i 1 δ t i,r i,z i ) M p [0, ) D 0 ) then Ψm) = t i T δ t i,r i z i )1 {ri ε, )}. For any m M p [0, ) D 0 ) and for any measurable function f : [0, T ] D [0, ), ft, x)ψm)dt, dx) = [0,T ] D ft, rz)mdt, dr, dz). [0,T ] ε, ) S D 60) Lemma 4.3. 
The function Ψ is continuous on the set A of measures m M p [0, ) D 0 ) which satisfy the following two conditions: m[0, ) {ε, } S D ) = 0 and m{0, T } ε, ) S D ) = 0. The function Ψ = Ψ ε,t and the set A = A ε,t depend on ε and T. To simplify the writing, we drop the indices ε, T.) Proof: Let E = [0, ) D 0, E = [0, ) ε, ) S D and E = [0, T ] D. Since E is a bounded set, M p E ) = M p,f E ). Note that Ψ = Ψ 2 Ψ 1, where Ψ 1 : M p E) M p,f E ) is the restriction Ψ 1 m) = m E and Ψ 2 : M p,f E ) M p,f E ) is given by Ψ 2 m) = m ψ 1. 27

28 Similarly to Proposition 3.3 of [10], it can be shown that Ψ 1 is continuous on A. The fact that Ψ 2 is continuous follows from the continuity of function ψ, exactly as in the proof of Proposition 5.6.a) of [21]. Definition 4.4. We denote by M p,f [0, T ] D) the set of measures µ M p,f[0, T ] D) which have the following properties: i) µ is simple; ii) µ{t, x), t, x )}) 1 for any t, x), t, x ) [0, T ] D with x x and Discx) Discx ) ; iii) µ{t 0 } D) 1 for all t 0 [0, T ]. Alternatively, we can say that Mp,f [0, T ] D) is the set of finite point measures µ = p i=1 δ x i on [0, T ] D which satisfy the following three conditions: 1) the points t 1, x 1 ),..., t p, x p ) are distinct; 2) Discx i ) Discx j ) = for all i j; 3) no vertical line contains two points of µ. The next result gives the continuity of the summation functional, being the extension of Lemma 2.9 of [22] to our setting. Recall that D[0, T ]; D) is the space of right-continuous functions with left limits with respect to J 1 see Section 2). Theorem 4.5. The summation functional Φ : M p,f [0, T ] D) D[0, T ]; D) defined by ) Φµ) = x i t i t t [0,T ] if µ = p δ ti,x i ), i=1 is continuous on the set Mp,f [0, T ] D), where D[0, T ]; D) is equipped with the metric d T,D given by 22). Proof: We use a similar argument to page 221 of [21], combined with the argument of Lemma 2.9 of [22]. Let µ = p i=1 δ t i,x i ) Mp,f [0, T ] D) and µ n) n 1 M p,f [0, T ] D) be such that µ w n µ. We must prove that: Φµ n ) Φµ) in D[0, T ]; D). 61) Note that µ n [0, T ] D) µ[0, T ] D) = p implies that µ n [0, T ] D) = p for all n n 0 for some n 0 1, since µ n [0, T ] D) Z + for all n. Since µ is simple, the atoms t 1, x 1 ),..., t p, x p ) are distinct. Hence, there exists r > 0 such that µb r t i, x i )) = 1 for all i = 1,..., p, where B r t i, x i ) is the ball of radius r and center t i, x i ). Fix i = 1,..., p. For any r 0, r), µ B r t i, x i )) = 0 and hence, µ n B r t i, x i )) µb r t i, x i )) = 1. 
Therefore, for any r′ ∈ (0,r), there exists N_i(r′) ≥ n_0 such that µ_n(B_{r′}(t_i,x_i)) = 1 for all n ≥ N_i(r′). In particular, for r′ = r/2 there exists N_i := N_i(r/2) such that µ_n(B_{r/2}(t_i,x_i)) = 1 for all n ≥ N_i. We infer that for any n ≥ N_i, µ_n has exactly one atom in B_{r/2}(t_i,x_i), which we denote by (t_i^n, x_i^n). We claim that: (t_i^n, x_i^n) → (t_i, x_i) in [0,T] × D, i.e. t_i^n → t_i and x_i^n → x_i in J_1. To see this, let r′ ∈ (0, r/2) be arbitrary. We know that for any n ≥ N_i(r′), µ_n has exactly one atom in B_{r′}(t_i,x_i), and since B_{r′}(t_i,x_i) ⊂ B_{r/2}(t_i,x_i), this atom must be (t_i^n, x_i^n). Hence, (t_i^n, x_i^n) ∈ B_{r′}(t_i,x_i) for any n ≥ N_i(r′).

Let N_0 = max_{i≤p} N_i. For any n ≥ N_0, µ_n = Σ_{i=1}^p δ_{(t_i^n, x_i^n)} and Φ(µ_n) = ( Σ_{t_i^n ≤ t} x_i^n )_{t ∈ [0,T]}. The points t_1, ..., t_p are distinct, since µ cannot have two atoms with the same time coordinate, by property (iii) in the definition of M*_{p,f}([0,T] × D). Pick δ_0 > 0 such that t_{i+1} − t_i > 2δ_0 for all i = 1,...,p−1. Let δ ∈ (0, δ_0) be arbitrary. By the choice of δ_0, the intervals (t_i − δ, t_i + δ), i = 1,...,p, are non-overlapping. By property (ii) in the definition of M*_{p,f}([0,T] × D), Disc(x_i) ∩ Disc(x_j) = ∅ for all i ≠ j. By Theorem 4.1 of [27], it follows that Σ_{i=1}^k x_i^n → Σ_{i=1}^k x_i in J_1 for all k ≤ p. Hence, there exists n_1(δ) ≥ N_0 such that for all n ≥ n_1(δ), |t_k^n − t_k| ≤ δ and d^0_{J_1}( Σ_{i=1}^k x_i^n, Σ_{i=1}^k x_i ) ≤ δ for all k ≤ p. Let λ_n ∈ Λ_T be such that λ_n(t_i^n) = t_i for all i = 1,...,p and λ_n is a linear function between t_i^n and t_{i+1}^n. By relation (7.20) of [21], ||λ_n − e||_T ≤ 3δ for all n ≥ n_1(δ). Recalling the definitions (22) and (23) of the distances d_{T,D} and ρ_{T,D}, for any n ≥ n_1(δ), we have:

ρ_{T,D}( Φ(µ), Φ(µ_n) ∘ λ_n^{−1} ) = sup_{t ∈ [0,T]} d^0_{J_1}( Φ(µ)(t), Φ(µ_n)(λ_n^{−1}(t)) ) = sup_{t ∈ [0,T]} d^0_{J_1}( Σ_{t_i ≤ t} x_i, Σ_{λ_n(t_i^n) ≤ t} x_i^n ) = max_{k ≤ p} d^0_{J_1}( Σ_{i=1}^k x_i, Σ_{i=1}^k x_i^n ) < δ,

and hence d_{T,D}( Φ(µ), Φ(µ_n) ) ≤ 3δ. This concludes the proof of (61).

The following corollary is an immediate consequence of the previous two results.

Corollary 4.6. The function Q : M_p([0,∞) × D̄_0) → D([0,T]; D) given by

Q(m) = ( Σ_{t_i ≤ t} r_i z_i 1_{{r_i ∈ (ε,∞)}} )_{t ∈ [0,T]}  if m = Σ_{i≥1} δ_{(t_i, r_i, z_i)},

is continuous on the set U = A ∩ Ψ^{−1}( M*_{p,f}([0,T] × D) ), where D([0,T]; D) is equipped with the distance d_{T,D} given by (22). (The function Q = Q_{ε,T} and the set U = U_{ε,T} depend on ε and T. To simplify the writing, we omit the indices ε, T.)

Proof: The conclusion follows by Lemma 4.3 and Theorem 4.5, since Q = Φ ∘ Ψ.

4.3 Convergence of truncated sums

In this subsection, we consider a sequence (X_i)_{i≥1} of i.i.d.
regularly varying random elements in D, and we prove that the sequence (S_n^{(ε)})_{n≥1} of truncated sums defined by:

S_n^{(ε)}(t) = (1/a_n) Σ_{i=1}^{[nt]} X_i 1_{{||X_i|| > a_n ε}}, for any t ≥ 0, (62)

converges in distribution in the space D([0,∞); D) to the process Z^{(ε)} given by (32). The following result together with Corollary 4.6 will allow us to apply the continuous mapping theorem. For this result, we need Assumption B.
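As a quick numerical illustration of the truncation in (62) (our own sketch, not part of the proofs: we replace the D-valued X_i by scalar Pareto(α) variables, and take a_n = n^{1/α}, i.e. C = 1, with α, ε and the time grid chosen for illustration only):

```python
import numpy as np

def truncated_sums(xi, alpha, eps, t_grid):
    """Scalar analogue of (62): S_n^(eps)(t) = a_n^{-1} * sum_{i <= [nt]} xi_i 1{|xi_i| > a_n eps}."""
    n = len(xi)
    a_n = n ** (1.0 / alpha)                            # normalization with C = 1 (Pareto case)
    kept = np.where(np.abs(xi) > a_n * eps, xi, 0.0)    # discard the small jumps
    csum = np.concatenate(([0.0], np.cumsum(kept)))     # csum[k] = sum of first k kept terms
    return np.array([csum[int(n * t)] for t in t_grid]) / a_n

rng = np.random.default_rng(0)
n, alpha, eps = 10_000, 0.7, 0.1
xi = rng.pareto(alpha, size=n) + 1.0                    # classical Pareto(alpha) on (1, infinity)
t_grid = np.linspace(0.0, 1.0, 11)
path = truncated_sums(xi, alpha, eps, t_grid)
print(path)                                             # S_n^(eps) on the grid
```

For small ε the path is dominated by the few largest increments, which is exactly the heavy-tailed behaviour the truncation isolates.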

Theorem 4.7. Let N be a Poisson random measure on [0,∞) × D̄_0 of intensity Leb × ν, where ν is given by (7). If Γ_1 satisfies Assumption B, then N ∈ U_{ε,T} a.s. for any ε > 0 and T > 0, where U_{ε,T} is the set given in Corollary 4.6.

Proof: We have to show that with probability 1, N satisfies the two conditions listed in Lemma 4.3, and ξ = Ψ_{ε,T}(N) ∈ M*_{p,f}([0,T] × D). We begin with the conditions of Lemma 4.3. For any n ≥ 1, E[N([n−1,n) × {ε,∞} × S_D)] = c ν_α({ε,∞}) = 0 and hence N([n−1,n) × {ε,∞} × S_D) = 0 a.s. By additivity, N([0,∞) × {ε,∞} × S_D) = 0 a.s. Similarly, N({0,T} × (ε,∞) × S_D) = 0 a.s.

Next, we show that with probability 1, ξ satisfies conditions (i)-(iii) given in Definition 4.4. First, we show that ξ is a Poisson random measure on [0,T] × D of intensity Leb × ν^{(ε)}, where ν^{(ε)} = (ν|_{(ε,∞) × S_D}) ∘ U^{−1} and U : (ε,∞) × S_D → D is given by U(r,z) = rz. Note that ξ is a point process since N is a point process and Ψ_{ε,T} is measurable. So, it suffices to show that the Laplace functional of ξ is given by (59). Let g : [0,T] × D → [0,∞) be a bounded measurable function with bounded support. By (60),

L_ξ(g) = E[ exp( −∫_{[0,T] × D} g dξ ) ] = E[ exp( −∫_{[0,T] × (ε,∞) × S_D} g(t, rz) N(dt,dr,dz) ) ] = exp{ −∫_{[0,T] × (ε,∞) × S_D} (1 − e^{−g(t,rz)}) dt ν(dr,dz) } = exp{ −∫_{[0,T] × D} (1 − e^{−g(t,x)}) dt ν^{(ε)}(dx) }.

Since Leb × ν^{(ε)} is diffuse, ξ is simple a.s. So, ξ satisfies condition (i) with probability 1. To show that ξ satisfies condition (ii) with probability 1, we represent its points as follows. Let P_i = c^{1/α} Γ_i^{−1/α}, where Γ_i = Σ_{j=1}^i E_j and (E_i)_{i≥1} are i.i.d. exponential random variables of mean 1. Let (W_i)_{i≥1} be an independent sequence of i.i.d. random elements in S_D of law Γ_1. By the extension of Proposition 5.3 of [21] to Polish spaces, Σ_{i≥1} δ_{(P_i, W_i)} is a Poisson random measure on (0,∞) × S_D of intensity ν, and so, Σ_{i≥1} δ_{(P_i, W_i)} 1_{{P_i > ε}} is a Poisson random measure on (ε,∞) × S_D of intensity ν|_{(ε,∞) × S_D}.
By the extension of Proposition 5.2 of [21] to Polish spaces, Σ_{i≥1} δ_{P_i W_i} 1_{{P_i > ε}} is a Poisson random measure on D of intensity ν^{(ε)}. Finally, by the extension of Proposition 5.3 of [21], ξ′ = Σ_{i≥1} δ_{(τ_i, P_i W_i)} 1_{{P_i > ε}} is a Poisson random measure on [0,T] × D of intensity Leb × ν^{(ε)}, where (τ_i)_{i≥1} are i.i.d. uniformly distributed on [0,T], independent of (E_i)_{i≥1} and (W_i)_{i≥1}. Hence ξ =^d ξ′. Consider the event A = ∩_{i≠j} A_{i,j}, where A_{i,j} = {Disc(W_i) ∩ Disc(W_j) = ∅}. Let F = {(x,y) ∈ S_D × S_D; Disc(x) ∩ Disc(y) ≠ ∅}. By Fubini's theorem and Assumption B,

P(A_{i,j}^c) = P((W_i, W_j) ∈ F) = (Γ_1 × Γ_1)(F) = ∫_{S_D} Γ_1(F_x) Γ_1(dx) = 0,

where F_x = {y ∈ S_D; (x,y) ∈ F} = ∪_{s ∈ Disc(x)} {y ∈ S_D; s ∈ Disc(y)}. Hence, P(A) = 1. Let B be the event on which ξ({(t,x), (t′,x′)}) ≤ 1 for all (t,x), (t′,x′) ∈ [0,T] × D with x ≠ x′ and Disc(x) ∩ Disc(x′) ≠ ∅, and B′ the similar event with ξ replaced by ξ′.

Since ξ =^d ξ′, P(B) = P(B′). We claim that A ⊂ B′. To see this, let ω ∈ (B′)^c. Then, there exist (t,x), (t′,x′) ∈ [0,T] × D with x ≠ x′ and Disc(x) ∩ Disc(x′) ≠ ∅ such that ξ′(ω; {(t,x), (t′,x′)}) ≥ 2. This means that both (t,x) and (t′,x′) are atoms of ξ′(ω). But the atoms of ξ′(ω) are of the form (τ_i(ω), P_i(ω) W_i(ω)) with P_i(ω) > ε. Hence, there exist i ≠ j with P_i(ω) > ε and P_j(ω) > ε such that (t,x) = (τ_i(ω), P_i(ω) W_i(ω)) and (t′,x′) = (τ_j(ω), P_j(ω) W_j(ω)). This proves that ω ∈ A_{i,j}^c ⊂ A^c. Hence, P(B) = P(B′) ≥ P(A) = 1. This proves that ξ satisfies condition (ii) with probability 1.

Finally, to show that ξ satisfies condition (iii) with probability 1, we let C = ∩_{i≠j} C_{i,j}, where C_{i,j} = {τ_i ≠ τ_j}. Note that P(C) = 1, since for all i ≠ j,

P(C_{i,j}^c) = P(τ_i = τ_j) = (1/T^2) ∫_0^T ∫_0^T 1_{{x=y}} dx dy = 0.

Let D′ be the event on which ξ({t_0} × D) ≤ 1 for all t_0 ∈ [0,T], and D″ the similar event with ξ replaced by ξ′. Since ξ =^d ξ′, P(D′) = P(D″). We claim that C ⊂ D″. To see this, let ω ∈ (D″)^c. Then there exists t_0 ∈ [0,T] such that ξ′(ω; {t_0} × D) ≥ 2. This means that ξ′(ω) has at least two atoms with time coordinate t_0. Using the form of the atoms of ξ′(ω), we infer that there exist i ≠ j such that τ_i(ω) = τ_j(ω) = t_0. This proves that ω ∈ C_{i,j}^c ⊂ C^c. Hence, P(D′) = P(D″) ≥ P(C) = 1. This proves that ξ satisfies condition (iii) with probability 1.

The next result gives the convergence of the truncated sums of i.i.d. regularly varying elements in D.

Theorem 4.8. Let (X_i)_{i≥1} be i.i.d. random elements in D such that X_1 ∈ RV({a_n}, ν, D̄_0). Let α be the index of X_1 and Γ_1 be the spectral measure of X_1. Suppose that α < 2, α ≠ 1 and Γ_1 satisfies Assumption B. If {S_n^{(ε)}; n ≥ 1} and Z^{(ε)} are given by (62), respectively (32), then for any ε > 0 and T > 0,

(S_n^{(ε)}(t))_{t ∈ [0,T]} →^d (Z^{(ε)}(t))_{t ∈ [0,T]}  in D([0,T]; D) as n → ∞,

where D([0,T]; D) is equipped with the distance d_{T,D} given by (22). Moreover, P(s ∈ Disc(Z^{(ε)}(t)) for some t > 0) = 0 for all s ∈ [0,1] and ε > 0.
Proof: By Proposition 4.2 with E = D̄_0 and X_{i,n} = (||X_i||/a_n, X_i/||X_i||),

N_n = Σ_{i≥1} δ_{(i/n, ||X_i||/a_n, X_i/||X_i||)} →^d N,

where N is a Poisson random measure on [0,∞) × D̄_0 of intensity Leb × ν. Note that S_n^{(ε)} = Q(N_n) and Z^{(ε)} = Q(N), where Q is the map given in Corollary 4.6. By the continuous mapping theorem and Theorem 4.7, S_n^{(ε)} →^d Z^{(ε)} in D([0,T]; D). To prove the last statement, we fix s ∈ [0,1] and we let Ω_T = ∪_{t ∈ [0,T]} {s ∈ Disc(Z^{(ε)}(t))}. It is enough to prove that P(Ω_T) = 0 for all T > 0. From (32), we see that if W_i is continuous at s for all i ≥ 1, then Z^{(ε)}(t) is continuous at s for all t ∈ [0,T]. Hence, Ω_T ⊂ ∪_{i≥1} {s ∈ Disc(W_i)}. The fact that P(Ω_T) = 0 follows by Assumption B, since P(s ∈ Disc(W_i)) = Γ_1({z ∈ S_D; s ∈ Disc(z)}) = 0.
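A scalar sanity check on the intensity matching behind this convergence (our own illustration, under the assumption of Pareto(α) norms, so P(||X_1|| > x) = x^{−α} for x ≥ 1, and a_n = n^{1/α}, i.e. C = 1): the expected number of points of N_n with time in [0,1] and radial part in (ε,∞) is n P(||X_1|| > a_n ε), which equals ε^{−α} = ν_α((ε,∞)) exactly once a_n ε ≥ 1, matching the intensity Leb × ν of the limiting Poisson random measure N:

```python
# Check: n * P(||X_1|| > a_n * eps) == eps^{-alpha} for Pareto(alpha) tails,
# where P(||X_1|| > x) = x^{-alpha} for x >= 1 and a_n = n^{1/alpha}.
def pareto_tail(x, alpha):
    return min(1.0, x ** (-alpha))

alpha, eps = 0.5, 0.25
for n in (10, 1_000, 100_000):
    a_n = n ** (1.0 / alpha)
    mean_points = n * pareto_tail(a_n * eps, alpha)   # E[N_n([0,1] x (eps,inf) x S_D)]
    print(n, mean_points, eps ** (-alpha))            # the two values agree for every n
```

The agreement is exact here (not only in the limit) because the Pareto tail is exactly a power function.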

4.4 Approximation in the case α < 1

In this subsection, we prove the approximation result (Theorem 1.5) in the case α < 1. The first result shows that a certain asymptotic negligibility condition holds automatically in the case α < 1.

Lemma 4.9. Let (X_i)_{i≥1} be i.i.d. random elements in D such that X_1 ∈ RV({a_n}, ν, D̄_0). Suppose that α ∈ (0,1), where α is the index of X_1. Let {S_n^{(ε)}; n ≥ 1} be given by (62) and S_n(t) = a_n^{−1} Σ_{i=1}^{[nt]} X_i for all t ≥ 0, n ≥ 1. Then for any δ > 0 and T > 0,

lim_{ε→0} limsup_{n→∞} P( ||S_n − S_n^{(ε)}||_{T,D} > δ ) = 0,

and in particular, lim_{ε→0} limsup_{n→∞} P( d_{T,D}(S_n, S_n^{(ε)}) > δ ) = 0.

Proof: Let δ > 0 and T > 0 be arbitrary. Since S_n(t) − S_n^{(ε)}(t) = a_n^{−1} Σ_{i=1}^{[nt]} X_i 1_{{||X_i|| ≤ a_n ε}},

||S_n − S_n^{(ε)}||_{T,D} = (1/a_n) max_{k ≤ [nT]} || Σ_{i=1}^k X_i 1_{{||X_i|| ≤ a_n ε}} || ≤ (1/a_n) Σ_{i=1}^{[nT]} ||X_i|| 1_{{||X_i|| ≤ a_n ε}}.

By Markov's inequality,

P( ||S_n − S_n^{(ε)}||_{T,D} > δ ) ≤ (1/(δ a_n)) [nT] E( ||X_1|| 1_{{||X_1|| ≤ a_n ε}} ).

Since ||X_1|| is regularly varying of index α < 1, E( ||X_1|| 1_{{||X_1|| ≤ x}} ) ∼ (α/(1−α)) x P(||X_1|| > x) as x → ∞, by Karamata's theorem (e.g. Theorem 2.1 of [21]), and hence, by (9),

(n/a_n) E( ||X_1|| 1_{{||X_1|| ≤ a_n ε}} ) ∼ (α/(1−α)) ε n P(||X_1|| > a_n ε) → (α/(1−α)) c ε^{1−α} as n → ∞.

Here f(x) ∼ g(x) as x → ∞ means that f(x)/g(x) → 1 as x → ∞. Therefore,

limsup_{n→∞} P( ||S_n − S_n^{(ε)}||_{T,D} > δ ) ≤ (T/δ) (α/(1−α)) c ε^{1−α}.

The conclusion follows letting ε → 0, and using the fact that α < 1.

Proof of Theorem 1.5.(a): By Theorem 2.8 of [27], it is enough to prove that (S_n(t))_{t ∈ [0,T]} →^d (Z(t))_{t ∈ [0,T]} in D([0,T]; D), for any T > 0, where D([0,T]; D) is equipped with the distance d_{T,D}. This follows by Theorem 4.2 of [4], whose hypotheses are verified due to Theorem 3.9, Theorem 4.8 and Lemma 4.9.
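The Karamata asymptotics used in this proof can be checked in closed form in the Pareto case (our own illustration: for ξ ∼ Pareto(α) with α < 1, a direct integration gives E[ξ 1_{{ξ ≤ x}}] = (α/(1−α))(x^{1−α} − 1), while the Karamata approximation is (α/(1−α)) x P(ξ > x) = (α/(1−α)) x^{1−α}):

```python
# Karamata: E[xi 1{xi <= x}] ~ (alpha/(1-alpha)) * x * P(xi > x) as x -> infinity,
# for xi ~ Pareto(alpha), alpha < 1; both sides are available in closed form here.
alpha = 0.5
for x in (1e2, 1e4, 1e6):
    exact = alpha / (1 - alpha) * (x ** (1 - alpha) - 1)   # integral of t * alpha * t^{-alpha-1} over [1, x]
    karamata = alpha / (1 - alpha) * x * x ** (-alpha)     # (alpha/(1-alpha)) * x * P(xi > x)
    print(x, exact / karamata)                             # ratio tends to 1
```

The ratio approaches 1 at the polynomial rate x^{−(1−α)}, which is why the truncated sums are negligible uniformly on [0,T] once ε is small.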

4.5 Approximation in the case α > 1

In this subsection, we prove the approximation result (Theorem 1.5) in the case α > 1. The following result is the counterpart of Lemma 4.9 for the case α > 1.

Lemma 4.10. Let (X_i)_{i≥1} be i.i.d. random elements in D such that X_1 ∈ RV({a_n}, ν, D̄_0). Suppose that α ∈ (1,2), where α is the index of X_1. Let {S_n^{(ε)}; n ≥ 1} be given by (62). For any t ≥ 0 and n ≥ 1, let S_n(t) = Σ_{i=1}^{[nt]} X_i/a_n, S̄_n^{(ε)}(t) = S_n^{(ε)}(t) − E[S_n^{(ε)}(t)] and S̄_n(t) = S_n(t) − E[S_n(t)]. If (10) holds for any δ > 0 and T > 0, then for any δ > 0 and T > 0,

lim_{ε→0} limsup_{n→∞} P( ||S̄_n − S̄_n^{(ε)}||_{T,D} > δ ) = 0, (63)

and in particular, lim_{ε→0} limsup_{n→∞} P( d_{T,D}(S̄_n, S̄_n^{(ε)}) > δ ) = 0.

Proof: Since S̄_n(t) − S̄_n^{(ε)}(t) = Σ_{i=1}^{[nt]} Y_{i,n} with Y_{i,n} = a_n^{−1}( X_i 1_{{||X_i|| ≤ a_n ε}} − E[X_i 1_{{||X_i|| ≤ a_n ε}}] ),

||S̄_n − S̄_n^{(ε)}||_{T,D} = sup_{t ∈ [0,T]} ||S̄_n(t) − S̄_n^{(ε)}(t)|| = max_{k ≤ [nT]} || Σ_{i=1}^k Y_{i,n} ||.

By the Lévy-Ottaviani inequality, which is valid for independent random elements in a normed space (see Proposition of [17]), for any δ > 0,

P( ||S̄_n − S̄_n^{(ε)}||_{T,D} > δ ) ≤ 3 max_{k ≤ [nT]} P( || Σ_{i=1}^k Y_{i,n} || > δ/3 ).

The conclusion follows by (10).

To deal with the centering constants, we need to use the fact that addition is continuous in the space D([0,T]; D) equipped with the distance d_{T,D}. To deduce this, we cannot simply apply Theorem 4.1 of [27] with (S,m) = (D, d^0_{J_1}), since we do not know if the relation d^0_{J_1}(x + y, x′ + y′) ≤ d^0_{J_1}(x, x′) + d^0_{J_1}(y, y′) holds for any x, x′, y, y′ ∈ D, as required on p. 78 of [27]. Although the general question of the continuity of addition on D([0,T]; D) remains open, we were able to find a weaker version of this result which is sufficient for our purposes. This is contained in the lemma below.

Lemma 4.11. Let (f_n)_{n≥1} ⊂ D and f ∈ D be such that f_n → f in J_1. Consider (y_n)_{n≥1} ⊂ D([0,T]; D) and y ∈ D([0,T]; D) defined as follows: for any t ∈ [0,T],

y_n(t) = ([nt]/n) f_n and y(t) = t f. (64)

Then ρ_{T,D}(y_n, y) → 0.
Moreover, if f is continuous, then for any sequence (x_n)_{n≥1} ⊂ D([0,T]; D) and x ∈ D([0,T]; D) such that d_{T,D}(x_n, x) → 0, we have:

d_{T,D}(x_n + y_n, x + y) → 0. (65)

Proof: We first prove that ρ_{T,D}(y_n, y) → 0. Since f_n → f in J_1, there exists a sequence (ρ_n)_{n≥1} ⊂ Λ such that ||ρ_n||° → 0 and ||f_n − f ∘ ρ_n|| → 0. Let z_n(t) = ([nt]/n) f. Let ε > 0 be arbitrary. Then, there exists N_ε such that for all n ≥ N_ε, ||ρ_n||° < ε and ||f_n − f ∘ ρ_n|| < ε/T. Hence, for any t ∈ [0,T] and n ≥ N_ε, ||y_n(t) − z_n(t) ∘ ρ_n|| ≤ t ||f_n − f ∘ ρ_n|| < ε and d^0_{J_1}(y_n(t), z_n(t)) ≤ ||ρ_n||° ∨ ||y_n(t) − z_n(t) ∘ ρ_n|| < ε. On the other hand, there exists N′_ε such that, for any t ∈ [0,T] and n ≥ N′_ε,

d^0_{J_1}(z_n(t), y(t)) ≤ ||z_n(t) − y(t)|| = |[nt]/n − t| ||f|| ≤ (1/n) ||f|| < ε.

This shows that ρ_{T,D}(y_n, y) = sup_{t ∈ [0,T]} d^0_{J_1}(y_n(t), y(t)) < 2ε for any n ≥ N_ε ∨ N′_ε.

We now prove (65). For any t ∈ [0,T], we denote x(t) = {x(t,s)}_{s ∈ [0,1]}, and we use a similar notation for y(t), x_n(t) and y_n(t). Let ε > 0 be arbitrary. Since f is uniformly continuous, there exists δ_ε ∈ (0,ε) such that for any s, s′ ∈ [0,1] with |s − s′| < δ_ε,

|f(s) − f(s′)| < ε. (66)

Because d_{T,D}(x_n, x) → 0, there exists a sequence (λ_n)_{n≥1} ⊂ Λ_T such that ||λ_n − e||_T → 0 and ρ_{T,D}(x_n ∘ λ_n, x) → 0. Pick 0 < η_ε < ε ∧ ln(δ_ε + 1) arbitrary. Then, there exists N_ε^{(1)} such that for any n ≥ N_ε^{(1)}, sup_{t ∈ [0,T]} |λ_n(t) − t| < ε and sup_{t ∈ [0,T]} d^0_{J_1}(x_n(λ_n(t)), x(t)) < η_ε. Using definition (11) of d^0_{J_1}, it follows that for any n ≥ N_ε^{(1)} and for any t ∈ [0,T], there exists µ_t^{(n)} ∈ Λ such that ||µ_t^{(n)}||° < η_ε and

sup_{s ∈ [0,1]} |x_n(λ_n(t), µ_t^{(n)}(s)) − x(t,s)| < η_ε. (67)

By inequality (13) and the choice of η_ε, sup_{s ∈ [0,1]} |µ_t^{(n)}(s) − s| ≤ e^{η_ε} − 1 < δ_ε. Note that ||f_n − f|| → 0, since f_n → f in J_1 and f is continuous. Hence, there exists N_ε^{(2)} such that sup_{s ∈ [0,1]} |f_n(s) − f(s)| < ε for any n ≥ N_ε^{(2)}. By (66), for any n ≥ N_ε^{(1)} ∨ N_ε^{(2)},

|f_n(µ_t^{(n)}(s)) − f(s)| ≤ |f_n(µ_t^{(n)}(s)) − f(µ_t^{(n)}(s))| + |f(µ_t^{(n)}(s)) − f(s)| < 2ε.

Choose N_ε^{(0)} such that 1/n < ε for any n ≥ N_ε^{(0)}. Then, for any n ≥ N_ε^{(0)} and t ∈ [0,T],

|[nλ_n(t)]/n − t| ≤ |[nλ_n(t)]/n − λ_n(t)| + |λ_n(t) − t| ≤ 1/n + ε < 2ε.

Since ||f_n − f|| → 0, it follows that C := sup_{n≥1} ||f_n|| < ∞. Let N_ε = N_ε^{(0)} ∨ N_ε^{(1)} ∨ N_ε^{(2)}.
Using the definitions of y_n and y, it follows that for any n ≥ N_ε, t ∈ [0,T] and s ∈ [0,1],

|y_n(λ_n(t), µ_t^{(n)}(s)) − y(t,s)| ≤ |[nλ_n(t)]/n − t| |f_n(µ_t^{(n)}(s))| + t |f_n(µ_t^{(n)}(s)) − f(s)| < 2ε(C + T),

and hence, by (67),

|(x_n + y_n)(λ_n(t), µ_t^{(n)}(s)) − (x + y)(t,s)| < η_ε + 2ε(C + T) < ε[1 + 2(C + T)].

To summarize, we have proved that for any n ≥ N_ε and t ∈ [0,T], there exists µ_t^{(n)} ∈ Λ such that ||µ_t^{(n)}||° < η_ε < ε and ||(x_n + y_n)(λ_n(t)) ∘ µ_t^{(n)} − (x + y)(t)|| < ε[1 + 2(C + T)]. By definition (11) of d^0_{J_1}, this implies that for any n ≥ N_ε and t ∈ [0,T],

d^0_{J_1}( (x_n + y_n)(λ_n(t)), (x + y)(t) ) < ε[1 + 2(C + T)].

Therefore, for any n ≥ N_ε,

ρ_{T,D}( (x_n + y_n) ∘ λ_n, x + y ) = sup_{t ∈ [0,T]} d^0_{J_1}( (x_n + y_n)(λ_n(t)), (x + y)(t) ) < ε[1 + 2(C + T)].

Since ||λ_n − e||_T < ε, using definition (22) of d_{T,D}, we conclude that d_{T,D}(x_n + y_n, x + y) < ε[1 + 2(C + T)] for any n ≥ N_ε.

Remark 4.12. In the proof of Theorem 2.12 of [22], it was shown that, in a more general context than here, the function s ↦ E[Z^{(ε)}(1,s)] is continuous on [0,1]. In our case, E[Z^{(ε)}(1,s)] = c ϕ(s) ∫_{(ε,∞)} r ν_α(dr), where ϕ(s) = ∫_{S_D} z(s) Γ_1(dz) for all s ∈ [0,1]. The continuity of ϕ can be proved directly as follows. By the dominated convergence theorem, ϕ is a càdlàg function. To show that ϕ is left-continuous, note that for any s ∈ [0,1],

ϕ(s) − ϕ(s−) = ∫_{S_D} (z(s) − z(s−)) Γ_1(dz) = ∫_{{z ∈ S_D; z({s}) ≠ 0}} z({s}) Γ_1(dz),

where z({s}) = z(s) − z(s−) is the jump of z ∈ S_D at s. By Assumption B, the set in the last integral above has Γ_1-measure 0, and hence this integral is equal to 0.

The following result gives the convergence of the centered sums.

Theorem 4.13. Let (X_i)_{i≥1} be i.i.d. random elements in D such that X_1 ∈ RV({a_n}, ν, D̄_0). Let α be the index of X_1 and Γ_1 be the spectral measure of X_1. Suppose that α ∈ (1,2) and Γ_1 satisfies Assumption B. Let {S_n^{(ε)}; n ≥ 1} and Z^{(ε)} be given by (62), respectively (32). For any t ≥ 0, let S̄_n^{(ε)}(t) = S_n^{(ε)}(t) − E[S_n^{(ε)}(t)] and Z̄^{(ε)}(t) = Z^{(ε)}(t) − E[Z^{(ε)}(t)]. Then, for any ε > 0 and T > 0,

(S̄_n^{(ε)}(t))_{t ∈ [0,T]} →^d (Z̄^{(ε)}(t))_{t ∈ [0,T]}  in D([0,T]; D),

where D([0,T]; D) is equipped with the distance d_{T,D}.

Proof: Let X̄_n = S̄_n^{(ε)} and X̄ = Z̄^{(ε)}.
For any t ≥ 0 and s ∈ [0,1],

y_n(t,s) := E[S_n^{(ε)}(t,s)] = ([nt]/a_n) E[X_1(s) 1_{{||X_1|| > a_n ε}}] = ([nt]/n) f_n(s), with f_n(s) = (n/a_n) E[X_1(s) 1_{{||X_1|| > a_n ε}}],

and

y(t,s) := E[Z^{(ε)}(t,s)] = t c ∫_{(ε,∞) × S_D} r z(s) ν_α(dr) Γ_1(dz) = t f(s),

with f(s) = c (α/(α−1)) ε^{1−α} ϕ(s) and ϕ(s) = ∫_{S_D} z(s) Γ_1(dz). This shows that the functions (y_n)_{n≥1} and y are of the same form as in (64). By Remark 4.12, ϕ is continuous on [0,1]. By Theorem 4.8, X̄_n →^d X̄ in the space D([0,T]; D) equipped with d_{T,D}. Since this space is separable (by Theorem 2.1), by Skorokhod's representation theorem (Theorem 6.7 of [5]), there exist random elements (X̄′_n)_{n≥1} and X̄′ defined on a probability space (Ω′, F′, P′) such that X̄′_n =^d X̄_n for all n, X̄′ =^d X̄ and d_{T,D}(X̄′_n, X̄′) → 0 a.s. By Lemma 4.11, it follows that d_{T,D}(X̄′_n + y_n, X̄′ + y) → 0 a.s. This implies that d_{T,D}(X̄′_n + y_n, X̄′ + y) → 0 in probability (and in distribution). By the Corollary to Theorem 3.1 of [5] (and using again the fact that D([0,T]; D) equipped with d_{T,D} is a separable space), we infer that X̄′_n + y_n →^d X̄′ + y in D([0,T]; D) equipped with d_{T,D}. Since (y_n)_{n≥1} and y are deterministic, X̄′_n + y_n =^d X̄_n + y_n for any n, and X̄′ + y =^d X̄ + y. It follows that X̄_n + y_n →^d X̄ + y in D([0,T]; D) equipped with d_{T,D}.

Proof of Theorem 1.5.(b): This follows by Theorem 4.2 of [4], whose hypotheses are verified due to Theorem 3.15, Lemma 4.10 and Theorem 4.13.

5 Simulations

In this section, we simulate the sample paths of a D-valued α-stable Lévy motion using Theorem 1.5, by focusing on two examples of a regularly varying process X in D.

Example 5.1. The simplest example of a regularly varying process X = {X(s)}_{s ∈ [0,1]} in D is the α-stable Lévy motion, which can be simulated using the stable central limit theorem. We recall this result briefly below. Let ξ, (ξ_j)_{j≥1} be i.i.d. regularly varying random variables in R, i.e.

P(|ξ| > x) = x^{−α} L(x) and lim_{x→∞} P(ξ > x)/P(|ξ| > x) = p, (68)

for some α ∈ (0,2), p ∈ [0,1] and a slowly varying function L. Let (a_n)_{n≥1} be a sequence of real numbers with a_n ↑ ∞ such that n P(|ξ| > a_n) → 1 as n → ∞, i.e. a_n^α ∼ n L(a_n) as n → ∞. Condition (68) is equivalent to the vague convergence n P(ξ/a_n ∈ ·) →^v ν_{α,p} in R̄_0, where

ν_{α,p}(dz) = ( p α z^{−α−1} 1_{(0,∞)}(z) + q α (−z)^{−α−1} 1_{(−∞,0)}(z) ) dz (69)

with q = 1 − p.
In other words, for any x > 0,

lim_{n→∞} n P(ξ/a_n > x) = p x^{−α} and lim_{n→∞} n P(ξ/a_n < −x) = q x^{−α}.

In this case, we write ξ ∈ RV({a_n}, ν_{α,p}, R̄_0). In particular, if

lim_{x→∞} L(x) = C > 0, (70)

then a_n^α ∼ Cn. We assume that α ≠ 1. Let µ = 0 if α < 1 and µ = E(ξ) if α > 1. A classical result, which can be deduced for instance from Theorem 2.7 of [26], states that

(1/a_n) Σ_{j=1}^{[n·]} (ξ_j − µ) →^d X(·) in D, (71)

where X = {X(s)}_{s ∈ [0,1]} is an α-stable Lévy motion, with X(1) having a S_α(σ_α, β, 0)-distribution. Here σ_α^α = C_α^{−1} with C_α given by (30), and β = p − q. By Property of [23], lim_{x→∞} x^α P(X(1) > x) = p and lim_{x→∞} x^α P(X(1) < −x) = q. If L satisfies (70), this implies that X(1) ∈ RV({a_n}, C^{−1} ν_{α,p}, R̄_0), since

n P(X(1)/a_n > x) = (n a_n^{−α}) · ( (a_n x)^α P(X(1) > a_n x) ) · x^{−α} → C^{−1} p x^{−α} as n → ∞,

and similarly, n P(X(1)/a_n < −x) → C^{−1} q x^{−α}. By Lemma 2.1 of [14], it follows that X ∈ RV({a_n}, ν, D̄_0) for a boundedly finite measure ν on D̄_0. Note that the normalizing sequence {a_n}_n for the regular variation of X in D is the same as for ξ, if L satisfies (70). In the simulations, we take a_n = (Cn)^{1/α}, where C is given by (70). In view of (71), for any s ∈ [0,1],

X(s) ≈ (1/a_m) Σ_{j=1}^{[ms]} (ξ_j − µ), when m is large.

Next, we consider n i.i.d. copies of X. For this, let (ξ_{ij})_{i,j≥1} be i.i.d. copies of ξ. When m is large, we have the following approximations for any s ∈ [0,1]:

X_i(s) ≈ (1/a_m) Σ_{j=1}^{[ms]} (ξ_{ij} − µ), for all i = 1,...,n.

By Theorem 1.5, the following approximation gives a D-valued α-stable Lévy motion Z:

Z(t,s) ≈ (1/a_n) Σ_{i=1}^{[nt]} X_i(s) ≈ (1/(a_n a_m)) Σ_{i=1}^{[nt]} Σ_{j=1}^{[ms]} (ξ_{ij} − µ), for any t, s ∈ [0,1],

when n and m are large. (By Theorem B.2 below, this approximation yields in fact an α-stable Lévy sheet, which is an example of a D-valued α-stable Lévy motion, according to Theorem B.1 below.)

We consider 5 examples of regularly varying random variables ξ which satisfy (70): (i) ξ ∼ Pareto(α), i.e. ξ has density f(x) = α x^{−α−1} if x > 1; then L(x) = 1; (ii) ξ has a two-sided Pareto distribution, i.e. ξ has density given by f(x) = p α x^{−α−1} if x > 1 and f(x) = q α (−x)^{−α−1} if x < −1, for p ∈ (0,1) and q = 1 − p; then L(x) = 1; (iii) ξ ∼ Fréchet(α), i.e.
ξ has density f(x) = α x^{−α−1} e^{−x^{−α}} if x > 0; then L(x) = x^α (1 − e^{−x^{−α}}) → 1 as x → ∞; (iv) ξ ∼ Burr(a,b) with a, b > 0, i.e. ξ has density f(x) = a b x^{b−1} (1 + x^b)^{−a−1} for x > 0; in this case α = ab and L(x) = (1 + x^{−b})^{−a} → 1 as x → ∞; (v) ξ ∼ S_α(σ, β, µ); in this case L(x) → C := C_α σ^α as x → ∞. The following pictures are the 3-dimensional plots of (t_k, s_l, Z(t_k, s_l)) for k = 1,...,n and l = 1,...,m, with t_k = k/n and s_l = l/m, when n = 400 and m =
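The double-sum approximation of Z(t,s) above amounts to two cumulative sums of the array (ξ_{ij}). The sketch below is our own illustration (a smaller grid than n = 400; Pareto(α) increments with α < 1, so µ = 0 and a_n = n^{1/α} since C = 1; NumPy is assumed):

```python
import numpy as np

def levy_sheet(n, m, alpha, rng):
    """Approximate alpha-stable Levy sheet Z(k/n, l/m) for alpha < 1 (so mu = 0),
    from i.i.d. Pareto(alpha) increments: cumulative sums over i and j, scaled by a_n * a_m."""
    a_n, a_m = n ** (1.0 / alpha), m ** (1.0 / alpha)   # a_n = (C n)^{1/alpha} with C = 1
    xi = rng.pareto(alpha, size=(n, m)) + 1.0           # classical Pareto(alpha) on (1, infinity)
    return xi.cumsum(axis=0).cumsum(axis=1) / (a_n * a_m)

rng = np.random.default_rng(42)
Z = levy_sheet(n=100, m=80, alpha=0.5, rng=rng)
print(Z.shape)   # prints (100, 80): one value per grid point (t_k, s_l)
```

Plotting Z over the grid (t_k, s_l) reproduces surfaces of the kind shown in Figures 1-2; with positive Pareto increments the sheet is nondecreasing in both directions, and its relief is dominated by a few very large jumps.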

Figure 1: α-stable Lévy sheet based on the Pareto distribution ((a) α = 0.5; (b) α = 1.5).

Figure 2: α-stable Lévy sheet based on the Fréchet distribution ((a) α = 0.5; (b) α = 1.5).

Example 5.2. In this example, X = {X(s)}_{s ∈ [0,1]} is a regularly varying random element in D given by a series, as explained in Example 4.1 of [7]. Let Y, (Y_j)_{j≥1} be i.i.d. random elements in the space C = C([0,1]) of continuous functions on [0,1], such that

0 < C_{Y,α} := E( sup_{s ∈ [0,1]} |Y(s)|^α ) < ∞ (72)

for some α ∈ (0,2). Let (ε_j)_{j≥1} be i.i.d. random variables which take the values 1 and −1 with probability 1/2, and Γ_j = Σ_{i=1}^j E_i where (E_i)_{i≥1} are i.i.d. exponential random variables of mean 1. Assume that (Y_j)_{j≥1}, (ε_j)_{j≥1} and (E_j)_{j≥1} are independent. By Theorem of [23], for any s ∈ [0,1], the series

X(s) = Σ_{j≥1} ε_j Γ_j^{−1/α} Y_j(s) converges a.s. (73)

and has a S_α(σ_s, 0, 0)-distribution, with σ_s^α = C_α^{−1} E|Y(s)|^α and C_α given by (30). Moreover, the process X = {X(s)}_{s ∈ [0,1]} has sample paths in C, and is regularly varying in D.

More precisely, X ∈ RV({a_n}, ν, D̄_0) with the sequence (a_n)_n chosen such that a_n^α ∼ n C_{Y,α}, and limiting measure ν specified by (4.3) of [7]. In the simulation below, we truncate the series in (73) by considering only the first K terms (for K large), and we take Y = W, where W = {W(s)}_{s ∈ [0,1]} is a Brownian motion. (The fact that W satisfies condition (72) is proved in Appendix C.) We simulate K i.i.d. copies of W using Donsker's theorem. Let ξ, (ξ_{jk})_{j,k≥1} be i.i.d. random variables with mean 0 and variance 1. When m is large,

W_j(s) ≈ (1/√m) Σ_{k=1}^{[ms]} ξ_{jk} for any j = 1,...,K,

and

X(s) ≈ Σ_{j=1}^K ε_j Γ_j^{−1/α} W_j(s) ≈ (1/√m) Σ_{j=1}^K Σ_{k=1}^{[ms]} ε_j Γ_j^{−1/α} ξ_{jk} for any s ∈ [0,1].

Next, we consider n i.i.d. copies of X. Let (ε_{ij})_{i,j≥1} be i.i.d. copies of ε_1, (E_{ij})_{i,j≥1} i.i.d. copies of E_1 and (ξ_{ijk})_{i,j,k≥1} i.i.d. copies of ξ. Let Γ_{ij} = Σ_{k=1}^j E_{ik}. We take a_n = (n C_{W,α})^{1/α}, where C_{W,α} is computed by approximation. By Theorem 1.5,

Z(t,s) ≈ (1/a_n) Σ_{i=1}^{[nt]} X_i(s) ≈ (1/(a_n √m)) Σ_{i=1}^{[nt]} Σ_{j=1}^K Σ_{k=1}^{[ms]} ε_{ij} Γ_{ij}^{−1/α} ξ_{ijk}

is an approximation of a D-valued α-stable Lévy motion, when n, m and K are large. The following pictures are the 3-dimensional plots of (t_k, s_l, Z(t_k, s_l)) for k = 1,...,n and l = 1,...,m, with t_k = k/n and s_l = l/m, when n = 400 and m = 250.

Figure 3: D-valued α-stable Lévy motion based on a regularly varying process in D given by the series (73), in which (Y_j)_{j≥1} are i.i.d. Brownian motions ((a) α = 0.5; (b) α = 1.5).

A Some auxiliary results

In this section, we include some auxiliary results which are used in this article. The first result shows that the measure ν which appears in the definition of regular variation for random elements in D must be of product form. This result is probably well-known. We include its proof since we could not find it in the literature.

Lemma A.1. If c = ν((1,∞) × S_D) > 0, then the measure ν in Definition 1.4 must be of the product form (7), with probability measure Γ_1 given by (8).


More information

Integration on Measure Spaces

Integration on Measure Spaces Chapter 3 Integration on Measure Spaces In this chapter we introduce the general notion of a measure on a space X, define the class of measurable functions, and define the integral, first on a class of

More information

GARCH processes continuous counterparts (Part 2)

GARCH processes continuous counterparts (Part 2) GARCH processes continuous counterparts (Part 2) Alexander Lindner Centre of Mathematical Sciences Technical University of Munich D 85747 Garching Germany lindner@ma.tum.de http://www-m1.ma.tum.de/m4/pers/lindner/

More information

PROBABILITY THEORY II

PROBABILITY THEORY II Ruprecht-Karls-Universität Heidelberg Institut für Angewandte Mathematik Prof. Dr. Jan JOHANNES Outline of the lecture course PROBABILITY THEORY II Summer semester 2016 Preliminary version: April 21, 2016

More information

Brownian Motion and Conditional Probability

Brownian Motion and Conditional Probability Math 561: Theory of Probability (Spring 2018) Week 10 Brownian Motion and Conditional Probability 10.1 Standard Brownian Motion (SBM) Brownian motion is a stochastic process with both practical and theoretical

More information

Random Process Lecture 1. Fundamentals of Probability

Random Process Lecture 1. Fundamentals of Probability Random Process Lecture 1. Fundamentals of Probability Husheng Li Min Kao Department of Electrical Engineering and Computer Science University of Tennessee, Knoxville Spring, 2016 1/43 Outline 2/43 1 Syllabus

More information

Jump Processes. Richard F. Bass

Jump Processes. Richard F. Bass Jump Processes Richard F. Bass ii c Copyright 214 Richard F. Bass Contents 1 Poisson processes 1 1.1 Definitions............................. 1 1.2 Stopping times.......................... 3 1.3 Markov

More information

Weak convergence and Compactness.

Weak convergence and Compactness. Chapter 4 Weak convergence and Compactness. Let be a complete separable metic space and B its Borel σ field. We denote by M() the space of probability measures on (, B). A sequence µ n M() of probability

More information

The Kadec-Pe lczynski theorem in L p, 1 p < 2

The Kadec-Pe lczynski theorem in L p, 1 p < 2 The Kadec-Pe lczynski theorem in L p, 1 p < 2 I. Berkes and R. Tichy Abstract By a classical result of Kadec and Pe lczynski (1962), every normalized weakly null sequence in L p, p > 2 contains a subsequence

More information

GAUSSIAN PROCESSES; KOLMOGOROV-CHENTSOV THEOREM

GAUSSIAN PROCESSES; KOLMOGOROV-CHENTSOV THEOREM GAUSSIAN PROCESSES; KOLMOGOROV-CHENTSOV THEOREM STEVEN P. LALLEY 1. GAUSSIAN PROCESSES: DEFINITIONS AND EXAMPLES Definition 1.1. A standard (one-dimensional) Wiener process (also called Brownian motion)

More information

ON ADDITIVE TIME-CHANGES OF FELLER PROCESSES. 1. Introduction

ON ADDITIVE TIME-CHANGES OF FELLER PROCESSES. 1. Introduction ON ADDITIVE TIME-CHANGES OF FELLER PROCESSES ALEKSANDAR MIJATOVIĆ AND MARTIJN PISTORIUS Abstract. In this note we generalise the Phillips theorem [1] on the subordination of Feller processes by Lévy subordinators

More information

Ergodic Theorems. Samy Tindel. Purdue University. Probability Theory 2 - MA 539. Taken from Probability: Theory and examples by R.

Ergodic Theorems. Samy Tindel. Purdue University. Probability Theory 2 - MA 539. Taken from Probability: Theory and examples by R. Ergodic Theorems Samy Tindel Purdue University Probability Theory 2 - MA 539 Taken from Probability: Theory and examples by R. Durrett Samy T. Ergodic theorems Probability Theory 1 / 92 Outline 1 Definitions

More information

MATH 202B - Problem Set 5

MATH 202B - Problem Set 5 MATH 202B - Problem Set 5 Walid Krichene (23265217) March 6, 2013 (5.1) Show that there exists a continuous function F : [0, 1] R which is monotonic on no interval of positive length. proof We know there

More information

An invariance result for Hammersley s process with sources and sinks

An invariance result for Hammersley s process with sources and sinks An invariance result for Hammersley s process with sources and sinks Piet Groeneboom Delft University of Technology, Vrije Universiteit, Amsterdam, and University of Washington, Seattle March 31, 26 Abstract

More information

Gaussian Random Fields: Geometric Properties and Extremes

Gaussian Random Fields: Geometric Properties and Extremes Gaussian Random Fields: Geometric Properties and Extremes Yimin Xiao Michigan State University Outline Lecture 1: Gaussian random fields and their regularity Lecture 2: Hausdorff dimension results and

More information

Tail process and its role in limit theorems Bojan Basrak, University of Zagreb

Tail process and its role in limit theorems Bojan Basrak, University of Zagreb Tail process and its role in limit theorems Bojan Basrak, University of Zagreb The Fields Institute Toronto, May 2016 based on the joint work (in progress) with Philippe Soulier, Azra Tafro, Hrvoje Planinić

More information

The Skorokhod reflection problem for functions with discontinuities (contractive case)

The Skorokhod reflection problem for functions with discontinuities (contractive case) The Skorokhod reflection problem for functions with discontinuities (contractive case) TAKIS KONSTANTOPOULOS Univ. of Texas at Austin Revised March 1999 Abstract Basic properties of the Skorokhod reflection

More information

Lecture 9. d N(0, 1). Now we fix n and think of a SRW on [0,1]. We take the k th step at time k n. and our increments are ± 1

Lecture 9. d N(0, 1). Now we fix n and think of a SRW on [0,1]. We take the k th step at time k n. and our increments are ± 1 Random Walks and Brownian Motion Tel Aviv University Spring 011 Lecture date: May 0, 011 Lecture 9 Instructor: Ron Peled Scribe: Jonathan Hermon In today s lecture we present the Brownian motion (BM).

More information

Theorem 2.1 (Caratheodory). A (countably additive) probability measure on a field has an extension. n=1

Theorem 2.1 (Caratheodory). A (countably additive) probability measure on a field has an extension. n=1 Chapter 2 Probability measures 1. Existence Theorem 2.1 (Caratheodory). A (countably additive) probability measure on a field has an extension to the generated σ-field Proof of Theorem 2.1. Let F 0 be

More information

1. Stochastic Processes and filtrations

1. Stochastic Processes and filtrations 1. Stochastic Processes and 1. Stoch. pr., A stochastic process (X t ) t T is a collection of random variables on (Ω, F) with values in a measurable space (S, S), i.e., for all t, In our case X t : Ω S

More information

STAT 7032 Probability Spring Wlodek Bryc

STAT 7032 Probability Spring Wlodek Bryc STAT 7032 Probability Spring 2018 Wlodek Bryc Created: Friday, Jan 2, 2014 Revised for Spring 2018 Printed: January 9, 2018 File: Grad-Prob-2018.TEX Department of Mathematical Sciences, University of Cincinnati,

More information

3 Integration and Expectation

3 Integration and Expectation 3 Integration and Expectation 3.1 Construction of the Lebesgue Integral Let (, F, µ) be a measure space (not necessarily a probability space). Our objective will be to define the Lebesgue integral R fdµ

More information

Estimates for probabilities of independent events and infinite series

Estimates for probabilities of independent events and infinite series Estimates for probabilities of independent events and infinite series Jürgen Grahl and Shahar evo September 9, 06 arxiv:609.0894v [math.pr] 8 Sep 06 Abstract This paper deals with finite or infinite sequences

More information

PACKING-DIMENSION PROFILES AND FRACTIONAL BROWNIAN MOTION

PACKING-DIMENSION PROFILES AND FRACTIONAL BROWNIAN MOTION PACKING-DIMENSION PROFILES AND FRACTIONAL BROWNIAN MOTION DAVAR KHOSHNEVISAN AND YIMIN XIAO Abstract. In order to compute the packing dimension of orthogonal projections Falconer and Howroyd 997) introduced

More information

Packing-Dimension Profiles and Fractional Brownian Motion

Packing-Dimension Profiles and Fractional Brownian Motion Under consideration for publication in Math. Proc. Camb. Phil. Soc. 1 Packing-Dimension Profiles and Fractional Brownian Motion By DAVAR KHOSHNEVISAN Department of Mathematics, 155 S. 1400 E., JWB 233,

More information

Problem set 1, Real Analysis I, Spring, 2015.

Problem set 1, Real Analysis I, Spring, 2015. Problem set 1, Real Analysis I, Spring, 015. (1) Let f n : D R be a sequence of functions with domain D R n. Recall that f n f uniformly if and only if for all ɛ > 0, there is an N = N(ɛ) so that if n

More information

Analysis Comprehensive Exam Questions Fall 2008

Analysis Comprehensive Exam Questions Fall 2008 Analysis Comprehensive xam Questions Fall 28. (a) Let R be measurable with finite Lebesgue measure. Suppose that {f n } n N is a bounded sequence in L 2 () and there exists a function f such that f n (x)

More information

Some Background Material

Some Background Material Chapter 1 Some Background Material In the first chapter, we present a quick review of elementary - but important - material as a way of dipping our toes in the water. This chapter also introduces important

More information

The main results about probability measures are the following two facts:

The main results about probability measures are the following two facts: Chapter 2 Probability measures The main results about probability measures are the following two facts: Theorem 2.1 (extension). If P is a (continuous) probability measure on a field F 0 then it has a

More information

Regular Variation and Extreme Events for Stochastic Processes

Regular Variation and Extreme Events for Stochastic Processes 1 Regular Variation and Extreme Events for Stochastic Processes FILIP LINDSKOG Royal Institute of Technology, Stockholm 2005 based on joint work with Henrik Hult www.math.kth.se/ lindskog 2 Extremes for

More information

Notes 1 : Measure-theoretic foundations I

Notes 1 : Measure-theoretic foundations I Notes 1 : Measure-theoretic foundations I Math 733-734: Theory of Probability Lecturer: Sebastien Roch References: [Wil91, Section 1.0-1.8, 2.1-2.3, 3.1-3.11], [Fel68, Sections 7.2, 8.1, 9.6], [Dur10,

More information

X n D X lim n F n (x) = F (x) for all x C F. lim n F n(u) = F (u) for all u C F. (2)

X n D X lim n F n (x) = F (x) for all x C F. lim n F n(u) = F (u) for all u C F. (2) 14:17 11/16/2 TOPIC. Convergence in distribution and related notions. This section studies the notion of the so-called convergence in distribution of real random variables. This is the kind of convergence

More information

2 (Bonus). Let A X consist of points (x, y) such that either x or y is a rational number. Is A measurable? What is its Lebesgue measure?

2 (Bonus). Let A X consist of points (x, y) such that either x or y is a rational number. Is A measurable? What is its Lebesgue measure? MA 645-4A (Real Analysis), Dr. Chernov Homework assignment 1 (Due 9/5). Prove that every countable set A is measurable and µ(a) = 0. 2 (Bonus). Let A consist of points (x, y) such that either x or y is

More information

A Concise Course on Stochastic Partial Differential Equations

A Concise Course on Stochastic Partial Differential Equations A Concise Course on Stochastic Partial Differential Equations Michael Röckner Reference: C. Prevot, M. Röckner: Springer LN in Math. 1905, Berlin (2007) And see the references therein for the original

More information

Metric Spaces and Topology

Metric Spaces and Topology Chapter 2 Metric Spaces and Topology From an engineering perspective, the most important way to construct a topology on a set is to define the topology in terms of a metric on the set. This approach underlies

More information

An introduction to Mathematical Theory of Control

An introduction to Mathematical Theory of Control An introduction to Mathematical Theory of Control Vasile Staicu University of Aveiro UNICA, May 2018 Vasile Staicu (University of Aveiro) An introduction to Mathematical Theory of Control UNICA, May 2018

More information

4th Preparation Sheet - Solutions

4th Preparation Sheet - Solutions Prof. Dr. Rainer Dahlhaus Probability Theory Summer term 017 4th Preparation Sheet - Solutions Remark: Throughout the exercise sheet we use the two equivalent definitions of separability of a metric space

More information

ASYMPTOTICALLY NONEXPANSIVE MAPPINGS IN MODULAR FUNCTION SPACES ABSTRACT

ASYMPTOTICALLY NONEXPANSIVE MAPPINGS IN MODULAR FUNCTION SPACES ABSTRACT ASYMPTOTICALLY NONEXPANSIVE MAPPINGS IN MODULAR FUNCTION SPACES T. DOMINGUEZ-BENAVIDES, M.A. KHAMSI AND S. SAMADI ABSTRACT In this paper, we prove that if ρ is a convex, σ-finite modular function satisfying

More information

Set-Indexed Processes with Independent Increments

Set-Indexed Processes with Independent Increments Set-Indexed Processes with Independent Increments R.M. Balan May 13, 2002 Abstract Set-indexed process with independent increments are described by convolution systems ; the construction of such a process

More information

Branching Processes II: Convergence of critical branching to Feller s CSB

Branching Processes II: Convergence of critical branching to Feller s CSB Chapter 4 Branching Processes II: Convergence of critical branching to Feller s CSB Figure 4.1: Feller 4.1 Birth and Death Processes 4.1.1 Linear birth and death processes Branching processes can be studied

More information

Part V. 17 Introduction: What are measures and why measurable sets. Lebesgue Integration Theory

Part V. 17 Introduction: What are measures and why measurable sets. Lebesgue Integration Theory Part V 7 Introduction: What are measures and why measurable sets Lebesgue Integration Theory Definition 7. (Preliminary). A measure on a set is a function :2 [ ] such that. () = 2. If { } = is a finite

More information

DISTRIBUTION OF THE SUPREMUM LOCATION OF STATIONARY PROCESSES. 1. Introduction

DISTRIBUTION OF THE SUPREMUM LOCATION OF STATIONARY PROCESSES. 1. Introduction DISTRIBUTION OF THE SUPREMUM LOCATION OF STATIONARY PROCESSES GENNADY SAMORODNITSKY AND YI SHEN Abstract. The location of the unique supremum of a stationary process on an interval does not need to be

More information

Combinatorics in Banach space theory Lecture 12

Combinatorics in Banach space theory Lecture 12 Combinatorics in Banach space theory Lecture The next lemma considerably strengthens the assertion of Lemma.6(b). Lemma.9. For every Banach space X and any n N, either all the numbers n b n (X), c n (X)

More information

at time t, in dimension d. The index i varies in a countable set I. We call configuration the family, denoted generically by Φ: U (x i (t) x j (t))

at time t, in dimension d. The index i varies in a countable set I. We call configuration the family, denoted generically by Φ: U (x i (t) x j (t)) Notations In this chapter we investigate infinite systems of interacting particles subject to Newtonian dynamics Each particle is characterized by its position an velocity x i t, v i t R d R d at time

More information

16 1 Basic Facts from Functional Analysis and Banach Lattices

16 1 Basic Facts from Functional Analysis and Banach Lattices 16 1 Basic Facts from Functional Analysis and Banach Lattices 1.2.3 Banach Steinhaus Theorem Another fundamental theorem of functional analysis is the Banach Steinhaus theorem, or the Uniform Boundedness

More information

LOCATION OF THE PATH SUPREMUM FOR SELF-SIMILAR PROCESSES WITH STATIONARY INCREMENTS. Yi Shen

LOCATION OF THE PATH SUPREMUM FOR SELF-SIMILAR PROCESSES WITH STATIONARY INCREMENTS. Yi Shen LOCATION OF THE PATH SUPREMUM FOR SELF-SIMILAR PROCESSES WITH STATIONARY INCREMENTS Yi Shen Department of Statistics and Actuarial Science, University of Waterloo. Waterloo, ON N2L 3G1, Canada. Abstract.

More information

4.5 The critical BGW tree

4.5 The critical BGW tree 4.5. THE CRITICAL BGW TREE 61 4.5 The critical BGW tree 4.5.1 The rooted BGW tree as a metric space We begin by recalling that a BGW tree T T with root is a graph in which the vertices are a subset of

More information

Topology, Math 581, Fall 2017 last updated: November 24, Topology 1, Math 581, Fall 2017: Notes and homework Krzysztof Chris Ciesielski

Topology, Math 581, Fall 2017 last updated: November 24, Topology 1, Math 581, Fall 2017: Notes and homework Krzysztof Chris Ciesielski Topology, Math 581, Fall 2017 last updated: November 24, 2017 1 Topology 1, Math 581, Fall 2017: Notes and homework Krzysztof Chris Ciesielski Class of August 17: Course and syllabus overview. Topology

More information

Convergence at first and second order of some approximations of stochastic integrals

Convergence at first and second order of some approximations of stochastic integrals Convergence at first and second order of some approximations of stochastic integrals Bérard Bergery Blandine, Vallois Pierre IECN, Nancy-Université, CNRS, INRIA, Boulevard des Aiguillettes B.P. 239 F-5456

More information

Recall that if X is a compact metric space, C(X), the space of continuous (real-valued) functions on X, is a Banach space with the norm

Recall that if X is a compact metric space, C(X), the space of continuous (real-valued) functions on X, is a Banach space with the norm Chapter 13 Radon Measures Recall that if X is a compact metric space, C(X), the space of continuous (real-valued) functions on X, is a Banach space with the norm (13.1) f = sup x X f(x). We want to identify

More information

Feller Processes and Semigroups

Feller Processes and Semigroups Stat25B: Probability Theory (Spring 23) Lecture: 27 Feller Processes and Semigroups Lecturer: Rui Dong Scribe: Rui Dong ruidong@stat.berkeley.edu For convenience, we can have a look at the list of materials

More information

ANALYSIS QUALIFYING EXAM FALL 2017: SOLUTIONS. 1 cos(nx) lim. n 2 x 2. g n (x) = 1 cos(nx) n 2 x 2. x 2.

ANALYSIS QUALIFYING EXAM FALL 2017: SOLUTIONS. 1 cos(nx) lim. n 2 x 2. g n (x) = 1 cos(nx) n 2 x 2. x 2. ANALYSIS QUALIFYING EXAM FALL 27: SOLUTIONS Problem. Determine, with justification, the it cos(nx) n 2 x 2 dx. Solution. For an integer n >, define g n : (, ) R by Also define g : (, ) R by g(x) = g n

More information

5 Measure theory II. (or. lim. Prove the proposition. 5. For fixed F A and φ M define the restriction of φ on F by writing.

5 Measure theory II. (or. lim. Prove the proposition. 5. For fixed F A and φ M define the restriction of φ on F by writing. 5 Measure theory II 1. Charges (signed measures). Let (Ω, A) be a σ -algebra. A map φ: A R is called a charge, (or signed measure or σ -additive set function) if φ = φ(a j ) (5.1) A j for any disjoint

More information

µ X (A) = P ( X 1 (A) )

µ X (A) = P ( X 1 (A) ) 1 STOCHASTIC PROCESSES This appendix provides a very basic introduction to the language of probability theory and stochastic processes. We assume the reader is familiar with the general measure and integration

More information

3 (Due ). Let A X consist of points (x, y) such that either x or y is a rational number. Is A measurable? What is its Lebesgue measure?

3 (Due ). Let A X consist of points (x, y) such that either x or y is a rational number. Is A measurable? What is its Lebesgue measure? MA 645-4A (Real Analysis), Dr. Chernov Homework assignment 1 (Due ). Show that the open disk x 2 + y 2 < 1 is a countable union of planar elementary sets. Show that the closed disk x 2 + y 2 1 is a countable

More information

Spectral representations and ergodic theorems for stationary stochastic processes

Spectral representations and ergodic theorems for stationary stochastic processes AMS 263 Stochastic Processes (Fall 2005) Instructor: Athanasios Kottas Spectral representations and ergodic theorems for stationary stochastic processes Stationary stochastic processes Theory and methods

More information

Introduction and Preliminaries

Introduction and Preliminaries Chapter 1 Introduction and Preliminaries This chapter serves two purposes. The first purpose is to prepare the readers for the more systematic development in later chapters of methods of real analysis

More information

A SHORT INTRODUCTION TO BANACH LATTICES AND

A SHORT INTRODUCTION TO BANACH LATTICES AND CHAPTER A SHORT INTRODUCTION TO BANACH LATTICES AND POSITIVE OPERATORS In tis capter we give a brief introduction to Banac lattices and positive operators. Most results of tis capter can be found, e.g.,

More information

9 Brownian Motion: Construction

9 Brownian Motion: Construction 9 Brownian Motion: Construction 9.1 Definition and Heuristics The central limit theorem states that the standard Gaussian distribution arises as the weak limit of the rescaled partial sums S n / p n of

More information

An invariance principle for sums and record times of regularly varying stationary sequences

An invariance principle for sums and record times of regularly varying stationary sequences An invariance principle for sums and record times of regularly varying stationary sequences Bojan Basrak Hrvoje Planinić Philippe Soulier arxiv:1609.00687v2 [math.pr] 4 Dec 2017 December 5, 2017 Abstract

More information

Stochastic flows associated to coalescent processes

Stochastic flows associated to coalescent processes Stochastic flows associated to coalescent processes Jean Bertoin (1) and Jean-François Le Gall (2) (1) Laboratoire de Probabilités et Modèles Aléatoires and Institut universitaire de France, Université

More information

MATH5011 Real Analysis I. Exercise 1 Suggested Solution

MATH5011 Real Analysis I. Exercise 1 Suggested Solution MATH5011 Real Analysis I Exercise 1 Suggested Solution Notations in the notes are used. (1) Show that every open set in R can be written as a countable union of mutually disjoint open intervals. Hint:

More information

The Skorokhod problem in a time-dependent interval

The Skorokhod problem in a time-dependent interval The Skorokhod problem in a time-dependent interval Krzysztof Burdzy, Weining Kang and Kavita Ramanan University of Washington and Carnegie Mellon University Abstract: We consider the Skorokhod problem

More information

) ) = γ. and P ( X. B(a, b) = Γ(a)Γ(b) Γ(a + b) ; (x + y, ) I J}. Then, (rx) a 1 (ry) b 1 e (x+y)r r 2 dxdy Γ(a)Γ(b) D

) ) = γ. and P ( X. B(a, b) = Γ(a)Γ(b) Γ(a + b) ; (x + y, ) I J}. Then, (rx) a 1 (ry) b 1 e (x+y)r r 2 dxdy Γ(a)Γ(b) D 3 Independent Random Variables II: Examples 3.1 Some functions of independent r.v. s. Let X 1, X 2,... be independent r.v. s with the known distributions. Then, one can compute the distribution of a r.v.

More information

Functional Analysis. Martin Brokate. 1 Normed Spaces 2. 2 Hilbert Spaces The Principle of Uniform Boundedness 32

Functional Analysis. Martin Brokate. 1 Normed Spaces 2. 2 Hilbert Spaces The Principle of Uniform Boundedness 32 Functional Analysis Martin Brokate Contents 1 Normed Spaces 2 2 Hilbert Spaces 2 3 The Principle of Uniform Boundedness 32 4 Extension, Reflexivity, Separation 37 5 Compact subsets of C and L p 46 6 Weak

More information

Introduction to Empirical Processes and Semiparametric Inference Lecture 08: Stochastic Convergence

Introduction to Empirical Processes and Semiparametric Inference Lecture 08: Stochastic Convergence Introduction to Empirical Processes and Semiparametric Inference Lecture 08: Stochastic Convergence Michael R. Kosorok, Ph.D. Professor and Chair of Biostatistics Professor of Statistics and Operations

More information

Weak convergence in Probability Theory A summer excursion! Day 3

Weak convergence in Probability Theory A summer excursion! Day 3 BCAM June 2013 1 Weak convergence in Probability Theory A summer excursion! Day 3 Armand M. Makowski ECE & ISR/HyNet University of Maryland at College Park armand@isr.umd.edu BCAM June 2013 2 Day 1: Basic

More information

4 Sums of Independent Random Variables

4 Sums of Independent Random Variables 4 Sums of Independent Random Variables Standing Assumptions: Assume throughout this section that (,F,P) is a fixed probability space and that X 1, X 2, X 3,... are independent real-valued random variables

More information

Product measure and Fubini s theorem

Product measure and Fubini s theorem Chapter 7 Product measure and Fubini s theorem This is based on [Billingsley, Section 18]. 1. Product spaces Suppose (Ω 1, F 1 ) and (Ω 2, F 2 ) are two probability spaces. In a product space Ω = Ω 1 Ω

More information

Random Bernstein-Markov factors

Random Bernstein-Markov factors Random Bernstein-Markov factors Igor Pritsker and Koushik Ramachandran October 20, 208 Abstract For a polynomial P n of degree n, Bernstein s inequality states that P n n P n for all L p norms on the unit

More information