CENTRE FOR STOCHASTIC GEOMETRY AND ADVANCED BIOIMAGING
RESEARCH REPORT

Anders Rønn-Nielsen and Eva B. Vedel Jensen

Central limit theorem for mean and variogram estimators in Lévy based models

No. 06, June 2018

Central limit theorem for mean and variogram estimators in Lévy based models

Anders Rønn-Nielsen¹ and Eva B. Vedel Jensen²
¹ Department of Finance, Copenhagen Business School
² Department of Mathematics, Aarhus University

Abstract

We consider an infinitely divisible random field in $\mathbb{R}^d$, given as an integral of a kernel function with respect to a Lévy basis. Under mild regularity conditions we derive central limit theorems for the moment estimators of the mean and the variogram of the field.

Keywords: central limit theorem; infinite divisibility; Lévy-based modelling

1 Introduction

In the present paper, we derive central limit theorems (CLTs) for mean and variogram estimators for a field $(X_i)_{i\in\mathbb{Z}^d}$, defined by
\[
X_i = \int_{\mathbb{R}^d} f(s-i)\, M(\mathrm{d}s), \qquad (1.1)
\]
where $M$ is an infinitely divisible, independently scattered random measure on $\mathbb{R}^d$ and $f$ is some kernel function. Lévy-based models as defined in (1.1) provide a flexible modelling framework that recently has been used in a range of modelling contexts, including modelling of turbulent flows, growth processes, Cox point processes and brain imaging data [3, 6, 8, 9]. Due to the tractability of Lévy-based models, it has been possible to derive tail asymptotics for the supremum of such a field as well as asymptotics of excursion sets of the field. The case where $M$ is a convolution equivalent measure is considered in [12, 14]. Results for random measures with regularly varying tails have been derived in [11] and refined in [1, 2].

In [4], CLTs are proved for a stationary random field $(X_i)_{i\in\mathbb{Z}^d}$, discretely defined by
\[
X_i = u(M_{s+i} : s \in \mathbb{Z}^d), \qquad (1.2)
\]
where $(M_i)_{i\in\mathbb{Z}^d}$ are independent and identically distributed (i.i.d.) random variables and $u$ is a measurable function.
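As a concrete point of reference for the discretised formulation (1.2), the following minimal Python sketch simulates such a moving average on a one-dimensional grid. The Gaussian kernel and the Gamma-distributed (hence infinitely divisible) cell masses are illustrative choices made here and are not taken from the paper.

```python
import numpy as np

# Illustrative sketch (not from the paper): a discretised version of (1.1) in d = 1.
# The Levy basis is approximated by i.i.d. infinitely divisible cell masses M_s
# (here Gamma-distributed), and X_i = sum_s f(s - i) * M_s is the moving average
# in (1.2) with u linear.  The kernel f and the Gamma parameters are arbitrary choices.

rng = np.random.default_rng(0)

def f(s):
    """Non-negative, integrable kernel (a Gaussian bump; an arbitrary example)."""
    return np.exp(-0.5 * s**2)

def simulate_field(n_sites, pad=30, shape=2.0, scale=0.5):
    """Simulate X_0, ..., X_{n_sites-1} on the integer grid.

    The grid of cell masses is padded on both sides so that boundary sites
    still receive (essentially) the full kernel mass.
    """
    grid = np.arange(-pad, n_sites + pad)              # cell centres s
    masses = rng.gamma(shape, scale, size=grid.size)   # i.i.d. infinitely divisible M_s
    sites = np.arange(n_sites)
    # X_i = sum_s f(s - i) M_s  (a finite-sum analogue of the integral in (1.1))
    weights = f(grid[None, :] - sites[:, None])
    return weights @ masses

X = simulate_field(200)
print("empirical mean:", X.mean())
```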

In contrast, our set-up involves a random measure defined on $\mathbb{R}^d$. However, if we discretize the integral in (1.1), then our model is of the form (1.2) with $u$ a linear function and the random variables $M_i$ having an infinitely divisible distribution. Note that if this type of model is going to hold for finer and finer resolution, then the common distribution of the random variables $M_i$ has to be infinitely divisible.

For a random field defined by (1.1), we prove in the present paper CLTs for
\[
S_\Gamma = \sum_{i\in\Gamma} X_i \quad\text{and}\quad T_\Gamma(h) = \sum_{i\in\Gamma} (X_i - X_{i+h})^2,
\]
where $h \in \mathbb{Z}^d$ and $\Gamma$ is a finite subset of $\mathbb{Z}^d$ with number of elements increasing to infinity. We show under mild regularity conditions that both $S_\Gamma$ and $T_\Gamma(h)$ are asymptotically normally distributed and give explicit expressions for the asymptotic variances. It turns out that the asymptotic variance of $T_\Gamma(h)$ is equal to an approximate variance, earlier derived in [13]. In the latter paper, the approximate variance was used in the assessment of the precision of a section spacing estimator in electron microscopy. More details are given in Section 5 below.

If we discretize the integral in (1.1), both $S_\Gamma$ and $T_\Gamma(h)$ take the form
\[
\sum_{i\in\Gamma} v\Bigl( \sum_{s\in\mathbb{Z}^d} a_s M_{s+i} \Bigr). \qquad (1.3)
\]
Here, $v(x) = x$ and $a_s = f(s)$ for $S_\Gamma$, while $v(x) = x^2$ and $a_s = f(s) - f(s-h)$ for $T_\Gamma(h)$. In [4], CLTs for random variables of the form (1.3) are discussed. In particular, a CLT for $S_\Gamma$ is derived which is a discrete analogue of the CLT we obtain. For second-order properties, [4] considers $\sum_{i\in\Gamma} X_i X_{i+h}$, but does not deal explicitly with the case where the mean of the field is unknown, as is done in our paper.

The present paper is organised as follows. In Section 2 we define the random field (1.1) and state the main result. In Section 3 we provide the proofs. An important tool in the proofs will be a CLT (Theorem 3.1) for $m_n$-dependent fields proved in [5]. Section 4 discusses the case where the kernel function is isotropic, while the results are used for estimation of sample spacing in Section 5. Cumulant formulae and a proof of a corollary to Ottaviani's inequality are deferred to two Appendices.

2 Preliminaries and main result

Consider an independently scattered random measure $M$ on $\mathbb{R}^d$. Then for a sequence of disjoint sets $(A_n)_{n\in\mathbb{N}}$ in $\mathcal{B}(\mathbb{R}^d)$ the random variables $(M(A_n))_{n\in\mathbb{N}}$ are independent and satisfy $M(\bigcup_n A_n) = \sum_n M(A_n)$. Assume furthermore that $M(A)$ is infinitely divisible for all $A \in \mathcal{B}(\mathbb{R}^d)$. Then $M$ is called a Lévy basis, see [3] and references therein.

For a random variable $X$ let $\kappa_X(\lambda)$ denote its cumulant function $\log \mathbb{E}(e^{i\lambda X})$. We shall assume that the Lévy basis is stationary and isotropic such that for $A \in \mathcal{B}(\mathbb{R}^d)$ the variable $M(A)$ has a Lévy–Khintchine representation given by
\[
\kappa_{M(A)}(\lambda) = i\lambda a\, m_d(A) - \tfrac{1}{2}\lambda^2 \theta\, m_d(A) + \int_{A\times\mathbb{R}} \bigl( e^{i\lambda u} - 1 - i\lambda u\, 1_{[-1,1]}(u) \bigr)\, F(\mathrm{d}s, \mathrm{d}u), \qquad (2.1)
\]
where $m_d$ is the Lebesgue measure on $(\mathbb{R}^d, \mathcal{B}(\mathbb{R}^d))$, $a \in \mathbb{R}$, $\theta \geq 0$ and $F$ is a measure on $\mathcal{B}(\mathbb{R}^d \times \mathbb{R})$ of the form
\[
F(A \times B) = m_d(A)\, \rho(B). \qquad (2.2)
\]
In the following, we let $M'$ be a so-called spot variable, i.e. $M'$ is a random variable with the distribution of $M(A)$ for $A \in \mathcal{B}(\mathbb{R}^d)$ with $m_d(A) = 1$. We will assume that the Lévy measure $\rho$ satisfies
\[
\int_{[-1,1]} z^2\, \rho(\mathrm{d}z) < \infty \quad\text{and}\quad \int_{[-1,1]^c} |z|\, \rho(\mathrm{d}z) < \infty.
\]
The first assumption is needed for $\rho$ to be a Lévy measure. The second assumption is equivalent to $\mathbb{E}|M'| < \infty$ (see [15, Theorem 25.3]).

Now assume that $f : \mathbb{R}^d \to [0, \infty)$ is a bounded kernel function satisfying
\[
\int_{\mathbb{R}^d} f(s)\, \mathrm{d}s < \infty. \qquad (2.3)
\]
Let $\mathbb{Z}^d$ be the grid of integer points in $\mathbb{R}^d$. We consider the family of random variables $(X_i)_{i\in\mathbb{Z}^d}$ defined by
\[
X_i = \int_{\mathbb{R}^d} f(s - i)\, M(\mathrm{d}s). \qquad (2.4)
\]
See [10, Theorem 2.7] for existence of the integrals, where their conditions (i)–(iii) can be easily verified under the given assumptions on $M$ and $f$. The field is stationary, and if furthermore $f$ has the form $f(s) = g(\|s\|)$, where $\|\cdot\|$ is the Euclidean norm, it will be isotropic. This special case is studied in Section 4.

We will be interested in estimating the mean and the variogram of the field. We have
\[
\mu = \mathbb{E}X_0 = \mathbb{E}(M') \int_{\mathbb{R}^d} f(s)\, \mathrm{d}s
\]
and for $h \in \mathbb{Z}^d$
\[
\gamma(h) = \mathbb{E}(X_0 - X_h)^2 = \mathrm{Var}(M') \int_{\mathbb{R}^d} \bigl( f(s) - f(s-h) \bigr)^2\, \mathrm{d}s,
\]
where identities from Appendix A have been applied. Note that due to stationarity, $\mu = \mathbb{E}X_i$ and $\gamma(h) = \mathbb{E}(X_i - X_{i+h})^2$ for all $i \in \mathbb{Z}^d$.
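As an illustration of the two moment formulas above, the following sketch evaluates $\mu$ and $\gamma(h)$ numerically for one concrete (and purely illustrative) choice of kernel and spot variable in $d = 1$; the Gaussian kernel and the Gamma spot variable are assumptions of this sketch, not of the paper.

```python
import numpy as np

# Sketch: evaluate the mean and variogram formulas for a specific choice of
# kernel and spot variable (d = 1, Gaussian kernel, Gamma(2, 0.5) spot variable).
# These choices are illustrative assumptions, not taken from the paper.

def f(s):
    return np.exp(-0.5 * s**2)                 # bounded, integrable kernel

E_spot, Var_spot = 2.0 * 0.5, 2.0 * 0.5**2     # mean and variance of a Gamma(2, 0.5) spot variable

s = np.linspace(-20.0, 20.0, 40001)            # fine grid for the quadrature
ds = s[1] - s[0]

mu = E_spot * f(s).sum() * ds                  # mu = E(M') * integral of f

def gamma(h):
    # gamma(h) = Var(M') * integral of (f(s) - f(s - h))^2 ds
    return Var_spot * ((f(s) - f(s - h))**2).sum() * ds

print("mu       =", mu)
print("gamma(1) =", gamma(1.0))
print("gamma(5) =", gamma(5.0))   # approaches 2 * Var(M') * integral of f^2 for large h
```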

For $\Gamma$ a finite subset of $\mathbb{Z}^d$, we define
\[
S_\Gamma = \sum_{i\in\Gamma} X_i, \qquad (2.5)
\]
and for $h \in \mathbb{Z}^d$
\[
T_\Gamma(h) = \sum_{i\in\Gamma} (X_i - X_{i+h})^2. \qquad (2.6)
\]
Unbiased estimators for $\mu$ and $\gamma(h)$, respectively, are obtained as
\[
\hat\mu = \frac{1}{|\Gamma|} S_\Gamma, \qquad \hat\gamma(h) = \frac{1}{|\Gamma|} T_\Gamma(h),
\]
where $|\Gamma|$ denotes the number of elements in $\Gamma$. The main result gives asymptotic normality of these estimators for $|\Gamma|$ going to infinity.

Theorem 2.1. Let $(\Gamma_n)_{n\geq 1}$ be a sequence of finite subsets of $\mathbb{Z}^d$ and assume that $|\Gamma_n|$ goes to infinity and $|\partial\Gamma_n|/|\Gamma_n|$ goes to 0. Let
\[
\sigma_S^2 = \mathrm{Var}(M') \sum_{i\in\mathbb{Z}^d} \int_{\mathbb{R}^d} f(s) f(s-i)\, \mathrm{d}s.
\]
If $\sigma_S^2 < \infty$, then
\[
\frac{S_{\Gamma_n} - |\Gamma_n|\mu}{\sqrt{|\Gamma_n|}} \xrightarrow{\;\mathcal{D}\;} N(0, \sigma_S^2). \qquad (2.7)
\]
If furthermore $\mathbb{E}(M')^4 < \infty$, then
\[
\frac{T_{\Gamma_n}(h) - |\Gamma_n|\gamma(h)}{\sqrt{|\Gamma_n|}} \xrightarrow{\;\mathcal{D}\;} N(0, \sigma_T^2), \qquad (2.8)
\]
where
\[
\sigma_T^2 = \sum_{j\in\mathbb{Z}^d} \Bigl[ \kappa_4(M') \int_{\mathbb{R}^d} g(s)^2 g(s-j)^2\, \mathrm{d}s + 2\, \mathrm{Var}(M')^2 \Bigl( \int_{\mathbb{R}^d} g(s) g(s-j)\, \mathrm{d}s \Bigr)^2 \Bigr] < \infty,
\]
$\kappa_4(M')$ is the fourth cumulant of $M'$ and $g(s) = f(s) - f(s-h)$.

It should be noted that the assumptions of the theorem, $|\Gamma_n| \to \infty$ and $|\partial\Gamma_n|/|\Gamma_n| \to 0$, are equivalent to the (less intuitive, but more computationally convenient) property $|\Gamma_n| \to \infty$ and $|(\Gamma_n + j) \cap \Gamma_n|/|\Gamma_n| \to 1$ for all $j \in \mathbb{Z}^d$, where $\Gamma + j$ denotes translation of $\Gamma$ in direction $j \in \mathbb{Z}^d$, i.e. $\Gamma + j = \{i + j : i \in \Gamma\}$.
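The following sketch illustrates how the estimators $\hat\mu$ and $\hat\gamma(h)$ and the normalisation in (2.7) might be used in practice, on a simulated discretised version of the field ($d = 1$, Gaussian kernel, Gamma cell masses, all illustrative assumptions of the sketch). The asymptotic variance $\sigma_S^2$ is replaced by its discrete analogue, with sums in place of integrals.

```python
import numpy as np

# Sketch: moment estimators and a CLT-based confidence interval for a simulated
# discretised field.  sigma_S^2 is replaced by its discrete analogue (sums instead
# of integrals), which is the long-run variance of the discretised model; this is
# an illustration of Theorem 2.1, not code from the paper.

rng = np.random.default_rng(1)

def f(s):
    return np.exp(-0.5 * s**2)

def simulate_field(n_sites, pad=30, shape=2.0, scale=0.5):
    grid = np.arange(-pad, n_sites + pad)
    masses = rng.gamma(shape, scale, size=grid.size)
    weights = f(grid[None, :] - np.arange(n_sites)[:, None])
    return weights @ masses

n, h = 2000, 3
X = simulate_field(n)

S = X.sum()                              # S_Gamma
T = ((X[:-h] - X[h:])**2).sum()          # T_Gamma(h) over the fully observed pairs
mu_hat = S / n
gamma_hat = T / (n - h)

# Discrete analogue of sigma_S^2 = Var(M') * sum_j sum_s f(s) f(s - j)
Var_spot = 2.0 * 0.5**2
s_grid = np.arange(-30, 31)
sigma_S2 = Var_spot * sum(np.sum(f(s_grid) * f(s_grid - j)) for j in range(-60, 61))

half_width = 1.96 * np.sqrt(sigma_S2 / n)   # (2.7): mu_hat approx N(mu, sigma_S^2 / |Gamma|)
print(f"mu_hat = {mu_hat:.3f} +/- {half_width:.3f}")
print(f"gamma_hat({h}) = {gamma_hat:.3f}")
```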

3 Proofs

An important tool in the proof of our main result, Theorem 2.1, is a CLT for $m_n$-dependent random fields that may be found in [5, Theorem 2]. This theorem is also used in the proofs of the CLTs derived in [4]. However, our proof of the CLT for $S_\Gamma$ is simpler and takes advantage of the linearity of the Lévy based model (1.1), while our proof of the CLT for $T_\Gamma(h)$ follows a different route than the one adopted in [4] for the second order property $\sum_{i\in\Gamma} X_i X_{i+h}$.

Recall that a random field $(Z_i)_{i\in\mathbb{Z}^d}$ is said to be $m_n$-dependent, if for any $A, B \subseteq \mathbb{Z}^d$ with $|A|, |B| < \infty$ and $\inf\{\|a - b\| : a \in A, b \in B\} > m_n$, the families $(Z_a)_{a\in A}$ and $(Z_b)_{b\in B}$ are independent. The CLT for $m_n$-dependent fields is here stated as

Theorem 3.1. Let $(\Gamma_n)_{n\geq 1}$ be a sequence of finite subsets of $\mathbb{Z}^d$ with $|\Gamma_n| \to \infty$ as $n \to \infty$ and let $(m_n)_{n\geq 1}$ be a sequence of positive integers. For each $n \geq 1$, let $\{U_n(j),\, j \in \mathbb{Z}^d\}$ be an $m_n$-dependent random field with $\mathbb{E}U_n(j) = 0$ for all $j \in \mathbb{Z}^d$. Assume that $\mathbb{E}\bigl( \sum_{j\in\Gamma_n} U_n(j) \bigr)^2 \to \sigma^2$ as $n \to \infty$ with $\sigma^2 < \infty$. Then $\sum_{j\in\Gamma_n} U_n(j)$ converges in distribution to a Gaussian random variable with mean zero and variance $\sigma^2$, if there exists a finite constant $c > 0$ such that for any $n \geq 1$, $\sum_{j\in\Gamma_n} \mathbb{E}U_n(j)^2 \leq c$, and for any $\epsilon > 0$, it holds that $\lim_n L_n(\epsilon) = 0$, where
\[
L_n(\epsilon) = m_n^{2d} \sum_{j\in\Gamma_n} \mathbb{E}\bigl( U_n(j)^2\, 1_{\{|U_n(j)| \geq \epsilon\, m_n^{-2d}\}} \bigr).
\]

Proof of Theorem 2.1 for $S_{\Gamma_n}$. Let $(m_n)_{n\geq 1}$ be a sequence of integers increasing to infinity. The choice of the sequence will be specified below. Let for $i \in \mathbb{Z}^d$ and $n \in \mathbb{N}$, $C_n(i)$ be the ball in $\mathbb{R}^d$ with center in $i$ and radius $m_n/3$. Let
\[
X^n_i = \int_{C_n(i)} f(s - i)\, M(\mathrm{d}s)
\]
and let furthermore $\mu_n = \mathbb{E}(X^n_0) = \mathbb{E}(M') \int_{C_n(0)} f(s)\, \mathrm{d}s$ and $S^n_{\Gamma_n} = \sum_{i\in\Gamma_n} X^n_i$. The proof will be divided into two steps. The first step is to show that
\[
\lim_n \Bigl\| \frac{S_{\Gamma_n} - |\Gamma_n|\mu}{\sqrt{|\Gamma_n|}} - \frac{S^n_{\Gamma_n} - |\Gamma_n|\mu_n}{\sqrt{|\Gamma_n|}} \Bigr\|_2 = 0. \qquad (3.1)
\]
The second step is to show, using Theorem 3.1, that with an appropriately chosen sequence $(m_n)_{n\geq 1}$, we have
\[
\frac{S^n_{\Gamma_n} - |\Gamma_n|\mu_n}{\sqrt{|\Gamma_n|}} \xrightarrow{\;\mathcal{D}\;} N(0, \sigma_S^2). \qquad (3.2)
\]
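As an aside, the role of the truncation $X^n_i$ can be seen very concretely in the discretised $d = 1$ analogue used in the earlier sketches: restricting the kernel to the window $C_m(i)$ yields a finite-range moving average, which is $m$-dependent. The sketch below (with the same illustrative kernel and spot-variable assumptions) computes the exact covariance of such a truncated field and shows that it vanishes beyond lag $2m/3$.

```python
import numpy as np

# Sketch (illustrative, discretised d = 1 analogue): truncating the kernel to the
# window C_m(i) = {s : |s - i| <= m/3} turns X_i into a finite-range moving
# average X^m_i, so sites further apart than m index no common cell mass and the
# truncated field is m-dependent.  Below, the exact covariance of the truncated
# field (Var(M') times a sum over the common window) vanishes beyond lag 2*m/3.

def f(s):
    return np.exp(-0.5 * s**2)

Var_spot = 0.5       # variance of the (assumed) spot variable
m = 9                # truncation parameter; window radius m/3 = 3

s = np.arange(-50, 51)

def cov_truncated(j):
    """Cov(X^m_0, X^m_j) = Var(M') * sum over s in C_m(0) and C_m(j) of f(s) f(s - j)."""
    window = (np.abs(s) <= m / 3) & (np.abs(s - j) <= m / 3)
    return Var_spot * np.sum(f(s[window]) * f(s[window] - j))

for j in range(0, 10):
    print(f"lag {j}: cov = {cov_truncated(j):.6f}")
# Covariances are exactly zero for lags j > 2*m/3, i.e. from lag 7 on here.
```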

To prove (3.1), we find, using Appendix A,
\[
\Bigl\| \frac{S_{\Gamma_n} - |\Gamma_n|\mu}{\sqrt{|\Gamma_n|}} - \frac{S^n_{\Gamma_n} - |\Gamma_n|\mu_n}{\sqrt{|\Gamma_n|}} \Bigr\|_2^2
= \frac{1}{|\Gamma_n|}\, \mathbb{E}\Bigl[ \Bigl( \sum_{i\in\Gamma_n} \Bigl( \int_{C_n(i)^c} f(s-i)\, M(\mathrm{d}s) - \mathbb{E}(M') \int_{C_n(i)^c} f(s-i)\, \mathrm{d}s \Bigr) \Bigr)^2 \Bigr]
\]
\[
= \frac{1}{|\Gamma_n|} \sum_{i\in\Gamma_n} \sum_{j\in\Gamma_n} \mathrm{Cov}\Bigl( \int_{C_n(i)^c} f(s-i)\, M(\mathrm{d}s),\, \int_{C_n(j)^c} f(s-j)\, M(\mathrm{d}s) \Bigr)
= \frac{1}{|\Gamma_n|}\, \mathrm{Var}(M') \sum_{i\in\Gamma_n} \sum_{j\in\Gamma_n} \int_{C_n(i)^c \cap C_n(j)^c} f(s-i) f(s-j)\, \mathrm{d}s
\]
\[
\leq \frac{1}{|\Gamma_n|}\, \mathrm{Var}(M') \sum_{i\in\Gamma_n} \sum_{j\in\mathbb{Z}^d} \int_{C_n(i)^c \cap C_n(j)^c} f(s-i) f(s-j)\, \mathrm{d}s
= \mathrm{Var}(M') \sum_{j\in\mathbb{Z}^d} \int_{C_n(0)^c \cap C_n(j)^c} f(s) f(s-j)\, \mathrm{d}s \to 0,
\]
where the convergence is a result of dominated convergence, since
\[
\sum_{j\in\mathbb{Z}^d} \int_{\mathbb{R}^d} f(s) f(s-j)\, \mathrm{d}s < \infty, \qquad (3.3)
\]
and $C_n(0)^c \cap C_n(j)^c \downarrow \emptyset$ as $n \to \infty$.

In order to prove (3.2), define for $n \in \mathbb{N}$ and $j \in \mathbb{Z}^d$
\[
U_n(j) = \frac{X^n_j - \mu_n}{\sqrt{|\Gamma_n|}},
\]
such that $\sum_{j\in\Gamma_n} U_n(j) = \bigl( S^n_{\Gamma_n} - |\Gamma_n|\mu_n \bigr)/\sqrt{|\Gamma_n|}$. Then, clearly $(U_n(j))_{j\in\mathbb{Z}^d}$ is $m_n$-dependent with $\mathbb{E}U_n(j) = 0$ for all $n \geq 1$ and $j \in \mathbb{Z}^d$. Furthermore, cf. Appendix A,
\[
\mathbb{E}\Bigl[ \Bigl( \sum_{i\in\Gamma_n} U_n(i) \Bigr)^2 \Bigr]
= \frac{1}{|\Gamma_n|} \sum_{i\in\Gamma_n} \sum_{j\in\Gamma_n} \mathrm{Cov}\Bigl( \int_{C_n(i)} f(s-i)\, M(\mathrm{d}s),\, \int_{C_n(j)} f(s-j)\, M(\mathrm{d}s) \Bigr)
= \frac{1}{|\Gamma_n|}\, \mathrm{Var}(M') \sum_{i\in\Gamma_n} \sum_{j\in\Gamma_n} \int_{C_n(i) \cap C_n(j)} f(s-i) f(s-j)\, \mathrm{d}s
\]
\[
= \mathrm{Var}(M') \sum_{j\in\mathbb{Z}^d} \frac{|(\Gamma_n + j) \cap \Gamma_n|}{|\Gamma_n|} \int_{C_n(0) \cap C_n(j)} f(s) f(s-j)\, \mathrm{d}s
\to \mathrm{Var}(M') \sum_{j\in\mathbb{Z}^d} \int_{\mathbb{R}^d} f(s) f(s-j)\, \mathrm{d}s
\]

as $n \to \infty$, by dominated convergence. Here, we have used that (3.3) is fulfilled, $|(\Gamma_n + j) \cap \Gamma_n|/|\Gamma_n| \to 1$ for all $j$ and $C_n(0) \cap C_n(j) \uparrow \mathbb{R}^d$ as $n \to \infty$. Furthermore, we find, cf. Appendix A,
\[
\sum_{i\in\Gamma_n} \mathbb{E}[U_n(i)^2] = \mathrm{Var}(X^n_0) = \mathrm{Var}(M') \int_{C_n(0)} f(s)^2\, \mathrm{d}s \leq \mathrm{Var}(M') \int_{\mathbb{R}^d} f(s)^2\, \mathrm{d}s < \infty.
\]
Now define
\[
Y = \sup_{n\geq 1} |X^n_0 - \mu_n|.
\]
Note that the sequence $(X^n_0)_{n\geq 1}$ has independent increments and that
\[
\sup_{n\geq 1} \mathbb{E}\bigl[ (X^n_0 - \mu_n)^2 \bigr] = \sup_{n\geq 1} \mathrm{Var}(M') \int_{C_n(0)} f(s)^2\, \mathrm{d}s \leq \mathrm{Var}(M') \int_{\mathbb{R}^d} f(s)^2\, \mathrm{d}s < \infty.
\]
Then, $\mathbb{E}Y^2 < \infty$, due to Corollary B.2. Furthermore,
\[
L_n(\epsilon) = m_n^{2d} \sum_{j\in\Gamma_n} \mathbb{E}\bigl[ U_n(j)^2\, 1_{\{|U_n(j)| \geq \epsilon m_n^{-2d}\}} \bigr]
= m_n^{2d}\, \mathbb{E}\bigl[ (X^n_0 - \mu_n)^2\, 1_{\{|X^n_0 - \mu_n| \geq \epsilon \sqrt{|\Gamma_n|}\, m_n^{-2d}\}} \bigr]
\leq m_n^{2d}\, \mathbb{E}\bigl[ Y^2\, 1_{\{Y \geq \epsilon \sqrt{|\Gamma_n|}\, m_n^{-2d}\}} \bigr]
\]
\[
= m_n^{2d}\, \mathbb{E}\bigl[ Y^2\, 1_{\{Y \geq \epsilon \sqrt{|\Gamma_n|}\, m_n^{-2d},\, Y \leq |\Gamma_n|^{1/4}\}} \bigr]
+ m_n^{2d}\, \mathbb{E}\bigl[ Y^2\, 1_{\{Y \geq \epsilon \sqrt{|\Gamma_n|}\, m_n^{-2d},\, Y > |\Gamma_n|^{1/4}\}} \bigr]
\leq m_n^{2d}\, \sqrt{|\Gamma_n|}\, \mathbb{P}\bigl[ Y \geq \epsilon \sqrt{|\Gamma_n|}\, m_n^{-2d} \bigr]
+ m_n^{2d}\, \mathbb{E}\bigl[ Y^2\, 1_{\{Y > |\Gamma_n|^{1/4}\}} \bigr]
\leq \frac{m_n^{6d}}{\epsilon^2 \sqrt{|\Gamma_n|}}\, \mathbb{E}Y^2 + m_n^{2d}\, \psi\bigl( |\Gamma_n|^{1/4} \bigr),
\]
where $\psi(x) = \mathbb{E}(Y^2 1_{\{Y \geq x\}})$. Note that $\psi(x) \to 0$ as $x \to \infty$. With $[\,\cdot\,]$ denoting the integer part, now choose the sequence $(m_n)_{n\geq 1}$ as $m_n = [\, |\Gamma_n|^{1/(24d)} \,]$ if $\psi(|\Gamma_n|^{1/4}) = 0$ and
\[
m_n = \min\Bigl\{ \bigl[\, \psi(|\Gamma_n|^{1/4})^{-1/(4d)} \,\bigr],\, \bigl[\, |\Gamma_n|^{1/(24d)} \,\bigr] \Bigr\},
\]
otherwise. With this choice it is seen that $L_n(\epsilon) \to 0$ as $n \to \infty$. Thus we have from Theorem 3.1 that
\[
\frac{S^n_{\Gamma_n} - |\Gamma_n|\mu_n}{\sqrt{|\Gamma_n|}} \xrightarrow{\;\mathcal{D}\;} N(0, \sigma_S^2).
\]
Combining this with (3.1) yields the desired result.

In the proof of Theorem 2.1 for $T_{\Gamma_n}(h)$, we need the following lemma, the proof of which follows from straightforward calculations, using that $f$ is bounded and non-negative.

Lemma 3.2. Let for $h \in \mathbb{Z}^d$
\[
g(s) = f(s) - f(s-h), \qquad s \in \mathbb{R}^d.
\]

Then, for $s \in \mathbb{R}^d$, $j \in \mathbb{Z}^d$,
\[
|g(s) g(s-j)| \leq v_j(s), \qquad g(s)^2 g(s-j)^2 \leq C v_j(s),
\]
where
\[
v_j(s) = f(s)f(s-j) + f(s)f(s-h-j) + f(s-h)f(s-j) + f(s-h)f(s-h-j)
\]
and $C < \infty$ is an appropriately chosen constant. If
\[
\sum_{j\in\mathbb{Z}^d} \int_{\mathbb{R}^d} f(s) f(s-j)\, \mathrm{d}s < \infty,
\]
then
\[
\sum_{j\in\mathbb{Z}^d} \int_{\mathbb{R}^d} v_j(s)\, \mathrm{d}s < \infty \quad\text{and}\quad \sum_{j\in\mathbb{Z}^d} \Bigl( \int_{\mathbb{R}^d} v_j(s)\, \mathrm{d}s \Bigr)^2 < \infty.
\]

Proof of Theorem 2.1 for $T_{\Gamma_n}(h)$. First note that it follows from Lemma 3.2 that $\sigma_S^2 < \infty$ and $\mathbb{E}[(M')^4] < \infty$ implies $\sigma_T^2 < \infty$. For ease of notation, we shall now consider the field $(Y_i)_{i\in\mathbb{Z}^d}$, where $Y_i = X_i - X_{i+h}$. Then, for all $i \in \mathbb{Z}^d$, $\mathbb{E}Y_i = 0$ and
\[
Y_i = \int_{\mathbb{R}^d} \bigl( f(s-i) - f(s-i-h) \bigr)\, M(\mathrm{d}s) = \int_{\mathbb{R}^d} g(s-i)\, M(\mathrm{d}s),
\]
where $g(s) = f(s) - f(s-h)$ as previously. Let $(m_n)_{n\geq 1}$ be (another) sequence of integers increasing to infinity. Let $C_n(i)$ and $X^n_i$ be defined as in the proof of Theorem 2.1 for $S_{\Gamma_n}$. Let
\[
Y^n_i = X^n_i - X^n_{i+h} = \int_{\mathbb{R}^d} g_n(s-i)\, M(\mathrm{d}s),
\]
where $g_n(s) = 1_{C_n(0)}(s) f(s) - 1_{C_n(h)}(s) f(s-h)$. Then $g_n(s) \to g(s)$ for all $s$. Note that $\mathbb{E}Y^n_i = 0$ and let furthermore $\gamma(h)_n = \mathbb{E}(Y^n_0)^2$. Also define
\[
T^n_{\Gamma_n}(h) = \sum_{i\in\Gamma_n} (Y^n_i)^2.
\]
As for $S_{\Gamma_n}$, the first step will be to show that
\[
\lim_n \Bigl\| \frac{T_{\Gamma_n}(h) - |\Gamma_n|\gamma(h)}{\sqrt{|\Gamma_n|}} - \frac{T^n_{\Gamma_n}(h) - |\Gamma_n|\gamma(h)_n}{\sqrt{|\Gamma_n|}} \Bigr\|_2 = 0. \qquad (3.4)
\]
The second step is to show, using Theorem 3.1, that with an appropriately chosen sequence $(m_n)_{n\geq 1}$, we have
\[
\frac{T^n_{\Gamma_n}(h) - |\Gamma_n|\gamma(h)_n}{\sqrt{|\Gamma_n|}} \xrightarrow{\;\mathcal{D}\;} N(0, \sigma_T^2). \qquad (3.5)
\]

To prove (3.4), we find
\[
\Bigl\| \frac{T_{\Gamma_n}(h) - |\Gamma_n|\gamma(h)}{\sqrt{|\Gamma_n|}} - \frac{T^n_{\Gamma_n}(h) - |\Gamma_n|\gamma(h)_n}{\sqrt{|\Gamma_n|}} \Bigr\|_2^2
= \frac{1}{|\Gamma_n|}\, \mathbb{E}\Bigl[ \Bigl( \sum_{i\in\Gamma_n} \bigl( Y_i^2 - \mathbb{E}Y_i^2 - ((Y^n_i)^2 - \mathbb{E}(Y^n_i)^2) \bigr) \Bigr)^2 \Bigr]
\]
\[
= \frac{1}{|\Gamma_n|} \sum_{i,j\in\Gamma_n} \mathbb{E}\bigl[ (Y_i^2 - \mathbb{E}Y_i^2)(Y_j^2 - \mathbb{E}Y_j^2) \bigr]
+ \frac{1}{|\Gamma_n|} \sum_{i,j\in\Gamma_n} \mathbb{E}\bigl[ ((Y^n_i)^2 - \mathbb{E}(Y^n_i)^2)((Y^n_j)^2 - \mathbb{E}(Y^n_j)^2) \bigr]
- \frac{2}{|\Gamma_n|} \sum_{i,j\in\Gamma_n} \mathbb{E}\bigl[ (Y_i^2 - \mathbb{E}Y_i^2)((Y^n_j)^2 - \mathbb{E}(Y^n_j)^2) \bigr]. \qquad (3.6)
\]
We will show convergence of the three terms in (3.6) separately. Using the calculation formulae from Appendix A, the first term equals
\[
\frac{1}{|\Gamma_n|} \sum_{i,j\in\Gamma_n} \Bigl[ \kappa_4(M') \int_{\mathbb{R}^d} g(s-i)^2 g(s-j)^2\, \mathrm{d}s + 2\, \mathrm{Var}(M')^2 \Bigl( \int_{\mathbb{R}^d} g(s-i) g(s-j)\, \mathrm{d}s \Bigr)^2 \Bigr]
= \sum_{j\in\mathbb{Z}^d} \frac{|(\Gamma_n + j) \cap \Gamma_n|}{|\Gamma_n|} \Bigl[ \kappa_4(M') \int_{\mathbb{R}^d} g(s)^2 g(s-j)^2\, \mathrm{d}s + 2\, \mathrm{Var}(M')^2 \Bigl( \int_{\mathbb{R}^d} g(s) g(s-j)\, \mathrm{d}s \Bigr)^2 \Bigr].
\]
By dominated convergence, this converges to $\sigma_T^2$, using that $\sigma_T^2$ is finite and that all terms in $\sigma_T^2$ are non-negative. The second term in (3.6) equals
\[
\sum_{j\in\mathbb{Z}^d} \frac{|(\Gamma_n + j) \cap \Gamma_n|}{|\Gamma_n|} \Bigl[ \kappa_4(M') \int_{\mathbb{R}^d} g_n(s)^2 g_n(s-j)^2\, \mathrm{d}s + 2\, \mathrm{Var}(M')^2 \Bigl( \int_{\mathbb{R}^d} g_n(s) g_n(s-j)\, \mathrm{d}s \Bigr)^2 \Bigr].
\]
By dominated convergence and Lemma 3.2, applied to $g_n$ instead of $g$, but keeping the upper bound as $v_j(s)$, this also converges to $\sigma_T^2$. Similarly, the third term of (3.6) converges to $2\sigma_T^2$, and we have the desired result in (3.4).

Now define for $n \in \mathbb{N}$ and $j \in \mathbb{Z}^d$
\[
V_n(j) = \frac{(Y^n_j)^2 - \gamma(h)_n}{\sqrt{|\Gamma_n|}}.
\]
We want to show that
\[
\sum_{j\in\Gamma_n} V_n(j) \xrightarrow{\;\mathcal{D}\;} N(0, \sigma_T^2),
\]

using Theorem 3.1. Then by construction $\mathbb{E}V_n(j) = 0$ for all $n \geq 1$ and $j \in \mathbb{Z}^d$, and for $n$ such that $m_n/3 > \|h\|$, the field $(V_n(j))_{j\in\mathbb{Z}^d}$ is $m_n$-dependent. Furthermore
\[
\mathbb{E}\Bigl[ \Bigl( \sum_{i\in\Gamma_n} V_n(i) \Bigr)^2 \Bigr]
= \frac{1}{|\Gamma_n|}\, \mathbb{E}\Bigl[ \Bigl( \sum_{i\in\Gamma_n} \bigl( (Y^n_i)^2 - \mathbb{E}(Y^n_i)^2 \bigr) \Bigr)^2 \Bigr]
= \frac{1}{|\Gamma_n|} \sum_{i,j\in\Gamma_n} \mathbb{E}\bigl[ ((Y^n_i)^2 - \mathbb{E}(Y^n_i)^2)((Y^n_j)^2 - \mathbb{E}(Y^n_j)^2) \bigr],
\]
which equals the second term of (3.6) and therefore has limit $\sigma_T^2$. Additionally, using Appendix A,
\[
\sum_{i\in\Gamma_n} \mathbb{E}[V_n(i)^2] = \mathbb{E}\bigl[ \bigl( (Y^n_0)^2 - \mathbb{E}(Y^n_0)^2 \bigr)^2 \bigr]
= \kappa_4(M') \int_{\mathbb{R}^d} g_n(s)^4\, \mathrm{d}s + 2\, \mathrm{Var}(M')^2 \Bigl( \int_{\mathbb{R}^d} g_n(s)^2\, \mathrm{d}s \Bigr)^2
\leq \kappa_4(M') \int_{\mathbb{R}^d} v_0(s)^2\, \mathrm{d}s + 2\, \mathrm{Var}(M')^2 \Bigl( \int_{\mathbb{R}^d} v_0(s)\, \mathrm{d}s \Bigr)^2 < \infty
\]
with $v_0$ as defined in Lemma 3.2. Now define
\[
Z = \Bigl( \sup_{n\geq 1} |X^n_0| + \sup_{n\geq 1} |X^n_h| \Bigr)^2 + C,
\]
where $C = 2\, \mathrm{Var}(M') \int_{\mathbb{R}^d} f(s)^2\, \mathrm{d}s$. Using Appendix A, we find that $0 \leq \gamma(h)_n \leq C$ for all $n \geq 1$. Also note that the sequence $(X^n_0)_{n\geq 1}$ has independent increments and that for $k = 1, 2, 3, 4$, cf. Appendix A,
\[
\sup_{n\geq 1} |\kappa_k(X^n_0)| = \sup_{n\geq 1} |\kappa_k(M')| \int_{C_n(0)} f(s)^k\, \mathrm{d}s \leq |\kappa_k(M')| \int_{\mathbb{R}^d} f(s)^k\, \mathrm{d}s < \infty.
\]
Thus also $\sup_{n\geq 1} \mathbb{E}\bigl( (X^n_0)^4 \bigr) < \infty$. By stationarity, also $\sup_{n\geq 1} \mathbb{E}\bigl( (X^n_h)^4 \bigr) < \infty$, and thus $\mathbb{E}Z^2 < \infty$, due to Corollary B.2. Then
\[
L_n(\epsilon) = m_n^{2d} \sum_{j\in\Gamma_n} \mathbb{E}\bigl[ V_n(j)^2\, 1_{\{|V_n(j)| \geq \epsilon m_n^{-2d}\}} \bigr]
= m_n^{2d}\, \mathbb{E}\bigl[ \bigl( (X^n_0 - X^n_h)^2 - \gamma(h)_n \bigr)^2\, 1_{\{|(X^n_0 - X^n_h)^2 - \gamma(h)_n| \geq \epsilon \sqrt{|\Gamma_n|}\, m_n^{-2d}\}} \bigr]
\leq m_n^{2d}\, \mathbb{E}\bigl[ Z^2\, 1_{\{Z \geq \epsilon \sqrt{|\Gamma_n|}\, m_n^{-2d}\}} \bigr].
\]
Now, it follows that $L_n(\epsilon) \to 0$ by choosing $(m_n)_{n\geq 1}$ as in the proof of Theorem 2.1 for $S_{\Gamma_n}$. Thus, we can conclude from Theorem 3.1 that
\[
\frac{T^n_{\Gamma_n}(h) - |\Gamma_n|\gamma(h)_n}{\sqrt{|\Gamma_n|}} \xrightarrow{\;\mathcal{D}\;} N(0, \sigma_T^2),
\]
and combining this with (3.4) gives the desired convergence.
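A quick Monte Carlo check of the normalisation in (2.7) can be carried out in the discretised $d = 1$ example used in the earlier sketches; the comparison below between the empirical variance of $(S_\Gamma - |\Gamma|\mu)/\sqrt{|\Gamma|}$ and the discrete analogue of $\sigma_S^2$ is an illustration only, under the same assumed kernel and cell-mass distribution.

```python
import numpy as np

# Sketch: Monte Carlo check of the normalisation in (2.7) for the discretised
# d = 1 example (Gaussian kernel, Gamma cell masses; illustrative assumptions).
# The sample variance of (S_Gamma - |Gamma| mu) / sqrt(|Gamma|) is compared with
# the discrete analogue of sigma_S^2.

rng = np.random.default_rng(2)

def f(s):
    return np.exp(-0.5 * s**2)

shape, scale = 2.0, 0.5
E_spot, Var_spot = shape * scale, shape * scale**2

n, pad, n_rep = 1000, 30, 400
grid = np.arange(-pad, n + pad)
weights = f(grid[None, :] - np.arange(n)[:, None])

mu = E_spot * f(grid).sum()               # mean of X_i in the discretised model

Z = np.empty(n_rep)
for r in range(n_rep):
    masses = rng.gamma(shape, scale, size=grid.size)
    S = (weights @ masses).sum()
    Z[r] = (S - n * mu) / np.sqrt(n)

s_grid = np.arange(-30, 31)
sigma_S2 = Var_spot * sum(np.sum(f(s_grid) * f(s_grid - j)) for j in range(-60, 61))

print("sample variance of normalised S:", Z.var())
print("discrete analogue of sigma_S^2 :", sigma_S2)
```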

4 Isotropic model

Now make the assumption that the observations in the field have the form
\[
X_i = \int_{\mathbb{R}^d} f(\|s - i\|)\, M(\mathrm{d}s) \qquad (4.1)
\]
for $i \in \mathbb{Z}^d$, where $f : [0, \infty) \to [0, \infty)$ satisfies $\int_{\mathbb{R}^d} f(\|s\|)\, \mathrm{d}s < \infty$. This is equivalent to assuming that $f(s)$ in (2.4) only depends on $s$ through $\|s\|$. With this assumption, the field is isotropic, so $\mathbb{E}(X_i - X_j)^2 = \gamma(\|i - j\|)$ for a variogram function $\gamma : [0, \infty) \to [0, \infty)$.

Estimation of $\mu = \mathbb{E}X_0$ and the corresponding asymptotic properties are unchanged, compared to the more general framework in Section 2. Also estimation of $\gamma(\|h\|)$ for $h \in \mathbb{Z}^d$ can be carried out as before, and the asymptotic results are still valid. However, we can improve the variogram estimator by taking into account that $\gamma(\|h\|)$ only depends on $\|h\|$. For this, we introduce
\[
T^*_\Gamma(d_0) = \frac{1}{|h(d_0)|} \sum_{i\in\Gamma} \sum_{h\in h(d_0)} (X_i - X_{i+h})^2,
\]
where $\Gamma$ is a finite subset of $\mathbb{Z}^d$, $d_0 > 0$ and $h(d_0) = \{h \in \mathbb{Z}^d : \|h\| = d_0\}$. We have

Theorem 4.1. Let $(\Gamma_n)_{n\geq 1}$ be a sequence of finite subsets of $\mathbb{Z}^d$ and assume that $|\Gamma_n|$ goes to infinity and $|\partial\Gamma_n|/|\Gamma_n|$ goes to 0. If
\[
\sum_{i\in\mathbb{Z}^d} \int_{\mathbb{R}^d} f(\|s\|) f(\|s - i\|)\, \mathrm{d}s < \infty
\]
and $\mathbb{E}(M')^4 < \infty$, then
\[
\frac{T^*_{\Gamma_n}(d_0) - |\Gamma_n|\gamma(d_0)}{\sqrt{|\Gamma_n|}} \xrightarrow{\;\mathcal{D}\;} N(0, \sigma_T^2),
\]
where
\[
\sigma_T^2 = \frac{1}{|h(d_0)|^2} \sum_{j\in\mathbb{Z}^d} \sum_{h, h' \in h(d_0)} \Bigl[ \kappa_4(M') \int_{\mathbb{R}^d} g_h(s)^2 g_{h'}(s - j)^2\, \mathrm{d}s + 2\, \mathrm{Var}(M')^2 \Bigl( \int_{\mathbb{R}^d} g_h(s) g_{h'}(s - j)\, \mathrm{d}s \Bigr)^2 \Bigr] < \infty,
\]
and $g_h(s) = f(\|s\|) - f(\|s - h\|)$.

The proof of Theorem 4.1 follows exactly the same lines as the second part of the proof of Theorem 2.1. Additional notation is only needed to take care of the extra summation in the definition of $T^*_\Gamma$.
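The lag-averaged estimator $T^*_\Gamma(d_0)$ is straightforward to implement. The sketch below does so for a simulated discretised isotropic field in $d = 2$ (Gaussian kernel, Gamma cell masses, both illustrative assumptions of the sketch), averaging the squared increments over all lattice lags of length $d_0$ with a simple per-lag edge correction.

```python
import numpy as np

# Sketch: the lag-averaged isotropic variogram estimator, i.e. the empirical
# mean of (X_i - X_{i+h})^2 over all lattice lags h with ||h|| = d0, on a
# simulated discretised isotropic field in d = 2.  All concrete choices
# (kernel, cell-mass distribution) are illustrative, not taken from the paper.

rng = np.random.default_rng(3)

def f(r):
    return np.exp(-0.5 * r**2)          # isotropic kernel, evaluated at the radius

def simulate_field(n, pad=10, shape=2.0, scale=0.5):
    """Discretised isotropic field on an n x n grid."""
    g = np.arange(-pad, n + pad)
    masses = rng.gamma(shape, scale, size=(g.size, g.size))
    X = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            r = np.hypot(g[None, :] - i, g[:, None] - j)
            X[i, j] = np.sum(f(r) * masses)
    return X

def lags_of_length(d0, max_coord=10):
    """All lattice lags h in Z^2 with ||h|| = d0 (up to numerical tolerance)."""
    c = np.arange(-max_coord, max_coord + 1)
    return [(a, b) for a in c for b in c if abs(np.hypot(a, b) - d0) < 1e-9]

def isotropic_variogram_estimate(X, d0):
    n = X.shape[0]
    total, count = 0.0, 0
    for (a, b) in lags_of_length(d0):
        # pairs (i, i + h) with both indices inside the observation window
        A = X[max(0, -a):n - max(0, a), max(0, -b):n - max(0, b)]
        B = X[max(0, a):n - max(0, -a), max(0, b):n - max(0, -b)]
        total += np.sum((A - B)**2)
        count += A.size
    return total / count

X = simulate_field(40)
print("gamma_hat(1)       =", isotropic_variogram_estimate(X, 1.0))
print("gamma_hat(sqrt(2)) =", isotropic_variogram_estimate(X, np.sqrt(2.0)))
```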

5 Estimation of sample spacing

Assume that $d = 3$ and consider the isotropic case again. Let the random field $(X_t)_{t\in\mathbb{R}^3}$ be defined as
\[
X_t = \int_{\mathbb{R}^3} f(\|s - t\|)\, M(\mathrm{d}s).
\]
Suppose that the field is only observed in grid points within two parallel planes with an unknown distance that we want to estimate. Due to the isotropy, we can without loss of generality assume that observations are made in the two planes
\[
\mathbb{Z}^2 = \{(z_1, z_2, 0) : z_1, z_2 \in \mathbb{Z}\}
\]
and
\[
\mathbb{Z}^2 + h_0 = \{(z_1, z_2, d_0) : z_1, z_2 \in \mathbb{Z}\},
\]
where $h_0 = (0, 0, d_0)$. The aim is then to estimate $d_0$. Note that $\mathbb{Z}^2 + h_0 \subseteq \mathbb{Z}^3$ only if $d_0 \in \mathbb{Z}$, which is not assumed.

The idea will be to estimate $\gamma(d_0)$, the variogram evaluated at $d_0$, only based on pairs $(i, i + h_0)$, where $i \in \mathbb{Z}^2$ and $i + h_0 \in \mathbb{Z}^2 + h_0$. For a finite subset $\Gamma \subseteq \mathbb{Z}^2$, we can estimate $\gamma(d_0)$ by $\hat\gamma(d_0) = \frac{1}{|\Gamma|} T_\Gamma$, where
\[
T_\Gamma = \sum_{i\in\Gamma} (X_i - X_{i+h_0})^2.
\]
Assuming that the variogram function $d \mapsto \gamma(d)$ is known and strictly increasing, an estimator for $d_0$ is obtained by $\hat d_0 = \gamma^{-1}(\hat\gamma(d_0))$. This estimator is studied in [13], where an expression for the variance of $T_\Gamma$ is derived. Furthermore an approximate expression for the variance of $\hat d_0$ is found using a Taylor approximation. Below we state a theorem, giving asymptotic normality of $T_\Gamma$. In fact, apart from the scaling, the asymptotic variance is equal to the approximation to the variance suggested in [13]. A result showing asymptotic normality of $T_\Gamma$ can directly be translated into asymptotic confidence intervals for $\hat d_0$. If also $d \mapsto \gamma(d)$ is assumed to be differentiable, asymptotic normality of $\hat d_0$ is easily derived using the delta method.
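The estimation step $\hat d_0 = \gamma^{-1}(\hat\gamma(d_0))$ can be sketched as follows. For the purely illustrative choice of a Gaussian kernel $f(r) = e^{-r^2/2}$ in $\mathbb{R}^3$ (not a choice made in the paper), the variogram has the closed form $\gamma(d) = 2\,\mathrm{Var}(M')\,\pi^{3/2}(1 - e^{-d^2/4})$, which is strictly increasing and can be inverted numerically; the value of $\hat\gamma(d_0)$ is taken as given here, as if it had been computed from the two observed planes.

```python
import numpy as np

# Sketch: the sample-spacing estimator d0_hat = gamma^{-1}(gamma_hat(d0)).
# Illustrative assumption (not from the paper): Gaussian kernel f(r) = exp(-r^2/2)
# in R^3, for which gamma(d) = 2 * Var(M') * pi^{3/2} * (1 - exp(-d^2 / 4)),
# a strictly increasing function that can be inverted by bisection.

Var_spot = 0.5                     # assumed variance of the spot variable

def gamma(d):
    return 2.0 * Var_spot * np.pi**1.5 * (1.0 - np.exp(-d**2 / 4.0))

def gamma_inverse(y, lo=0.0, hi=50.0, tol=1e-10):
    """Invert the (strictly increasing) variogram by bisection."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if gamma(mid) < y:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Pretend gamma_hat(d0) = T_Gamma / |Gamma| has been computed from the pairs
# (X_i, X_{i+h0}) on the two observed planes; here we fake it by adding a small
# relative error to the true value.
d0_true = 2.3
gamma_hat = gamma(d0_true) * 1.02

d0_hat = gamma_inverse(gamma_hat)
print("true spacing     :", d0_true)
print("estimated spacing:", d0_hat)
```

If, in addition, the asymptotic variance of $\hat\gamma(d_0)$ is available (Theorem 5.1 below), a confidence interval for $d_0$ would follow from the delta method applied to $\gamma^{-1}$.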

Theorem 5.1. Let $(\Gamma_n)_{n\geq 1}$ be a sequence of finite subsets of $\mathbb{Z}^2$ and assume that $|\Gamma_n|$ goes to infinity and that $|\partial\Gamma_n|/|\Gamma_n| \to 0$. Assume furthermore that $\mathbb{E}(M')^4 < \infty$ and
\[
\sum_{i\in\mathbb{Z}^2} \int_{\mathbb{R}^3} f(\|s\|) f(\|s - i\|)\, \mathrm{d}s < \infty.
\]
Then,
\[
\frac{T_{\Gamma_n} - |\Gamma_n|\gamma(d_0)}{\sqrt{|\Gamma_n|}} \xrightarrow{\;\mathcal{D}\;} N(0, \sigma_T^2),
\]
where
\[
\sigma_T^2 = \sum_{j\in\mathbb{Z}^2} \Bigl[ \kappa_4(M') \int_{\mathbb{R}^3} g(s)^2 g(s-j)^2\, \mathrm{d}s + 2\, \mathrm{Var}(M')^2 \Bigl( \int_{\mathbb{R}^3} g(s) g(s-j)\, \mathrm{d}s \Bigr)^2 \Bigr] < \infty,
\]
and $g(s) = f(\|s\|) - f(\|s - h_0\|)$. Above $\partial\Gamma_n$ denotes the boundary of $\Gamma_n$ as a subset of $\mathbb{Z}^2$.

The proof of Theorem 5.1 is almost identical to the proof of Theorem 2.1, but with $\mathbb{Z}^d$ replaced by $\mathbb{Z}^2$ as the index set for the observations.

Acknowledgements

This work was supported by Centre for Stochastic Geometry and Advanced Bioimaging, funded by the Villum Foundation.

A Cumulant formulas

For two variables $Y_1, Y_2$ of the form (2.4),
\[
Y_1 = \int_{\mathbb{R}^d} f_1(s)\, M(\mathrm{d}s), \qquad Y_2 = \int_{\mathbb{R}^d} f_2(s)\, M(\mathrm{d}s),
\]
with kernels $f_1, f_2$ satisfying (2.3), the joint cumulant function is given as
\[
\log \mathbb{E}\bigl( e^{i(\lambda_1 Y_1 + \lambda_2 Y_2)} \bigr) = \int_{\mathbb{R}^d} \log \mathbb{E}\bigl( e^{i(\lambda_1 f_1(s) + \lambda_2 f_2(s)) M'} \bigr)\, \mathrm{d}s,
\]
see [6] for details. Assuming that $\mathbb{E}|M'|^k < \infty$ and differentiating $k$ times with respect to $\lambda_1$ gives
\[
\kappa_k(Y_1) = \kappa_k(M') \int_{\mathbb{R}^d} f_1(s)^k\, \mathrm{d}s,
\]
where $\kappa_k(X)$ denotes the $k$th cumulant of the variable $X$. Similarly, if $\mathbb{E}(M')^2 < \infty$ we find
\[
\mathrm{Cov}(Y_1, Y_2) = \mathrm{Var}(M') \int_{\mathbb{R}^d} f_1(s) f_2(s)\, \mathrm{d}s.
\]
If furthermore $\mathbb{E}Y_1 = \mathbb{E}Y_2 = 0$ and $\mathbb{E}(M')^4 < \infty$, it holds that
\[
\mathbb{E}\bigl( (Y_1^2 - \mathbb{E}Y_1^2)(Y_2^2 - \mathbb{E}Y_2^2) \bigr) = \kappa_4(M') \int_{\mathbb{R}^d} f_1(s)^2 f_2(s)^2\, \mathrm{d}s + 2\, \mathrm{Var}(M')^2 \Bigl( \int_{\mathbb{R}^d} f_1(s) f_2(s)\, \mathrm{d}s \Bigr)^2.
\]
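The covariance formula above is easy to check by simulation in a discretised setting, where the integrals become sums; the sketch below does so for two illustrative kernels and Gamma-distributed cell masses (assumptions of the sketch, not of the paper).

```python
import numpy as np

# Sketch: Monte Carlo check of the covariance formula in the discretised setting.
# For Y_k = sum_s f_k(s) M_s with i.i.d. cell masses M_s,
#     Cov(Y_1, Y_2) = Var(M') * sum_s f_1(s) f_2(s).
# The kernels and the Gamma cell-mass distribution are illustrative choices.

rng = np.random.default_rng(4)

s = np.arange(-30, 31)
f1 = np.exp(-0.5 * s**2)
f2 = np.exp(-0.5 * (s - 2)**2)

shape, scale = 2.0, 0.5
Var_spot = shape * scale**2

n_rep = 100_000
masses = rng.gamma(shape, scale, size=(n_rep, s.size))
Y1 = masses @ f1
Y2 = masses @ f2

print("empirical covariance :", np.cov(Y1, Y2)[0, 1])
print("formula Var(M')*sum  :", Var_spot * np.sum(f1 * f2))
```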

B Corollary to Ottaviani's inequality

In this Appendix $(X_n)_{n\geq 1}$ will denote a sequence of random variables. Let $S_n = X_1 + \dots + X_n$ and $M_n = \max_{1\leq k\leq n} |S_k|$. We will use the following version of Ottaviani's inequality, see [7].

Theorem B.1. Let $(X_n)_{n\geq 1}$ be a sequence of independent random variables. Then, for all $x, y \in \mathbb{R}$,
\[
P(M_n > x + y)\, \min_{1\leq k\leq n} P(|S_n - S_k| \leq y) \leq P(|S_n| > x).
\]

Corollary B.2. Let $(X_n)_{n\geq 1}$ be a sequence of independent random variables. Then for all $p \geq 1$, $\sup_{n\geq 1} \mathbb{E}|S_n|^p < \infty$ if and only if $\mathbb{E}M^p < \infty$, where $M = \sup_{n\geq 1} M_n$.

Note that the corollary can be shown for $0 < p < 1$ as well. However, in the present paper we only need the result for $p = 2, 4$.

Proof of the corollary. First notice that it is easy to show directly that $\mathbb{E}M^p < \infty$ implies $\sup_{n\geq 1} \mathbb{E}|S_n|^p < \infty$. The opposite statement follows from monotone convergence, if we can show that for all $n \geq 1$
\[
\mathbb{E}M_n^p \leq C \max_{1\leq k\leq n} \mathbb{E}|S_k|^p \qquad (B.1)
\]
for a finite constant $C$ that only depends on $p$. By assumption, $\sup_{n\geq 1} \mathbb{E}|S_n|^p < \infty$ and therefore, $m = \max_{1\leq k\leq n} \mathbb{E}|S_k|^p < \infty$. Let $\tau = (2^{p+1} m)^{1/p}$. Then, by Markov's inequality, we have for $1 \leq k \leq n$
\[
P(|S_n - S_k| > \tau) \leq \frac{\mathbb{E}(|S_n - S_k|^p)}{\tau^p} \leq \frac{2^{p-1} \bigl( \mathbb{E}|S_n|^p + \mathbb{E}|S_k|^p \bigr)}{\tau^p} \leq \frac{1}{2},
\]
where the inequality $|x + y|^p \leq 2^{p-1} (|x|^p + |y|^p)$ has been applied. It follows that $\min_{1\leq k\leq n} P(|S_n - S_k| \leq \tau) \geq \tfrac{1}{2}$, so by Theorem B.1,
\[
P(M_n > x) = P\bigl( M_n > (x - \tau) + \tau \bigr) \leq 2 P(|S_n| + \tau > x).
\]
Integrating both sides with respect to $p x^{p-1}\, \mathrm{d}x$ gives
\[
\mathbb{E}M_n^p \leq 2\, \mathbb{E}\bigl( (|S_n| + \tau)^p \bigr) \leq 2^{p+1}\, \mathbb{E}(|S_n|^p + \tau^p) \leq 2^{p+1} (1 + 2^{p+1})\, m.
\]

References

[1] Adler, R. J., Samorodnitsky, G. and Taylor, J. E. (2010). Excursion sets of three classes of stable random fields. Adv. Appl. Prob. 42.
[2] Adler, R. J., Samorodnitsky, G. and Taylor, J. E. (2013). High level excursion set geometry for non-Gaussian infinitely divisible random fields. Ann. Prob. 41.
[3] Barndorff-Nielsen, O. E. and Schmiegel, J. (2004). Lévy based tempo-spatial modelling with applications to turbulence. Uspekhi Mat. Nauk. 159.
[4] El Machkouri, M., Volný, D. and Wu, W. B. (2013). A central limit theorem for stationary random fields. Stoch. Proc. Appl. 123.
[5] Heinrich, L. (1988). Asymptotic behaviour of an empirical nearest neighbour distance function for stationary Poisson cluster processes. Math. Nachr. 136.
[6] Hellmund, G., Prokešová, M. and Jensen, E. B. V. (2008). Lévy based Cox point processes. Adv. Appl. Prob. 40.
[7] Hoffmann-Jørgensen, J. (1994). Probability with a View towards Statistics. Vol. I. Chapman & Hall, New York and London.
[8] Jónsdóttir, K. Ý., Schmiegel, J. and Jensen, E. B. V. (2008). Lévy-based growth models. Bernoulli 14.
[9] Jónsdóttir, K. Ý., Rønn-Nielsen, A., Mouridsen, K. and Jensen, E. B. V. (2013). Lévy-based modelling in brain imaging. Scand. J. Stat. 40.
[10] Rajput, B. S. and Rosiński, J. (1989). Spectral representations of infinitely divisible processes. Prob. Theory Rel. Fields 82.
[11] Rosiński, J. and Samorodnitsky, G. (1993). Distributions of subadditive functionals of sample paths of infinitely divisible processes. Ann. Probab. 21.
[12] Rønn-Nielsen, A. and Jensen, E. B. V. (2016). Tail asymptotics for the supremum of an infinitely divisible field with convolution equivalent Lévy measure. J. Appl. Prob. 53.
[13] Rønn-Nielsen, A., Sporring, J. and Jensen, E. B. V. (2017). Estimation of sample spacing in stochastic processes. Image Anal. Stereol. 36.
[14] Rønn-Nielsen, A. and Jensen, E. B. V. (2017). Excursion sets of infinitely divisible random fields with convolution equivalent Lévy measure. J. Appl. Prob. 54.

[15] Sato, K.-I. (1999). Lévy Processes and Infinitely Divisible Distributions. Cambridge University Press, Cambridge.
