
CENTRE FOR STOCHASTIC GEOMETRY AND ADVANCED BIOIMAGING 2018
www.csgb.dk

RESEARCH REPORT

Anders Rønn-Nielsen and Eva B. Vedel Jensen

Central limit theorem for mean and variogram estimators in Lévy-based models

No. 06, June 2018

Central limit theorem for mean and variogram estimators in Lévy-based models

Anders Rønn-Nielsen¹ and Eva B. Vedel Jensen²
¹Department of Finance, Copenhagen Business School
²Department of Mathematics, Aarhus University

Abstract

We consider an infinitely divisible random field in $\mathbb{R}^d$, given as an integral of a kernel function with respect to a Lévy basis. Under mild regularity conditions we derive central limit theorems for the moment estimators of the mean and the variogram of the field.

Keywords: central limit theorem; infinite divisibility; Lévy-based modelling

1 Introduction

In the present paper, we derive central limit theorems (CLTs) for mean and variogram estimators of a field $(X_i)_{i \in \mathbb{Z}^d}$, defined by

\[ X_i = \int_{\mathbb{R}^d} f(s-i)\, M(ds), \tag{1.1} \]

where $M$ is an infinitely divisible, independently scattered random measure on $\mathbb{R}^d$ and $f$ is a kernel function. Lévy-based models as defined in (1.1) provide a flexible modelling framework that has recently been used in a range of modelling contexts, including modelling of turbulent flows, growth processes, Cox point processes and brain imaging data [3, 6, 8, 9]. Due to the tractability of Lévy-based models, it has been possible to derive tail asymptotics for the supremum of such a field as well as asymptotics of excursion sets of the field. The case where $M$ is a convolution equivalent measure is considered in [12, 14]. Results for random measures with regularly varying tails have been derived in [11] and refined in [1, 2].

In [4], CLTs are proved for a stationary random field $(X_i)_{i \in \mathbb{Z}^d}$, discretely defined by

\[ X_i = u(M_{s+i} : s \in \mathbb{Z}^d), \tag{1.2} \]

where $(M_i)_{i \in \mathbb{Z}^d}$ are independent and identically distributed (i.i.d.) random variables and $u$ is a measurable function. In contrast, our set-up involves a random measure defined on $\mathbb{R}^d$.
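To see what a discretised version of (1.1) looks like in practice, the following minimal sketch simulates the model in $d = 1$: the Lévy basis is replaced by i.i.d. infinitely divisible cell variables and the integral by a convolution, i.e. the form (1.2) with $u$ linear. The gamma-distributed basis and the Gaussian-shaped kernel are illustrative choices only, not prescribed by the model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Discretised version of (1.1) in d = 1: X_i = sum_s f(s) M_{i+s}, a
# moving average of i.i.d. infinitely divisible cell variables M_s.
# Gamma basis and Gaussian-shaped kernel are illustrative choices.
n, pad = 500, 50                         # field length and kernel half-width
s = np.arange(-pad, pad + 1)
f = np.exp(-0.5 * (s / 5.0) ** 2)        # bounded, summable kernel

M = rng.gamma(shape=2.0, scale=1.0, size=n + 2 * pad)
X = np.convolve(M, f, mode="valid")      # the field (X_i), length n

print(X.mean(), X.var())  # compare with E(M') * sum f and Var(M') * sum f^2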

However, if we discretize the integral in (1.1), then our model is of the form (1.2) with $u$ a linear function and the random variables $M_i$ having an infinitely divisible distribution. Note that if this type of model is to hold at finer and finer resolutions, then the common distribution of the random variables $M_i$ has to be infinitely divisible.

For a random field defined by (1.1), we prove in the present paper CLTs for

\[ S_\Gamma = \sum_{i \in \Gamma} X_i \qquad \text{and} \qquad T_\Gamma(h) = \sum_{i \in \Gamma} (X_i - X_{i+h})^2, \]

where $h \in \mathbb{Z}^d$ and $\Gamma$ is a finite subset of $\mathbb{Z}^d$ whose number of elements increases to infinity. We show under mild regularity conditions that both $S_\Gamma$ and $T_\Gamma(h)$ are asymptotically normally distributed and give explicit expressions for the asymptotic variances. It turns out that the asymptotic variance of $T_\Gamma(h)$ is equal to an approximate variance derived earlier in [13]. In the latter paper, the approximate variance was used in the assessment of the precision of a section spacing estimator in electron microscopy. More details are given in Section 5 below.

If we discretize the integral in (1.1), both $S_\Gamma$ and $T_\Gamma(h)$ take the form

\[ \sum_{i \in \Gamma} v\Big( \sum_{s \in \mathbb{Z}^d} a_s M_{s+i} \Big). \tag{1.3} \]

Here, $v(x) = x$ and $a_s = f(s)$ for $S_\Gamma$, while $v(x) = x^2$ and $a_s = f(s) - f(s-h)$ for $T_\Gamma(h)$. In [4], CLTs for random variables of the form (1.3) are discussed. In particular, a CLT for $S_\Gamma$ is derived which is a discrete analogue of the CLT we obtain. For second-order properties, [4] considers $\sum_{i \in \Gamma} X_i X_{i+h}$, but does not deal explicitly with the case where the mean of the field is unknown, as is done in our paper.

The present paper is organised as follows. In Section 2 we define the random field (1.1) and state the main result. In Section 3 we provide the proofs. An important tool in the proofs is a CLT (Theorem 3.1) for $m_n$-dependent fields proved in [5]. Section 4 discusses the case where the kernel function is isotropic, while the results are used for estimation of sample spacing in Section 5. Cumulant formulae and a proof of a corollary to Ottaviani's inequality are deferred to two appendices.

2 Preliminaries and main result

Consider an independently scattered random measure $M$ on $\mathbb{R}^d$. Then for a sequence of disjoint sets $(A_n)_{n \in \mathbb{N}}$ in $\mathcal{B}(\mathbb{R}^d)$, the random variables $(M(A_n))_{n \in \mathbb{N}}$ are independent and satisfy $M(\cup_n A_n) = \sum_n M(A_n)$. Assume furthermore that $M(A)$ is infinitely divisible for all $A \in \mathcal{B}(\mathbb{R}^d)$. Then $M$ is called a Lévy basis; see [3] and references therein.

For a random variable $X$, let $\kappa_X(\lambda)$ denote its cumulant function $\log \mathrm{E}(e^{i\lambda X})$. We shall assume that the Lévy basis is stationary and isotropic, such that for $A \in \mathcal{B}(\mathbb{R}^d)$ the variable $M(A)$ has a Lévy-Khintchine representation given by

\[ \kappa_{M(A)}(\lambda) = i\lambda a\, m_d(A) - \tfrac{1}{2}\lambda^2 \theta\, m_d(A) + \int_{A \times \mathbb{R}} \big( e^{i\lambda u} - 1 - i\lambda u \mathbf{1}_{[-1,1]}(u) \big)\, F(ds, du), \tag{2.1} \]

where $m_d$ is the Lebesgue measure on $(\mathbb{R}^d, \mathcal{B}(\mathbb{R}^d))$, $a \in \mathbb{R}$, $\theta \ge 0$ and $F$ is a measure on $\mathcal{B}(\mathbb{R}^d \times \mathbb{R})$ of the form

\[ F(A \times B) = m_d(A)\, \rho(B). \tag{2.2} \]

In the following, we let $M'$ be a so-called spot variable, i.e. $M'$ is a random variable with the distribution of $M(A)$ for $A \in \mathcal{B}(\mathbb{R}^d)$ with $m_d(A) = 1$. We will assume that the Lévy measure $\rho$ satisfies

\[ \int_{[-1,1]} z^2\, \rho(dz) < \infty \qquad \text{and} \qquad \int_{[-1,1]^c} |z|\, \rho(dz) < \infty. \]

The first assumption is needed for $\rho$ to be a Lévy measure. The second assumption is equivalent to $\mathrm{E}|M'| < \infty$ (see [15, Theorem 25.3]).

Now assume that $f : \mathbb{R}^d \to [0, \infty)$ is a bounded kernel function satisfying

\[ \int_{\mathbb{R}^d} f(s)\, ds < \infty. \tag{2.3} \]

Let $\mathbb{Z}^d$ be the grid of integer points in $\mathbb{R}^d$. We consider the family of random variables $(X_i)_{i \in \mathbb{Z}^d}$ defined by

\[ X_i = \int_{\mathbb{R}^d} f(s-i)\, M(ds). \tag{2.4} \]

See [10, Theorem 2.7] for existence of the integrals; their conditions (i)-(iii) can be easily verified under the given assumptions on $M$ and $f$. The field is stationary, and if furthermore $f$ has the form $f(s) = g(\|s\|)$, where $\|\cdot\|$ is the Euclidean norm, it is also isotropic. This special case is studied in Section 4.

We will be interested in estimating the mean and the variogram of the field. We have

\[ \mu = \mathrm{E}X_0 = \mathrm{E}(M') \int_{\mathbb{R}^d} f(s)\, ds \]

and, for $h \in \mathbb{Z}^d$,

\[ \gamma(h) = \mathrm{E}(X_0 - X_h)^2 = \mathrm{Var}(M') \int_{\mathbb{R}^d} (f(s) - f(s-h))^2\, ds, \]

where identities from Appendix A have been applied. Note that due to stationarity, $\mu = \mathrm{E}X_i$ and $\gamma(h) = \mathrm{E}(X_i - X_{i+h})^2$ for all $i \in \mathbb{Z}^d$.
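As a small numerical illustration of the two moment identities above, the sketch below evaluates $\mu$ and $\gamma(h)$ by Riemann sums in $d = 1$, again for an illustrative Gaussian-shaped kernel and a gamma spot variable (for which $\mathrm{E}M' = 2$ and $\mathrm{Var}\,M' = 2$).

```python
import numpy as np

# Quadrature check of the moment identities in d = 1:
#   mu       = E(M')   * int f(s) ds
#   gamma(h) = Var(M') * int (f(s) - f(s - h))^2 ds
# Illustrative choices: gamma(2, 1) spot variable, Gaussian-shaped kernel.
EM, VarM = 2.0, 2.0
f = lambda s: np.exp(-0.5 * (s / 5.0) ** 2)

s, ds = np.linspace(-200.0, 200.0, 400001, retstep=True)
mu = EM * np.sum(f(s)) * ds
h = 3.0
gamma_h = VarM * np.sum((f(s) - f(s - h)) ** 2) * ds

print(mu, gamma_h)
```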

For $\Gamma$ a finite subset of $\mathbb{Z}^d$, we define

\[ S_\Gamma = \sum_{i \in \Gamma} X_i \tag{2.5} \]

and, for $h \in \mathbb{Z}^d$,

\[ T_\Gamma(h) = \sum_{i \in \Gamma} (X_i - X_{i+h})^2. \tag{2.6} \]

Unbiased estimators for $\mu$ and $\gamma(h)$, respectively, are obtained as

\[ \hat{\mu} = \frac{1}{|\Gamma|} S_\Gamma, \qquad \hat{\gamma}(h) = \frac{1}{|\Gamma|} T_\Gamma(h), \]

where $|\Gamma|$ denotes the number of elements in $\Gamma$. The main result gives asymptotic normality of these estimators as $|\Gamma|$ tends to infinity.

Theorem 2.1. Let $(\Gamma_n)_{n \ge 1}$ be a sequence of finite subsets of $\mathbb{Z}^d$ and assume that $|\Gamma_n|$ goes to infinity and $|\partial\Gamma_n|/|\Gamma_n|$ goes to 0 (here $\partial\Gamma_n$ denotes the boundary of $\Gamma_n$ as a subset of $\mathbb{Z}^d$). Let

\[ \sigma_S^2 = \mathrm{Var}(M') \sum_{i \in \mathbb{Z}^d} \int_{\mathbb{R}^d} f(s) f(s-i)\, ds. \]

If $\sigma_S^2 < \infty$, then

\[ \frac{S_{\Gamma_n} - |\Gamma_n|\mu}{\sqrt{|\Gamma_n|}} \xrightarrow{\mathcal{D}} N(0, \sigma_S^2). \tag{2.7} \]

If furthermore $\mathrm{E}(M')^4 < \infty$, then

\[ \frac{T_{\Gamma_n}(h) - |\Gamma_n|\gamma(h)}{\sqrt{|\Gamma_n|}} \xrightarrow{\mathcal{D}} N(0, \sigma_T^2), \tag{2.8} \]

where

\[ \sigma_T^2 = \sum_{j \in \mathbb{Z}^d} \bigg[ \kappa_4(M') \int_{\mathbb{R}^d} g(s)^2 g(s-j)^2\, ds + 2 \mathrm{Var}(M')^2 \Big( \int_{\mathbb{R}^d} g(s) g(s-j)\, ds \Big)^2 \bigg] < \infty, \]

$\kappa_4(M')$ is the fourth cumulant of $M'$ and $g(s) = f(s) - f(s-h)$.

It should be noted that the assumptions of the theorem, $|\Gamma_n| \to \infty$ and $|\partial\Gamma_n|/|\Gamma_n| \to 0$, are equivalent to the (less intuitive, but computationally more convenient) properties $|\Gamma_n| \to \infty$ and $|(\Gamma_n + j) \cap \Gamma_n|/|\Gamma_n| \to 1$ for all $j \in \mathbb{Z}^d$, where $\Gamma + j$ denotes the translation of $\Gamma$ by $j \in \mathbb{Z}^d$, i.e. $\Gamma + j = \{ i + j : i \in \Gamma \}$.
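Theorem 2.1 can be illustrated by simulation. For the discretised model from Section 1 the analogue of $\sigma_S^2$ collapses to $\mathrm{Var}(M') \big(\sum_s f(s)\big)^2$, and the sketch below compares this with the empirical variance of the normalised sums $(S_{\Gamma_n} - |\Gamma_n|\mu)/\sqrt{|\Gamma_n|}$ over independent replications; all distributional choices are illustrative, as before.

```python
import numpy as np

rng = np.random.default_rng(1)

# Monte Carlo illustration of (2.7) for the discretised model:
# (S_Gamma - |Gamma| mu) / sqrt(|Gamma|) is approximately N(0, sigma_S^2),
# where sigma_S^2 = Var(M') * (sum_s f(s))^2 for the discretised kernel.
n, pad, reps = 2000, 50, 2000
s = np.arange(-pad, pad + 1)
f = np.exp(-0.5 * (s / 5.0) ** 2)
EM, VarM = 2.0, 2.0                     # gamma(2, 1) spot variable
mu = EM * f.sum()                       # discrete analogue of E(M') int f
sigma2_S = VarM * f.sum() ** 2          # discrete analogue of sigma_S^2

Z = np.empty(reps)
for r in range(reps):
    M = rng.gamma(2.0, 1.0, size=n + 2 * pad)
    X = np.convolve(M, f, mode="valid")           # X_i, i = 1, ..., n
    Z[r] = (X.sum() - n * mu) / np.sqrt(n)

print(Z.var(), sigma2_S)                # empirical vs. theoretical variance
```

The two printed variances agree up to Monte Carlo error and an $O(\mathrm{pad}/n)$ edge effect.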

3 Proofs

An important tool in the proof of our main result, Theorem 2.1, is a CLT for $m_n$-dependent random fields that may be found in [5, Theorem 2]. This theorem is also used in the proofs of the CLTs derived in [4]. However, our proof of the CLT for $S_\Gamma$ is simpler and takes advantage of the linearity of the Lévy-based model (1.1), while our proof of the CLT for $T_\Gamma(h)$ follows a different route than the one adopted in [4] for the second-order property $\sum_{i \in \Gamma} X_i X_{i+h}$.

Recall that a random field $(Z_i)_{i \in \mathbb{Z}^d}$ is said to be $m_n$-dependent if, for any $A, B \subseteq \mathbb{Z}^d$ with $|A|, |B| < \infty$ and $\inf\{ \|a - b\| : a \in A, b \in B \} > m_n$, the families $(Z_a)_{a \in A}$ and $(Z_b)_{b \in B}$ are independent. The CLT for $m_n$-dependent fields is stated here as

Theorem 3.1. Let $(\Gamma_n)_{n \ge 1}$ be a sequence of finite subsets of $\mathbb{Z}^d$ with $|\Gamma_n| \to \infty$ as $n \to \infty$, and let $(m_n)_{n \ge 1}$ be a sequence of positive integers. For each $n \ge 1$, let $\{U_n(j), j \in \mathbb{Z}^d\}$ be an $m_n$-dependent random field with $\mathrm{E}U_n(j) = 0$ for all $j \in \mathbb{Z}^d$. Assume that $\mathrm{E}\big( \sum_{j \in \Gamma_n} U_n(j) \big)^2 \to \sigma^2$ as $n \to \infty$ with $\sigma^2 < \infty$. Then $\sum_{j \in \Gamma_n} U_n(j)$ converges in distribution to a Gaussian random variable with mean zero and variance $\sigma^2$, if there exists a finite constant $c > 0$ such that for any $n \ge 1$

\[ \sum_{j \in \Gamma_n} \mathrm{E}U_n(j)^2 \le c, \]

and for any $\epsilon > 0$ it holds that $\lim_{n \to \infty} L_n(\epsilon) = 0$, where

\[ L_n(\epsilon) = m_n^{2d} \sum_{j \in \Gamma_n} \mathrm{E}\big( U_n(j)^2 \mathbf{1}_{\{|U_n(j)| \ge \epsilon m_n^{-2d}\}} \big). \]
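Before turning to the proof, the normalisation in Theorem 3.1 can be illustrated with a toy example in $d = 1$: a 1-dependent, mean-zero field built from i.i.d. centred exponential noise, for which $\sigma^2 = \lim_n \mathrm{E}\big(\sum_j U_n(j)\big)^2 = 4$. The construction below is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy illustration of Theorem 3.1 in d = 1: U_n(j) = (xi_j + xi_{j+1}) / sqrt(n)
# is a 1-dependent, mean-zero field built from i.i.d. centred noise, and
# E (sum_j U_n(j))^2 -> sigma^2 = 4 Var(xi) = 4.  All choices illustrative.
n, reps = 5000, 2000
sums = np.empty(reps)
for r in range(reps):
    xi = rng.standard_exponential(n + 1) - 1.0    # i.i.d., mean 0, variance 1
    U = (xi[:-1] + xi[1:]) / np.sqrt(n)           # 1-dependent field on {1,...,n}
    sums[r] = U.sum()

print(sums.var(), 4.0)    # empirical variance vs. the limit sigma^2
```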

Proof of Theorem 2.1 for $S_{\Gamma_n}$. Let $(m_n)_{n \ge 1}$ be a sequence of integers increasing to infinity; the choice of the sequence will be specified below. For $i \in \mathbb{Z}^d$ and $n \in \mathbb{N}$, let $C_n(i)$ be the ball in $\mathbb{R}^d$ with centre $i$ and radius $m_n/3$. Let

\[ \tilde{X}^n_i = \int_{C_n(i)} f(s-i)\, M(ds), \]

and let furthermore

\[ \mu_n = \mathrm{E}(\tilde{X}^n_0) = \mathrm{E}(M') \int_{C_n(0)} f(s)\, ds \qquad \text{and} \qquad \tilde{S}_{\Gamma_n} = \sum_{i \in \Gamma_n} \tilde{X}^n_i. \]

The proof is divided into two steps. The first step is to show that

\[ \lim_{n \to \infty} \bigg\| \frac{S_{\Gamma_n} - |\Gamma_n|\mu}{\sqrt{|\Gamma_n|}} - \frac{\tilde{S}_{\Gamma_n} - |\Gamma_n|\mu_n}{\sqrt{|\Gamma_n|}} \bigg\|_2^2 = 0. \tag{3.1} \]

The second step is to show, using Theorem 3.1, that with an appropriately chosen sequence $(m_n)_{n \ge 1}$ we have

\[ \frac{\tilde{S}_{\Gamma_n} - |\Gamma_n|\mu_n}{\sqrt{|\Gamma_n|}} \xrightarrow{\mathcal{D}} N(0, \sigma_S^2). \tag{3.2} \]

To prove (3.1), we find, using Appendix A,

\[ \bigg\| \frac{S_{\Gamma_n} - |\Gamma_n|\mu}{\sqrt{|\Gamma_n|}} - \frac{\tilde{S}_{\Gamma_n} - |\Gamma_n|\mu_n}{\sqrt{|\Gamma_n|}} \bigg\|_2^2 = \frac{1}{|\Gamma_n|} \mathrm{E}\bigg[ \Big( \sum_{i \in \Gamma_n} \Big( \int_{C_n(i)^c} f(s-i)\, M(ds) - \mathrm{E}(M') \int_{C_n(i)^c} f(s-i)\, ds \Big) \Big)^2 \bigg] \]
\[ = \frac{1}{|\Gamma_n|} \sum_{i \in \Gamma_n} \sum_{j \in \Gamma_n} \mathrm{Cov}\Big( \int_{C_n(i)^c} f(s-i)\, M(ds),\ \int_{C_n(j)^c} f(s-j)\, M(ds) \Big) \]
\[ = \frac{1}{|\Gamma_n|} \mathrm{Var}(M') \sum_{i \in \Gamma_n} \sum_{j \in \Gamma_n} \int_{C_n(i)^c \cap C_n(j)^c} f(s-i) f(s-j)\, ds \]
\[ \le \frac{1}{|\Gamma_n|} \mathrm{Var}(M') \sum_{i \in \Gamma_n} \sum_{j \in \mathbb{Z}^d} \int_{C_n(i)^c \cap C_n(j)^c} f(s-i) f(s-j)\, ds = \mathrm{Var}(M') \sum_{j \in \mathbb{Z}^d} \int_{C_n(0)^c \cap C_n(j)^c} f(s) f(s-j)\, ds \longrightarrow 0, \]

where the convergence is a result of dominated convergence, since

\[ \sum_{j \in \mathbb{Z}^d} \int_{\mathbb{R}^d} f(s) f(s-j)\, ds < \infty \tag{3.3} \]

and $C_n(0)^c \cap C_n(j)^c \downarrow \emptyset$ as $n \to \infty$.

In order to prove (3.2), define for $n \in \mathbb{N}$ and $j \in \mathbb{Z}^d$

\[ U_n(j) = \frac{\tilde{X}^n_j - \mu_n}{\sqrt{|\Gamma_n|}}, \qquad \text{such that} \qquad \sum_{j \in \Gamma_n} U_n(j) = \frac{\tilde{S}_{\Gamma_n} - |\Gamma_n|\mu_n}{\sqrt{|\Gamma_n|}}. \]

Then clearly $(U_n(j))_{j \in \mathbb{Z}^d}$ is $m_n$-dependent with $\mathrm{E}U_n(j) = 0$ for all $n \ge 1$ and $j \in \mathbb{Z}^d$. Furthermore, cf. Appendix A,

\[ \mathrm{E}\bigg[ \Big( \sum_{i \in \Gamma_n} U_n(i) \Big)^2 \bigg] = \frac{1}{|\Gamma_n|} \sum_{i \in \Gamma_n} \sum_{j \in \Gamma_n} \mathrm{Cov}\Big( \int_{C_n(i)} f(s-i)\, M(ds),\ \int_{C_n(j)} f(s-j)\, M(ds) \Big) \]
\[ = \frac{1}{|\Gamma_n|} \mathrm{Var}(M') \sum_{i \in \Gamma_n} \sum_{j \in \Gamma_n} \int_{C_n(i) \cap C_n(j)} f(s-i) f(s-j)\, ds = \mathrm{Var}(M') \sum_{j \in \mathbb{Z}^d} \frac{|(\Gamma_n + j) \cap \Gamma_n|}{|\Gamma_n|} \int_{C_n(0) \cap C_n(j)} f(s) f(s-j)\, ds \]
\[ \longrightarrow \mathrm{Var}(M') \sum_{j \in \mathbb{Z}^d} \int_{\mathbb{R}^d} f(s) f(s-j)\, ds = \sigma_S^2 \]

as n, by dominated convergence. Here, we have used that (3.3) is fulfilled, (Γ n j) Γ n / 1 for all j and C n (0) C n (j) as n. Furthermore, we find, cf. Appendix A, E[U n (i) 2 = Var(X n 0) = Var(M ) f(s) 2 ds Var(M ) f(s) 2 ds <. i Γ C n n(0) Now define Y = sup X n 0 µ n. n 1 Note that the sequence (X n 0) n 1 has independent increments and that sup E [ (X n 0 µ n ) 2 = sup Var(M ) f(s) 2 ds Var(M ) f(s) 2 ds <. n 1 n 1 C n(0) Then, EY 2 <, due to Corollary B.2. Furthermore, L n (ɛ) = m 2d n E [ Un(j)1 2 { Un(j) ɛm 2d n } j Γ n = m 2d n E [ (X n 0 µ n ) 2 1 { X n m 2d n E [ Y 2 1 {Y ɛ Γn m 2d n } µn ɛ 0 Γ n m 2d n } = m 2d n E [ Y 2 1 {Y ɛ Γn m 2d m 2d n P [ Y ɛ m 2d n n,y Γ n 1/4 } m6d n ɛ 2 EY 2 + m 2d n ψ( 1/4 ), + m 2d n E [ Y 2 1 {Y ɛ Γn m 2d + m 2d n E [ Y 2 1 {Y > Γn 1/4 } n,y > Γ n 1/4 } where ψ(x) = E(Y 2 1 {Y x} ). Note that ψ(x) 0 as x. With [ denoting the integer part, now choose the sequence (m n ) n 1 as m n = [ 1 24d if ψ( 1/4 ) = 0 and m n = min {[ [ } ψ( 1/4 ) 1 4d, 1 24d, otherwise. With this choice it is seen that L n (ɛ) 0 as n. Thus we have from Theorem 3.1 that S Γn µ n D N (0, σs) 2. Combining this with (3.1) yields the desired result. In the proof of Theorem 2.1 for T Γn (h), we need the following lemma, the proof of which follows from straightforward calculations, using that f is bounded and non negative. Lemma 3.2. Let for h Z d g(s) = f(s) f(s h), s. 7

Lemma 3.2. For $h \in \mathbb{Z}^d$, let $g(s) = f(s) - f(s-h)$, $s \in \mathbb{R}^d$. Then, for $s \in \mathbb{R}^d$ and $j \in \mathbb{Z}^d$,

\[ |g(s) g(s-j)| \le v_j(s), \qquad g(s)^2 g(s-j)^2 \le C v_j(s), \]

where

\[ v_j(s) = f(s) f(s-j) + f(s) f(s-h-j) + f(s-h) f(s-j) + f(s-h) f(s-h-j) \]

and $C < \infty$ is an appropriately chosen constant. If

\[ \sum_{j \in \mathbb{Z}^d} \int_{\mathbb{R}^d} f(s) f(s-j)\, ds < \infty, \]

then

\[ \sum_{j \in \mathbb{Z}^d} \int_{\mathbb{R}^d} v_j(s)\, ds < \infty \qquad \text{and} \qquad \sum_{j \in \mathbb{Z}^d} \Big( \int_{\mathbb{R}^d} v_j(s)\, ds \Big)^2 < \infty. \]

Proof of Theorem 2.1 for $T_{\Gamma_n}(h)$. First note that it follows from Lemma 3.2 that $\sigma_S^2 < \infty$ and $\mathrm{E}(M')^4 < \infty$ imply $\sigma_T^2 < \infty$. For ease of notation, we shall now consider the field $(Y_i)_{i \in \mathbb{Z}^d}$, where $Y_i = X_i - X_{i+h}$. Then, for all $i \in \mathbb{Z}^d$, $\mathrm{E}Y_i = 0$ and

\[ Y_i = \int_{\mathbb{R}^d} \big( f(s-i) - f(s-i-h) \big)\, M(ds) = \int_{\mathbb{R}^d} g(s-i)\, M(ds), \]

where $g(s) = f(s) - f(s-h)$ as previously.

Let $(m_n)_{n \ge 1}$ be (another) sequence of integers increasing to infinity. Let $C_n(i)$ and $\tilde{X}^n_i$ be defined as in the proof of Theorem 2.1 for $S_{\Gamma_n}$. Let

\[ \tilde{Y}^n_i = \tilde{X}^n_i - \tilde{X}^n_{i+h} = \int_{\mathbb{R}^d} g_n(s-i)\, M(ds), \]

where $g_n(s) = \mathbf{1}_{C_n(0)}(s) f(s) - \mathbf{1}_{C_n(h)}(s) f(s-h)$. Then $g_n(s) \to g(s)$ for all $s \in \mathbb{R}^d$. Note that $\mathrm{E}\tilde{Y}^n_i = 0$, and let furthermore $\gamma(h)_n = \mathrm{E}(\tilde{Y}^n_0)^2$. Also define

\[ \tilde{T}_{\Gamma_n}(h) = \sum_{i \in \Gamma_n} (\tilde{Y}^n_i)^2. \]

As for $S_{\Gamma_n}$, the first step will be to show that

\[ \lim_{n \to \infty} \bigg\| \frac{T_{\Gamma_n}(h) - |\Gamma_n|\gamma(h)}{\sqrt{|\Gamma_n|}} - \frac{\tilde{T}_{\Gamma_n}(h) - |\Gamma_n|\gamma(h)_n}{\sqrt{|\Gamma_n|}} \bigg\|_2^2 = 0. \tag{3.4} \]

The second step is to show, using Theorem 3.1, that with an appropriately chosen sequence $(m_n)_{n \ge 1}$ we have

\[ \frac{\tilde{T}_{\Gamma_n}(h) - |\Gamma_n|\gamma(h)_n}{\sqrt{|\Gamma_n|}} \xrightarrow{\mathcal{D}} N(0, \sigma_T^2). \tag{3.5} \]

To prove (3.4), we find

\[ \bigg\| \frac{T_{\Gamma_n}(h) - |\Gamma_n|\gamma(h)}{\sqrt{|\Gamma_n|}} - \frac{\tilde{T}_{\Gamma_n}(h) - |\Gamma_n|\gamma(h)_n}{\sqrt{|\Gamma_n|}} \bigg\|_2^2 = \frac{1}{|\Gamma_n|} \mathrm{E}\bigg[ \Big( \sum_{i \in \Gamma_n} \big[ Y_i^2 - \mathrm{E}Y_i^2 - \big( (\tilde{Y}^n_i)^2 - \mathrm{E}(\tilde{Y}^n_i)^2 \big) \big] \Big)^2 \bigg] \]
\[ = \frac{1}{|\Gamma_n|} \sum_{i,j \in \Gamma_n} \mathrm{E}\big[ (Y_i^2 - \mathrm{E}Y_i^2)(Y_j^2 - \mathrm{E}Y_j^2) \big] + \frac{1}{|\Gamma_n|} \sum_{i,j \in \Gamma_n} \mathrm{E}\big[ \big( (\tilde{Y}^n_i)^2 - \mathrm{E}(\tilde{Y}^n_i)^2 \big)\big( (\tilde{Y}^n_j)^2 - \mathrm{E}(\tilde{Y}^n_j)^2 \big) \big] \]
\[ - \frac{2}{|\Gamma_n|} \sum_{i,j \in \Gamma_n} \mathrm{E}\big[ (Y_i^2 - \mathrm{E}Y_i^2)\big( (\tilde{Y}^n_j)^2 - \mathrm{E}(\tilde{Y}^n_j)^2 \big) \big]. \tag{3.6} \]

We will show convergence of the three terms in (3.6) separately. Using the calculation formulae from Appendix A, the first term equals

\[ \frac{1}{|\Gamma_n|} \sum_{i,j \in \Gamma_n} \bigg[ \kappa_4(M') \int_{\mathbb{R}^d} g(s-i)^2 g(s-j)^2\, ds + 2 \mathrm{Var}(M')^2 \Big( \int_{\mathbb{R}^d} g(s-i) g(s-j)\, ds \Big)^2 \bigg] \]
\[ = \sum_{j \in \mathbb{Z}^d} \frac{|\Gamma_n \cap (\Gamma_n - j)|}{|\Gamma_n|} \bigg[ \kappa_4(M') \int_{\mathbb{R}^d} g(s)^2 g(s-j)^2\, ds + 2 \mathrm{Var}(M')^2 \Big( \int_{\mathbb{R}^d} g(s) g(s-j)\, ds \Big)^2 \bigg]. \]

By dominated convergence, this converges to $\sigma_T^2$, using that $\sigma_T^2$ is finite and that all terms in $\sigma_T^2$ are non-negative. The second term in (3.6) equals

\[ \sum_{j \in \mathbb{Z}^d} \frac{|\Gamma_n \cap (\Gamma_n - j)|}{|\Gamma_n|} \bigg[ \kappa_4(M') \int_{\mathbb{R}^d} g_n(s)^2 g_n(s-j)^2\, ds + 2 \mathrm{Var}(M')^2 \Big( \int_{\mathbb{R}^d} g_n(s) g_n(s-j)\, ds \Big)^2 \bigg]. \]

By dominated convergence and Lemma 3.2, applied to $g_n$ instead of $g$ but keeping the upper bound $v_j(s)$, this also converges to $\sigma_T^2$. Similarly, the third term of (3.6) converges to $-2\sigma_T^2$, so the three limits cancel, and we have the desired result in (3.4).

Now define, for $n \in \mathbb{N}$ and $j \in \mathbb{Z}^d$,

\[ V_n(j) = \frac{(\tilde{Y}^n_j)^2 - \gamma(h)_n}{\sqrt{|\Gamma_n|}}. \]

We want to show that

\[ \sum_{j \in \Gamma_n} V_n(j) \xrightarrow{\mathcal{D}} N(0, \sigma_T^2),

using Theorem 3.1. By construction, $\mathrm{E}V_n(j) = 0$ for all $n \ge 1$ and $j \in \mathbb{Z}^d$, and for $n$ such that $m_n/3 > \|h\|$, the field $(V_n(j))_{j \in \mathbb{Z}^d}$ is $m_n$-dependent. Furthermore,

\[ \mathrm{E}\bigg[ \Big( \sum_{i \in \Gamma_n} V_n(i) \Big)^2 \bigg] = \frac{1}{|\Gamma_n|} \mathrm{E}\bigg[ \Big( \sum_{i \in \Gamma_n} \big( (\tilde{Y}^n_i)^2 - \mathrm{E}(\tilde{Y}^n_i)^2 \big) \Big)^2 \bigg] = \frac{1}{|\Gamma_n|} \sum_{i,j \in \Gamma_n} \mathrm{E}\big[ \big( (\tilde{Y}^n_i)^2 - \mathrm{E}(\tilde{Y}^n_i)^2 \big)\big( (\tilde{Y}^n_j)^2 - \mathrm{E}(\tilde{Y}^n_j)^2 \big) \big], \]

which equals the second term of (3.6) and therefore has limit $\sigma_T^2$. Additionally, using Appendix A,

\[ \sum_{i \in \Gamma_n} \mathrm{E}[V_n(i)^2] = \mathrm{E}\Big[ \big( (\tilde{Y}^n_0)^2 - \mathrm{E}(\tilde{Y}^n_0)^2 \big)^2 \Big] = \kappa_4(M') \int_{\mathbb{R}^d} g_n(s)^4\, ds + 2 \mathrm{Var}(M')^2 \Big( \int_{\mathbb{R}^d} g_n(s)^2\, ds \Big)^2 \]
\[ \le C \kappa_4(M') \int_{\mathbb{R}^d} v_0(s)\, ds + 2 \mathrm{Var}(M')^2 \Big( \int_{\mathbb{R}^d} v_0(s)\, ds \Big)^2 < \infty, \]

with $v_0$ as defined in Lemma 3.2. Now define

\[ Z = \Big( \sup_{n \ge 1} |\tilde{X}^n_0| + \sup_{n \ge 1} |\tilde{X}^n_h| \Big)^2 + C', \]

where $C' = 2 \mathrm{Var}(M') \int_{\mathbb{R}^d} f(s)^2\, ds$. Using Appendix A, we find that $0 \le \gamma(h)_n \le C'$ for all $n \ge 1$. Also note that the sequence $(\tilde{X}^n_0)_{n \ge 1}$ has independent increments and that, for $k = 1, 2, 3, 4$, cf. Appendix A,

\[ \sup_{n \ge 1} \big| \kappa_k(\tilde{X}^n_0) \big| = \sup_{n \ge 1} \Big| \kappa_k(M') \int_{C_n(0)} f(s)^k\, ds \Big| \le |\kappa_k(M')| \int_{\mathbb{R}^d} f(s)^k\, ds < \infty. \]

Thus also $\sup_{n \ge 1} \mathrm{E}\big[ (\tilde{X}^n_0)^4 \big] < \infty$. By stationarity, also $\sup_{n \ge 1} \mathrm{E}\big[ (\tilde{X}^n_h)^4 \big] < \infty$, and thus $\mathrm{E}Z^2 < \infty$, due to Corollary B.2. Then

\[ L_n(\epsilon) = m_n^{2d} \sum_{j \in \Gamma_n} \mathrm{E}\big[ V_n(j)^2 \mathbf{1}_{\{|V_n(j)| \ge \epsilon m_n^{-2d}\}} \big] = m_n^{2d}\, \mathrm{E}\Big[ \big( (\tilde{X}^n_0 - \tilde{X}^n_h)^2 - \gamma(h)_n \big)^2 \mathbf{1}_{\{|(\tilde{X}^n_0 - \tilde{X}^n_h)^2 - \gamma(h)_n| \ge \epsilon \sqrt{|\Gamma_n|}\, m_n^{-2d}\}} \Big] \]
\[ \le m_n^{2d}\, \mathrm{E}\big[ Z^2 \mathbf{1}_{\{Z \ge \epsilon \sqrt{|\Gamma_n|}\, m_n^{-2d}\}} \big]. \]

It now follows that $L_n(\epsilon) \to 0$ by choosing $(m_n)_{n \ge 1}$ as in the proof of Theorem 2.1 for $S_{\Gamma_n}$. Thus we can conclude from Theorem 3.1 that

\[ \frac{\tilde{T}_{\Gamma_n}(h) - |\Gamma_n|\gamma(h)_n}{\sqrt{|\Gamma_n|}} \xrightarrow{\mathcal{D}} N(0, \sigma_T^2), \]

and combining this with (3.4) gives the desired convergence.

4 Isotropic model

Now make the assumption that the observations in the field have the form

\[ X_i = \int_{\mathbb{R}^d} f(\|s-i\|)\, M(ds) \tag{4.1} \]

for $i \in \mathbb{Z}^d$, where $f : [0, \infty) \to [0, \infty)$ satisfies $\int_{\mathbb{R}^d} f(\|s\|)\, ds < \infty$. This is equivalent to assuming that $f(s)$ in (2.4) depends on $s$ only through $\|s\|$. With this assumption, the field is isotropic, so $\mathrm{E}(X_i - X_j)^2 = \gamma(\|i-j\|)$ for a variogram function $\gamma : [0, \infty) \to [0, \infty)$.

Estimation of $\mu = \mathrm{E}X_0$ and the corresponding asymptotic properties are unchanged, compared to the more general framework in Section 2. Also estimation of $\gamma(\|h\|)$ for $h \in \mathbb{Z}^d$ can be carried out as before, and the asymptotic results are still valid. However, we can improve the variogram estimator by taking into account that $\gamma(\|h\|)$ depends only on $\|h\|$. For this, we introduce

\[ \bar{T}_\Gamma(d_0) = \frac{1}{|h(d_0)|} \sum_{i \in \Gamma} \sum_{h \in h(d_0)} (X_i - X_{i+h})^2, \]

where $\Gamma$ is a finite subset of $\mathbb{Z}^d$, $d_0 > 0$ and $h(d_0) = \{ h \in \mathbb{Z}^d : \|h\| = d_0 \}$. We have

Theorem 4.1. Let $(\Gamma_n)_{n \ge 1}$ be a sequence of finite subsets of $\mathbb{Z}^d$ and assume that $|\Gamma_n|$ goes to infinity and $|\partial\Gamma_n|/|\Gamma_n|$ goes to 0. If

\[ \sum_{i \in \mathbb{Z}^d} \int_{\mathbb{R}^d} f(\|s\|) f(\|s-i\|)\, ds < \infty \]

and $\mathrm{E}(M')^4 < \infty$, then

\[ \frac{\bar{T}_{\Gamma_n}(d_0) - |\Gamma_n|\gamma(d_0)}{\sqrt{|\Gamma_n|}} \xrightarrow{\mathcal{D}} N(0, \bar{\sigma}_T^2), \]

where

\[ \bar{\sigma}_T^2 = \frac{1}{|h(d_0)|^2} \sum_{j \in \mathbb{Z}^d} \sum_{h, h' \in h(d_0)} \bigg[ \kappa_4(M') \int_{\mathbb{R}^d} g_h(s)^2 g_{h'}(s-j)^2\, ds + 2 \mathrm{Var}(M')^2 \Big( \int_{\mathbb{R}^d} g_h(s) g_{h'}(s-j)\, ds \Big)^2 \bigg] < \infty, \]

and $g_h(s) = f(\|s\|) - f(\|s-h\|)$.

The proof of Theorem 4.1 follows exactly the same lines as the second part of the proof of Theorem 2.1. Additional notation is only needed to take care of the extra summation in the definition of $\bar{T}_\Gamma(d_0)$.
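The direction-averaged estimator $\bar{T}_\Gamma(d_0)$ is straightforward to compute on a finite grid. The sketch below simulates a discretised isotropic field in $d = 2$ (gamma basis and Gaussian-shaped kernel, illustrative as before) and averages the squared increments over all lattice vectors $h$ with $\|h\| = d_0$; the per-site average approximates $\bar{T}_\Gamma(d_0)/|\Gamma|$, up to edge effects.

```python
import numpy as np
from itertools import product
from scipy.signal import fftconvolve

rng = np.random.default_rng(3)

# Direction-averaged variogram estimator in d = 2: average the squared
# increments (X_i - X_{i+h})^2 over all lattice vectors h with ||h|| = d0.
# Field, kernel and Levy basis are illustrative choices.
n, pad = 200, 20
u = np.arange(-pad, pad + 1)
K = np.exp(-(u[:, None] ** 2 + u[None, :] ** 2) / 8.0)      # isotropic kernel
M = rng.gamma(2.0, 1.0, size=(n + 2 * pad, n + 2 * pad))    # i.i.d. ID cells
X = fftconvolve(M, K, mode="valid")                         # n x n field

def lattice_directions(d0, rmax=10):
    """All h in Z^2 with ||h|| == d0 (up to floating point)."""
    return [(a, b) for a, b in product(range(-rmax, rmax + 1), repeat=2)
            if np.isclose(np.hypot(a, b), d0)]

def sq_increments(X, a, b):
    """(X_i - X_{i+(a,b)})^2 over all grid points where both exist."""
    n0, n1 = X.shape
    A = X[max(0, -a):n0 - max(0, a), max(0, -b):n1 - max(0, b)]
    B = X[max(0, a):n0 - max(0, -a), max(0, b):n1 - max(0, -b)]
    return (A - B) ** 2

def gamma_hat_bar(X, d0):
    """Per-site version of T-bar_Gamma(d0), estimating gamma(d0)."""
    H = lattice_directions(d0)
    return np.mean([sq_increments(X, a, b).mean() for (a, b) in H])

print(gamma_hat_bar(X, 1.0), gamma_hat_bar(X, np.sqrt(2.0)))
```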

5 Estimation of sample spacing

Assume that $d = 3$ and consider the isotropic case again. Let the random field $(X_t)_{t \in \mathbb{R}^3}$ be defined as

\[ X_t = \int_{\mathbb{R}^3} f(\|s-t\|)\, M(ds). \]

Suppose that the field is only observed in grid points within two parallel planes with an unknown distance that we want to estimate. Due to the isotropy, we can without loss of generality assume that observations are made in the two planes

\[ \mathbb{Z}^2 = \{ (z_1, z_2, 0) : z_1, z_2 \in \mathbb{Z} \} \qquad \text{and} \qquad \mathbb{Z}^2 + h_0 = \{ (z_1, z_2, d_0) : z_1, z_2 \in \mathbb{Z} \}, \]

where $h_0 = (0, 0, d_0)$. The aim is then to estimate $d_0$. Note that $\mathbb{Z}^2 + h_0 \subseteq \mathbb{Z}^3$ only if $d_0 \in \mathbb{Z}$, which is not assumed.

The idea will be to estimate $\gamma(d_0)$, the variogram evaluated at $d_0$, based only on pairs $(i, i + h_0)$, where $i \in \mathbb{Z}^2$ and $i + h_0 \in \mathbb{Z}^2 + h_0$. For a finite subset $\Gamma \subseteq \mathbb{Z}^2$, we can estimate $\gamma(d_0)$ by $\hat{\gamma}(d_0) = T_\Gamma/|\Gamma|$, where

\[ T_\Gamma = \sum_{i \in \Gamma} (X_i - X_{i+h_0})^2. \]

Assuming that the variogram function $d \mapsto \gamma(d)$ is known and strictly increasing, an estimator for $d_0$ is obtained by

\[ \hat{d}_0 = \gamma^{-1}\big( \hat{\gamma}(d_0) \big). \]

This estimator is studied in [13], where an expression for the variance of $T_\Gamma$ is derived. Furthermore, an approximate expression for the variance of $\hat{d}_0$ is found there using a Taylor approximation. Below we state a theorem giving asymptotic normality of $T_\Gamma$. In fact, apart from the scaling, the asymptotic variance is equal to the approximation to the variance suggested in [13]. A result showing asymptotic normality of $T_\Gamma$ can be directly translated into asymptotic confidence intervals for $\hat{d}_0$. If $d \mapsto \gamma(d)$ is also assumed to be differentiable, asymptotic normality of $\hat{d}_0$ is easily derived using the delta method.
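A minimal sketch of the resulting plug-in procedure: given paired observations from the two planes, estimate $\gamma(d_0)$ by the empirical mean of squared differences and invert the (assumed known, strictly increasing) variogram numerically. The exponential-type variogram and the Gaussian toy data below are illustrative assumptions only.

```python
import numpy as np
from scipy.optimize import brentq

# Plug-in spacing estimator d0_hat = gamma^{-1}(gamma_hat(d0)), assuming a
# known, strictly increasing variogram.  Illustrative choice:
# gamma(d) = c * (1 - exp(-d / a)), with sill c and range parameter a.
c, a = 2.0, 1.5
gamma = lambda d: c * (1.0 - np.exp(-d / a))

def estimate_spacing(plane0, plane1):
    """plane0[k], plane1[k] observe the pair (i_k, i_k + h0)."""
    gamma_hat = np.mean((plane0 - plane1) ** 2)     # = T_Gamma / |Gamma|
    return brentq(lambda d: gamma(d) - gamma_hat, 1e-9, 1e3)

# Toy usage: Gaussian pairs with E(X - Y)^2 = gamma(d0) for d0 = 2.
rng = np.random.default_rng(4)
d0, n = 2.0, 4000
v = c / 2.0                                         # marginal variance (sill / 2)
cov = v - gamma(d0) / 2.0
L = np.linalg.cholesky(np.array([[v, cov], [cov, v]]))
X0, X1 = L @ rng.standard_normal((2, n))
print(estimate_spacing(X0, X1))                     # should be close to 2.0
```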

Theorem 5.1. Let $(\Gamma_n)_{n \ge 1}$ be a sequence of finite subsets of $\mathbb{Z}^2$ and assume that $|\Gamma_n|$ goes to infinity and that $|\partial\Gamma_n|/|\Gamma_n|$ goes to 0. Assume furthermore that $\mathrm{E}(M')^4 < \infty$ and

\[ \sum_{i \in \mathbb{Z}^2} \int_{\mathbb{R}^3} f(\|s\|) f(\|s-i\|)\, ds < \infty. \]

Then

\[ \frac{T_{\Gamma_n} - |\Gamma_n|\gamma(d_0)}{\sqrt{|\Gamma_n|}} \xrightarrow{\mathcal{D}} N(0, \sigma_T^2), \]

where

\[ \sigma_T^2 = \sum_{j \in \mathbb{Z}^2} \bigg[ \kappa_4(M') \int_{\mathbb{R}^3} g(s)^2 g(s-j)^2\, ds + 2 \mathrm{Var}(M')^2 \Big( \int_{\mathbb{R}^3} g(s) g(s-j)\, ds \Big)^2 \bigg] < \infty \]

and $g(s) = f(\|s\|) - f(\|s-h_0\|)$. Above, $\partial\Gamma_n$ denotes the boundary of $\Gamma_n$ as a subset of $\mathbb{Z}^2$.

The proof of Theorem 5.1 is almost identical to the proof of Theorem 2.1, but with $\mathbb{Z}^d$ replaced by $\mathbb{Z}^2$ as the index set for the observations.

Acknowledgements

This work was supported by the Centre for Stochastic Geometry and Advanced Bioimaging, funded by the Villum Foundation.

A Cumulant formulas

For two variables $Y_1, Y_2$ of the form (2.4),

\[ Y_1 = \int_{\mathbb{R}^d} f_1(s)\, M(ds), \qquad Y_2 = \int_{\mathbb{R}^d} f_2(s)\, M(ds), \]

with kernels $f_1, f_2$ satisfying (2.3), the joint cumulant function is given as

\[ \log \mathrm{E}\big( e^{i(\lambda_1 Y_1 + \lambda_2 Y_2)} \big) = \int_{\mathbb{R}^d} \log \mathrm{E}\big( e^{i(\lambda_1 f_1(s) + \lambda_2 f_2(s)) M'} \big)\, ds; \]

see [6] for details. Assuming that $\mathrm{E}|M'|^k < \infty$ and differentiating $k$ times with respect to $\lambda_1$ gives

\[ \kappa_k(Y_1) = \kappa_k(M') \int_{\mathbb{R}^d} f_1(s)^k\, ds, \]

where $\kappa_k(X)$ denotes the $k$th cumulant of the variable $X$. Similarly, if $\mathrm{E}(M')^2 < \infty$, we find

\[ \mathrm{Cov}(Y_1, Y_2) = \mathrm{Var}(M') \int_{\mathbb{R}^d} f_1(s) f_2(s)\, ds. \]

If furthermore $\mathrm{E}Y_1 = \mathrm{E}Y_2 = 0$ and $\mathrm{E}(M')^4 < \infty$, it holds that

\[ \mathrm{E}\big( (Y_1^2 - \mathrm{E}Y_1^2)(Y_2^2 - \mathrm{E}Y_2^2) \big) = \kappa_4(M') \int_{\mathbb{R}^d} f_1(s)^2 f_2(s)^2\, ds + 2 \mathrm{Var}(M')^2 \Big( \int_{\mathbb{R}^d} f_1(s) f_2(s)\, ds \Big)^2. \]
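The covariance identity above is easy to check by Monte Carlo in a discretised setting, where the integral becomes a sum over unit cells. The sketch below uses an illustrative gamma basis ($\mathrm{Var}\,M' = 2$) and two shifted kernels.

```python
import numpy as np

rng = np.random.default_rng(5)

# Monte Carlo check of Cov(Y1, Y2) = Var(M') * int f1(s) f2(s) ds in a
# discretised setting: Y_k = sum_s f_k(s) M_s with i.i.d. gamma(2, 1)
# cell variables M_s (so Var M' = 2).  All choices illustrative.
s = np.arange(-8.0, 9.0)
f1 = np.exp(-0.5 * (s / 2.0) ** 2)
f2 = np.exp(-0.5 * ((s - 1.0) / 2.0) ** 2)
VarM = 2.0

reps = 100_000
M = rng.gamma(2.0, 1.0, size=(reps, s.size))
Y1, Y2 = M @ f1, M @ f2
print(np.cov(Y1, Y2)[0, 1], VarM * np.sum(f1 * f2))
```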

B Corollary to Ottaviani's inequality

In this appendix, $(X_n)_{n \ge 1}$ denotes a sequence of random variables. Let $S_n = X_1 + \dots + X_n$ and $M_n = \max_{1 \le k \le n} |S_k|$. We will use the following version of Ottaviani's inequality; see [7, Section 6.24].

Theorem B.1. Let $(X_n)_{n \ge 1}$ be a sequence of independent random variables. Then, for all $x, y \in \mathbb{R}$,

\[ \mathrm{P}(M_n > x + y) \min_{1 \le k \le n} \mathrm{P}\big( |S_n - S_k| \le y \big) \le \mathrm{P}\big( |S_n| > x \big). \]

Corollary B.2. Let $(X_n)_{n \ge 1}$ be a sequence of independent random variables. Then, for all $p \ge 1$,

\[ \sup_{n \ge 1} \mathrm{E}|S_n|^p < \infty \qquad \text{if and only if} \qquad \mathrm{E}M^p < \infty, \]

where $M = \sup_{n \ge 1} M_n$.

Note that the corollary can be shown for $0 < p < 1$ as well. However, in the present paper we only need the result for $p = 2, 4$.

Proof of the corollary. First notice that it is easy to show directly that $\mathrm{E}M^p < \infty$ implies $\sup_{n \ge 1} \mathrm{E}|S_n|^p < \infty$. The opposite statement follows from monotone convergence, if we can show that for all $n \ge 1$

\[ \mathrm{E}M_n^p \le C \max_{1 \le k \le n} \mathrm{E}|S_k|^p \tag{B.1} \]

for a finite constant $C$ that only depends on $p$. By assumption, $\sup_{n \ge 1} \mathrm{E}|S_n|^p < \infty$ and therefore $m = \max_{1 \le k \le n} \mathrm{E}|S_k|^p < \infty$. Let $\tau = (2^{p+1} m)^{1/p}$. Then, by Markov's inequality, we have for $1 \le k \le n$

\[ \mathrm{P}\big( |S_n - S_k| > \tau \big) \le \frac{\mathrm{E}\big( |S_n - S_k|^p \big)}{\tau^p} \le \frac{2^p \big( \mathrm{E}|S_n|^p + \mathrm{E}|S_k|^p \big)}{\tau^p} \le \frac{1}{2}, \]

where the inequality $|x + y|^p \le 2^p (|x|^p + |y|^p)$ has been applied. It follows that $\min_{1 \le k \le n} \mathrm{P}(|S_n - S_k| \le \tau) \ge \frac{1}{2}$, so by Theorem B.1,

\[ \mathrm{P}(M_n > x) = \mathrm{P}\big( M_n > (x - \tau) + \tau \big) \le 2 \mathrm{P}\big( |S_n| + \tau > x \big). \]

Integrating both sides with respect to $p x^{p-1}\, dx$ gives

\[ \mathrm{E}M_n^p \le 2 \mathrm{E}\big( (|S_n| + \tau)^p \big) \le 2^{p+1} \mathrm{E}\big( |S_n|^p + \tau^p \big) \le 2^{p+1} (1 + 2^{p+1})\, m. \]

References

[1] Adler, R. J., Samorodnitsky, G. and Taylor, J. E. (2010). Excursion sets of three classes of stable random fields. Adv. Appl. Prob. 42, 293–318.

[2] Adler, R. J., Samorodnitsky, G. and Taylor, J. E. (2013). High level excursion set geometry for non-Gaussian infinitely divisible random fields. Ann. Prob. 41, 134–169.

[3] Barndorff-Nielsen, O. E. and Schmiegel, J. (2004). Lévy-based tempo-spatial modelling with applications to turbulence. Uspekhi Mat. Nauk. 159, 63–90.

[4] El Machkouri, M., Volný, D. and Wu, W. B. (2013). A central limit theorem for stationary random fields. Stoch. Proc. Appl. 123, 1–14.

[5] Heinrich, L. (1988). Asymptotic behaviour of an empirical nearest-neighbour distance function for stationary Poisson cluster processes. Math. Nachr. 136, 131–148.

[6] Hellmund, G., Prokešová, M. and Jensen, E. B. V. (2008). Lévy-based Cox point processes. Adv. Appl. Prob. 40, 603–629.

[7] Hoffmann-Jørgensen, J. (1994). Probability with a View towards Statistics. Vol. I. Chapman & Hall, New York and London.

[8] Jónsdóttir, K. Ý., Schmiegel, J. and Jensen, E. B. V. (2008). Lévy-based growth models. Bernoulli 14, 62–90.

[9] Jónsdóttir, K. Ý., Rønn-Nielsen, A., Mouridsen, K. and Jensen, E. B. V. (2013). Lévy-based modelling in brain imaging. Scand. J. Stat. 40, 511–529.

[10] Rajput, B. S. and Rosiński, J. (1989). Spectral representations of infinitely divisible processes. Prob. Theory Rel. Fields 82, 451–488.

[11] Rosiński, J. and Samorodnitsky, G. (1993). Distributions of subadditive functionals of sample paths of infinitely divisible processes. Ann. Probab. 21, 996–1014.

[12] Rønn-Nielsen, A. and Jensen, E. B. V. (2016). Tail asymptotics for the supremum of an infinitely divisible field with convolution equivalent Lévy measure. J. Appl. Prob. 53, 244–261.

[13] Rønn-Nielsen, A., Sporring, J. and Jensen, E. B. V. (2017). Estimation of sample spacing in stochastic processes. Image Anal. Stereol. 36, 43–49.

[14] Rønn-Nielsen, A. and Jensen, E. B. V. (2017). Excursion sets of infinitely divisible random fields with convolution equivalent Lévy measure. J. Appl. Prob. 54, 833–851.

[15] Sato, K.-I. (1999). Lévy Processes and Infinitely Divisible Distributions. Cambridge University Press, Cambridge.