
Oberwolfach Preprints (OWP)

MATTHIAS SCHULTE AND CHRISTOPH THÄLE
Central Limit Theorems for the Radial Spanning Tree

Mathematisches Forschungsinstitut Oberwolfach gGmbH
Oberwolfach Preprints (OWP), ISSN 1864-7596

Oberwolfach Preprints (OWP)

Starting in 2007, the MFO publishes a preprint series which mainly contains research results related to a longer stay in Oberwolfach. In particular, this concerns the Research in Pairs Programme (RiP) and the Oberwolfach Leibniz Fellows (OLF), but it can also include an Oberwolfach Lecture, for example. A preprint can have a size from [...] pages, and the MFO will publish it on its website as well as by hard copy. Every RiP group or Oberwolfach Leibniz Fellow may receive on request 30 free hard copies (DIN A4, black and white copy) by surface mail. Of course, the full copyright is left to the authors. The MFO only needs the right to publish it on its website as a documentation of the research work done at the MFO, which you are accepting by sending us your file.

In case of interest, please send a pdf file of your preprint by e-mail to rip@mfo.de or owlf@mfo.de, respectively. The file should be sent to the MFO within 12 months after your stay as RiP or OLF at the MFO. There are no requirements for the format of the preprint, except that the introduction should contain a short appreciation and that the paper size (respectively format) should be DIN A4, "letter" or "article". On the front page of the hard copies, which contains the logo of the MFO, title and authors, we shall add a running number (20XX - XX). We cordially invite the researchers within the RiP or OLF programme to make use of this offer and would like to thank you in advance for your cooperation.

Imprint: Mathematisches Forschungsinstitut Oberwolfach gGmbH (MFO), Schwarzwaldstrasse, Oberwolfach-Walke, Germany, admin@mfo.de. The Oberwolfach Preprints (OWP, ISSN 1864-7596) are published by the MFO. Copyright of the content is held by the authors.

Central limit theorems for the radial spanning tree

Matthias Schulte and Christoph Thäle

Institute of Stochastics, Karlsruhe Institute of Technology, Germany, matthias.schulte@kit.edu
Faculty of Mathematics, Ruhr University Bochum, Germany, christoph.thaele@rub.de

Abstract. Consider a homogeneous Poisson point process in a compact convex set in d-dimensional Euclidean space which has interior points and contains the origin. The radial spanning tree is constructed by connecting each point of the Poisson point process with its nearest neighbour that is closer to the origin. For increasing intensity of the underlying Poisson point process, the paper provides expectation and variance asymptotics as well as central limit theorems with rates of convergence for a class of edge functionals including the total edge length.

Keywords. Central limit theorem, directed spanning forest, Poisson point process, radial spanning tree, random graph.
MSC. Primary 60D05; Secondary 60F05, 60G55.

1 Introduction and results

Random graphs for which the relative position of their vertices in space determines the presence of edges have found considerable attention in the probability and combinatorics literature during the last decades, cf. [1, 8, 9, 14]. Among the most popular models are the nearest-neighbour graph and the random geometric or Gilbert graph. Geometric random graphs with a tree structure have attracted particular interest. For example, the minimal spanning tree has been studied intensively in stochastic optimization, cf. [20, 21]. Bhatt and Roy [4] have proposed a model of a geometric random graph with a tree structure, the so-called minimal directed spanning tree, and in [2] another model has been introduced by Baccelli and Bordenave. This so-called radial spanning tree is also of interest in the area of communication networks, as discussed in [2, 5].

To define the radial spanning tree formally, let W ⊂ R^d, d ≥ 2, be a compact convex set with d-dimensional Lebesgue measure λ_d(W) > 0 which contains the origin 0, and let η_t be a Poisson point process in W whose intensity measure is a multiple t ≥ 1 of the Lebesgue measure restricted to W; see Section 2 for more details. For a point x ∈ η_t we denote by n(x, η_t) the nearest neighbour of x in η_t ∪ {0} which is closer to the origin than x, i.e., for which ∥n(x, η_t)∥ < ∥x∥ and ∥x − n(x, η_t)∥ ≤ ∥x − y∥ for all y ∈ η_t ∩ B^d(0, ∥x∥) \ {x}, where ∥·∥ stands for the Euclidean norm and B^d(z, r) is the d-dimensional closed ball with centre z ∈ R^d and radius r ≥ 0. In what follows, we call n(x, η_t) the radial nearest neighbour of x. The radial spanning tree RST(η_t) with respect to η_t, rooted at the origin 0,

is the random tree in which each point x ∈ η_t is connected by an edge to its radial nearest neighbour; see Figure 1 and Figure 2 for pictures in dimensions 2 and 3.

In the original paper [2], the radial spanning tree was constructed with respect to a stationary Poisson point process in R^d, and properties dealing with individual edges or vertices, such as edge lengths and degree distributions, as well as the behaviour of semi-infinite paths were considered. Moreover, spatial averages of edge lengths within W have been investigated, especially when W is a ball with increasing radius. Semi-infinite paths and the number of infinite subtrees of the root were further studied by Baccelli, Coupier and Tran [3], while Bordenave [5] considered a closely related navigation problem. However, several questions related to cumulative functionals such as the total edge length remained open. For example, the question of a central limit theorem for the total edge length of the radial spanning tree within a convex observation window has been brought up by Penrose and Wade [17] and is still one of the prominent open problems. Note that for such a central limit theorem the set-up in which the intensity goes to infinity and the set W is kept fixed is equivalent to the situation in which W increases to R^d for fixed intensity.

The main difficulty in proving a central limit theorem is that the usual existing techniques are, at least, not directly applicable; see [17] for a more detailed discussion. One reason for this is the lack of spatial homogeneity of the construction, a consequence of the observation that the geometry of the set of possible radial nearest neighbours of a given point changes with the distance of the point to the origin. The main contribution of our paper is a central limit theorem together with an optimal rate of convergence. Its proof relies on a recent Berry-Esseen bound for the normal approximation of Poisson functionals from Last, Peccati and Schulte [11], which, because of its geometric flavour, is particularly well suited for models arising in stochastic geometry. The major technical problem in order to prove a central limit theorem is to control appropriately the asymptotic behaviour of the variance. This sophisticated issue is settled in our case on the basis of a recent non-degeneracy condition also taken from [11]. Although a central limit theorem for the radial spanning tree is an open problem, we remark that central limit theorems for edge-length functionals of the minimal spanning tree of a random point set have been obtained by Kesten and Lee [10] and later also by Chatterjee and Sen [6], Penrose [15] and Penrose and Yukich [18]. Moreover, Penrose and Wade [16] have shown a central limit theorem for the total edge length of the minimal directed spanning tree.

In order to state our main results formally, we use the notation l(x, η_t) := ∥x − n(x, η_t)∥ and define the edge-length functionals

L_t^a := Σ_{x ∈ η_t} l(x, η_t)^a,   a ≥ 0, t ≥ 1,   (1.1)

of RST(η_t); the assumption that a ≥ 0 is discussed in Remark 3.3 below. Note that L_t^0 is just the number of vertices of RST(η_t), while L_t^1 is its total edge length.

Our first result provides expectation and variance asymptotics for L_t^a. To state it, denote by κ_d the volume of the d-dimensional unit ball and by Γ the usual Gamma function. An explicit representation of the constant v_a in the next theorem will be derived in Lemma 3.4 below.
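The definition of RST(η_t) and of the edge-length functionals (1.1) is easy to turn into a small simulation. The following minimal sketch is our own illustration in Python with NumPy (it is not the spatgraphs package used for the figures, and all function names are ours); it assumes the window W = [0,1]^d, which contains the origin as a vertex, samples η_t, attaches every point to its radial nearest neighbour and evaluates L_t^a.

```python
import numpy as np

def radial_spanning_tree(t, d=2, rng=None):
    """Sample a Poisson point process of intensity t on W = [0,1]^d and return,
    for every point, its radial nearest neighbour, i.e. the closest point among
    the origin and all points that are strictly closer to the origin."""
    rng = np.random.default_rng(rng)
    n = rng.poisson(t)                      # number of points is Poisson(t * lambda_d(W)) with lambda_d(W) = 1
    pts = rng.random((n, d))                # i.i.d. uniform points in W
    radii = np.linalg.norm(pts, axis=1)
    parents = np.zeros_like(pts)            # default parent: the origin 0
    for i, x in enumerate(pts):
        closer = pts[radii < radii[i]]      # candidate radial neighbours
        if len(closer) > 0:
            dists = np.linalg.norm(closer - x, axis=1)
            j = np.argmin(dists)
            if dists[j] < radii[i]:         # the origin itself is also a candidate at distance ||x||
                parents[i] = closer[j]
    return pts, parents

def edge_length_functional(pts, parents, a=1.0):
    """Edge-length functional L_t^a of (1.1): sum of the a-th powers of the edge lengths."""
    return np.sum(np.linalg.norm(pts - parents, axis=1) ** a)

pts, parents = radial_spanning_tree(t=300, rng=0)
print(len(pts), edge_length_functional(pts, parents, a=1.0))
```

Averaging edge_length_functional over independent runs gives a quick empirical counterpart to the expectation and variance asymptotics stated in Theorem 1.1 below.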

Figure 1: Simulations of radial spanning trees in the unit square with t = 50 (left) and t = 300 (right). They have been produced with the freely available R-package spatgraphs.

Theorem 1.1. Let a ≥ 0. Then

lim_{t→∞} t^{a/d − 1} E[L_t^a] = (2/κ_d)^{a/d} Γ(1 + a/d) λ_d(W)   (1.2)

and there exists a constant v_a ∈ (0, ∞), only depending on a and d, such that

lim_{t→∞} t^{2a/d − 1} Var[L_t^a] = v_a λ_d(W).   (1.3)

We turn now to the central limit theorem. Our next result in particular ensures that, after suitable centering and rescaling, the functionals L_t^a converge in distribution to a standard Gaussian random variable, as t → ∞.

Theorem 1.2. Let a ≥ 0 and let Z be a standard Gaussian random variable. Then there is a constant C ∈ (0, ∞), only depending on W, the parameter a and the space dimension d, such that

sup_{s ∈ R} | P( (L_t^a − E[L_t^a]) / √(Var[L_t^a]) ≤ s ) − P(Z ≤ s) | ≤ C t^{−1/2},   t ≥ 1.

For a = 1 and a = 0, Theorem 1.2 says that the total edge length and the number of edges satisfy a central limit theorem, as t → ∞. While for a > 0 the result is non-trivial, a central limit theorem in the case a = 0 is immediate from the following observations. Namely, there is a canonical one-to-one correspondence between the points of η_t and the edges in the radial spanning tree. Moreover, the number of points of η_t is a Poisson distributed random variable with mean and variance tλ_d(W), and it is well known from the classical central limit theorem that under suitable normalization such a random variable is well approximated by a standard Gaussian random variable. Since for this situation the rate t^{−1/2} is known to be optimal, the rate of convergence in Theorem 1.2 can in general not be improved.
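As a quick plausibility check of (1.2) (our own numerical illustration, not part of the original text): for d = 2 and a = 1 we have κ_2 = π, so the limit constant equals (2/π)^{1/2} Γ(3/2) = √2/2 ≈ 0.7071. For the unit square of Figure 1, where λ_2(W) = 1, this predicts E[L_t^1] ≈ 0.7071 √t for large t, about 12.2 when t = 300 (only as the asymptotic prediction; finite-t boundary effects are not accounted for). A two-line computation:

```python
import math

d, a = 2, 1.0
kappa_d = math.pi                      # volume of the unit disc B^2
const = (2.0 / kappa_d) ** (a / d) * math.gamma(1.0 + a / d)
print(const, const * math.sqrt(300))   # ~0.7071 and ~12.25
```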

Figure 2: A simulation of a radial spanning tree in the unit cube with t = 500. It has been produced with the freely available R-package spatgraphs.

The radial spanning tree is closely related to another geometric random graph, namely the directed spanning forest, which has also been introduced in [2]. The directed spanning forest DSF(η_t) with respect to a direction e ∈ S^{d−1} is constructed in the following way. For x ∈ R^d let H_{x,e} be the half-space H_{x,e} := {y ∈ R^d : ⟨e, y − x⟩ ≤ 0}. We now take the points of η_t as vertices of DSF(η_t) and connect each point x ∈ η_t with its closest neighbour ˆn(x, η_t) in η_t ∩ H_{x,e} \ {x}. If there is no such point, we put ˆn(x, η_t) := ∞. This means that we look for the neighbour of a vertex of the directed spanning forest always in the same direction, whereas this direction changes according to the relative position to the origin in the case of the radial spanning tree. The directed spanning forest can be regarded as a local approximation of the radial spanning tree at distances far away from the origin. As for the radial spanning tree we define l_e(x, η_t) := ∥x − ˆn(x, η_t)∥ if ˆn(x, η_t) ≠ ∞ and l_e(x, η_t) := 0 if ˆn(x, η_t) = ∞, and

L̃_t^a := Σ_{x ∈ η_t} l_e(x, η_t)^a,   a ≥ 0, t ≥ 1.

In order to avoid boundary effects it is convenient to replace the Poisson point process η_t in W by a unit-intensity stationary Poisson point process η in R^d. For this set-up we introduce the functionals

L̃_W^a := Σ_{x ∈ η ∩ W} l_e(x, η)^a,   a ≥ 0.   (1.4)
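For comparison with the radial construction above, the following sketch (again our own Python illustration, with an arbitrary direction e and the unit square as observation window) builds the directed spanning forest: every point is joined to its nearest neighbour in the closed half-space behind it with respect to e, and points without such a neighbour contribute edge length 0, mirroring the definition of l_e and the functional (1.4).

```python
import numpy as np

def directed_spanning_forest(pts, e):
    """For each point x, return the index of its nearest neighbour among the
    points y with <e, y - x> <= 0 (excluding x itself), or -1 if no such point
    exists; 'length' is the corresponding edge length l_e(x, .)."""
    e = np.asarray(e, dtype=float)
    e = e / np.linalg.norm(e)
    parent = np.full(len(pts), -1)
    length = np.zeros(len(pts))
    for i, x in enumerate(pts):
        behind = np.where((pts - x) @ e <= 0)[0]
        behind = behind[behind != i]
        if behind.size > 0:
            dist = np.linalg.norm(pts[behind] - x, axis=1)
            parent[i], length[i] = behind[np.argmin(dist)], dist.min()
    return parent, length

rng = np.random.default_rng(1)
pts = rng.random((rng.poisson(300), 2))          # Poisson point process of intensity 300 on [0,1]^2
parent, length = directed_spanning_forest(pts, e=[0.0, 1.0])
print(np.sum(length))                            # the functional of (1.4) for a = 1
```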

Let us recall that a forest is a family of one or more disjoint trees, while a tree is an undirected and simple graph without cycles. Although with strictly positive probability DSF(η_t) is a union of more than one disjoint tree, Coupier and Tran have shown in [7] that in the directed spanning forest DSF(η) of a stationary Poisson point process η in the plane any two paths eventually coalesce, implying that DSF(η) is almost surely a tree. Resembling the intersection behaviour of Brownian motions, where the intersection of any finite number of independent Brownian motions in R^d with different starting points is almost surely non-empty if and only if d = 2, cf. [13, Theorem 9.3 b], it seems that a similar property is no longer true for dimensions d ≥ 3. Note that, in contrast to these dimension-sensitive results, our expectation and variance asymptotics and our central limit theorem hold for any space dimension d ≥ 2.

A key idea of the proof of Theorem 1.1 is to show that, for a ≥ 0,

lim_{t→∞} t^{a/d − 1} E[L_t^a] = lim_{r→∞} E[L̃^a_{B^d(0,r)}] / (κ_d r^d) · λ_d(W)  and  lim_{t→∞} t^{2a/d − 1} Var[L_t^a] = lim_{r→∞} Var[L̃^a_{B^d(0,r)}] / (κ_d r^d) · λ_d(W).   (1.5)

In other words, this means that the expectation and the variance of L_t^a can be approximated by those of L̃^a_W or L̃^a_{B^d(0,r)}, which are much easier to study because of translation invariance. We shall use a recent non-degeneracy criterion from [11] to show that the right-hand side in (1.5) is bounded away from zero, which then implies the same property also for the edge-length functionals of the radial spanning tree. To prove Theorem 1.2 we use again recent findings from [11]. These are reviewed in Section 2 together with some other background material. In Section 3 we establish Theorem 1.1 before proving Theorem 1.2 in the final Section 4.

2 Preliminaries

Notation. Let B(R^d) be the Borel σ-field on R^d and let λ_d be the Lebesgue measure on R^d. For a set A ∈ B(R^d) we denote by int(A) the interior of A and by λ_d|_A the restriction of λ_d to A. For z ∈ R^d and r ≥ 0, B^d(z, r) stands for the closed ball with centre z and radius r, and B^d is the d-dimensional closed unit ball, whose volume is given by κ_d := λ_d(B^d). By S^{d−1} we denote the unit sphere in R^d.

Poisson point processes. Let N be the set of σ-finite counting measures on R^d and let it be equipped with the σ-field that is generated by all maps N ∋ µ ↦ µ(A), A ∈ B(R^d). For a σ-finite non-atomic measure µ on R^d, a Poisson point process η with intensity measure µ is a random element in N such that, for all n ∈ N and pairwise disjoint sets A_1,..., A_n ∈ B(R^d), the random variables η(A_1),..., η(A_n) are independent,

and, for A ∈ B(R^d) with µ(A) ∈ [0, ∞), η(A) is Poisson distributed with parameter µ(A).

We say that η is stationary with intensity t > 0 if µ = tλ_d, implying that η has the same distribution as the translated point process η + z for all z ∈ R^d. By abuse of terminology we also speak about t as intensity if the measure µ has the form tλ_d|_W for some, possibly compact, subset W ⊂ R^d. A Poisson point process η can with probability one be represented as

η = Σ_{n=1}^m δ_{x_n},   x_1, x_2, ... ∈ R^d, m ∈ N ∪ {0, ∞},

where δ_x stands for the unit-mass Dirac measure concentrated at x ∈ R^d. In particular, if µ(R^d) < ∞, η can be written in form of the distributional identity

η = Σ_{n=1}^M δ_{X_n}

with i.i.d. random points (X_n)_{n ∈ N} that are distributed according to µ(·)/µ(R^d) and a Poisson distributed random variable M with mean µ(R^d) that is independent of (X_n)_{n ∈ N}. As usual in the theory of point processes, we may identify η with its support and write, by slight abuse of notation, x ∈ η whenever x is charged by the random measure η. Moreover, we write η_A for the restriction of η to a subset A ∈ B(R^d). Although we interpreted η as a random set in the introduction, we prefer from now on to regard η as a random measure.

To deal with functionals of η, the so-called multivariate Mecke formula will turn out to be useful for us. It states that, for k ∈ N and a non-negative measurable function f : (R^d)^k × N → R,

E[ Σ_{(x_1,...,x_k) ∈ η^k_{≠}} f(x_1,..., x_k, η) ] = ∫_{(R^d)^k} E[f(x_1,..., x_k, η + δ_{x_1} + ... + δ_{x_k})] µ^k(d(x_1,..., x_k)),   (2.1)

where η^k_{≠} stands for the set of all k-tuples of distinct points of η and the integration on the right-hand side is with respect to the k-fold product measure of µ (see Theorem 1.15 in [19]). It is a remarkable fact that (2.1) for k = 1 also characterizes the Poisson point process η.

Variance asymptotics and central limit theorems for Poisson functionals. As a Poisson functional we denote a random variable F which only depends on a Poisson point process η. Every Poisson functional F can be written as F = f(η) almost surely with a measurable function f : N → R, called a representative of F. For a Poisson functional F with representative f and z ∈ R^d, the first-order difference operator is defined by

D_z F := f(η + δ_z) − f(η).

In other words, D_z F measures the effect on F when the point z is added to η. Moreover, for z_1, z_2 ∈ R^d, the second-order difference operator is given by

D²_{z_1,z_2} F := D_{z_1}(D_{z_2} F) = f(η + δ_{z_1} + δ_{z_2}) − f(η + δ_{z_1}) − f(η + δ_{z_2}) + f(η).
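To make the role of the difference operators concrete, here is a small numerical illustration (our own sketch, not part of the original text): for the total edge length F = L^1 of the radial spanning tree, D_z F and D²_{z_1,z_2} F are obtained simply by recomputing the functional after adding the extra points.

```python
import numpy as np

def rst_total_length(pts):
    """Total edge length L^1 of the radial spanning tree of the points in pts:
    every point is joined to the nearest point that is closer to the origin,
    the origin itself always being a candidate."""
    total = 0.0
    radii = np.linalg.norm(pts, axis=1)
    for i, x in enumerate(pts):
        cand = np.vstack([pts[radii < radii[i]], np.zeros(pts.shape[1])])
        total += np.min(np.linalg.norm(cand - x, axis=1))
    return total

def add(pts, *zs):
    """Return the configuration pts with the extra points zs appended."""
    return np.vstack([pts] + [np.atleast_2d(z) for z in zs])

rng = np.random.default_rng(2)
eta = rng.random((rng.poisson(200), 2))           # Poisson point process on [0,1]^2
f = rst_total_length
z1, z2 = np.array([0.3, 0.4]), np.array([0.7, 0.2])

D_z1 = f(add(eta, z1)) - f(eta)                   # first-order difference operator D_{z1} F
D2 = (f(add(eta, z1, z2)) - f(add(eta, z1))       # second-order difference operator D^2_{z1,z2} F
      - f(add(eta, z2)) + f(eta))
print(D_z1, D2)
```

Proposition 2.1 below asks for uniform moment bounds on exactly these quantities and for a control of the probability that the second-order difference is non-zero; Lemma 4.1 and Lemma 4.2 verify these conditions for the edge-length functionals.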

The difference operators play a crucial role in the following result, which is the key tool to establish our central limit theorem; see [11, Proposition 1.3].

Proposition 2.1. Let (F_t)_{t ≥ 1} be a family of Poisson functionals depending on the Poisson point processes (η_t)_{t ≥ 1} with intensity measures tλ_d|_W for t ≥ 1, where W ⊂ R^d is a compact convex set with interior points. Suppose that there are constants c_1, c_2 ∈ (0, ∞) such that

E[|D_z F_t|^5] ≤ c_1 and E[|D²_{z_1,z_2} F_t|^5] ≤ c_2,   z, z_1, z_2 ∈ W, t ≥ 1.   (2.2)

Further assume that t^{−1} Var[F_t] ≥ v for t ≥ t_0, for some v, t_0 ∈ (0, ∞), and that

sup_{z_1 ∈ W, t ≥ 1} t ∫_W P(D²_{z_1,z_2} F_t ≠ 0)^{1/20} dz_2 < ∞,   (2.3)

and let Z be a standard Gaussian random variable. Then there is a constant C ∈ (0, ∞) such that

sup_{s ∈ R} | P( (F_t − E[F_t]) / √(Var[F_t]) ≤ s ) − P(Z ≤ s) | ≤ C t^{−1/2},   t ≥ 1.   (2.4)

The proof of Proposition 2.1 given in [11] is based on a combination of Stein's method for normal approximation, the Malliavin calculus of variations and recent findings concerning the Ornstein-Uhlenbeck semigroup on the Poisson space around the so-called Mehler formula. Clearly, (2.4) implies that (F_t − E[F_t]) / √(Var[F_t]) converges in distribution to a standard Gaussian random variable, as t → ∞, but it also delivers a rate of convergence for the so-called Kolmogorov distance. A similar bound is also available for the Wasserstein distance, but in order to keep the result transparent, we restrict ourselves here to the more prominent and, as we think, more natural Kolmogorov distance.

One problem to overcome when one wishes to apply Proposition 2.1 is to show that the limiting variance of the family (F_t)_{t ≥ 1} of Poisson functionals is not degenerate, in the sense that t^{−1} Var[F_t] is uniformly bounded away from zero for sufficiently large t. As discussed in the introduction, we will relate the variance of L_t^a, as t → ∞, to the variance of L̃^a_{B^d(0,r)}, as r → ∞. A key tool to show that Var[L̃^a_W] ≥ v_a λ_d(W) with a constant v_a ∈ (0, ∞) is the following result, which is a version of Theorem 5.2 in [11].

Proposition 2.2. Let η be a stationary Poisson point process in R^d with intensity measure λ_d and let F be a square-integrable Poisson functional depending on η with representative f : N → R. Assume that there exist k ∈ N, sets I_1, I_2 ⊂ {1,..., k} with I_1 ∪ I_2 = {1,..., k}, a constant c ∈ (0, ∞) and bounded sets W_0, A_1,..., A_k ⊂ R^d with strictly positive Lebesgue measure such that

| E[ f(η + δ_x + Σ_{i ∈ I_1} δ_{x+y_i}) − f(η + δ_x + Σ_{i ∈ I_2} δ_{x+y_i}) ] | ≥ c

for x ∈ W_0 and y_i ∈ A_i, i ∈ {1,..., k}. Then there is a constant v ∈ (0, ∞), only depending on c, k and A_1,..., A_k, such that Var[F] ≥ v λ_d(W_0).

Proof. We define U = {(x_1,..., x_{k+1}) ∈ (R^d)^{k+1} : x_1 ∈ W_0, x_{i+1} ∈ x_1 + A_i, i ∈ {1,..., k}}. It follows from Theorem 5.2 in [11] that

Var[F] ≥ c² / (4^{k+2} (k+1)!) · min_{∅ ≠ J ⊂ {1,...,k+1}} inf_{V ⊂ U, λ_d^{k+1}(V) ≥ λ_d^{k+1}(U)/2^{k+2}} λ_d^{|J|}(Π_J(V)),   (2.5)

where Π_J stands for the projection onto the components whose indices belong to J and |J| denotes the cardinality of J. We have that

λ_d^{k+1}(U) = ∫_{W_0} ∫_{(R^d)^k} 1{x_{i+1} ∈ x_1 + A_i, i ∈ {1,..., k}} d(x_2,..., x_{k+1}) dx_1 = λ_d(W_0) Π_{i=1}^k λ_d(A_i).   (2.6)

Since A_1,..., A_k are bounded, there is a constant R ∈ (0, ∞) such that, for (x_1,..., x_{k+1}) ∈ U, ∥x_i − x_j∥ ≤ R for i, j ∈ {1,..., k+1}. Let V ⊂ U be such that λ_d^{k+1}(V) ≥ λ_d^{k+1}(U)/2^{k+2} and fix a non-empty J ⊂ {1,..., k+1}. We write x_J (resp. x_{J^C}) for the vector consisting of all variables from x_1,..., x_{k+1} with index in J (resp. J^C). Then we have that

λ_d^{k+1}(V) ≤ ∫_{(R^d)^{k+1}} 1{x_J ∈ Π_J(V)} 1{(x_J, x_{J^C}) ∈ U} d(x_1,..., x_{k+1}) ≤ (κ_d R^d)^{|J^C|} ∫ 1{x_J ∈ Π_J(V)} dx_J = (κ_d R^d)^{|J^C|} λ_d^{|J|}(Π_J(V)).

Because of λ_d^{k+1}(V) ≥ λ_d^{k+1}(U)/2^{k+2} and (2.6), this implies that

λ_d^{|J|}(Π_J(V)) ≥ λ_d^{k+1}(V) / (κ_d R^d)^{|J^C|} ≥ Π_{i=1}^k λ_d(A_i) / (2^{k+2} (κ_d R^d)^{|J^C|}) · λ_d(W_0).

Together with (2.5), this concludes the proof.

3 Proof of Theorem 1.1

We prepare the proof of Theorem 1.1 by collecting some properties of the Poisson functionals we are interested in. Recall that the edge-length functional L_t^a defined at (1.1) depends on the Poisson point process η_t with intensity measure tλ_d|_W for t ≥ 1. It has representative

L^a(RST(ξ)) = Σ_{x ∈ ξ} l(x, ξ)^a,   ξ ∈ N,

where l(x, ξ) = ∥x − n(x, ξ)∥ is the distance between x and its radial nearest neighbour n(x, ξ) in ξ + δ_0. The functional l(·,·) is monotone and homogeneous in the sense that

l(x, ξ_1 + ξ_2 + δ_x) ≤ l(x, ξ_1 + δ_x),   x ∈ R^d, ξ_1, ξ_2 ∈ N,   (3.1)

and

l(sx, sξ) = s l(x, ξ),   x ∈ R^d, ξ ∈ N, s > 0,   (3.2)

where sξ is the counting measure Σ_{x ∈ ξ} δ_{sx} we obtain by multiplying each point of ξ with s.

The random variable L̃^a_W defined at (1.4) depends on a stationary Poisson point process η with intensity one. A representative of L̃^a_W is given by

L̃^a_W(DSF(ξ)) = Σ_{x ∈ ξ ∩ W} l_e(x, ξ)^a,   ξ ∈ N,

where l_e(x, ξ) = ∥x − ˆn(x, ξ)∥ is the distance between x and its closest neighbour in (ξ − δ_x) ∩ H_{x,e}. Similarly to l(·,·), the functional l_e(·,·) is monotone in that

l_e(x, ξ_1 + ξ_2 + δ_x) ≤ l_e(x, ξ_1 + δ_x),   x ∈ R^d, ξ_1, ξ_2 ∈ N.   (3.3)

We make use of two auxiliary results in the proof of Theorem 1.1, which play a crucial role in Section 4 as well.

Lemma 3.1. There is a constant α ∈ (0, ∞), only depending on W and d, such that

λ_d(B^d(x, u) ∩ B^d(0, ∥x∥) ∩ W) ≥ α λ_d(B^d(x, u)) = α κ_d u^d

for all x ∈ W and 0 ≤ u ≤ ∥x∥.

Proof. Let x ∈ W and 0 ≤ u ≤ ∥x∥ be given and define x̂ := (1 − u/(2∥x∥)) x. Since ∥x − x̂∥ = u/2 and ∥x̂∥ = ∥x∥ − u/2, the ball B^d(x̂, u/2) is contained in B^d(x, u) and in B^d(0, ∥x∥), so that

B^d(x, u) ∩ B^d(0, ∥x∥) ⊇ B^d(x̂, u/2).   (3.4)

It follows from the proof of Lemma 2.5 in [12] that there is a constant γ ∈ (0, ∞), only depending on W and the dimension d, such that λ_d(B^d(y, r) ∩ W) ≥ γ κ_d r^d for all y ∈ W and 0 ≤ r ≤ max_{z_1, z_2 ∈ W} ∥z_1 − z_2∥. Together with (3.4), we obtain that

λ_d(B^d(x, u) ∩ B^d(0, ∥x∥) ∩ W) ≥ λ_d(B^d(x̂, u/2) ∩ W) ≥ 2^{−d} γ κ_d u^d,

and the choice α := 2^{−d} γ concludes the proof.

The previous lemma allows us to derive exponential tails for the distribution of l(x, η_t + δ_x) as well as bounds for its moments.

Lemma 3.2. For all t ≥ 1, x ∈ W and u ≥ 0 one has that

P(l(x, η_t + δ_x) ≥ u) ≤ exp(−tα κ_d u^d),   (3.5)

where α is the constant from Lemma 3.1. Furthermore, for each a ≥ 0 there is a constant c_a ∈ (0, ∞), only depending on a, d and α, such that

t^{a/d} E[l(x, η_t + δ_x)^a] ≤ c_a,   x ∈ W, t ≥ 1.   (3.6)

Proof. Since l(x, η_t + δ_x) is at most ∥x∥, (3.5) is obviously true if u > ∥x∥. For 0 ≤ u ≤ ∥x∥ we have that

P(l(x, η_t + δ_x) ≥ u) = P(η_t(B^d(x, u) ∩ B^d(0, ∥x∥) ∩ W) = 0) = exp(−tλ_d(B^d(x, u) ∩ B^d(0, ∥x∥) ∩ W)) ≤ exp(−tα κ_d u^d),

where we have used that η_t is a Poisson point process and Lemma 3.1. For fixed a > 0 the previous inequality implies that

t^{a/d} E[l(x, η_t + δ_x)^a] = ∫_0^∞ P(t^{a/d} l(x, η_t + δ_x)^a ≥ u) du ≤ ∫_0^∞ exp(−α κ_d u^{d/a}) du = (a/d) (α κ_d)^{−a/d} ∫_0^∞ v^{a/d − 1} exp(−v) dv = (1/(α κ_d))^{a/d} Γ(1 + a/d) =: c_a   (3.7)

for all t ≥ 1. Finally, if a = 0, (3.6) is obviously true with c_0 = 1.

Remark 3.3. In Theorem 1.1 and Theorem 1.2 we assume that a ≥ 0. One reason for this is the first equality in (3.7), which is false in case that a < 0. A similar relation is also used for the variance asymptotics studied below.

The proof of Theorem 1.1 is divided into several steps. We start with the expectation asymptotics, thereby generalizing the approach in [2] from the planar case to higher dimensions.

Proof of (1.2). First, consider the case a = 0. Here, we have that t^{a/d − 1} E[L_t^a] = t^{−1} E[η_t(W)] = λ_d(W). Next, suppose that a > 0. We use the Mecke formula (2.1) to see that

t^{a/d − 1} E[L_t^a] = ∫_W t^{a/d} E[l(x, η_t + δ_x)^a] dx.

By computing the expectation in the integral, we obtain that

t^{a/d − 1} E[L_t^a] = ∫_W ∫_0^∞ a u^{a−1} P(t^{1/d} l(x, η_t + δ_x) ≥ u) du dx,   (3.8)

where

P(t^{1/d} l(x, η_t + δ_x) ≥ u) = exp(−tλ_d(B^d(x, t^{−1/d} u) ∩ B^d(0, ∥x∥) ∩ W)),   u ≥ 0,   (3.9)

since η_t is a Poisson point process with intensity measure tλ_d|_W. Since, as a consequence of Lemma 3.2,

P(t^{1/d} l(x, η_t + δ_x) ≥ u) ≤ exp(−α κ_d u^d)

and

∫_W ∫_0^∞ a u^{a−1} exp(−α κ_d u^d) du dx < ∞,

we can apply the dominated convergence theorem to (3.8) and obtain that

lim_{t→∞} t^{a/d − 1} E[L_t^a] = ∫_W ∫_0^∞ a u^{a−1} lim_{t→∞} P(t^{1/d} l(x, η_t + δ_x) ≥ u) du dx

with the probability P(t^{1/d} l(x, η_t + δ_x) ≥ u) given by (3.9). For all x ∈ int(W) we have that

lim_{t→∞} tλ_d(B^d(x, t^{−1/d} u) ∩ B^d(0, ∥x∥) ∩ W) = κ_d u^d / 2

and, consequently,

lim_{t→∞} P(t^{1/d} l(x, η_t + δ_x) ≥ u) = exp(−κ_d u^d / 2).

Summarizing, we find that

lim_{t→∞} t^{a/d − 1} E[L_t^a] = λ_d(W) ∫_0^∞ a u^{a−1} exp(−κ_d u^d / 2) du = (2/κ_d)^{a/d} Γ(1 + a/d) λ_d(W).

This completes the proof of (1.2).

Recall from the definition of the edge-length functionals for the directed spanning forest that l_e(x, η + δ_x) is the distance from x to the closest point of the unit-intensity stationary Poisson point process η which is contained in the half-space H_{x,e}. A computation similar to that in the proof of (1.2) shows that, for a > 0 and x ∈ R^d,

E[l_e(x, η + δ_x)^a] = E[l_e(0, η + δ_0)^a] = ∫_0^∞ a u^{a−1} P(l_e(0, η + δ_0) ≥ u) du = ∫_0^∞ a u^{a−1} exp(−κ_d u^d / 2) du = (2/κ_d)^{a/d} Γ(1 + a/d),

whence, by the Mecke formula (2.1),

lim_{t→∞} t^{a/d − 1} E[L_t^a] = E[L̃^a_W].

Our next goal is to establish the existence of the variance limit. Positivity of the limiting variance is postponed to Lemma 3.6 below.

Lemma 3.4. For any a ≥ 0 the limit lim_{t→∞} t^{2a/d − 1} Var[L_t^a] exists and equals v_a λ_d(W) with a constant v_a ∈ [0, ∞) given by

v_a = ∫_{R^d} ( E[l_e(0, η + δ_0 + δ_z)^a l_e(z, η + δ_0 + δ_z)^a] − E[l_e(0, η + δ_0)^a] E[l_e(z, η + δ_z)^a] ) dz + E[l_e(0, η + δ_0)^{2a}],

where e ∈ S^{d−1} is some fixed direction.

14 Proof. For a = 0, t 2a/d 1 Var[L a t ] = t 1 Var[η t ] = t 1 E[η t ] = λ d since η t is a Poisson random variable with mean tλ d. Hence, v 0 = 1 and we can and will from now on restrict to the case a > 0, where we re-write the variance of L a t as [ Var[L a t ] = E x,y η 2 t, lx, η t a ly, η t a ] [ E lx, η t a x η t Now, we use the multivariate Mecke equation 2.1 to deduce that t 2a/d 1 Var[L a t ] = T 1 t + T 2 t ] 2 + E [ with T 1 t and T 2 t given by T 1 t := t t 2a/d E[lx, η t + δ x + δ y a ly, η t + δ x + δ y a ] T 2 t := x η t lx, η t 2a t 2a/d E[lx, η t + δ x a ] E[ly, η t + δ y a ] dy dx, t 2a/d E[lx, η t + δ x 2a ] dx. It follows from T 2 t = t 2a/d 1 E[L 2a t ], the expectation asymptotics 1.2 and 3.10 that T 2t = t 2a/d 1 E[L 2a t ] = E[l e 0, η + δ 0 2a ]λ d t t Using the substitution y = x + t 1/d z, we re-write T 1 t as T 1 t = 1{x + t 1/d z } R d t 2a/d E[lx, η t + δ x + δ x+t z a lx + t 1/d z, η 1/d t + δ x + δ x+t z a ] 1/d t 2a/d E[lx, η t + δ x a ] E[lx + t 1/d z, η t + δ x+t z a ] dz dx. 1/d Let A be the union of the events and A 1 = {η t B d x, t 1/d z /2 B d 0, x = 0} A 2 = {η t B d x + t 1/d z, t 1/d z /2 B d 0, x + t 1/d z = 0}. ] Since the expectation factorizes on the complement A C of A by independence, we have that E[lx, η t + δ x + δ x+t 1/d z a lx + t 1/d z, η t + δ x + δ x+t 1/d z a 1 A C] = E[lx, η t + δ x a 1 A C 1 ] E[lx + t 1/d z, η t + δ x+t 1/d z a 1 A C 2 ]. It follows from Lemma 3.1 that PA 1 exp α κ d z d /2 d, PA 2 exp α κ d z d /2 d 12

15 and hence PA 2 exp α κ d z d /2 d. Thus, using 3.1, repeatedly the Cauchy-Schwarz inequality and Lemma 3.2, we see that the integrand of T 1 t is bounded in absolute value from above by t 2a/d E[lx, η t + δ x + δ x+t 1/d z a lx + t 1/d z, η t + δ x + δ x+t 1/d z a 1 A ] + t 2a/d E[lx, η t + δ x a 1 A1 ] E[lx + t 1/d z, η t + δ x+t 1/d z a ] + t 2a/d E[lx, η t + δ x a ] E[lx + t 1/d z, η t + δ x+t 1/d z a 1 A2 ] E[t 4a/d lx, η t + δ x 4a ] 1/4 E[t 4a/d lx + t 1/d z, η t + δ x+t 1/d z 4a ] 1/4 2 exp α κ d z d /2 d+1 + 2E[t 2a/d lx, η t + δ x 2a ] 1/2 E[t 2a/d lx + t 1/d z, η t + δ x+t 1/d z 2a ] 1/2 exp α κ d z d /2 d+1 2c 4a + 2c 2a exp α κ d z d /2 d+1. As a consequence, the integrand in 3.12 is bounded by an integrable function. e can thus apply the dominated convergence theorem and obtain that T 1t= 1{x + t 1/d z } t R d t t 2a/d E[lx, η t + δ x + δ x+t z a lx + t 1/d z, η 1/d t + δ x + δ x+t z a ] 1/d t 2a/d E[lx, η t + δ x a ] E[lx + t 1/d z, η t + δ x+t z a ] dz dx. 1/d Using the homogeneity relation 3.2 and the fact that t 1/d η t has the same distribution as the restriction t 1/d x + η t 1/d of the translated Poisson point process t 1/d x + η to t 1/d, we see that t 2a/d E[lx, η t + δ x + δ x+t 1/d z a lx + t 1/d z, η t + δ x + δ x+t 1/d z a ] t 2a/d E[lx, η t + δ x a ] E[lx + t 1/d z, η t + δ x+t 1/d z a ] = E[lt 1/d x, t 1/d η t + δ t 1/d x + δ t 1/d x+z a lt 1/d x + z, t 1/d η t + δ t 1/d x + δ t 1/d x+z a ] E[lt 1/d x, t 1/d η t + δ t 1/d x a ] E[lt 1/d x + z, t 1/d η t + δ t 1/d x+z a ] = E[lt 1/d x, t 1/d x + η t 1/d + δ t 1/d x + δ t 1/d x+z a lt 1/d x + z, t 1/d x + η t 1/d + δ t 1/d x + δ t 1/d x+z a ] E[lt 1/d x, t 1/d x + η t 1/d + δ t 1/d x a ] E[lt 1/d x + z, t 1/d x + η t 1/d + δ t 1/d x+z a ]. It is not hard to verify that, for all x int with x 0 and z R d, the relations t lt1/d x, t 1/d x + η t 1/d + δ t 1/d x + δ t x+z = l 1/d x/ x 0, η + δ 0 + δ z 3.13 t lt1/d x + z, t 1/d x + η t 1/d + δ t 1/d x + δ t x+z = l 1/d x/ x z, η + δ 0 + δ z 3.14 t lt1/d x, t 1/d x + η t 1/d + δ t x = l 1/d x/ x 0, η + δ t lt1/d x + z, t 1/d x + η t 1/d + δ t x+z = l 1/d x/ x z, η + δ z

16 hold with probability one. In order to apply the dominated convergence theorem again, we have to find integrable majorants for the terms on the left-hand sides of First note that by the monotonicity relation 3.1 we have that and lt 1/d x, t 1/d x + η t 1/d + δ t 1/d x + δ t 1/d x+z lt 1/d x, t 1/d x + η t 1/d + δ t 1/d x lt 1/d x + z, t 1/d x + η t 1/d + δ t 1/d x + δ t 1/d x+z lt 1/d x + z, t 1/d x + η t 1/d + δ t 1/d x+z. For each x int we can construct a cone K x such that, for t 1, a point y t 1/d x+k x is also in t 1/d if y t 1/d x t 1/d x /2. This can be done by choosing a point p x on the line between x and 0, which is sufficiently close to the origin 0. Since this will always be an interior point of, we find a ball B d p x, r x of a certain radius r x > 0 and centred at p x, which is completely contained in and generates K x together with x. That is, K x is the smallest cone with apex x containing B d p x, r x. Then, lt 1/d x, t 1/d x + η t 1/d + δ t 1/d x 2 min y η K x y If the opening angle of the cone K x is sufficiently small, we have that y t 1/d x + z if y t 1/d x + K x and y t 1/d x 2 z. Thus, it holds that lt 1/d x + z, t 1/d x + η t 1/d + δ t 1/d x+z 2 min y η K x, y 2 z y It follows from the fact that η is a stationary Poisson point process with intensity one that P min y η K x, y 2 z y u = exp λ d K x B d 0, 1u d 2 d z d for u 2 z. Hence, the right-hand sides in 3.17 and 3.18 are integrable majorants for the expressions on the left-hand sides in Thus, we obtain by the dominated convergence theorem that 1{x + t t 1/d z }t 2a/d E[lx, η t + δ x + δ x+t z a lx + t 1/d z, η 1/d t + δ x + δ x+t z a ] 1/d E[lx, η t + δ x a ] E[lx + t 1/d z, η t + δ x+t z a ] 1/d = E[l x/ x 0, η + δ 0 + δ z a l x/ x z, η + δ 0 + δ z a ] E[l x/ x 0, η + δ 0 a ] E[l x/ x z, η + δ z a ]. Together with the fact that the expectations in the previous term are independent of the choice of x R d, we see that T 1t = λ d E[l e 0, η + δ 0 + δ z a l e z, η + δ 0 + δ z a ] t R d which together with 3.11 concludes the proof. E[l e 0, η + δ 0 a ] E[l e z, η + δ z a ] dz, 14

17 After having established the existence of the variance it in 1.3, we need to show that it is bounded away from zero. As a first step, the next lemma relates the asymptotic variance of L a a t, as t, to that of L, as r. Recall 1.4 for the definition B d 0,r a a of L and note that L depends on a direction e B d 0,r B d 0,r Sd 1, which is suppressed in our notation. Lemma 3.5. For a 0, where v a is the constant from Lemma 3.4. a Var[ L ] B d 0,r = v r κ d r d a, Proof. For a = 0, La B d 0,r is a Poisson random variable with mean λ db d 0, r = κ d r d and the statement is thus satisfied with v 0 = 1. So, we may assume that a > 0. It follows from the multivariate Mecke equation 2.1 and the same argument as at the beginning of the proof of Lemma 3.4 that a Var[ L ] = T B d 0,r 1,r + T 2,r with T 1,r and T 2,r given by T 1,r := 1{y B d 0, r} E[l e x, η + δ x + δ y a l e y, η + δ x + δ y a ] B d 0,r R d E[l e x, η + δ x a ] E[l e y, η + δ y a ] dy dx, T 2,r := E[l e x, η + δ x 2a ] dx. B d 0,r By the translation invariance of l e and η we have that T 1,r := 1{y x B d x, r} B d 0,r R d E[l e 0, η + δ 0 + δ y x a l e y x, η + δ 0 + δ y x a ] E[l e 0, η + δ 0 a ] E[l e y x, η + δ y x a ] dy dx = 1{y B d x, r} E[l e 0, η + δ 0 + δ y a l e y, η + δ 0 + δ y a ] B d 0,r R d E[l e 0, η + δ 0 a ] E[l e y, η + δ y a ] dy dx = r d 1{y B d rx, r} E[l e 0, η + δ 0 + δ y a l e y, η + δ 0 + δ y a ] B d 0,1 R d E[l e 0, η + δ 0 a ] E[l e y, η + δ y a ] dy dx. Let y R d and let A 1 and A 2 be the events A 1 := {ηb d 0, y /2 = 0} and A 2 := {ηb d y, y /2 = 0}, 15

18 respectively. On A 1 A 2 C the expectation factorizes so that E[l e 0, η + δ 0 + δ y a l e y, η + δ 0 + δ y a 1 A1 A 2 C] = E[l e 0, η + δ 0 a 1 A C 1 ] E[l e y, η + δ y a 1 A C 2 ]. Together with the Cauchy-Schwarz inequality and 3.3, this implies that E[le 0, η + δ 0 + δ y a l e y, η + δ 0 + δ y a ] E[l e 0, η + δ 0 a ] E[l e y, η + δ y a ] E[l e 0, η + δ 0 + δ y a l e y, η + δ 0 + δ y a 1 A1 A 2 ] + E[l e 0, η + δ 0 a 1 A1 ] E[l e y, η + δ y a ] + E[l e 0, η + δ 0 a ] E[l e y, η + δ y a 1 A2 ] E[l e 0, η + δ 0 4a ] 1/4 E[l e y, η + δ y 4a ] 1/4 PA 1 A 2 1/2 + E[l e 0, η + δ 0 2a ] 1/2 E[l e y, η + δ y 2a ] 1/2 PA 1 1/2 + PA 2 1/2. Since Pl e 0, η + δ 0 u = exp κ d u d /2 for u 0, all moments of l e 0, η + δ 0 are finite. Moreover, we have that PA 1 = PA 2 = exp κ d y d /2 d and PA 1 A 2 2 exp κ d y d /2 d. Hence, there is a constant C a 0, such that E[le 0, η + δ 0 + δ y a l e y, η + δ 0 + δ y a ] E[l e 0, η + δ 0 a ] E[l e y, η + δ y a ] C a exp κ d y d /2 d+1. This allows us to apply the dominated convergence theorem, which yields T 1,r r κ d r = E[l d e 0, η + δ 0 + δ y a l e y, η + δ 0 + δ y a ] R d E[l e 0, η + δ 0 a ] E[l e y, η + δ y a ] dy. On the other hand we have that This completes the proof. T 2,r = κ d r d E[l e 0, η + δ 0 2a ]. Lemma 3.4 and Lemma 3.5 show that the suitably normalized functionals L a t and L a have the same iting variance v B d 0,r a, as t or r, respectively. To complete the proof of Theorem 1.1 it remains to show that v a is strictly positive. Our preceding observation shows that for this only an analysis of the directed spanning forest DSFη of a stationary Poisson point process η in R d with intensity 1 is necessary. Compared to the radial spanning tree and as discussed already in the introduction, the advantage of this model is that its construction is homogeneous in space, as it is based on the notion a of half-spaces. This makes it much easier to analyse. Recall the definition 1.4 of L and note that it depends on a previously chosen direction e S d 1. Lemma 3.6. For all e S d 1 a and a 0 one has that Var[ L ] ˆv aλ d with a constant ˆv a 0, only depending on a, d and r := sup{u 0, 1] : λ d {x : B d x, max{2, d}u } λ d /2}. In particular, the constants v a, a 0, defined in Lemma 3.4 are strictly positive. 16

19 Proof. Since v 0 = 1 as observed above, we restrict from now on to the case a > 0. ithout loss of generality, we can assume that e = 0, 1, 0,..., 0. e define 0 = {x : B d x, max{2, d}r }. For technical reasons, we have to distinguish two cases and start with the situation that a 1. Let us define z 1 := 1/2, 3/2, 0,..., 0, z 2 := 1/2, 3/2, 0,..., 0, z 3 := 0, 3/2, 0,..., 0. The points 0, z 1 and z 2 form an equilateral triangle with side-length one, and z 3 is the midpoint between z 1 and z 2, see Figure 3. Because of 2 > 21/2 a + 3/2 a we have that l e z 1, δ 0 + δ z1 + δ z2 a + l e z 2, δ 0 + δ z1 + δ z2 a > c + z 1 z 3 a + z 2 z 3 a + z 3 a with a constant c 0,. By continuity, there is a constant r 0 0, r such that l e y 1, δ 0 + δ y1 + δ y2 a + l e y 2, δ 0 + δ y1 + δ y2 a > c 2 + y 1 y 3 a + y 2 y 3 a + y 3 a for all y 1 B d z 1, r 0 H C z 1,e, y 2 B d z 2, r 0 H C z 2,e and y 3 B d z 3, r 0 H z3,e. Moreover, we choose constants c 0, and s 0, 1 such that c 2 exp 4d κ d s d a1 exp 4 d κ d s d c In the following let x 0 and y 1, y 2, y 3 be as above. If ηb d x, 4s = 0, we have that L a DSFη + δ x + 3 δ x+syi a L DSFη + δ x + 2 δ x+syi s a y 1 y 3 a + y 2 y 3 a + y 3 a l e y 1, δ 0 + δ y1 + δ y2 a l e y 2, δ 0 + δ y1 + δ y2 a csa 2. On the other hand, it holds that L a DSFη + δ x + 3 l e x + sy 3, η + δ x + s a a. δ x+syi 3 δ x+syi a L DSFη + δ x + 2 δ x+syi Denote by A the event that ηb d x, 4s = 0. Then, combining the previous two inequal- 17

20 Figure 3: Illustration of the constructions in the proof of Lemma 3.6 for the planar case if a 1 left and 0 < a < 1 right. ities with the fact that PA = exp 4 d κ d s d and 3.19, we see that [ 3 2 ] E La a DSFη + δ x + δ x+yi L DSFη + δ x + δ x+yi = E [ La + DSFη + δ x + La DSFη + δ x + 3 δ x+yi 3 δ x+yi a L DSFη + δ x + a L DSFη + δ x + csa 3 a 2 exp 4d κ d s d + s a exp 4 d κ d s d cs a 2 δ x+yi 1 A 2 ] δ x+yi 1 A C for x 0 and y 1 B d sz 1, sr 0 Hsz C 1,e, y 2 B d sz 2, sr 0 Hsz C 2,e, y 3 B d sz 3, sr 0 H sz3,e. Now, Proposition 2.2 concludes the proof for a 1. Next, we consider the case that 0 < a < 1. Let N N be a sufficiently large integer a lower bound for N will be stated later. e denote the points of the grid { r } G := N + 1 i 1,..., i d 0, r, 0,..., 0 : i 1,..., i d = 1,..., N by z 1,..., z N d, see Figure 3. Let x 0 and y i B d z i, r /4N + 4 for i = 1,..., N d. By construction y 1,..., y N d are included in the d-dimensional hypercube Q := [0, r ] [ r, 0] [0, r ]... [0, r ] so that x+y i for all i {1,..., N d }. In the following, we derive a lower bound for E [ La DSFη + δ x + N d δ x+yi 18 a L DSFη + δ x ].

21 Adding the points x + y 1,..., x + y N d to the point process η + δ x generates new edges. If ηx + Q = 0, then N d l e x + y i, η + δ x + δ x+yi r 2N + 1, i = 1,..., N d, where the lower bound is just half of the mash of the grid G. Consequently the expectation of the contribution of the new points is at least r a exp r d N d 2 a N + 1. a On the other hand, inserting the additional points x + y 1,..., x + y N d can also shorten the edges belonging to some points of η, whereas the edge associated with x is not affected. By the triangle inequality and the assumption 0 a < 1, we see that, for z η with ˆnz, η + δ x + N d δ x+y i {x + y 1,..., x + y N d}, N d l e z, η + δ x + δ x+yi a l e z, η + δ x a z ˆnz, η + δ x + N d δ x+yi a z x a N d ˆnz, η + δ x + δ x+yi x a, where we have used that x H z,e and that z x z ˆnz, η + δ x + N d δ x+y i. Since x + y i x d r for i = 1,..., N d, this means that N d 0 l e z, η + δ x + δ x+yi a l e z, η + δ x a d r a, z η + δ x. So, it remains to bound the expectation of the number of points of η whose edges are affected by adding x + y 1,..., x + y N d, i.e., [ R := E z η N d ] 1{l e z, η + δ x + δ x+yi < l e z, η + δ x }. Using the multivariate Mecke formula 2.1 and denoting by B Q the smallest ball circumscribing the hypercube Q, we obtain that R κ d d r /2 d + Pl e z, η + δ z inf u z dz. u B Q Transformation into spherical coordinates yields that R κ d dr /2 d + dκ d B C Q dr /2 exp κ d r d r /2 d /2 r d 1 dr =: C R. 19

22 Consequently, we have that E [ La DSFη + δ x + N d r a δ x+yi ] a L DSFη + δ x exp r d N d 2 a N + 1 C R d r a a. e now choose N such that the right-hand side is larger than 1 note that this is always possible. Summarizing, we have shown that E [ La N d DSFη + δ x + δ x+yi ] a L DSFη + δ x 1 for x 0 and y i B d z i, r /4N + 4, i = 1,..., N d. Now, the assertion for 0 < a < 1 follows from Proposition 2.2. It remains to transfer the result to the variances of the edge-length functionals L a t of the radial spanning tree. Since there is a constant r 0 0, such that r B d 0,r = 1 for a all r r 0, we have that Var[ L ] ˆv B d 0,r aκ d r d with the same constant ˆv a 0, for all r r 0. On the other hand, Lemma 3.5 implies that, for a 0, This completes the proof. 4 Proof of Theorem 1.2 Var[ L a v a = ] B d 0,r ˆv r κ d r d a > 0. Our aim is to apply Proposition 2.1. In view of the variance asymptotics provided in Theorem 1.1, it remains to investigate the first and the second-order difference operator of t a/d L a t. Lemma 4.1. For any a 0 there are constants C 1, C 2 0, only depending on, a and d such that for z, z 1, z 2 and t 1. E[ D z t a/d L a t 5 ] C 1 and E[ D 2 z 1,z 2 t a/d L a t 5 ] C 2 Proof. If a = 0, we have L a t = η t so that the statements are obviously true with C 1 = C 2 = 1, for example. For a > 0, ξ N and z we have that D z L a RSTξ lz, ξ + δ z a + x ξ Consequently, it follows from Jensen s inequality that 1{lx, ξ z x } max lx, ξ a. x ξ lx,ξ z x E[ D z t a/d L a t 5 ] 16t 5a/d E[lz, η t + δ z 5a ] [ 5 5 ] + 16E 1{lx, η t z x } max t a/d lx, η x η t a. t x η t lx,η t z x 20

23 Lemma 3.2 implies that t 5a/d E[lz, η t + δ z 5a ] c 5a for z and t 1. For the second term, the Cauchy-Schwarz inequality yields the bound [ 10 ] 1/2 1/2 E 1{lx, η t z x } E[ max t 10a/d lx, η x η t 10a ]. 4.1 t x η t lx,η t z x The 10th moment of x η t 1{lx, η t z x } can be expressed as a linear combination of terms of the form [ ] E 1{lx i, η t x i z, i = 1,..., k} x 1,...,x k η k t, with k {1,..., 10}. Applying the multivariate Mecke formula 2.1, the monotonicity relation 3.1, Hölder s inequality and Lemma 3.2 yields, for each such k, [ ] E 1{lx i, η t x i z, i = 1,..., k} x 1,...,x k η k t, = t k P lx i, η t + k k δ xi x i z, i = 1,..., k dx 1,..., x k t k k P lx i, η t + δ xi x i z, i = 1,..., k dx 1,..., x k t k t k k P lx i, η t + δ xi x i z 1/k dx1,..., x k exp tα κ d x z d /k dx k. Introducing spherical coordinates and replacing by R d, we see that t exp tα κ d x z d /k dx tdκ d exp tα κ d r d /k r d 1 dr = 0 k α. This implies that E [ x η t 1{lx, η t z x } 10] is uniformly bounded for z and t 1. Moreover, for u 0 we have that P t a/d lx, η t a u max x η t lx,η t z x = P x η t : lx, η t t 1/d u 1/a, lx, η t z x [ ] E 1{lx, η t max{t 1/d u 1/a, z x }} x η t = t P lx, η t + δ x max{t 1/d u 1/a, z x } dx t exp tα κ d max{t 1/d u 1/a, z x } d dx R d = κ d u d/a exp α κ d u d/a + t exp tα κ d z x d dx R d \B d z,t 1/d u 1/a 21

24 = κ d u d/a exp α κ d u d/a + dκ d t exp tα κ d r d r d 1 dr t 1/d u 1/a = κ d u d/a exp α κ d u d/a + 1 α exp α κ d u d/a, where we have used the Mecke formula 2.1, Lemma 3.2 and a transformation into spherical coordinates. This exponential tail behaviour implies that the second factor of the product in 4.1 is uniformly bounded for all t 1. Altogether, we see that E[ D z t a/d L a t 5 ] C 1 for all z and t 1 with a constant C 1 0, only depending on, a and d. For the second-order difference operator we have that, for z 1, z 2, E[ D 2 z 1,z 2 t a/d L a t 5 ] 16t 5a/d E[ D z1 L a RSTη t 5 ] + 16t 5a/d E[ D z1 L a RSTη t + δ z2 5 ]. 4.2 In view of the argument for the first-order difference operator above, it only remains to consider the second term. For z 1, z 2 and ξ N we have that Dz1 t a/d L a RSTξ + δ z2 t a/d lz 1, ξ + δ z1 + δ z2 a + 1{lx, ξ + δ z2 z 1 x } x ξ+δ z2 Using the monotonicity relation 3.1, we find that and max x ξ+δz 2 lx,ξ+δ z2 z 1 x t a/d lx, ξ + δ z2 a. t a/d lz 1, ξ + δ z1 + δ z2 a t a/d lz 1, ξ + δ z1 a, 1{lx, ξ + δ z2 z 1 x } 1 + 1{lx, ξ z 1 x } x ξ+δ z2 max t a/d lx, ξ + δ z2 a t a/d lz 2, ξ + δ z2 a + max t a/d lx, ξ a. x ξ+δz 2 x ξ lx,ξ+δ z2 z 1 x lx,ξ z 1 x This implies that for ξ = η t the 5th and the 10th moment of these expressions are uniformly bounded in t by the same arguments as above. In view of 4.2 this shows that E[ Dz 2 1,z 2 t a/d L a t 5 ] C 2 for all z 1, z 2 and t 1 with a constant C 2 0, only depending on, a and d. hile Lemma 4.1 shows that assumption 2.2 in Proposition 2.1 is satisfied, the following result puts us in the position to verify also assumption 2.3 there. Lemma 4.2. For a 0, z 1, z 2 and t 1, with α as in Lemma 3.1. PD 2 z 1,z 2 L a t /α exp tα κ d z 1 z 2 d /2 d x ξ 22

25 Proof. Since Dz 2 1,z 2 L a t = 0 for a = 0, we can restrict ourselves to a > 0 in the following. First notice that D 2 z 1,z 2 L a t = y η t D 2 z 1,z 2 ly, ηt a + D z1 lz2, η t + δ z2 a + D z2 lz1, η t + δ z1 a. To have Dz 2 1,z 2 L a t 0, at least one of the above terms has to be non-zero. Moreover, for x, z and ξ N, D z lx, ξ + δ x a 0 and D z lx, ξ + δ x 0 are equivalent. Thus, PD 2 z 1,z 2 L a t 0 P y η t : D 2 z 1,z 2 ly, ηt a 0 + PD z1 lz2, η t + δ z2 a 0 + PD z2 lz1, η t + δ z1 a 0 = P y η t : D 2 z 1,z 2 ly, η t 0 + PD z1 lz 2, η t + δ z2 0 + PD z2 lz 1, η t + δ z1 0. Since D z1 lz 2, η t + δ z2 0 requires that lz 2, η t + δ z2 z 1 z 2, application of Lemma 3.2 yields PD z1 lz 2, η t + δ z2 0 Plz 2, η t + δ z2 z 1 z 2 exp tα κ d z 1 z 2 d and similarly PD z2 lz 1, η t + δ z1 0 exp tα κ d z 1 z 2 d. Using Mecke s formula 2.1, the fact that Dz 2 1,z 2 ly, η t + δ y 0 implies ly, η t + δ y max{ y z 1, y z 2 } and once again Lemma 3.2, we finally conclude that [ ] P y η t : Dz 2 1,z 2 ly, η t 0 E 1{Dz 2 1,z 2 ly, η t 0} y η t = t PDz 2 1,z 2 ly, η t + δ y 0 dy t Ply, η t + δ y max{ y z 1, y z 2 } dy t exp tα κ d max{ y z 1, y z 2 } d dy t exp tα κ d y z 1 d dy R d \B d z 1, z 1 z 2 /2 + t exp tα κ d y z 2 d dy Consequently, which proves the claim. R d \B d z 2, z 1 z 2 /2 2 α exp tα κ d z 1 z 2 d /2 d. PD 2 z 1,z 2 L a t /α exp tα κ d z 1 z 2 d /2 d, After these preparations, we can now use Proposition 2.1 to prove Theorem

1.2.

Proof of Theorem 1.2. It follows from Lemma 4.2 by using spherical coordinates that

sup_{z_1 ∈ W, t ≥ 1} t ∫_W P(D²_{z_1,z_2}(t^{a/d} L_t^a) ≠ 0)^{1/20} dz_2 ≤ (2 + 2/α)^{1/20} sup_{z_1 ∈ W, t ≥ 1} t ∫_{R^d} exp(−tα κ_d ∥z_1 − z_2∥^d / (20 · 2^d)) dz_2 ≤ (2 + 2/α) sup_{t ≥ 1} t ∫_{R^d} exp(−tα κ_d ∥z∥^d / (20 · 2^d)) dz = 20 · 2^d (2/α + 2/α²),

which is finite. Moreover, Lemma 4.1 shows that there are constants C_1, C_2 ∈ (0, ∞), only depending on W, a and d, such that E[|D_z(t^{a/d} L_t^a)|^5] ≤ C_1 and E[|D²_{z_1,z_2}(t^{a/d} L_t^a)|^5] ≤ C_2 for z, z_1, z_2 ∈ W and t ≥ 1. Finally, Lemma 3.4 and Lemma 3.6 ensure the existence of constants v_a, t_0 ∈ (0, ∞), depending on W, a and d, such that t^{−1} Var[t^{a/d} L_t^a] ≥ v_a/2 for all t ≥ t_0. Consequently, an application of Proposition 2.1 completes the proof of Theorem 1.2.

Acknowledgements

This research was supported through the programme Research in Pairs by the Mathematisches Forschungsinstitut Oberwolfach. MS has been funded by the German Research Foundation (DFG) through the research unit Geometry and Physics of Spatial Random Systems under the grant HU 1874/3-1. CT has been supported by the German Research Foundation (DFG) via SFB-TR 12.

References

[1] F. Baccelli and B. Blaszczyszyn: Stochastic Geometry and Wireless Networks. Volumes I and II. Now Publishers.

[2] F. Baccelli and C. Bordenave: The radial spanning tree of a Poisson point process. Ann. Appl. Probab. 17.

[3] F. Baccelli, D. Coupier and V.C. Tran: Semi-infinite paths of the two-dimensional radial spanning tree. Adv. in Appl. Probab. 45.

[4] A.G. Bhatt and R. Roy: On a random directed spanning tree. Adv. in Appl. Probab. 36.

[5] C. Bordenave: Navigation on a Poisson point process. Ann. Appl. Probab. 18.

[6] S. Chatterjee and S. Sen: Minimal spanning trees and Stein's method. arXiv preprint.

[7] D. Coupier and V.C. Tran: The 2D-directed spanning forest is almost surely a tree. Random Structures Algorithms 42.

[8] M. Franceschetti and R. Meester: Random Networks for Communication. Cambridge University Press, Cambridge.

[9] M. Haenggi: Stochastic Geometry for Wireless Networks. Cambridge University Press, Cambridge.

[10] H. Kesten and S. Lee: The central limit theorem for weighted minimal spanning trees on random points. Ann. Appl. Probab. 6.

[11] G. Last, G. Peccati and M. Schulte: Normal approximation on Poisson spaces: Mehler's formula, second order Poincaré inequalities and stabilization. arXiv preprint.

[12] G. Last and M. Penrose: Percolation and limit theory for the Poisson lilypond model. Random Structures Algorithms 42.

[13] P. Mörters and Y. Peres: Brownian Motion. Cambridge University Press, Cambridge.

[14] M.D. Penrose: Random Geometric Graphs. Oxford University Press, Oxford.

[15] M.D. Penrose: Multivariate spatial central limit theorems with applications to percolation and spatial graphs. Ann. Probab. 33.

[16] M.D. Penrose and A.R. Wade: On the total length of the random minimal directed spanning tree. Adv. in Appl. Probab. 38.

[17] M.D. Penrose and A.R. Wade: Random directed and on-line networks. In: New Perspectives in Stochastic Geometry (Eds. I. Molchanov and W.S. Kendall), Oxford University Press, Oxford.

[18] M.D. Penrose and J.E. Yukich: Central limit theorems for some graphs in computational geometry. Ann. Appl. Probab. 11.

[19] R. Schneider and W. Weil: Classical stochastic geometry. In: New Perspectives in Stochastic Geometry (Eds. I. Molchanov and W.S. Kendall), Oxford University Press, Oxford.

[20] J.M. Steele: Probability Theory and Combinatorial Optimization. SIAM, Philadelphia.

[21] J.E. Yukich: Probability Theory of Classical Euclidean Optimization Problems. Lecture Notes in Mathematics 1675, Springer, New York.


Asymptotic moments of near neighbour. distance distributions

Asymptotic moments of near neighbour. distance distributions Asymptotic moments of near neighbour distance distributions By Dafydd Evans 1, Antonia J. Jones 1 and Wolfgang M. Schmidt 2 1 Department of Computer Science, Cardiff University, PO Box 916, Cardiff CF24

More information

l(y j ) = 0 for all y j (1)

l(y j ) = 0 for all y j (1) Problem 1. The closed linear span of a subset {y j } of a normed vector space is defined as the intersection of all closed subspaces containing all y j and thus the smallest such subspace. 1 Show that

More information

Random Geometric Graphs

Random Geometric Graphs Random Geometric Graphs Mathew D. Penrose University of Bath, UK SAMBa Symposium (Talk 2) University of Bath November 2014 1 MODELS of RANDOM GRAPHS Erdos-Renyi G(n, p): start with the complete n-graph,

More information

Tools from Lebesgue integration

Tools from Lebesgue integration Tools from Lebesgue integration E.P. van den Ban Fall 2005 Introduction In these notes we describe some of the basic tools from the theory of Lebesgue integration. Definitions and results will be given

More information

Introduction to Proofs

Introduction to Proofs Real Analysis Preview May 2014 Properties of R n Recall Oftentimes in multivariable calculus, we looked at properties of vectors in R n. If we were given vectors x =< x 1, x 2,, x n > and y =< y1, y 2,,

More information

Metric Spaces and Topology

Metric Spaces and Topology Chapter 2 Metric Spaces and Topology From an engineering perspective, the most important way to construct a topology on a set is to define the topology in terms of a metric on the set. This approach underlies

More information

Real Analysis Notes. Thomas Goller

Real Analysis Notes. Thomas Goller Real Analysis Notes Thomas Goller September 4, 2011 Contents 1 Abstract Measure Spaces 2 1.1 Basic Definitions........................... 2 1.2 Measurable Functions........................ 2 1.3 Integration..............................

More information

arxiv: v5 [math.pr] 5 Nov 2015

arxiv: v5 [math.pr] 5 Nov 2015 Shortest Path through Random Points Sung Jin Hwang Steven B. Damelin Alfred O. Hero III arxiv:1202.0045v5 [math.pr] 5 Nov 2015 University of Michigan Mathematical Reviews Abstract Let M, g 1 ) be a complete

More information

Stein s Method: Distributional Approximation and Concentration of Measure

Stein s Method: Distributional Approximation and Concentration of Measure Stein s Method: Distributional Approximation and Concentration of Measure Larry Goldstein University of Southern California 36 th Midwest Probability Colloquium, 2014 Stein s method for Distributional

More information

Lecture Notes 1: Vector spaces

Lecture Notes 1: Vector spaces Optimization-based data analysis Fall 2017 Lecture Notes 1: Vector spaces In this chapter we review certain basic concepts of linear algebra, highlighting their application to signal processing. 1 Vector

More information

Some Background Material

Some Background Material Chapter 1 Some Background Material In the first chapter, we present a quick review of elementary - but important - material as a way of dipping our toes in the water. This chapter also introduces important

More information

Partial cubes: structures, characterizations, and constructions

Partial cubes: structures, characterizations, and constructions Partial cubes: structures, characterizations, and constructions Sergei Ovchinnikov San Francisco State University, Mathematics Department, 1600 Holloway Ave., San Francisco, CA 94132 Abstract Partial cubes

More information

EXISTENCE AND UNIQUENESS OF INFINITE COMPONENTS IN GENERIC RIGIDITY PERCOLATION 1. By Alexander E. Holroyd University of Cambridge

EXISTENCE AND UNIQUENESS OF INFINITE COMPONENTS IN GENERIC RIGIDITY PERCOLATION 1. By Alexander E. Holroyd University of Cambridge The Annals of Applied Probability 1998, Vol. 8, No. 3, 944 973 EXISTENCE AND UNIQUENESS OF INFINITE COMPONENTS IN GENERIC RIGIDITY PERCOLATION 1 By Alexander E. Holroyd University of Cambridge We consider

More information

MAS331: Metric Spaces Problems on Chapter 1

MAS331: Metric Spaces Problems on Chapter 1 MAS331: Metric Spaces Problems on Chapter 1 1. In R 3, find d 1 ((3, 1, 4), (2, 7, 1)), d 2 ((3, 1, 4), (2, 7, 1)) and d ((3, 1, 4), (2, 7, 1)). 2. In R 4, show that d 1 ((4, 4, 4, 6), (0, 0, 0, 0)) =

More information

Chapter 4. Measure Theory. 1. Measure Spaces

Chapter 4. Measure Theory. 1. Measure Spaces Chapter 4. Measure Theory 1. Measure Spaces Let X be a nonempty set. A collection S of subsets of X is said to be an algebra on X if S has the following properties: 1. X S; 2. if A S, then A c S; 3. if

More information

Convergence at first and second order of some approximations of stochastic integrals

Convergence at first and second order of some approximations of stochastic integrals Convergence at first and second order of some approximations of stochastic integrals Bérard Bergery Blandine, Vallois Pierre IECN, Nancy-Université, CNRS, INRIA, Boulevard des Aiguillettes B.P. 239 F-5456

More information

18.175: Lecture 3 Integration

18.175: Lecture 3 Integration 18.175: Lecture 3 Scott Sheffield MIT Outline Outline Recall definitions Probability space is triple (Ω, F, P) where Ω is sample space, F is set of events (the σ-algebra) and P : F [0, 1] is the probability

More information

Spring 2014 Advanced Probability Overview. Lecture Notes Set 1: Course Overview, σ-fields, and Measures

Spring 2014 Advanced Probability Overview. Lecture Notes Set 1: Course Overview, σ-fields, and Measures 36-752 Spring 2014 Advanced Probability Overview Lecture Notes Set 1: Course Overview, σ-fields, and Measures Instructor: Jing Lei Associated reading: Sec 1.1-1.4 of Ash and Doléans-Dade; Sec 1.1 and A.1

More information

Theorem 2.1 (Caratheodory). A (countably additive) probability measure on a field has an extension. n=1

Theorem 2.1 (Caratheodory). A (countably additive) probability measure on a field has an extension. n=1 Chapter 2 Probability measures 1. Existence Theorem 2.1 (Caratheodory). A (countably additive) probability measure on a field has an extension to the generated σ-field Proof of Theorem 2.1. Let F 0 be

More information

7 Convergence in R d and in Metric Spaces

7 Convergence in R d and in Metric Spaces STA 711: Probability & Measure Theory Robert L. Wolpert 7 Convergence in R d and in Metric Spaces A sequence of elements a n of R d converges to a limit a if and only if, for each ǫ > 0, the sequence a

More information

X n D X lim n F n (x) = F (x) for all x C F. lim n F n(u) = F (u) for all u C F. (2)

X n D X lim n F n (x) = F (x) for all x C F. lim n F n(u) = F (u) for all u C F. (2) 14:17 11/16/2 TOPIC. Convergence in distribution and related notions. This section studies the notion of the so-called convergence in distribution of real random variables. This is the kind of convergence

More information

Asymptotics for posterior hazards

Asymptotics for posterior hazards Asymptotics for posterior hazards Pierpaolo De Blasi University of Turin 10th August 2007, BNR Workshop, Isaac Newton Intitute, Cambridge, UK Joint work with Giovanni Peccati (Université Paris VI) and

More information

On the cells in a stationary Poisson hyperplane mosaic

On the cells in a stationary Poisson hyperplane mosaic On the cells in a stationary Poisson hyperplane mosaic Matthias Reitzner and Rolf Schneider Abstract Let X be the mosaic generated by a stationary Poisson hyperplane process X in R d. Under some mild conditions

More information

µ X (A) = P ( X 1 (A) )

µ X (A) = P ( X 1 (A) ) 1 STOCHASTIC PROCESSES This appendix provides a very basic introduction to the language of probability theory and stochastic processes. We assume the reader is familiar with the general measure and integration

More information

Introduction and Preliminaries

Introduction and Preliminaries Chapter 1 Introduction and Preliminaries This chapter serves two purposes. The first purpose is to prepare the readers for the more systematic development in later chapters of methods of real analysis

More information

In terms of measures: Exercise 1. Existence of a Gaussian process: Theorem 2. Remark 3.

In terms of measures: Exercise 1. Existence of a Gaussian process: Theorem 2. Remark 3. 1. GAUSSIAN PROCESSES A Gaussian process on a set T is a collection of random variables X =(X t ) t T on a common probability space such that for any n 1 and any t 1,...,t n T, the vector (X(t 1 ),...,X(t

More information

16 1 Basic Facts from Functional Analysis and Banach Lattices

16 1 Basic Facts from Functional Analysis and Banach Lattices 16 1 Basic Facts from Functional Analysis and Banach Lattices 1.2.3 Banach Steinhaus Theorem Another fundamental theorem of functional analysis is the Banach Steinhaus theorem, or the Uniform Boundedness

More information

ELEMENTS OF PROBABILITY THEORY

ELEMENTS OF PROBABILITY THEORY ELEMENTS OF PROBABILITY THEORY Elements of Probability Theory A collection of subsets of a set Ω is called a σ algebra if it contains Ω and is closed under the operations of taking complements and countable

More information

2 Topology of a Metric Space

2 Topology of a Metric Space 2 Topology of a Metric Space The real number system has two types of properties. The first type are algebraic properties, dealing with addition, multiplication and so on. The other type, called topological

More information

U.C. Berkeley CS294: Spectral Methods and Expanders Handout 11 Luca Trevisan February 29, 2016

U.C. Berkeley CS294: Spectral Methods and Expanders Handout 11 Luca Trevisan February 29, 2016 U.C. Berkeley CS294: Spectral Methods and Expanders Handout Luca Trevisan February 29, 206 Lecture : ARV In which we introduce semi-definite programming and a semi-definite programming relaxation of sparsest

More information

1 Independent increments

1 Independent increments Tel Aviv University, 2008 Brownian motion 1 1 Independent increments 1a Three convolution semigroups........... 1 1b Independent increments.............. 2 1c Continuous time................... 3 1d Bad

More information

MEAN CURVATURE FLOW OF ENTIRE GRAPHS EVOLVING AWAY FROM THE HEAT FLOW

MEAN CURVATURE FLOW OF ENTIRE GRAPHS EVOLVING AWAY FROM THE HEAT FLOW MEAN CURVATURE FLOW OF ENTIRE GRAPHS EVOLVING AWAY FROM THE HEAT FLOW GREGORY DRUGAN AND XUAN HIEN NGUYEN Abstract. We present two initial graphs over the entire R n, n 2 for which the mean curvature flow

More information

PROBABILITY: LIMIT THEOREMS II, SPRING HOMEWORK PROBLEMS

PROBABILITY: LIMIT THEOREMS II, SPRING HOMEWORK PROBLEMS PROBABILITY: LIMIT THEOREMS II, SPRING 218. HOMEWORK PROBLEMS PROF. YURI BAKHTIN Instructions. You are allowed to work on solutions in groups, but you are required to write up solutions on your own. Please

More information

Random Process Lecture 1. Fundamentals of Probability

Random Process Lecture 1. Fundamentals of Probability Random Process Lecture 1. Fundamentals of Probability Husheng Li Min Kao Department of Electrical Engineering and Computer Science University of Tennessee, Knoxville Spring, 2016 1/43 Outline 2/43 1 Syllabus

More information

This article appeared in a journal published by Elsevier. The attached copy is furnished to the author for internal non-commercial research and

This article appeared in a journal published by Elsevier. The attached copy is furnished to the author for internal non-commercial research and This article appeared in a journal published by Elsevier. The attached copy is furnished to the author for internal non-commercial research and education use, including for instruction at the authors institution

More information

18.175: Lecture 2 Extension theorems, random variables, distributions

18.175: Lecture 2 Extension theorems, random variables, distributions 18.175: Lecture 2 Extension theorems, random variables, distributions Scott Sheffield MIT Outline Extension theorems Characterizing measures on R d Random variables Outline Extension theorems Characterizing

More information

HAUSDORFF DIMENSION AND ITS APPLICATIONS

HAUSDORFF DIMENSION AND ITS APPLICATIONS HAUSDORFF DIMENSION AND ITS APPLICATIONS JAY SHAH Abstract. The theory of Hausdorff dimension provides a general notion of the size of a set in a metric space. We define Hausdorff measure and dimension,

More information

The Skorokhod reflection problem for functions with discontinuities (contractive case)

The Skorokhod reflection problem for functions with discontinuities (contractive case) The Skorokhod reflection problem for functions with discontinuities (contractive case) TAKIS KONSTANTOPOULOS Univ. of Texas at Austin Revised March 1999 Abstract Basic properties of the Skorokhod reflection

More information

Gaussian Limits for Random Measures in Geometric Probability

Gaussian Limits for Random Measures in Geometric Probability Gaussian Limits for Random Measures in Geometric Probability Yu. Baryshnikov and J. E. Yukich 1 May 19, 2004 bstract We establish Gaussian limits for measures induced by binomial and Poisson point processes

More information

NEW FUNCTIONAL INEQUALITIES

NEW FUNCTIONAL INEQUALITIES 1 / 29 NEW FUNCTIONAL INEQUALITIES VIA STEIN S METHOD Giovanni Peccati (Luxembourg University) IMA, Minneapolis: April 28, 2015 2 / 29 INTRODUCTION Based on two joint works: (1) Nourdin, Peccati and Swan

More information

Notes 6 : First and second moment methods

Notes 6 : First and second moment methods Notes 6 : First and second moment methods Math 733-734: Theory of Probability Lecturer: Sebastien Roch References: [Roc, Sections 2.1-2.3]. Recall: THM 6.1 (Markov s inequality) Let X be a non-negative

More information

Brownian Motion. 1 Definition Brownian Motion Wiener measure... 3

Brownian Motion. 1 Definition Brownian Motion Wiener measure... 3 Brownian Motion Contents 1 Definition 2 1.1 Brownian Motion................................. 2 1.2 Wiener measure.................................. 3 2 Construction 4 2.1 Gaussian process.................................

More information

LARGE DEVIATIONS OF TYPICAL LINEAR FUNCTIONALS ON A CONVEX BODY WITH UNCONDITIONAL BASIS. S. G. Bobkov and F. L. Nazarov. September 25, 2011

LARGE DEVIATIONS OF TYPICAL LINEAR FUNCTIONALS ON A CONVEX BODY WITH UNCONDITIONAL BASIS. S. G. Bobkov and F. L. Nazarov. September 25, 2011 LARGE DEVIATIONS OF TYPICAL LINEAR FUNCTIONALS ON A CONVEX BODY WITH UNCONDITIONAL BASIS S. G. Bobkov and F. L. Nazarov September 25, 20 Abstract We study large deviations of linear functionals on an isotropic

More information

n! (k 1)!(n k)! = F (X) U(0, 1). (x, y) = n(n 1) ( F (y) F (x) ) n 2

n! (k 1)!(n k)! = F (X) U(0, 1). (x, y) = n(n 1) ( F (y) F (x) ) n 2 Order statistics Ex. 4.1 (*. Let independent variables X 1,..., X n have U(0, 1 distribution. Show that for every x (0, 1, we have P ( X (1 < x 1 and P ( X (n > x 1 as n. Ex. 4.2 (**. By using induction

More information

Part V. 17 Introduction: What are measures and why measurable sets. Lebesgue Integration Theory

Part V. 17 Introduction: What are measures and why measurable sets. Lebesgue Integration Theory Part V 7 Introduction: What are measures and why measurable sets Lebesgue Integration Theory Definition 7. (Preliminary). A measure on a set is a function :2 [ ] such that. () = 2. If { } = is a finite

More information

Course Notes. Part IV. Probabilistic Combinatorics. Algorithms

Course Notes. Part IV. Probabilistic Combinatorics. Algorithms Course Notes Part IV Probabilistic Combinatorics and Algorithms J. A. Verstraete Department of Mathematics University of California San Diego 9500 Gilman Drive La Jolla California 92037-0112 jacques@ucsd.edu

More information

Ergodic Theorems. Samy Tindel. Purdue University. Probability Theory 2 - MA 539. Taken from Probability: Theory and examples by R.

Ergodic Theorems. Samy Tindel. Purdue University. Probability Theory 2 - MA 539. Taken from Probability: Theory and examples by R. Ergodic Theorems Samy Tindel Purdue University Probability Theory 2 - MA 539 Taken from Probability: Theory and examples by R. Durrett Samy T. Ergodic theorems Probability Theory 1 / 92 Outline 1 Definitions

More information

Jensen s inequality for multivariate medians

Jensen s inequality for multivariate medians Jensen s inequality for multivariate medians Milan Merkle University of Belgrade, Serbia emerkle@etf.rs Given a probability measure µ on Borel sigma-field of R d, and a function f : R d R, the main issue

More information

LARGE DEVIATION PROBABILITIES FOR SUMS OF HEAVY-TAILED DEPENDENT RANDOM VECTORS*

LARGE DEVIATION PROBABILITIES FOR SUMS OF HEAVY-TAILED DEPENDENT RANDOM VECTORS* LARGE EVIATION PROBABILITIES FOR SUMS OF HEAVY-TAILE EPENENT RANOM VECTORS* Adam Jakubowski Alexander V. Nagaev Alexander Zaigraev Nicholas Copernicus University Faculty of Mathematics and Computer Science

More information

MATH41011/MATH61011: FOURIER SERIES AND LEBESGUE INTEGRATION. Extra Reading Material for Level 4 and Level 6

MATH41011/MATH61011: FOURIER SERIES AND LEBESGUE INTEGRATION. Extra Reading Material for Level 4 and Level 6 MATH41011/MATH61011: FOURIER SERIES AND LEBESGUE INTEGRATION Extra Reading Material for Level 4 and Level 6 Part A: Construction of Lebesgue Measure The first part the extra material consists of the construction

More information

arxiv: v2 [math.pr] 26 Jun 2017

arxiv: v2 [math.pr] 26 Jun 2017 Existence of an unbounded vacant set for subcritical continuum percolation arxiv:1706.03053v2 math.pr 26 Jun 2017 Daniel Ahlberg, Vincent Tassion and Augusto Teixeira Abstract We consider the Poisson Boolean

More information

Expectation is a positive linear operator

Expectation is a positive linear operator Department of Mathematics Ma 3/103 KC Border Introduction to Probability and Statistics Winter 2017 Lecture 6: Expectation is a positive linear operator Relevant textbook passages: Pitman [3]: Chapter

More information

Random sets. Distributions, capacities and their applications. Ilya Molchanov. University of Bern, Switzerland

Random sets. Distributions, capacities and their applications. Ilya Molchanov. University of Bern, Switzerland Random sets Distributions, capacities and their applications Ilya Molchanov University of Bern, Switzerland Molchanov Random sets - Lecture 1. Winter School Sandbjerg, Jan 2007 1 E = R d ) Definitions

More information

Preliminary Exam 2018 Solutions to Morning Exam

Preliminary Exam 2018 Solutions to Morning Exam Preliminary Exam 28 Solutions to Morning Exam Part I. Solve four of the following five problems. Problem. Consider the series n 2 (n log n) and n 2 (n(log n)2 ). Show that one converges and one diverges

More information

Part II Probability and Measure

Part II Probability and Measure Part II Probability and Measure Theorems Based on lectures by J. Miller Notes taken by Dexter Chua Michaelmas 2016 These notes are not endorsed by the lecturers, and I have modified them (often significantly)

More information

Random Bernstein-Markov factors

Random Bernstein-Markov factors Random Bernstein-Markov factors Igor Pritsker and Koushik Ramachandran October 20, 208 Abstract For a polynomial P n of degree n, Bernstein s inequality states that P n n P n for all L p norms on the unit

More information

Integral Jensen inequality

Integral Jensen inequality Integral Jensen inequality Let us consider a convex set R d, and a convex function f : (, + ]. For any x,..., x n and λ,..., λ n with n λ i =, we have () f( n λ ix i ) n λ if(x i ). For a R d, let δ a

More information

Estimates for probabilities of independent events and infinite series

Estimates for probabilities of independent events and infinite series Estimates for probabilities of independent events and infinite series Jürgen Grahl and Shahar evo September 9, 06 arxiv:609.0894v [math.pr] 8 Sep 06 Abstract This paper deals with finite or infinite sequences

More information

Definition 2.1. A metric (or distance function) defined on a non-empty set X is a function d: X X R that satisfies: For all x, y, and z in X :

Definition 2.1. A metric (or distance function) defined on a non-empty set X is a function d: X X R that satisfies: For all x, y, and z in X : MATH 337 Metric Spaces Dr. Neal, WKU Let X be a non-empty set. The elements of X shall be called points. We shall define the general means of determining the distance between two points. Throughout we

More information

Mathematics 530. Practice Problems. n + 1 }

Mathematics 530. Practice Problems. n + 1 } Department of Mathematical Sciences University of Delaware Prof. T. Angell October 19, 2015 Mathematics 530 Practice Problems 1. Recall that an indifference relation on a partially ordered set is defined

More information

Tree sets. Reinhard Diestel

Tree sets. Reinhard Diestel 1 Tree sets Reinhard Diestel Abstract We study an abstract notion of tree structure which generalizes treedecompositions of graphs and matroids. Unlike tree-decompositions, which are too closely linked

More information