The Favorite Point of a Poisson Process


Davar Khoshnevisan & Thomas M. Lewis
University of Utah and Furman University

Abstract. In this paper we consider the most visited site, $X_t$, of a Poisson process up to time $t$. Our point of departure from the literature on maximal spacings is our asymptotic analysis of where the maximal spacing occurs (i.e., the size of $X_t$) and not the size of the spacings.

Keywords and Phrases. Poisson process, occupation times, association.

A.M.S. 1992 Subject Classification. Primary 60K99; secondary 60K05.

Suggested Short Title. Favorite points.

1. Introduction.

Let $(N_t,\ t \ge 0)$ denote a rate-one Poisson process. For every $x \in \mathbf{R}^1_+$, let us define the occupation time of $\{x\}$ by time $t$ as follows:

$$\xi_t(x) = \int_0^t \mathbf{1}_{\{x\}}(N_u)\, du.$$

We have adopted the customary notation that $\mathbf{1}_A(r)$ is one if $r \in A$ and zero otherwise. Evidently, $\xi_t(x) = 0$ almost surely if and only if $x \notin \mathbf{Z}^1_+$. With this in mind, we can now define the most visited site of $N$ up to time $t$ as

$$X_t = \min\{k \ge 0 : \xi_t(k) \ge \xi_t(i) \text{ for all } i \ge 0\}.$$

In this paper, we are interested in the growth properties of the process $X$. Suppose, instead, $N$ were replaced by a mean-zero, finite-variance lattice random walk. Then, defining $X$ in the obvious way, asymptotic properties of this favorite point process were studied by Erdős and Révész (1984) and Bass and Griffin (1985). Erdős and Révész showed that the limsup of the favorite point process is infinity and determined the rate with which this occurs. The surprising results of Bass and Griffin demonstrate that this limsup is, in fact, a limit; consequently, the favorite point process is transient.

In the present setting, it is not hard to see that almost surely $\lim_{t\to\infty} X_t = \infty$. Indeed, elementary properties of $N$ demonstrate that, conditional on the event $\{N_t = m\}$, $X_t$ is uniformly distributed on the set $\{0, 1, \ldots, m\}$. Therefore, by the strong law of large numbers applied to $N$, $X_t/t$ converges in law to the uniform measure on $[0,1]$. From this alone, one can deduce that almost surely

$$\liminf_{t\to\infty} \frac{X_t}{t} = 0 \quad\text{and}\quad \limsup_{t\to\infty} \frac{X_t}{t} = 1.$$

This paper is an attempt to refine these statements. Unlike the recurrent case, where the liminf (viz., Bass and Griffin) is harder to derive than the limsup (viz., Erdős and Révész), in the present case it is quite the opposite.

Although sometimes in disguise, the favorite point process $X_t$ appears quite naturally in the literature on maximal spacings. For instance, the fact that $X_t/t$ is nearly uniformly distributed on $[0,1]$ appears in several applications; see Slud (1978) and Pyke (1970, 1980). For a thorough survey of this subject, see Pyke (1965).
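To make the definitions above concrete, here is a minimal simulation sketch (ours, not part of the original paper; numpy is assumed, and the helper name favorite_point is illustrative). It computes the occupation times $\xi_t(k)$ as gaps between jump times and returns the favorite point $X_t$; over many runs, $X_t/t$ is close to uniform on $[0,1]$:

```python
import numpy as np

def favorite_point(t, rng):
    """Simulate a rate-one Poisson process on [0, t] and return X_t,
    the smallest site whose occupation time xi_t(k) is maximal."""
    # Jump times are partial sums of independent Exp(1) interarrival gaps.
    jumps = []
    s = rng.exponential()
    while s <= t:
        jumps.append(s)
        s += rng.exponential()
    # The process sits at value k between jump k and jump k+1, so the
    # occupation times are the gaps between consecutive jump times.
    endpoints = np.concatenate(([0.0], jumps, [t]))
    occupations = np.diff(endpoints)  # xi_t(0), ..., xi_t(N_t)
    return int(np.argmax(occupations))  # argmax picks the first maximum

rng = np.random.default_rng(0)
t = 500.0
samples = np.array([favorite_point(t, rng) / t for _ in range(2000)])
# X_t / t should be nearly uniform on [0, 1]:
print(samples.mean(), samples.std())  # approx 0.5 and 1/sqrt(12) ~ 0.289
```

Note that argmax returns the first index achieving the maximum, matching the minimum in the definition of $X_t$.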

We begin with a result on the lower envelope of $X$, which demonstrates the rate of escape of $X_t$ as $t \to \infty$.

Theorem 1.1. Suppose $\psi : [1,\infty) \to \mathbf{R}^1_+$ is decreasing to zero as $t \to \infty$ and that $t \mapsto t\psi(t)$ is increasing. Then almost surely

$$\liminf_{t\to\infty} \frac{X_t}{t\psi(t)} = \begin{cases} 0, & \text{if } J(\psi) = \infty \\ \infty, & \text{if } J(\psi) < \infty, \end{cases}$$

where

$$J(\psi) = \int_1^\infty \psi(t)\, \frac{dt}{t}.$$

For example, one can take $\psi(t) = (\ln t)^{-a}$ to see that almost surely

$$\liminf_{t\to\infty}\, X_t\, \frac{(\ln t)^a}{t} = \begin{cases} 0, & \text{if } a \le 1 \\ \infty, & \text{if } a > 1. \end{cases}$$

(Indeed, substituting $u = \ln t$ gives $\int_e^\infty (\ln t)^{-a}\, \frac{dt}{t} = \int_1^\infty u^{-a}\, du$, which converges precisely when $a > 1$.)

We have the following theorem on the upper envelope of $X$.

Theorem 1.2. Almost surely

$$\limsup_{t\to\infty} \frac{X_t - t}{\sqrt{2t \ln\ln t}} = 1.$$

It should be noted that this is not an ordinary law of the iterated logarithm: it is closer, in spirit, to the one-sided laws of the iterated logarithm described in Pruitt (1981). As we have already noted, $X_t/t$ is asymptotically uniformly distributed on $[0,1]$. From this, it is evident that $E(X_t) \sim t/2$ as $t \to \infty$. From this perspective, the centering in Theorem 1.2 is rather exotic. Moreover, unlike the case for the classical law of the iterated logarithm, $(X_t - t)/\sqrt{t}$ is not converging weakly to the normal law. Indeed, for each $a \ge 0$, our Lemma 3.1 implies $P(X_t \ge t + a\sqrt{t}) \to 0$ as $t \to \infty$. This shows that the event that $X$ is near its upper envelope is very rare; so rare that proving Theorem 1.2 by the usual methods (i.e., obtaining sharp probability estimates and then using blocking arguments) is formidable, if not impossible. Instead we will take a different route: we will define a sequence of stopping times, $(T_k)$, and show that (with probability one) infinitely often $N(T_k)$ is large in the sense of the law of the iterated logarithm and $N(T_k) = X(T_k)$ eventually. Thus, we show that $X$ automatically inherits a law of the iterated logarithm from $N$.

We also have the following refinement of the upper half of Theorem 1.2:

Theorem 1.3. Let $\psi : [1,\infty) \to \mathbf{R}^1_+$ be increasing to infinity as $t \to \infty$. Suppose further that $\psi$ satisfies

$$\int_1^\infty \exp\bigl(-\psi^2(t)/2\bigr)\, \frac{dt}{t\psi(t)} < \infty.$$

Then almost surely $X_t \le t + \sqrt{t}\,\psi(t)$ for all $t$ sufficiently large. Consequently, for any $p > 1$ and with probability one,

$$X_t \le t + \sqrt{t\,(2\ln\ln t + p \ln\ln\ln t)} \quad\text{eventually},$$

while, by the integral test of Feller (see Bai (1989)),

$$N_t \ge t + \sqrt{t\,(2\ln\ln t + 3\ln\ln\ln t)} \quad\text{infinitely often}.$$

Finally, we point out that, by the strong law of large numbers for $N$ and our Theorem 1.1, one easily obtains the following result on the size of the gap between $N_t$ and $X_t$.

Corollary 1.4. With probability one,

$$\limsup_{t\to\infty} \frac{N_t - X_t}{t} = 1.$$

The corresponding liminf result is trivial, since, by its very definition, $X_t = N_t$ infinitely often.

2. Proof of Theorem 1.1.

In this section, we will prove Theorem 1.1. Before doing so, we will prove some lemmas and propositions concerning the distribution of $X_t$ (Lemma 2.1), the joint distribution of $X_s$ and $X_t$ (Proposition 2.3), and the rate of convergence of $X_t/t$ to the uniform measure on $[0,1]$ as $t \to \infty$ (Lemma 2.5).

We start with the following characterization of the joint distribution of the interarrival times of a conditioned Poisson process: given $\{N_t = m\}$, the joint distribution of $(\xi_t(0), \ldots, \xi_t(m))$ is the same as that of $(Y_0/S, \ldots, Y_m/S)$, where $(Y_i,\ 0 \le i \le m)$ is a collection of independent exponentially distributed random variables with parameter one, and $S = t^{-1}(Y_0 + \cdots + Y_m)$ (see, e.g., Karlin and Taylor (1981, p. 105)). As such, for $0 \le k \le m$, we have

$$(2.1)\qquad P(X_t = k \mid N_t = m) = P(Y_k > Y_i \text{ for } 0 \le i \le m \text{ with } i \ne k) = \frac{1}{m+1}.$$

Thus, given $\{N_t = m\}$, $X_t$ is uniformly distributed on the set $\{0, 1, \ldots, m\}$. From this we can readily calculate the distribution of $X_t$, which is what we do next.

Lemma 2.1. For all integers $k \ge 1$,

(1) $P(X_t \ge k) = P(N_t \ge k) - \dfrac{k}{t}\, P(N_t \ge k+1)$.

(2) $P(X_t < k) = P(N_t < k) + \dfrac{k}{t}\, P(N_t \ge k+1)$.

Proof. We will demonstrate (1); (2) follows from (1) by taking complements. Since $X_t \le N_t$, we have

$$P(X_t \ge k) = \sum_{m=k}^\infty P(X_t \ge k \mid N_t = m)\, P(N_t = m).$$

By (2.1) we have

$$P(X_t \ge k \mid N_t = m) = 1 - \frac{k}{m+1}.$$

Consequently,

$$P(X_t \ge k) = P(N_t \ge k) - \sum_{m=k}^\infty \frac{k}{m+1}\, P(N_t = m) = P(N_t \ge k) - \frac{k}{t} \sum_{m=k}^\infty \frac{t^{m+1}}{(m+1)!}\, e^{-t} = P(N_t \ge k) - \frac{k}{t}\, P(N_t \ge k+1),$$

which establishes (1).
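As a quick numerical sanity check of Lemma 2.1(1) (again our sketch, assuming numpy and scipy are available), one can compare a Monte Carlo estimate of $P(X_t \ge k)$ with the right-hand side of (1):

```python
import numpy as np
from scipy.stats import poisson

def favorite_point(t, rng):
    # Favorite point X_t: first index of maximal occupation time.
    jumps = []
    s = rng.exponential()
    while s <= t:
        jumps.append(s)
        s += rng.exponential()
    return int(np.argmax(np.diff(np.concatenate(([0.0], jumps, [t])))))

rng = np.random.default_rng(1)
t, k, n = 30.0, 35, 100_000
mc = np.mean([favorite_point(t, rng) >= k for _ in range(n)])
# Lemma 2.1(1): P(X_t >= k) = P(N_t >= k) - (k/t) P(N_t >= k+1).
exact = poisson.sf(k - 1, t) - (k / t) * poisson.sf(k, t)
print(mc, exact)  # the two numbers should agree to Monte Carlo error
```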

Next we turn to estimating the joint distribution of $X_s$ and $X_t$. To do so, we will need additional notation: given $0 < s < t$ and a nonnegative integer $k$, let

$$\xi_{s,t}(k) = \xi_t(k) - \xi_s(k) = \int_s^t \mathbf{1}_{\{k\}}(N_u)\, du.$$

The following lemma is an important calculation in the joint probability estimate:

Lemma 2.2. Let $0 < s < t$ and let $0 \le j \le m < k$ be integers. Then

$$P(X_s \le j,\ X_t = k,\ N_s = m) \le P(X_s \le j,\ N_s = m)\, P(X_{t-s} = k - m).$$

Proof. The events $\{N_t = q\}$, $q \ge k$, partition the event $\{X_s \le j,\ X_t = k,\ N_s = m\}$. Thus

$$P(X_s \le j,\ X_t = k,\ N_s = m) = \sum_{q=k}^\infty P(A_q),$$

where

$$A_q = \{X_s \le j,\ N_s = m,\ X_t = k,\ N_t = q\}.$$

Given $\{N_t = q\}$ and $\{X_t = k\}$, we observe that $k$ is the minimum index for which $\xi_t(k) \ge \xi_t(i)$ for $0 \le i \le q$. Since $N_s = m < k$, we note that $\xi_s(k) = 0$; thus $\xi_t(k) = \xi_{s,t}(k)$. Now $\xi_{s,t}(i) \le \xi_t(i)$ for all $i \ge 0$; thus $k$ is the minimum index for which $\xi_{s,t}(k) \ge \xi_{s,t}(i)$ for $m \le i \le q$. Given $N_s = m$, and $m \le i \le q$, we have

$$\xi_{s,t}(i) = \int_s^t \mathbf{1}_{\{i-m\}}(N_u - N_s)\, du,$$

which is independent of the event $\{N_s = m,\ X_s \le j\}$ and, as a process in $i \in \{m, \ldots, q\}$, is distributed as $\{\xi_{t-s}(i - m),\ m \le i \le q\}$. Hence

$$P(A_q) \le P(X_s \le j,\ N_s = m)\, P(X_{t-s} = k - m,\ N_{t-s} = q - m).$$

We obtain the desired result upon summing over $q$.

Concerning this result, it should be noted that it was significant that $N_s < X_t$: this is what permitted us to identify the most visited site of the random walk between times $s$ and $t$. If $N_s \ge X_t$, then all knowledge of the most visited site between times $s$ and $t$ is lost. In our next result, we obtain an estimate for the joint distribution of $X_s$ and $X_t$.

Proposition 2.3. Let $0 < s < t$ and $0 < a < b$. Then

$$P(X_s \le a,\ X_t \le b) \le P(X_t \le a) + P(X_t = N_s) + P(X_s \le a)\, P(X_{t-s} \le b).$$

Proof. Without loss of generality, we may assume that $a$ and $b$ are integers. Certainly

$$P(X_s \le a,\ X_t \le b) \le P(X_t \le a) + P(X_t = N_s) + P(X_s \le a,\ a < X_t \le b,\ X_t \ne N_s).$$

However, if $X_s < X_t$, then there is a (random) time $t_0 \in (s, t]$ at which $X_{t_0} = N_{t_0}$. Therefore, $X_t \ge N_s$. Thus

$$P(X_s \le a,\ X_t \le b) \le P(X_t \le a) + P(X_t = N_s) + \sum_{m=0}^{b-1} P(X_s \le a,\ a < X_t \le b,\ N_s < X_t,\ N_s = m).$$

If $0 \le m \le a - 1$, then, by Lemma 2.2, this summand can be estimated as follows (note that $X_s \le N_s$, so $X_s \le a$ and $N_s = m$ force $X_s \le m$):

$$P(X_s \le a,\ a < X_t \le b,\ N_s = m) \le P(X_s \le a,\ N_s = m)\, P(X_{t-s} \le b).$$

Likewise, if $a \le m \le b$, then, by Lemma 2.2, this summand can be estimated as follows:

$$P(X_s \le a,\ m < X_t \le b,\ N_s = m) \le P(X_s \le a,\ N_s = m)\, P(X_{t-s} \le b).$$

We obtain the desired result upon summing over $m$.

Let

$$\lambda(x) = \begin{cases} x\ln x + 1 - x & \text{for } x > 0 \\ 1 & \text{for } x = 0. \end{cases}$$

Standard exponential Chebyshev's inequality arguments can be used to show the following:

$$(2.2a)\qquad P(N_t \ge tx) \le e^{-t\lambda(x)} \quad\text{for } x > 1,$$

$$(2.2b)\qquad P(N_t \le tx) \le e^{-t\lambda(x)} \quad\text{for } 0 \le x < 1.$$
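For completeness, here is the standard exponential Chebyshev computation behind (2.2a), written out as a worked derivation (our addition, not part of the original text):

```latex
% Chernoff bound for the Poisson upper tail, x > 1.
% Since N_t is Poisson with mean t, E[e^{\theta N_t}] = \exp\!\big(t(e^{\theta}-1)\big).
% For any \theta > 0, Markov's inequality gives
P(N_t \ge tx) \;\le\; e^{-\theta t x}\, E\big[e^{\theta N_t}\big]
            \;=\; \exp\!\big(-t(\theta x - e^{\theta} + 1)\big).
% The exponent \theta x - e^{\theta} + 1 is maximized at \theta = \ln x > 0, where it
% equals x\ln x - x + 1 = \lambda(x).  Hence P(N_t \ge tx) \le e^{-t\lambda(x)},
% which is (2.2a); (2.2b) follows in the same way, using \theta < 0.
```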

From this it is easy to obtain the following estimates:

Lemma 2.4. Let $(N_t,\ t \ge 0)$ be a rate-one Poisson process. Then there exists a positive constant $c$ such that

(1) $P(N_t \ge t + \alpha\sqrt{t}) \le e^{-\alpha^2/4}$ for $0 \le \alpha \le \sqrt{t}/2$.

(2) $P(N_t \ge t + \alpha\sqrt{t}) \le e^{-c\alpha\sqrt{t}}$ for $\alpha \ge \sqrt{t}/2$.

(3) $P(N_t \le t - \alpha\sqrt{t}) \le e^{-\alpha^2/2}$ for $\alpha \ge 0$.

Proof. Throughout, let $x = 1 + \alpha/\sqrt{t}$. Consider (1): the case $\alpha = 0$ is trivial; thus, without loss of generality, assume that $0 < \alpha \le \sqrt{t}/2$, and hence $1 < x \le 3/2$. Since $\ln x \ge (x-1) - \frac{(x-1)^2}{2}$ for $x \ge 1$, we have $\lambda(x) \ge (x-1)^2/4 = \alpha^2/(4t)$; thus (1) follows from (2.2a).

The condition $\alpha \ge \sqrt{t}/2$ implies $x \ge 3/2$. Since $\lambda(x)$ is convex and $\lambda(1) = 0$, we obtain $\lambda(x) \ge \frac{2}{3}\lambda(3/2)(x-1)$ for $x \ge 3/2$. Letting $c = 2\lambda(3/2)/3$, we obtain (2) from (2.2a). To verify (3), observe that Taylor's theorem implies $\lambda(x) \ge (x-1)^2/2$ for $0 \le x < 1$; (3) follows immediately from (2.2b).

Next we calculate the rate with which $X_t/t$ converges to the uniform measure on $[0,1]$.

Lemma 2.5.

(1) Let $0 \le \alpha \le 1$; then $\lim_{t\to\infty} |P(X_t \le \alpha t) - \alpha| = 0$.

(2) Let $0 < \alpha' < 1$; then, for all $t$ sufficiently large,

$$\sup_{0 \le \alpha \le \alpha'} \bigl|P(X_t \le \alpha t) - \alpha\bigr| \le \frac{2}{\sqrt{t}}.$$

Remark 2.5.1. Lemma 2.5(1) shows that $X_t/t$ converges in law to the uniform measure on $[0,1]$. Lemma 2.5(2) gives a uniform rate for this convergence for $\alpha$ bounded away from 1. With more care, one can show that $|P(X_t \le \alpha t) - \alpha|$ is uniformly bounded by $6t^{-1/3}(\ln t)^{1/3}$ for $0 \le \alpha \le 1$ and $t$ sufficiently large. Since we will make no use of this latter fact, we will not prove it here.

Proof. By Lemma 2.1(2), we have

$$P(X_t \le \alpha t) = P(X_t \le \lfloor \alpha t \rfloor) = P(N_t \le \lfloor \alpha t \rfloor) + \frac{\lfloor \alpha t \rfloor + 1}{t}\, P(N_t \ge \lfloor \alpha t \rfloor + 2).$$

Thus

$$\bigl|P(X_t \le \alpha t) - \alpha\bigr| \le \frac{1}{t} + (1 - \alpha)\, P(N_t \le \alpha t).$$

To verify (1), observe that the case $\alpha = 1$ is trivial. If $0 \le \alpha < 1$, then (2.2b) shows that $P(N_t \le \alpha t) \le e^{-t\lambda(\alpha)}$ (with $\lambda(\alpha) > 0$), which tends to zero as $t \to \infty$: this demonstrates (1). To establish (2), let $\alpha'$ be as given. If $0 \le \alpha \le \alpha'$, then, for all $t$ sufficiently large,

$$P(N_t \le \alpha t) \le P(N_t \le \alpha' t) \le e^{-t\lambda(\alpha')} \le \frac{1}{\sqrt{t}},$$

which demonstrates (2) and hence the lemma.

We are now ready to prove Theorem 1.1.

Proof of Theorem 1.1. Let $\psi$ be as in the statement of Theorem 1.1. In fact, we will show that $J(\psi) < \infty$ implies $P(X_t \le t\psi(t) \text{ i.o.}) = 0$, whereas $J(\psi) = \infty$ implies $P(X_t \le t\psi(t) \text{ i.o.}) = 1$. This suffices to prove the theorem. Indeed, suppose $J(\psi) < \infty$. Then for any $K > 0$, $J(K\psi) = KJ(\psi) < \infty$. Hence we have shown that for all $K > 0$,

$$\liminf_{t\to\infty} \frac{X_t}{t\psi(t)} \ge K.$$

Letting $K \to \infty$, we see that $J(\psi) < \infty$ implies that the above liminf is infinity. Likewise, if $J(\psi) = \infty$, so is $J(\varepsilon\psi) = \varepsilon J(\psi)$, for any $\varepsilon > 0$. Hence we have shown that

$$\liminf_{t\to\infty} \frac{X_t}{t\psi(t)} \le \varepsilon.$$

The theorem is proved upon letting $\varepsilon \to 0$.

For $j \ge 1$, let $t_j = 2^j$ and

$$A_j = \{X_{t_j} \le t_j \psi(t_j)\}.$$

By Lemma 2.5, for $j$ sufficiently large, we have

$$\bigl|P(A_j) - \psi(t_j)\bigr| \le 2^{1 - j/2}.$$

This, together with the inequalities

$$\frac{1}{2}\,\psi(t_{j+1}) \le \int_{t_j}^{t_{j+1}} \psi(t)\, \frac{dt}{t} \le \psi(t_j),$$

demonstrates that $\sum_j P(A_j)$, $\sum_j \psi(t_j)$ and $J(\psi)$ converge or diverge simultaneously.

Suppose that $\sum_j \psi(t_j)$ converges. Let $\eta(t) = 2\psi(2t)$ and let $B_j = \{X_{t_j} \le t_j \eta(t_j)\}$. Then the convergence of $\sum_j \eta(t_j)$ implies the convergence of $\sum_j P(B_j)$. By the Borel-Cantelli lemma, it follows that $X_{t_j}$ eventually exceeds $t_j \eta(t_j)$ with probability one. As a consequence, if $t_j \le t \le t_{j+1}$, then eventually

$$X_t \ge X_{t_j} > t_j \eta(t_j) = t_{j+1}\psi(t_{j+1}) \ge t\psi(t) \quad\text{a.s.}$$

(where we have used the fact that $t \mapsto t\psi(t)$ is increasing to obtain the last inequality). Thus the convergence of $J(\psi)$ implies $P(X_t \le t\psi(t) \text{ i.o.}) = 0$.

If, however, $J(\psi)$ diverges, then, by the above considerations, $\sum_j P(A_j) = \infty$. Consequently, to show $P(A_j \text{ i.o.}) = 1$, by the Kochen-Stone lemma (see Kochen and Stone (1964)) it suffices to demonstrate that

$$(2.3a)\qquad P(A_j \cap A_k) \le P(A_j)\, P(A_k) + R(j,k) \quad\text{for } 1 \le j < k,$$

with

$$(2.3b)\qquad \sum_{1 \le j < k \le n} R(j,k) = o\bigl(\Sigma_n^2\bigr) \quad\text{as } n \to \infty,$$

where

$$(2.3c)\qquad \Sigma_n = \sum_{j=1}^n P(A_j).$$

By Proposition 2.3,

$$P(A_j \cap A_k) \le A(j,k) + B(j,k) + C(j,k),$$

where

$$A(j,k) = P\bigl(X_{t_k} \le t_j \psi(t_j)\bigr), \qquad B(j,k) = P\bigl(N_{t_j} = X_{t_k}\bigr), \qquad C(j,k) = P(A_j)\, P\bigl(X_{t_k - t_j} \le t_k \psi(t_k)\bigr).$$

By Lemma 2.5,

$$A(j,k) \le 2^{j-k}\psi(t_j) + 2^{1-k/2}.$$

Therefore, as $n \to \infty$, $\sum_{1 \le j < k \le n} A(j,k) = o(\Sigma_n^2)$.

The term $B(j,k)$ is quite small: with high probability, $N_{t_j}$ is close to $t_j$ (deviations from $t_j$ can be measured by Lemma 2.4); however, with small probability, $X_{t_k}$ will assume a value in a neighborhood of $t_j$ (being, more or less, distributed uniformly on $[0, t_k]$). Precisely, we have $B(j,k) \le B_1(j,k) + B_2(j,k)$, where

$$B_1(j,k) = P\bigl(N_{t_j} \le t_j - 2\sqrt{t_j \ln t_k}\bigr) + P\bigl(N_{t_j} \ge t_j + 2\sqrt{t_j \ln t_k}\bigr),$$

$$B_2(j,k) = P\bigl(t_j - 2\sqrt{t_j \ln t_k} \le X_{t_k} \le t_j + 2\sqrt{t_j \ln t_k}\bigr).$$

By Lemmas 2.4 and 2.5, we obtain the following:

$$B_1(j,k) \le \frac{1}{4^k} + \frac{1}{2^k} + \exp\bigl(-c\sqrt{t_j \ln t_k}\bigr),$$

$$B_2(j,k) \le \frac{2^{j/2}\sqrt{k}}{2^k} + \frac{1}{2^{k/2}},$$

which demonstrates that $\sum_{1 \le j < k \le n} B(j,k)$ is bounded as $n \to \infty$.

Finally, for $j$ sufficiently large, we have

$$P\bigl(X_{t_k - t_j} \le t_k \psi(t_k)\bigr) \le P(A_k) + \frac{3}{2^{k-j-1}}.$$

Thus, eventually,

$$C(j,k) \le P(A_j)\, P(A_k) + \frac{3\, P(A_j)}{2^{k-j-1}}.$$

Since $\sum_{1 \le j < k \le n} P(A_j)\, 2^{j-k+1} = o(\Sigma_n^2)$ as $n \to \infty$, this implies (2.3), which proves the theorem in question.

3. Proof of Theorem 1.3.

We shall begin this section with a lemma which will lead to the proof of Theorem 1.3.

Lemma 3.1. Let $1 \le \alpha \le \sqrt{t}$. Then

$$P\bigl(X_t \ge t + \alpha\sqrt{t}\bigr) = \frac{\alpha}{\sqrt{2\pi t}} \int_\alpha^\infty x^{-2} e^{-x^2/2}\, dx + O\Bigl(\frac{\alpha^2}{t}\Bigr).$$

Proof. Let $k = \lceil t + \alpha\sqrt{t}\,\rceil$ and observe that

$$P\bigl(X_t \ge t + \alpha\sqrt{t}\bigr) = P(X_t \ge k) = P(N_t = k) + \Bigl(1 - \frac{k}{t}\Bigr)\, P(N_t \ge k+1),$$

by an application of Lemma 2.1(2). We will estimate the first term on the right-hand side directly and the second term by the classical Berry-Esseen theorem. Let $\delta = k - t$. By Stirling's formula, $k! = (k/e)^k \sqrt{2\pi k}\,(1 + \varepsilon(k))$, where $k\varepsilon(k) \to 1/12$ as $k \to \infty$ (see, e.g., Artin (1964, p. 24)). Thus

$$(3.1)\qquad P(N_t = k) = \frac{1}{\sqrt{2\pi k}}\, (t/k)^k\, e^{\delta}\, \bigl(1 + O(1/t)\bigr).$$

Recall the following inequality: for $x \ge 0$,

$$\exp\bigl(kx - kx^2/2\bigr) \le (1 + x)^k \le \exp\bigl(kx - kx^2/2 + kx^3/3\bigr).$$

Therefore,

$$(k/t)^k = \exp\Bigl(\delta + \frac{\delta^2}{2t}\Bigr)\bigl(1 + O(\delta/t)\bigr).$$

Also, we have

$$\frac{1}{\sqrt{k}} = \frac{1}{\sqrt{t}}\,\bigl(1 + O(\delta/t)\bigr).$$

Inserting this into (3.1), we obtain the following:

$$P(N_t = k) = \frac{1}{\sqrt{2\pi t}} \exp\Bigl(-\frac{\delta^2}{2t}\Bigr)\bigl(1 + O(\delta/t)\bigr).$$

However, $\delta = \lceil t + \alpha\sqrt{t}\,\rceil - t$. Thus

$$\exp\Bigl(-\frac{\delta^2}{2t}\Bigr) = \exp\Bigl(-\frac{\alpha^2}{2}\Bigr)\bigl(1 + O(\alpha/\sqrt{t})\bigr).$$

Consequently,

$$(3.2)\qquad P(N_t = k) = \frac{1}{\sqrt{2\pi t}} \exp\Bigl(-\frac{\alpha^2}{2}\Bigr)\bigl(1 + O(\alpha/\sqrt{t})\bigr).$$

Let

$$\Phi(b) = \frac{1}{\sqrt{2\pi}} \int_b^\infty e^{-x^2/2}\, dx.$$

By the classical Berry-Esseen theorem (and some basic estimates), we have

$$P(N_t \ge k+1) = \Phi\bigl((k + 1 - t)/\sqrt{t}\bigr) + O(1/\sqrt{t}) = \Phi(\alpha) + O(\alpha/\sqrt{t}).$$

Moreover,

$$\frac{t - k}{t} = -\frac{\alpha}{\sqrt{t}} + O(1/t).$$

Hence

$$\Bigl(1 - \frac{k}{t}\Bigr)\, P(N_t \ge k+1) = -\frac{\alpha}{\sqrt{t}}\,\Phi(\alpha) + O(\alpha^2/t).$$

Finally, an integration by parts reveals:

$$\Phi(\alpha) = \frac{1}{\sqrt{2\pi}\,\alpha}\, e^{-\alpha^2/2} - \frac{1}{\sqrt{2\pi}} \int_\alpha^\infty x^{-2} e^{-x^2/2}\, dx.$$

Combining this with (3.2), we obtain

$$P\bigl(X_t \ge t + \alpha\sqrt{t}\bigr) = \frac{\alpha}{\sqrt{2\pi t}} \int_\alpha^\infty x^{-2} e^{-x^2/2}\, dx + O(\alpha^2/t),$$

as was to be shown.
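As an illustration of Lemma 3.1 (our numerical sketch, assuming scipy), the exact probability from Lemma 2.1 can be compared with the asymptotic main term:

```python
import math
from scipy.stats import poisson
from scipy.integrate import quad

def exact(t, alpha):
    # P(X_t >= k) = P(N_t = k) + (1 - k/t) P(N_t >= k+1), k = ceil(t + alpha sqrt(t)).
    k = math.ceil(t + alpha * math.sqrt(t))
    return poisson.pmf(k, t) + (1 - k / t) * poisson.sf(k, t)

def approx(t, alpha):
    # Leading term of Lemma 3.1: (alpha/sqrt(2 pi t)) int_alpha^inf x^{-2} e^{-x^2/2} dx.
    integral, _ = quad(lambda x: math.exp(-x * x / 2) / (x * x), alpha, math.inf)
    return alpha / math.sqrt(2 * math.pi * t) * integral

for t, alpha in [(1e4, 1.0), (1e4, 2.0), (1e6, 2.0)]:
    print(t, alpha, exact(t, alpha), approx(t, alpha))
# The ratio exact/approx tends to 1; the discrepancy is O(alpha^2/t).
```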

Now we are prepared to prove Theorem 1.3.

Proof of Theorem 1.3. By Theorem 1.2 (proved in the next section) and a classical argument of Erdős (1942), we can assume that

$$(3.3)\qquad 1.3\sqrt{\ln\ln t} < \psi(t) < 1.5\sqrt{\ln\ln t}.$$

For all $t \ge 1$, let

$$\psi_0(t) = \max\Bigl\{1,\ \psi(t) - \frac{7}{\psi(t)}\Bigr\}.$$

Observe that $t \mapsto \psi_0(t)$ is nondecreasing and $\psi_0(t) \sim \psi(t)$ as $t \to \infty$. From this and (3.3), it follows that

$$(3.4)\qquad \sqrt{\ln\ln t} < \psi_0(t) < 2\sqrt{\ln\ln t}$$

for all $t$ sufficiently large. Moreover, for all $t$ sufficiently large,

$$\exp\Bigl(-\frac{\psi_0^2(t)}{2}\Bigr) \le C \exp\Bigl(-\frac{\psi^2(t)}{2}\Bigr)$$

for some positive constant $C$. Consequently,

$$(3.5)\qquad \int_1^\infty \exp\Bigl(-\frac{\psi_0^2(t)}{2}\Bigr)\, \frac{dt}{t\psi_0(t)} < \infty.$$

Finally, for future reference, we point out that, for all $t$ sufficiently large,

$$(3.6)\qquad \psi_0(t) + \frac{6}{\psi_0(t)} \le \psi(t).$$

For each integer $m \ge 20$, let

$$t_m = \frac{m^2}{\ln\ln m}.$$

By the mean value theorem,

$$(3.7)\qquad t_{m+1} - t_m \sim \frac{2m}{\ln\ln m} \quad\text{as } m \to \infty.$$

Observe that, for all $m$ sufficiently large,

$$(3.8)\qquad \int_{t_{m-1}}^{t_m} \exp\Bigl(-\frac{\psi_0^2(t)}{2}\Bigr)\, \frac{dt}{t\psi_0(t)} \ge \exp\Bigl(-\frac{\psi_0^2(t_m)}{2}\Bigr)\, \frac{t_m - t_{m-1}}{t_m\, \psi_0(t_m)} \ge \frac{1}{m\,\psi_0(t_m)} \exp\Bigl(-\frac{\psi_0^2(t_m)}{2}\Bigr),$$

where we have used the fact that the integrand is nonincreasing and (3.7) to obtain the first and second inequalities, respectively.

By L'Hôpital's rule, as $a \to \infty$,

$$\int_a^\infty e^{-x^2/2}\, x^{-2}\, dx \sim a^{-3} e^{-a^2/2}.$$

Therefore, by Lemma 3.1, there exist positive constants $C_1$, $C_2$, $C_3$ and $C_4$ such that

$$P\bigl(X_{t_{m+1}} \ge t_{m+1} + \sqrt{t_{m+1}}\,\psi_0(t_m)\bigr) \le C_1\, \frac{\exp\bigl(-\psi_0^2(t_m)/2\bigr)}{\sqrt{t_{m+1}}\,\psi_0^2(t_m)} + C_2\, \frac{\psi_0^2(t_m)}{t_m} \le C_3\, \frac{\exp\bigl(-\psi_0^2(t_m)/2\bigr)}{m\,\psi_0(t_m)} + C_4\, \frac{(\ln\ln m)^2}{m^2},$$

where we have used (3.4) and (3.7) to obtain the second inequality. By (3.5) and (3.8), it follows that

$$\sum_{m=1}^\infty P\bigl(X_{t_{m+1}} \ge t_{m+1} + \sqrt{t_{m+1}}\,\psi_0(t_m)\bigr) < \infty.$$

Consequently, by the easy half of the Borel-Cantelli lemma,

$$X_{t_{m+1}} \le t_{m+1} + \sqrt{t_{m+1}}\,\psi_0(t_m)$$

on a set of full measure and for all $m$ sufficiently large. However, by some algebra,

$$(3.9)\qquad t_{m+1} + \sqrt{t_{m+1}}\,\psi_0(t_m) = t_m + \sqrt{t_m}\,\bigl(\psi_0(t_m) + r_m\bigr),$$

where

$$r_m = \frac{t_{m+1} - t_m}{\sqrt{t_m}} + \frac{\sqrt{t_{m+1}} - \sqrt{t_m}}{\sqrt{t_m}}\,\psi_0(t_m) = \frac{t_{m+1} - t_m}{\sqrt{t_m}}\,\bigl(1 + o(1)\bigr).$$

By (3.4) and (3.7), we have, for all $m$ sufficiently large,

$$\frac{t_{m+1} - t_m}{\sqrt{t_m}} < \frac{5}{\psi_0(t_m)}.$$

Consequently, $r_m < 6/\psi_0(t_m)$ for all $m$ large enough. By inserting this into (3.9) and recalling (3.6), we may conclude that, with probability one,

$$X_{t_{m+1}} \le t_m + \sqrt{t_m}\,\psi(t_m)$$

for all $m$ sufficiently large. Finally, given $t_m < t \le t_{m+1}$, with probability one and for $t$ sufficiently large, we obtain

$$X_t \le X_{t_{m+1}} \le t_m + \sqrt{t_m}\,\psi(t_m) \le t + \sqrt{t}\,\psi(t),$$

which is what we wished to show.

4. Proof of Theorem 1.2.

As usual, the proof of such a theorem is divided up into upper-bound and lower-bound arguments. Since $X_t \le N_t$, the upper bound for the limsup, i.e.,

$$\limsup_{t\to\infty} \frac{X_t - t}{\sqrt{2t\ln\ln t}} \le 1 \quad\text{a.s.},$$

is a trivial consequence of the law of the iterated logarithm for $N$. It therefore remains to prove the lower bound, i.e.,

$$(4.1)\qquad \limsup_{t\to\infty} \frac{X_t - t}{\sqrt{2t\ln\ln t}} \ge 1 \quad\text{a.s.}$$

It is not too difficult to see that standard proofs for obtaining such lower bounds fail. In a standard proof of the law of the iterated logarithm for, say, Brownian motion, one shows that the lower bound is achieved along geometric subsequences: these subsequences are sparse enough that the dependence amongst the samples is negligible. Let $(t_n)$ denote an increasing sequence of real numbers and, for $n \ge 1$, let

$$E_n = \bigl\{X_{t_n} \ge t_n + \sqrt{2t_n \ln\ln t_n}\bigr\}.$$

Lemma 3.1 shows that $\sum_n P(E_n) < \infty$ whenever $\sum_n t_n^{-1/2} < \infty$, which indicates that the events in question are rare; so rare that any attempt at using the independence half of the Borel-Cantelli lemma is bound to fail. Instead we shall prove (4.1) along a random subsequence, $(T_k)$. For this subsequence, $(T_k)$, we will demonstrate that infinitely often $N(T_k)$ is large (in the sense of the LIL) and eventually $N(T_k) = X(T_k)$. With this strategy in mind, let us start with some definitions. Given $L \in \mathbf{R}^1_+$, define

$$\tau(L) = \inf\{t \ge L : N_t - N_{t-L} = 0\}.$$

One of the main results of this section is Lemma 4.3, in which we obtain an exponential lower bound for the tail of the random variable

$$\frac{N_{\tau(L)} - \tau(L)}{\sqrt{\tau(L)}}.$$

In preparation for this result, we will need some preliminary lemmas. We begin by a process of decimation. More precisely, define, for all $k \ge 1$,

$$D_k = \{j\,2^{-k} : j \in \mathbf{Z}\}.$$

Define stopping times

$$\tau_k(L) = \inf\{t \in D_k \cap [L, \infty) : N_t - N_{t-L} = 0\}.$$

Of course, $\tau(L) \le \tau_k(L)$ and $\tau_{k+1}(L) \le \tau_k(L)$, for all $k \ge 0$. In fact, we have the following strong convergence result:

Lemma 4.1. For each $L > 0$, with probability one,

$$\lim_{k\to\infty} \tau_k(L) = \tau(L).$$

Proof. Let $\varepsilon > 0$ be fixed. Choose $k$ large enough that $2^{-k} < \varepsilon$. Then, by the strong Markov property, we obtain:

$$P\bigl(\tau_k(L) - \tau(L) > \varepsilon\bigr) \le P\bigl(N_{2^{-k}} > 0\bigr).$$

Since $P(N_{2^{-k}} > 0) \le 2^{-k}$, it follows that

$$\sum_{k=1}^\infty P\bigl(\tau_k(L) - \tau(L) > \varepsilon\bigr) < \infty.$$

The proof is completed by an application of the easy half of the Borel-Cantelli lemma and letting $\varepsilon \downarrow 0$ along a countable sequence.

Our next lemma is a modification of a moderate deviation inequality, which we state without proof (see, e.g., Feller (1971, p. 552)).

Lemma 4.2. Given $A \in (0,1)$, there exist $x_0 = x_0(A) > 0$, $\eta = \eta(A) > 0$ and $\rho = \rho(A) > 0$ such that, for every $L > 0$, $x \in [x_0, \rho t^{1/6}]$ and $t \ge \eta L^2$,

$$P\bigl(N_{t-L} \ge t + \sqrt{t}\,x\bigr) \ge \frac{1}{2}\exp\Bigl(-\frac{x^2}{2A}\Bigr).$$
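To see the content of Lemma 4.2 numerically (our sketch, assuming scipy; the values of $A$, $t$ and $L$ below are illustrative and are not the constants produced by the lemma), one can compare the exact tail of $N_{t-L}$ with the moderate-deviation lower bound:

```python
import math
from scipy.stats import poisson

def tail(t, L, x):
    # P(N_{t-L} >= t + sqrt(t) x): N_{t-L} is Poisson with mean t - L.
    k = math.ceil(t + math.sqrt(t) * x)
    return poisson.sf(k - 1, t - L)

A = 0.5  # illustrative choice of A in (0, 1); x must exceed some x0(A)
t, L = 1_000_000.0, 100.0  # t is far above eta * L^2 at these scales
for x in [2.0, 3.0, 4.0]:
    lower = 0.5 * math.exp(-x * x / (2 * A))
    print(x, tail(t, L, x), lower)  # the tail stays above the lower bound
```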

Our next lemma shows that such an inequality can be obtained for the stopped Poisson process $N_{\tau(L)}$.

Lemma 4.3. Fix $A \in (0,1)$ and $L > 0$. For $x_0$, $\eta$ and $\rho$ as given by Lemma 4.2, let $\gamma = (\eta L^2) \vee (x/\rho)^6$. Then, for all such $L > 0$ and all $x \ge x_0$,

$$P\bigl(N_{\tau(L)} \ge \tau(L) + \sqrt{\tau(L)}\,x\bigr) \ge \frac{1}{2}\, P\bigl(\tau(L) \ge \gamma\bigr)\, \exp\Bigl(-\frac{x^2}{2A}\Bigr).$$

Proof. We begin by recalling some facts about associated random variables. Following Esary, Proschan and Walkup (1967), we say that a collection of random variables $Y_1, \ldots, Y_n$ is associated if

$$\mathrm{Cov}\bigl(f(Y_1, \ldots, Y_n),\ g(Y_1, \ldots, Y_n)\bigr) \ge 0,$$

where $f$ and $g$ are any two measurable, coordinatewise nondecreasing functions mapping $\mathbf{R}^n$ into $\mathbf{R}$ (provided the covariance is defined). In Esary, Proschan and Walkup (1967), it is shown that independent random variables are associated. Thus, by direct application of the definition, nonnegative linear combinations of a collection of independent random variables are associated. Consequently, a finite collection of (possibly overlapping) increments of a Poisson process is associated. Whenever the collection $Y, Y_1, \ldots, Y_n$ is associated, by choosing $f$ and $g$ to be appropriate indicator functions, we obtain

$$(4.2)\qquad P\bigl(Y \ge a,\ Y_1 \ge a_1, \ldots, Y_n \ge a_n\bigr) \ge P(Y \ge a)\, P\bigl(Y_1 \ge a_1, \ldots, Y_n \ge a_n\bigr).$$

First, we will establish the desired inequality for $\tau_k(L)$; then we will take limits as $k \to \infty$. With this in mind, let $t \in D_k$ with $t > \gamma \vee L$. Define the measurable event

$$F_t = \bigl\{N_s - N_{s-L} \ge 1 \text{ if } s \in D_k \cap [L, t-L)\bigr\} \cap \bigl\{N_{t-L} - N_{s-L} \ge 1 \text{ if } s \in D_k \cap [t-L, t)\bigr\} = \{\tau_k(L) \ge t\}.$$

We note that

$$\{\tau_k(L) = t\} = F_t \cap \{N_t - N_{t-L} = 0\}.$$

Furthermore, on the event $\{\tau_k(L) = t\}$, we have $N_t = N_{t-L}$. Finally, the events $\{N_{t-L} \ge t + \sqrt{t}\,x\}$ and $F_t$ are independent of $\{N_t - N_{t-L} = 0\}$. Consequently,

$$P\bigl(N_t \ge t + \sqrt{t}\,x \bigm| \tau_k(L) = t\bigr) = P\bigl(N_{t-L} \ge t + \sqrt{t}\,x \bigm| F_t\bigr).$$

However, from (4.2) it follows that

$$P\bigl(N_{t-L} \ge t + \sqrt{t}\,x \bigm| F_t\bigr) \ge P\bigl(N_{t-L} \ge t + \sqrt{t}\,x\bigr).$$

From Lemma 4.2, by our choice of $\gamma$, this last probability is bounded below by $\frac{1}{2}\exp(-x^2/(2A))$. Thus, by the law of total probability, we arrive at the following estimate:

$$(4.3)\qquad P\bigl(N_{\tau_k(L)} \ge \tau_k(L) + \sqrt{\tau_k(L)}\,x\bigr) \ge \frac{1}{2}\, P\bigl(\tau_k(L) \ge \gamma\bigr)\, \exp\Bigl(-\frac{x^2}{2A}\Bigr).$$

By Lemma 4.1, $\tau_k(L) \to \tau(L)$ a.s. as $k \to \infty$. Since the Poisson process is right continuous, the proof of this lemma is completed upon taking limits in (4.3) as $k \to \infty$.

In the next lemma, we give the moment generating function of $\tau(L)$.

Lemma 4.4.

(1) For all $\lambda > -1$,

$$E \exp\bigl(-\lambda\tau(L)\bigr) = \frac{(\lambda+1)\, e^{-\lambda L}}{\lambda e^{L} + e^{-\lambda L}}.$$

(2) $E\tau(L) = e^L - 1$.

(3) There exists some $C > 0$ (not depending on $L$) such that, for all $a \ge 1$,

$$P\bigl(\tau(L) \le a\bigr) \le C a\, e^{-L}.$$

Proof. The proof of (1) is based on a technique for the study of patterns connected with repeated trials, as developed in Ch. XIII, 7 and 8 of Feller (1971). For simplicity, let $\tau = \tau(L)$ and, for $t \ge L$, define

$$E_t = \{\omega : N_t - N_{t-L} = 0\}.$$

By the law of total probability,

$$P(E_t) = \int P(E_t \mid \tau = s)\, P(\tau \in ds).$$

If $L \le s < t - L$, by independence,

$$P(E_t \mid \tau = s) = P(E_t) = P(N_L = 0) = e^{-L}.$$

Otherwise, we obtain

$$P(E_t \mid \tau = s) = P\bigl(N_t - N_s = 0,\ N_s - N_{t-L} = 0 \bigm| \tau = s\bigr) = e^{-(t-s)}.$$

Since $P(E_t) = \exp(-L)$, we arrive at the formula

$$(4.4)\qquad 1 = P(\tau \le t - L) + \int_{t-L}^{t} \exp\bigl(L - (t - s)\bigr)\, P(\tau \in ds).$$

Define a function $g$ by

$$g(u) = \begin{cases} e^{L-u}, & \text{if } 0 \le u < L \\ 1, & \text{if } u \ge L \\ 0, & \text{if } u < 0. \end{cases}$$

By (4.4), for all $t \ge L$, we have

$$(4.5)\qquad \int g(t-s)\, P(\tau \in ds) = 1.$$

On the other hand, if $t < L$, we have

$$(4.6)\qquad \int g(t-s)\, P(\tau \in ds) = 0.$$

Let $H(t) = \mathbf{1}_{[L,\infty)}(t)$ be the indicator function of $[L,\infty)$. Then we have shown in (4.5) and (4.6) that $H$ is a certain convolution. In particular,

$$H(t) = \int g(t-s)\, P(\tau \in ds).$$

Therefore

$$\widehat{H}(\lambda) = \widehat{g}(\lambda)\, E\exp(-\lambda\tau),$$

where, for any positive function $F$, $\widehat{F}$ denotes the Laplace transform: $\widehat{F}(\lambda) = \int_0^\infty e^{-\lambda t} F(t)\, dt$.

Since, for all $\lambda > -1$,

$$\widehat{H}(\lambda) = \lambda^{-1} e^{-\lambda L}, \qquad \widehat{g}(\lambda) = \frac{\lambda e^{L} + e^{-\lambda L}}{(\lambda+1)\lambda},$$

we obtain the Laplace transform of $\tau$ by solving:

$$E\exp(-\lambda\tau) = \frac{\widehat{H}(\lambda)}{\widehat{g}(\lambda)} = \frac{(\lambda+1)\, e^{-\lambda L}}{\lambda e^{L} + e^{-\lambda L}},$$

which is (1). Equation (2) follows by differentiating this transform and setting $\lambda$ to zero. To prove (3), we use Chebyshev's inequality, viz.

$$P(\tau \le a) = P\bigl(\exp(1 - \tau/a) \ge 1\bigr) \le e\, E\exp(-\tau/a) = e\,\frac{(1 + a^{-1})\, e^{-L/a}}{a^{-1} e^{L} + e^{-L/a}} \le 2ea \exp(-L).$$

Taking $C = 2e$, we arrive at (3). In the above, we have used the inequalities $0 \le \exp(-L/a) \le 1$ and $a + 1 \le 2a$ for $a \ge 1$.

Remark 4.4.1. A by-product of Lemma 4.4(1) is that, as $L \to \infty$, $e^{-L}\tau(L)$ converges weakly to a mean-one exponential distribution. We shall have no need for this fact.

Let $t_1 = 1$ and, for $k \ge 2$, define $t_k = t_{k-1} + 4\ln k$. We point out the elementary fact that, as $k \to \infty$, $t_k \sim 4k\ln k$. Next we define a family of stopping times, $T_k$: let

$$T_1 = T'_1 = \inf\{t \ge t_1 : N_t - N_{t-t_1} = 0\}.$$

For $k \ge 2$, let

$$T_k = \inf\{t \ge T_{k-1} + t_k : N_t - N_{t-t_k} = 0\}$$

and $T'_k = T_k - T_{k-1}$. By the strong Markov property, $\{T'_k;\ k \ge 1\}$ is an independent sequence of random variables, with $T'_k$ distributed as $\tau(t_k)$. Since $T_k = \sum_{j=1}^k T'_j$, by Lemma 4.4(2),

$$ET_k = \sum_{j=1}^k \bigl(\exp(t_j) - 1\bigr),$$

from which it is easy to verify that $ET_k \sim \exp(t_k)$ as $k \to \infty$.
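Lemma 4.4 is easy to check by simulation (our illustrative sketch, assuming numpy): $\tau(L)$ is the right endpoint of the first interarrival gap of length at least $L$, its sample mean should match $e^L - 1$, and, per Remark 4.4.1, $e^{-L}\tau(L)$ is roughly a mean-one exponential:

```python
import math
import numpy as np

def tau(L, rng):
    """First time t >= L at which the rate-one Poisson process has no
    points in (t - L, t], i.e. the right endpoint of the first gap >= L."""
    t = 0.0
    while True:
        gap = rng.exponential()
        if gap >= L:
            return t + L  # the trailing window first empties at t + L
        t += gap  # advance to the next point and keep scanning

rng = np.random.default_rng(2)
L = 3.0
samples = np.array([tau(L, rng) for _ in range(100_000)])
print(samples.mean(), math.exp(L) - 1)  # Lemma 4.4(2): E tau(L) = e^L - 1
# Remark 4.4.1: e^{-L} tau(L) is approximately Exp(1) for large L.
print(np.mean(np.exp(-L) * samples <= 1.0), 1 - math.exp(-1))
```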

Lemma 4.5. With probability one,

(1) $\lim_{k\to\infty} T_{k-1}/T_k = 0$;

(2) $\lim_{k\to\infty} \ln\ln T_k / \ln\ln ET_k = 1$;

(3) $X_{T_k} = N_{T_k}$, eventually.

Proof. Let $a_k = k^2 \exp(t_{k-1})$ and let $\varepsilon > 0$ be fixed. Then

$$P\bigl(T_{k-1} \ge \varepsilon T_k\bigr) \le P\bigl(T_{k-1} \ge \varepsilon a_k\bigr) + P\bigl(T_k \le a_k\bigr).$$

By Chebyshev's inequality and the remarks preceding the statement of the lemma,

$$P\bigl(T_{k-1} \ge \varepsilon a_k\bigr) \le \frac{ET_{k-1}}{\varepsilon a_k} \le \frac{\exp(t_{k-1})}{\varepsilon a_k}.$$

Thus, there exists some $C_1 > 0$ such that

$$P\bigl(T_{k-1} \ge \varepsilon a_k\bigr) \le C_1 k^{-2}.$$

Let us observe that $a_k^{-1}\exp(t_k) = k^2$. Therefore, by Lemma 4.4(3),

$$P\bigl(T_k \le a_k\bigr) \le P\bigl(T'_k \le a_k\bigr) \le C k^{-2}.$$

This shows that $\sum_k P(T_{k-1} \ge \varepsilon T_k) < \infty$. Thus (1) follows from the easy half of the Borel-Cantelli lemma.

To establish (2), we note that $\ln\ln ET_k \sim \ln t_k$, as $k \to \infty$. Let $\varepsilon \in (0,1)$ be fixed. By Lemma 4.4(3), we obtain:

$$(4.7)\qquad P\bigl(\ln\ln T_k \ge (1+\varepsilon)\ln t_k\bigr) = P\bigl(T_k \ge \exp(t_k^{1+\varepsilon})\bigr) \le ET_k \exp\bigl(-t_k^{1+\varepsilon}\bigr) \le 2\exp\bigl(t_k - t_k^{1+\varepsilon}\bigr),$$

for all $k$ sufficiently large. Likewise, since $T_k \ge T'_k$,

$$(4.8)\qquad P\bigl(\ln\ln T_k \le (1-\varepsilon)\ln t_k\bigr) \le C\exp\bigl(t_k^{1-\varepsilon} - t_k\bigr).$$

Since the right-hand sides of (4.7) and (4.8) are both summable in $k$, (2) follows from the easy half of the Borel-Cantelli lemma.

It remains to prove (3). First let us note that $X(T_k) = N(T_j)$ for some $1 \le j \le k$. Thus

$$P\bigl(X(T_k) \ne N(T_k)\bigr) = \sum_{j=1}^{k-1} P\bigl(X(T_k) = N(T_j)\bigr).$$

If $X(T_k) = N(T_j)$, then this would imply $N\bigl((t_k - t_j) + T_j\bigr) - N(T_j) = 0$. Thus, by the strong Markov property,

$$P\bigl(X(T_k) = N(T_j)\bigr) \le \exp\bigl(-(t_k - t_j)\bigr).$$

Consequently,

$$P\bigl(X(T_k) \ne N(T_k)\bigr) \le \exp\bigl(-(t_k - t_{k-1})\bigr) \sum_{j=1}^{k-1} \exp\bigl(-(t_{k-1} - t_j)\bigr).$$

But $t_k - t_{k-1} = 4\ln k$ and each summand is dominated by 1, so we arrive at the estimate

$$P\bigl(X(T_k) \ne N(T_k)\bigr) \le \frac{1}{k^3},$$

which certainly sums in $k$. The proof is finished by an application of the easy half of the Borel-Cantelli lemma.

We are now ready to prove Theorem 1.2. Let $p \in (0,1)$ be fixed. For every $k \ge 1$, define the event

$$A_k = \Bigl\{\omega : N_{T_k} - N_{T_{k-1}} \ge T'_k + p\sqrt{2T'_k \ln k}\Bigr\}.$$

We will show that $P(A_k \text{ i.o.}) = 1$. By the strong Markov property, $(A_k)$ are independent. Consequently, it is sufficient to prove that $\sum_k P(A_k) = \infty$. For each $k \ge 1$, let $S_k = \tau(t_k)$. By the strong Markov property,

$$P(A_k) = P\bigl(N_{S_k} \ge S_k + p\sqrt{2S_k \ln k}\bigr).$$

Let $A = p^2$ and $\gamma_k = \bigl(\eta t_k^2\bigr) \vee \bigl(p\sqrt{2\ln k}/\rho\bigr)^6$. By Lemma 4.3, we have

$$P(A_k) \ge \frac{1}{2k}\, P\bigl(S_k \ge \gamma_k\bigr).$$

It is easy to check that $\lim_{k\to\infty} P(S_k \ge \gamma_k) = 1$. Therefore, we have shown that $\sum_k P(A_k) = \infty$ and hence that $P(A_k \text{ i.o.}) = 1$. It follows that

$$\limsup_{k\to\infty} \frac{N_{T_k} - N_{T_{k-1}} - T'_k}{\sqrt{2T'_k \ln k}} \ge p, \quad\text{a.s.,}$$

and, since $p \in (0,1)$ is arbitrary, this limsup is in fact at least 1 almost surely.

By Lemma 4.5, with probability one, as $k \to \infty$,

$$\sqrt{2T'_k \ln k} \sim \sqrt{2T_k \ln\ln T_k}, \quad\text{and}\quad \frac{T_{k-1}\ln\ln T_{k-1}}{T_k \ln\ln T_k} \to 0.$$

Therefore, by the ordinary law of the iterated logarithm for $N$,

$$\lim_{k\to\infty} \frac{N_{T_{k-1}} - T_{k-1}}{\sqrt{2T_k \ln\ln T_k}} = 0, \quad\text{a.s.}$$

Hence, we have shown the following:

$$\limsup_{k\to\infty} \frac{N_{T_k} - T_k}{\sqrt{2T_k \ln\ln T_k}} \ge 1, \quad\text{a.s.}$$

Finally, by Lemma 4.5(3), we are done.

Acknowledgements. An anonymous referee provided us with several helpful suggestions, for which we are grateful.

References.

E. Artin, The Gamma Function (Holt, Rinehart and Winston, New York, 1964).

Z. D. Bai, A theorem of Feller revisited, Ann. Probab. 17 (1989), 385-395.

R. F. Bass and P. S. Griffin, The most visited site of Brownian motion and simple random walk, Z. Wahr. verw. Geb. 70 (1985), 417-436.

P. Erdős, On the law of the iterated logarithm, Ann. Math. 43 (1942), 419-436.

P. Erdős and P. Révész, On the favorite points of a random walk, Math. Structures, Comp. Math., Math. Modelling 2 (1984), 152-157.

J. Esary, F. Proschan and D. Walkup, Association of random variables with applications, Ann. Math. Stat. 38 (1967), 1466-1474.

W. Feller, An Introduction to Probability Theory and Its Applications, Vol. II, Second Edition (Wiley, New York, 1971).

S. Karlin and H. M. Taylor, A Second Course in Stochastic Processes (Academic Press, New York, 1981).

S. B. Kochen and C. Stone, A note on the Borel-Cantelli lemma, Ill. J. Math. 8 (1964), 248-251.

W. E. Pruitt, General one-sided laws of the iterated logarithm, Ann. Probab. 9 (1981), 1-48.

R. Pyke, Spacings, J. Royal Stat. Soc., Ser. B (Methodological) 27, No. 3 (1965), 395-449.

R. Pyke, Spacings revisited, Proc. of the 6th Berkeley Symp. on Math. Stat. and Prob. (1970), 417-427.

R. Pyke, The asymptotic behavior of spacings under Kakutani's model for interval subdivision, Ann. Probab. 8 (1980), 157-163.

E. Slud, Entropy and maximal spacings for random partitions, Z. Wahr. verw. Geb. 41 (1978), 341-352.

D. Khoshnevisan
Department of Mathematics
University of Utah
Salt Lake City, UT 84112
davar@math.utah.edu

T. M. Lewis
Department of Mathematics
Furman University
Greenville, SC 29613
Lewis Tom/furman@furman.edu