The Codimension of the Zeros of a Stable Process in Random Scenery


Davar Khoshnevisan
The University of Utah, Department of Mathematics
Salt Lake City, UT 84105-0090, U.S.A.
davar@math.utah.edu
http://www.math.utah.edu/~davar

Summary. We show that for any α ∈ (1, 2], the (stochastic) codimension of the zeros of an α-stable process in random scenery is identically 1 − (2α)^{−1}. As an immediate consequence, we deduce that the Hausdorff dimension of the zeros of the latter process is almost surely equal to (2α)^{−1}. This solves Conjecture 5.2 of [6], thereby refining a computation of [10].

Keywords: Random walk in random scenery; stochastic codimension; Hausdorff dimension

AMS 2000 subject classification: 60K37

1 Introduction

A stable process in random scenery is the continuum limit of a class of random walks in random scenery that is described as follows. A random scenery on Z is a collection, {y(0), y(±1), y(±2), ...}, of i.i.d. mean-zero, variance-one random variables. Given a collection x = {x_1, x_2, ...} of i.i.d. random variables, we consider the usual random walk

    s_n = x_1 + ⋯ + x_n,

which leads to the following random walk in random scenery:

    w_n = y(s_1) + ⋯ + y(s_n),   n = 1, 2, ...   (1)

In words, w is obtained by summing up the values of the scenery that are encountered by the ordinary random walk s. Consider the local times {l_n^a; a ∈ Z, n = 1, 2, ...} of the ordinary random walk s:

    l_n^a = Σ_{j=1}^n 1_{{a}}(s_j),   a ∈ Z, n = 1, 2, ...

Then, one readily sees from (1) that

    w_n = Σ_{a∈Z} l_n^a y(a),   n = 1, 2, ...

(Research partially supported by grants from the NSF and NATO.)
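The local-time identity above can be checked in a short simulation. The following sketch is this editor's illustration, not part of the paper: it uses a simple (±1) random walk and a Rademacher scenery as the i.i.d. mean-zero, variance-one variables, and verifies that summing the scenery along the walk agrees with weighting the scenery by the walk's local times.

```python
import random

# Illustrative check (not from the paper) of w_n = sum_a l_n^a y(a):
# a simple random walk s in a lazily sampled Rademacher scenery y.
random.seed(1)

n = 1000
scenery = {}                        # i.i.d. scenery y(a), a in Z, sampled on demand
def y(a):
    if a not in scenery:
        scenery[a] = random.choice([-1.0, 1.0])   # mean 0, variance 1
    return scenery[a]

s, w = 0, 0.0
local_time = {}                     # l_n^a = #{ j <= n : s_j = a }
for _ in range(n):
    s += random.choice([-1, 1])     # simple random walk step
    w += y(s)                       # random walk in random scenery, eq. (1)
    local_time[s] = local_time.get(s, 0) + 1

w_from_local_times = sum(l * y(a) for a, l in local_time.items())
assert abs(w - w_from_local_times) < 1e-9
```

Both expressions are exact rearrangements of the same sum, so the assertion holds for any realization of the walk and scenery.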

As soon as s is in the domain of attraction of a stable process of index α ∈ (1, 2], one might expect its local times to approximate those of the limiting stable process. Thus, one may surmise an explicit weak limit for a renormalization of w. Reference [4] has shown that this is the case. Indeed, let S = {S(t); t ≥ 0} denote a stable Lévy process with Lévy exponent

    E[exp{iξS(1)}] = exp(−|ξ|^α [1 + iν sgn(ξ) tan(απ/2)] χ),   ξ ∈ R,   (2)

where ν ∈ [−1, 1] and χ > 0. If α ∈ (1, 2], then it is well known ([1]) that S has continuous local times; i.e., there exists a continuous process (x, t) ↦ L_t(x) such that for all Borel measurable functions f : R → R, and all t ≥ 0,

    ∫_0^t f(S(u)) du = ∫ f(a) L_t(a) da.   (3)

Then, according to [4], as long as s is in the domain of attraction of S, the random walk in random scenery w can be normalized to converge weakly to the stable process in random scenery W defined by

    W(t) = ∫ L_t(x) B(dx).   (4)

Here B = {B(t); −∞ < t < +∞} is a two-sided Brownian motion that is totally independent of the process S, and the stochastic integral above is defined in the sense of N. Wiener or, more generally, K. Itô.

References [5, 6] have established a weak notion of duality between iterated Brownian motion (i.e., B ∘ B′, where B′ is an independent Brownian motion) and Brownian motion in random scenery (i.e., the process W when α = 2). Since the level sets of iterated Brownian motion have Hausdorff dimension 3/4 ([2]), this duality suggests that when α = 2 the level sets of W ought to have Hausdorff dimension 1/4; cf. [6, Conjecture 5.2]. Reference [10] has shown that a randomized version of this assertion is true: for the α = 2 case, and for any t > 0,

    P{dim(W^{−1}{W(t)}) = 1/4} = 1,

where W^{−1}A = {s ≥ 0 : W(s) ∈ A} for any Borel set A ⊆ R. In particular, Lebesgue-almost all level sets of W have Hausdorff dimension 1/4 when α = 2. In this note, we propose to show that the preceding conjecture is true for all level sets, and has an extension to all α ∈ (1, 2].
Indeed, we offer the following stronger theorem whose terminology will be explained shortly.

Theorem 1. For any x ∈ R,

    codim(W^{−1}{x}) = 1 − (2α)^{−1}.   (5)

Consequently, if dim represents Hausdorff dimension, then

    dim(W^{−1}{x}) = (2α)^{−1},   almost surely.   (6)

To conclude the introduction, we will define stochastic codimension, following the treatment of [8]. Given a random subset, K, of R_+, we can define the lower (upper) stochastic codimension of K as the largest (smallest) number β such that for all compact sets F ⊆ R_+ whose Hausdorff dimension is strictly less (greater) than β, K cannot (can) intersect F. We write codim_*(K) and codim^*(K) for the lower and the upper stochastic codimensions of K, respectively. When they agree, we write codim(K) for their common value, and call it the (stochastic) codimension of K. Note that the upper and the lower stochastic codimensions of K are not random, although K is a random set.

2 Supporting Lemmas

We recall from [8, Theorem 2.2] and its proof that when a random set X ⊆ R has a stochastic codimension,

    codim X + dim X = 1,   P-a.s.

This shows that (5) implies (6). Thus, we will only verify (5). Throughout, P′ (E′) denotes the conditional probability measure (expectation), given the entire process S. With the above notation in mind, it should be recognized that, under the measure P′, the process W is a centered Gaussian process with covariance

    E′{W(s)W(t)} = ⟨L_s, L_t⟩,   s, t ≥ 0,   (7)

where ⟨·,·⟩ denotes the usual L²(R)-inner product. Needless to say, the above equality holds with P-probability one. In particular, P-a.s., the P′-variance of W(t) is ‖L_t‖₂², where ‖·‖_r denotes the usual L^r(R)-norm for any 1 ≤ r ≤ ∞.

By the Cauchy–Schwarz inequality, ⟨f, g⟩ ≤ ‖f‖₂ ‖g‖₂. We need the following elementary estimate for the slack in this inequality. It will translate to a P′-correlation estimate for the process W.

Lemma 1. For all f, g ∈ L¹(R) ∩ L²(R),

    ‖f‖₂² ‖g‖₂² − ⟨f, g⟩² ≥ ‖g‖₂² ‖f − g‖₂² − ‖g‖_∞² ‖f − g‖₁².

Proof. One can check the following variant of the parallelogram law on L²(R):

    ‖f‖₂² ‖g‖₂² − ⟨f, g⟩² = ‖f − g‖₂² ‖g‖₂² − ⟨f − g, g⟩²,

from which the lemma follows immediately, since ⟨f − g, g⟩² ≤ ‖g‖_∞² ‖f − g‖₁².

Now, consider the random field

    ϱ_{s,t} = ⟨L_s, L_t⟩ / ‖L_s‖₂²,   s, t ≥ 0.   (8)

Under the measure P′, {ϱ_{s,t}; s, t ≥ 0} can be thought of as a collection of constants. Then, one has the following conditional regression bound:

Lemma 2 (Conditional Regression). Fix 1 ≤ s < t ≤ 2. Then, under the measure P′, W(s) is independent of W(t) − ϱ_{s,t}W(s). Moreover, P-a.s.,

    E′{|W(t) − ϱ_{s,t}W(s)|²} ≥ ( ‖L_t − L_s‖₂² − ‖L_2‖_∞² ‖L_1‖₂^{−2} (t − s)² )₊.   (9)

Proof. The independence assertion is an elementary result in linear regression. Indeed, it follows from the conditional Gaussian distribution of the process W, together with the following consequence of (7):

    E′{W(s) [W(t) − ϱ_{s,t}W(s)]} = 0,   P-a.s.

Similarly, (conditional) regression shows that, P-a.s.,

    E′{[W(t) − ϱ_{s,t}W(s)]²} = ‖L_t‖₂² − ⟨L_s, L_t⟩² / ‖L_s‖₂².   (10)

Thanks to Lemma 1, the numerator ‖L_t‖₂² ‖L_s‖₂² − ⟨L_s, L_t⟩² is bounded below by

    ‖L_s‖₂² ‖L_t − L_s‖₂² − ‖L_s‖_∞² ‖L_t − L_s‖₁².

By the occupation density formula (3), with P-probability one, ‖L_t − L_s‖₁ = t − s. Since r ↦ L_r(x) is non-decreasing for any x ∈ R, the lemma follows from (10).

Now, we work toward showing that the right-hand side of (9) is essentially equal to the much simpler expression ‖L_t − L_s‖₂². This will be done in a few steps.

Lemma 3. If 0 ≤ s < t are fixed, then the P-distribution of ‖L_t − L_s‖₂ is the same as that of (t − s)^{1−(2α)^{−1}} ‖L_1‖₂.

Proof. Since the stable process S is Lévy, by applying the Markov property at time s, we see that the process L_t(·) − L_s(·) has the same finite-dimensional distributions as L_{t−s}(·). The remainder of this lemma follows from scaling; see [7, Lemma 5.4], for instance.
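The parallelogram-type identity used in the proof of Lemma 1 is easy to sanity-check numerically. The following discrete sketch is this editor's illustration, not part of the paper: finite vectors stand in for functions and sums for integrals, and it verifies both the exact identity and the resulting lower bound of Lemma 1 on random data.

```python
import random

# Discrete analogue (editor's illustration) of the identity behind Lemma 1:
#   |f|_2^2 |g|_2^2 - <f,g>^2 = |f-g|_2^2 |g|_2^2 - <f-g, g>^2,
# together with the bound <f-g, g>^2 <= |g|_inf^2 |f-g|_1^2.
random.seed(0)
f = [random.uniform(-1, 1) for _ in range(50)]
g = [random.uniform(-1, 1) for _ in range(50)]

dot  = lambda u, v: sum(a * b for a, b in zip(u, v))
n2sq = lambda u: dot(u, u)                  # squared l^2 norm
n1   = lambda u: sum(abs(a) for a in u)     # l^1 norm
ninf = lambda u: max(abs(a) for a in u)     # sup norm
fmg  = [a - b for a, b in zip(f, g)]        # f - g

lhs = n2sq(f) * n2sq(g) - dot(f, g) ** 2
rhs = n2sq(fmg) * n2sq(g) - dot(fmg, g) ** 2
assert abs(lhs - rhs) < 1e-9                # the exact parallelogram-type identity
# Lemma 1's conclusion, obtained by bounding the cross term:
assert lhs >= n2sq(g) * n2sq(fmg) - ninf(g) ** 2 * n1(fmg) ** 2
```

The identity follows by expanding f = (f − g) + g on the left; the check above confirms the expansion term by term on concrete data.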

Next, we introduce a somewhat rough estimate of the modulus of continuity of the infinite-dimensional process t ↦ L_t.

Lemma 4. For each η > 0, there exists a P-a.s. finite random variable V₄ such that for all 0 ≤ s < t ≤ 2,

    ‖L_t − L_s‖₂ ≤ V₄ |t − s|^{1−(2α)^{−1}−η}.

Proof. Thanks to Lemma 3, for any ν > 1, and for all 0 ≤ s < t,

    E{‖L_t − L_s‖₂^ν} = (t − s)^{ν(1−(2α)^{−1})} E{‖L_1‖₂^ν}.

On the other hand, by the occupation density formula (3),

    ‖L_1‖₂² ≤ ‖L_1‖_∞ ∫ L_1(x) dx = ‖L_1‖_∞.

According to [9, Theorem 1.4], there exists a finite c > 0 such that

    P{‖L_1‖_∞ > λ} ≤ exp(−cλ^α),   λ > 1.

The result follows from the preceding two displays, used in conjunction with Kolmogorov's continuity criterion applied to the L²(R)-valued process t ↦ L_t.

Up to an infinitesimal in the exponent, the above is sharp, as the following asserts.

Lemma 5. For each η > 0, there exists a P-a.s. finite random variable V₅ such that for all 1 ≤ s < t ≤ 2,

    ‖L_t − L_s‖₂ ≥ V₅ |t − s|^{1−(2α)^{−1}+η}.

Proof. According to [7, proof of Lemma 5.4], there exists a finite constant c > 0 such that for all λ ∈ (0, 1),

    P{‖L_1‖₂ ≤ λ} ≤ exp(−cλ^{−α}).   (11)

Combined with Lemma 3, this yields

    P{‖L_{s+h} − L_s‖₂ ≤ h^{1−(2α)^{−1}+η}} ≤ exp(−ch^{−αη}),   s ∈ [1, 2], h ∈ (0, 1).

Let F_n = {k2^{−n}; 0 ≤ k ≤ 2^{n+1}}, n = 0, 1, ... Choose and fix some number p > η^{−1} to see that

    P{ min_{s∈F_n} ‖L_{s+n^{−p}} − L_s‖₂ ≤ n^{−pγ} } ≤ (2^{n+1} + 1) exp(−cn^{αηp}),

where γ = 1 − (2α)^{−1} + η. Since p > η^{−1}, the above probability sums in n. By the Borel–Cantelli lemma, P-almost surely,

    min_{s∈F_n} ‖L_{s+n^{−p}} − L_s‖₂ ≥ n^{−pγ},   eventually.   (12)

On the other hand, for any s ∈ [1, 2], there exists s′ ∈ F_n such that |s − s′| ≤ 2^{−n}. In particular,

    inf_{s∈[1,2]} ‖L_{s+n^{−p}} − L_s‖₂ ≥ min_{s′∈F_n} ‖L_{s′+n^{−p}} − L_{s′}‖₂ − 4 sup_{0≤u,v≤2: |u−v|≤2^{−n}} ‖L_u − L_v‖₂.

We have used the inequality √x + √y ≤ √(2(x + y)) to obtain the above. Thus, by Lemma 4, and by (12), P-almost surely,

    inf_{s∈[1,2]} ‖L_{s+n^{−p}} − L_s‖₂ ≥ (1 + o(1)) n^{−pγ},   eventually.

Since t ↦ L_t(x) is non-decreasing, the preceding display implies the lemma.

3 Proof of Theorem 1

Not surprisingly, we prove Theorem 1 in two steps: first, we obtain a lower bound for codim(W^{−1}{x}); then, we establish a corresponding upper bound. In order to simplify the notation, we only work with the case x = 0; the general case follows by the very same methods.

3.1 The Lower Bound

The lower bound is quite simple, and follows readily from Lemma 4 and the following general result.

Lemma 6. If {Z(t); t ∈ [1, 2]} is almost surely Hölder continuous of some nonrandom order γ > 0, and if Z(t) has a density function that is bounded uniformly in t ∈ [1, 2], then codim_*(Z^{−1}{0}) ≥ γ.

Proof. If F ⊆ R_+ is a compact set whose Hausdorff dimension is < γ, then we are to show that almost surely, Z^{−1}{0} ∩ F = ∅. By the definition of Hausdorff dimension, and since dim(F) < γ, for any δ > 0 we can find closed intervals F_1, F_2, ... such that (i) F ⊆ ∪_{i=1}^∞ F_i; and (ii) Σ_{i=1}^∞ (diam F_i)^γ ≤ δ. Let s_j denote the left endpoint of F_j, and observe that whenever Z^{−1}{0} ∩ F_j ≠ ∅, then with P-probability one,

    |Z(s_j)| ≤ sup_{s,t∈F_j} |Z(s) − Z(t)| ≤ K_γ (diam F_j)^γ,

where K_γ is an almost surely finite random variable that signifies the Hölder constant of Z. In particular, for any M > 0,

    P{Z^{−1}{0} ∩ F ≠ ∅} ≤ Σ_{j=1}^∞ P{|Z(s_j)| ≤ M(diam F_j)^γ} + P{K_γ > M}
                          ≤ 2DM Σ_{j=1}^∞ (diam F_j)^γ + P{K_γ > M},

where D is the uniform bound on the density function of Z(t), as t varies in [1, 2]. Consequently,

    P{Z^{−1}{0} ∩ F ≠ ∅} ≤ 2DMδ + P{K_γ > M}.

Since δ is arbitrary,

    P{Z^{−1}{0} ∩ F ≠ ∅} ≤ P{K_γ > M},

which goes to zero as M → ∞. We can now turn to our

Proof (Theorem 1: Lower Bound). Since W is Gaussian under the measure P′, for any ν > 0, there exists a nonrandom and finite constant C_ν > 0 such that for all 0 ≤ s ≤ t ≤ 2,

    E′{|W(s) − W(t)|^ν} = C_ν (E′{|W(s) − W(t)|²})^{ν/2} = C_ν ‖L_t − L_s‖₂^ν.

Taking P-expectations and appealing to Lemma 3 leads to

    E{|W(s) − W(t)|^ν} = C′_ν (t − s)^{ν(1−(2α)^{−1})},

where C′_ν = C_ν E{‖L_1‖₂^ν} is finite, thanks to [9, Theorem 1.4]. By Kolmogorov's continuity theorem, with probability one, t ↦ W(t) is Hölder continuous of any order γ < 1 − (2α)^{−1}. We propose to show that the density function of W(t) is bounded uniformly for all t ∈ [1, 2]. Lemma 6 would then show that codim_*(W^{−1}{0} ∩ [1, 2]) ≥ γ for any γ < 1 − (2α)^{−1}; i.e., codim_*(W^{−1}{0} ∩ [1, 2]) ≥ 1 − (2α)^{−1}. The argument to show this readily implies that codim_*(W^{−1}{0}) ≥ 1 − (2α)^{−1}, which is the desired lower bound.

To prove the uniform boundedness assertion on the density function, f_t, of W(t), we condition first on the entire process S to obtain

    f_t(x) = (2π)^{−1/2} E{ ‖L_t‖₂^{−1} exp(−x²/(2‖L_t‖₂²)) },   t ∈ [1, 2], x ∈ R.

In particular,

    sup_{t∈[1,2]} sup_{x∈R} f_t(x) ≤ (2π)^{−1/2} E{‖L_1‖₂^{−1}},

which is finite, thanks to (11).
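The uniform density bound just used rests on a simple fact: a scale mixture of centered Gaussian densities is maximized at the origin, where it equals (2π)^{−1/2} times the expected reciprocal scale. The sketch below is this editor's illustration, not from the paper, with an arbitrary hypothetical distribution standing in for ‖L_t‖₂.

```python
import math
import random

# Editor's illustration: for a scale mixture of centered Gaussians,
#   f(x) = E[(2*pi*sigma^2)^{-1/2} * exp(-x^2 / (2*sigma^2))],
# one has sup_x f(x) = f(0) = (2*pi)^{-1/2} * E[1/sigma],
# mirroring the bound on f_t via E{|L_1|_2^{-1}} above.
random.seed(3)
sigmas = [random.uniform(0.5, 2.0) for _ in range(200)]  # hypothetical scale law

def mixture_density(x):
    return sum(math.exp(-x * x / (2 * s * s)) / (math.sqrt(2 * math.pi) * s)
               for s in sigmas) / len(sigmas)

bound = sum(1.0 / s for s in sigmas) / (len(sigmas) * math.sqrt(2 * math.pi))
for x in [-3.0, -1.0, -0.1, 0.0, 0.1, 1.0, 3.0]:
    assert mixture_density(x) <= bound + 1e-12  # bound holds, with equality at x = 0
```

Each Gaussian component peaks at x = 0, so the mixture does too; the bound is therefore attained rather than merely approached.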

3.2 The Upper Bound

We intend to show that for any x ∈ R, and for any compact set F ⊆ R_+ whose Hausdorff dimension is > 1 − (2α)^{−1},

    P{W^{−1}{x} ∩ F ≠ ∅} > 0.

It suffices to show that for all such F's,

    P′{W^{−1}{x} ∩ F ≠ ∅} > 0,   P-a.s.

As in our lower bound argument, we do this merely for x = 0 and F ⊆ [1, 2], since the general case is not much different. Henceforth, we shall fix one such compact set F without further mention.

Let P(F) denote the collection of probability measures on F, and for all µ ∈ P(F) and all ε > 0, define

    J_ε(µ) = (2ε)^{−1} ∫ 1_{{|W(s)|≤ε}} µ(ds).   (13)

We proceed to estimate the first two moments of J_ε(µ).

Lemma 7. There exists a P-a.s. finite and positive random variable V₇ such that, P-almost surely,

    lim inf_{ε→0} E′{J_ε(µ)} ≥ V₇,   for any µ ∈ P(F).

Proof. Notice the explicit calculation:

    E′{J_ε(µ)} = (2ε)^{−1} ∫_F ∫_{−ε}^{+ε} ‖L_s‖₂^{−1} (2π)^{−1/2} exp(−x²/(2‖L_s‖₂²)) dx µ(ds),

valid for all ε > 0 and all µ ∈ P(F). Since F ⊆ [1, 2], the monotonicity of local times shows that, for all ε ∈ (0, 1),

    E′{J_ε(µ)} ≥ (2π)^{−1/2} ‖L_2‖₂^{−1} exp(−ε²/(2‖L_1‖₂²)).

The lemma follows with V₇ = (2π)^{−1/2} ‖L_2‖₂^{−1} exp(−1/(2‖L_1‖₂²)), which is P-almost surely (strictly) positive, thanks to (11).

Lemma 8. For any η > 0, there exists a P-a.s. positive and finite random variable V₈ such that for all µ ∈ P(F),

    sup_{ε∈(0,1)} E′{J_ε(µ)²} ≤ V₈ ∬ |s − t|^{−1+(2α)^{−1}−η} µ(ds) µ(dt),   P-a.s.

Proof. We recall ϱ_{s,t} from (8), and observe that for any 1 ≤ s < t ≤ 2, and for all ε > 0,

    P′{|W(s)| ≤ ε, |W(t)| ≤ ε}
      = P′{|W(s)| ≤ ε, |W(t) − ϱ_{s,t}W(s) + ϱ_{s,t}W(s)| ≤ ε}
      ≤ P′{|W(s)| ≤ ε} sup_{x∈R} P′{|W(t) − ϱ_{s,t}W(s) + x| ≤ ε},

since W(s) and W(t) − ϱ_{s,t}W(s) are P′-independent; cf. Lemma 2. On the other hand, centered Gaussian laws are unimodal. Hence, the above supremum is achieved at x = 0. That is,

    P′{|W(s)| ≤ ε, |W(t)| ≤ ε} ≤ P′{|W(s)| ≤ ε} P′{|W(t) − ϱ_{s,t}W(s)| ≤ ε}.

Computing explicitly, we obtain

    sup_{s∈[1,2]} P′{|W(s)| ≤ ε} ≤ ε √(2/π) ‖L_1‖₂^{−1}.   (14)

Likewise,

    P′{|W(t) − ϱ_{s,t}W(s)| ≤ ε} ≤ ε √(2/π) (E′{|W(t) − ϱ_{s,t}W(s)|²})^{−1/2},   P-a.s.

We can combine (14) with conditional regression (Lemma 2) and Lemma 5, after a few lines of elementary calculations.

Proof (Theorem 1: Upper Bound). Given a compact set F ⊆ [1, 2] with dim(F) > 1 − (2α)^{−1}, we are to show that P′{W^{−1}{0} ∩ F ≠ ∅} > 0, P-almost surely. But, for any µ ∈ P(F), the following holds P-a.s.:

    P′{W^{−1}{0} ∩ F ≠ ∅} ≥ lim inf_{ε→0} P′{J_ε(µ) > 0} ≥ lim inf_{ε→0} (E′{J_ε(µ)})² / E′{J_ε(µ)²},

thanks to the classical Paley–Zygmund inequality ([3, p. 8]). Lemmas 7 and 8 together imply that for any η > 0, P-almost surely,

    P′{W^{−1}{0} ∩ F ≠ ∅} ≥ V₇² / ( V₈ inf_{µ∈P(F)} ∬ |s − t|^{−1+(2α)^{−1}−η} µ(ds) µ(dt) ).

Note that the random variable V₈ depends on the value of η > 0. Now, choose η such that dim(F) > 1 − (2α)^{−1} + η, and apply Frostman's theorem ([3, p. 130]) to deduce that

    inf_{µ∈P(F)} ∬ |s − t|^{−1+(2α)^{−1}−η} µ(ds) µ(dt) < +∞.

This concludes our proof.
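The Paley–Zygmund inequality invoked above states that P{Z > 0} ≥ (E Z)² / E{Z²} for any nonnegative random variable Z with E{Z²} < ∞. The toy check below is this editor's illustration (not from [3]); for the Bernoulli-type Z chosen here the bound is attained in distribution, and it also holds exactly for the empirical sample, by Cauchy–Schwarz applied to the empirical measure.

```python
import random

# Editor's illustration of the Paley-Zygmund inequality
#   P{Z > 0} >= (E Z)^2 / E(Z^2),  Z >= 0,
# for Z = 4 with probability 1/4 and Z = 0 otherwise,
# where (E Z)^2 / E(Z^2) = 1/4 = P{Z > 0} (the bound is tight).
random.seed(2)
samples = [random.choice([0.0, 0.0, 0.0, 4.0]) for _ in range(100000)]

mean     = sum(samples) / len(samples)                       # empirical E Z
mean_sq  = sum(z * z for z in samples) / len(samples)        # empirical E Z^2
frac_pos = sum(1 for z in samples if z > 0) / len(samples)   # empirical P{Z > 0}

assert frac_pos >= mean ** 2 / mean_sq - 1e-9
```

Since (Σ z_i)² = (Σ z_i 1_{z_i>0})² ≤ (Σ z_i²) · #{i : z_i > 0} by Cauchy–Schwarz, the asserted inequality holds for every sample, not just on average.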

References

1. Boylan, E. S. (1964): Local times for a class of Markov processes. Ill. J. Math., 8, 19–39
2. Burdzy, K. and Khoshnevisan, D. (1995): The level sets of iterated Brownian motion. Sém. de Probab., XXIX, 231–236, Lecture Notes in Math., 1613, Springer, Berlin
3. Kahane, J.-P. (1985): Some Random Series of Functions, second edition. Cambridge University Press, Cambridge
4. Kesten, H. and Spitzer, F. (1979): A limit theorem related to a new class of self-similar processes. Z. Wahr. verw. Geb., 50, 5–25
5. Khoshnevisan, D. and Lewis, T. M. (1999a): Stochastic calculus for Brownian motion on a Brownian fracture. Ann. Appl. Probab., 9:3, 629–667
6. Khoshnevisan, D. and Lewis, T. M. (1999b): Iterated Brownian motion and its intrinsic skeletal structure. Seminar on Stochastic Analysis, Random Fields and Applications (Ascona, 1996), 201–210, Progr. Probab., 45, Birkhäuser, Basel
7. Khoshnevisan, D. and Lewis, T. M. (1998): A law of the iterated logarithm for stable processes in random scenery. Stoch. Proc. their Appl., 74, 89–121
8. Khoshnevisan, D. and Shi, Z. (2000): Fast sets and points for fractional Brownian motion. Sém. de Probab., XXXIV, 393–416, Lecture Notes in Math., 1729, Springer, Berlin
9. Lacey, M. (1990): Large deviations for the maximum local time of stable Lévy processes. Ann. Prob., 18:4, 1669–1675
10. Xiao, Yimin (1999): The Hausdorff dimension of the level sets of stable processes in random scenery. Acta Sci. Math. (Szeged), 65:1-2, 385–395