Gaussian Random Fields: Geometric Properties and Extremes

Yimin Xiao, Michigan State University

Outline Lecture 1: Gaussian random fields and their regularity Lecture 2: Hausdorff dimension results and hitting probabilities Lecture 3: Strong local nondeterminism and fine properties, I Lecture 4: Strong local nondeterminism and fine properties, II Lecture 5: Extremes and excursion probabilities

Lecture 1 Gaussian random fields and their regularity Introduction Construction of Gaussian random fields Regularity of Gaussian random fields A review of Hausdorff measure and dimension

1.1 Introduction. A random field X = {X(t), t ∈ T} is a family of random variables with values in a state space S, where T is the parameter set. If T ⊆ R^N and S = R^d (d ≥ 1), then X is called an (N, d) random field. Random fields arise naturally in turbulence (e.g., A. N. Kolmogorov, 1941), oceanography (M. S. Longuet-Higgins, 1953, ...), spatial statistics and spatio-temporal geostatistics (G. Matheron, 1962), and image and signal processing.

Examples: X(t, x) = the height of an ocean surface above a nominal plane at time t ≥ 0 and location x ∈ R^2. X(t, x) = wind speed at time t ≥ 0 and location x ∈ R^3. X(t, x) = the levels of d pollutants (e.g., ozone, PM 2.5, nitric oxide, carbon monoxide) measured at location x ∈ R^3 and time t ≥ 0.

Theory of random fields:
1. How to construct random fields?
2. How to characterize and analyze random fields?
3. How to estimate parameters in random fields?
4. How to use random fields to make predictions?
In this short course, we provide a brief introduction to (1) and (2).

1.2 Construction and characterization of random fields. Ways to construct random fields:
- Construct covariance functions directly. For stationary Gaussian random fields, use the spectral representation theorem; for random fields with stationary increments or intrinsic random functions, use Yaglom (1957) and Matheron (1973).
- Stochastic partial differential equations.
- Scaling limits of discrete systems.

1.2.1 Stationary random fields and their spectral representations. A real-valued random field {X(t), t ∈ R^N} is called second-order stationary if E(X(t)) ≡ m, where m is a constant, and the covariance function depends on s − t only:

E[(X(s) − m)(X(t) − m)] = C(s − t),   s, t ∈ R^N.

Note that C is positive definite: for all n ≥ 1, all t_j ∈ R^N and all complex numbers a_j ∈ C (j = 1, ..., n), we have

∑_{i=1}^n ∑_{j=1}^n a_i ā_j C(t_i − t_j) ≥ 0.
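The positive-definiteness requirement can be checked numerically. Below is a small sketch (not from the lecture; the choice C(t) = e^{−|t|} and the sample points are illustrative) verifying that the Gram matrix of a valid stationary covariance has no negative eigenvalues:

```python
import numpy as np

# A sketch (not from the slides): positive definiteness of a stationary
# covariance function, checked numerically.  We use C(t) = exp(-|t|), a
# standard valid covariance on R, and verify that the Gram matrix
# (C(t_i - t_j)) is positive semidefinite for arbitrary sample points.
rng = np.random.default_rng(0)
t = rng.uniform(0.0, 10.0, size=50)            # arbitrary sample points in R
C = np.exp(-np.abs(t[:, None] - t[None, :]))   # matrix of C(t_i - t_j)

print(np.linalg.eigvalsh(C).min() >= -1e-10)   # True: no negative eigenvalues
```

Any valid stationary covariance would pass the same check; a function failing it cannot be a covariance.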

Bochner's theorem. Theorem (Bochner, 1932). A bounded continuous function C is positive definite if and only if there is a finite Borel measure µ such that

C(t) = ∫_{R^N} e^{i⟨t, x⟩} µ(dx),   t ∈ R^N.

Spectral representation theorem. In particular, if X = {X(t), t ∈ R^N} is a centered, stationary Gaussian random field with values in R whose covariance function is the Fourier transform of µ, then there is a complex-valued Gaussian random measure W on A = {A ∈ B(R^N) : µ(A) < ∞} such that E(W(A)) = 0, E(W(A) W̄(B)) = µ(A ∩ B) and W(−A) = W̄(A), and X has the Wiener integral representation

X(t) = ∫_{R^N} e^{i⟨t, x⟩} W(dx).

The finite measure µ is called the spectral measure of X.
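A minimal sketch of the spectral representation (illustrative, not from the slides): for a discrete symmetric spectral measure the Wiener integral reduces to a finite random trigonometric sum, and the covariance can be checked empirically. The frequencies and weights below are arbitrary choices:

```python
import numpy as np

# A sketch (not from the slides): the spectral representation in the simplest
# discrete case.  Take a symmetric spectral measure mu that puts mass w_k at
# the frequency pair +/- lam_k.  The real form of the Wiener integral is then
#     X(t) = sum_k sqrt(w_k) * (xi_k cos(lam_k t) + eta_k sin(lam_k t)),
# with xi_k, eta_k i.i.d. N(0, 1), and E[X(s) X(t)] = sum_k w_k cos(lam_k (s - t)).
rng = np.random.default_rng(1)
lam = np.array([0.5, 1.0, 2.0])        # frequencies (illustrative values)
w = np.array([0.5, 0.3, 0.2])          # spectral masses; total mass = Var X(t)

n = 200_000
xi = rng.standard_normal((n, lam.size))
eta = rng.standard_normal((n, lam.size))

def X(time):
    # n independent copies of X(time), all driven by the same (xi, eta)
    return (np.sqrt(w) * (xi * np.cos(lam * time) + eta * np.sin(lam * time))).sum(axis=1)

s, t = 0.7, 2.0
emp = (X(s) * X(t)).mean()                   # empirical covariance
theory = (w * np.cos(lam * (s - t))).sum()   # C(s - t) from the spectral measure
print(emp, theory)
```

The two printed numbers agree up to Monte Carlo error, illustrating that the random measure construction reproduces C(s − t).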

The Matérn class. An important class of isotropic stationary random fields are those with the Matérn covariance function

C(t) = (1/(Γ(ν) 2^{ν−1})) (√(2ν) ‖t‖/ρ)^ν K_ν(√(2ν) ‖t‖/ρ),

where Γ is the gamma function, K_ν is the modified Bessel function of the second kind, and ρ > 0 and ν > 0 are parameters. Since the covariance function C(t) depends only on the Euclidean norm ‖t‖, the corresponding Gaussian field X is called isotropic.
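As a sanity check on the formula above (a sketch, not part of the lecture), one can evaluate the Matérn covariance with scipy and use the known special case ν = 1/2, where K_{1/2}(x) = √(π/(2x)) e^{−x} makes C collapse to the exponential covariance:

```python
import numpy as np
from scipy.special import kv, gamma

# Numerical check (a sketch, not from the slides) of the Matern covariance.
# For nu = 1/2 it reduces to exp(-|t|/rho), since K_{1/2}(x) = sqrt(pi/(2x)) e^{-x}.
def matern(t, nu, rho):
    x = np.sqrt(2.0 * nu) * np.abs(t) / rho
    return (x ** nu) * kv(nu, x) / (gamma(nu) * 2.0 ** (nu - 1.0))

t = np.linspace(0.1, 5.0, 50)
rho = 1.3
print(np.allclose(matern(t, 0.5, rho), np.exp(-t / rho)))  # True

# Unit variance: C(t) -> 1 as t -> 0, for any nu > 0.
print(abs(matern(1e-8, 1.5, rho) - 1.0) < 1e-6)  # True
```

The parameter ν controls smoothness (larger ν, smoother paths) and ρ is a range parameter.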

By the inverse Fourier transform, one can show that the spectral measure of X has the density function

f(λ) = (1/(2π)^N) · 1/(‖λ‖² + 2ν/ρ²)^{ν + N/2},   λ ∈ R^N.

Whittle (1954) showed that the Gaussian random field X can be obtained as the solution of the fractional SPDE

(2ν/ρ² − Δ)^{ν/2 + N/4} X(t) = Ẇ(t),

where Δ = ∂²/∂t_1² + ... + ∂²/∂t_N² is the N-dimensional Laplacian and Ẇ(t) is white noise.

[Figure: simulated Gaussian field, N = 2, ν = 0.25]

[Figure: simulated smooth Gaussian field, N = 2, ν = 2.5]

Recent extensions: random fields on a spatio-temporal domain. In statistics, one needs to consider random fields defined on the spatio-temporal domain R^N × R, and it is often not reasonable to assume that these random fields are isotropic. Various anisotropic random fields have been constructed (Cressie and Huang 1999; Stein 2005; Biermé et al. 2007; X. 2009; Li and X. 2011). Other extensions: multivariate (stationary) random fields; random fields on spheres and other manifolds.

1.2.2 Gaussian fields with stationary increments. Let X = {X(t), t ∈ R^N} be a centered Gaussian random field with stationary increments and X(0) = 0. Yaglom (1954) showed that, if R(s, t) = E[X(s)X(t)] is continuous, then R(s, t) can be written as

R(s, t) = ∫_{R^N} (e^{i⟨s,λ⟩} − 1)(e^{−i⟨t,λ⟩} − 1) Δ(dλ),

where Δ(dλ) is a Borel measure which satisfies

∫_{R^N} (1 ∧ ‖λ‖²) Δ(dλ) < ∞.   (1)

The measure Δ is called the spectral measure of X.

It follows that

E[(X(s) − X(t))²] = 2 ∫_{R^N} (1 − cos⟨s − t, λ⟩) Δ(dλ),

and X has the stochastic integral representation

X(t) =_d ∫_{R^N} (e^{i⟨t,λ⟩} − 1) W(dλ),

where =_d denotes equality of all finite-dimensional distributions and W(dλ) is a centered complex-valued Gaussian random measure with Δ as its control measure. Gaussian fields with stationary increments can be constructed by choosing their spectral measures.

Two examples. Example 1. If Δ has a density function

f_H(λ) = c(H, N) ‖λ‖^{−(2H+N)},

where H ∈ (0, 1) and c(H, N) > 0, then X is fractional Brownian motion with index H. It can be verified that (for a proper choice of c(H, N))

E[(X(s) − X(t))²] = 2 c(H, N) ∫_{R^N} (1 − cos⟨s − t, λ⟩)/‖λ‖^{2H+N} dλ = ‖s − t‖^{2H}.

For the last identity, see, e.g., Schoenberg (1939).

FBm X has stationary increments: for any b ∈ R^N,

{X(t + b) − X(b), t ∈ R^N} =_d {X(t), t ∈ R^N},

where =_d means equality of finite-dimensional distributions. FBm X is H-self-similar: for every constant c > 0,

{X(ct), t ∈ R^N} =_d {c^H X(t), t ∈ R^N}.
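These properties can be illustrated numerically (a sketch, not from the slides). From E[(X(s) − X(t))²] = ‖s − t‖^{2H} and X(0) = 0, the fBm covariance on R_+ is R(s, t) = (|s|^{2H} + |t|^{2H} − |s − t|^{2H})/2; the grid and the value of H below are illustrative choices:

```python
import numpy as np

# A sketch (not from the slides): the one-parameter fBm covariance
#   R(s, t) = (|s|^{2H} + |t|^{2H} - |s - t|^{2H}) / 2,
# which follows from E[(X(s) - X(t))^2] = |s - t|^{2H} and X(0) = 0.
H = 0.3
t = np.linspace(0.0, 2.0, 40)
S, T = np.meshgrid(t, t, indexing="ij")
R = 0.5 * (S ** (2 * H) + T ** (2 * H) - np.abs(S - T) ** (2 * H))

# R must be positive semidefinite to be a covariance:
print(np.linalg.eigvalsh(R).min() >= -1e-10)  # True

# Stationary increments: Var(X(t_i) - X(t_j)) depends only on |t_i - t_j|.
i, j = 30, 10
var_inc = R[i, i] + R[j, j] - 2 * R[i, j]
print(np.isclose(var_inc, abs(t[i] - t[j]) ** (2 * H)))  # True
```

The same matrix R can be fed to a Cholesky factorization to simulate fBm sample paths.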

Example 2. A large class of Gaussian fields can be obtained by letting the spectral density function satisfy (1) and

f(λ) ≍ 1/(∑_{j=1}^N |λ_j|^{β_j})^γ,   λ ∈ R^N, ‖λ‖ ≥ 1,   (2)

where (β_1, ..., β_N) ∈ (0, ∞)^N and γ > ∑_{j=1}^N 1/β_j. More conveniently, we rewrite (2) as

f(λ) ≍ 1/(∑_{j=1}^N |λ_j|^{H_j})^{Q+2},   λ ∈ R^N, ‖λ‖ ≥ 1,   (3)

where H_j = (β_j/2)(γ − ∑_{i=1}^N 1/β_i) and Q = ∑_{j=1}^N 1/H_j.

1.2.3 The Brownian sheet and fractional Brownian sheets. The Brownian sheet W = {W(t), t ∈ R_+^N} is a centered (N, d)-Gaussian field whose covariance function is

E[W_i(s) W_j(t)] = δ_{ij} ∏_{k=1}^N (s_k ∧ t_k).

When N = 1, W is Brownian motion in R^d. W is N/2-self-similar, but it does not have stationary increments.
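The covariance formula can be checked against a discrete construction of the sheet (a sketch, not from the slides; the grid size and test points are arbitrary):

```python
import numpy as np

# A sketch (not from the slides): a discrete Brownian sheet on [0,1]^2 as a
# double cumulative sum of i.i.d. Gaussian cells, plus a Monte Carlo check of
# the covariance E[W(s) W(t)] = (s_1 ^ t_1)(s_2 ^ t_2) for N = 2, d = 1.
rng = np.random.default_rng(2)
m = 10                                        # grid cells per axis
n = 50_000                                    # Monte Carlo replications
cells = rng.standard_normal((n, m, m)) / m    # each cell has variance 1/m^2
W = cells.cumsum(axis=1).cumsum(axis=2)       # W at the point ((i+1)/m, (j+1)/m) is W[:, i, j]

emp = (W[:, 4, 9] * W[:, 9, 7]).mean()        # s = (0.5, 1.0), t = (1.0, 0.8)
theory = min(0.5, 1.0) * min(1.0, 0.8)        # = 0.4
print(emp, theory)
```

For this discrete construction the covariance at grid points matches the product-of-minima formula exactly in expectation, so the two printed values agree up to Monte Carlo error.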

The fractional Brownian sheet W^H = {W^H(t), t ∈ R^N} is a mean zero Gaussian field in R with covariance function

E[W^H(s) W^H(t)] = ∏_{j=1}^N (1/2)(|s_j|^{2H_j} + |t_j|^{2H_j} − |s_j − t_j|^{2H_j}),

where H = (H_1, ..., H_N) ∈ (0, 1)^N. For all constants c > 0,

{W^H(c^E t), t ∈ R^N} =_d {c W^H(t), t ∈ R^N},

where E = (a_{ij}) is the N × N diagonal matrix with a_{ii} = 1/(N H_i) for all 1 ≤ i ≤ N and a_{ij} = 0 if i ≠ j.

1.2.4 Linear stochastic heat equation. As an example, we consider the solution of the linear stochastic heat equation

∂u/∂t (t, x) = ∂²u/∂x² (t, x) + σ Ẇ,   u(0, x) ≡ 0, x ∈ R.   (4)

It follows from Walsh (1986) that the mild solution of (4) is the mean zero Gaussian random field u = {u(t, x), t ≥ 0, x ∈ R} defined by

u(t, x) = ∫_0^t ∫_R G_{t−r}(x − y) σ W(dr dy),   t ≥ 0, x ∈ R,

where G_t(x) is the Green kernel given by

G_t(x) = (4πt)^{−1/2} exp(−x²/(4t)),   t > 0, x ∈ R.

One can verify that for every fixed x ∈ R, the process {u(t, x), t ∈ [0, T]} is a bi-fractional Brownian motion, and for every fixed t > 0, the process {u(t, x), x ∈ R} is stationary with an explicit spectral density function. This allows one to study the properties of u(t, x) in the time and space variables either separately or jointly.
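Two elementary properties of this Green kernel, checked numerically (a sketch, not part of the lecture): it is a probability density in x for each t > 0, and it satisfies the semigroup identity G_{t+s} = G_t * G_s:

```python
import numpy as np
from scipy.integrate import quad

# A sketch (not from the slides): basic properties of the heat kernel
# G_t(x) = (4 pi t)^{-1/2} exp(-x^2 / (4 t)) used in the mild solution.
def G(t, x):
    return np.exp(-x * x / (4.0 * t)) / np.sqrt(4.0 * np.pi * t)

# (1) G_t is a probability density in x for every t > 0:
mass, _ = quad(lambda x: G(0.3, x), -np.inf, np.inf)
print(abs(mass - 1.0) < 1e-8)  # True

# (2) Semigroup (Chapman-Kolmogorov) property: G_{t+s}(x) = int G_t(x-y) G_s(y) dy:
conv, _ = quad(lambda y: G(0.3, 1.1 - y) * G(0.2, y), -np.inf, np.inf)
print(abs(conv - G(0.5, 1.1)) < 1e-8)  # True
```

Both follow because G_t is the N(0, 2t) density, so convolution adds variances.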

1.3 Regularity of Gaussian random fields. Let X = {X(t), t ∈ R^N} be a random field. For each ω ∈ Ω, the function X(·, ω): R^N → R^d, t ↦ X(t, ω), is called a sample function of X. The following are natural questions: (i) When are the sample functions of X bounded, or continuous? (ii) When are the sample functions of X differentiable? (iii) How can the analytic and geometric properties of X(·) be characterized precisely?

Let X = {X(t), t ∈ T} be a centered Gaussian process with values in R, where (T, τ) is a metric space; e.g., T = [0, 1]^N or T = S^{N−1}. We define a pseudo-metric d_X(·, ·): T × T → [0, ∞) by

d_X(s, t) = {E[X(t) − X(s)]²}^{1/2}

(d_X is often called the canonical metric for X). Let D = sup_{s,t ∈ T} d_X(s, t) be the diameter of T under the pseudo-metric d_X. For any ε > 0, let N(T, d_X, ε) be the minimum number of d_X-balls of radius ε that cover T.

Dudley's theorem. H(T, ε) = log N(T, d_X, ε) is called the metric entropy of T.

Theorem 1.3.1 [Dudley, 1967]. Assume N(T, d_X, ε) < ∞ for every ε > 0. If

∫_0^D √(log N(T, d_X, ε)) dε < ∞,

then there exists a modification of X, still denoted by X, such that

E(sup_{t ∈ T} X(t)) ≤ 16√2 ∫_0^{D/2} √(log N(T, d_X, ε)) dε.   (5)
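To see why the entropy integral is finite in a typical case (a sketch, not from the slides): for fBm on [0, 1]^N the covering number N(T, d_X, ε) is of order ε^{−N/H} (as in Corollary 1.3.4 below), so the integrand behaves like √(log(1/ε)) near 0, and this is integrable:

```python
import numpy as np
from scipy.integrate import quad

# A sketch (not from the slides): the model entropy integral for fBm.
# With N(T, d_X, eps) ~ eps^{-N/H}, Dudley's integrand is of order
# sqrt((N/H) log(1/eps)), and the substitution u = log(1/eps) gives
#     int_0^1 sqrt(log(1/eps)) d eps = Gamma(3/2) = sqrt(pi)/2,
# a finite value despite the logarithmic blow-up at eps = 0.
val, _ = quad(lambda e: np.sqrt(np.log(1.0 / e)), 0.0, 1.0)
print(abs(val - np.sqrt(np.pi) / 2.0) < 1e-6)  # True
```

So the entropy condition of Theorem 1.3.1 holds for fBm, giving continuity of a modification.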

The proof of Dudley's theorem is based on a chaining argument, similar to that of Kolmogorov's continuity theorem; see Talagrand (2005), Marcus and Rosen (2007). Example: for a Gaussian random field {X(t), t ∈ T} satisfying

d_X(s, t) ≤ (log(1/‖s − t‖))^{−γ},

the sample functions are continuous if γ > 1/2. Fernique (1975) proved that (5) is also necessary if X is a Gaussian process with stationary increments. In general, (5) is not necessary for sample boundedness and continuity.

Majorizing measures. For a general Gaussian process, Talagrand (1987) proved the following necessary and sufficient condition for boundedness and continuity.

Theorem 1.3.2 [Talagrand, 1987]. Let X = {X(t), t ∈ T} be a centered Gaussian process with values in R. Suppose D = sup_{s,t ∈ T} d_X(s, t) < ∞. Then X has a modification which is bounded on T if and only if there exists a probability measure µ on T such that

sup_{t ∈ T} ∫_0^D (log(1/µ(B(t, u))))^{1/2} du < ∞.   (6)

Theorem 1.3.2 (continued). There exists a modification of X with bounded, uniformly continuous sample functions if and only if there exists a probability measure µ on T such that

lim_{ε→0} sup_{t ∈ T} ∫_0^ε (log(1/µ(B(t, u))))^{1/2} du = 0.   (7)

Uniform modulus of continuity. Theorem 1.3.3. Under the conditions of Theorem 1.3.1, there exist a random variable η ∈ (0, ∞) and a constant K > 0 such that for all 0 < δ < η,

ω_{X,d_X}(δ) ≤ K ∫_0^δ √(log N(T, d_X, ε)) dε,

where ω_{X,d_X}(δ) = sup_{s,t ∈ T, d_X(s,t) ≤ δ} |X(t) − X(s)| is the modulus of continuity of X(t) on (T, d_X).

Corollary 1.3.4. Let B^H = {B^H(t), t ∈ R^N} be a fractional Brownian motion with index H ∈ (0, 1). Then B^H has a modification, still denoted by B^H, whose sample functions are almost surely continuous. Moreover,

lim sup_{ε→0} max_{t ∈ [0,1]^N, ‖s‖ ≤ ε} |B^H(t + s) − B^H(t)| / (ε^H √(log 1/ε)) ≤ K   a.s.

Proof: Recall that d_{B^H}(s, t) = ‖s − t‖^H, so for every ε > 0,

N([0, 1]^N, d_{B^H}, ε) ≤ K (1/ε^{1/H})^N.

It follows from Theorem 1.3.3 that there exist a random variable η > 0 and a constant K > 0 such that

ω_{B^H}(δ) ≤ K ∫_0^δ √(log(1/ε^{1/H})^N) dε ≤ K δ √(log(1/δ))   a.s.

Returning to the Euclidean metric and noticing that d_{B^H}(s, t) ≤ δ if and only if ‖s − t‖ ≤ δ^{1/H} yields the desired result.

Later on, we will prove that there is a constant K ∈ (0, ∞) such that

lim sup_{ε→0} max_{t ∈ [0,1]^N, ‖s‖ ≤ ε} |B^H(t + s) − B^H(t)| / (ε^H √(log 1/ε)) = K   a.s.

This is an analogue of Lévy's uniform modulus of continuity for Brownian motion.

Differentiability. (i) Mean-square differentiability: the mean square partial derivative of X at t is defined as

∂X(t)/∂t_j = l.i.m._{h→0} (X(t + h e_j) − X(t))/h,

where e_j is the unit vector in the j-th direction. For a Gaussian field, sufficient conditions can be given in terms of the differentiability of the covariance function (Adler, 1981). (ii) Sample path differentiability: the sample function t ↦ X(t) is differentiable. This is much stronger and more useful than (i).

Sample path differentiability of X(t) can be proved by using criteria for continuity. Consider a centered Gaussian field with stationary increments whose spectral density function satisfies

f(λ) ≍ 1/(∑_{j=1}^N |λ_j|^{β_j})^γ,   λ ∈ R^N, ‖λ‖ ≥ 1,   (8)

where (β_1, ..., β_N) ∈ (0, ∞)^N and γ > ∑_{j=1}^N 1/β_j.

Differentiability. Theorem 1.3.5 (Xue and X., 2011). (i) If

β_j (γ − ∑_{i=1}^N 1/β_i) > 2,   (9)

then the partial derivative ∂X(t)/∂t_j is continuous almost surely. In particular, if (9) holds for all 1 ≤ j ≤ N, then almost surely X(t) is continuously differentiable. (ii) If

max_{1 ≤ j ≤ N} β_j (γ − ∑_{i=1}^N 1/β_i) < 2,   (10)

then X(t) is not differentiable in any direction.


1.4 A review of Hausdorff measure and dimension. Let Φ be the class of functions ϕ: (0, δ) → (0, ∞) which are right continuous and monotone increasing with ϕ(0+) = 0, and such that there exists a finite constant K > 0 with

ϕ(2s)/ϕ(s) ≤ K   for 0 < s < δ/2.

A function ϕ in Φ is often called a measure function or gauge function. For example, ϕ(s) = s^α (α > 0) and ϕ(s) = s^α log log(1/s) are measure functions.


Given ϕ ∈ Φ, the ϕ-Hausdorff measure of E ⊆ R^d is defined by

ϕ-m(E) = lim_{ε→0} inf {∑_i ϕ(2r_i) : E ⊆ ∪_{i=1}^∞ B(x_i, r_i), r_i < ε},   (11)

where B(x, r) denotes the open ball of radius r centered at x. A sequence of balls satisfying the two conditions on the right-hand side of (11) is called an ε-covering of E.


Theorem 1.4.1 [Hausdorff, etc.]. ϕ-m is a Carathéodory outer measure, and the restriction of ϕ-m to B(R^d) is a (Borel) measure. If ϕ(s) = s^d, then the restriction of ϕ-m to B(R^d) is a constant multiple of Lebesgue measure on R^d. A function ϕ ∈ Φ is called an exact (or correct) Hausdorff measure function for E if 0 < ϕ-m(E) < ∞.


If ϕ(s) = s α, we write ϕ-m(e) as H α (E). The following lemma is elementary. Lemma 1 If H α (E) <, then H α+δ (E) = 0 for all δ > 0. 2 If H α (E) =, then H α δ (E) = for all δ (0, α). 3 For any E R d, we have H d+δ (E) = 0.

The Hausdorff dimension of E is defined by

dim_H E = inf{α > 0 : H^α(E) = 0} = sup{α > 0 : H^α(E) = ∞},

with the convention sup ∅ := 0.

Lemma.
1. If E ⊆ F ⊆ R^d, then dim_H E ≤ dim_H F ≤ d.
2. (σ-stability) dim_H (∪_{j=1}^∞ E_j) = sup_{j ≥ 1} dim_H E_j.

Example: the Cantor set. Let C denote the standard ternary Cantor set in [0, 1]. At the n-th stage of its construction, C is covered by 2^n intervals of length (diameter) 3^{−n} each. Therefore, for α = log_3 2,

H^α(C) ≤ lim_{n→∞} 2^n 3^{−nα} = 1.

Thus we obtain dim_H C ≤ log_3 2 ≈ 0.6309. We will prove later that this is an equality and 0 < H^{log_3 2}(C) < ∞.
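The covering count in this argument can be made explicit in code (a sketch, not from the lecture):

```python
from math import log

# A sketch (not from the slides): the covering count behind the bound
# dim_H C <= log 2 / log 3.  At stage n the ternary Cantor set is covered by
# 2^n intervals of length 3^{-n}, so the box-counting ratio
# log N(delta) / log(1/delta) at delta = 3^{-n} equals log 2 / log 3 exactly.
def cantor_intervals(n):
    intervals = [(0.0, 1.0)]
    for _ in range(n):
        nxt = []
        for a, b in intervals:
            third = (b - a) / 3.0
            nxt += [(a, a + third), (b - third, b)]  # keep outer thirds
        intervals = nxt
    return intervals

n = 10
count = len(cantor_intervals(n))         # 2^n intervals of length 3^{-n}
estimate = log(count) / log(3 ** n)      # = log 2 / log 3
print(count == 2 ** n)                               # True
print(abs(estimate - log(2) / log(3)) < 1e-12)       # True
```

Coverings only give the upper bound; the matching lower bound needs the mass distribution argument below.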


Example: images of Brownian motion. Let B([0, 1]) be the image of Brownian motion in R^d. Lévy (1948) and Taylor (1953) proved that

dim_H B([0, 1]) = min{d, 2}   a.s.

Ciesielski and Taylor (1962) and Ray and Taylor (1964) proved that 0 < ϕ_d-m(B([0, 1])) < ∞ a.s., where

ϕ_1(r) = r,
ϕ_2(r) = r² log(1/r) log log log(1/r),
ϕ_d(r) = r² log log(1/r)   if d ≥ 3.


A method for determining upper bounds.

Lemma. Let I ⊆ R^N be a hyper-cube. If there is a constant α ∈ (0, 1) such that for every ε > 0 the function f: I → R^d satisfies a uniform Hölder condition of order α − ε on I, then for every Borel set E ⊆ I,

dim_H f(E) ≤ min{d, (1/α) dim_H E},   (12)
dim_H Gr f(E) ≤ min{(1/α) dim_H E, dim_H E + (1 − α)d},   (13)

where Gr f(E) = {(t, f(t)) : t ∈ E}.

Proof of (12). For any γ > dim_H E, there is a covering {B(x_i, r_i)} of E such that

∑_{i=1}^∞ (2r_i)^γ ≤ 1.

For any fixed ε ∈ (0, α), f(B(x_i, r_i)) is contained in a ball in R^d of radius r_i^{α−ε}, which yields a covering of f(E). Since

∑_{i=1}^∞ (r_i^{α−ε})^{γ/(α−ε)} ≤ 1,

we have dim_H f(E) ≤ γ/(α − ε). Letting ε → 0 and γ ↓ dim_H E yields (12).


A method for determining lower bounds. Let P(E) denote the family of all probability measures that are supported in E.

Lemma (Frostman, 1935). Let E be a Borel subset of R^d and let α > 0 be a constant. Then H^α(E) > 0 if and only if there exist µ ∈ P(E) and a constant K such that

µ(B(x, r)) ≤ K r^α   for all x ∈ R^d and r > 0.

Sufficiency follows from the definition of H^α(E) and the subadditivity of µ. For a proof of the necessity, see Kahane (1985).


Theorem 1.4.2 [Frostman, 1935]. Let E be a Borel subset of R^d. Suppose there exist α > 0 and µ ∈ P(E) such that

I_α(µ) := ∫∫ µ(dx) µ(dy)/‖x − y‖^α < ∞.

Then dim_H E ≥ α. Here I_α(µ) is the α-dimensional (Bessel-)Riesz energy of µ, and the α-dimensional capacity of E is

C_α(E) := [inf_{µ ∈ P(E)} I_α(µ)]^{−1}.
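A concrete instance of the energy criterion (a sketch, not from the slides): for µ = Lebesgue measure on [0, 1], the α-energy has the closed form I_α(µ) = 2/((1 − α)(2 − α)) for 0 < α < 1, so dim_H [0, 1] ≥ α for every α < 1, recovering dim_H [0, 1] = 1:

```python
from scipy.integrate import quad

# A sketch (not from the slides): the alpha-energy of Lebesgue measure on [0,1],
#   I_alpha = int_0^1 int_0^1 |x - y|^{-alpha} dx dy = 2 / ((1-alpha)(2-alpha)),
# finite for 0 < alpha < 1, checked by nested numerical integration.
alpha = 0.5

def inner(x):
    # points=[x] tells quad about the integrable singularity on the diagonal
    return quad(lambda y: abs(x - y) ** (-alpha), 0.0, 1.0, points=[x])[0]

I_alpha, _ = quad(inner, 0.0, 1.0, limit=200)
closed_form = 2.0 / ((1.0 - alpha) * (2.0 - alpha))   # = 8/3 for alpha = 1/2
print(abs(I_alpha - closed_form) < 1e-6)  # True
```

For α ≥ 1 the inner integral diverges, consistent with dim_H [0, 1] = 1.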


Proof: For λ > 0, let

E_λ = {x ∈ E : ∫ µ(dy)/‖x − y‖^α ≤ λ}.

Then µ(E_λ) > 0 for λ large enough. To show dim_H E_λ ≥ α, we take an arbitrary ε-covering {B(x_i, r_i)} of E_λ; WLOG we assume x_i ∈ E_λ. Then

λ ∑_{i=1}^∞ (2r_i)^α ≥ ∑_{i=1}^∞ (2r_i)^α ∫_{B(x_i, r_i)} µ(dy)/‖x_i − y‖^α ≥ ∑_{i=1}^∞ µ(B(x_i, r_i)) ≥ µ(E_λ).

Hence H^α(E_λ) ≥ µ(E_λ)/λ > 0.


Using Frostman's lemma, one can also prove that if α < dim_H E, then there exists a probability measure µ on E such that I_α(µ) < ∞, so C_α(E) > 0. This leads to:

Corollary. Let E be a Borel subset of R^d. Then

dim_H E = sup{α > 0 : C_α(E) > 0}.   (14)


Images of Brownian motion, continued.

Theorem 1.4.3 [Lévy, 1948; Taylor, 1953; McKean, 1955]. For any Borel set E ⊆ R_+,

dim_H B(E) = min{d, 2 dim_H E}   a.s.

It suffices to prove that dim_H B(E) ≥ min{d, 2 dim_H E}; the upper bound follows from Lemma 1.3. We need a probability measure µ on B(E) such that I_α(µ) < ∞ a.s. for every α < min{d, 2 dim_H E}. Since α/2 < dim_H E, by Frostman's lemma there exists a probability measure σ on E such that

∫_E ∫_E σ(ds) σ(dt)/|s − t|^{α/2} < ∞.


Define µ(A) := σ{t ∈ E : B(t) ∈ A} = ∫_E 1_A(B(t)) σ(dt); then µ ∈ P(B(E)). Its α-dimensional (Bessel-)Riesz energy is

I_α(µ) = ∫_E ∫_E ‖B(s) − B(t)‖^{−α} σ(ds) σ(dt),

so

E(I_α(µ)) = ∫_E ∫_E |s − t|^{−α/2} σ(ds) σ(dt) · E(‖Z‖^{−α}),

where Z is a vector of d i.i.d. N(0, 1) random variables.


Note that E(‖Z‖^{−α}) < ∞ if and only if α < d (use polar coordinates). Hence for every α < d ∧ 2 dim_H E we have I_α(µ) < ∞ a.s., and therefore

dim_H B(E) ≥ d ∧ 2 dim_H E   a.s.
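The finiteness claim for E(‖Z‖^{−α}) can be made quantitative (a sketch, not from the slides): since ‖Z‖² has a chi-square distribution with d degrees of freedom, for α < d one has the closed form E(‖Z‖^{−α}) = 2^{−α/2} Γ((d − α)/2)/Γ(d/2), which a Monte Carlo estimate reproduces:

```python
import numpy as np
from scipy.special import gamma

# A sketch (not from the slides): negative moments of |Z| for Z standard
# Gaussian in R^d.  Since |Z|^2 ~ chi^2_d, for alpha < d
#     E(|Z|^{-alpha}) = 2^{-alpha/2} Gamma((d - alpha)/2) / Gamma(d/2),
# and the expression diverges (Gamma pole) as alpha -> d, matching the
# "finite iff alpha < d" claim.
d, alpha = 3, 1.0
theory = 2.0 ** (-alpha / 2.0) * gamma((d - alpha) / 2.0) / gamma(d / 2.0)

rng = np.random.default_rng(3)
Z = rng.standard_normal((1_000_000, d))
mc = (np.linalg.norm(Z, axis=1) ** (-alpha)).mean()
print(mc, theory)
```

The two printed values agree up to Monte Carlo error; for α ≥ d the empirical mean would not stabilize.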


An upper density theorem. For any Borel measure µ on R^d and ϕ ∈ Φ, the upper ϕ-density of µ at x ∈ R^d is defined as

D̄^ϕ_µ(x) = lim sup_{r→0} µ(B(x, r))/ϕ(2r).

Theorem 1.4.5 [Rogers and Taylor, 1961]. Given ϕ ∈ Φ, there exists K > 0 such that for any Borel measure µ on R^d with 0 < ‖µ‖ := µ(R^d) < ∞ and every Borel set E ⊆ R^d, we have

K^{−1} µ(E) inf_{x ∈ E} {D̄^ϕ_µ(x)}^{−1} ≤ ϕ-m(E) ≤ K ‖µ‖ sup_{x ∈ E} {D̄^ϕ_µ(x)}^{−1}.   (15)


The Cantor set, continued. Let µ be the natural mass distribution on C; that is, µ satisfies

µ(I_{n,i}) = 2^{−n}   for all n ≥ 0 and 1 ≤ i ≤ 2^n,

where the I_{n,i} are the level-n intervals of the construction. Then for every x ∈ C and any r ∈ (0, 1),

µ(B(x, r)) ≤ K r^{log_3 2}.   (16)

Hence, with ϕ(s) = s^{log_3 2}, sup_{x ∈ C} D̄^ϕ_µ(x) ≤ K. By the above theorem, H^{log_3 2}(C) ≥ K^{−1} > 0.


Thank you!