Infinitely iterated Brownian motion

Mathematics Department, Uppsala University (joint work with Nicolas Curien)

This talk was given in June 2013, at the Mittag-Leffler Institute in Stockholm, as part of the Symposium in honour of Olav Kallenberg, http://www.math.uni-frankfurt.de/~ismi/kallenberg_symposium/ The talk was given on the blackboard. These slides were created a posteriori and represent a summary of what was presented at the symposium. The speaker would like to thank the organizers for the invitation.

Definitions

$B_1, B_2, \ldots, B_n$: (standard) Brownian motions with two-sided time.

$n$-fold iterated Brownian motion: $B_n(B_{n-1}(\cdots B_1(t)\cdots))$.

Extreme cases:
I. the BMs are independent of one another;
II. the BMs are identical: $B_1 = \cdots = B_n$, a.s. (self-iterated BM).

We are interested in case I.

Outline

Physical motivation
Background
Previous work
One-dimensional limit
Multi-dimensional limit
Exchangeability
The directing random measure and its density
Conjectures

Physical motivation

Subordination
Mixing of time and space dimensions (cf. relativistic processes)
Branching processes
Higher-order Laplacian PDEs
Modern physics problems

Standard heat equation

The standard heat equation
$$\frac{\partial u}{\partial t} = \frac{1}{2}\,\Delta u \quad \text{on } D\times[0,\infty), \qquad u(0,x) = f(x),$$
is solved probabilistically by
$$u(t,x) = E\,f(x + B(t)),$$
where $B$ is a (possibly stopped) standard BM.
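
The probabilistic representation is easy to sanity-check numerically. A minimal Monte Carlo sketch ($D = \mathbb R$, no stopping; the function name and all parameter choices are illustrative, not from the talk):

```python
import numpy as np

def heat_mc(f, x, t, n_samples=200_000, rng=np.random.default_rng(0)):
    # u(t, x) = E f(x + B(t)) with B(t) ~ N(0, t): average f over
    # Gaussian perturbations of x.
    return f(x + np.sqrt(t) * rng.standard_normal(n_samples)).mean()

# Example: f(y) = exp(-y^2) at t = 1, x = 0; the exact value is
# E exp(-B(1)^2) = 1/sqrt(1 + 2t) = 1/sqrt(3) ~ 0.577.
print(heat_mc(lambda y: np.exp(-y**2), x=0.0, t=1.0))
```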

Higher-order Laplacian

Problems of the form
$$\frac{\partial u}{\partial t} = c\,\Delta^2 u \quad \text{on } D\times[0,\infty), \qquad u(0,x) = f(x),$$
arise in vibrations of membranes. The earliest attempt to solve them probabilistically is by V. Yu. Krylov (1960).

Caveat: letting $D = \mathbb R$, $f$ a delta function at $x = 0$, and taking the Fourier transform with respect to $x$, gives
$$\hat u(t,\lambda) = \exp(-\lambda^4 t),$$
whose inverse Fourier transform is not positive, and so signed measures are needed. The program was carried out by K. Hochberg (1978).

Caveat: only finite additivity on path space is achieved.

[Figure: plot of $u(t,x)$ against $x$, showing that $u$ takes negative values.]
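
The failure of positivity is easy to see numerically. A quick quadrature sketch (time $t = 1$; the grid sizes are arbitrary choices):

```python
import numpy as np

# Inverse Fourier transform of exp(-lambda^4) by direct quadrature:
# u(x) = (1/2pi) * integral of exp(-lam^4) * cos(lam * x) dlam.
lam = np.linspace(-6.0, 6.0, 24_001)   # exp(-6^4) is utterly negligible
dlam = lam[1] - lam[0]
xs = np.linspace(0.0, 10.0, 201)
u = np.array([(np.exp(-lam**4) * np.cos(lam * x)).sum() * dlam / (2 * np.pi)
              for x in xs])
print(u.min())   # strictly negative: the "density" is a signed measure
```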

Funaki's approach

The PDE
$$\frac{\partial u}{\partial t} = \frac{1}{8}\,\frac{\partial^4 u}{\partial x^4},$$
with initial condition $f$ satisfying some UTC¹, is solved by
$$(t,x) \mapsto E\,\tilde f\big(x + \tilde B_2(B_1(t))\big),$$
where $\tilde B_2(t) = B_2(t)\,\mathbf 1_{t\ge0} + \sqrt{-1}\,B_2(t)\,\mathbf 1_{t<0}$ and $\tilde f$ is the analytic extension of $f$ from $\mathbb R$ to $\mathbb C$.

Remark: $E f(x + B_2(B_1(t)))$ does not solve the original PDE [Allouba & Zheng 2001].

¹ Unspecified Technical Condition; terminology due to Aldous.

Fractional PDEs

The density $u(t,x)$ of the $n$-fold iterate $x + B_n(B_{n-1}(\cdots B_1(t)\cdots))$ satisfies a fractional PDE of the form [Orsingher & Beghin 2004]
$$\frac{\partial^{1/2^n} u}{\partial t^{1/2^n}} = c_n\,\frac{\partial^2 u}{\partial x^2}.$$

Discrete index analogy

We know several instances of compositions of discrete-index Brownian motions (= random walks). For example, let $S(t) := X_1 + \cdots + X_t$ be a sum of i.i.d. nonnegative integer-valued RVs, and let $S_1, S_2, \ldots$ be i.i.d. copies of $S$. Then $S_n(S_{n-1}(\cdots S_1(x)\cdots))$ is the size of the $n$-th generation of a Galton-Watson process with offspring distribution the distribution of $X_1$, starting from $x$ individuals at the beginning. Other examples...
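
The Galton-Watson correspondence is immediate to simulate. A minimal sketch (the Poisson offspring law and all numbers are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

def S(x, mean=0.9):
    # S(x) = X_1 + ... + X_x with i.i.d. nonnegative integer summands
    # (here Poisson, an arbitrary choice of offspring distribution).
    return int(rng.poisson(mean, size=x).sum())

def generation_size(n, x0=10):
    # Size of generation n of the Galton-Watson process started from x0,
    # computed as the n-fold composition S_n(S_{n-1}(... S_1(x0) ...)).
    x = x0
    for _ in range(n):   # each step uses fresh randomness: a new copy of S
        x = S(x)
    return x

print([generation_size(n) for n in range(8)])
```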

Known results on n-fold iterated BM

Given a sample path of $B_2\circ B_1$, we can a.s. determine the paths of $B_2$ and $B_1$ (up to a sign) [Burdzy 1992].

$B_n\circ\cdots\circ B_1$ is not a semimartingale; its paths have finite $2^n$-variation [Burdzy].

The modulus of continuity of $B_n\circ\cdots\circ B_1$ becomes bigger as $n$ increases [Eisenbaum & Shi 1999 for $n = 2$].

As $n$ increases, the paths of $B_n\circ\cdots\circ B_1$ have smaller upper functions [Bertoin 1996 for LIL and other growth results].

Path behavior

Define $W_n := B_n\circ\cdots\circ B_1$.

[Figure: sample paths for $n = 1$ (BM), $n = 2$, $n = 3$.]

Note the self-similarity:
$$\{W_n(\alpha t),\ t\in\mathbb R\} \stackrel{(d)}{=} \{\alpha^{2^{-n}}\,W_n(t),\ t\in\mathbb R\}.$$

Define the occupation measure (on the time interval $0\le t\le1$, w.l.o.g.)
$$\mu_n(A) = \int_0^1 \mathbf 1\{W_n(t)\in A\}\,dt, \qquad A\in\mathcal B(\mathbb R),$$
which has a density (local time): $\mu_n(A) = \int_A L_n(x)\,dx$. We expect that the smoothness of $L_n$ increases with $n$ [Geman & Horowitz 1980].
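
One way to reproduce such pictures: evaluate each $B_k$ at the (sorted) values of the previous iterate, laying down independent Gaussian increments over the gaps. A sketch (grid sizes arbitrary; `bm_at` is an illustrative helper, not from the talk):

```python
import numpy as np

rng = np.random.default_rng(2)

def bm_at(points):
    # Sample a two-sided standard BM, anchored at B(0) = 0, at the given
    # points: append 0, sort, lay independent N(0, gap) increments over
    # the gaps, cumsum, then subtract the value at 0.
    pts = np.append(points, 0.0)
    order = np.argsort(pts)
    sorted_pts = pts[order]
    incs = np.sqrt(np.diff(sorted_pts)) * rng.standard_normal(len(pts) - 1)
    vals = np.concatenate([[0.0], np.cumsum(incs)])
    vals -= vals[np.searchsorted(sorted_pts, 0.0)]   # enforce B(0) = 0
    out = np.empty_like(vals)
    out[order] = vals
    return out[:-1]                                  # drop the helper point

t = np.linspace(0.0, 1.0, 10_001)
W = t.copy()
for _ in range(3):       # W_3 = B_3(B_2(B_1(t))), fresh BM each iteration
    W = bm_at(W)

# Empirical occupation measure mu_n: the normalized histogram of the path.
hist, edges = np.histogram(W, bins=100, density=True)
```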

Problems

Does the limit (in distribution) of $W_n$ exist, as $n\to\infty$? If yes, what is $W_\infty$? Does the limit of $\mu_n$ exist? What are the properties of $W_\infty$?

Convergence of random measures

Let $M$ be the space of Radon measures on $\mathbb R$ equipped with the topology of vague convergence. Let $\Omega := C(\mathbb R)^{\mathbb N}$, with $P$ the $\mathbb N$-fold product of standard Wiener measures on $C(\mathbb R)$, be the canonical probability space. A measurable $\lambda:\Omega\to M$ is a random measure. A sequence $\{\lambda_n\}_{n=1}^\infty$ of random measures converges to the random measure $\lambda$ weakly in the usual sense: for any continuous and bounded $F: M\to\mathbb R$, we have $EF(\lambda_n)\to EF(\lambda)$. Equivalently [Kallenberg, Convergence of Random Measures], $\int f\,d\lambda_n \to \int f\,d\lambda$ weakly as random variables in $\mathbb R$, for all continuous $f:\mathbb R\to\mathbb R$ with compact support (an "infinite-dimensional Wold device").

One-dimensional marginals

Let $E(\lambda)$ denote an exponential random variable with rate $\lambda$ and let $\pm E(\lambda)$ be the product of $E(\lambda)$ and an independent random sign.

Theorem. For all $t\in\mathbb R\setminus\{0\}$,
$$W_n(t) \xrightarrow[n\to\infty]{(d)} \pm E(2).$$

Corollary. Let $N_1, N_2, \ldots$ be i.i.d. standard normal random variables in $\mathbb R$. Then
$$\prod_{n=1}^{\infty} |N_n|^{2^{-(n-1)}} \stackrel{(d)}{=} E(2).$$

This is a probabilistic manifestation of the duplication formula for the gamma function:
$$\Gamma(z)\,\Gamma\Big(z+\frac12\Big) = 2^{1-2z}\,\sqrt{\pi}\,\Gamma(2z).$$
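
The corollary (as reconstructed above, with exponents $2^{-(n-1)}$ coming from $|W_n(1)| \stackrel{(d)}{=} |N_n|\,|W_{n-1}(1)|^{1/2}$) can be checked by Monte Carlo; a sketch:

```python
import numpy as np

rng = np.random.default_rng(3)
m, depth = 100_000, 40            # 2^(-39) is far below float precision

# samples ~ prod_{n=1..depth} |N_n|^(2^-(n-1)), truncating the product.
N = np.abs(rng.standard_normal((m, depth)))
exponents = 2.0 ** -np.arange(depth)          # 1, 1/2, 1/4, ...
samples = np.exp((exponents * np.log(N)).sum(axis=1))

print(samples.mean())             # should be close to 1/2, the mean of E(2)
print(np.quantile(samples, 0.5))  # should be close to log(2)/2 ~ 0.3466
```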

Higher-order marginals

Recall: $W_n$ is $2^{-n}$-self-similar. Also: $W_n(0) = 0$ and $W_n$ has stationary increments. Let $-\infty < s < t < \infty$. Then
$$W_2(t) - W_2(s) = B_2(B_1(t)) - B_2(B_1(s)) \stackrel{(d)}{=} B_2\big(B_1(t) - B_1(s)\big) \quad\text{(by conditioning on $B_1$)}$$
$$\stackrel{(d)}{=} B_2(B_1(t-s)) = W_2(t-s) \quad\text{(by conditioning on $B_2$)}.$$
By induction, this is true for all $n$. Hence, for $s,t\in\mathbb R\setminus\{0\}$, $s\ne t$, if the weak limit $(X_1, X_2)$ of $(W_n(s), W_n(t))$ exists, then it should have the properties that
$$X_1 \stackrel{(d)}{=} X_2 \stackrel{(d)}{=} X_2 - X_1,$$
each distributed as $\pm E(2)$.

The Markovian picture

$\{W_n\}_{n=1}^\infty$ is a Markov chain with values in $C(\mathbb R)$: $W_{n+1} = B_{n+1}\circ W_n$. However, a stationary distribution cannot live on $C(\mathbb R)$. So look at functionals of $W_n$: fix $(x_1,\ldots,x_p)\in\mathbb R^p_{\neq}$ and consider $\mathbf W_n := (W_n(x_1),\ldots,W_n(x_p))$. Here,
$$\mathbb R^p_{\neq} := \big\{(x_1,\ldots,x_p)\in(\mathbb R\setminus\{0\})^p : x_i\ne x_j \text{ for } i\ne j\big\}.$$
Then $\{\mathbf W_n\}_{n=1}^\infty$ is a Markov chain in $\mathbb R^p_{\neq}$ with transition kernel
$$P(x,A) = P\big((B(x_1),\ldots,B(x_p))\in A\big), \qquad x\in\mathbb R^p_{\neq},\ A\subseteq\mathbb R^p \text{ (Borel)},$$
where $B$ is a standard two-sided BM.
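
The kernel is straightforward to simulate, since $(B(x_1),\ldots,B(x_p))$ is a centered Gaussian vector with the two-sided-BM covariance $K(s,t) = (|s|+|t|-|s-t|)/2$. A sketch (the jitter is a numerical convenience; names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)

def kernel_step(x):
    # One step of the chain: sample (B(x_1), ..., B(x_p)) for a fresh
    # two-sided BM, via Cholesky of K(s,t) = (|s| + |t| - |s - t|)/2.
    x = np.asarray(x, dtype=float)
    K = 0.5 * (np.abs(x)[:, None] + np.abs(x)[None, :]
               - np.abs(x[:, None] - x[None, :]))
    L = np.linalg.cholesky(K + 1e-10 * np.eye(len(x)))
    return L @ rng.standard_normal(len(x))

x = np.array([-1.0, 0.5, 2.0])
for _ in range(30):        # iterate; the law of x approaches nu_p
    x = kernel_step(x)
print(x)
```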

Theorem. $\{\mathbf W_n\}_{n=1}^\infty$ is a positive recurrent Harris chain. There is a Lyapunov function $V:\mathbb R^p_{\neq}\to\mathbb R_+$,
$$V(x_1,\ldots,x_p) := \max_{1\le i\le p} |x_i| + \sum_{0\le i<j\le p} \frac{1}{\sqrt{|x_i - x_j|}}$$
($x_0 := 0$, by convention), such that, for universal positive constants $C_1, C_2$,
$$(P - I)V \le -C_1 V \quad\text{on } \{V > C_2\}.$$

Corollary. $\{\mathbf W_n\}_{n=1}^\infty$ has a unique stationary distribution $\nu_p$ on $\mathbb R^p_{\neq}$.
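
The drift condition can be illustrated by Monte Carlo: at a point where $V$ is large, the estimated $PV$ should be much smaller than $V$. A self-contained sketch (the form of $V$, with the square root in the pairwise terms, is the reconstruction above; the test point is arbitrary):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(5)

def V(x):
    # Lyapunov function of the theorem, with x_0 := 0 prepended.
    x = np.concatenate([[0.0], np.asarray(x, dtype=float)])
    return np.abs(x).max() + sum(1.0 / np.sqrt(abs(a - b))
                                 for a, b in combinations(x, 2))

def bm_vector(x):
    # (B(x_1), ..., B(x_p)) for a two-sided BM, via its covariance matrix.
    K = 0.5 * (np.abs(x)[:, None] + np.abs(x)[None, :]
               - np.abs(x[:, None] - x[None, :]))
    L = np.linalg.cholesky(K + 1e-10 * np.eye(len(x)))
    return L @ rng.standard_normal(len(x))

x = np.array([4.0, 9.0, 25.0])                     # a point with large V
pv = np.mean([V(bm_vector(x)) for _ in range(20_000)])
print(pv, V(x))                                    # expect PV(x) << V(x)
```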

The family $\nu_1, \nu_2, \ldots$ is consistent:
$$\int_{y\in\mathbb R} \nu_{p+1}(dx_1\cdots dx_{k-1}\,dy\,dx_k\cdots dx_p) = \nu_p(dx_1\cdots dx_p).$$
By Kolmogorov's extension theorem, there exists a unique probability measure $\nu$ on $\mathbb R^{\mathbb R}$ (product $\sigma$-algebra) consistent with all the $\nu_p$. Also, $\nu_1 \stackrel{(d)}{=} \pm E(2)$. Define $\{W_\infty(x),\ x\in\mathbb R\}$, a family of random variables (a random element of $\mathbb R^{\mathbb R}$ with the product $\sigma$-algebra), such that
$$\big(W_\infty(x_1),\ldots,W_\infty(x_p)\big) \stackrel{(d)}{=} \nu_p \quad \text{whenever } x = (x_1,\ldots,x_p)\in\mathbb R^p_{\neq},$$
letting $W_\infty(0) = 0$. Then
$$W_n \xrightarrow[n\to\infty]{\text{fidis}} W_\infty.$$

Properties

If $x, y, 0$ are distinct,
$$W_\infty(x) \stackrel{(d)}{=} W_\infty(y) \stackrel{(d)}{=} W_\infty(x) - W_\infty(y) \stackrel{(d)}{=} \pm E(2).$$

If $(x_1,\ldots,x_p)\in\mathbb R^p_{\neq}$ and $1\le l\le p$, then
$$\big(W_\infty(x_i) - W_\infty(x_l)\big)_{1\le i\le p,\ i\ne l} \stackrel{(d)}{=} \nu_{p-1}.$$

The collection $(W_\infty(x),\ x\in\mathbb R\setminus\{0\})$ is an exchangeable family of random variables: its law is invariant under permutations of finitely many coordinates. By the de Finetti/Ryll-Nardzewski/Hewitt-Savage theorem [Kallenberg, Foundations of Modern Probability, Theorem 11.10], these random variables are i.i.d., conditional on the invariant $\sigma$-algebra.

Exchangeability and directing random measure

Recall that
$$\mu_n(A) = \int_0^1 \mathbf 1\{W_n(t)\in A\}\,dt, \qquad A\in\mathcal B(\mathbb R),$$
is the occupation measure of the $n$-th iterated process.

Theorem. $\mu_n$ converges weakly (in the space $M$) to a random measure $\mu_\infty$. Moreover, $\mu_\infty$ takes values in the set $M_1\subset M$ of probability measures.

Theorem. Let $\mu_\infty$ be a random element of $M_1$ with distribution as specified by the weak limit above. Conditionally on $\mu_\infty$, let $\{V_\infty(x),\ x\in\mathbb R\setminus\{0\}\}$ be a collection of i.i.d. random variables, each with distribution $\mu_\infty$. Then
$$\{W_\infty(x),\ x\in\mathbb R\setminus\{0\}\} \stackrel{\text{(fidis)}}{=} \{V_\infty(x),\ x\in\mathbb R\setminus\{0\}\}.$$

Intuition

Here are some non-rigorous statements: The limiting process ("infinitely iterated Brownian motion") is merely a collection of independent and identically distributed random variables with a random common distribution. (Exclude the origin!) Each $W_n$ is short-range dependent, but the limit is long-range dependent; the long-range dependence, however, is due to the a priori unknown parameter $\mu_\infty$. Whereas $W_n(t)$ grows roughly like $O(t^{1/2^n})$ for large $t$, the limit $W_\infty(t)$ is bounded (explanation coming up).

Properties of $\mu_\infty$

$\mu_\infty$ has bounded support, almost surely. Let
$$\hat\mu_\infty(\omega,\xi) := \int_{\mathbb R} \exp(\sqrt{-1}\,\xi x)\,\mu_\infty(\omega,dx).$$
Then
$$E\,\hat\mu_\infty(\cdot,\xi) = \int_\Omega \hat\mu_\infty(\omega,\xi)\,P(d\omega) = \frac{4}{4+\xi^2}.$$
The density $L_\infty(\omega,x)$ of $\mu_\infty(\omega,dx)$ exists, for $P$-a.e. $\omega$. We may think of $L_\infty(x)$ as the local time at level $x$, on the time interval $0\le t\le1$, of the limiting process. (This is not a rigorous statement.)
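
By Fubini, $E\,\hat\mu_n(\xi) = \int_0^1 E\,e^{\sqrt{-1}\,\xi W_n(t)}\,dt$, so the identity can be probed through the one-dimensional marginals, which are easy to sample via the conditional recursion $W_k(t)\,|\,W_{k-1}(t)=s \sim N(0,|s|)$. A sketch:

```python
import numpy as np

rng = np.random.default_rng(6)
m, n, t = 500_000, 30, 1.0

# Sample W_n(t): iterate x -> sqrt(|x|) * Z with fresh standard normals Z.
x = np.full(m, t)
for _ in range(n):
    x = np.sqrt(np.abs(x)) * rng.standard_normal(m)

# The law is symmetric, so the characteristic function is E cos(xi * W_n(t)).
for xi in (0.5, 1.0, 2.0):
    print(xi, np.cos(xi * x).mean(), 4 / (4 + xi**2))
```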

Properties of $L_\infty$

$L_\infty$ is a.s. continuous. $\int_{\mathbb R} L_\infty(x)^q\,dx < \infty$, a.s., for all $1\le q<\infty$. For all small $\varepsilon>0$, the density $L_\infty$ is locally $(1/2-\varepsilon)$-Hölder continuous.

Oscillation

Let
$$\Delta_n(t) := \sup_{0\le s,u\le t} |W_n(s) - W_n(u)|$$
be the oscillation of the $n$-th iterated process $W_n$ on the time interval $[0,t]$.

Theorem. The limit in distribution of the random variable $\Delta_n(t)$, as $n\to\infty$, exists and is a random variable which does not depend on $t$:
$$\Delta_n(t) \xrightarrow[n\to\infty]{(d)} \prod_{i=0}^{\infty} D_i^{2^{-i}},$$
where $D_0, D_1, \ldots$ are i.i.d. copies of $\Delta_1(1)$ (the oscillation of a standard BM on the time interval $[0,1]$).
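
The limit law can be sampled by truncating the product; each factor $D_i$ is the range of a BM on $[0,1]$, here approximated on a discrete grid (all sizes are arbitrary, and discretization slightly underestimates the true range):

```python
import numpy as np

rng = np.random.default_rng(7)
m, depth, steps = 5_000, 25, 1_000

def bm_range(m, steps):
    # Range (max - min) over [0, 1] of a discretized standard BM; m samples.
    paths = np.cumsum(rng.standard_normal((m, steps)) / np.sqrt(steps), axis=1)
    return paths.max(axis=1) - paths.min(axis=1)

# log of prod_{i=0..depth-1} D_i^(2^-i), with independent D_i in each factor.
log_osc = sum(2.0**(-i) * np.log(bm_range(m, steps)) for i in range(depth))
osc = np.exp(log_osc)
print(osc.mean(), np.quantile(osc, [0.1, 0.5, 0.9]))
```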

Joint distributions

Recall that, for $s,t\in\mathbb R\setminus\{0\}$, $s\ne t$, the joint law $\nu_2$ of $(W_\infty(s), W_\infty(t))$ satisfies the remarkable property
$$W_\infty(s) \stackrel{(d)}{=} W_\infty(t) \stackrel{(d)}{=} W_\infty(s) - W_\infty(t) \stackrel{(d)}{=} \pm E(2).$$
We have no further information on what this two-dimensional law is. The following scatterplot² gives an idea of the level sets of the joint density.

[Scatterplot of a simulation of $(W_\infty(s), W_\infty(t))$.]

² Thanks to A. Holroyd for the simulation!
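
A scatterplot of this kind can be reproduced by running many independent copies of the $\mathbb R^2$-valued chain $(W_k(s), W_k(t))$ for a moderate number of iterations; a sketch (the conditional-Gaussian update implements the two-sided-BM covariance $K(a,b) = (|a|+|b|-|a-b|)/2$; all sizes are arbitrary):

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(8)
m, n, s, t = 50_000, 25, 1.0, 2.0

a, b = np.full(m, s), np.full(m, t)
for _ in range(n):
    # Next pair: (B(a), B(b)) for a fresh two-sided BM B.
    cov = 0.5 * (np.abs(a) + np.abs(b) - np.abs(a - b))
    var_a = np.maximum(np.abs(a), 1e-300)
    a2 = np.sqrt(np.abs(a)) * rng.standard_normal(m)
    # Given B(a) = a2: mean (cov/var_a) * a2, variance |b| - cov^2/var_a.
    mean_b = cov / var_a * a2
    var_b = np.maximum(np.abs(b) - cov**2 / var_a, 0.0)
    a, b = a2, mean_b + np.sqrt(var_b) * rng.standard_normal(m)

plt.scatter(a, b, s=0.2, alpha=0.2)
plt.show()
```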