LECTURE 2: LOCAL TIME FOR BROWNIAN MOTION

We will define local time for one-dimensional Brownian motion and deduce some of its properties. We will then use the generalized Ray-Knight theorem proved in Lecture 1 in order to obtain the classical result (proved originally, and separately, by Ray and Knight).

1. Construction of local time

Throughout, we let B_t denote a standard one-dimensional Brownian motion, which may start at B_0 = a with a arbitrary. The occupation measure associated with it is the collection of random measures

μ_t(A) = ∫_0^t 1_{B_s ∈ A} ds.

Lemma 1.1. For each t, μ_t ≪ Leb.

Proof. By regularity of Leb, it is enough to show that a.s., for μ_t-a.e. x,

liminf_{r→0} μ_t(B(x, r))/|B(x, r)| < ∞.

A sufficient condition for the latter is that

(1)   E ∫ liminf_{r→0} μ_t(B(x, r))/|B(x, r)| dμ_t(x) < ∞.

By Fatou, the left side of (1) is bounded above by

liminf_{r→0} (1/2r) E ∫ μ_t(B(x, r)) dμ_t(x)
   = liminf_{r→0} (1/2r) E ∫_0^t du ∫_0^t 1_{B_u − r ≤ B_s ≤ B_u + r} ds
   = liminf_{r→0} (1/2r) E ∫_0^t du ∫_0^t 1_{|B_u − B_s| ≤ r} ds
   = liminf_{r→0} (1/2r) ∫_0^t du ∫_0^t P(|B_u − B_s| ≤ r) ds
   ≤ C ∫_0^t ∫_0^t du ds/√|u − s| < ∞,

where the last inequality uses P(|B_u − B_s| ≤ r) ≤ Cr/√|u − s|.

Note that this proof does not work for d ≥ 2. For d = 2, a renormalization procedure will be needed in order to define local time. No such procedure is possible for d ≥ 3.

By Lemma 1.1, the occupation measure μ_t has a density on R. To get access to it, we begin by introducing a related quantity, the number of downcrossings of an interval around 0.
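A minimal numerical sketch of Lemma 1.1 (not part of the notes; it assumes Python with numpy, a single discretized path with step dt = t/n, and arbitrary illustrative values of t, n, r and the seed): it estimates μ_t(B(0, r))/|B(0, r)| for a few shrinking radii r on one sampled path. The ratio should roughly stabilize as r decreases, which is the absolute continuity of μ_t in action; the stable value is (up to discretization error) the density of μ_t at 0, i.e., the local time constructed below.

import numpy as np

rng = np.random.default_rng(0)
t, n = 1.0, 200_000
dt = t / n
B = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))])  # discretized Brownian path on [0, t]

for r in (0.2, 0.1, 0.05, 0.02):
    occ = dt * np.count_nonzero(np.abs(B) <= r)   # mu_t(B(0, r)): time spent in [-r, r]
    print(r, occ / (2 * r))                       # mu_t(B(0, r)) / |B(0, r)|, roughly stable in r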

Definition by picture: (the figure, omitted here, depicts the successive times τ_1 < τ_2 < ... at which a downcrossing of the interval (a, b) is completed, i.e., at which the path, having reached b, next returns to a). Let

D(a, b, t) = max{i : τ_i ≤ t}

denote the number of downcrossings of (a, b) by time t.

Theorem 1.2. There exists a stochastic process L^0(t) so that, for any sequences a_n ↑ 0 and b_n ↓ 0 with a_n < 0 < b_n, we have

(2)   2(b_n − a_n) D(a_n, b_n, t) →_{n→∞} L^0(t),   a.s.

Further, L^0(t) is a.s. Hölder(γ) for every γ < 1/2.

We call L^0(t) = L^0_B(t) the local time at 0 of the Brownian motion B. The next theorem gives an alternative representation of L^0(t).

Theorem 1.3. For any sequences a_n ↑ 0 and b_n ↓ 0 with a_n < 0 < b_n,

(3)   (1/(b_n − a_n)) ∫_0^t 1_{a_n ≤ B_s ≤ b_n} ds →_{n→∞} L^0(t),   a.s.

Note that Theorem 1.3 represents the local time at 0 as the Lebesgue derivative of μ_t at 0. We postpone the proofs of Theorems 1.2 and 1.3 until after we derive the Ray-Knight theorem.

Since Theorem 1.2 is stated with an arbitrary starting point a, we may define the local time at a by L^a(t) = L^0_{B−a}(t). We call the pair (a, ω) good if the conclusions of Theorems 1.2 and 1.3 hold. Since the construction works for any a, we have that

P ⊗ Leb({(ω, a) : (a, ω) is not good}) = 0.

In particular, it follows by Fubini that the conclusions of these theorems hold a.s. for Lebesgue-almost every a. This observation, together with the Lebesgue differentiation theorem, immediately gives the following.

Corollary 1.4. For any bounded measurable function g,

∫_R g(a) dμ_t(a) = ∫_0^t g(B_s) ds = ∫_R g(a) L^a(t) da,   a.s.

2. Brownian local times and random walk - a dictionary

Fix R, N ∈ N and set G_{R,N} := Z ∩ [−RN, RN]. Set weights W_{i,i+1} = 1, i = −RN, ..., RN − 1. Consider both the discrete time (DRW) and continuous time (CRW) random walks on G_{R,N}. Note that the DRW can be coupled with the Brownian motion B_s (started at 0 and reflected at ±R) in a natural way, using lattice spacing 1/N. Note also that the DRW and CRW are naturally coupled, but we never couple the triple (B_s, DRW, CRW).

We introduce the discrete local time

L^z_{k,R} = # visits of the DRW to site z by (discrete) time k.

Remark 2.1. Let T_k(z) denote the total time spent by the CRW at site z by the time of the k-th jump from z. If |z| < RN then T_k(z)/k → 1/2 (since the jump rate of the CRW is 2) and in fact, for such z,

(4)   P(|T_k(z)/k − 1/2| > δ) ≤ e^{−I(δ)k}.

One can then apply a union bound and see that there exist ε_N → 0 and δ_N → 0 so that |T_k(z)/k − 1/2| < ε_N simultaneously for all |z| < RN and all k > δ_N N, with probability approaching 1 as N → ∞ with R fixed. (One can also allow R → ∞ with N, but we will not need that.)

We choose R large and let τ_R denote the hitting time of {−R, R} by the Brownian motion. Let D_N(x, t) denote the number of downcrossings from ([xN] + 1)/N to [xN]/N by time t. Let T(N, t) denote the total number of steps of the coupled DRW by (Brownian) time t. The coupling of the BM to the DRW gives that, for x which is not a multiple of 1/N,

(5)   D_N(x, t) = # downcrossings of the DRW from [xN] + 1 to [xN] by time T(N, t)
              ≈ (1/2) # visits to [xN] by time T(N, t) = (1/2) L^{[xN]}_{T(N,t),R},

where here and later ≈ is in the sense of Remark 2.1. Since T(N, t)/N^2 ≈ t, the last term in the RHS of (5) is

≈ (1/2) L^{[xN]}_{tN^2,R} =: (N/2) L_{R,N}(x, t).

Note that for a.e. irrational x (which are never multiples of 1/N), we have that L_{R,N}(x, t ∧ τ_R) → L^x(t ∧ τ_R), by Theorem 1.2 (at least on a subsequence of N which guarantees monotonicity). On the other hand, for the CRW, the Green function (of the walk killed at 0) is, for 0 < x, y < R,

G_R(xN, yN) = (x ∧ y) N,

since for 0 < y ≤ x

E_x Σ_{i=0}^{τ_0} 1_{X_i = y} = E_y Σ_{i=0}^{τ_0} 1_{X_i = y} = 1/P_y(τ_0 < τ_y^+) = 2yN,

and λ_z = 2 for |z| < RN. In particular, we have that φ_{[xN]}/√N →_d W_x, where W_x, |x| < R, is a two-sided Brownian motion.
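The following sketch ties the dictionary to Theorems 1.2 and 1.3 numerically (it is not part of the notes; it assumes Python with numpy, a single sampled walk, illustrative values N = 200, t = 1, m = 10 and a fixed seed, and it ignores the reflection at ±R, which a walk of length tN^2 typically does not feel for moderate R). For a simple random walk with tN^2 steps and lattice spacing 1/N, the rescaled number of visits to 0, the downcrossing counts of Theorem 1.2, and the normalized occupation time of Theorem 1.3 should all be close to one another; each approximates the local time L^0(t) of the approximating Brownian motion.

import numpy as np

rng = np.random.default_rng(1)
N, t = 200, 1.0                                           # lattice spacing 1/N; run for t*N^2 steps
S = np.concatenate([[0], np.cumsum(rng.integers(0, 2, size=int(t * N * N)) * 2 - 1)])

visits0 = np.count_nonzero(S == 0)                        # discrete local time at 0
down01 = np.count_nonzero((S[:-1] == 1) & (S[1:] == 0))   # downcrossings of (0, 1/N), cf. (5)

m = 10                                                    # the interval (-m/N, m/N) around 0
D, armed = 0, False
for s in S:                                               # downcrossings of (-m/N, m/N)
    if s >= m:
        armed = True
    elif s <= -m and armed:
        D, armed = D + 1, False
occ = np.count_nonzero(np.abs(S) <= m) / (N * N)          # occupation time of [-m/N, m/N]

print(visits0 / N)            # dictionary: (# visits to 0)/N ~ L^0(t)
print(2 * down01 / N)         # (5): downcrossings of (0, 1/N) ~ (1/2) * visits to 0
print(2 * (2 * m / N) * D)    # Theorem 1.2 with (a, b) = (-m/N, m/N)
print(occ / (2 * m / N))      # Theorem 1.3: normalized occupation time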

3. The classical Ray-Knight theorem

The object on the left side of the generalized discrete Ray-Knight theorem can be written, in our notation, as

(1/2) L^{[xN]}_{2k,R} + (1/2) φ^2_{[xN]},   with 2k such that (1/2) L^0_{2k,R} = u.

(The factor 1/2 arises in translating number of visits to downcrossings.) Take u = sN. Take R large enough so that, with high probability, the hitting time of ±R is larger than the time it takes to accumulate occupation measure sN at 0. Divide by N and take limits to get that the above converges (in the sense of finite dimensional distributions) to

(1/2)(L^x(θ_s) + W_x^2).

On the other hand, the right side converges to (1/2)(W_x + √s)^2. This works for any finite collection of irrational x, and extends by continuity to all x. We thus obtained:

Theorem 3.1 (Generalized Ray-Knight for Brownian motion). Let θ_u = inf{t : L^0(t) = u}. Then,

(L^x(θ_u) + W_x^2)_x =_d ((W_x + √u)^2)_x.

4. Bessel processes and the classical (second) Ray-Knight theorem

A squared Bessel process of parameter δ started at z (denoted BESQ^δ(z)) is the solution of the stochastic differential equation

(6)   dZ_t = 2 √(Z_t) dW_t + δ dt,   Z_0 = z.

(That a unique strong, positive solution exists is not completely trivial, but in the cases of interest to us (δ = 0, 1) we show it below.) The reason for the name is as follows. If B_t is a d-dimensional Brownian motion then Ito's lemma implies that

d|B_t|^2 = 2 Σ_{i=1}^d B^i_t dB^i_t + d dt.

But Σ_{i=1}^d ∫_0^t B^i_s dB^i_s equals in distribution ∫_0^t |B_s| dW_s for a one-dimensional Brownian motion W. Thus, the square of the modulus of d-dimensional Brownian motion is a BESQ^δ process with δ = d. For δ = 1, this also gives a strong solution to (6). Weak uniqueness of a positive solution can then be deduced by taking square roots, and strong uniqueness follows from weak uniqueness + strong existence. To see a strong solution for δ = 0, note that a strong solution exists up to τ_0, the first hitting time of 0, and then extend it by setting Z_t = 0 for all t > τ_0.

Lemma 4.1. Let X_t be a BESQ^δ(y) and let Y_t be an independent BESQ^{δ′}(y′). Then Z_t = X_t + Y_t is a BESQ^{δ+δ′}(y + y′).

Proof. Write

dX_t = 2 √(X_t) dW_t + δ dt,   dY_t = 2 √(Y_t) dW′_t + δ′ dt,

where W and W′ are independent Brownian motions. From Ito's lemma we get

dZ_t = 2(√(X_t) dW_t + √(Y_t) dW′_t) + (δ + δ′) dt.

The martingale part in the last equation has quadratic variation 4 ∫_0^t (X_s + Y_s) ds = 4 ∫_0^t Z_s ds, from which the conclusion follows.
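Here is a quick numerical illustration of (6) and of Lemma 4.1 (again a sketch that is not part of the notes; it assumes Python with numpy, a crude Euler scheme with the square root clipped at 0, illustrative parameters and seed, and it only compares means at time T, using that E Z_T = z + δT for a BESQ^δ(z) started at z).

import numpy as np

def besq_endpoint(delta, z0, T=1.0, n=2_000, rng=None):
    """Euler scheme for dZ = 2 sqrt(Z) dW + delta dt, Z_0 = z0, clipped at 0; returns Z_T."""
    rng = rng or np.random.default_rng()
    dt = T / n
    z = z0
    for dw in rng.normal(0.0, np.sqrt(dt), n):
        z = max(z + 2.0 * np.sqrt(max(z, 0.0)) * dw + delta * dt, 0.0)
    return z

rng = np.random.default_rng(3)
trials = 500
# Lemma 4.1: a BESQ^1(1.0) plus an independent BESQ^2(0.5) should be a BESQ^3(1.5).
sums = [besq_endpoint(1, 1.0, rng=rng) + besq_endpoint(2, 0.5, rng=rng) for _ in range(trials)]
direct = [besq_endpoint(3, 1.5, rng=rng) for _ in range(trials)]
print(np.mean(sums), np.mean(direct), 1.5 + 3 * 1.0)   # all three should be close to 4.5

Comparing means is of course much weaker than the equality in law asserted by Lemma 4.1; it is only meant as a sanity check on the normalization in (6).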

In fact, the following converse also holds.

Lemma 4.2. Let y, y′, δ, δ′ ≥ 0. If X_t is a BESQ^δ(y) process, Y_t is independent of it, and X_t + Y_t is a BESQ^{δ+δ′}(y + y′) process, then Y_t is a BESQ^{δ′}(y′) process.

Proof. Note first that Y_t is necessarily continuous. Hence, its law is characterized by its finite dimensional distributions. Since it is non-negative, the finite dimensional distributions are characterized by their Laplace transforms. Now, with λ_i ≥ 0, t_i ≥ 0, and Z_t a BESQ^η(z) process, write

ψ_{η,z}(λ_i, t_i, i = 1, ..., k) = E(e^{−Σ_{i=1}^k λ_i Z_{t_i}}).

Then, from independence,

E(e^{−Σ_{i=1}^k λ_i Y_{t_i}}) = ψ_{δ+δ′, y+y′}(λ_i, t_i, i = 1, ..., k) / ψ_{δ, y}(λ_i, t_i, i = 1, ..., k).

In particular, the law of Y_t is determined. Now apply Lemma 4.1 to identify it as BESQ^{δ′}(y′).

Remark 4.3. By the same reasoning, Lemma 4.1 gives immediately (weak) uniqueness for the BESQ^0 process.

We can now finally prove the classical version of the Ray-Knight theorem.

Theorem 4.4 (Classical second Ray-Knight theorem). (L^x(θ_u))_{x≥0} is a BESQ^0(u) process.

Proof. Note that, by Ito's lemma, (W_x + √u)^2 is a BESQ^1(u) process: indeed, d(W_x + √u)^2 = 2(W_x + √u) dW_x + dx, and the martingale part can be written as 2 √((W_x + √u)^2) dW′_x for some Brownian motion W′, so (6) holds with δ = 1. It follows from Theorem 3.1 that L^x(θ_u) + W_x^2 is a BESQ^1(u) process. Noting that W_x^2 is a BESQ^1(0) process, the conclusion follows from Lemma 4.2.

5. Proof of Theorem 1.2

We follow the treatment in the Mörters-Peres book. We will sketch some of the arguments; see Chapter 6 there if you have trouble filling in the details. (Beware that some of the computations there are not quite right; this will be corrected in the forthcoming paperback edition.)

We begin with a decomposition lemma for downcrossings.

Lemma 5.1. Fix a < m < b < c. Let T_c = inf{t : X_t = c}. Let D_ℓ = D(a, m, T_c), D_u = D(m, b, T_c), D = D(a, b, T_c). Then there exist independent sequences of independent random variables X_i and Y_i, independent of D, with the following properties: for i ≥ 1, X_i is Geometric with mean (b − a)/(m − a) and Y_i is Geometric with mean (b − a)(c − m)/((b − m)(c − a)) (both taking values in {1, 2, ...}), and

D_ℓ = X_0 + Σ_{i=1}^{D} X_i,   D_u = Y_0 + Σ_{i=1}^{D} Y_i.
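Before turning to the proof, it may help to record the elementary gambler's-ruin computation behind such geometric laws (this aside is not in the original notes). For a Brownian motion started anywhere in (a, b) with a < b < c, a downcrossing of (a, b) is completed before T_c exactly when, after reaching b (which happens before c, since the path cannot reach c without passing b), the path reaches a before c; by the strong Markov property the successive attempts are independent, and each succeeds with probability P_b(T_a < T_c) = (c − b)/(c − a). Hence

P(D(a, b, T_c) ≥ k) = ((c − b)/(c − a))^k,   k ≥ 0,

so D(a, b, T_c) is geometric, starting at 0, with success probability (b − a)/(c − a), mean (c − b)/(b − a), and second moment of order (c − a)^2/(b − a)^2. This is exactly the fact used for the L^2 bound in the proof of Lemma 5.2 below.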

Proof. The proof is an exercise in the strong Markov property.
(a) Let X_0 = D(a, m; T_down(a, b)), where T_down(a, b) is the time of the first downcrossing of (a, b). Note that it is possible that X_0 = 0.
(b) Each downcrossing of (a, b) has exactly one downcrossing of (a, m) (since a can't be reached twice).
(c) Each upcrossing of (a, b) has a (Geometric − 1) number of downcrossings of (a, m), where the Geometric has mean (b − a)/(m − a) (start at m to see that!).
All these are of course independent of D. Facts (a)-(c) give the statement for D_ℓ. The claim concerning D_u is similar: take Y_0 to be the number of downcrossings of (m, b) after the last downcrossing of (a, b). Note that there are no downcrossings of (m, b) during an upcrossing of (a, b), and that every downcrossing of (a, b) gives a geometric number of downcrossings of (m, b).

We next prove Theorem 1.2 in the special case that t = T_c and c > b. For that we need the following lemma.

Lemma 5.2. Let a_n ↑ 0 and b_n ↓ 0 with a_n < 0 < b_n. Then

S_n := 2(b_n − a_n) D(a_n, b_n, T_c)/(c − a_n)

is a submartingale (with respect to its natural filtration F_n) which is bounded in L^2.

Proof. We use the representation of Lemma 5.1. Assume without loss of generality that either a_n = a_{n+1} or b_n = b_{n+1} (otherwise, augment the sequence). Assume first that a_n = a_{n+1}. Recall that from Lemma 5.1 we have that

D(a_n, b_{n+1}, T_c) = X_0 + Σ_{i=1}^{D(a_n, b_n, T_c)} X_i,

where the X_i are independent of F_n and D(a_n, b_n, T_c) is measurable with respect to F_n. Note also that X_0 ≥ 0 and that E(X_i | F_n) = E X_i = (b_n − a_n)/(b_{n+1} − a_n) for i ≥ 1. We get that

((b_{n+1} − a_n)/(c − a_n)) E(D(a_n, b_{n+1}, T_c) | F_n) ≥ ((b_n − a_n)/(c − a_n)) D(a_n, b_n, T_c),

as needed. The proof for b_n = b_{n+1} is similar, using the representation of D_u in Lemma 5.1. To see the claimed L^2 bound, note that D(a_n, b_n, T_c) is a geometric random variable with parameter (success probability) (b_n − a_n)/(c − a_n), and therefore of second moment at most C(c − a_n)^2/(b_n − a_n)^2. This implies that E S_n^2 ≤ C and completes the proof.

It follows from Lemma 5.2 that L^0(T_c) = lim_n 2(b_n − a_n) D(a_n, b_n, T_c) exists almost surely, and does not depend on the chosen sequence (a_n, b_n) (since one can always interleave sequences and get a contradiction if the limits do not coincide). To complete the proof of Theorem 1.2, we need to consider a fixed t > 0. Choose then c large so that P(T_c < t) is arbitrarily small, and extend the Brownian motion B_s, s ≤ t, by an independent Brownian motion B′ started at B_t. Let L′(T′_c) be the local time at 0 of B′ before T′_c (the hitting time of c by B′), which exists by the first part of the proof, and let L(T_c) denote the local time at 0 up to time T_c of the Brownian motion B concatenated with B′ (both equal a.s. to the limit of the downcrossing counts). Then, it follows from the definitions that L^0(t) = L(T_c) − L′(T′_c), a.s., and Theorem 1.2 follows.

Exercise 1. Use the downcrossing representation to show that, for all γ < 1/2, there exists ε = ε(γ) > 0 so that for h small,

P(L^0(t + h) − L^0(t) > h^γ) ≤ exp(−h^{−ε}).

Using that, extend L^0(t) to all times (almost surely), and show it is Hölder continuous.

Exercise 2. Let T = inf{t : W_t = 1}. Use the downcrossing representation to show that L^0(T) has an exponential distribution.

6. Proof of Theorem 1.3

This is an exercise in the law of large numbers, of the type we mentioned in the main text. First note that if B is a standard Brownian motion starting at 0 and τ_1 = min{t > 0 : B_t = 1}, then

E ∫_0^{τ_1} 1_{B_s ∈ [0,1]} ds = 1.

This can be shown in several ways: either use that removing the negative excursions of a Brownian motion gives a reflected Brownian motion, and so ∫_0^{τ_1} 1_{B_s ∈ [0,1]} ds has the same law as the exit time of B from the interval [−1, 1], which by a martingale argument has expectation 1. Alternatively, if one sets v(x) = E_x ∫_0^{τ_1} 1_{B_s ∈ [0,1]} ds, then

(7)   (1/2) v″(x) = −1_{[0,1]}(x),   x ∈ (−∞, 1),   and v(1) = 0.

One can check that

v(x) = 1 − x^2 for 0 ≤ x ≤ 1,   v(x) = 1 for x < 0.

(As pointed out in class, this last argument is not quite OK, since there is no uniqueness to (7), even if one postulates that v(x) = c for x < 0 and that v(x) is monotone decreasing. To fix that, one can approximate the indicator by a smooth function, solve and take limits. A better option is to note that the expected exit time from [0, 1] when starting at 1/2 is 1/4, and use that to deduce the condition v(1/2) = 1/4 + v(0)/2, which gives the missing boundary condition when solving (7) on [0, 1].)

Note that ∫_0^{τ_1} 1_{B_s ∈ [0,1]} ds has the law of the time spent in [0, 1] during a downcrossing of the Brownian motion from 1 to 0. Now, embed a random walk in the Brownian motion and use the downcrossing representation, together with a law of large numbers and Brownian scaling, to complete the proof. Details are omitted.
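To close, here is a small Monte Carlo sanity check of the number used above (a sketch, not from the notes; it assumes Python with numpy and an illustrative step size and seed, and the discrete step makes the estimate slightly biased upward because exits between grid points are missed): the expected exit time of a standard Brownian motion from [−1, 1], started at 0, equals 1, which is the martingale argument E[τ] = E[B_τ^2] = 1.

import numpy as np

rng = np.random.default_rng(5)
dt, trials = 1e-3, 5_000
x = np.zeros(trials)              # current positions of the paths
t = np.zeros(trials)              # accumulated times
alive = np.ones(trials, dtype=bool)
while alive.any():
    x[alive] += rng.normal(0.0, np.sqrt(dt), alive.sum())
    t[alive] += dt
    alive &= np.abs(x) < 1.0      # freeze paths that have left [-1, 1]
print(t.mean())                   # should be close to 1 = E[exit time from [-1, 1]]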