Random Walks and Brownian Motion, Tel Aviv University, Spring 2011

Instructor: Ron Peled        Lecture 5        Lecture date: Feb 28, 2011        Scribe: Yishai Kohn

In today's lecture we return to the Chung-Fuchs theorem regarding recurrence of general random walks on $\mathbb{R}^d$, and provide a proof for the $\mathbb{Z}^d$ case. We then present a brief review of martingales, mentioning several of their properties; in particular, we discuss the optional sampling theorem, Doob's maximal inequality and the martingale convergence theorem. Towards the end we begin to discuss harmonic functions on graphs, define a Liouville graph, and finish by concluding that $\mathbb{Z}^2$ is Liouville, using the martingale convergence theorem.

Tags for today's lecture: Recurrence criteria for general RWs, Chung-Fuchs theorem, Martingales, Harmonic functions, Liouville graph.

1 Chung-Fuchs Theorem

Theorem (Chung-Fuchs). Let $S_n$ be a RW on $\mathbb{R}^d$.

1. Suppose $d = 1$: if the weak law of large numbers holds in the form $S_n/n \to 0$ in probability, then $S_n$ is neighborhood-recurrent.

2. Suppose $d = 2$: if the central limit theorem holds in the form $S_n/\sqrt{n} \Rightarrow$ a normal distribution, then $S_n$ is neighborhood-recurrent.

3. Suppose $d = 3$: if $S_n$ is not contained in a plane (meaning that the group of possible values of $S_n$ does not lie in a plane), then $S_n$ is neighborhood-transient.
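To get a feel for the dichotomy in the theorem before the proof, here is a minimal simulation sketch (not from the lecture; the function name and parameters are illustrative) estimating the average number of returns to the origin of a simple random walk on $\mathbb{Z}^d$ within a fixed horizon.

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_returns(d, n_steps=20000, n_walks=200):
    """Average number of returns to the origin of a simple random walk
    on Z^d within n_steps steps, estimated over n_walks walks."""
    total = 0
    for _ in range(n_walks):
        # each step changes one uniformly chosen coordinate by +-1
        coords = rng.integers(0, d, size=n_steps)
        signs = rng.choice((-1, 1), size=n_steps)
        steps = np.zeros((n_steps, d), dtype=int)
        steps[np.arange(n_steps), coords] = signs
        walk = np.cumsum(steps, axis=0)
        total += np.count_nonzero(np.all(walk == 0, axis=1))
    return total / n_walks

for d in (1, 2, 3):
    print(d, mean_returns(d))
```

The count grows like $\sqrt{n}$ in $d = 1$ and like $\log n$ in $d = 2$ (recurrence), while in $d = 3$ it stays bounded as the horizon grows (transience), matching parts 1-3 of the theorem.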

Proof. (i) The $d = 1$ case was proved in the previous lecture (under first moment assumptions and for RWs on $\mathbb{Z}$).

(ii) $d = 2$: we will prove the theorem for RWs on $\mathbb{Z}^2$. We need to show that $\sum_n P(S_n = 0) = \infty$ (according to proposition (3) in last week's lecture).

We have, from the assumption,
$$P\left(\frac{|S_n|}{\sqrt{n}} \le b\right) \to \int_{|y| \le b} n(y)\,dy$$
(here $|\cdot|$ refers to the Euclidean norm on $\mathbb{R}^d$, and $n(y)$ is the density of the limiting normal distribution; notice that we can assume $n(y)$ is non-degenerate, otherwise we are back to the 1-dimensional problem).

Remark. A local limit theorem would give $P(S_n = 0) \approx c/n$, which would immediately yield the desired divergence of the sum $\sum_n P(S_n = 0)$.

Lemma. Let
$$G(x, y) = E_x[\#\text{ of visits to } y] = \sum_{n=0}^{\infty} P_x(S_n = y);$$
then
$$G(x, y) = P_x(\text{visit } y)\, G(y, y).$$
Notice that $G(y, y) = G(x, x)$ (by translation invariance).

Proof. Define $T = \min\{n : S_n = y\}$, with $T = \infty$ if $y$ is never visited. Now,
$$G(x, y) = \sum_{n=0}^{\infty} P_x(S_n = y) = \sum_{n=0}^{\infty} \sum_{k=0}^{\infty} P_x(S_n = y, T = k)$$
(notice that $P_x(S_n = y, T = k) = 0$ for $k > n$). Every term in the sum is non-negative, so we can change the order of summation:
$$G(x, y) = \sum_{k=0}^{\infty} \sum_{n=k}^{\infty} P_x(S_n = y, T = k).$$
Using the strong Markov property we get
$$G(x, y) = \sum_{k=0}^{\infty} P_x(T = k) \sum_{m=0}^{\infty} P_y(S_m = y) = \sum_{k=0}^{\infty} P_x(T = k)\, G(y, y) = P_x(\text{visit } y)\, G(y, y).$$

Corollary. Since $P_x(\text{visit } y) \le 1$ it follows that $G(x, y) \le G(y, y) = G(x, x)$, hence for any finite $A \subset \mathbb{Z}^d$,
$$\sum_n P_x(S_n \in A) \le |A|\, G(x, x)$$
(by summing over all $y \in A$).
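To make the local limit remark above concrete: for the simple random walk on $\mathbb{Z}^2$ the return probabilities can be computed exactly and compared with $1/(\pi n)$. This is a small numerical check, not part of the lecture.

```python
from math import comb, pi

def p_return_2d(n):
    """P(S_{2n} = 0) for the simple random walk on Z^2.  Rotating the
    lattice by 45 degrees makes the two coordinates independent 1-d
    simple walks, so P(S_{2n} = 0) = (C(2n, n) / 4^n)^2."""
    return (comb(2 * n, n) / 4 ** n) ** 2

for n in (10, 100, 1000):
    print(n, p_return_2d(n), 1 / (pi * n))
```

The two columns agree to leading order, so $\sum_n P(S_n = 0)$ diverges like a harmonic series; the argument below reaches the same conclusion without assuming a local limit theorem.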

Back to the proof of (ii). Note that for every $m$,
$$(*) \qquad \sum_n P(S_n = 0) = G(0, 0) \ge \frac{c}{m^2} \sum_n P(|S_n| \le m)$$
for some $c > 0$, by the corollary (here $A = \{x : |x| \le m\}$ and $|A| \le m^2/c$).

We can change the sum into an integral:
$$(**) \qquad \frac{1}{m^2} \sum_n P(|S_n| \le m) = \int_0^{\infty} P\big(|S_{\lfloor \theta m^2 \rfloor}| \le m\big)\, d\theta,$$
because when $\frac{n}{m^2} \le \theta < \frac{n+1}{m^2}$ we have $\lfloor \theta m^2 \rfloor = n$, and each such segment of the integral has length $1/m^2$.

From the assumption, for every fixed $\theta > 0$, as $m \to \infty$,
$$P\big(|S_{\lfloor \theta m^2 \rfloor}| \le m\big) = P\left(\frac{|S_{\lfloor \theta m^2 \rfloor}|}{\sqrt{\lfloor \theta m^2 \rfloor}} \le \frac{m}{\sqrt{\lfloor \theta m^2 \rfloor}}\right) \to \int_{|y| \le \theta^{-1/2}} n(y)\, dy,$$
and by Fatou's lemma,
$$\liminf_{m \to \infty} \int_0^{\infty} P\big(|S_{\lfloor \theta m^2 \rfloor}| \le m\big)\, d\theta \ge \int_0^{\infty} \liminf_{m \to \infty} P\big(|S_{\lfloor \theta m^2 \rfloor}| \le m\big)\, d\theta = \int_0^{\infty} \int_{|y| \le \theta^{-1/2}} n(y)\, dy\, d\theta.$$

Now denote $B_\theta := \{x : |x| \le \theta^{-1/2}\}$. Notice that $\int_{B_\theta} n(y)\, dy \approx n(0)\,|B_\theta|$ as $\theta \to \infty$, and $n(0)\,|B_\theta| \ge c/\theta$ for some $c > 0$. Thus, using $(**)$ from above,
$$\liminf_{m \to \infty} \frac{1}{m^2} \sum_n P(|S_n| \le m) \ge \int_C^{\infty} \frac{c}{\theta}\, d\theta = \infty$$
for some $c, C > 0$. Returning to $(*)$ we get $\sum_n P(S_n = 0) = \infty$.
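To make the last two estimates concrete, consider the illustrative special case (an assumption made only for this example) in which the limiting law is standard normal on $\mathbb{R}^2$, i.e. $n(y) = \frac{1}{2\pi} e^{-|y|^2/2}$. In polar coordinates,
$$\int_{|y| \le \theta^{-1/2}} \frac{1}{2\pi} e^{-|y|^2/2}\, dy = \int_0^{\theta^{-1/2}} r\, e^{-r^2/2}\, dr = 1 - e^{-1/(2\theta)} \ge \frac{1}{4\theta} \quad \text{for } \theta \ge 1,$$
so that $\int_1^{\infty} \int_{B_\theta} n(y)\, dy\, d\theta \ge \int_1^{\infty} \frac{d\theta}{4\theta} = \infty$, which is exactly the divergence used above; a general non-degenerate limiting density only changes the constants.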

(iii) Proof outline for the 3-dimensional case, for RWs on $\mathbb{Z}^3$.

Reminder. A RW on $\mathbb{Z}^d$ is recurrent iff
$$\sup_{r < 1} \int_{[-\pi, \pi]^d} \mathrm{Re}\left[\frac{1}{1 - r\varphi(t)}\right] dt = \infty,$$
where $\varphi(t) = E[e^{i t \cdot X}]$ is the characteristic function of a step $X$.

Remark. If $z \in \mathbb{C}$ with $\mathrm{Re}[z] \le 1$, then $\mathrm{Re}\left[\frac{1}{1 - z}\right] \le \frac{1}{\mathrm{Re}[1 - z]}$. (exercise)

Thus,
$$\sup_{r < 1} \int_{[-\pi, \pi]^d} \mathrm{Re}\left[\frac{1}{1 - r\varphi(t)}\right] dt \le \sup_{r < 1} \int_{[-\pi, \pi]^d} \frac{dt}{\mathrm{Re}[1 - r\varphi(t)]}.$$
But for $r < 1$ we have $\mathrm{Re}[1 - r\varphi(t)] \ge \mathrm{Re}[1 - \varphi(t)]$ (at least where $\mathrm{Re}\,\varphi(t) \ge 0$, which is the only region where the integrand can blow up), so the integral on the right hand side is $\infty$ only if
$$\int_{[-\pi, \pi]^d} \frac{dt}{\mathrm{Re}[1 - \varphi(t)]} = \infty.$$
Accordingly, it is sufficient to show that $\int_{[-\pi, \pi]^3} \frac{dt}{\mathrm{Re}[1 - \varphi(t)]} < \infty$ in order to prove transience of the walk.

Notice that if $\varphi(t) = 1 + \Theta(|t|^2)$ as $t \to 0$, then
$$\int_{[-\pi, \pi]^3} \frac{dt}{\mathrm{Re}[1 - \varphi(t)]} = \int_{[-\pi, \pi]^3} \frac{dt}{\Theta(|t|^2)} < \infty,$$
since $|t|^{-2}$ is integrable near the origin in dimension 3. So, for the integral to be $\infty$ (and hence for the RW to be recurrent) we need $\varphi(t) = 1 + o(|t|^2)$ as $t \to 0$.

But for a 1-dimensional RV $Y$ we have (using the Taylor expansion)
$$E[e^{i\lambda Y}] = 1 + i\mu\lambda - \frac{a}{2}\lambda^2 + o(\lambda^2),$$
where $E[Y] = \mu$ and $E[Y^2] = a$. Hence, if $\varphi(t) = E[e^{i t \cdot X}] = 1 + o(|t|^2)$, then by writing $t = |t|\, e_\theta$ (with $e_\theta$ a unit vector in direction $\theta$) we get that $e_\theta \cdot X$ is a 1-dimensional RV with both $E[e_\theta \cdot X] = 0$ and $\mathrm{Var}[e_\theta \cdot X] = 0$, and therefore $e_\theta \cdot X = 0$ (a.s.). Thus, in order to have $\int_{[-\pi, \pi]^3} \frac{dt}{\mathrm{Re}[1 - \varphi(t)]} = \infty$, we must have $e_\theta \cdot X = 0$ a.s. for almost all directions $\theta$. Consequently, a RW on $\mathbb{Z}^3$ is recurrent only if it is contained in a plane, and otherwise it is transient.
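As a numerical sanity check of this criterion (not part of the lecture), one can approximate $\int_{[-\pi,\pi]^d} \frac{dt}{1 - \varphi(t)}$ for the simple random walk, whose step characteristic function is $\varphi(t) = \frac{1}{d}\sum_{i} \cos t_i$; the regularization $r \uparrow 1$ is simply dropped here, which is harmless for a rough check since $\varphi$ is real-valued. A minimal sketch, using a midpoint grid so the singularity at $t = 0$ is never sampled:

```python
import numpy as np

def srw_cf_integral(d, n):
    """Midpoint-rule approximation of the integral of 1/(1 - phi(t)) over
    [-pi, pi]^d, where phi(t) = (cos t_1 + ... + cos t_d) / d is the
    characteristic function of a simple random walk step on Z^d.
    The grid is offset by half a cell so t = 0 is never hit."""
    h = 2 * np.pi / n
    mid = -np.pi + h * (np.arange(n) + 0.5)
    grids = np.meshgrid(*([mid] * d), indexing="ij")
    phi = sum(np.cos(g) for g in grids) / d
    return float(np.sum(1.0 / (1.0 - phi)) * h ** d)

for n in (20, 40, 80):
    print(n, round(srw_cf_integral(2, n), 1), round(srw_cf_integral(3, n), 1))
```

As the resolution $n$ grows, the $d = 2$ column keeps increasing (the integral diverges: recurrence), while the $d = 3$ column stabilizes (the integral is finite: transience).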

2 Martingales

We now give a quick review of martingales in discrete time. A more complete discussion of the subject appears in [1].

2.1 Definitions

A filtration is a sequence of $\sigma$-fields $\{\mathcal{F}_n\}_{n \ge 0}$ s.t. $\mathcal{F}_0 \subseteq \mathcal{F}_1 \subseteq \cdots$. A martingale (with respect to the filtration $\{\mathcal{F}_n\}_{n \ge 0}$) is a sequence $\{M_n\}_{n \ge 0}$ of integrable RVs such that $M_n$ is measurable with respect to $\mathcal{F}_n$ and
$$E[M_m \mid \mathcal{F}_n] = M_n \quad \text{for all } m \ge n.$$
Notice that the last requirement is equivalent to $E[M_{n+1} \mid \mathcal{F}_n] = M_n$ for all $n$.

Some intuition for this definition: we can think of $M_n$ as the fortune of a gambler at time $n$. The last requirement says that the game is fair: the expected fortune at any time equals the gambler's initial fortune (or, conditionally, the fortune at any given time in the past).

A submartingale is defined in the same way, changing the last requirement to $E[M_{n+1} \mid \mathcal{F}_n] \ge M_n$ for all $n$ (a submartingale is a good game to play...). Accordingly, a supermartingale is defined by the requirement $E[M_{n+1} \mid \mathcal{F}_n] \le M_n$ for all $n$ (there is nothing super about a supermartingale).

Remark. If no filtration is given, we take $\mathcal{F}_n = \sigma(M_0, \dots, M_n)$.

Example. A RW whose steps have mean 0 is a martingale (assuming the step $X$ is integrable).

2.2 Optional Stopping

Definition. $T$ is a stopping time with respect to $\{\mathcal{F}_n\}_{n \ge 0}$ if $T$ takes values in $\{0, 1, 2, \dots\} \cup \{\infty\}$ and $\{T \le n\}$ is measurable with respect to $\mathcal{F}_n$ for every $n$.

Intuition: the gambler's decision to stop gambling depends only on the results observed so far.

Proposition. If $M_n$ is a (super/sub)martingale and $T$ a stopping time, then $\{M_{T \wedge n}\}_{n \ge 0}$ is also a (super/sub)martingale. In particular, $E M_{T \wedge n} = E M_0$ for all $n$ (with the corresponding inequality for a super/sub-martingale).

Proof. (exercise)

Example. Let $M_n := S_n$ for a SRW $\{S_n\}_{n \ge 0}$ on $\mathbb{Z}$ starting at 0, and let $T = \min\{n : M_n = 1\}$ be the first hitting time of 1. Notice that $E M_n = E M_0 = 0$, $P(T < \infty) = 1$ (by recurrence), and $E M_T = 1$, so $E M_T \neq E M_0$. However, $E M_{T \wedge n} = 0$ for every $n$, by the above proposition.
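A minimal simulation sketch of this example (illustrative, not from the lecture): for every fixed $n$ the stopped walk $S_{T \wedge n}$ has mean 0, even though most walks have already stopped at the value 1.

```python
import numpy as np

rng = np.random.default_rng(1)

def stopped_srw(n_steps, n_walks=20000):
    """Sample S_{T ^ n} for a SRW on Z started at 0, where T is the
    first hitting time of 1."""
    values = np.zeros(n_walks, dtype=int)
    for i in range(n_walks):
        s = 0
        for _ in range(n_steps):
            s += 1 if rng.random() < 0.5 else -1
            if s == 1:          # stopped: the walk is frozen at 1 from now on
                break
        values[i] = s
    return values

v = stopped_srw(n_steps=1000)
print(v.mean())          # close to 0 = E[M_0], as the proposition predicts
print((v == 1).mean())   # already most of the mass, yet E[M_T] = 1 != 0
```

The mean stays near 0 because the few walks that have not yet hit 1 typically sit at depth of order $-\sqrt{n}$, exactly balancing the many walks frozen at 1.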

When is $E M_T = E M_0$?

Theorem (optional stopping theorem). Let $\{M_n\}_{n \ge 0}$ be a martingale and $T$ a stopping time with $P(T < \infty) = 1$. Then $E M_T = E M_0$ if any of the following holds:

1. $T$ is bounded ($\exists k$ such that $P(T \le k) = 1$).

2. (dominated convergence) There exists a RV $Y$ with $E|Y| < \infty$ and $|M_{T \wedge n}| \le Y$ for all $n$.

3. $E T < \infty$ and the differences $M_n - M_{n-1}$ are uniformly bounded ($|M_n - M_{n-1}| \le k$ for all $n$ a.s., for some constant $k$).

4. $E|M_T| < \infty$ and $\lim_{n \to \infty} E\big[|M_n| \mathbf{1}_{\{T > n\}}\big] = 0$.

5. $\{M_n\}$ is uniformly integrable, i.e. for every $\varepsilon > 0$ there exists $k_\varepsilon$ s.t. $E\big[|M_n| \mathbf{1}_{\{|M_n| \ge k_\varepsilon\}}\big] \le \varepsilon$ for all $n$.

6. There exists $p > 1$ s.t. $\{M_n\}$ is bounded in $L^p$, that is, $\exists k$ with $E|M_n|^p \le k$ for all $n$.

Proof.

1. By the assumption $M_T = M_{T \wedge k}$ (a.s.), so $E M_T = E M_{T \wedge k} = E M_0$.

2. $M_{T \wedge n} \to M_T$ a.s., thus (by dominated convergence) $E M_T = \lim_{n} E M_{T \wedge n} = E M_0$.

3. We have $M_{T \wedge n} - M_0 = \sum_{k=1}^{T \wedge n} (M_k - M_{k-1})$, and the right hand side is bounded in absolute value by $k T$, which is integrable by assumption. Thus, by dominated convergence we obtain
$$E[M_T - M_0] = \lim_{n \to \infty} E[M_{T \wedge n} - M_0] = 0.$$

4. Notice that $M_T = M_{T \wedge n} + (M_T - M_n) \mathbf{1}_{\{T > n\}}$, therefore
$$E M_T = E[M_{T \wedge n}] + E\big[M_T \mathbf{1}_{\{T > n\}}\big] - E\big[M_n \mathbf{1}_{\{T > n\}}\big].$$
Taking $n \to \infty$ and using the second part of the assumption, we are left with
$$E M_T = E M_0 + \lim_{n \to \infty} E\big[M_T \mathbf{1}_{\{T > n\}}\big].$$
The assumption $E|M_T| < \infty$ allows us to use the dominated convergence theorem once again to conclude that $\lim_{n \to \infty} E[M_T \mathbf{1}_{\{T > n\}}] = 0$ (here the assumption $P(T = \infty) = 0$ is necessary), and so $E M_T = E M_0$.

5.-6. We do not prove (5) and (6) here, but mention that once (5) is proved, (6) follows easily, since a martingale bounded in $L^p$ for some $p > 1$ is uniformly integrable. (Exercise)

Remark. For a supermartingale, all of the above holds with the change that $E M_T \le E M_0$.
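A worked example of condition 3 (a standard application, stated here for concreteness): let $S_n$ be a SRW on $\mathbb{Z}$ started at $k \in \{0, \dots, N\}$ and let $T$ be the first time the walk hits $0$ or $N$. Then $E T < \infty$ and the increments are bounded by 1, so the optional stopping theorem gives
$$k = E S_0 = E S_T = N \cdot P(S_T = N) + 0 \cdot P(S_T = 0), \qquad \text{hence} \quad P(\text{hit } N \text{ before } 0) = \frac{k}{N}.$$
By contrast, the example preceding the theorem (hitting 1 with no lower barrier) satisfies none of the conditions 1-6, consistently with $E S_T = 1 \neq 0$.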

2.3 Doob's Maximal Inequality

Theorem (Doob's maximal inequality). If $\{M_n\}_{n \ge 0}$ is a non-negative submartingale, then
$$P\Big(\max_{0 \le j \le n} M_j \ge \lambda\Big) \le \frac{E M_n}{\lambda}.$$

This theorem is often combined with the next proposition to get a stronger result:

Proposition (Jensen). If $\{M_n\}_{n \ge 0}$ is a martingale and $f : \mathbb{R} \to \mathbb{R}$ is convex, then $\{f(M_n)\}_{n \ge 0}$ is a submartingale (assuming additionally that $E|f(M_n)| < \infty$ for all $n$). In particular, we can take $f(x) = |x|^p$ for some $p \ge 1$, or $f(x) = e^{bx}$ for some $b \in \mathbb{R}$.

Given a martingale $\{M_n\}$, we can now use Doob's inequality for the non-negative submartingale $\{f(M_n)\}_{n \ge 0}$ and obtain
$$P\Big(\max_{0 \le j \le n} |M_j| \ge \lambda\Big) \le \frac{E|M_n|^p}{\lambda^p} \quad \text{for all } p \ge 1,$$
or
$$P\Big(\max_{0 \le j \le n} M_j \ge \lambda\Big) \le \frac{E e^{b M_n}}{e^{b\lambda}} \quad \text{for all } b > 0.$$
This way we can replace the original linear bound with a higher polynomial or an exponential bound. Proofs of the above theorems can be found in [1].
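A minimal numerical sketch (illustrative only): for a SRW $\{S_j\}$ started at 0, compare the probability that $\max_{0 \le j \le n} |S_j| \ge \lambda$ with the bound $E[S_n^2]/\lambda^2 = n/\lambda^2$ obtained from Doob's inequality applied to the submartingale $S_j^2$.

```python
import numpy as np

rng = np.random.default_rng(2)

n, lam, n_walks = 400, 50, 20000
steps = rng.choice((-1, 1), size=(n_walks, n))
walks = np.cumsum(steps, axis=1)
max_abs = np.abs(walks).max(axis=1)

empirical = (max_abs >= lam).mean()
doob_bound = n / lam ** 2          # E[S_n^2] = n for a SRW started at 0
print(empirical, doob_bound)       # the empirical probability sits below the bound
```

Choosing $f(x) = x^2$ rather than $f(x) = |x|$ already tightens the tail bound; exponential choices of $f$ tighten it further, as noted above.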

2.4 Martingale Convergence Theorem

To start the (brief) discussion of the martingale convergence theorem, we recall (paraphrasing) a remark from D. Williams' book Probability with Martingales on the significance of this theorem: one marginal symbol signifies something important, two signify something very important, and three mark the martingale convergence theorem.

Theorem. Let $\{M_n\}_{n \ge 0}$ be a supermartingale, and suppose $\sup_n E(M_n^-) < \infty$ (where $M_n^- = \max(-M_n, 0)$). Then there exists a RV $M$ such that $M_n \to M$ a.s.

E.g. a non-negative martingale always converges (a.s.).

Remark. The limit $M$ is integrable, but the convergence is not necessarily in $L^1$. For example, consider $T = \min\{n : S_n = 0\}$, where $S_n$ is a SRW on $\mathbb{Z}$ started at $S_0 = 1$ (so $T$ is the first hitting time of 0). Then the martingale $S_{T \wedge n}$ converges a.s. to the constant $M = 0$, which is integrable, but the convergence is not in $L^1$, since $E S_{T \wedge n} = E S_0 = 1$ for all $n$ (using the fact that $S_{T \wedge n}$ is a martingale), hence $E[S_{T \wedge n} - M] = 1 \not\to 0$ as $n \to \infty$.

An interesting example of the use of the martingale convergence theorem is Polya's urn: an urn contains $b$ blue balls and $r$ red balls. At each time step we draw a ball out and put back $c$ balls of the colour drawn in its place (so the number of balls grows by $c - 1$ at each step). The proportion of red balls (for instance) in the urn is a non-negative martingale, and hence this proportion converges a.s. to a RV (which must take values in $[0, 1]$). In the special case where initially $r = b = 1$ and $c = 2$, it turns out that the limiting RV is uniformly distributed on $[0, 1]$.
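A minimal simulation sketch of Polya's urn in the case $r = b = 1$, $c = 2$ (illustrative, not from the lecture): the empirical distribution of the proportion after many draws is close to uniform.

```python
import numpy as np

rng = np.random.default_rng(3)

def polya_fraction(n_draws=1000):
    """Proportion of red balls after n_draws steps of Polya's urn, started
    with 1 red and 1 blue ball; the drawn ball is replaced by c = 2 balls
    of its colour, i.e. a net gain of one ball per draw."""
    red, total = 1, 2
    for _ in range(n_draws):
        if rng.random() < red / total:
            red += 1
        total += 1
    return red / total

samples = np.array([polya_fraction() for _ in range(2000)])
hist, _ = np.histogram(samples, bins=10, range=(0, 1))
print(hist / len(samples))   # roughly 0.1 per bin: approximately uniform on [0, 1]
```

The martingale property of the proportion is what guarantees a.s. convergence; the identification of the limit as uniform is special to the initial composition $r = b = 1$.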

Theorem (uniformly integrable submartingale). For a submartingale $\{M_n\}_{n \ge 0}$, the following are equivalent:

1. It is uniformly integrable.

2. It converges a.s. and in $L^1$.

3. It converges in $L^1$.

If $\{M_n\}$ is a martingale, these are also equivalent to the existence of an integrable RV $M$ s.t. $M_n = E[M \mid \mathcal{F}_n]$.

Corollary (Levy 0-1 law). Write $\mathcal{F}_n \uparrow \mathcal{F}_\infty$ (that is, $\mathcal{F}_\infty = \sigma(\bigcup_n \mathcal{F}_n)$, the smallest $\sigma$-field containing the union). Then for every integrable RV $X$,
$$E[X \mid \mathcal{F}_n] \to E[X \mid \mathcal{F}_\infty] \quad \text{a.s. and in } L^1.$$
The special case where $X = \mathbf{1}_A$ for $A \in \mathcal{F}_\infty$, in which $P(A \mid \mathcal{F}_n) \to \mathbf{1}_A$, generalizes the Kolmogorov 0-1 law (this last remark is left as food for thought for the reader).

3 Harmonic Functions

From this point on we assume $G$ to be a locally finite graph (that is, $\deg(v) < \infty$ for every $v \in G$).

Definition. $h : G \to \mathbb{R}$ is said to be harmonic if
$$h(x) = \frac{1}{|N(x)|} \sum_{y \in N(x)} h(y),$$
where $N(x) = \{y : y \text{ is a neighbor of } x \text{ in } G\}$. In other words, for every $x$, if we start a SRW $\{Z_n\}_{n \ge 0}$ on $G$ at $x$, then $h(x) = E[h(Z_1)]$. Notice that this means that $\{h(Z_n)\}_{n \ge 0}$ is a martingale for any starting vertex $x$ (observing that $h(Z_n)$ takes only finitely many values for each $n$, and is therefore integrable).

Examples.

1. Constant functions are harmonic.

2. On $\mathbb{Z}^d$, linear functions are harmonic. (exercise)
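A quick numerical check of Example 2 (illustrative; the particular affine function is an arbitrary choice): on $\mathbb{Z}^2$, the value at each vertex equals the average over its four neighbours.

```python
import itertools
import numpy as np

def h(x):
    """An arbitrary affine function on Z^2; by Example 2 it should be harmonic."""
    return 2 * x[0] - 3 * x[1] + 1

neighbors = [(1, 0), (-1, 0), (0, 1), (0, -1)]

for x in itertools.product(range(-3, 4), repeat=2):
    avg = np.mean([h((x[0] + dx, x[1] + dy)) for dx, dy in neighbors])
    assert np.isclose(avg, h(x))   # mean over the 4 neighbours equals h(x)
print("harmonic at every sampled vertex")
```

Of course, a non-constant linear function is unbounded, so this does not contradict the Liouville property discussed next.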

Definition. The space of bounded harmonic functions on $G$ is called the Poisson boundary of $G$. $G$ is called Liouville if it has no non-constant bounded harmonic functions.

Remark. All of the above can be defined with respect to RWs other than the SRW.

Example. Consider $T_2$, the infinite binary tree (for our purposes we define the tree to be infinite both upwards and downwards from the origin, in order to be consistent with the general case of Cayley graphs, which will not be discussed further here). We define an end of the tree to be an infinite simple path from the origin. Then, for a collection of ends $A$, we can define
$$h(x) = P_x(\text{the infinite trajectory beginning at } x \text{ agrees with some end in } A \text{ i.o.}).$$
It is easy to check that $h$ is a bounded harmonic function; indeed,
$$h(x) = P_x(\text{ending in } A) = E\big[P_x(\text{ending in } A \mid Z_1)\big] = E\big[P_{Z_1}(\text{ending in } A)\big] = E[h(Z_1)].$$
But $h$ is non-constant for certain choices of the collection $A$. For instance, if we choose $A$ to be the ends in the bottom half of the tree, then the probability of ending in $A$ is different for vertices in the top half and for vertices in the bottom half: a RW starting in the top half has probability 2/3 of moving up (away from the origin) at every step (until it, possibly, crosses the origin downwards), whereas in the bottom half the 2/3 probability is directed down. Thus, the further up we start the RW (with respect to the origin), the smaller the probability of ending in the bottom half. Hence, $T_2$ is not Liouville.

Remark. $\mathbb{Z}$ is Liouville.

Proof. For $h$ harmonic on $\mathbb{Z}$, the definition implies that $h(x + 1) - h(x) = h(x) - h(x - 1)$ for every $x$. So, if $h$ is non-constant, there exists $y$ s.t. $h(y + 1) - h(y) = a \neq 0$, which means $h$ has a fixed non-zero slope, and thus cannot be bounded.

The question is: what about $\mathbb{Z}^2$? Is it Liouville?

Proposition. On any connected recurrent graph (meaning a graph on which the SRW is recurrent), any non-negative harmonic function is constant.

In particular, such a graph is Liouville (because any bounded harmonic function must be constant: otherwise, by adding a large enough constant, we would obtain a non-negative non-constant harmonic function). E.g. $\mathbb{Z}^2$ is Liouville.

Proof. Fix $x, y \in G$ and let $h$ be a non-negative harmonic function. We need to show that $h(x) = h(y)$. Define $\{Z_n\}_{n \ge 0}$ to be a SRW on $G$ starting at $x$. Then $\{h(Z_n)\}_{n \ge 0}$ is a non-negative martingale, so the martingale convergence theorem applies. Consequently, $h(Z_n) \to H$ a.s. (where $H$ is some RV). We have assumed that $G$ is recurrent, so $P(Z_n = x \text{ i.o.}) = 1$, and hence $h(Z_n)$ must converge to $h(x)$ a.s. But by recurrence it also holds that $P(Z_n = y \text{ i.o.}) = 1$, and similarly $h(Z_n) \to h(y)$ a.s. Therefore $h(x) = h(y)$ and we are done.

In the next lecture we will discuss the question: is $\mathbb{Z}^d$ Liouville for all $d$?

References

[1] Rick Durrett, Probability: Theory and Examples, January 2010. Sections 4.2 (Chung-Fuchs theorem), 5 (martingales) and 6.7 (harmonic functions).