Random Walks and Brownian Motion          Tel Aviv University Spring 20
Instructor: Ron Peled                                         Lecture 5
Lecture date: Feb 28, 20                            Scribe: Yishai Kohn

In today's lecture we return to the Chung-Fuchs theorem regarding recurrence of general random walks on R^d, and provide a proof for the Z^d case. We then move on to present a brief review of martingales, mentioning several of their properties. In particular, we discuss the optional stopping theorem, Doob's maximal inequality and the martingale convergence theorem. Towards the end we begin to discuss harmonic functions on graphs, define a Liouville graph, and finish by concluding that Z^2 is Liouville, using the martingale convergence theorem.

Tags for today's lecture: Recurrence criteria for general RWs, Chung-Fuchs theorem, Martingales, Harmonic functions, Liouville graph.

1 Chung-Fuchs Theorem

Theorem (Chung-Fuchs). Let S_n be a RW on R^d.

1. Suppose d = 1: If the weak law of large numbers holds in the form S_n/n → 0 in probability, then S_n is neighborhood-recurrent.

2. Suppose d = 2: If the central limit theorem holds in the form S_n/√n ⇒ normal distribution, then S_n is neighborhood-recurrent.

3. Suppose d = 3: If S_n is not contained in a plane (meaning that the group of possible values of S_n does not lie in a plane) then S_n is neighborhood-transient.

Proof. (i) The d = 1 case was proved in the previous lecture (under first moment assumptions and for RWs on Z).

(ii) d = 2: We will prove the theorem for RWs on Z^2. We need to show that ∑_n P(S_n = 0) = ∞ (according to proposition (3) in last week's lecture).
We have, from the assumption,
P(‖S_n‖/√n ≤ b) → ∫_{‖y‖≤b} n(y) dy,
where ‖·‖ denotes the Euclidean norm on R^d and n(y) is the limiting normal density. Notice that we can assume n(y) is non-degenerate, otherwise we are back to the 1-dimensional problem.

Remark. A local limit theorem would give P(S_n = 0) ≈ c/n, which would immediately yield the wanted divergence of the sum ∑_n P(S_n = 0).

Lemma. Let
G(x, y) = E_x[# of visits to y] = ∑_{n≥0} P_x(S_n = y),
then
G(x, y) = P_x(visit y) · G(y, y).
Notice that G(y, y) = G(x, x) by translation invariance.

Proof. Define T = min{n ≥ 0 : S_n = y}, or T = ∞ if y is not visited. Now,
G(x, y) = ∑_{n≥0} P_x(S_n = y) = ∑_{n≥0} ∑_{k=0}^{n} P_x(S_n = y, T = k)
(notice that P_x(S_n = y, T = k) = 0 for k > n). Every element in the sum is non-negative, so we can change the order of summation:
G(x, y) = ∑_{k≥0} ∑_{n≥k} P_x(S_n = y, T = k).
Using the strong Markov property we get
G(x, y) = ∑_{k≥0} P_x(T = k) ∑_{m≥0} P_y(S_m = y) = P_x(visit y) · G(y, y).

Corollary. Since P_x(visit y) ≤ 1, it follows that G(x, y) ≤ G(x, x),
hence for any A ⊂ Z^d,
∑_{n≥0} P_x(S_n ∈ A) ≤ |A| · G(x, x)
(by summing over all y ∈ A).

Back to the proof of (ii): Note that for every m,
(∗)  ∑_n P(S_n = 0) = G(0, 0) ≥ (c/m²) ∑_n P(‖S_n‖ ≤ m)
for some c > 0, by the corollary (here A = {x : ‖x‖ ≤ m} and |A| ≤ C·m²). We can change the sum into an integral:
(∗∗)  (1/m²) ∑_n P(‖S_n‖ ≤ m) = ∫_0^∞ P(‖S_⌊θm²⌋‖ ≤ m) dθ,
because when n/m² ≤ θ < (n+1)/m² we have ⌊θm²⌋ = n, and each such segment of the integral has length 1/m². From the assumption,
P(‖S_⌊θm²⌋‖ ≤ m) = P(‖S_⌊θm²⌋‖/√(θm²) ≤ 1/√θ) → ∫_{‖y‖≤θ^{-1/2}} n(y) dy as m → ∞,
and by Fatou's lemma,
liminf_m ∫_0^∞ P(‖S_⌊θm²⌋‖ ≤ m) dθ ≥ ∫_0^∞ liminf_m P(‖S_⌊θm²⌋‖ ≤ m) dθ = ∫_0^∞ ∫_{‖y‖≤θ^{-1/2}} n(y) dy dθ.
Now, denote B_θ := {y : ‖y‖ ≤ θ^{-1/2}}. Notice that ∫_{B_θ} n(y) dy ≈ n(0)·|B_θ| as θ → ∞, and n(0)·|B_θ| ≥ c/θ for some c > 0 (the area of B_θ is π/θ). Thus, using (∗∗) from above,
liminf_m (1/m²) ∑_n P(‖S_n‖ ≤ m) ≥ ∫_C^∞ (c/θ) dθ = ∞
for some c, C > 0. Returning to (∗) we get ∑_n P(S_n = 0) = ∞.

(iii) Proof outline for the 3-dimensional case of RWs on Z^3.
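Before the outline, a quick numerical aside on part (ii). For the simple random walk on Z^2, the c/n decay promised by the local limit theorem (see the remark above) can be checked exactly. The sketch below (not part of the lecture; the function name is ours) uses only the classical identity P(S_{2n} = 0) = (C(2n, n)·2^{-2n})², obtained by rotating the lattice 45 degrees so the 2D walk splits into two independent 1D walks.

```python
from math import comb, pi

def p_return(n):
    """P(S_{2n} = 0) for the simple random walk on Z^2.

    Rotating the lattice by 45 degrees splits the 2D walk into two
    independent 1D walks, giving P(S_{2n} = 0) = (C(2n, n) / 4^n)^2.
    """
    q = comb(2 * n, n) / 4 ** n
    return q * q

# Local-limit behavior: P(S_{2n} = 0) ~ 1/(pi*n), so the partial sums of
# P(S_n = 0) diverge, but only logarithmically -- recurrence, just barely.
print(p_return(1000) * pi * 1000)                # close to 1
print(sum(p_return(n) for n in range(1, 1001)))  # partial Green's function sum
```

The second printed value keeps growing like log(N)/π as the cutoff N increases, which is exactly the divergence the proof establishes.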
Reminder. A RW on Z^d is recurrent iff
sup_{r<1} ∫_{[-π,π]^d} Re[1/(1 - rφ(t))] dt = ∞,
where φ(t) = E[e^{i⟨t,X⟩}] is the characteristic function of X.

Remark. If z ∈ C with Re[z] > 0, then Re[1/z] ≤ 1/Re[z]. (exercise)

Thus,
sup_{r<1} ∫_{[-π,π]^d} Re[1/(1 - rφ(t))] dt ≤ sup_{r<1} ∫_{[-π,π]^d} 1/Re[1 - rφ(t)] dt.
But for r < 1 we have Re[1 - rφ(t)] ≥ Re[1 - φ(t)] when Re[φ(t)] ≥ 0, and Re[1 - rφ(t)] ≥ 1 ≥ (1/2)·Re[1 - φ(t)] otherwise, so the integral on the right hand side is ∞ only if
∫_{[-π,π]^d} 1/Re[1 - φ(t)] dt = ∞.
Accordingly, it is sufficient to show that ∫_{[-π,π]^3} 1/Re[1 - φ(t)] dt < ∞ in order to prove the transience of the walk.

Notice that if Re[1 - φ(t)] = Θ(|t|²) as t → 0, then ∫_{[-π,π]^3} 1/Re[1 - φ(t)] dt < ∞, since |t|^{-2} is integrable near the origin in three dimensions. So, for the integral to be ∞ (and hence for the RW to possibly be recurrent) we need φ(t) = 1 + o(|t|²) as t → 0 along some direction.

But, for a 1-dimensional RV Y we have (using the Taylor expansion)
E[e^{iλY}] = 1 + iμλ - (a/2)λ² + o(λ²),
where μ = E[Y] and a = E[Y²]. Hence, writing t = |t|·e_θ with e_θ a unit vector, if φ(t) = E[e^{i⟨t,X⟩}] = 1 + o(|t|²) as |t| → 0 in the direction e_θ, then ⟨e_θ, X⟩ is a 1-dimensional RV with both E[⟨e_θ, X⟩] = 0 and Var[⟨e_θ, X⟩] = 0, and therefore ⟨e_θ, X⟩ = 0 (a.s.). Thus, in order to have ∫_{[-π,π]^3} 1/Re[1 - φ(t)] dt = ∞, we must have ⟨e_θ, X⟩ = 0 a.s. for some direction e_θ. Consequently, a RW on Z^3 is recurrent only if it is contained in a plane, and otherwise it is transient.

2 Martingales

We now supply a quick review of martingales in discrete time. A more complete discussion of the subject appears in [1].
2.1 Definitions

A filtration is a sequence of σ-fields {F_n}_{n≥0} s.t. F_0 ⊆ F_1 ⊆ ...

A martingale (with respect to the filtration {F_n}_{n≥0}) is a sequence {M_n}_{n≥0} of integrable RVs such that M_n is measurable with respect to F_n, and E[M_m | F_n] = M_n for all m ≥ n. Notice that the last requirement is equivalent (by the tower property) to E[M_{n+1} | F_n] = M_n for all n.

Some intuition for this definition: we can think of M_n as the fortune of a gambler at time n. The last requirement says that the game is fair - the expected fortune at any future time is the same as the gambler's current fortune (or the fortune at any given time in the past).

A submartingale is defined the same way, changing the last requirement to E[M_{n+1} | F_n] ≥ M_n for all n (a submartingale is a good game to play...). Accordingly, a supermartingale is defined with the last requirement E[M_{n+1} | F_n] ≤ M_n for all n (there's nothing super about a supermartingale).

Remark. If no filtration is given, then F_n = σ(M_0, ..., M_n).

Example. A RW whose steps have mean 0 is a martingale (assuming X_1 is integrable).

2.2 Optional Stopping

Definition. T is a stopping time with respect to {F_n}_{n≥0} if T takes values in {0, 1, 2, ...} ∪ {∞} and {T ≤ n} is measurable with respect to F_n for every n.

Intuition: the gambler's decision to stop gambling is a function only of the results seen in the past.

Proposition. If M_n is a (super/sub)martingale and T a stopping time, then {M_{T∧n}}_{n≥0} is also a (super/sub)martingale. In particular, E[M_{T∧n}] = E[M_0] for all n (with the corresponding inequality for a super/sub-martingale).

Proof. (exercise)

Example. Let M_n := S_n for a SRW {S_n}_{n≥0} on Z starting at S_0 = 1, and let T = min{n : M_n = 0} (the first hitting time of 0). Notice that E[M_n] = E[M_0] = 1, P(T < ∞) = 1 (by recurrence), and M_T = 0, so E[M_T] ≠ E[M_0]. However, E[M_{T∧n}] = 1 for every n by the above proposition.
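The example above is easy to see numerically. The following sketch (taking, as an assumption matching the example, a SRW started at 1 with T the first hitting time of 0) estimates E[M_{T∧n}] for n = 200 by Monte Carlo; the mean stays at 1 even though M_T = 0 almost surely.

```python
import random

random.seed(0)

def stopped_position(n_max):
    """Position M_{T ^ n_max} of a SRW started at 1, stopped on hitting 0."""
    m = 1
    for _ in range(n_max):
        if m == 0:       # T has already occurred; the stopped walk stays at 0
            return 0
        m += random.choice((-1, 1))
    return m

# For every fixed n, E[M_{T ^ n}] = E[M_0] = 1, even though M_T = 0 a.s.:
# the stopped martingale keeps mean 1 by placing a small probability on
# increasingly high values as n grows.
trials = 20000
est = sum(stopped_position(200) for _ in range(trials)) / trials
print(est)  # close to 1
```

Running with a larger n_max does not change the answer: the mean is exactly 1 for every n, while the distribution of M_{T∧n} piles up at 0.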
When is E[M_T] = E[M_0]?

Theorem (optional stopping theorem). Let {M_n}_{n≥0} be a martingale, T a stopping time, and suppose P(T < ∞) = 1. Then E[M_T] = E[M_0] if any of the following holds:

1. T is bounded (there exists k with P(T ≤ k) = 1).

2. (dominated convergence) There exists a RV Y with E|Y| < ∞ and |M_{T∧n}| ≤ Y for all n.

3. E[T] < ∞ and the differences M_n - M_{n-1} are uniformly bounded (|M_n - M_{n-1}| ≤ k for all n a.s., for some k).

4. E|M_T| < ∞ and lim_{n→∞} E[|M_n| · 1_{(T>n)}] = 0.

5. {M_n} is uniformly integrable, i.e. for every ε > 0 there exists k_ε s.t. E[|M_n| · 1_{(|M_n|≥k_ε)}] ≤ ε for all n.

6. There exists p > 1 s.t. {M_n} is bounded in L^p, that is, there exists k with E|M_n|^p ≤ k for all n.

Proof.

1. By the assumption M_T = M_{T∧k} (a.s.), so E[M_T] = E[M_{T∧k}] = E[M_0].

2. M_{T∧n} → M_T a.s. as n → ∞, thus (by dominated convergence) E[M_T] = lim_n E[M_{T∧n}] = E[M_0].

3. We have |M_{T∧n} - M_0| = |∑_{k=1}^{T∧n} (M_k - M_{k-1})| ≤ kT, where the right hand side is integrable by assumption. Thus, by dominated convergence we obtain E[M_T - M_0] = lim_n E[M_{T∧n} - M_0] = 0.

4. Notice that M_T = M_{T∧n} + (M_T - M_n) · 1_{(T>n)}, therefore
E[M_T] = E[M_{T∧n}] + E[M_T · 1_{(T>n)}] - E[M_n · 1_{(T>n)}].
Taking n → ∞ and using the second part of the assumption we are left with
E[M_T] = E[M_0] + lim_n E[M_T · 1_{(T>n)}].
The assumption E|M_T| < ∞ allows us to use the dominated convergence theorem once again to conclude that lim_n E[M_T · 1_{(T>n)}] = 0 (here the assumption P(T = ∞) = 0 is necessary), and so E[M_T] = E[M_0].

5.-6. We do not provide proofs for (5) and (6), but mention that once we prove (6), we can obtain (5) easily. (Exercise)

Remark. For a supermartingale, all of the above holds with the slight change that E[M_T] ≤ E[M_0].
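A classical application of condition (3) is the gambler's-ruin computation, which is easy to verify by simulation. The sketch below (the names and parameter values are ours, not from the lecture) runs a SRW from 0 until it exits the interval (-a, b); since E[T] < ∞ and the increments are bounded, optional stopping gives E[S_T] = 0 and hence P(hit b before -a) = a/(a+b).

```python
import random

random.seed(1)

def exit_point(a, b):
    """Run a SRW from 0 until it first hits -a or b; return the exit point."""
    s = 0
    while -a < s < b:
        s += random.choice((-1, 1))
    return s

# Here E[T] < oo and |S_n - S_{n-1}| = 1, so condition (3) of the optional
# stopping theorem gives E[S_T] = E[S_0] = 0.  Writing p = P(S_T = b),
# this reads p*b - (1 - p)*a = 0, i.e. p = a/(a + b).
a, b, trials = 3, 7, 20000
p_hat = sum(exit_point(a, b) == b for _ in range(trials)) / trials
print(p_hat)  # near a/(a + b) = 0.3
```

Note that condition (3) genuinely matters here: for the one-sided stopping time of the previous subsection E[T] = ∞, and the conclusion fails.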
2.3 Doob's Maximal Inequality

Theorem (Doob's maximal inequality). If {M_n}_{n≥0} is a non-negative submartingale, then
P(max_{0≤j≤n} M_j ≥ λ) ≤ E[M_n]/λ.

This theorem is often combined with the next proposition to get a stronger result:

Proposition (Jensen). If {M_n}_{n≥0} is a martingale and f : R → R is convex, then {f(M_n)}_{n≥0} is a submartingale (assuming additionally that E|f(M_n)| < ∞ for all n). In particular, we can take f(x) = |x|^p for some p ≥ 1, or f(x) = e^{bx} for some b ∈ R.

Given a martingale {M_n}, we can now use Doob's inequality for the non-negative submartingale {f(M_n)}_{n≥0}, and obtain:
P(max_{0≤j≤n} |M_j| ≥ λ) ≤ E|M_n|^p / λ^p for p ≥ 1,
or
P(max_{0≤j≤n} M_j ≥ λ) ≤ E[e^{bM_n}] / e^{bλ} for b > 0.
This way we can replace the original linear bound with a higher polynomial or exponential bound. Proofs of the above theorems can be found in [1].

2.4 Martingale Convergence Theorem

To start the (brief) discussion of the martingale convergence theorem, we present the following quote from the book Probability with Martingales by D. Williams, on the significance of this theorem: "► signifies something important. ►► signifies something very important. ►►► is the martingale convergence theorem."

Theorem. Let {M_n}_{n≥0} be a supermartingale, and suppose sup_n E[M_n^-] < ∞ (where M_n^- = max(-M_n, 0)). Then there exists a RV M such that M_n → M a.s.

E.g. a non-negative martingale always converges (a.s.).

Remark. The limit M is integrable, but the convergence is not necessarily in L^1.
For example, consider T = min{n : S_n = -1} where S_n is a SRW started at 0 (T is the first crossing time of -1). Then the martingale S_{T∧n} converges a.s. to the constant M = -1, which is integrable, but the convergence is not in L^1, since E[S_{T∧n}] = E[S_0] = 0 (using the fact that S_{T∧n} is a martingale), hence E[S_{T∧n} - M] = 1 ↛ 0.

An interesting example of the use of the martingale convergence theorem is Polya's urn: an urn contains b blue balls and r red balls. At each time we draw a ball out, then replace it with c balls of the color drawn. The proportion of red balls (for instance) in the urn is a non-negative martingale, and hence this proportion converges a.s. to a RV (which must take values in [0, 1]). In the special case where initially r = b = 1 and c = 2, it turns out that the limiting RV is uniformly distributed on [0, 1].

Theorem (uniformly integrable submartingale). For a submartingale {M_n}_{n≥0}, the following are equivalent:

1. It is uniformly integrable.

2. It converges a.s. and in L^1.

3. It converges in L^1.

If {M_n} is a martingale, then these are also equivalent to:

4. There exists an integrable RV M s.t. M_n = E[M | F_n].

Corollary (Levy's 0-1 law). Denote F_n ↑ F_∞ (that is, F_∞ = σ(∪_n F_n), the smallest σ-field containing the union). Then for every integrable RV X it holds that
E[X | F_n] → E[X | F_∞] a.s. and in L^1.
The special case where X = 1_A for A ∈ F_∞, in which P(A | F_n) → 1_A, generalizes the Kolmogorov 0-1 law (this last remark is left as food for thought for the reader).

3 Harmonic Functions

From this point on we assume G to be a locally finite graph (that is, deg(v) < ∞ for all v ∈ G).

Definition. h : G → R is said to be harmonic if
h(x) = (1/|N(x)|) ∑_{y∈N(x)} h(y),
where N(x) = {y : y is a neighbor of x in G}. In other words, for every x, if we start a SRW {Z_n}_{n≥0} on G beginning at x, then h(x) = E[h(Z_1)]. Notice that this means that {h(Z_n)}_{n≥0} is a martingale for any starting vertex x (acknowledging that h(Z_1) takes only finitely many values and thus is integrable).

Examples.

1. Constant functions are harmonic.

2. On Z^d, linear functions are harmonic. (exercise)

Definition. The space of bounded harmonic functions is called the Poisson boundary of G. G is called Liouville if it has no non-constant bounded harmonic functions.

Remark. All of the above can be defined with respect to RWs other than the SRW.

Example. Consider T_2, the infinite binary tree (for our purposes we define the tree to be infinite both upwards and downwards from the origin, in order to be consistent with the general case of Cayley graphs, which will not be discussed further here). We define an end of the tree to be an infinite simple path from the origin. Then, for a collection of ends A, we can define
h(x) = P_x(the trajectory of a SRW beginning at x agrees with some end in A i.o.).
It is easy to check that h is a bounded harmonic function; indeed,
h(x) = P_x(ending in A) = E[P_x(ending in A | Z_1)] = E[P_{Z_1}(ending in A)] = E[h(Z_1)].
But h is non-constant for certain choices of the collection A. For instance, if we choose A to be the ends in the bottom half of the tree, then the probability of ending in A is different for vertices in the top half and vertices in the bottom half. A RW beginning in the top half has a 2/3 probability of going up at every step (until it, possibly, crosses the origin downwards), whereas in the bottom half the 2/3 probability is directed down. Thus, the further up we begin the RW (with respect to the origin), the smaller the probability to end in the bottom half. Hence, T_2 is not Liouville.

Remark. Z is Liouville.

Proof. For h harmonic on Z, the definition implies that h(x + 1) - h(x) = h(x) - h(x - 1). So, if h is non-constant, there exists a y s.t.
h(y + 1) - h(y) = a ≠ 0, which means h has a fixed slope, and thus cannot be bounded.

The question is: what about Z^2? Is it Liouville?

Proposition. On any connected recurrent graph (meaning a graph on which the SRW is recurrent), any non-negative harmonic function is constant.

In particular, such a graph is Liouville (because any bounded harmonic function must be constant: otherwise, by adding a large enough constant, we would obtain a non-negative non-constant harmonic function). E.g., Z^2 is Liouville.

Proof. Fix x, y ∈ G and let h be a non-negative harmonic function. We need to show that h(x) = h(y). Define {Z_n}_{n≥0} to be a SRW on G starting at x. {h(Z_n)}_{n≥0} is then a non-negative martingale, and so the martingale convergence theorem applies. Consequently, h(Z_n) → H a.s. (where H is some RV). We have assumed G is recurrent, thus P(Z_n = x i.o.) = 1, so h(Z_n) must converge to h(x) a.s. But by recurrence it also holds that P(Z_n = y i.o.) = 1, and similarly h(Z_n) → h(y) a.s. Therefore h(x) = h(y) and we are done.

In the next lecture we will discuss the question: is Z^d Liouville for all d?

References

[1] Rick Durrett, Probability: Theory and Examples, January 200. Sections 4.2 (Chung-Fuchs theorem), 5 (martingales) and 6.7 (harmonic functions).