Harmonic functions on groups


Harmonic functions on groups

DRAFT - updated June 19

Ariel Yadin

Disclaimer: These notes are preliminary, and may contain errors. Please send me any comments or corrections.


Contents

1 Introduction (0 exercises)

2 Background (35 exercises)
  2.1 Spaces of sequences
  2.2 Group actions
  2.3 Discrete group convolutions
  2.4 Measured groups, harmonic functions
  2.5 Bounded, Lipschitz and polynomial growth functions

3 Martingales (32 exercises)
  3.1 Conditional expectation
  3.2 Conditional expectation: more generality
  3.3 Martingales: definition, examples
  3.4 Optional stopping theorem
  3.5 Applications of optional stopping
  3.6 Martingale convergence
  3.7 Bounded harmonic functions

4 Random walks on groups (14 exercises)
  4.1 Markov chains
  4.2 Irreducibility
  4.3 Random walks on groups
  4.4 Stopping Times
  4.5 Excursion Decomposition
  4.6 Recurrence and transience
  4.7 Null recurrence
  4.8 Summary so far
  4.9 Finite index subgroups

5 The Poisson-Furstenberg boundary (58 exercises)
  5.1 The tail and invariant σ-algebras
  5.2 Parabolic and harmonic functions
  5.3 Entropic criterion
  5.4 Triviality of I and T
  5.5 An entropy inequality
  5.6 Coupling and Liouville
  5.7 Speed and entropy
  5.8 Amenability and speed
  5.9 Lamplighter groups
  5.10 Open problems

6 The Milnor-Wolf Theorem (27 exercises)
  6.1 Growth
  6.2 The Milnor trick
  6.3 Nilpotent groups
  6.4 Nilpotent groups and Z-extensions
  6.5 Proof of the Milnor-Wolf Theorem

7 Gromov's Theorem (30 exercises)
  7.1 Classical reductions
  7.2 Isometric actions
  7.3 Harmonic cocycles
  7.4 Diffusivity
  7.5 Ozawa's Theorem
  7.6 Kleiner's Theorem
  7.7 Proof of Gromov's Theorem

A Entropy (0 exercises)
  A.1 Shannon entropy axioms
  A.2 A different perspective on entropy

B Hilbert spaces (11 exercises)
  B.1 Completion
  B.2 Ultraproduct

Chapter 1

Introduction

Number of exercises in lecture: 0
Total number of exercises until here: 0


Chapter 2

Background

2.1 Spaces of sequences

Let G be a countable set. Let us briefly review the formal setup of the canonical probability spaces on G^N, the space of sequences (ω_n)_{n≥0} where ω_n ∈ G for all n ∈ N.

A cylinder set is a set of the form

C(J, ω) = {η ∈ G^N : ∀ j ∈ J, η_j = ω_j},    J ⊂ N, 0 < |J| < ∞, ω ∈ G^N.

Note that η ∈ C(J, ω) if and only if C(J, ω) = C(J, η).

Let X_j : G^N → G be the map X_j(ω) = ω_j, projecting onto the j-th coordinate. Define F = σ(X_0, X_1, X_2, ...) = σ(X_n^{-1}(g) : n ∈ N, g ∈ G).

Exercise 2.1 Show that

F = σ(X_0, X_1, X_2, ...) = σ(C(J, ω) : 0 < |J| < ∞, J ⊂ N, ω ∈ G^N) = σ(C({0, ..., n}, ω) : n ∈ N, ω ∈ G^N).

For times t > s we also use the notation X[s, t] = (X_s, X_{s+1}, ..., X_t). For t ≥ 0 we denote F_t = σ(X_0, ..., X_t).

Exercise 2.2 Show that F_t ⊂ F_{t+1} ⊂ F. (A sequence of σ-algebras with this property is called a filtration.) Conclude that F = σ(∪_t F_t).

The theorems of Carathéodory and Kolmogorov tell us that a probability measure P on (G^N, F) is completely determined by the marginal probabilities P[X_0 = g_0, ..., X_n = g_n] for all n ∈ N and g_0, ..., g_n ∈ G. That is, when G is countable, Kolmogorov's extension theorem implies the following:

Theorem. Let (P_t)_t be a sequence of probability measures, each P_t defined on F_t. Assume that these measures are consistent in the sense that for all t,

P_{t+1}[(X_0, ..., X_t) = (g_0, ..., g_t)] = P_t[(X_0, ..., X_t) = (g_0, ..., g_t)]    (2.1)

for any g_0, ..., g_t ∈ G. Then there exists a unique probability measure P on (G^N, F) such that for any A ∈ F_t we have P(A) = P_t(A).

Exercise 2.3 Let (P_t)_t be a sequence of probability measures, each P_t defined on F_t. Show that (2.1) holds if and only if for any t < s and any A ∈ F_t we have P_s(A) = P_t(A).

Exercise 2.4 Define an equivalence relation on G^N by ω ≈_t ω' if ω_j = ω'_j for all j = 0, 1, ..., t. Show that this is indeed an equivalence relation. We say that an event A respects ≈_t if for any equivalent ω ≈_t ω' we have that ω ∈ A if and only if ω' ∈ A. Show that σ(X_0, X_1, ..., X_t) = {A : A respects ≈_t}.

2.2 Group actions

A (left) group action G ↷ X is a function from G × X to X, (γ, x) ↦ γ.x, that is compatible in the sense that (γη).x = γ.(η.x), and such that 1.x = x for all x ∈ X. We usually write γ.x or γx for the action of γ ∈ G on x ∈ X. A right action is the analogue for (x, γ) ↦ x.γ, with compatibility x.(γη) = (x.γ).η.

Exercise 2.5 Show that any group acts on itself by left multiplication; i.e. G ↷ G by x.y := xy.

Exercise 2.6 Let C^G be the set of all functions from G to C. Show that G ↷ C^G by (x.f)(y) := f(x^{-1} y). (*) Show that f^x(y) := f(y x^{-1}) defines a right action.

Exercise 2.7 Generalise the previous exercise as follows: suppose that G ↷ X. Consider C^X, all functions from X to C. Show that G ↷ C^X by (g.f)(x) := f(g^{-1}.x), for all f ∈ C^X, g ∈ G, x ∈ X. (*) What about a right action of G on C^X?

Exercise 2.8 Let G ↷ X. Let M_1(X) be the set of all probability measures on (X, F), where F is some σ-algebra on X. Suppose that for any g ∈ G the map g : X → X given by g(x) = g.x is a measurable map. (In this case we say that G acts on X by measurable maps.) Show that G ↷ M_1(X) by (g.µ)(A) := µ(g^{-1} A) for A ∈ F, where g^{-1} A := {g^{-1}.x : x ∈ A}.

Exercise 2.9 Let X = {f : G → C : f(1) = 0}. Show that (x.f)(y) := f(x^{-1} y) − f(x^{-1}) defines a left action of G on X.
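The action axioms are easy to sanity-check mechanically. As an informal aside, here is a small Python sketch (my own illustration, not part of the text) verifying the left action of Exercise 2.6 on the cyclic group Z_6, written additively, so that x^{-1} y becomes (y − x) mod 6:

```python
# Sketch of Exercise 2.6 on the cyclic group Z_6 (written additively,
# so the action (x.f)(y) = f(x^{-1} y) becomes f((y - x) mod 6)).
n = 6
elements = range(n)

def act(x, f):
    """Return x.f, where (x.f)(y) = f(x^{-1} y)."""
    return lambda y: f((y - x) % n)

f = lambda y: y * y  # an arbitrary test function on Z_6

# Compatibility: (x + z).f agrees with x.(z.f) pointwise.
for x in elements:
    for z in elements:
        assert all(act((x + z) % n, f)(y) == act(x, act(z, f))(y)
                   for y in elements)

# The identity acts trivially: 0.f = f.
assert all(act(0, f)(y) == f(y) for y in elements)
```

The same few lines adapt to any finite group once multiplication and inversion are supplied.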

2.3 Discrete group convolutions

Throughout this book we will mostly deal with discrete groups, and specifically countable groups. Given a group G, one may define the convolution of functions f, g : G → C by

(f ∗ g)(x) := Σ_y f(y) g(y^{-1} x) = Σ_y f(y) (y.g)(x).

This is the analogue of the usual convolution of functions on the group R:

(f ∗ g)(x) = ∫ f(y) g(x − y) dy.

However, the convolution is not necessarily commutative, unlike the case of Abelian groups.

Exercise 2.10 Show that (f ∗ g)(x) = Σ_y f(x y^{-1}) g(y). Give an example for which f ∗ g ≠ g ∗ f.

Exercise 2.11 (left action and convolutions) Show that x.(f ∗ g) = (x.f) ∗ g.

Exercise 2.12 Let µ be a probability measure on G, and let X be a random element of G with law µ. Show that E[f(x X^{-1})] = (f ∗ µ)(x).

Exercise 2.13 Show that for any p ≥ 1 we have ‖x.f‖_p = ‖f‖_p. (Here ‖f‖_p^p = Σ_x |f(x)|^p and ‖f‖_∞ = sup_x |f(x)|.) Show that ‖f̌‖_p = ‖f‖_p, where f̌(y) = f(y^{-1}).

Exercise 2.14 Prove Young's inequality for products: for all a, b ≥ 0 and any α, β > 0 such that α + β = 1 we have

ab ≤ α a^{1/α} + β b^{1/β}.

Solution to ex:14. If a = 0 or b = 0 there is nothing to prove, so assume that a, b > 0. Consider the random variable X that satisfies P[X = a^{1/α}] = α, P[X = b^{1/β}] = β. Then E[log X] = log a + log b. Also, E[X] = α a^{1/α} + β b^{1/β}. Jensen's inequality tells us that E[log X] ≤ log E[X], which results in

log(ab) = log a + log b ≤ log(α a^{1/α} + β b^{1/β}).

Exercise 2.15 Prove the generalized Hölder inequality: for all p_1, ..., p_n ∈ [1, ∞] such that Σ_{j=1}^n 1/p_j = 1 we have

‖f_1 ⋯ f_n‖_1 ≤ ∏_{j=1}^n ‖f_j‖_{p_j}.

Solution to ex:15. The proof is by induction on n. For n = 1 there is nothing to prove. For n = 2 this is the usual Hölder inequality, which is proved as follows: denote f = f_1, g = f_2, p = p_1, q = p_2, and normalize f̃ = f / ‖f‖_p, g̃ = g / ‖g‖_q. Then,

‖fg‖_1 = ‖f‖_p ‖g‖_q Σ_x |f̃(x) g̃(x)| ≤ ‖f‖_p ‖g‖_q Σ_x ( (1/p) |f̃(x)|^p + (1/q) |g̃(x)|^q ) = ‖f‖_p ‖g‖_q ( (1/p) ‖f̃‖_p^p + (1/q) ‖g̃‖_q^q ) = ‖f‖_p ‖g‖_q,

where the inequality is just Young's inequality for products, ab ≤ α a^{1/α} + β b^{1/β}, valid for all positive a, b, α, β with α + β = 1, applied with α = 1/p, β = 1/q.

Now for the induction step, n > 2. Let q_n be the conjugate exponent of p_n, i.e. 1/p_n + 1/q_n = 1, and let q_j = p_j / q_n for 1 ≤ j < n. Then

Σ_{j=1}^{n−1} 1/q_j = q_n Σ_{j=1}^{n−1} 1/p_j = q_n (1 − 1/p_n) = 1.

By the induction hypothesis (for n = 2 and for n − 1),

‖f_1 ⋯ f_n‖_1 ≤ ‖f_n‖_{p_n} ‖f_1 ⋯ f_{n−1}‖_{q_n} = ‖f_n‖_{p_n} ( ‖ |f_1|^{q_n} ⋯ |f_{n−1}|^{q_n} ‖_1 )^{1/q_n} ≤ ‖f_n‖_{p_n} ( ∏_{j=1}^{n−1} ‖ |f_j|^{q_n} ‖_{q_j} )^{1/q_n} = ‖f_n‖_{p_n} ∏_{j=1}^{n−1} ‖f_j‖_{p_j},

since q_n q_j = p_j, so that ‖ |f_j|^{q_n} ‖_{q_j} = ‖f_j‖_{p_j}^{q_n}.

Exercise 2.16 Prove Young's inequality for convolutions: for any p, q ≥ 1 and 1 ≤ r ≤ ∞ such that 1/p + 1/q = 1/r + 1, we have

‖f ∗ g‖_r ≤ ‖f‖_p ‖g‖_q.

(One may wish to use the generalized Hölder inequality.)

Solution to ex:16. For any x, since

1/r + (r−p)/(pr) + (r−q)/(qr) = 1/r + 1/p − 1/r + 1/q − 1/r = 1/p + 1/q − 1/r = 1,

the generalized Hölder inequality applies with exponents r, pr/(r−p), qr/(r−q) to the functions

f_1(y) = ( |f(y)|^p |g(y^{-1}x)|^q )^{1/r},  f_2(y) = |f(y)|^{(r−p)/r},  f_3(y) = |g(y^{-1}x)|^{(r−q)/r},

giving

|f ∗ g(x)| ≤ Σ_y |f(y) g(y^{-1}x)| = ‖f_1 f_2 f_3‖_1 ≤ ‖f_1‖_r ‖f_2‖_{pr/(r−p)} ‖f_3‖_{qr/(r−q)}.

Now,

‖f_1‖_r = ( Σ_y |f(y)|^p |g(y^{-1}x)|^q )^{1/r},
‖f_2‖_{pr/(r−p)} = ( Σ_y |f(y)|^p )^{(r−p)/(pr)} = ‖f‖_p^{(r−p)/r},
‖f_3‖_{qr/(r−q)} = ( Σ_y |g(y^{-1}x)|^q )^{(r−q)/(qr)} = ‖x.ǧ‖_q^{(r−q)/r} = ‖g‖_q^{(r−q)/r},

using Exercise 2.13 in the last equality. Combining all the above,

‖f ∗ g‖_r^r = Σ_x |f ∗ g(x)|^r ≤ ‖f‖_p^{r−p} ‖g‖_q^{r−q} Σ_{x,y} |f(y)|^p |g(y^{-1}x)|^q = ‖f‖_p^{r−p} ‖g‖_q^{r−q} Σ_y |f(y)|^p ‖g‖_q^q = ‖f‖_p^r ‖g‖_q^r.
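As an informal numerical aside to Exercise 2.10, the following Python sketch (my own, not from the text) computes the convolution (f ∗ g)(x) = Σ_y f(y) g(y^{-1} x) on the symmetric group S_3 and exhibits f ∗ g ≠ g ∗ f for two delta functions at non-commuting transpositions:

```python
# Convolution (f*g)(x) = sum_y f(y) g(y^{-1} x) on S_3 (my illustration).
from itertools import permutations

G = list(permutations(range(3)))            # elements of S_3 as tuples

def mul(a, b):
    """Composition (a b)(i) = a(b(i))."""
    return tuple(a[b[i]] for i in range(3))

def inv(a):
    r = [0, 0, 0]
    for i, ai in enumerate(a):
        r[ai] = i
    return tuple(r)

def conv(f, g):
    return {x: sum(f[y] * g[mul(inv(y), x)] for y in G) for x in G}

# Delta functions at the transpositions (0 1) and (1 2), which do not commute.
s, t = (1, 0, 2), (0, 2, 1)
f = {x: 1.0 if x == s else 0.0 for x in G}
g = {x: 1.0 if x == t else 0.0 for x in G}

# delta_s * delta_t = delta_{st}, and st != ts, so the convolutions differ.
assert conv(f, g)[mul(s, t)] == 1.0
assert conv(g, f)[mul(s, t)] == 0.0
assert conv(f, g) != conv(g, f)
```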

2.4 Measured groups, harmonic functions

Metric and measure structures on a group

Let G = ⟨S⟩ be a finitely generated group, with |S| < ∞ and S = S^{-1}. This induces a metric on G: dist_S(x, y) is the graph distance in the Cayley graph with respect to S; that is, the graph whose vertex set is G and whose edges are defined by the relation x ∼ y ⟺ x^{-1} y ∈ S.

Exercise 2.17 Show that dist_S(x, y) is invariant under the diagonal G-action. That is, dist_S(γx, γy) = dist_S(x, y).

Due to this fact, we may denote |x| = |x|_S := dist_S(1, x), so that dist_S(x, y) = |x^{-1} y|. Balls of radius r in this metric are denoted B(x, r) = B_S(x, r) = {y : dist_S(x, y) ≤ r}.

Definition (SAS measure). Let µ be a probability measure on G.
- µ is adapted (to G) if the support of µ generates G as a semigroup; that is, if any element x ∈ G can be written as a product x = s_1 ⋯ s_k where s_1, ..., s_k ∈ supp(µ).
- µ is symmetric if µ(x) = µ(x^{-1}) for all x ∈ G.
- µ is smooth if for some ε > 0, E_µ[e^{ε|X|}] = Σ_x µ(x) e^{ε|x|} < ∞.
- µ is SAS if µ is symmetric, adapted and smooth.

Exercise 2.18 Show that if µ is smooth with respect to one finite symmetric generating set S, then it is smooth with respect to any finite symmetric generating set.

For a probability measure µ on G we call the pair (G, µ) a measured group. If µ is SAS, we also say that the pair (G, µ) is SAS.

Example. Some examples (figures to be added):
- Z = ⟨1, −1⟩. Z = ⟨1, −1, 2, −2⟩. µ = uniform measure on the generating set.
- Z^d with the standard basis.
- Free groups.
- Heisenberg group: H_3 = ⟨ [1 a 0; 0 1 b; 0 0 1] : a, b ∈ {0, ±1}, |a| + |b| = 1 ⟩.
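The word metric is concrete enough to compute. The sketch below (my own, with Z^2 chosen as the simplest example) finds the balls B(1, r) by breadth-first search in the Cayley graph of Z^2 with S = {±e_1, ±e_2}; the ball sizes grow like a degree-2 polynomial, a theme that returns in the chapters on growth:

```python
# Ball sizes |B(1, r)| in the Cayley graph of Z^2 with S = {±e1, ±e2},
# computed by breadth-first search (my illustration).
from collections import deque

S = [(1, 0), (-1, 0), (0, 1), (0, -1)]      # finite symmetric generating set

def ball_size(r):
    dist = {(0, 0): 0}
    queue = deque([(0, 0)])
    while queue:
        x = queue.popleft()
        if dist[x] == r:
            continue                         # do not expand past radius r
        for s in S:
            y = (x[0] + s[0], x[1] + s[1])
            if y not in dist:
                dist[y] = dist[x] + 1
                queue.append(y)
    return len(dist)

# For Z^2 with the standard generators, |B(1, r)| = 2 r^2 + 2 r + 1.
assert [ball_size(r) for r in range(5)] == [1, 5, 13, 25, 41]
```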

16 16 CHAPTER 2. BACKGROUND Z = 1, 1. Z = 1, 1, 2, 2. µ = uniform measure on generating set. Z d with standard basis. Free groups. Heisenberg group: H 3 = [ 1 a 0 ] 0 1 b a, b {0, ±1}, a + b = Exercise 2.19 where Show that Compute: [A ±1, B ±1 ]. A = [ 1 a 0 ] 0 1 b = B b A a, [ ] B = [ ] Show that where Conclude that H 3 = [ 1 a c ] 0 1 b = C c B b A a, C = [ ] {[ 1 a c ] } 0 1 b a, b, c Z Exercise 2.20 Show that H 3 is not Abelian. Compute [H 3, H 3 ] and show that [H 3, H 3 ] = Z. (Reminder: in a group G the commutator of x, y is defined as [x, y] = x 1 y 1 xy. The commutator of two subsets H, K G is defined [H, K] = [h, k] : h H, k K.)

Random walks

Given a group G with a probability measure µ, define the µ-random walk on G started at x ∈ G as the sequence

X_t = x s_1 s_2 ⋯ s_t,

where (s_j)_j are i.i.d. with law µ.

The probability measure and expectation on G^N (with the canonical cylinder-set σ-algebra) are denoted P_x, E_x. When we omit the subscript x we refer to P = P_1, E = E_1. Note that the law of (X_t)_t under P_x is the same as the law of (x X_t)_t under P.

For a probability measure ν on G we denote P_ν = Σ_x ν(x) P_x, and similarly E_ν = Σ_x ν(x) E_x. More precisely, given some probability measure ν on G, we define P_ν to be the measure obtained by Kolmogorov's extension theorem via the sequence of measures

P_t[(X_0, ..., X_t) = (g_0, ..., g_t)] = ν(g_0) ∏_{j=1}^t µ(g_{j−1}^{-1} g_j).

Exercise 2.21 Show that P_t above indeed defines a probability measure on F_t = σ(X_0, ..., X_t).

Exercise 2.22 Show that the µ-random walk on G is a Markov process with transition matrix P(x, y) = µ(x^{-1} y). Show that the corresponding Laplacian, usually defined as Δ := I − P, and the averaging operator P are given by

P f(x) = (f ∗ µ̌)(x),   Δ f(x) = (f ∗ (δ_1 − µ̌))(x),

where µ̌(y) = µ(y^{-1}).

Exercise 2.23 Show that if P^t is the t-th matrix power of P, then E_x[f(X_t)] = (P^t f)(x).
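As an aside, the definition X_t = x s_1 ⋯ s_t is straightforward to simulate. The sketch below (my own, with G = (Z, +) and µ uniform on {±1} as the simplest SAS example) compares the empirical law of X_3 with the 3-fold convolution power of µ:

```python
# The µ-random walk X_t = x s_1 ... s_t on G = (Z, +) with µ uniform on
# {+1, -1}: the law of X_t started at the identity is the t-fold
# convolution power of µ (my illustration).
import random
from collections import Counter

mu = {1: 0.5, -1: 0.5}

def conv_power(mu, t):
    """Law of s_1 + ... + s_t on Z, i.e. the t-fold convolution of mu."""
    law = {0: 1.0}
    for _ in range(t):
        new = {}
        for g, p in law.items():
            for s, q in mu.items():
                new[g + s] = new.get(g + s, 0.0) + p * q
        law = new
    return law

random.seed(0)
t, trials = 3, 200_000
empirical = Counter()
for _ in range(trials):
    x = 0                                  # start at the identity
    for _ in range(t):
        x += random.choice([1, -1])        # multiply by an increment s ~ mu
    empirical[x] += 1

for g, p in conv_power(mu, t).items():
    assert abs(empirical[g] / trials - p) < 0.01
```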

Exercise 2.24 Let µ be a probability measure on G, and let P(x, y) = µ(x^{-1} y).
- Show that P^t(1, x) = µ^{∗t}(x), where µ^{∗t} is the convolution of µ with itself t times.
- Show that µ is adapted if and only if for every x, y ∈ G there exists t ≥ 0 such that P^t(x, y) > 0. (This property is also called irreducibility.)
- Show that µ is symmetric if and only if P is a symmetric matrix (if and only if µ̌ = µ, where µ̌(y) = µ(y^{-1})).

We will come back to random walks in Chapter 4.

Harmonic functions

Recall that in classical analysis, a function f is harmonic at x if for every small enough ball B(x, ε) around x it satisfies the mean value property:

(1 / |B(x, r)|) ∫_{B(x,r)} f(y) dy = f(x).

Another definition is that Δf(x) = 0, where Δ = Σ_j ∂²/∂x_j² is the Laplace operator.

Definition (harmonic function). Let (G, µ) be a finitely generated group with a SAS measure. A function f : G → C is µ-harmonic (or simply, harmonic) at x ∈ G if

Σ_y µ(y) f(xy) = f(x),

and the above sum converges absolutely. A function is harmonic if it is harmonic at every x ∈ G.

Exercise 2.25 Show that f is harmonic at x if and only if E_µ[f(xX)] = f(x), if and only if Δf(x) = 0.
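To make the definition concrete: for the walk on (Z, +) with µ uniform on {±1}, harmonicity at x is a two-point mean value property. A sketch (my own illustration):

```python
# Harmonicity at a point for the walk on (Z, +) with mu uniform on {+1, -1}:
# f is mu-harmonic at x iff sum_y mu(y) f(x + y) = f(x)  (my illustration).
mu = {1: 0.5, -1: 0.5}

def is_harmonic_at(f, x, tol=1e-12):
    return abs(sum(p * f(x + y) for y, p in mu.items()) - f(x)) < tol

# Affine functions satisfy the mean value property at every point...
assert all(is_harmonic_at(lambda x: 3 * x + 7, x) for x in range(-50, 50))
# ...while f(x) = x^2 fails everywhere: the two-point average gains +1.
assert not any(is_harmonic_at(lambda x: x * x, x) for x in range(-50, 50))
```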

Exercise 2.26 Prove the maximum principle for harmonic functions: if f is harmonic, and there exists x such that f(x) = sup_y f(y), then f is constant.

Exercise 2.27 (L² harmonic functions) Let f ∈ ℓ²(G); that is, Σ_y |f(y)|² < ∞. Prove the following integration by parts identity: for any f, g ∈ ℓ²(G),

Σ_{x,y} P(x, y) ( f(x) − f(y) ) ( ḡ(x) − ḡ(y) ) = 2 ⟨Δf, g⟩.

(The left-hand side above is ⟨∇f, ∇g⟩, appropriately interpreted, hence the name integration by parts; this is also sometimes understood as Green's identity.) Here, as usual, P(x, y) = µ(x^{-1} y) for a symmetric measure µ. Show that any f ∈ ℓ²(G) that is harmonic must be constant.

Solution to ex:27. Compute, using the symmetry of P:

2 ⟨Δf, g⟩ = 2 Σ_x Δf(x) ḡ(x) = 2 Σ_x Σ_y P(x, y) ( f(x) − f(y) ) ḡ(x)
= Σ_{x,y} P(x, y) ( f(x) − f(y) ) ḡ(x) + Σ_{y,x} P(y, x) ( f(y) − f(x) ) ḡ(y)
= Σ_{x,y} P(x, y) ( f(x) − f(y) ) ( ḡ(x) − ḡ(y) ).

We have used that f, g ∈ ℓ², so that the above sums converge absolutely, and so can be summed together.

Thus, if f is ℓ² and harmonic, we have that

Σ_{x,y} P(x, y) | f(x) − f(y) |² = 2 ⟨Δf, f⟩ = 0.

Thus |f(x) − f(y)|² = 0 for all x, y such that P(x, y) > 0. Since P is irreducible (i.e. µ is adapted), this implies that f is constant.

2.5 Bounded, Lipschitz and polynomial growth functions

Bounded functions

Recall that for f ∈ C^G and p > 0 we have

‖f‖_p^p = Σ_x |f(x)|^p,   ‖f‖_∞ = sup_x |f(x)|.

Exercise 2.28 Show that for any f ∈ C^G we have ‖x.f‖_∞ = ‖f‖_∞ and ‖x.f‖_p = ‖f‖_p. Show that ‖f‖_∞ ≤ ‖f‖_p for any p > 0.

We use BHF(G, µ) to denote the set of bounded µ-harmonic functions on G; that is,

BHF(G, µ) = { f : G → C : ‖f‖_∞ < ∞, Δf ≡ 0 }.

The above exercise tells us that this is a G-invariant set; i.e. G.BHF ⊂ BHF.

Exercise 2.29 Show that BHF(G, µ) is a vector space over C.

Any constant function is in BHF(G, µ), so dim BHF(G, µ) ≥ 1. The question of whether BHF(G, µ) consists of more than just the constant functions is an important one, and we will dedicate Chapter 5 to this investigation.

Lipschitz functions

For a group G and a function f ∈ C^G, define the right derivative in direction y, ∂_y f ∈ C^G, by

∂_y f(x) = f(x y^{-1}) − f(x).

Given a finite symmetric generating set S, define the gradient ∇f = ∇_S f : G → C^S by (∇f(x))_s = ∂_s f(x). We define the Lipschitz semi-norm by

‖∇_S f‖_∞ := sup_{s∈S} sup_{x∈G} |∂_s f(x)|.

Definition (Lipschitz function). A function f : G → C is called Lipschitz if ‖∇_S f‖_∞ < ∞.

Exercise 2.30 Show that for any two finite symmetric generating sets S_1, S_2, there exists C > 0 such that ‖∇_{S_1} f‖_∞ ≤ C ‖∇_{S_2} f‖_∞. Conclude that the definition of Lipschitz does not depend on the choice of a specific generating set.

Exercise 2.31 What is the set { f ∈ C^G : ‖∇_S f‖_∞ = 0 }?

We use LHF(G, µ) to denote the set of Lipschitz µ-harmonic functions.

Exercise 2.32 Show that LHF(G, µ) is a G-invariant vector space, by showing that for all x ∈ G we have ‖∇_S (x.f)‖_∞ = ‖∇_S f‖_∞.

Exercise 2.33 (horofunctions) Let G be a finitely generated group with the metric given by some fixed finite symmetric generating set S. Consider the space

L = { h : G → C : ‖∇_S h‖_∞ = 1, h(1) = 0 }.

- Show that L is compact under the topology of pointwise convergence.
- Show that x.h(y) = h(x^{-1} y) − h(x^{-1}) defines a left action of G on L.
- Show that if h is fixed under the G-action (i.e. x.h = h for all x ∈ G), then h is a homomorphism from G into the group (C, +).
- Show that if h is a nonzero homomorphism from G into (C, +), then there exists a unique α > 0 such that αh ∈ L.
- For every x ∈ G let b_x(y) = dist_S(x, y) − dist_S(x, 1) = |x^{-1} y| − |x|. Show that b_x ∈ L for any x ∈ G.
- Prove that the map x ↦ b_x from G into L is an injective map.

Polynomially growing functions

For f ∈ C^G define the k-th degree polynomial semi-norm by

‖f‖_k = ‖f‖_{S,k} := limsup_{r→∞} r^{−k} sup_{|x|≤r} |f(x)|.

Let HF_k(G, µ) = { f ∈ C^G : f is µ-harmonic, ‖f‖_k < ∞ }.

Exercise 2.34 Show that ‖·‖_k is indeed a semi-norm. Show that ‖x.f‖_k = ‖f‖_k. Show that HF_k is a G-invariant vector space.

Exercise 2.35 Show that

C ⊂ BHF ⊂ LHF ⊂ HF_1  and  HF_k ⊂ HF_{k+1},  for all k ≥ 1.

Number of exercises in lecture: 35
Total number of exercises until here: 35

Chapter 3

Martingales

Martingales are a central object in probability. To define them properly we would need to develop the notion of conditional expectation. Since this is not the main purpose of this book, we will start with the more elementary path, working only with random variables that take values in a countable set. Section 3.2 contains a more general treatment, for those interested.

3.1 Conditional expectation

Definition. Let X be a discrete, but not necessarily real-valued, random variable. Let Y be a real-valued random variable with E|Y| < ∞. The conditional expectation of Y conditional on X is defined as the random variable

E[Y|X] := Σ_{x∈R} (E[Y 1_{X=x}] / P[X=x]) 1_{X=x},

where R = {x : P[X = x] > 0}.

Exercise 3.1 Show that:
- E[aY + Z | X] = a E[Y|X] + E[Z|X].
- If X = X' a.s. then E[Y|X] = E[Y|X'].

- If X = c is a.s. constant then E[Y|X] = E[Y].
- If Y ≤ Z then E[Y|X] ≤ E[Z|X].

Proposition. Let X be a discrete, but not necessarily real-valued, random variable. Suppose that Y, Z are real-valued random variables such that E|Y|, E|YZ| < ∞. Suppose that Z is discrete and σ(X)-measurable. Then, E[YZ|X] = Z E[Y|X].

Proof. Let R = {x : P[X = x] > 0}. We want to show that a.s.

Σ_{x∈R} (E[YZ 1_{X=x}] / P[X=x]) 1_{X=x} = Z Σ_{x∈R} (E[Y 1_{X=x}] / P[X=x]) 1_{X=x}.

So it suffices to prove that for all x ∈ R, a.s.

E[YZ 1_{X=x}] 1_{X=x} = E[Y 1_{X=x}] 1_{X=x} Z.

If Z = 1_{X=z} for some z ∈ R then this is obvious. If Z = Σ_{j=1}^k a_j 1_{X=x_j} for some x_j ∈ R and positive a_j, then this follows by linearity of expectation.

Let z be such that P[Z = z] > 0. Since {Z = z} ∈ σ(X), we have {Z = z} = {X ∈ A_z} for some A_z ⊂ R. That is, 1_{Z=z} = Σ_{x∈A_z} 1_{X=x}, so

E[Y 1_{Z=z} 1_{X=x}] 1_{X=x} = E[Y 1_{X=x}] 1_{X=x} 1_{Z=z}.

Now, if Z ≥ 0 and Y ≥ 0, then

E[YZ 1_{X=x}] 1_{X=x} = Σ_z z E[Y 1_{Z=z} 1_{X=x}] 1_{X=x} = E[Y 1_{X=x}] 1_{X=x} Σ_z z 1_{Z=z} = E[Y 1_{X=x}] 1_{X=x} Z,

where the possibly infinite sums converge by monotone convergence. For the general case,

E[YZ 1_{X=x}] 1_{X=x} = E[(Y^+ − Y^−)(Z^+ − Z^−) 1_{X=x}] 1_{X=x}
= ( E[Y^+ Z^+ 1_{X=x}] + E[Y^− Z^− 1_{X=x}] − E[Y^+ Z^− 1_{X=x}] − E[Y^− Z^+ 1_{X=x}] ) 1_{X=x}
= ( E[Y^+ 1_{X=x}] Z^+ + E[Y^− 1_{X=x}] Z^− − E[Y^+ 1_{X=x}] Z^− − E[Y^− 1_{X=x}] Z^+ ) 1_{X=x}
= E[Y 1_{X=x}] 1_{X=x} Z. □

Proposition. Let X be a discrete, but not necessarily real-valued, random variable. Suppose that Y is a real-valued random variable such that E|Y| < ∞. Then, E[ E[Y|X] ] = E[Y].

Proof. If Y ≥ 0 then

E[ E[Y|X] ] = Σ_x E[1_{X=x}] (E[Y 1_{X=x}] / P[X=x]) = Σ_x E[Y 1_{X=x}] = E[Y].

For general Y,

E[ E[Y|X] ] = E[ E[Y^+|X] ] − E[ E[Y^−|X] ] = E[Y^+] − E[Y^−] = E[Y]. □

3.2 Conditional expectation: more generality

Proposition (Existence of conditional expectation). Let (Ω, F, P) be a probability space. Let G ⊂ F be a sub-σ-algebra. Let X : Ω → R be an integrable random variable (i.e. E|X| < ∞). Then, there exists an a.s. unique G-measurable random variable Y such that for all A ∈ G we have E[X 1_A] = E[Y 1_A].

We denote the random variable Y from the proposition above by E[X|G]. It is important to note that this is a random variable and not a number. One may think of it as the best guess for X given the information in G.

Exercise 3.2 Show that if Y satisfies the conditions from the proposition above then Y is integrable. Show that such a Y must be a.s. unique.

Solution to ex:2. Let A = {Y > 0} and B = {Y ≤ 0}. Note that A, B ∈ G. Thus,

E[|Y| 1_A] = E[Y 1_A] = E[X 1_A] ≤ E[|X| 1_A],

and similarly E[|Y| 1_B] = −E[Y 1_B] = −E[X 1_B] ≤ E[|X| 1_B]. Thus, E|Y| ≤ E|X| < ∞, so Y is integrable.

If Y' is another random variable satisfying that Y' is G-measurable and E[Y' 1_A] = E[X 1_A] for all A ∈ G, then E[Y 1_A] = E[Y' 1_A] for all A ∈ G. This is easily shown to be equivalent to Y = Y' a.s.

Proof of the existence of conditional expectation. The proof utilizes a powerful theorem from measure theory: the Radon-Nikodym Theorem. Basically, it states that if µ, ν are σ-finite measures on a measurable space (M, Σ), and if ν ≪ µ (that is, for any A ∈ Σ, if µ(A) = 0 then ν(A) = 0), then there exists a measurable function dν/dµ such that for any ν-integrable function f, the function f · dν/dµ is µ-integrable and

∫ f dν = ∫ f (dν/dµ) dµ.

This is a deep theorem, but from it the existence of conditional expectation is straightforward.

We start with the case where X ≥ 0. Let µ = P on (Ω, F) and define ν(A) = E[X 1_A] for all A ∈ G. One may easily check that ν is a measure on (Ω, G) and that ν ≪ P|_G. Thus, there exists a G-measurable function dν/dP such that for any A ∈ G we have

E[X 1_A] = ν(A) = ∫ 1_A dν = ∫ 1_A (dν/dP) dP.

A G-measurable function is just a random variable measurable with respect to G. So we may take Y = dν/dP, and we have E[X 1_A] = E[Y 1_A] for all A ∈ G.

For a general X (not necessarily non-negative), we may write X = X^+ − X^− with X^± non-negative. One may check that Y := E[X^+|G] − E[X^−|G] has the required properties. □

Exercise 3.3 Complete the details of the proof above.

The uniqueness property above (Exercise 3.2) is a good tool for computing the conditional expectation in many cases; usually one guesses the correct random variable, and verifies the guess by showing that it admits the properties guaranteeing that it is equal to the conditional expectation a.s.

Let us summarize some of the most basic properties of conditional expectation with the following exercises.

Exercise 3.4 Let (Ω, F, P) be a probability space, X an integrable random variable and G ⊂ F a sub-σ-algebra.
- Show that if X is G-measurable then E[X|G] = X a.s.
- Show that if X is independent of G then E[X|G] = E[X] a.s.
- Show that if P[X = c] = 1 then E[X|G] = c a.s.

Solution to ex:4. When X is G-measurable: since for any A ∈ G we trivially have E[X 1_A] = E[X 1_A], we get E[X|G] = X a.s.

If X is independent of G, then for any A ∈ G we have E[X 1_A] = E[X] E[1_A] = E[E[X] 1_A]. Since a constant random variable is measurable with respect to any σ-algebra (and in particular E[X] is G-measurable), we have the second assertion. The third assertion is a direct consequence, since a constant is always independent of G.

Exercise 3.5 Let (Ω, F, P) be a probability space, X an integrable random variable and G ⊂ F a sub-σ-algebra. Show that E[ E[X|G] ] = E[X].

Solution to ex:5. Since Ω ∈ G we have E[X] = E[X 1_Ω] = E[E[X|G] 1_Ω] = E[E[X|G]].

Exercise 3.6 Prove Bayes' formula for conditional probabilities. Let (Ω, F, P) be a probability space, and G ⊂ F a sub-σ-algebra. For A ∈ F define P[A|G] := E[1_A | G]. Show that for any B ∈ G and A ∈ F with P[A] > 0, we have

P[B|A] = E[1_B P[A|G]] / E[P[A|G]].

Exercise 3.7
- Show that conditional expectation is linear; that is, E[aX + Y | G] = a E[X|G] + E[Y|G] a.s.
- Show that if X ≤ Y a.s. then E[X|G] ≤ E[Y|G] a.s.
- Show that if X_n ↗ X a.s., with X_n ≥ 0 for all n a.s. and X integrable, then E[X_n|G] ↗ E[X|G] a.s.

Solution to ex:7. Linearity: let Z = a E[X|G] + E[Y|G], so Z is G-measurable. Also, for any A ∈ G,

E[(aX + Y) 1_A] = a E[X 1_A] + E[Y 1_A] = a E[E[X|G] 1_A] + E[E[Y|G] 1_A] = E[Z 1_A].

Monotonicity: by linearity it suffices to show that if X ≥ 0 a.s. then E[X|G] ≥ 0 a.s. Indeed, for X ≥ 0 a.s., let Y = E[X|G]. We may consider the event {Y < 0} ∈ G, and we have that 0 ≤ E[X 1_{Y<0}] = E[Y 1_{Y<0}] ≤ 0, so Y 1_{Y<0} = 0 a.s. So if we set Z = Y 1_{Y≥0}, we have Y = Z a.s. Now, for any A ∈ G we have E[X 1_A] = E[Y 1_A] = E[Z 1_A], and also Z is G-measurable, so E[X|G] = Y = Z ≥ 0 a.s.

Monotone convergence: write Y_n = E[X_n|G], Y = E[X|G]. By the monotonicity above, Y_n ≤ Y_{n+1} ≤ Y a.s. Let Z = lim_n Y_n, which exists a.s. because the sequence is monotone. Z is G-measurable as a limit of G-measurable random variables. Also, for any A ∈ G we have X_n 1_A ↗ X 1_A and Y_n 1_A ↗ Z 1_A a.s. So by monotone convergence, E[Y_n 1_A] → E[Z 1_A] and E[Y_n 1_A] = E[X_n 1_A] → E[X 1_A]. Hence Z = E[X|G] a.s.

Exercise 3.8 Let (Ω, F, P) be a probability space, X an integrable random variable and G ⊂ F a sub-σ-algebra. Show that if Y is G-measurable and E|XY| < ∞, then E[XY|G] = Y E[X|G] a.s.

Solution to ex:8. Note that Y E[X|G] is a G-measurable random variable, as a product of two such random variables. So we need to show that for any A ∈ G we have E[XY 1_A] = E[E[X|G] Y 1_A]. If Y = 1_B for some B ∈ G then this is immediate. If Y is a simple random variable this follows by linearity. For a non-negative Y we may approximate by simple random variables and use the monotone convergence theorem. For general Y, we may write Y = Y^+ − Y^− where Y^± are non-negative, and use linearity.

Exercise 3.9 Let (Ω, F, P) be a probability space, X an integrable random variable. Suppose that Ω = Â ∪ ⋃_n A_n, where (A_n)_n, Â are pairwise disjoint events, P[Â] = 0 and P[A_n] > 0 for all n (that is, (A_n)_n form an almost-partition of Ω). Let G = σ((A_n)_n). Show that for all n, a.s.

E[X|G] 1_{A_n} = (E[X 1_{A_n}] / P[A_n]) 1_{A_n}.

Conclude the formula:

E[X|G] = Σ_n (E[X 1_{A_n}] / P[A_n]) 1_{A_n}   a.s.

Solution to ex:9. Start with the assumption that X ≥ 0 (if E[X] = 0 then X = 0 a.s. and the claim is trivial, so assume also E[X] > 0). Let Y = Σ_n (E[X 1_{A_n}] / P[A_n]) 1_{A_n}. It is immediate to verify that Y is G-measurable. Also, E[Y] = E[X], so Q(B) := E[X]^{-1} E[Y 1_B] defines a probability measure on (Ω, F). Similarly, P'(B) := E[X]^{-1} E[X 1_B] defines a probability measure on (Ω, F). The system {∅, A_n : n ∈ N} is a π-system. For any n we have

E[X] P'(A_n) = E[X 1_{A_n}] = E[Y 1_{A_n}] = E[X] Q(A_n).

Since P', Q are equal on a π-system generating G, they must be equal on all of G by Dynkin's lemma. That is, for any B ∈ G we have

E[X 1_{A_n} 1_B] = E[X] P'(A_n ∩ B) = E[X] Q(A_n ∩ B) = E[Y 1_{A_n} 1_B],

which implies, since A_n ∈ G, that a.s.

E[X|G] 1_{A_n} = E[X 1_{A_n} | G] = Y 1_{A_n} = (E[X 1_{A_n}] / P[A_n]) 1_{A_n}.

For general integrable X, decompose X = X^+ − X^−.

Exercise 3.10 Show that if Y is a discrete random variable then E[X|Y] = E[X|σ(Y)] a.s. In light of this exercise we define E[X|Y] := E[X|σ(Y)] for any random variable Y.

Exercise 3.11 Prove Chebyshev's inequality for conditional expectation: show that a.s. P[|X| ≥ a | G] ≤ a^{−2} E[X²|G].

Exercise 3.12 Prove Cauchy-Schwarz for conditional expectation: show that a.s.

( E[XY|G] )² ≤ E[X²|G] E[Y²|G].
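For a σ-algebra generated by a finite partition, the formula of Exercise 3.9 is directly computable. The following sketch (my own; the die example is an arbitrary choice) builds E[X|G] atom by atom and checks E[E[X|G]] = E[X] (Exercise 3.5):

```python
# E[X | G] for G generated by a finite partition (Exercise 3.9), on a
# six-point sample space; the die and the partition are my own choices.
from fractions import Fraction as F

omega = range(6)                         # a fair six-sided die
P = {w: F(1, 6) for w in omega}
X = {w: w + 1 for w in omega}            # the face value
partition = [{0, 1}, {2, 3}, {4, 5}]     # atoms A_1, A_2, A_3 generating G

def cond_exp(X, partition):
    Y = {}
    for A in partition:
        pA = sum(P[w] for w in A)
        mean = sum(P[w] * X[w] for w in A) / pA   # E[X 1_A] / P[A]
        for w in A:
            Y[w] = mean                  # E[X|G] is constant on each atom
    return Y

Y = cond_exp(X, partition)
assert Y[0] == Y[1] == F(3, 2)           # average of faces 1 and 2
# Exercise 3.5: E[E[X|G]] = E[X].
assert sum(P[w] * Y[w] for w in omega) == sum(P[w] * X[w] for w in omega)
```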

Proposition (Jensen's inequality). If ϕ is a convex function such that X and ϕ(X) are integrable, then a.s. E[ϕ(X)|G] ≥ ϕ(E[X|G]).

Proof. As in the usual proof of Jensen's inequality, we know that ϕ(x) = sup_{(a,b)∈S} (ax + b), where S = { (a, b) ∈ Q² : ay + b ≤ ϕ(y) for all y }. If (a, b) ∈ S, then monotonicity of conditional expectation gives E[ϕ(X)|G] ≥ a E[X|G] + b a.s. Taking the supremum over (a, b) ∈ S, since S is countable, we have that E[ϕ(X)|G] ≥ ϕ(E[X|G]) a.s. □

Proposition (Tower property). Let (Ω, F, P) be a probability space, X an integrable random variable and H ⊂ G ⊂ F sub-σ-algebras. Then,

E[ E[X|G] | H ] = E[ E[X|H] | G ] = E[X|H]   a.s.

Proof. Note that E[X|H] is H-measurable and thus G-measurable. So, by Exercise 3.4,

E[ E[X|H] | G ] = E[X|H]   a.s.

For the other assertion, since E[X|H] is H-measurable, we only need to show the second defining property. That is, for any A ∈ H, since A ∈ G as well,

E[ E[X|H] 1_A ] = E[ E[X 1_A | H] ] = E[X 1_A] = E[ E[X 1_A | G] ] = E[ E[X|G] 1_A ]. □

3.3 Martingales: definition, examples

Definition (martingale). Let (M_t)_t be a sequence of random variables with discrete distributions. We say that (M_t)_t is a martingale if for every t, E|M_t| < ∞ and

E[M_{t+1} | M_0, ..., M_t] = M_t.

Exercise 3.13 Show that if (M_t)_t is a martingale then E[M_t] = E[M_0].

A stopping time is a random variable T with values in N ∪ {∞}, such that {T ≤ t} ∈ σ(M_0, ..., M_t) for all t.

Example. Some examples:
- Simple random walk on Z is a martingale.
- T = inf{t : X_t ∈ A} is a stopping time.
- E = sup{t : X_t ∈ A} is typically not a stopping time.

Exercise 3.14 Let (Ω, F, P) be a probability space. Let (F_t)_t be a filtration; that is, F_t ⊂ F_{t+1} is a nested sequence of sub-σ-algebras of F. Let (M_t)_t be a sequence of real-valued random variables such that for all t, M_t is measurable with respect to F_t, E|M_t| < ∞, and E[M_{t+1}|F_t] = M_t. Show that (M_t)_t is a martingale.

Solution to ex:14. Note that σ(M_0, ..., M_t) ⊂ F_t. By the tower property, a.s.

E[M_{t+1} | M_0, ..., M_t] = E[ E[M_{t+1}|F_t] | M_0, ..., M_t ] = M_t.

A slight modification of the original definition of a martingale is:

Definition. Let (Ω, F, P) be a probability space. Let (F_t)_t be a filtration; that is, F_t ⊂ F_{t+1} is a nested sequence of sub-σ-algebras of F. Let (M_t)_t be a sequence of real-valued random variables.

The sequence (M_t)_t is said to be a martingale with respect to the filtration (F_t)_t if the following conditions hold: for all t, M_t is measurable with respect to F_t, E|M_t| < ∞, and E[M_{t+1}|F_t] = M_t.

Exercise 3.15 Let (X_t)_t be the simple random walk on Z^d; that is, X_t = Σ_{j=1}^t U_j where the U_j are i.i.d. uniform on the standard basis vectors of Z^d and their inverses.
- Show that M_t = ⟨X_t, v⟩ is a martingale, for any fixed v ∈ R^d.
- Show that M_t = |X_t|² − t is a martingale.

Solution to ex:15. Note that in both cases M_t is measurable with respect to σ(X_t) ⊂ σ(X_0, ..., X_t). In the first case, if e_1, ..., e_d is the standard basis of Z^d, then

E[M_{t+1} | X_0, ..., X_t] = (1/2d) Σ_{j=1}^d ( ⟨X_t + e_j, v⟩ + ⟨X_t − e_j, v⟩ ) = (1/2d) Σ_{j=1}^d 2 ⟨X_t, v⟩ = M_t.

In the second case,

E[ |X_{t+1}|² | X_0, ..., X_t ] = (1/2d) Σ_{j=1}^d ( |X_t + e_j|² + |X_t − e_j|² ) = (1/2d) Σ_{j=1}^d 2 ( |X_t|² + 1 ) = |X_t|² + 1.

Thus, E[M_{t+1} | X_0, ..., X_t] = |X_t|² + 1 − (t+1) = M_t.

Exercise 3.16 Show that if (M_t)_t is a martingale and T is a stopping time, then (M_{T∧t})_t is also a martingale.

Solution to ex:16. Because {T ≤ t} ∈ σ(M_0, ..., M_t) and {T > t} = {T ≤ t}^c ∈ σ(M_0, ..., M_t), we get that

M_{T∧t} = M_t 1_{T>t} + Σ_{j=0}^t M_j 1_{T=j} is measurable with respect to σ(M_0, ..., M_t). Also, since M_{T∧t} = M_0 + Σ_{j=0}^{t−1} 1_{T>j} (M_{j+1} − M_j),

E|M_{T∧t}| ≤ Σ_{j=0}^{t−1} E[ 1_{T>j} |M_{j+1} − M_j| ] + E|M_0| < ∞.

It now suffices to show that

E[M_{T∧(t+1)} | M_0, ..., M_t] = M_{T∧t}.

Indeed, since {T = t} = {T ≤ t} \ {T ≤ t−1} ∈ σ(M_0, ..., M_t), we have that

E[M_{T∧(t+1)} | M_0, ..., M_t] = E[M_{t+1} 1_{T>t} | M_0, ..., M_t] + Σ_{j=0}^t E[M_j 1_{T=j} | M_0, ..., M_t]
= E[M_{t+1} | M_0, ..., M_t] 1_{T>t} + Σ_{j=0}^t M_j 1_{T=j} = M_{T∧t}.

Exercise 3.17 Show that if T, T' are both stopping times then so is T ∧ T'.

The relation between probability and harmonic functions goes via martingales, as the following exercise shows.

Exercise 3.18 Show that f : G → C is µ-harmonic if and only if (f(X_t))_t is a martingale (with respect to the filtration F_t = σ(X_0, ..., X_t)), where (X_t)_t is the random walk on G induced by µ.

Solution to ex:18. If f is µ-harmonic, then the Markov property tells us that

E[f(X_{t+1}) | X_t = x] = Σ_y µ(y) f(xy) = f(x),

so E[f(X_{t+1}) | X_0, ..., X_t] = f(X_t), which is the definition of a martingale.

Now assume that f is such that (f(X_t))_t is a martingale. Since µ is adapted, for any x ∈ G there exists t with P[X_t = x] > 0. So,

f(x) = E[f(X_{t+1}) | X_t = x] = Σ_y µ(y) f(xy),

which implies that f is harmonic at x.
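The martingales of Exercise 3.15 can be verified numerically by propagating the exact law of the walk. A sketch (my own, in dimension d = 2):

```python
# Exact check (my sketch) that M_t = |X_t|^2 - t has constant expectation
# for simple random walk on Z^2, by propagating the full distribution.
steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]

law = {(0, 0): 1.0}
for t in range(1, 6):
    new = {}
    for (x, y), p in law.items():
        for dx, dy in steps:
            z = (x + dx, y + dy)
            new[z] = new.get(z, 0.0) + p / 4
    law = new
    m = sum(p * (x * x + y * y - t) for (x, y), p in law.items())
    assert abs(m) < 1e-9                 # E[M_t] = E[M_0] = 0
```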

3.4 Optional stopping theorem

We have already seen that E[M_t] = E[M_0] for a martingale (M_t)_t. We would like to conclude that this also holds for random times. However, this is not true in general.

Exercise 3.19 Give an example of a martingale (M_t)_t and a random time T such that E[M_T] ≠ E[M_0]. Can you find such an example where T is a stopping time?

Solution to ex:19. Let M_t = Σ_{j=1}^t X_j where the (X_j)_j are i.i.d. with distribution P[X_j = 1] = P[X_j = −1] = 1/2. Let T = inf{t : M_t = 1}. It is simple to check that (M_t)_t is a martingale and that T is a stopping time (which is a.s. finite). However, E[M_T] = 1 by definition, while M_0 = 0.

In contrast to the general case, uniform integrability is a condition under which E[M_T] = E[M_0] does hold for stopping times.

Definition (Uniform integrability). Let (X_α)_{α∈I} be a collection of random variables. We say that the collection (X_α)_α is uniformly integrable if

lim_{K→∞} sup_α E[ |X_α| 1_{|X_α|>K} ] = 0.

Exercise 3.20 Show that if E|X| < ∞ then the collection X_α := X is uniformly integrable.

Exercise 3.21 Show that if (X_α)_α is uniformly integrable then sup_α E|X_α| < ∞.

Exercise 3.22 Show that if for some ε > 0 we have sup_α E|X_α|^{1+ε} < ∞, then (X_α)_α is uniformly integrable.

The following is not the strongest form of the optional stopping theorem that it is possible to prove.

Theorem (optional stopping theorem). Let (M_t)_t be a martingale

35 3.4. OPTIONAL STOPPING THEOREM 35 and T a stopping time. We have that E M T < and E[M T ] = E[M 0 ] if one of the following holds: The stopping time T is a.s. bounded; i.e. there exists t 0 such that T t a.s. T < a.s. and (M t ) t is a.s. uniformly bounded; i.e. there exists m such that for all t we have M t m a.s. T < a.s. and (M t ) t is uniformly integrable and E M T <. Proof. For the first case, if T t a.s. then t 1 E[M T ] = E 1 {T >j} (M j+1 M j ) + E[M 0 ] j=0 Since {T > j} = {T j} c σ(m 0,..., M j ), we get that E[(M j+1 M j )1 {T >j} ] = E E[(M j+1 M j )1 {T >j} M 0,..., M j ] So E[M T ] = E[M 0 ] in this case. = E [ 1 {T >j} E[M j+1 M j M 0,..., M j ] ] = 0. In the final case, since E M T < then E[ M T 1 { MT >K}] 0 as K. Now, (M t ) t is uniformly integrable so sup t E[ M t 1 { Mt >K}] 0 as K. Thus, E[ M T t 1 { MT t >K}] E[ M T 1 { MT >K}1 {T t} ] + E[ M t 1 { Mt >K}1 {T >t} ] E[ M T 1 { MT >K}] + E[ M t 1 { Mt >K}], so sup t E[ M T t 1 { MT t >K}] 0 as K. Let Note that ϕ K (x) x x 1 { x >K}. K if x > K, ϕ K (x) = x if x K, K if x < K. Since M T t M T a.s. as t (because we assumed that T < a.s.), also ϕ K (M T t ) ϕ K (M T ) a.s. as t. Since ϕ K (M T t ), ϕ K (M T ) are

36 36 CHAPTER 3. MARTINGALES uniformly bounded by K, we can apply dominated convergence to obtain that lim t E ϕ K(M T t ) ϕ K (M T ) = 0. Thus, E M T M T t E ϕ K (M T ) M T + E ϕ K (M T t ) M T t + E ϕ K (M T t ) ϕ K (M T ) E[ M T 1 { MT >K}] + sup E[ M T t 1 { MT t >K}] + E ϕ K (M T t ) ϕ K (M T ). t Taking t and then K we get that E M T M T t 0. Since T t is an a.s. bounded stopping time, E[M T t ] = E[M 0 ]. In conclusion, E[M T M 0 ] = E[M T M T t ] 0. The second case is left as an exercise. Exercise 3.23 Show that if a stopping time T is a.s. finite and a martingale (M t ) t is a.s. uniformly bounded then E[M T ] = E[M 0 ]. Exercise 3.24 Show that for a martingale (M t ) t and for any a.s. finite stopping time T, we have E M T t E M t. Solution to ex:24. :( Set X t := M t M T t. Using Jensen s inequality we have that a.s. E[ M t+1 Mt,..., M 0 ] E[Mt+1 Mt,..., M 0 ] = Mt. (That is, ( M t ) t is a sub-martingale.) Note that M T (t+1) M T t = ( M t+1 M t )1 {T >t}, so X t+1 X t = ( M t+1 M t )1 {T t}. Thus, E[X t+1 X t M 0,..., M t] = E [ 1 {T t} E[ M t+1 M t M 0,..., M t] ] 0. (That is, (X t) t is also a sub-martingale.) Taking expectation we get that E[X t] E[X t 1 ] E[X 0 ] = E[M 0 M T 0 ] = 0. Hence, E M t E M T t as required. :)

Exercise 3.25 A special case of the Martingale Convergence Theorem states that if $(M_t)_t$ is a martingale with $\sup_t \mathbb{E}|M_t| < \infty$, then there exists a random variable $M_\infty$ such that $M_t \to M_\infty$ a.s. and $\mathbb{E}|M_\infty| < \infty$. Use this to show that if $(M_t)_t$ is a uniformly integrable martingale and $T$ is an a.s. finite stopping time, then $\mathbb{E}|M_T| < \infty$ (so this last condition is redundant in the OST).

Solution to ex:25. Since $\sup_t \mathbb{E}|M_{T \wedge t}| \leq \sup_t \mathbb{E}|M_t| < \infty$, we have that $M_{T \wedge t} \to M_\infty$ a.s. for some integrable $M_\infty$. But $M_{T \wedge t} \to M_T$ a.s. as well, which implies that $M_T = M_\infty$ a.s., so $M_T$ is integrable.

3.5 Applications of optional stopping

Let us give some applications of the optional stopping theorem to the study of random walks on $\mathbb{Z}$. We consider $\mathbb{Z} = \langle 1, -1 \rangle$; this is the usual graph on $\mathbb{Z}$, with neighbors given by adjacent integers. We take the measure $\mu = \frac{1}{2}(\delta_1 + \delta_{-1})$, that is, uniform on $\{-1, 1\}$. Thus, the $\mu$-random walk $(X_t)_t$ can be represented as $X_t = \sum_{j=1}^{t} s_j$, where $(s_j)_j$ are i.i.d. and $\mathbb{P}[s_j = 1] = \mathbb{P}[s_j = -1] = \frac{1}{2}$.

First, it is simple to see that $(X_t)_t$ is a martingale. Now, let $T_z := \inf\{t : X_t = z\}$. Note that $T_z$ is a stopping time. Also, for $a < 0 < b$ we have the stopping time $T_{a,b} := T_a \wedge T_b$, which is the first exit time of $(a, b)$.

Now, the martingale $(M_t = X_{T_{a,b} \wedge t})_t$ is a.s. uniformly bounded (by $|a| \vee b$). It is left as an exercise to the reader to show that $T_{a,b} < \infty$ $\mathbb{P}_0$-a.s. Thus,
$$0 = \mathbb{E}[M_{T_{a,b}}] = \mathbb{P}[T_a < T_b] \cdot a + \mathbb{P}[T_b < T_a] \cdot b = \mathbb{P}[T_a < T_b] (a - b) + b.$$
We deduce that
$$\mathbb{P}_0[T_a < T_b] = \frac{b}{b - a}.$$

Now, note that $(x + X_t)_t$ has the distribution of a random walk started at $x$. Thus, for all $n > x > 0$,
$$\mathbb{P}_x[T_0 < T_n] = \mathbb{P}_0[T_{-x} < T_{n-x}] = \frac{n - x}{n} = 1 - \frac{x}{n}.$$
This is the probability that a gambler starting with $x$ dollars goes bankrupt before reaching $n$ dollars in wealth.

One of the extraordinary (albeit classical) facts about the random walk on $\mathbb{Z}$ is now obtained by taking $n \to \infty$:
$$\mathbb{P}_x[T_0 = \infty] = \lim_{n \to \infty} \mathbb{P}_x[T_0 > T_n] = 0.$$
That is, no matter how much money you have entering the casino, you always eventually reach $0$! (And this is in the case of a fair game...) In other words, the random walk on $\mathbb{Z}$ is recurrent: it reaches $0$ a.s.

But how long does it take to reach $0$? Note that since the random walk takes steps of size $1$, for $n > x > 0$, under $\mathbb{P}_x$, the event $\{T_0 > T_n\}$ implies that $T_0 \geq 2n - x$ (the walk must travel from $x$ up to $n$ and then back down to $0$). Thus,
$$\mathbb{E}_x[T_0] = \sum_{n=0}^{\infty} \mathbb{P}_x[T_0 > n] \geq \sum_{n > x} \mathbb{P}_x[T_0 \geq 2n - x] \geq \sum_{n > x} \mathbb{P}_x[T_0 > T_n] = \sum_{n > x} \frac{x}{n} = \infty.$$
So $\mathbb{E}_x[T_0] = \infty$. Indeed the walker reaches $0$ a.s., but the expected time to do so is infinite. (The random walk on $\mathbb{Z}$ is null-recurrent.)

Let us consider a different martingale:
$$\mathbb{E}[X_{t+1}^2 \mid X_t] = \tfrac{1}{2}(X_t + 1)^2 + \tfrac{1}{2}(X_t - 1)^2 = X_t^2 + 1.$$
So $(M_t := X_t^2 - t)_t$ is a martingale. If we apply the OST we get
$$1 = \mathbb{E}_1[M_{T_0}] = \mathbb{E}_1[X_{T_0}^2] - \mathbb{E}_1[T_0] = -\mathbb{E}_1[T_0].$$
So $\mathbb{E}_1[T_0] = -1$, a contradiction! The reason is that we applied the OST in a case where we could not, since $(M_t)_t$ is not necessarily bounded.
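The gambler's ruin formula above is easy to check by simulation. The following is a sketch (not from the notes; the name `ruin_probability` is ours): it estimates $\mathbb{P}_x[T_0 < T_n]$ for the simple random walk started at $x$ and compares it with $1 - x/n$.

```python
import random

def ruin_probability(x, n, trials=100_000, rng=None):
    """Monte Carlo estimate of P_x[T_0 < T_n] for simple random walk on Z."""
    rng = rng or random.Random(1)
    ruined = 0
    for _ in range(trials):
        pos = x
        while 0 < pos < n:
            pos += rng.choice((-1, 1))
        ruined += (pos == 0)
    return ruined / trials

for x, n in [(1, 5), (3, 10)]:
    est = ruin_probability(x, n)
    print(x, n, est, 1 - x / n)  # estimate vs. exact value 1 - x/n
```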

Exercise 3.26 Show that for $a < 0 < b$, we have that $\mathbb{P}_0[T_{a,b} < \infty] = 1$. In fact, strengthen this to show that for all $a < 0 < b$ there exists a constant $c = c(a,b) > 0$ such that for all $t$ and any $a < x < b$,
$$\mathbb{P}_x[T_{a,b} > t] \leq e^{-ct}.$$
Conclude that $\mathbb{E}_x[T_{a,b}] < \infty$ for all $a < x < b$.

One may note that for $-n < x < n$, under $\mathbb{P}_x$, the martingale $(M_{t \wedge T_{-n,n}})_t$ admits
$$|M_{t \wedge T_{-n,n}}| \leq |X_{T_{-n,n}}^2 - T_{-n,n}| + |X_t^2 - t| \mathbf{1}_{\{T_{-n,n} > t\}} \leq 2n^2 + T_{-n,n} + t \mathbf{1}_{\{T_{-n,n} > t\}}.$$
Thus, (using $(a+b)^2 \leq 2a^2 + 2b^2$)
$$\mathbb{E}\big[ |M_{t \wedge T_{-n,n}}|^2 \big] \leq 2 \, \mathbb{E}\big[ (2n^2 + T_{-n,n})^2 \big] + 2t^2 \, \mathbb{P}[T_{-n,n} > t] \leq 2 \, \mathbb{E}\big[ (2n^2 + T_{-n,n})^2 \big] + 2t^2 e^{-ct},$$
for some $c = c(n) > 0$. This implies that $\sup_t \mathbb{E}[|M_{t \wedge T_{-n,n}}|^2] < \infty$, so $(M_{t \wedge T_{-n,n}})_t$ is a uniformly integrable martingale. Given this, we may apply the OST to get that
$$0 = \mathbb{E}_0[M_{T_{-n,n}}] = \mathbb{E}_0[X_{T_{-n,n}}^2] - \mathbb{E}_0[T_{-n,n}],$$
so $\mathbb{E}_0[T_{-n,n}] = n^2$. (This means that the random walk on $\mathbb{Z}$ is diffusive.)

Finally, we can understand even more. If $0 < x < n$, then
$$\mathbb{E}_x[X_{T_{0,n}}^2] = \mathbb{P}_x[T_0 > T_n] \cdot n^2 + \mathbb{P}_x[T_0 < T_n] \cdot 0^2 = \frac{x}{n} \cdot n^2 = xn.$$
Again by applying the OST,
$$x^2 = \mathbb{E}_x[X_{T_{0,n}}^2] - \mathbb{E}_x[T_{0,n}] = xn - \mathbb{E}_x[T_{0,n}].$$
So $\mathbb{E}_x[T_{0,n}] = x(n - x)$.
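The identity $\mathbb{E}_0[T_{-n,n}] = n^2$ is also easy to see numerically. A sketch (not from the notes; the name `mean_exit_time` is ours): estimate the expected exit time of $(-n, n)$ for the walk started at $0$ and compare with $n^2$.

```python
import random

def mean_exit_time(n, trials=20_000, rng=None):
    """Monte Carlo estimate of E_0[T_{-n,n}] for simple random walk on Z."""
    rng = rng or random.Random(7)
    total = 0
    for _ in range(trials):
        pos, t = 0, 0
        while -n < pos < n:
            pos += rng.choice((-1, 1))
            t += 1
        total += t
    return total / trials

for n in (2, 5):
    print(n, mean_exit_time(n), n * n)  # estimate vs. exact value n^2
```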

Definition A sub-martingale is a process $(M_t)_t$ such that $\mathbb{E}|M_t| < \infty$ and $\mathbb{E}[M_{t+1} \mid M_0, \ldots, M_t] \geq M_t$ for all $t$.

A super-martingale is a process $(M_t)_t$ such that $\mathbb{E}|M_t| < \infty$ and $\mathbb{E}[M_{t+1} \mid M_0, \ldots, M_t] \leq M_t$ for all $t$.

A process $(H_t)_t$ is called predictable (with respect to $(M_t)_t$) if $H_t \in \sigma(M_0, \ldots, M_{t-1})$ for all $t$.

Of course any martingale is both a sub-martingale and a super-martingale.

Exercise 3.27 Show that if $(M_t)_t$ is a sub-martingale then so is $X_t := (M_t - a) \mathbf{1}_{\{M_t > a\}}$.

Solution to ex:27. The function $\varphi(x) = x \mathbf{1}_{\{x > 0\}}$ is convex and non-decreasing, so by Jensen's inequality
$$\mathbb{E}[\varphi(M_{t+1} - a) \mid M_0, \ldots, M_t] \geq \varphi(\mathbb{E}[M_{t+1} - a \mid M_0, \ldots, M_t]) \geq \varphi(M_t - a).$$

Exercise 3.28 Show that if $(M_t)_t$ is a sub-martingale (respectively, super-martingale) and $(H_t)_t$ is a non-negative predictable process, then $(H \cdot M)_t := \sum_{s=1}^{t} H_s (M_s - M_{s-1})$ is a sub-martingale (respectively, super-martingale). Show that when $(M_t)_t$ is a martingale and $(H_t)_t$ is predictable but not necessarily non-negative, then $(H \cdot M)_t$ is a martingale.

Solution to ex:28. We write a solution only for the sub-martingale case, since all cases are very similar.
$$\mathbb{E}[(H \cdot M)_{t+1} - (H \cdot M)_t \mid M_0, \ldots, M_t] = H_{t+1} \, \mathbb{E}[M_{t+1} - M_t \mid M_0, \ldots, M_t] \geq 0.$$
We have used the fact that $H_{t+1} \in \sigma(M_0, \ldots, M_t)$.

Exercise 3.29 Show that if $(M_t)_t$ is a sub-martingale and $T$ is a stopping time then $(M_{T \wedge t})_t$ is a sub-martingale.

Solution to ex:29. Let $H_t = \mathbf{1}_{\{T \geq t\}}$, which is a predictable process. Then $M_{T \wedge t} = M_0 + \sum_{s=1}^{t} H_s (M_s - M_{s-1})$, which is a sub-martingale by the previous exercise.

Lemma (Upcrossing Lemma) Let $(M_t)_t$ be a sub-martingale. Fix $a < b \in \mathbb{R}$ and let $U_t$ be the number of upcrossings of the interval $(a, b)$ up to time $t$; precisely, define $N_0 = -1$ and inductively
$$N_{2k-1} = \inf\{t > N_{2k-2} : M_t \leq a\}, \qquad N_{2k} = \inf\{t > N_{2k-1} : M_t \geq b\}.$$
Set $U_t = \sup\{k : N_{2k} \leq t\}$. Then,
$$(b - a) \, \mathbb{E}[U_t] \leq \mathbb{E}\big[ (M_t - a) \mathbf{1}_{\{M_t > a\}} \big] - \mathbb{E}\big[ (M_0 - a) \mathbf{1}_{\{M_0 > a\}} \big].$$

Proof. Define $X_t = a + (M_t - a) \mathbf{1}_{\{M_t > a\}}$. Set $H_t = \mathbf{1}_{\{\exists\, k \,:\, N_{2k-1} < t \leq N_{2k}\}}$. Note that $H_t \in \sigma(X_0, \ldots, X_{t-1})$, since $H_t = 1$ if and only if $N_{2k-1} \leq t - 1$ and $N_{2k} > t - 1$ for some $k$.

Now, the number of upcrossings of $(a, b)$ by $(X_t)_t$ is equal to $U_t$. Also,
$$(b - a) U_t \leq \sum_{s=1}^{t} H_s (X_s - X_{s-1}).$$
$(X_t)_t$ is a sub-martingale by the exercises. By another exercise, since $H_s \in [0, 1]$, both $A_t := \sum_{s=1}^{t} H_s (X_s - X_{s-1})$ and $B_t := \sum_{s=1}^{t} (1 - H_s)(X_s - X_{s-1})$ are sub-martingales. Specifically, $\mathbb{E}[B_t] \geq \mathbb{E}[B_0] = 0$. We have that
$$(b - a) \, \mathbb{E}[U_t] \leq \mathbb{E}[A_t] \leq \mathbb{E}[A_t + B_t] = \mathbb{E}[X_t - X_0].$$
This is the required form.

Theorem (Martingale Convergence Theorem) Let $(M_t)_t$ be a sub-martingale such that $\sup_t \mathbb{E}[M_t \mathbf{1}_{\{M_t > 0\}}] < \infty$. Then there exists a random variable $M_\infty$ such that $M_t \to M_\infty$ a.s. and $\mathbb{E}|M_\infty| < \infty$.

Proof. Since $(M_t - a) \mathbf{1}_{\{M_t > a\}} \leq M_t \mathbf{1}_{\{M_t > 0\}} + |a|$, we have by the Upcrossing Lemma that
$$(b - a) \, \mathbb{E}[U_t] \leq \mathbb{E}[M_t \mathbf{1}_{\{M_t > 0\}}] + |a|,$$
where $U_t$ is the number of upcrossings of the interval $(a, b)$. Let $U = U^{(a,b)} = \lim_{t \to \infty} U_t$ be the total number of upcrossings of $(a, b)$. By Fatou's Lemma,
$$\mathbb{E}[U] \leq \liminf_{t \to \infty} \mathbb{E}[U_t] \leq \frac{|a| + \sup_t \mathbb{E}[M_t \mathbf{1}_{\{M_t > 0\}}]}{b - a} < \infty.$$
Specifically, $U < \infty$ a.s. Since this holds for all $a < b \in \mathbb{R}$, taking a union bound over all $a < b \in \mathbb{Q}$, we have that
$$\mathbb{P}\big[ \exists\, a < b \in \mathbb{Q} \,:\, \liminf_t M_t \leq a < b \leq \limsup_t M_t \big] \leq \sum_{a < b \in \mathbb{Q}} \mathbb{P}[U^{(a,b)} = \infty] = 0.$$
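The times $N_k$ in the Upcrossing Lemma translate directly into a small counting routine. A sketch (not from the notes; the name `upcrossings` is ours): scan a finite path, alternately waiting until it drops to $\leq a$ and then rises to $\geq b$; each completed alternation is one upcrossing.

```python
def upcrossings(path, a, b):
    """Count upcrossings of the interval (a, b) by a finite path."""
    count = 0
    below = False  # have we hit <= a since the last completed upcrossing?
    for m in path:
        if not below and m <= a:
            below = True           # time N_{2k-1}: path reached level a
        elif below and m >= b:
            count += 1             # time N_{2k}: path reached level b
            below = False
    return count

print(upcrossings([0, -1, 2, -1, 3, 0, 5], 0, 2))  # -> 3
```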

But then, a.s. we have that $\limsup_t M_t \leq \liminf_t M_t$, which implies that an a.s. limit $M_t \to M_\infty$ exists. By Fatou's Lemma again,
$$\mathbb{E}[M_\infty \mathbf{1}_{\{M_\infty > 0\}}] \leq \liminf_{t \to \infty} \mathbb{E}[M_t \mathbf{1}_{\{M_t > 0\}}] \leq \sup_t \mathbb{E}[M_t \mathbf{1}_{\{M_t > 0\}}] < \infty.$$
Another application of Fatou's Lemma gives
$$\mathbb{E}[\,|M_\infty| \mathbf{1}_{\{M_\infty < 0\}}] \leq \liminf_{t \to \infty} \big( \mathbb{E}[M_t \mathbf{1}_{\{M_t > 0\}}] - \mathbb{E}[M_t] \big) \leq \liminf_{t \to \infty} \big( \mathbb{E}[M_t \mathbf{1}_{\{M_t > 0\}}] - \mathbb{E}[M_0] \big) \leq \sup_t \mathbb{E}[M_t \mathbf{1}_{\{M_t > 0\}}] - \mathbb{E}[M_0] < \infty.$$
Thus,
$$\mathbb{E}|M_\infty| = \mathbb{E}[M_\infty \mathbf{1}_{\{M_\infty > 0\}}] - \mathbb{E}[M_\infty \mathbf{1}_{\{M_\infty < 0\}}] < \infty.$$

Exercise 3.30 Show that if $(M_t)_t$ is a super-martingale and $M_t \geq 0$ for all $t$ a.s., then $M_t \to M_\infty$ a.s. for some random variable $M_\infty$ with $\mathbb{E}|M_\infty| < \infty$.

Solution to ex:30. The process $X_t := -M_t$ is a sub-martingale and $\sup_t \mathbb{E}[X_t \mathbf{1}_{\{X_t > 0\}}] \leq 0 < \infty$. So $X_t$ converges a.s., which implies the a.s. convergence of $M_t$.

Exercise 3.31 Show that if $(M_t)_t$ is a bounded martingale then $M_t \to M_\infty$ a.s. for an integrable $M_\infty$.

3.7 Bounded harmonic functions

We will now use the martingale convergence theorem to study the space of bounded harmonic functions, BHF.

Theorem Let $G$ be a finitely generated group, and $\mu$ a SAS measure on $G$. Then $\dim \mathrm{BHF}(G, \mu) \in \{1, \infty\}$. That is, there are either infinitely many linearly independent bounded harmonic functions, or the only bounded harmonic functions are the constants.

Proof. Let $h$ be a bounded harmonic function, and let $(X_t)_t$ be the $\mu$-random walk on $G$. Then $(h(X_t))_t$ is a bounded martingale. Thus, $h(X_t) \to L$ a.s. for some integrable random variable $L$. Hence, $h(X_{t+k}) - h(X_t) \to 0$ a.s. for any $k$.

Fix $x \in G$. Let $k > 0$ be such that $\mathbb{P}[X_k = x] = \alpha > 0$. The Markov property ensures that $\mathbb{P}[X_{t+k} = X_t x \mid X_t] = \alpha$ a.s. for all $t$. Thus, for any $\varepsilon > 0$,
$$\mathbb{P}\big[ |h(X_t x) - h(X_t)| > \varepsilon \big] \leq \alpha^{-1} \, \mathbb{P}\big[ |h(X_{t+k}) - h(X_t)| > \varepsilon \big] \to 0.$$

Now assume that $\dim \mathrm{BHF}(G, \mu) < \infty$. Then there exists a ball $B = B(1, r)$ such that for all $f, f' \in \mathrm{BHF}(G, \mu)$, if $f|_B = f'|_B$ then $f = f'$. Define a norm on $\mathrm{BHF}(G, \mu)$ by $\|f\|_B = \max_{x \in B} |f(x)|$. Since all norms on finite dimensional spaces are equivalent, there exists a constant $K > 0$ such that $\|f\|_B \leq \|f\|_\infty \leq K \|f\|_B$ for all $f \in \mathrm{BHF}(G, \mu)$.

Now, since $\|y.h - c\|_\infty = \|h - c\|_\infty$ for any $y \in G$, for any $t$ we have
$$\inf_{c \in \mathbb{C}} \|h - c\|_\infty = \inf_{c \in \mathbb{C}} \|X_t^{-1}.h - c\|_\infty \leq K \inf_{c \in \mathbb{C}} \|X_t^{-1}.h - c\|_B = K \inf_{c \in \mathbb{C}} \max_{x \in B} |h(X_t x) - c| \leq K \max_{x \in B} |h(X_t x) - h(X_t)|.$$
Since this last term converges to $0$ in probability, it must be that $\inf_{c \in \mathbb{C}} \|h - c\|_\infty = 0$. Thus, $h$ is constant.

Conjecture Let $\mu, \nu$ be two SAS measures on a finitely generated group $G$. Then $\dim \mathrm{BHF}(G, \mu) = \dim \mathrm{BHF}(G, \nu)$.

Exercise 3.32 Show that if there exists a non-constant positive $h \in \mathrm{LHF}(G, \mu)$ then $\dim \mathrm{LHF}(G, \mu) = \infty$. (Hint: consider LHF modulo the constant functions.)

Solution to ex:32. Let $V = \mathrm{LHF}(G, \mu) / \mathbb{C}$ (modulo the constant functions). Fix some finite symmetric generating set $S$ of $G$. Assume that $\dim \mathrm{LHF}(G, \mu) < \infty$, so also $\dim V < \infty$. Recall the Lipschitz semi-norm
$$\|f\|_S := \sup_{x \in G, \, s \in S} |f(xs) - f(x)|.$$
$\|f\|_S = 0$ if and only if $f$ is constant. Also $\|f + c\|_S = \|f\|_S$ for any constant $c$. Thus, $\|\cdot\|_S$ induces a norm on $V$.

Another semi-norm is given by $\|f\|_B := \max_{x \in B} |f(x) - f(1)|$, where $B$ is some finite subset. Note that $\|f + c\|_B = \|f\|_B$ for any constant $c$. Because $\dim \mathrm{LHF}(G, \mu) < \infty$, if $B = B(1, r)$ for $r$ large enough then $\|\cdot\|_B$ is a semi-norm on $\mathrm{LHF}(G, \mu)$ such that $\|f\|_B = 0$ if and only if $f$ is constant. Thus, $\|\cdot\|_B$ induces a norm on $V$ as well.

Since $V$ is finite dimensional, all norms on it are equivalent. Thus, there exists a constant $K > 0$ such that $\|v\|_S \leq K \|v\|_B$ for any $v \in V$. Since these semi-norms are invariant under adding constants, this implies that $\|f\|_S \leq K \|f\|_B$ for all $f \in \mathrm{LHF}(G, \mu)$.

Now, let $h \in \mathrm{LHF}(G, \mu)$ be a positive harmonic function. Then $(h(X_t))_t$ is a positive martingale, implying that it converges a.s. Thus, for any fixed $k$, we have $h(X_{t+k}) - h(X_t) \to 0$ a.s.

Fix $x \in G$ and let $k$ be such that $\mathbb{P}[X_k = x] = \alpha > 0$. Note that the Markov property implies that $\mathbb{P}[X_{t+k} = X_t x \mid X_t] = \alpha$, independently of $t$. A.s. convergence implies convergence in probability, so for any $\varepsilon > 0$,
$$\mathbb{P}\big[ |h(X_t x) - h(X_t)| > \varepsilon \big] \leq \alpha^{-1} \, \mathbb{P}\big[ |h(X_{t+k}) - h(X_t)| > \varepsilon \big] \to 0.$$
So $h(X_t x) - h(X_t) \to 0$ in probability, for any $x \in G$. Since $B$ is a finite ball, this implies that $\max_{x \in B} |h(X_t x) - h(X_t)| \to 0$ in probability.

Now we also use the fact that $\|x.h\|_S = \|h\|_S$. Thus, for all $t$,
$$\|h\|_S = \|X_t^{-1}.h\|_S \leq K \|X_t^{-1}.h\|_B = K \max_{x \in B} |h(X_t x) - h(X_t)|.$$
Since this converges to $0$ in probability, we have $\|h\|_S = 0$ and $h$ is constant. This contradicts the existence of a non-constant positive $h \in \mathrm{LHF}(G, \mu)$, so $\dim \mathrm{LHF}(G, \mu) = \infty$.

Number of exercises in lecture: 32
Total number of exercises until here: 67

Chapter 4

Random walks on groups

As we have already started to see, random walks play a fundamental role in the study of harmonic functions. In this chapter we review the fundamentals of random walks on groups. Recall Section 2.1.

4.1 Markov chains

Definition A sequence $(X_t)_t$ of $G$-valued random variables is called a Markov chain if for every $t \geq 0$ the distribution of $X_{t+1}$ depends only on the value of $X_t$. Precisely, for any $x, y \in G$ we have
$$\mathbb{P}[X_{t+1} = y \mid X_t = x, X_{t-1}, \ldots, X_0] = \mathbb{P}[X_{t+1} = y \mid X_t = x].$$

A Markov chain $(X_t)_t$ is called time independent if for any $t \geq 0$ and any $x, y \in G$,
$$\mathbb{P}[X_{t+1} = y \mid X_t = x] = \mathbb{P}[X_1 = y \mid X_0 = x].$$
Otherwise it is called time dependent.

For a time independent Markov chain $(X_t)_t$, the matrix
$$P(x, y) := \mathbb{P}[X_1 = y \mid X_0 = x]$$
is called the transition matrix. Unless otherwise specified, by a Markov chain we refer to a time independent Markov chain.

Given a matrix $(P(x, y))_{x, y \in G}$, we say that $P$ is stochastic if for any $x \in G$ we have $\sum_y P(x, y) = 1$. If $P$ is a stochastic matrix, we can define the probability of a cylinder set to be
$$\mathbb{P}_x\big[ C(\{1, 2, \ldots, n\}, \omega) \big] = \mathbf{1}_{\{\omega_0 = x\}} \prod_{k=1}^{n} P(\omega_{k-1}, \omega_k).$$
This can be extended uniquely to a probability measure $\mathbb{P}_x$ on $(G^{\mathbb{N}}, \mathcal{F})$. Under this measure the sequence $(X_t)_t$ is a Markov chain with transition matrix $P$. With this notation, note that $\mathbb{P}_x[X_1 = y] = P(x, y)$.

If $\nu$ is a probability measure on $G$ we define $\mathbb{P}_\nu = \sum_x \nu(x) \mathbb{P}_x$, which is just the Markov chain with the starting point being random with law $\nu$.

The following property is called the Markov property.

Exercise 4.1 Let $P$ be a stochastic matrix, and $(X_t)_t$ the corresponding Markov chain. Show that for any event $A \in \sigma(X_0, \ldots, X_t)$, any $t, n \geq 0$ and any $x, y \in G$,
$$\mathbb{P}[X_{t+n} = y \mid A, X_t = x] = P^n(x, y) = \mathbb{P}_x[X_n = y]$$
(provided $\mathbb{P}[A, X_t = x] > 0$).

Example Consider a bored programmer. She has a (possibly biased) coin, and two chairs, say $a$ and $b$. Every minute, out of boredom, she tosses the coin. If it comes up heads, she moves to the other chair; otherwise, she does nothing. This can be modeled by a Markov chain on the state space $\{a, b\}$. At each time, with some probability $1 - p$ the programmer does not move, and with probability $p$ she jumps to the other state. The corresponding transition matrix would be
$$P = \begin{pmatrix} 1 - p & p \\ p & 1 - p \end{pmatrix}.$$
What is the probability $\mathbb{P}_a[X_n = b]$? For this we need to calculate $P^n$.
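The calculation of $P^n$ for the two-chair chain can be sketched as follows (not from the notes; the helper names are ours). Diagonalizing $P$, which has eigenvalues $1$ and $1 - 2p$, gives the closed form $\mathbb{P}_a[X_n = b] = \frac{1}{2}\big(1 - (1 - 2p)^n\big)$, and we can confirm it against a direct matrix power.

```python
def matmul(A, B):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matpow(P, n):
    """Compute P^n by repeated squaring."""
    R = [[1.0, 0.0], [0.0, 1.0]]  # identity
    while n > 0:
        if n % 2 == 1:
            R = matmul(R, P)
        P = matmul(P, P)
        n //= 2
    return R

p, n = 0.3, 10
P = [[1 - p, p], [p, 1 - p]]
Pn = matpow(P, n)
print(Pn[0][1])                    # P_a[X_n = b] by matrix power
print((1 - (1 - 2 * p) ** n) / 2)  # closed form; the two agree
```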


More information

Analysis Comprehensive Exam Questions Fall 2008

Analysis Comprehensive Exam Questions Fall 2008 Analysis Comprehensive xam Questions Fall 28. (a) Let R be measurable with finite Lebesgue measure. Suppose that {f n } n N is a bounded sequence in L 2 () and there exists a function f such that f n (x)

More information

Inference for Stochastic Processes

Inference for Stochastic Processes Inference for Stochastic Processes Robert L. Wolpert Revised: June 19, 005 Introduction A stochastic process is a family {X t } of real-valued random variables, all defined on the same probability space

More information

4 Expectation & the Lebesgue Theorems

4 Expectation & the Lebesgue Theorems STA 205: Probability & Measure Theory Robert L. Wolpert 4 Expectation & the Lebesgue Theorems Let X and {X n : n N} be random variables on a probability space (Ω,F,P). If X n (ω) X(ω) for each ω Ω, does

More information

MAT 570 REAL ANALYSIS LECTURE NOTES. Contents. 1. Sets Functions Countability Axiom of choice Equivalence relations 9

MAT 570 REAL ANALYSIS LECTURE NOTES. Contents. 1. Sets Functions Countability Axiom of choice Equivalence relations 9 MAT 570 REAL ANALYSIS LECTURE NOTES PROFESSOR: JOHN QUIGG SEMESTER: FALL 204 Contents. Sets 2 2. Functions 5 3. Countability 7 4. Axiom of choice 8 5. Equivalence relations 9 6. Real numbers 9 7. Extended

More information

Lectures on Integration. William G. Faris

Lectures on Integration. William G. Faris Lectures on Integration William G. Faris March 4, 2001 2 Contents 1 The integral: properties 5 1.1 Measurable functions......................... 5 1.2 Integration.............................. 7 1.3 Convergence

More information

1/12/05: sec 3.1 and my article: How good is the Lebesgue measure?, Math. Intelligencer 11(2) (1989),

1/12/05: sec 3.1 and my article: How good is the Lebesgue measure?, Math. Intelligencer 11(2) (1989), Real Analysis 2, Math 651, Spring 2005 April 26, 2005 1 Real Analysis 2, Math 651, Spring 2005 Krzysztof Chris Ciesielski 1/12/05: sec 3.1 and my article: How good is the Lebesgue measure?, Math. Intelligencer

More information

1 Stat 605. Homework I. Due Feb. 1, 2011

1 Stat 605. Homework I. Due Feb. 1, 2011 The first part is homework which you need to turn in. The second part is exercises that will not be graded, but you need to turn it in together with the take-home final exam. 1 Stat 605. Homework I. Due

More information

Principle of Mathematical Induction

Principle of Mathematical Induction Advanced Calculus I. Math 451, Fall 2016, Prof. Vershynin Principle of Mathematical Induction 1. Prove that 1 + 2 + + n = 1 n(n + 1) for all n N. 2 2. Prove that 1 2 + 2 2 + + n 2 = 1 n(n + 1)(2n + 1)

More information

Markov Chains CK eqns Classes Hitting times Rec./trans. Strong Markov Stat. distr. Reversibility * Markov Chains

Markov Chains CK eqns Classes Hitting times Rec./trans. Strong Markov Stat. distr. Reversibility * Markov Chains Markov Chains A random process X is a family {X t : t T } of random variables indexed by some set T. When T = {0, 1, 2,... } one speaks about a discrete-time process, for T = R or T = [0, ) one has a continuous-time

More information

Martingale Theory and Applications

Martingale Theory and Applications Martingale Theory and Applications Dr Nic Freeman June 4, 2015 Contents 1 Conditional Expectation 2 1.1 Probability spaces and σ-fields............................ 2 1.2 Random Variables...................................

More information

Stochastic Processes. Winter Term Paolo Di Tella Technische Universität Dresden Institut für Stochastik

Stochastic Processes. Winter Term Paolo Di Tella Technische Universität Dresden Institut für Stochastik Stochastic Processes Winter Term 2016-2017 Paolo Di Tella Technische Universität Dresden Institut für Stochastik Contents 1 Preliminaries 5 1.1 Uniform integrability.............................. 5 1.2

More information

Course 311: Michaelmas Term 2005 Part III: Topics in Commutative Algebra

Course 311: Michaelmas Term 2005 Part III: Topics in Commutative Algebra Course 311: Michaelmas Term 2005 Part III: Topics in Commutative Algebra D. R. Wilkins Contents 3 Topics in Commutative Algebra 2 3.1 Rings and Fields......................... 2 3.2 Ideals...............................

More information

Part V. 17 Introduction: What are measures and why measurable sets. Lebesgue Integration Theory

Part V. 17 Introduction: What are measures and why measurable sets. Lebesgue Integration Theory Part V 7 Introduction: What are measures and why measurable sets Lebesgue Integration Theory Definition 7. (Preliminary). A measure on a set is a function :2 [ ] such that. () = 2. If { } = is a finite

More information

Solutions to the Exercises in Stochastic Analysis

Solutions to the Exercises in Stochastic Analysis Solutions to the Exercises in Stochastic Analysis Lecturer: Xue-Mei Li 1 Problem Sheet 1 In these solution I avoid using conditional expectations. But do try to give alternative proofs once we learnt conditional

More information

Part II Probability and Measure

Part II Probability and Measure Part II Probability and Measure Theorems Based on lectures by J. Miller Notes taken by Dexter Chua Michaelmas 2016 These notes are not endorsed by the lecturers, and I have modified them (often significantly)

More information

Real Analysis Math 131AH Rudin, Chapter #1. Dominique Abdi

Real Analysis Math 131AH Rudin, Chapter #1. Dominique Abdi Real Analysis Math 3AH Rudin, Chapter # Dominique Abdi.. If r is rational (r 0) and x is irrational, prove that r + x and rx are irrational. Solution. Assume the contrary, that r+x and rx are rational.

More information

Lecture Notes in Advanced Calculus 1 (80315) Raz Kupferman Institute of Mathematics The Hebrew University

Lecture Notes in Advanced Calculus 1 (80315) Raz Kupferman Institute of Mathematics The Hebrew University Lecture Notes in Advanced Calculus 1 (80315) Raz Kupferman Institute of Mathematics The Hebrew University February 7, 2007 2 Contents 1 Metric Spaces 1 1.1 Basic definitions...........................

More information

NOTES ON KLEINER S PROOF OF GROMOV S POLYNOMIAL GROWTH THEOREM

NOTES ON KLEINER S PROOF OF GROMOV S POLYNOMIAL GROWTH THEOREM NOTES ON KLEINER S PROOF OF GROMOV S POLYNOMIAL GROWTH THEOREM ROMAN SAUER Abstract. We present and explain Kleiner s new proof of Gromov s polynomial growth [Kle07] theorem which avoids the use of Montgomery-Zippin

More information

18.175: Lecture 3 Integration

18.175: Lecture 3 Integration 18.175: Lecture 3 Scott Sheffield MIT Outline Outline Recall definitions Probability space is triple (Ω, F, P) where Ω is sample space, F is set of events (the σ-algebra) and P : F [0, 1] is the probability

More information

Exercises Measure Theoretic Probability

Exercises Measure Theoretic Probability Exercises Measure Theoretic Probability 2002-2003 Week 1 1. Prove the folloing statements. (a) The intersection of an arbitrary family of d-systems is again a d- system. (b) The intersection of an arbitrary

More information

Advanced Probability

Advanced Probability Advanced Probability University of Cambridge, Part III of the Mathematical Tripos Michaelmas Term 2006 Grégory Miermont 1 1 CNRS & Laboratoire de Mathématique, Equipe Probabilités, Statistique et Modélisation,

More information

fy (X(g)) Y (f)x(g) gy (X(f)) Y (g)x(f)) = fx(y (g)) + gx(y (f)) fy (X(g)) gy (X(f))

fy (X(g)) Y (f)x(g) gy (X(f)) Y (g)x(f)) = fx(y (g)) + gx(y (f)) fy (X(g)) gy (X(f)) 1. Basic algebra of vector fields Let V be a finite dimensional vector space over R. Recall that V = {L : V R} is defined to be the set of all linear maps to R. V is isomorphic to V, but there is no canonical

More information

PROBABILITY THEORY II

PROBABILITY THEORY II Ruprecht-Karls-Universität Heidelberg Institut für Angewandte Mathematik Prof. Dr. Jan JOHANNES Outline of the lecture course PROBABILITY THEORY II Summer semester 2016 Preliminary version: April 21, 2016

More information

Finite-dimensional spaces. C n is the space of n-tuples x = (x 1,..., x n ) of complex numbers. It is a Hilbert space with the inner product

Finite-dimensional spaces. C n is the space of n-tuples x = (x 1,..., x n ) of complex numbers. It is a Hilbert space with the inner product Chapter 4 Hilbert Spaces 4.1 Inner Product Spaces Inner Product Space. A complex vector space E is called an inner product space (or a pre-hilbert space, or a unitary space) if there is a mapping (, )

More information

Stochastics Process Note. Xing Wang

Stochastics Process Note. Xing Wang Stochastics Process Note Xing Wang April 2014 Contents 0.1 σ-algebra............................................ 3 0.1.1 Monotone Class Theorem............................... 3 0.2 measure and expectation....................................

More information

2 Probability, random elements, random sets

2 Probability, random elements, random sets Tel Aviv University, 2012 Measurability and continuity 25 2 Probability, random elements, random sets 2a Probability space, measure algebra........ 25 2b Standard models................... 30 2c Random

More information

Probability Theory II. Spring 2014 Peter Orbanz

Probability Theory II. Spring 2014 Peter Orbanz Probability Theory II Spring 2014 Peter Orbanz Contents Chapter 1. Martingales, continued 1 1.1. Martingales indexed by partially ordered sets 1 1.2. Notions of convergence for martingales 3 1.3. Uniform

More information

Lecture 19 L 2 -Stochastic integration

Lecture 19 L 2 -Stochastic integration Lecture 19: L 2 -Stochastic integration 1 of 12 Course: Theory of Probability II Term: Spring 215 Instructor: Gordan Zitkovic Lecture 19 L 2 -Stochastic integration The stochastic integral for processes

More information

CHAPTER 1. Metric Spaces. 1. Definition and examples

CHAPTER 1. Metric Spaces. 1. Definition and examples CHAPTER Metric Spaces. Definition and examples Metric spaces generalize and clarify the notion of distance in the real line. The definitions will provide us with a useful tool for more general applications

More information

5 Measure theory II. (or. lim. Prove the proposition. 5. For fixed F A and φ M define the restriction of φ on F by writing.

5 Measure theory II. (or. lim. Prove the proposition. 5. For fixed F A and φ M define the restriction of φ on F by writing. 5 Measure theory II 1. Charges (signed measures). Let (Ω, A) be a σ -algebra. A map φ: A R is called a charge, (or signed measure or σ -additive set function) if φ = φ(a j ) (5.1) A j for any disjoint

More information

Metric Spaces and Topology

Metric Spaces and Topology Chapter 2 Metric Spaces and Topology From an engineering perspective, the most important way to construct a topology on a set is to define the topology in terms of a metric on the set. This approach underlies

More information

CHAPTER 6. Differentiation

CHAPTER 6. Differentiation CHPTER 6 Differentiation The generalization from elementary calculus of differentiation in measure theory is less obvious than that of integration, and the methods of treating it are somewhat involved.

More information

Exercises Measure Theoretic Probability

Exercises Measure Theoretic Probability Exercises Measure Theoretic Probability Chapter 1 1. Prove the folloing statements. (a) The intersection of an arbitrary family of d-systems is again a d- system. (b) The intersection of an arbitrary family

More information

(1) Consider the space S consisting of all continuous real-valued functions on the closed interval [0, 1]. For f, g S, define

(1) Consider the space S consisting of all continuous real-valued functions on the closed interval [0, 1]. For f, g S, define Homework, Real Analysis I, Fall, 2010. (1) Consider the space S consisting of all continuous real-valued functions on the closed interval [0, 1]. For f, g S, define ρ(f, g) = 1 0 f(x) g(x) dx. Show that

More information

Continuity. Chapter 4

Continuity. Chapter 4 Chapter 4 Continuity Throughout this chapter D is a nonempty subset of the real numbers. We recall the definition of a function. Definition 4.1. A function from D into R, denoted f : D R, is a subset of

More information

If Y and Y 0 satisfy (1-2), then Y = Y 0 a.s.

If Y and Y 0 satisfy (1-2), then Y = Y 0 a.s. 20 6. CONDITIONAL EXPECTATION Having discussed at length the limit theory for sums of independent random variables we will now move on to deal with dependent random variables. An important tool in this

More information

Random Variables. Random variables. A numerically valued map X of an outcome ω from a sample space Ω to the real line R

Random Variables. Random variables. A numerically valued map X of an outcome ω from a sample space Ω to the real line R In probabilistic models, a random variable is a variable whose possible values are numerical outcomes of a random phenomenon. As a function or a map, it maps from an element (or an outcome) of a sample

More information

Selected Exercises on Expectations and Some Probability Inequalities

Selected Exercises on Expectations and Some Probability Inequalities Selected Exercises on Expectations and Some Probability Inequalities # If E(X 2 ) = and E X a > 0, then P( X λa) ( λ) 2 a 2 for 0 < λ

More information

µ X (A) = P ( X 1 (A) )

µ X (A) = P ( X 1 (A) ) 1 STOCHASTIC PROCESSES This appendix provides a very basic introduction to the language of probability theory and stochastic processes. We assume the reader is familiar with the general measure and integration

More information

Admin and Lecture 1: Recap of Measure Theory

Admin and Lecture 1: Recap of Measure Theory Admin and Lecture 1: Recap of Measure Theory David Aldous January 16, 2018 I don t use bcourses: Read web page (search Aldous 205B) Web page rather unorganized some topics done by Nike in 205A will post

More information

Some Background Material

Some Background Material Chapter 1 Some Background Material In the first chapter, we present a quick review of elementary - but important - material as a way of dipping our toes in the water. This chapter also introduces important

More information

An essay on the general theory of stochastic processes

An essay on the general theory of stochastic processes Probability Surveys Vol. 3 (26) 345 412 ISSN: 1549-5787 DOI: 1.1214/1549578614 An essay on the general theory of stochastic processes Ashkan Nikeghbali ETHZ Departement Mathematik, Rämistrasse 11, HG G16

More information

l(y j ) = 0 for all y j (1)

l(y j ) = 0 for all y j (1) Problem 1. The closed linear span of a subset {y j } of a normed vector space is defined as the intersection of all closed subspaces containing all y j and thus the smallest such subspace. 1 Show that

More information

Brownian Motion and Stochastic Calculus

Brownian Motion and Stochastic Calculus ETHZ, Spring 17 D-MATH Prof Dr Martin Larsson Coordinator A Sepúlveda Brownian Motion and Stochastic Calculus Exercise sheet 6 Please hand in your solutions during exercise class or in your assistant s

More information

Lecture 22: Variance and Covariance

Lecture 22: Variance and Covariance EE5110 : Probability Foundations for Electrical Engineers July-November 2015 Lecture 22: Variance and Covariance Lecturer: Dr. Krishna Jagannathan Scribes: R.Ravi Kiran In this lecture we will introduce

More information

THEOREMS, ETC., FOR MATH 516

THEOREMS, ETC., FOR MATH 516 THEOREMS, ETC., FOR MATH 516 Results labeled Theorem Ea.b.c (or Proposition Ea.b.c, etc.) refer to Theorem c from section a.b of Evans book (Partial Differential Equations). Proposition 1 (=Proposition

More information

Recall that if X is a compact metric space, C(X), the space of continuous (real-valued) functions on X, is a Banach space with the norm

Recall that if X is a compact metric space, C(X), the space of continuous (real-valued) functions on X, is a Banach space with the norm Chapter 13 Radon Measures Recall that if X is a compact metric space, C(X), the space of continuous (real-valued) functions on X, is a Banach space with the norm (13.1) f = sup x X f(x). We want to identify

More information

Brownian motion. Samy Tindel. Purdue University. Probability Theory 2 - MA 539

Brownian motion. Samy Tindel. Purdue University. Probability Theory 2 - MA 539 Brownian motion Samy Tindel Purdue University Probability Theory 2 - MA 539 Mostly taken from Brownian Motion and Stochastic Calculus by I. Karatzas and S. Shreve Samy T. Brownian motion Probability Theory

More information

REAL AND COMPLEX ANALYSIS

REAL AND COMPLEX ANALYSIS REAL AND COMPLE ANALYSIS Third Edition Walter Rudin Professor of Mathematics University of Wisconsin, Madison Version 1.1 No rights reserved. Any part of this work can be reproduced or transmitted in any

More information