STOR 635 Notes (S13)


Jimmy Jin, UNC-Chapel Hill
Last updated: 1/14/14

Contents

1 Measure theory and probability basics
  1.1 Algebras and measure
  1.2 Integration
  1.3 Inequalities
  1.4 Convergence notions
  1.5 Independence
2 Zero-one laws and stopping times
  2.1 Borel-Cantelli lemmas
  2.2 Filtrations and stopping times
  2.3 Kolmogorov zero-one law
3 Laws of Large Numbers
  3.1 4th moment/L^2 strong laws, Glivenko-Cantelli
  3.2 2nd moment results
  3.3 Strong law of large numbers
4 Martingales
  4.1 Conditional expectation
  4.2 L^2 interpretation of conditional expectation
  4.3 Martingale basics
  4.4 Discrete stochastic integral
  4.5 Optional sampling theorems
  4.6 Maximal inequalities and upcrossing lemma
  4.7 Convergence theorems
  4.8 Boundary crossings and Azuma-Hoeffding
  4.9 Uniform integrability and branching processes
5 Weak convergence
  5.1 Basic properties, Scheffé's theorem
  5.2 Helly's theorem
  5.3 Characteristic functions
  5.4 The continuity theorem, iid CLT
  5.5 Lindeberg-Feller CLT

1 Measure theory and probability basics

1.1 Algebras and measure

Definition. (Probability measure) Let (S, S) be a measurable space. If a measure µ on (S, S) satisfies µ(S) = 1, then we say µ is a probability measure.

Definition. (Random variable) A measurable map X from one measurable space (Ω, F) to another measurable space (S, S) is a function such that

X^{-1}(B) = {ω : X(ω) ∈ B} ∈ F for all B ∈ S

If the target space is (S, S) = (R, B(R)), then X is called a random variable. If the target space is (R^d, B(R^d)), then X is called a random vector.

Remark. We often write {X ∈ B} as shorthand for {ω : X(ω) ∈ B}.

Theorem 1.1. If {ω : X(ω) ∈ A} ∈ F for all A ∈ A and σ(A) = S, then X is measurable.

Proof. Note that

{ω : X(ω) ∈ ∪_{i=1}^∞ B_i} = ∪_{i=1}^∞ {ω : X(ω) ∈ B_i}
{ω : X(ω) ∈ B^c} = {ω : X(ω) ∈ B}^c

Therefore the class of sets B = {B : {ω : X(ω) ∈ B} ∈ F} is a σ-field. Then since B ⊇ A and A generates S, it follows that B ⊇ S, i.e. X is measurable.

Definition. (π-class, λ-class) Let S be some space and let L, A be collections of subsets of S. A is a π-class if it is closed under finite intersections. L is a λ-class if

1. S ∈ L
2. If A, B ∈ L with A ⊆ B, then B \ A ∈ L
3. If {A_n} is increasing with each A_n ∈ L, then lim_n A_n = ∪_n A_n ∈ L.

Theorem 1.2. (Dynkin's π-λ theorem) For a λ-class L and a π-class A as above,

L ⊇ A ⇒ L ⊇ σ(A)

Lemma 1.3. (Identification lemma) Suppose p_1, p_2 are probability measures on (S, S) and p_1(A) = p_2(A) for all A ∈ A. If A is a π-class and σ(A) = S, then p_1(A) = p_2(A) for all A ∈ S.

Proof. Define L = {A ∈ S : p_1(A) = p_2(A)}. Note that:

1. L ⊇ A, and A is a π-class by assumption.
2. If we show that L is a λ-class, then the result follows from the π-λ theorem.

To show the three properties of λ-classes:

1. S ∈ L because p_1, p_2 are both probability measures.
2. To show that A, B ∈ L with A ⊆ B implies B \ A ∈ L, use additivity.
3. To show that lim A_n ∈ L for increasing {A_n} ⊆ L, use the continuity property of measures.

Remark. (Lebesgue measure) There exists a unique σ-finite measure Λ on (R, B(R)) such that

Λ((a, b]) = b − a

Thus to get the uniform probability measure, simply restrict Λ above to [0, 1]. Uniqueness is shown by considering the π-class A = {(a, b] : a ≤ b, a, b ∈ [0, 1]} and applying the identification lemma.

Definition. (Induced measure) Let (S_1, S_1, µ_1) be a measure space and let (S_2, S_2) be another measurable space. Let f : S_1 → S_2 be a measurable function. Then we can construct a measure µ_2 on S_2 (called the measure induced on S_2 by f) by:

µ_2(B) = µ_1({s ∈ S_1 : f(s) ∈ B}), B ∈ S_2

If µ_1 is a probability measure, then µ_2 is also a probability measure.

Definition. (Law/distribution, distribution function) Let (Ω, F, P) be a probability space.

1. The law or distribution of a r.v. X is the measure induced on (R, B(R)) by X from the original probability measure P on (Ω, F):

µ(B) = P({ω ∈ Ω : X(ω) ∈ B}), B ∈ B(R)

2. The distribution function of a r.v. X is the function F : R → [0, 1] defined by

F(x) = µ((−∞, x]) = P({ω ∈ Ω : X(ω) ∈ (−∞, x]})

that is, it is the law of X evaluated on the Borel set (−∞, x].

Remark. The distribution function has three basic properties:

1. 0 ≤ F(x) ≤ 1 and F is non-decreasing.

2. F is right-continuous: x_n ↓ x ⇒ F(x_n) ↓ F(x).
3. lim_{x→∞} F(x) = 1 and lim_{x→−∞} F(x) = 0.

Theorem 1.4. (Probability integral transform) Suppose F is a function satisfying the three properties of distribution functions above. Then there exists a unique probability measure on (R, B(R)) whose distribution function is F.

Proof. We know that the measure space ([0, 1], B([0, 1]), Λ), where Λ is the uniform probability measure, exists. Given F satisfying the three properties, define the map G : [0, 1] → R by

G(y) = inf{x : F(x) ≥ y}

From G, obtain the induced measure µ on (R, B(R)) using Λ:

µ(B) = Λ({s ∈ [0, 1] : G(s) ∈ B})

µ is guaranteed to be a probability measure since Λ is a probability measure. Then, using the fact that G(y) ≤ x if and only if y ≤ F(x) (by right-continuity of F), we see that

F_G(x) = µ((−∞, x]) = Λ({y : G(y) ≤ x}) = Λ({y : y ≤ F(x)}) = F(x)

1.2 Integration

STUPID NOTATION NOTE: Let (S, S, µ) be a measure space. Then the following are equivalent notations for the integral of f with respect to µ:

∫_S f(s) µ(ds) = ∫_S f(s) dµ(s) = ∫_S f dµ

Now let (S, S, µ) be a measure space where µ is σ-finite. Define H^+ to be the space of all non-negative measurable functions f : S → [0, ∞]. 

Theorem 1.5. There exists a unique map I : H^+ → [0, ∞] such that:

1. I(1_A) = µ(A)
2. f, g ∈ H^+ ⇒ I(f + g) = I(f) + I(g)
3. f ∈ H^+ and c ≥ 0 ⇒ I(cf) = cI(f).

4. Let {f_n} be a sequence of functions in H^+ such that f_n(x) ≤ f_{n+1}(x) for each n. Also, let f be a function in H^+ such that f_n(x) ↑ f(x). Then I(f_n) ↑ I(f).

Proof. (Sketch) First, we define some notation:

I(f) = ∫_S f dµ, I(f 1_A) = ∫_S f 1_A dµ = ∫_A f dµ

We proceed by the Nike method, also known as the Eminem method: to prove the first property, we just do it and define I(1_A) = µ(A). Furthermore, for a simple function f = Σ_{i=1}^n c_i 1_{A_i}, where the A_i's partition S, we define I(f) = Σ_{i=1}^n c_i µ(A_i).

To show linearity, prove an intermediate result. Let 0 ≤ f ≤ L be a bounded function. Then for any sequence of simple functions {f_k} with f_k ↑ f, the sequence I(f_k) is increasing, so lim_k I(f_k) exists. Define I(f) = lim_k I(f_k) and show uniqueness (the value does not depend on the approximating sequence).

To show monotone convergence, note that each f_n has an increasing sequence of non-negative simple functions {f_{n,k}}_{k=1}^∞ with lim_k f_{n,k}(x) = f_n(x) for all x ∈ S. Also note that the sequence {g_k} defined by g_k(x) = max_{n≤k} f_{n,k}(x) is likewise simple and increasing. Establish the fact that for n ≤ k,

f_{n,k}(x) ≤ g_k(x) ≤ f_k(x)

and take limits in the correct order to show that {g_k} converges to f. Then take integrals of the above expression and take limits again to show the main result (using the definition of an integral as the limit of integrals of simple functions).

Example. (Counting measure) Let the σ-field F = 2^N and let f : N → [0, ∞). Also, let µ be the counting measure on F, that is:

µ(A) = |A| = the number of elements in A, A ∈ F

The question is: how do we calculate ∫ f dµ? First, define g_n by:

g_n(i) = f(i) if i ≤ n, and 0 otherwise

Note that g_n ↑ f and that g_n(s) = Σ_{i=1}^n f(i) 1_{{i}}(s), so g_n is simple. We know

how to integrate simple functions:

∫ g_n(s) dµ(s) = Σ_{i=1}^n f(i) µ({i}) = Σ_{i=1}^n f(i)

Then, invoking our theorem, we send n → ∞ in the expression above to obtain the integral ∫ f dµ = Σ_{i=1}^∞ f(i).

Lemma 1.6. Suppose f ∈ H^+ and suppose there exists a set A with µ(A) = 0 such that f(s) = 0 for s ∈ A^c (f can be infinite on A). Then ∫ f dµ = 0.

Example. Let h_1, h_2 be such that h_1 = h_2 a.e. Then µ({s : h_1(s) ≠ h_2(s)}) = 0.

Example. Let {f_n}, f be such that f_n → f a.e. Then µ({s : f_n(s) does not converge to f(s)}) = 0.

Next, we discuss the notion of integrability of a function, which is simply a term for whether or not a given function has a finite integral. First, we formalize some notation.

1. For x ∈ R: x^+ = max{x, 0} and x^− = max{−x, 0}.
2. For a measurable function f: f^+(s) = max{f(s), 0} and f^−(s) = max{−f(s), 0}.
3. f = f^+ − f^− and |f| = f^+ + f^−.

Definition. (Integrable function) A measurable function f is integrable if:

∫ f^+ dµ < ∞ and ∫ f^− dµ < ∞

And for a general (not necessarily non-negative) measurable function f, we define the integral as:

∫ f dµ = ∫ f^+ dµ − ∫ f^− dµ

Theorem 1.7. For integrable functions f and g on a measure space (S, S, µ), the following properties hold (all integrals with respect to the measure µ):

1. For a, b ∈ R, ∫(af + bg) = a∫f + b∫g
2. f ≥ 0 a.e. ⇒ ∫f ≥ 0
3. f = 0 a.e. ⇒ ∫f = 0
4. f ≤ g a.e. ⇒ ∫f ≤ ∫g
5. f = g a.e. ⇒ ∫f = ∫g
6. |∫f| ≤ ∫|f|
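The counting-measure example above can be checked numerically: integrating f against counting measure on N is just summing f, and the simple functions g_n have integrals equal to partial sums that increase to the series (a sketch; the choice f(i) = 2^{-i}, whose series sums to 1, is ours):

```python
# Counting measure on N: the integral of f is the sum of f(i), and the
# simple functions g_n (f truncated to {1,...,n}) have integrals equal
# to partial sums, which increase to the full integral (monotone convergence).
def f(i):
    return 2.0 ** (-i)  # illustrative choice: sum_{i>=1} f(i) = 1

def integral_g(n):
    # integral of g_n = sum_{i<=n} f(i) * mu({i}), with mu({i}) = 1
    return sum(f(i) for i in range(1, n + 1))

vals = [integral_g(n) for n in (1, 5, 10, 50)]
assert all(a <= b for a, b in zip(vals, vals[1:]))  # I(g_n) is increasing
print(abs(vals[-1] - 1.0) < 1e-12)
```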

Definition. (Absolutely continuous) Let µ and ν be measures on a σ-field S. We say ν is absolutely continuous with respect to µ (written ν ≪ µ) if

µ(A) = 0 ⇒ ν(A) = 0, A ∈ S

The general setup of the theorem is the following: let µ be a σ-finite measure and let f ≥ 0 be a measurable function. Define the set function:

ν(A) = ∫ 1_A f dµ = ∫_A f dµ

It can be easily checked that ν is a measure and that µ(A) = 0 implies ν(A) = 0 also, so that ν ≪ µ. The Radon-Nikodym theorem gives us the converse in addition:

Theorem 1.8. Let ν, µ be measures with ν ≪ µ and µ σ-finite. Then there exists an a.e.-unique measurable function f such that for all A ∈ S,

ν(A) = ∫_A f dµ

We call f the Radon-Nikodym derivative or density, written f = dν/dµ.

Lemma 1.9. Suppose µ is σ-finite, f ≥ 0 and g ≥ 0 are integrable, and the following property holds:

for all A ∈ S, ∫_A f dµ = ∫_A g dµ

Then f = g a.e.

Theorem 1.10. (Change of measure) Let X be a function (Ω, F, P) → (S, S) and let µ be the measure on S induced by X. Let h : S → R be measurable and furthermore let E(|h(X)|) = ∫ |h(X)| dP < ∞. Then:

E(h(X)) = ∫_Ω h(X(ω)) dP = ∫_S h(s) dµ(s)

Example. Let B be our space of infinite coin tosses from before. Our underlying probability space is (B, σ({A_π}), P), where an element ω ∈ B is given by

ω = (ω_1, ω_2, ...)

Let X : B → R be defined by X(ω) = ω_1. We are interested in calculating the expectation of a function of our random variable: h(X) = sin(X). By our theorem,

E(sin X) = ∫_B sin(X(ω)) dP = ∫_R sin(s) dm(s)

where m is the law of X on B(R) (Lebesgue measure restricted to [0, 1] in the construction from before).
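Theorem 1.10 can be illustrated by simulation (a sketch under our own assumptions: we take X uniform on [0, 1], so its law is Lebesgue measure on [0, 1], and h = sin). The expectation computed by sampling on the underlying space should match the integral of h against the induced law, which here is ∫_0^1 sin s ds = 1 − cos 1:

```python
# Change of measure: E[h(X)] via sampling on Omega vs. integrating h
# against the law of X. Here X ~ Uniform[0,1] (our choice), h = sin,
# and the induced-law integral is 1 - cos(1).
import math
import random

random.seed(0)
n = 200_000
mc = sum(math.sin(random.random()) for _ in range(n)) / n  # E[h(X)] by sampling
exact = 1 - math.cos(1)                                    # integral of h against the law
print(abs(mc - exact) < 0.01)
```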

1.3 Inequalities

Lemma 1.11. If ψ is a convex function on an open interval I ⊆ R, then for any x_0 ∈ I there exists a support line l_0 such that l_0(x) ≤ ψ(x) for all x ∈ I and l_0(x_0) = ψ(x_0).

Proof. Convexity gives us two facts:

1. For any h > 0, applying the definition of convexity with α = 1/2 gives:

(ψ(x) − ψ(x − h))/h ≤ (ψ(x + h) − ψ(x))/h

2. For any h_1 > h_2 > 0, applying the definition of convexity with α = h_2/h_1 gives:

(ψ(x) − ψ(x − h_1))/h_1 ≤ (ψ(x) − ψ(x − h_2))/h_2

By (2) the difference quotients are monotone, so their limits as h → 0^+ exist; define:

ψ'_−(x) = lim_{h→0^+} (ψ(x) − ψ(x − h))/h and ψ'_+(x) = lim_{h→0^+} (ψ(x + h) − ψ(x))/h

By (1), ψ'_−(x) ≤ ψ'_+(x) for any x. Then, for fixed z, choose some a ∈ [ψ'_−(z), ψ'_+(z)] and define the line l_z by:

l_z(x) = ψ(z) + a(x − z)

Clearly l_z(z) = ψ(z), and by monotonicity of the difference quotients it is easy to see that l_z(x) ≤ ψ(x) for all x ∈ I.

Theorem 1.12. (Jensen's inequality) Let g be convex on I, an open interval of R. Let X ∈ I with probability 1 and assume X, g(X) have finite expectation. Then

g(EX) ≤ E(g(X))

Proof. First note that EX ∈ I. So let l(x) = ax + b be the support line for g(·) at EX. Then by the definition of support line we have

1. l(EX) = g(EX)
2. l(x) ≤ g(x) for all x ∈ I

Taking expectations in (2) above, we obtain:

E g(X) ≥ E l(X) = E(aX + b) = aEX + b = l(EX)

Then noting (1) above, we are done.
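A quick numerical sanity check of Jensen's inequality (a sketch; the finite sample and the convex function g(x) = x^2 are arbitrary choices of ours): applied to the empirical distribution of any sample, it says g(EX) ≤ E[g(X)].

```python
# Jensen's inequality for the empirical measure of a finite sample:
# g(mean of xs) <= mean of g(xs) for convex g.
xs = [-1.0, 0.5, 2.0, 3.5]

def g(x):
    return x * x  # convex

mean = sum(xs) / len(xs)
mean_g = sum(g(x) for x in xs) / len(xs)
assert g(mean) <= mean_g
print(g(mean), "<=", mean_g)
```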

Theorem 1.13. (Hölder's inequality) If p, q > 1 and 1/p + 1/q = 1, then for any random variables X, Y,

E(|XY|) ≤ (E|X|^p)^{1/p} (E|Y|^q)^{1/q}

Proof. (Sketch) Fix y > 0. Consider f(x) = x^p/p + y^q/q − xy. By finding the minimum of f(x), we can show that f(x) ≥ 0 for all x ≥ 0. Then, choose

x = |X| / (E|X|^p)^{1/p}, y = |Y| / (E|Y|^q)^{1/q}

and take expectations of both sides of the inequality.

Theorem 1.14. (Markov's inequality) Suppose X is a real-valued random variable, and ψ : R → R a non-negative function. Fix any set A ∈ B(R) (e.g. A = [a, ∞)). Define i_A = inf{ψ(x) : x ∈ A}. Then,

i_A P(X ∈ A) ≤ E(ψ(X))

Proof. Note ψ(X) ≥ i_A 1_{X ∈ A}. Take expectations.

Example. Suppose X ≥ 0, ψ(x) = x, A = [a, ∞). Then,

aP(X ≥ a) ≤ EX

Similarly, a^k P(|X| ≥ a) ≤ E(|X|^k).

Example. Let Y ≥ 0 and EY^2 < ∞. Then:

P(Y > 0) ≥ (EY)^2 / EY^2

Proof. Apply Hölder's inequality to Y 1_{Y>0}.

1.4 Convergence notions

Theorem 1.15. (Relationships between notions of convergence)

1. X_n →a.s. X ⟺ P(ω : |X_n(ω) − X(ω)| > ε i.o.) = 0 for all ε > 0
2. X_n →P X ⟺ P(ω : |X_n(ω) − X(ω)| > ε) → 0 for all ε > 0
3. X_n →Lp X ⟺ E|X_n − X|^p → 0
4. X_n →Lp X ⇒ X_n →P X
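The gap between convergence in probability/L^p and a.s. convergence can be made concrete with the classical "typewriter" sequence of indicators of shrinking dyadic subintervals of [0, 1] (a standard counterexample we add for illustration; it is not from the notes): P(X_n = 1) → 0, yet each fixed ω lies in infinitely many of the intervals, so X_n(ω) does not converge.

```python
# Typewriter sequence: the n-th indicator sits on a dyadic subinterval of
# [0,1]; interval lengths shrink (so X_n -> 0 in probability and in L^p),
# yet every point is covered at every dyadic level (no a.s. limit).
def interval(n):
    # n-th dyadic interval: for n = 2^k + j (0 <= j < 2^k), it is [j/2^k, (j+1)/2^k)
    k = n.bit_length() - 1
    j = n - (1 << k)
    return (j / (1 << k), (j + 1) / (1 << k))

lengths = [interval(n)[1] - interval(n)[0] for n in range(1, 1024)]
assert lengths[-1] == 1 / 512       # P(X_n = 1) -> 0, so X_n ->P 0
omega = 0.3
hits = [n for n in range(1, 1024) if interval(n)[0] <= omega < interval(n)[1]]
print(len(hits))                    # one hit per dyadic level: omega keeps being covered
```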

Theorem 1.16. X_n →a.s. X ⇒ X_n →P X.

Proof. Since X_n →a.s. X, by Egorov's theorem X_n →a.u. X (almost uniformly). Then for any δ > 0 there exists a set B_δ ∈ F with P(B_δ) < δ such that X_n → X uniformly on Ω \ B_δ. So for any ε > 0, there exists an N(ε, δ) ∈ N such that, for all n ≥ N,

|X_n(ω) − X(ω)| < ε for all ω ∈ Ω \ B_δ

In other words,

for all ε, δ > 0 there exists N(ε, δ) such that P(|X_n − X| ≥ ε) < δ for n ≥ N

We can send δ → 0 by letting N → ∞, which gives us the definition of convergence in P.

Theorem 1.17. X_n →Lp X for some p > 0 ⇒ X_n →P X.

Proof. By an application of Markov's inequality using ψ(x) = |x|^k,

a^k P(|X| ≥ a) ≤ E|X|^k

Take k = p. Apply to the r.v. X_n − X, with A = [ε, ∞) so that a = ε:

P(|X_n − X| ≥ ε) ≤ E|X_n − X|^p / ε^p → 0

Theorem 1.18. If X_n →a.s. X and X_n →L1 Y, then X = Y a.s.

Proof. Since X_n →L1 Y, there is a subsequence {X_{n_k}} that converges a.s. to Y (see the section on Borel-Cantelli lemmas for proof). Since X_{n_k} →a.s. X as well, X = Y a.s.

Example. L^p convergence does not imply a.s. convergence, even if the sequence is bounded.

Example. a.s. convergence implies L^p convergence if the sequence is bounded.

Theorem 1.19. (L^2 weak law) Suppose X_1, X_2, ..., X_n are r.v.'s such that

1. EX_i = µ for all i ≥ 1
2. E(X_i^2) ≤ c for all i ≥ 1
3. E(X_i X_j) = EX_i EX_j for all i ≠ j

Then

(1/n) Σ_{i=1}^n X_i →L2 µ

Proof. We want to show that E{[(1/n) Σ X_i − µ]^2} → 0 as n → ∞. Note that E((1/n) Σ X_i) = µ. Then we have

E{[(1/n) Σ X_i − µ]^2} = Var((1/n) Σ X_i)
= (1/n^2) Var(Σ X_i)
= (1/n^2) [Σ_i Var(X_i) + Σ_{i≠j} (E(X_i X_j) − EX_i EX_j)]

By our covariance assumption, the rightmost term is zero. Also, note that

Var(X_i) = E(X_i^2) − (EX_i)^2 ≤ c + µ^2

So our final expression reduces to

E{[(1/n) Σ X_i − µ]^2} ≤ n(c + µ^2)/n^2 → 0 as n → ∞

Theorem 1.20. (Bernstein's theorem) Suppose f : [0, 1] → R is continuous. Then there exist polynomials {f_n}_{n≥1}, with f_n of degree n, such that

sup_{x ∈ [0,1]} |f_n(x) − f(x)| → 0 as n → ∞

Proof. Let x_1, x_2, ..., x_n, ... be independent r.v.'s with P(x_i = 1) = x and P(x_i = 0) = 1 − x, where x ∈ [0, 1] is fixed. Consider the Bernstein polynomials:

f_n(x) = Σ_{k=0}^n (n choose k) x^k (1 − x)^{n−k} f(k/n)

Note that, if S_n = x_1 + ... + x_n is the sum of the first n of the x_i's, then

f_n(x) = E[f(S_n/n)]

We want to show that lim_n sup_{x ∈ [0,1]} |f_n(x) − f(x)| ≤ ε for all ε > 0, so consider

the quantity |f_n(x) − f(x)|:

|f_n(x) − f(x)| = |E[f(S_n/n)] − f(x)|
≤ E|f(S_n/n) − f(x)|
= E[|f(S_n/n) − f(x)| 1_{|S_n/n − x| < δ}] + E[|f(S_n/n) − f(x)| 1_{|S_n/n − x| ≥ δ}]

for fixed δ > 0. Now note two facts:

1. sup_{x ∈ [0,1]} |f(x)| = M < ∞ due to continuity of f and compactness of [0, 1].
2. f is uniformly continuous (continuous on a compact set): for any ε > 0 there exists δ > 0 such that if |s_n/n − x| < δ, then |f(s_n/n) − f(x)| < ε.

Therefore we have that

E[|f(S_n/n) − f(x)| 1_{|S_n/n − x| < δ}] < ε

And that

E[|f(S_n/n) − f(x)| 1_{|S_n/n − x| ≥ δ}]
≤ E[|f(S_n/n)| 1_{|S_n/n − x| ≥ δ}] + E[|f(x)| 1_{|S_n/n − x| ≥ δ}]   (triangle inequality)
≤ E[M 1_{|S_n/n − x| ≥ δ}] + E[M 1_{|S_n/n − x| ≥ δ}]
= 2M P(|S_n/n − x| ≥ δ)
≤ 2M Var(S_n/n) / δ^2   (Markov's inequality)
= 2M x(1 − x) / (nδ^2)
≤ 2M / (4nδ^2) = M / (2nδ^2)

using x(1 − x) ≤ 1/4. And finally, putting the two pieces together, we have

|f_n(x) − f(x)| ≤ ε + M/(2nδ^2) for all x ∈ [0, 1]

Then since this is true for all n and the RHS is monotone decreasing in n with limit ε,

lim_n sup_{x ∈ [0,1]} |f_n(x) − f(x)| ≤ ε

and since ε > 0 was arbitrary, the limit is 0. Note that the use of Markov's inequality in the above step is what frees the probability P(|S_n/n − x| ≥ δ) from dependence on x.

Theorem 1.21. (Skorohod's representation theorem) If X_n →d X, then there are r.v.'s Y_n and Y defined on some joint probability triple, with F_{X_n} = F_{Y_n} for all n and F_X = F_Y, such that Y_n →a.s. Y.

Proof. Let ([0, 1], B([0, 1]), m) be the uniform probability space, where m is the uniform probability measure (Lebesgue measure). Define the random variables Y, Y_n : ([0, 1], B([0, 1])) → R by

Y(y) = inf{x : F_X(x) ≥ y}, Y_n(y) = inf{x : F_{X_n}(x) ≥ y}

From Y and Y_n, obtain the induced measures µ_Y and µ_{Y_n} on (R, B(R)) by

µ_Y(B) = m({y ∈ [0, 1] : Y(y) ∈ B})
µ_{Y_n}(B) = m({y ∈ [0, 1] : Y_n(y) ∈ B})

µ_Y and µ_{Y_n} are probability measures since m is a probability measure. To show that F_X = F_Y (the argument for F_{X_n} = F_{Y_n} is exactly analogous), note:

F_Y(x) = µ_Y((−∞, x]) = m({y : Y(y) ≤ x}) = m({y : y ≤ F_X(x)}) = F_X(x)

The process to show that Y_n →a.s. Y is the following. Since we used the probability integral transform to get corresponding random variables on the uniform probability space, we show a.s. convergence on that space. So we fix c ∈ [0, 1] and show that

lim inf_n Y_n(c) ≥ Y(c) and lim sup_n Y_n(c) ≤ Y(c)

First note that if Y is not continuous at c: since Y is non-decreasing, there are only countably many discontinuities, so the set of such c has measure zero, and we may redefine Y(c) = Y_n(c) = 0 there to obtain a.s. convergence at those points without affecting the distribution functions. So assume that Y is continuous at c.

1. lim inf_n Y_n(c) ≥ Y(c): Fix ε > 0. Choose α, a continuity point of F_Y, with Y(c) − ε < α < Y(c) (possible since continuity points are dense). Then F_Y(α) < c by definition of Y. Then since F_{Y_n}(α) → F_Y(α), for large enough n we have F_{Y_n}(α) < c as well.
By monotonicity, Y(c) − ε < α ≤ Y_n(c) for large enough n. Thus lim inf_n Y_n(c) ≥ Y(c) − ε, and since ε > 0 was arbitrary, lim inf_n Y_n(c) ≥ Y(c).

2. lim sup_n Y_n(c) ≤ Y(c): Fix ε > 0. Let d ∈ [0, 1] be such that c < d and Y is continuous at d. Then there exists α, a continuity point of F_Y, such that Y(d) < α < Y(d) + ε. By definition of Y, we have c < d ≤ F_Y(Y(d)) ≤ F_Y(α). Since F_{Y_n}(α) → F_Y(α), for large enough n we have c ≤ F_{Y_n}(α) also. Thus by monotonicity, Y_n(c) ≤ α ≤ Y(d) + ε for large enough n. Letting d ↓ c and using continuity of Y at c, lim sup_n Y_n(c) ≤ Y(c).

1.5 Independence

Definition. (σ-field generated by a family of r.v.'s) The σ-field generated by X_1, ..., X_n is the set

σ(X_1, ..., X_n) = σ(∪_{i=1}^n σ(X_i))

Or, equivalently,

σ(X_1, ..., X_n) = {{ω : (X_1(ω), ..., X_n(ω)) ∈ A} : A ∈ B(R^n)}

Remark. σ(X_1, ..., X_n) is NOT the same as {∩_{i=1}^n X_i^{-1}(B_i) : B_i ∈ B(R)}. This is because B(R^n) = B(R) ⊗ ... ⊗ B(R), where

B(R) ⊗ ... ⊗ B(R) = σ({B_1 × ... × B_n : B_i ∈ B(R)})

by the definition of product spaces and their associated σ-algebras, so B(R^n) is larger than the collection of n-fold Cartesian products of sets in B(R).

Lemma 1.22. σ(X_1, ..., X_n) is precisely the set σ({∩_{i=1}^n X_i^{-1}(B_i) : B_i ∈ B(R)}).

Proof. Define the following sets:

S_1 = ∪_{i=1}^n σ(X_i), S_2 = {∩_{i=1}^n X_i^{-1}(B_i) : B_i ∈ B(R)}

We show that σ(S_1) = σ(S_2).

1. S_2 ⊇ S_1: Let S be an arbitrary element of S_1. Then S ∈ σ(X_k) for some k, so S = X_k^{-1}(B) for some B ∈ B(R). Therefore S can be written as X_k^{-1}(B) ∩ (∩_{i≠k} X_i^{-1}(R)) = X_k^{-1}(B) ∩ Ω, so S ∈ S_2.

2. S_2 ⊆ σ(S_1): Note that every element of S_2 is a finite intersection of elements of S_1. Since σ(S_1) is a σ-field, it includes all countable (in particular finite) intersections of elements of S_1.

Therefore, since S_2 is contained inside σ(S_1), σ(S_2) ⊆ σ(S_1). But since S_2 ⊇ S_1, also σ(S_2) ⊇ σ(S_1). Therefore σ(S_2) = σ(S_1).

Corollary 1.23. σ(X_1, ..., X_n) is precisely the set σ({∩_{i=1}^n A_i : A_i ∈ σ(X_i)}).

Definition. (Independence of two objects)

1. Events: A, B are independent if P(A ∩ B) = P(A) P(B).
2. σ-fields: B_1, B_2 are independent σ-fields if (1) holds for all A ∈ B_1, B ∈ B_2.
3. Random variables: X, Y are independent random variables if (2) holds for their corresponding generated σ-fields σ(X) and σ(Y).

Remark. If two random variables are independent, then they are defined on the same probability space.

Theorem 1.24. (Equivalent characterizations of independence) Let X : Ω → (S_1, S_1) and Y : Ω → (S_2, S_2). TFAE:

1. X, Y are independent.
2. For all A ∈ S_1 and B ∈ S_2, P(X ∈ A, Y ∈ B) = P(X ∈ A) P(Y ∈ B).
3. Suppose A_1 ⊆ S_1, A_2 ⊆ S_2 are π-classes which generate their corresponding σ-fields. Then for all A ∈ A_1, B ∈ A_2, P(X ∈ A, Y ∈ B) = P(X ∈ A) P(Y ∈ B).
4. For all bounded measurable functions h_1 : S_1 → R and h_2 : S_2 → R, E[h_1(X) h_2(Y)] = E[h_1(X)] E[h_2(Y)].

Remark. Note that if X, Y are independent, then the last characterization (4) also holds for (possibly unbounded) h_1, h_2 with E[h_1^2(X)] < ∞ and E[h_2^2(Y)] < ∞.

Proof. We prove (1) ⇒ (2), that (2) and (3) are equivalent, and that (2) and (4) are equivalent.

(1) implies (2): Consider any A_1 ∈ S_1, A_2 ∈ S_2. Note that

P(X ∈ A_1) = P({ω : X(ω) ∈ A_1})
P(Y ∈ A_2) = P({ω : Y(ω) ∈ A_2})
P(X ∈ A_1, Y ∈ A_2) = P({ω : X(ω) ∈ A_1} ∩ {ω : Y(ω) ∈ A_2})

By assumption, X and Y are independent, so σ(X) and σ(Y) satisfy the condition that for all A_1 ∈ σ(X), A_2 ∈ σ(Y), P(A_1 ∩ A_2) = P(A_1) P(A_2). Changing notation and using the definition of σ(X) and σ(Y), this is equivalent to the statement that, for any B_1 ∈ S_1, B_2 ∈ S_2,

P({ω : X(ω) ∈ B_1} ∩ {ω : Y(ω) ∈ B_2}) = P({ω : X(ω) ∈ B_1}) P({ω : Y(ω) ∈ B_2})

Letting B_1 = A_1 and B_2 = A_2 shows that P(X ∈ A_1, Y ∈ A_2) = P(X ∈ A_1) P(Y ∈ A_2).

(2) implies (3): Trivial.

(3) implies (2): By the π-λ theorem. Define:

L_1 = {A ∈ S_1 : P(X ∈ A, Y ∈ B) = P(X ∈ A) P(Y ∈ B) for all B ∈ A_2}

Note that L_1 ⊇ A_1. If we can show that L_1 is a λ-class, then since A_1 is a π-class, we will have shown that L_1 ⊇ σ(A_1) = S_1; since also L_1 ⊆ S_1, this gives L_1 = S_1. To show that L_1 is a λ-class:

1. L_1 contains the whole space S_1: Note that P(X ∈ S_1) = P(X^{-1}(S_1)) = P(Ω) = 1. Therefore, since X, Y are defined on the same space, P(X ∈ S_1, Y ∈ B) = P(Ω ∩ Y^{-1}(B)) = P(Y ∈ B).

2. L_1 is closed under proper differences: Let A_1, A_2 ∈ L_1 with A_2 ⊆ A_1. Note that A_1 \ A_2 ∈ S_1 since S_1 is a σ-field. Note:

P(X ∈ A_1 \ A_2, Y ∈ B) = P([X^{-1}(A_1) \ X^{-1}(A_2)] ∩ Y^{-1}(B))
= P([X^{-1}(A_1) ∩ Y^{-1}(B)] \ [X^{-1}(A_2) ∩ Y^{-1}(B)])
= P(X^{-1}(A_1) ∩ Y^{-1}(B)) − P(X^{-1}(A_2) ∩ Y^{-1}(B))
= P(X ∈ A_1) P(Y ∈ B) − P(X ∈ A_2) P(Y ∈ B)
= P(X ∈ A_1 \ A_2) P(Y ∈ B)

where in the above steps we have used the facts that X, Y are measurable functions defined on the same space Ω, and that P is a measure on a σ-field.

3. L_1 is closed under monotone increasing limits: Let {A_n}_{n=1}^∞ be an increasing sequence in L_1. Since S_1 is a σ-field, ∪_{n=1}^∞ A_n ∈ S_1, and we can rewrite ∪_{n=1}^∞ A_n as a disjoint union ∪_{n=1}^∞ G_n, where

G_n = A_n \ A_{n−1} for n ≥ 2, G_1 = A_1

Then note:

P(X ∈ ∪_n A_n, Y ∈ B) = P(X ∈ ∪_n G_n, Y ∈ B)
= P(X^{-1}(∪_n G_n) ∩ Y^{-1}(B))
= P(∪_n [X^{-1}(G_n) ∩ Y^{-1}(B)])
= Σ_{n=1}^∞ P(X^{-1}(G_n) ∩ Y^{-1}(B))
= Σ_{n=1}^∞ P(X ∈ G_n) P(Y ∈ B)
= P(X ∈ ∪_n A_n) P(Y ∈ B)

where in going from the fourth to the fifth line we have used the fact that L_1 is closed under proper differences, and each G_n is a proper difference of sets in L_1.

Thus we have shown that L_1 is a λ-class and that, therefore, L_1 = S_1. Next, define:

L_2 = {B ∈ S_2 : P(X ∈ A, Y ∈ B) = P(X ∈ A) P(Y ∈ B) for all A ∈ S_1}

Note that, by the result shown above, L_2 ⊇ A_2. By a similar proof to the one above, it can be shown that L_2 is a λ-class. Then since A_2 is a π-class, L_2 ⊇ σ(A_2) = S_2, and hence L_2 = S_2, as desired.

(4) implies (2): Fix A ∈ S_1 and B ∈ S_2, and let h_1(X) = 1_A(X), h_2(Y) = 1_B(Y).

(2) implies (4): Independence of X and Y implies that (4) holds at least for indicator functions h_1 = 1_A and h_2 = 1_B. This can be extended to simple functions by linearity, and then to bounded measurable functions using approximation by simple functions.

Definition. (Pairwise independence) Let B_1, ..., B_n be σ-fields such that, for all i ≠ j, B_i and B_j are independent. Then we say {B_i}_{1≤i≤n} is pairwise independent.

Definition. (Independence of finite collections) Three characterizations:

1. Suppose B_1, ..., B_n ⊆ F are σ-fields. Then {B_i}_{1≤i≤n} is independent if, for any A_1 ∈ B_1, ..., A_n ∈ B_n,

P(∩_{i=1}^n A_i) = Π_{i=1}^n P(A_i)

2. Suppose A_1, ..., A_n are arbitrary subcollections of F. We say A_1, ..., A_n are independent if, for any I ⊆ {1, 2, ..., n} and A_i ∈ A_i for i ∈ I,

P(∩_{i∈I} A_i) = Π_{i∈I} P(A_i)

3. Suppose A_1, ..., A_n are arbitrary subcollections of F. Define Ā_i = A_i ∪ {Ω}. Then A_1, ..., A_n are independent if, for all A_i ∈ Ā_i,

P(∩_{i=1}^n A_i) = Π_{i=1}^n P(A_i)

Remark. What exactly is the difference between the second and third characterizations? In the second, we do not force the intersection to run over all n indices for each I. In the third characterization, we always force the intersection to be over all n. But since we can always set A_j = Ω for any j = 1, ..., n in the third characterization, the two characterizations are equivalent.

Definition. (Independence of arbitrary collections) Suppose {B_α : α ∈ J}, where B_α ⊆ F and J is an arbitrary index set. We say that the B_α's are independent if any finite subcollection is independent. That is, for n ≥ 2 and distinct α_1, α_2, ..., α_n ∈ J, {B_{α_i}}_{1≤i≤n} are independent.

Theorem 1.25. Suppose {B_i}_{1≤i≤n} are sub-σ-fields of F and A_i ⊆ B_i, where A_i is a π-class that generates B_i. If the {A_i}_{1≤i≤n} are independent, then the {B_i}_{1≤i≤n} are independent.

Theorem 1.26. If X_1, ..., X_n are independent, then (X_1, ..., X_j) is independent of (X_{j+1}, ..., X_n) for 1 ≤ j < n.

Proof. By a previous result,

σ(X_1, ..., X_n) = σ({∩_{i=1}^n X_i^{-1}(B_i) : B_i ∈ B(R)})

Note that {∩_{i=1}^n X_i^{-1}(B_i) : B_i ∈ B(R)} is a π-class that generates σ(X_1, ..., X_n), and apply this setup (together with Theorem 1.25) to X_1, ..., X_j and X_{j+1}, ..., X_n.
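Characterization (4) of Theorem 1.24 can be checked by simulation (a sketch; the choice of uniform samples and of h_1 = sin, h_2 = cos is ours): for independently sampled X and Y, the empirical E[h_1(X) h_2(Y)] should be close to E[h_1(X)] E[h_2(Y)].

```python
# Independence via factorization of expectations: for independently drawn
# samples xs, ys and bounded h1, h2, E[h1(X) h2(Y)] ~ E[h1(X)] E[h2(Y)].
import math
import random

random.seed(1)
n = 100_000
xs = [random.random() for _ in range(n)]
ys = [random.random() for _ in range(n)]  # drawn independently of xs
h1, h2 = math.sin, math.cos
lhs = sum(h1(x) * h2(y) for x, y in zip(xs, ys)) / n
rhs = (sum(map(h1, xs)) / n) * (sum(map(h2, ys)) / n)
print(abs(lhs - rhs) < 0.01)
```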

2 Zero-one laws and stopping times

2.1 Borel-Cantelli lemmas

Definition. (lim sup and lim inf of a sequence of sets) Let {A_n} be a sequence of subsets of Ω. Then

1. lim sup A_n = lim_m ∪_{n=m}^∞ A_n = {ω that are in infinitely many A_n}
2. lim inf A_n = lim_m ∩_{n=m}^∞ A_n = {ω that are in all but finitely many A_n}

Remark. Why are these sets given the names lim sup and lim inf? Because:

lim sup_n 1_{A_n} = 1_{lim sup A_n} and lim inf_n 1_{A_n} = 1_{lim inf A_n}

Definition. (Infinitely often) Let {A_n} be a sequence of subsets of Ω. Then lim sup A_n = {ω : ω ∈ A_n i.o.}, where i.o. stands for infinitely often.

Lemma 2.1. (Very weak lemmas)

1. P(A_n i.o.) ≥ lim sup_n P(A_n)
2. P(A_n occurs ultimately) ≤ lim inf_n P(A_n)

Proof. We prove the first; the proof of the second is analogous. Define B_m = ∪_{n=m}^∞ A_n. This is a decreasing sequence with limit {A_n i.o.}, and P(B_m) ≥ sup_{n≥m} P(A_n) ≥ lim sup_n P(A_n); now apply continuity of P.

Lemma 2.2. (Borel-Cantelli 1) Let {A_n} be a sequence of elements of F. If Σ_{n=1}^∞ P(A_n) < ∞, then P(A_n i.o.) = 0.

Proof. By definition of lim sup, we have lim sup A_n ⊆ ∪_{m=n}^∞ A_m for every n. Therefore:

P(lim sup A_n) ≤ P(∪_{m=n}^∞ A_m) ≤ Σ_{m=n}^∞ P(A_m) for any n

Since Σ_{n=1}^∞ P(A_n) < ∞, its sequence of partial sums converges. Therefore Σ_{m=n}^∞ P(A_m) → 0 as n → ∞.

Lemma 2.3. (Borel-Cantelli 2) Let {A_n} be an independent sequence of elements of F. If Σ_{n=1}^∞ P(A_n) = ∞, then P(A_n i.o.) = 1.

Proof. Let M < N < ∞. By independence and the inequality 1 − x ≤ e^{−x}, we have:

P(∩_{n=M}^N A_n^c) = ∏_{n=M}^N (1 − P(A_n)) ≤ ∏_{n=M}^N exp(−P(A_n)) = exp(−Σ_{n=M}^N P(A_n)) → 0 as N → ∞

Therefore P(∩_{n=M}^∞ A_n^c) = 0, so P(∪_{n=M}^∞ A_n) = 1. Since this holds for arbitrary M, and ∪_{n=M}^∞ A_n decreases (in M) to lim sup A_n, we get P(lim sup A_n) = 1 also.

Remark. If {A_n} is an independent sequence, then applying the contrapositive of Borel-Cantelli 2 shows that the converse of Borel-Cantelli 1 holds. To see that it does not hold in general, let the measure space be ([0,1], B([0,1]), m) where m is Lebesgue measure, and consider the events A_n = [0, 1/n] for n = 1, 2, ....: here Σ P(A_n) = Σ 1/n = ∞, yet lim sup A_n = {0}, which has probability 0.

How can we use the Borel-Cantelli lemmas to show convergence a.s.?

Lemma 2.4. X_n → X a.s. if and only if for every ɛ > 0,

1. lim_{m→∞} P(|X_n − X| ≤ ɛ for all n ≥ m) = 1
2. lim_{m→∞} P(|X_n − X| > ɛ for some n ≥ m) = 0

Proof. The two conditions are complements of one another, so we prove the equivalence with the first.

(⇒) Suppose X_n → X a.s. Fix ɛ > 0. Define Ω_0 = {ω : X_n(ω) → X(ω)}. By assumption, for every ω_0 ∈ Ω_0 there exists N(ω_0, ɛ) ∈ N such that

n ≥ N(ω_0, ɛ) ⇒ |X_n(ω_0) − X(ω_0)| ≤ ɛ

Thus every ω_0 ∈ Ω_0 satisfies ω_0 ∈ ∩_{n=N(ω_0,ɛ)}^∞ {ω : |X_n(ω) − X(ω)| ≤ ɛ}. Therefore,

Ω_0 ⊂ ∪_{N=1}^∞ ∩_{n=N}^∞ {ω : |X_n(ω) − X(ω)| ≤ ɛ}

The sets C_N = ∩_{n=N}^∞ {|X_n − X| ≤ ɛ} increase in N to ∪_{N=1}^∞ C_N ⊇ Ω_0, which has probability 1. By continuity of P, P(C_N) → 1 as N → ∞, which is exactly condition 1.

(⇐) Define A(ɛ) = ∪_{N=1}^∞ ∩_{n=N}^∞ {|X_n − X| ≤ ɛ}. By assumption (condition 1 and continuity of P), this set has probability 1 for all ɛ > 0. For any ω ∈ A(ɛ), there exists N(ω, ɛ) such that n ≥ N(ω, ɛ) ⇒ |X_n(ω) − X(ω)| ≤ ɛ.

Now take ɛ = 1/k for k = 1, 2, .... For any ω_0 ∈ ∩_{k=1}^∞ A(1/k) we can select the sequence

N(ω_0, 1), N(ω_0, 1/2), N(ω_0, 1/3), ...

and along it |X_n(ω_0) − X(ω_0)| ≤ 1/k eventually, for every k; that is, X_n(ω_0) → X(ω_0). Since P(A(1/k)) = 1 for all k, the countable intersection ∩_{k=1}^∞ A(1/k) also has probability 1, and we are done.

Lemma 2.5. X_n → X a.s. if and only if for all ɛ > 0, P(|X_n − X| > ɛ i.o.) = 0.

Proof. Note that:

{|X_n − X| > ɛ i.o.} = ∩_{m=1}^∞ ∪_{n=m}^∞ {|X_n − X| > ɛ}

and use the second iff condition in the previous lemma.

Theorem 2.6. Suppose X_1, X_2, ... are iid exp(1), so that P(X_i ≤ x) = 1 − e^{−x} and f(x) = e^{−x} for x ≥ 0. Then:

1. X_n / log n → 0 in probability
2. lim sup_n X_n / log n = 1 a.s.
3. Let M_n = max{X_1, ..., X_n}. Then M_n / log n → 1 a.s.

Proof. In order.

Proof of (1). Note simply that:

P(|X_n / log n − 0| > ɛ) = P(X_n > ɛ log n) = exp(−ɛ log n) = 1/n^ɛ → 0

Proof of (2). We show lim sup_n X_n/log n ≤ 1 a.s. and lim sup_n X_n/log n ≥ 1 a.s. This is equivalent to showing that, for any ɛ > 0,

P(X_n/log n ≥ 1 + ɛ i.o.) = 0 and P(X_n/log n ≥ 1 − ɛ i.o.) = 1

To show the first, observe that:

P(X_n ≥ (1 + ɛ) log n) = 1/n^{1+ɛ}

which is summable, so the result follows from Borel-Cantelli 1. To show the second, note that

P(X_n ≥ (1 − ɛ) log n) = 1/n^{1−ɛ}

is not summable, and the X_n are independent, so the result follows from Borel-Cantelli 2.

Proof of (3). We show the result by showing that lim inf M_n/log n ≥ 1 and lim sup M_n/log n ≤ 1.

For the first, we show P(M_n/log n ≤ 1 − ɛ i.o.) = 0:

P(M_n/log n ≤ 1 − ɛ) = P(X_1/log n ≤ 1 − ɛ)^n = (1 − e^{−(1−ɛ) log n})^n = (1 − 1/n^{1−ɛ})^n ≤ exp(−n/n^{1−ɛ}) = e^{−n^ɛ}

which is summable, so the result follows by Borel-Cantelli 1.

The second follows from part (2) together with a deterministic fact: let {x_n}_{n≥1} be a sequence of non-negative real numbers, and suppose {b_n}_{n≥1} is another sequence which increases to ∞. Then

lim sup_n x_n/b_n = α ⇒ lim sup_n m_n/b_n = α

where m_n = max{x_1, ..., x_n}.

To prove this, fix j ≥ 1 and note that, since b_n is increasing to ∞ and max{x_1, ..., x_{j−1}}/b_n → 0,

lim sup_n max{x_1, ..., x_n}/b_n = lim sup_n max{x_j, x_{j+1}, ..., x_n}/b_n ≤ lim sup_n max_{j≤k≤n} x_k/b_k ≤ sup_{k≥j} x_k/b_k

Letting j → ∞, we have

lim sup_n max{x_1, ..., x_n}/b_n ≤ lim_{j→∞} sup_{k≥j} x_k/b_k = lim sup_n x_n/b_n

Observing that lim sup m_n/b_n ≥ lim sup x_n/b_n (since m_n ≥ x_n) completes the proof of the fact. Applying it with x_n = X_n, b_n = log n and α = 1 (from part (2)) gives lim sup M_n/log n = 1 a.s.

Remark. The point of this whole example is to show that convergence in probability does not necessarily imply convergence a.s.: X_n/log n → 0 in probability, yet lim sup X_n/log n = 1 a.s.

Lemma 2.7. Let {X_n}_{n=1}^∞ be a sequence of measurable functions (random variables). If, for any ɛ > 0, P(|X_n − X| > ɛ i.o.) = 0, then X_n → X in probability.

Proof. Note that P(|X_n − X| > ɛ i.o.) = P(lim sup {|X_n − X| > ɛ}). By the first of the "very weak lemmas",

lim sup_n P(|X_n − X| > ɛ) ≤ P(lim sup {|X_n − X| > ɛ}) = 0

so P(|X_n − X| > ɛ) → 0.

Theorem 2.8. X_n → X in probability if and only if every subsequence {X_{n_j}}_{j≥1} has a further subsequence {X_{n_{j_k}}}_{k≥1} that converges a.s. to X.

To prove this theorem, we first prove a deterministic lemma:

Lemma 2.9. Let y_n be a sequence of elements of a topological space. If every subsequence y_{n_m} has a further subsequence y_{n_{m_k}} that converges to y, then y_n converges to y also.

Proof. Assume, on the contrary, that y_n does not converge to y. Then there exist

1. an open set G containing the limit y, and
2. a subsequence y_{n_m} such that y_{n_m} ∉ G for all m.

But then obviously no subsequence of y_{n_m} can converge to y, a contradiction.

We now prove the main result.

Proof. The proof is a simple application of the first Borel-Cantelli lemma.

(⇒) Assume X_n → X in probability. Fix a subsequence {X_{n_j}}_{j≥1}. Clearly X_{n_j} → X in probability as well, so that for any ɛ > 0,

P(|X_{n_j} − X| ≥ ɛ) → 0 as j → ∞

Let {ɛ_k}_{k≥1} be a positive sequence decreasing to 0. Since the above holds for any ɛ > 0, we can sequentially find a subsequence n_{j_1} < n_{j_2} < ... such that

P(|X_{n_{j_k}} − X| ≥ ɛ_k) ≤ 2^{−k}

Therefore we have

Σ_{k=1}^∞ P(|X_{n_{j_k}} − X| ≥ ɛ_k) ≤ Σ_{k=1}^∞ 2^{−k} < ∞

Thus by the first Borel-Cantelli lemma, P(|X_{n_{j_k}} − X| ≥ ɛ_k i.o.) = 0, and since ɛ_k ↓ 0 this gives X_{n_{j_k}} → X a.s.

(⇐) Assume that every subsequence {X_{n_j}}_{j≥1} has a further subsequence {X_{n_{j_k}}}_{k≥1} such that X_{n_{j_k}} → X a.s. as k → ∞. Fix ɛ > 0 and define y_n = P(|X_n − X| > ɛ). By assumption, and the fact that convergence a.s. implies convergence in probability, every subsequence {y_{n_j}} has a further subsequence with

y_{n_{j_k}} = P(|X_{n_{j_k}} − X| > ɛ) → 0 as k → ∞

Since every subsequence has a further subsequence convergent to 0, our deterministic lemma shows y_n itself converges to 0. Thus X_n → X in probability.

Theorem 2.10. (Dominated convergence theorem) If X_n → X in probability and |X_n| ≤ Y with E(Y) < ∞, then

1. EX_n → EX
2. E(|X_n − X|) → 0

Proof. Suppose X_n → X in probability. Given any subsequence {n_j}, there exists a further subsequence {n_{j_k}} such that X_{n_{j_k}} → X a.s. By the usual DCT, EX_{n_{j_k}} → EX. Define the sequence of real numbers y_n = EX_n. Since, given any subsequence {y_{n_j}}_{j≥1}, there exists a further subsequence {y_{n_{j_k}}}_{k≥1} converging to EX, our deterministic lemma gives y_n = EX_n → EX. (The second claim follows from the same argument applied to |X_n − X|, which is dominated by Y + |X|.)

Remark. Note that this result only requires convergence in probability, not convergence a.s. as is normally required in the statement of DCT.

Theorem 2.11. (Miscellaneous note) Let {X_n}_{n≥0} be a sequence of random variables. Then:

∪_{L=1}^∞ ∩_n {X_n > −L} = {inf_n X_n > −∞}

Proof. If some ω is in the LHS, then it is in ∩_n {X_n > −L_0} for some L_0 ∈ N. Thus the sequence {X_n(ω)} is bounded below by −L_0 and so ω is in the RHS. Conversely, if inf_n X_n(ω) > −∞, then the sequence is bounded below by some −L_0 with L_0 ∈ N, so ω is in the LHS.

2.2 Filtrations and stopping times

Consider an infinite sequence of random variables X_1, X_2, ... defined on a common probability space (Ω, F, P). The σ-field generated by the first n random variables is:

F_n = {B = {ω : (X_1(ω), ..., X_n(ω)) ∈ A} : A ∈ B(R^n)}

This is a σ-field since the X_i's are measurable and taking preimages preserves complements, intersections, and unions. Since sets of the form below generate the Borel σ-field on R^n and all X_i's are measurable, we can also write:

F_n = σ({ω : X_1(ω) ≤ x_1, ..., X_n(ω) ≤ x_n} : x_1, ..., x_n ∈ R)

From the first formulation, it is easy to see that if we take A = A_1 × R with A_1 ∈ B(R^{n−1}), then F_{n−1} ⊂ F_n.

Definition. (Filtration) A collection of sub-σ-fields {F_n}_{n≥1} is a filtration if it satisfies F_1 ⊂ F_2 ⊂ F_3 ⊂ ...

Remark. In general, we want to think of these sub-σ-fields as partial information. A σ-algebra defines the set of events that can be measured; a filtration is therefore often used to represent the change in the set of measurable events, through gain or loss of information. Another interpretation of a filtration involves time. Under a filtration, we can view the events in F_t as the questions that can be answered at time t, which naturally carries the increasing structure of the filtration.

Lemma 2.12. Fix n ≥ 1. Under the usual filtration, a random variable Y is F_n-measurable if and only if:

Y = g(X_1, ..., X_n)

where g : R^n → R is a deterministic measurable function.

Proof. (Sketch) Consider an arbitrary F_n-measurable indicator function 1_B(ω). By definition, it has the form:

1_B(ω) = 1_A(X_1(ω), ..., X_n(ω)) = g(X_1, ..., X_n), A ∈ B(R^n)

Therefore simple functions also have this general form, and thus (taking monotone limits) so do general measurable functions.

Definition. (Stopping time) A random variable τ : Ω → {1, 2, ...} ∪ {∞} is a stopping time with respect to the filtration {F_n}_{n≥1} if, for any n,

{ω : τ(ω) = n} ∈ F_n

Corollary 2.13. The above condition is equivalent to the condition:

{ω : τ(ω) ≤ n} ∈ F_n for all n

Proof. Fix n ≥ 1.

Assume the first definition. We have {τ ≤ n} = ∪_{i=1}^n {τ = i}. Each event in the union is in F_i ⊂ F_n since the probability space is filtered, so the entire union is in F_n.

Assume the second definition. Then {τ ≤ n−1} ∈ F_{n−1} ⊂ F_n, again since the probability space is filtered. Since F_n is closed under set difference,

{τ = n} = {τ ≤ n} \ {τ ≤ n−1} ∈ F_n

Corollary 2.14. {τ = ∞} ∈ F_∞, where F_∞ = σ(∪_n F_n).

Proof. Trivial: {τ = ∞} is the complement of ∪_{n=1}^∞ {τ = n} ∈ F_∞.

Example. (Hitting time) Let a filtration be given by F_n = σ(X_1, ..., X_n). The hitting time of (0, ∞) defined below is a stopping time:

τ = inf{n ≥ 1 : X_n > 0}

To see this, fix n ≥ 1 and note that {τ = n} = ∩_{j=1}^{n−1} {X_j ≤ 0} ∩ {X_n > 0} ∈ F_n.

Example. The following is not a stopping time:

τ = sup{n ≥ 1 : X_n > 0}

To see this, fix n ≥ 1 and note that {τ = n} = {X_n > 0} ∩ (∩_{j=n+1}^∞ {X_j ≤ 0}), which depends on the future and need not be in F_n.

Example. (Warning) Even the most innocuous a.s.-finite stopping time can have a surprisingly infinite expected value. Consider a symmetric simple random walk with steps {X_n} and S_n = Σ_{i=1}^n X_i. Let T = inf{n ≥ 0 : S_n = 1}, i.e. the hitting time of 1. We show that ET = ∞.

Since X_1 is symmetric, with probability 1/2 we have T = 1. Also with probability 1/2 (the case S_1 = −1) we have T = 1 + T' + T'', where T', T'' are iid copies of T. To see why: if S_1 = −1, then to get up to +1 we need to first return to 0, which takes T' steps, and then go from 0 to +1, which takes T'' steps. Since stopping times are non-negative, we can take expectations and use linearity regardless of finiteness:

ET = (1/2)·1 + (1/2)(1 + ET' + ET'')

Using ET' = ET'' = ET, we obtain that ET must satisfy ET = 1 + ET, whose unique solution is ET = ∞.

Theorem 2.15. (Wald's lemma) Let X_1, X_2, ... be iid with E|X_i| < ∞ and E(X_i) = µ. Let τ be a stopping time with respect to the filtration F_n = σ(X_1, ..., X_n) such that E(τ) < ∞. Define the random sum S_τ = Σ_{i=1}^τ X_i. Then:

ES_τ = E(τ) E(X_1)

Proof. Observe that:

S_τ = Σ_{i=1}^∞ X_i 1_{i≤τ}

Consider {i ≤ τ} for fixed i. Because {i ≤ τ} = {τ ≤ i−1}^c and τ is a stopping time, {τ ≤ i−1} ∈ F_{i−1}, so {i ≤ τ} ∈ F_{i−1}. The X_i's are independent, so X_i is independent of F_{i−1}; in particular, X_i and 1_{i≤τ} are independent. Therefore:

E[Σ_{i=1}^∞ |X_i| 1_{i≤τ}] = Σ_{i=1}^∞ E[|X_i| 1_{i≤τ}]  (by MCT)
= Σ_{i=1}^∞ E|X_i| P(i ≤ τ)
= E|X_1| Σ_{i=1}^∞ P(i ≤ τ)
= E|X_1| E(τ) < ∞

Noting that Σ_{i=1}^n X_i 1_{i≤τ} is dominated by Σ_{i=1}^∞ |X_i| 1_{i≤τ}, we can now show the full result by dominated convergence:

E[Σ_{i=1}^∞ X_i 1_{i≤τ}] = Σ_{i=1}^∞ E[X_i 1_{i≤τ}]  (by DCT)
= Σ_{i=1}^∞ EX_i P(i ≤ τ)
= EX_1 E(τ)
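The hitting-time example above can be probed numerically. The hypothetical sketch below truncates each walk at a large cap so that every run terminates: nearly every run does hit +1 (T < ∞ a.s.), yet the sample mean of T is dragged up by rare, very long excursions, consistent with ET = ∞.

```python
import random

random.seed(2)

def hitting_time(cap):
    """Steps for a symmetric simple random walk to first hit +1 (capped)."""
    s, n = 0, 0
    while s < 1 and n < cap:
        s += random.choice((-1, 1))
        n += 1
    return n

cap = 10_000
samples = [hitting_time(cap) for _ in range(2_000)]
frac_hit = sum(t < cap for t in samples) / len(samples)
mean_t = sum(samples) / len(samples)
print(frac_hit, mean_t)
```

Almost all runs hit +1 well before the cap, yet the sample mean keeps growing as the cap is raised (roughly like the square root of the cap), reflecting the heavy tail P(T > n) ≈ c/√n.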

Example. (Usage of Wald's lemma) Suppose the X_i are iid with

X_i = +1 with probability 1/2, −1 with probability 1/2

Let τ = inf{n : S_n = 30 or S_n = −10}. This is easily checked to be a stopping time with E(τ) < ∞. Now note that ES_τ = 0 since EX_i = 0 (to show this more rigorously, use the stopped process S_{τ∧n} and apply DCT). Therefore:

0 = 30 · P(S_τ = 30) + (−10)(1 − P(S_τ = 30))

and solving gives us that P(S_τ = 30) = 1/4.

2.3 Kolmogorov zero-one law

Definition. (Tail-after-time-n σ-field) Let X_1, X_2, ... be random variables. Define F_n = σ(X_1, ..., X_n) as usual. Then:

1. The σ-field generated by X_1, X_2, ... is:

F_∞ = σ(∪_{j=1}^∞ σ(X_1, ..., X_j))

2. The tail-after-time-n σ-field is:

L_n = σ(X_{n+1}, X_{n+2}, ...) = σ(∪_{j=1}^∞ σ(X_{n+1}, ..., X_{n+j}))

Definition. (Tail σ-field) Observe that L_n defined above is a decreasing sequence in n. We call the limit the tail σ-field of X_1, X_2, ...:

T = ∩_{n=0}^∞ L_n = ∩_{n=1}^∞ σ(X_n, X_{n+1}, ...)

Example. The event {lim sup X_n ≥ 2} ∈ T. To see this, define the set

A_q = {ω : X_n(ω) ≥ 2 − 1/q i.o.}

i.e. the set of ω for which X_n exceeds 2 − 1/q infinitely often. Now note that {lim sup X_n ≥ 2} = ∩_{q=1}^∞ A_q. Therefore, to show that the limsup event is in the tail field, it is sufficient to show that A_q ∈ T for all q. Note that A_q can be written ∩_{m=1}^∞ ∪_{n≥m} {X_n ≥ 2 − 1/q}. Fix some k ≥ 1. Then:

A_q = ∩_{m=k}^∞ ∪_{n≥m} {X_n ≥ 2 − 1/q}

as well, since dropping finitely many terms of a decreasing intersection does not change it. Thus for any k ≥ 1, A_q belongs to L_k.

Example. The event {S_n converges} ∈ T. To see this, apply the Cauchy criterion of convergence: whether the tail sums Σ_{i=n}^m X_i become uniformly small does not depend on X_1, ..., X_k for any fixed k.

Example. The event {ω : lim sup S_n > 2} ∉ T; i.e. this event depends on the first few r.v.'s. To see this, let X_1 = 3, X_2 = −3, X_3 = 3, X_4 = −3, ... Then S_n alternates between 3 and 0, so lim sup S_n = 3 > 2 and the event occurs. However, if we change X_1 to −10 and leave the rest of the sequence alone, then S_n alternates between −10 and −13, so lim sup S_n = −10 < 2 and the event fails, even though the tail of the sequence is unchanged.

Theorem 2.16. (Kolmogorov zero-one law) Suppose X_1, X_2, ... are independent. Let A ∈ T, the tail field of X_1, X_2, .... Then either P(A) = 0 or P(A) = 1.

Remark. A more elegant proof than the one presented below can be found in the section on martingale convergence theorems.

Lemma 2.17. With the setup of the Kolmogorov zero-one law, fix n ≥ 1. Then σ(X_1, ..., X_n) and L_n are independent.

Proof. Recall the definition of L_n:

L_n = σ(∪_{j=1}^∞ σ(X_{n+1}, ..., X_{n+j}))

Note that ∪_{j=1}^∞ σ(X_{n+1}, ..., X_{n+j}) is a π-class which generates L_n. Therefore, we only need to show that P(A_1 ∩ A_2) = P(A_1) P(A_2) for

A_1 ∈ σ(X_1, ..., X_n) and A_2 ∈ ∪_{j=1}^∞ σ(X_{n+1}, ..., X_{n+j})

But then A_2 ∈ σ(X_{n+1}, ..., X_{n+k}) for some k ≥ 1, and σ(X_1, ..., X_n) is independent of σ(X_{n+1}, ..., X_{n+k}) by independence of the X_i's (Theorem 1.26).

Lemma 2.18. σ(X_1, X_2, ...) is independent of T.

Proof. Write σ(X_1, X_2, ...) = σ(∪_{j=1}^∞ σ(X_1, ..., X_j)). By the π-class argument, we only need to show independence of:

A_1 ∈ ∪_{j=1}^∞ σ(X_1, ..., X_j) and A_2 ∈ T

Observe that A_1 ∈ σ(X_1, ..., X_k) for some k ≥ 1. Also, by definition of T, A_2 ∈ σ(X_{k+1}, X_{k+2}, ...) = L_k for the same k. Invoke the previous lemma.

Proof. (of Kolmogorov zero-one law) Fix A ∈ T. Since σ(X_1, X_2, ...) ⊃ L_n for all n, we have σ(X_1, X_2, ...) ⊃ ∩_{n=1}^∞ L_n = T. Since A ∈ σ(X_1, X_2, ...) and A ∈ T also, A is independent of itself by the previous lemma. Therefore P(A) = P(A ∩ A) = P(A)², so P(A) ∈ {0, 1}.

3 Laws of Large Numbers

3.1 4th moment/L² strong laws, Glivenko-Cantelli

Theorem 3.1. (4th moment strong law) Let {X_i}_{i≥1} be iid with EX_i = µ and E(X_i^4) < ∞. Then S_n/n → µ a.s.

Lemma 3.2. If EX_i = 0, then E(S_n^4) ≤ 3n² E(X_i^4).

Proof. Note that:

E(S_n^4) = E[(Σ_i X_i)^4] = Σ_{i,j,k,l} E(X_i X_j X_k X_l)

Each term on the RHS is zero if at least one index differs from all the others; for example, E(X_1 X_2² X_3) = E(X_1) E(X_2²) E(X_3) = 0 by independence. Therefore,

E(S_n^4) = Σ_i E(X_i^4) + 6 Σ_{i<j} E(X_i² X_j²)
= n E(X_1^4) + 3n(n−1) E(X_1² X_2²)
≤ n E(X_1^4) + 3n(n−1) E(X_1^4) ≤ 3n² E(X_1^4)

where the middle step uses E(X_1² X_2²) = (EX_1²)² ≤ E(X_1^4) by the Cauchy-Schwarz inequality.

Proof. (of 4th moment strong law) Assume first EX_i = 0; the general case follows by applying the result to X_i − µ. We show S_n/n → 0 a.s., or equivalently, P(|S_n/n| > ɛ i.o.) = 0. By Markov's inequality applied to S_n^4 and Lemma 3.2:

P(|S_n/n| > ɛ) ≤ E(S_n^4)/(n^4 ɛ^4) ≤ 3E(X_1^4)/(n² ɛ^4)

which is summable, so we can invoke Borel-Cantelli to obtain the result.

Lemma 3.3. Suppose {A_n}_{n≥1} is an independent sequence of events with P(A_n) = p for all n ≥ 1. Then (1/n) Σ_{i=1}^n 1_{A_i} → p a.s.

Proof. Let X_n(ω) = 1_{A_n}(ω). Check the assumptions (indicators are bounded, so all moments exist) and apply the 4th moment strong law.

Theorem 3.4. (L² strong law) Let {X_i}_{i≥1} be a sequence of random variables such that

1. EX_i = 0 for all i

2. EX_i² ≤ c for all i
3. E(X_i X_j) = 0 for all i ≠ j

Then Σ_{i=1}^n X_i / n = S_n/n → 0 a.s.

Lemma 3.5. Let {S_n}_{n≥1} be a sequence in R. If:

1. there exists a subsequence {n(j)}_{j≥1} with n(j) ↑ ∞ such that S_{n(j)}/n(j) → 0, and
2. for d_j = max_{n(j) ≤ n ≤ n(j+1)} |S_n − S_{n(j)}| we have d_j/n(j) → 0 as j → ∞,

then S_n/n → 0.

Proof. Fix n with n(j) ≤ n ≤ n(j+1). Then

|S_n|/n = |S_n − S_{n(j)} + S_{n(j)}|/n ≤ (|S_{n(j)}| + |S_n − S_{n(j)}|)/n ≤ |S_{n(j)}|/n(j) + d_j/n(j) → 0 as j → ∞

Proof. (of main result) Fix n(j) = j², j ≥ 1. We proceed in two steps:

Step 1: show that S_{n(j)}/n(j) = S_{j²}/j² → 0 a.s.

This follows immediately from Chebyshev and Borel-Cantelli. Note that

P(|S_{j²}| > ɛj²) ≤ Var(S_{j²})/(j^4 ɛ²) ≤ cj²/(j^4 ɛ²) = c/(j² ɛ²)

which is summable over j.

Step 2: show that D_{j²}/j² → 0 a.s., where

D_{j²} = max_{j² ≤ n ≤ (j+1)²} |S_n − S_{j²}|

We show this again by Borel-Cantelli. First observe that, by assumption, E(S_n − S_i) = 0 for all 0 ≤ i ≤ n, and E[(S_n − S_i)²] = Var(Σ_{k=i+1}^n X_k) ≤ c(n − i). Second, observe that

D_{j²}² = max_{j² ≤ n ≤ (j+1)²} |S_n − S_{j²}|² ≤ Σ_{n=j²}^{(j+1)²} |S_n − S_{j²}|²

Therefore we have

P(D_{j²} > ɛj²) ≤ E[D_{j²}²]/(ɛ² j^4) ≤ Σ_{n=j²}^{(j+1)²} E[|S_n − S_{j²}|²]/(ɛ² j^4) ≤ Σ_{n=j²}^{(j+1)²} c(n − j²)/(ɛ² j^4) = c Σ_{k=1}^{2j+1} k/(ɛ² j^4) = c(2j+1)(2j+2)/(2ɛ² j^4)

so P(D_{j²} > ɛj²) is bounded by a constant times 1/j², which is summable; hence D_{j²}/j² → 0 a.s.

Finally, observe that the two conditions of Lemma 3.5 are satisfied a.s., so the convergence S_n/n → 0 of the lemma also holds a.s.

Theorem 3.6. (Glivenko-Cantelli theorem) Suppose {X_i}_{i≥1} is an iid sequence. Define the empirical distribution function as

G_n(ω, x) = (1/n) Σ_{i=1}^n 1_{X_i(ω) ≤ x}, x ∈ R

Define F as the distribution function of the X_i, i.e.

F(x) = P(X_1 ≤ x), x ∈ R

Then G_n(ω, x) − F(x) → 0 a.s. for any fixed x, and

sup_{x∈R} |G_n(ω, x) − F(x)| → 0 a.s.

Remark. Note that the first (weaker) result is akin to pointwise convergence of the distribution functions, and follows immediately from the strong law because indicator functions have finite moments. The second result gives uniform convergence of the distribution functions and is much stronger.

Lemma 3.7. (Deterministic) Suppose {F_n}_{n≥1} and F are distribution functions. If

1. for all x ∈ Q, F_n(x) → F(x), and
2. for all atoms x of F, F_n(x) → F(x) and F_n(x−) → F(x−),

then sup_{x∈R} |F_n(x) − F(x)| → 0.

Proof. (Exercise)
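The strong laws above are easy to visualize numerically. A hypothetical sketch (not from the notes) with iid Uniform(0,1) samples, which have all moments finite so the 4th moment strong law applies: the running sample mean S_n/n should settle near µ = 1/2 as n grows.

```python
import random

random.seed(3)

# Running sample mean of iid Uniform(0,1): S_n / n -> 1/2 a.s.
n_total = 1_000_000
s = 0.0
errors = {}
for n in range(1, n_total + 1):
    s += random.random()
    if n in (100, n_total):
        errors[n] = abs(s / n - 0.5)

print(errors)
```

The deviation at n = 10^6 is typically on the order of 10^-3 or smaller, two orders of magnitude tighter than at n = 100, matching the 1/√n scale of the fluctuations.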

Lemma 3.8. Any probability measure µ on R has only countably many atoms.

Proof. Let A be the set of atoms of µ and let F be its distribution function. Consider any x ∈ A. By the definition of atom, F(x−) < F(x), so we can pick a rational q_x such that F(x−) < q_x < F(x). Since F is non-decreasing, distinct atoms y < z satisfy F(y) ≤ F(z−), so q_y ≠ q_z. Therefore the mapping x ↦ q_x from A into Q is one-to-one, and hence A is countable.

Proof. (of main result) Fix x ∈ R and define:

A_n = {X_n ≤ x} = {ω : X_n(ω) ≤ x}

This sequence {A_n} satisfies the conditions of Lemma 3.3 (independent events with common probability F(x)), so we immediately obtain the first result:

(1/n) Σ_{i=1}^n 1_{A_i} = G_n(ω, x) → F(x) a.s.

To show the second result, we show that G_n and F satisfy the two conditions of Lemma 3.7.

1. First, note that if we redefine the sequence {A_n} with strict inequalities, A_n = {X_n < x}, then we similarly obtain

(1/n) Σ_{i=1}^n 1_{A_i} = G_n(ω, x−) → F(x−) a.s.

Also, since we know G_n(ω, x) → F(x) a.s. for all x ∈ R, it holds in particular for all q ∈ Q. Therefore, if we define

B_q = {ω : G_n(ω, q) → F(q)}

then P(B_q) = 1.

2. Now let A denote the set of all atoms of F. For any x ∈ A we have already shown that G_n(ω, x) → F(x) and G_n(ω, x−) → F(x−) a.s. Therefore, if we define

C_x = {ω : G_n(ω, x) → F(x) and G_n(ω, x−) → F(x−)}

then P(C_x) = 1.

Now note, by definition of B_q and C_x above,

{ω : sup_{x∈R} |G_n(ω, x) − F(x)| → 0} ⊃ (∩_{q∈Q} B_q) ∩ (∩_{x∈A} C_x)

The result follows by observing that this is a countable intersection (Q is countable, and A is countable by Lemma 3.8) of events each of probability 1, hence it has probability 1.
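The uniform convergence in Glivenko-Cantelli can be observed directly. A hypothetical sketch (not part of the notes) with Uniform(0,1) samples, for which F(x) = x and the sup distance is attained at the order statistics:

```python
import random

random.seed(4)

def sup_distance(n):
    """sup_x |G_n(x) - x| for n iid Uniform(0,1) samples (here F(x) = x)."""
    xs = sorted(random.random() for _ in range(n))
    # The empirical CDF jumps at the order statistics x_(1) <= ... <= x_(n),
    # so the sup is max over i of max(i/n - x_(i), x_(i) - (i-1)/n).
    return max(max(i / n - x, x - (i - 1) / n) for i, x in enumerate(xs, 1))

d_small, d_large = sup_distance(100), sup_distance(100_000)
print(d_small, d_large)
```

The sup distance typically shrinks like 1/√n (around 0.09 at n = 100 versus around 0.004 at n = 100,000), illustrating the a.s. uniform convergence.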

3.2 2nd moment results

Theorem 3.9. (Kolmogorov's maximal inequality) Let {X_i} be a sequence of independent r.v.'s with EX_i = 0 and E(X_i²) < ∞ for all i. Define

S_n = Σ_{i=1}^n X_i and S_n* = max_{1≤k≤n} |S_k|

Then

P(S_n* ≥ x) ≤ E(S_n²)/x²

Remark. Note the similarity to the Chebyshev bound on S_n alone:

P(|S_n| ≥ x) ≤ E(S_n²)/x²

Proof. Fix k. Define the event

A_k = {|S_i| < x for 1 ≤ i ≤ k−1 and |S_k| ≥ x}

In other words, A_k is the event that the first crossing of the partial sums over x occurs at the k-th step. Observe that the A_k's are disjoint and that {S_n* ≥ x} = ∪_{k=1}^n A_k. Furthermore, note that the pair (1_{A_k}, S_k) is independent of S_n − S_k = Σ_{i=k+1}^n X_i, since the former depends only on the partial sums up to k while the latter depends only on X_{k+1}, ..., X_n, and the X_i's are independent. Now:

E(S_n²) ≥ ∫_{∪_k A_k} S_n² dP = Σ_{k=1}^n ∫_{A_k} S_n² dP
= Σ_k ∫_{A_k} (S_n − S_k + S_k)² dP
= Σ_k ∫_{A_k} [(S_n − S_k)² + S_k² + 2S_k(S_n − S_k)] dP
= Σ_k ∫_{A_k} (S_n − S_k)² dP + Σ_k ∫_{A_k} S_k² dP + Σ_k 2E[S_k(S_n − S_k)1_{A_k}]
≥ Σ_k ∫_{A_k} S_k² dP

where the cross terms vanish: E[S_k(S_n − S_k)1_{A_k}] = E[S_k 1_{A_k}] E[S_n − S_k] = 0 by independence of S_k 1_{A_k} and S_n − S_k, and E(S_n − S_k) = 0.

Note that, by definition of A_k, |S_k| ≥ x on A_k in the last step, so the integrand is bounded below by x² and we have:

E(S_n²) ≥ x² Σ_{k=1}^n ∫ 1_{A_k} dP = x² Σ_{k=1}^n P(A_k)

By disjointness and the definition of the A_k's, we then have

E(S_n²) ≥ x² P(∪_{k=1}^n A_k) = x² P(S_n* ≥ x)

Theorem 3.10. (Cauchy criterion) A series Σ_{n=1}^∞ a_n in a complete metric space M converges iff for every ɛ > 0 there exists N ∈ N such that for any n > k > N,

|Σ_{j=k}^n a_j| < ɛ

or, equivalently, that

sup_{n≥k} |Σ_{j=k}^n a_j| → 0 as k → ∞

Proof. The Cauchy criterion is equivalent to the condition that the sequence of partial sums S_n is a Cauchy sequence. Since M is complete, S_n then converges; and an infinite series converges iff its sequence of partial sums converges.

Theorem 3.11. ("Random series theorem") Let {X_i}_{i≥1} be independent with E(X_i) = 0 and E(X_i²) = σ_i² < ∞ for all i. Suppose that Σ_{i=1}^∞ σ_i² < ∞. Then Σ_{i=1}^∞ X_i(ω) converges a.s.

Proof. We want to show that the Cauchy criterion is satisfied. Chebyshev's inequality allows us to control the size of a single partial sum, but that is not enough to satisfy the criterion. Instead, we use Kolmogorov's maximal inequality to control the size of every partial sum in the tail at once.

Define M_k(ω) = sup_{n≥k} |Σ_{j=k}^n X_j(ω)|. By the Cauchy criterion, it is sufficient to show that M_k → 0 a.s.

Fix ɛ > 0 and N > k. Note that, by Kolmogorov's maximal inequality (for the series starting from index k),

P(max_{k≤n≤N} |Σ_{i=k}^n X_i(ω)| > ɛ) ≤ Σ_{i=k}^N σ_i² / ɛ²

Letting N → ∞, and using continuity of P, we have

P(sup_{n≥k} |Σ_{i=k}^n X_i(ω)| > ɛ) = P(M_k(ω) > ɛ) ≤ Σ_{i=k}^∞ σ_i² / ɛ²

Now we send k → ∞ to obtain the limit of M_k on the left side:

lim_{k→∞} P(M_k(ω) > ɛ) = 0

where the RHS → 0 as k → ∞ since Σ σ_i² < ∞, so the tail of this convergent series vanishes.

So we have shown M_k → 0 in probability. We now show that M_k → 0 a.s. as well. Fix k ≥ 1 and define

W_k(ω) = sup_{k ≤ n_1 ≤ n_2} |Σ_{i=n_1}^{n_2} X_i(ω)|

By the triangle inequality, we have M_k ≤ W_k ≤ 2M_k. Now note that W_k is decreasing (a.s.) in k, so let W = lim_k W_k. Since W_k ≤ 2M_k and M_k → 0 in probability, W_k → 0 in probability also. Since W_k ↓ W, we also have W_k → W in probability. But we just showed that W_k → 0 in probability, so W = 0 a.s. Finally, note that since M_k ≤ W_k ↓ 0 a.s., M_k → 0 a.s. as well.

Theorem 3.12. (Kolmogorov 3-series theorem) Let {X_n}_{n≥1} be an independent sequence of R-valued r.v.'s. Define, for some A > 0:

Y_n = X_n 1_{(|X_n| ≤ A)}

Then Σ_{n=1}^∞ X_n converges a.s. if and only if:

1. Σ_{n=1}^∞ P(|X_n| > A) < ∞
2. Σ_{n=1}^∞ E(Y_n) converges
3. Σ_{n=1}^∞ Var(Y_n) < ∞

(No moment assumptions on the X_n are needed: Y_n is bounded by A, so all its moments automatically exist.)

Remark. Conditions 1) and 3) only require the series to be bounded, but since the terms of those series are non-negative, that is equivalent to requiring the series to be convergent.

Proof. (of the "if" direction) Define X_n' = Y_n − EY_n, so EX_n' = 0 and Var(X_n') = Var(Y_n). Note that the sequence {X_n'} satisfies the requirements of the Random series theorem by condition 3. Therefore Σ_{n=1}^∞ X_n' = Σ_{n=1}^∞ (Y_n − EY_n) converges a.s. Since Σ_{n=1}^∞ EY_n converges by assumption, the series Σ_{n=1}^∞ Y_n must converge a.s. also.

Note P(X_n ≠ Y_n) = P(|X_n| > A). By assumption, Σ_{n=1}^∞ P(|X_n| > A) < ∞, so that by Borel-Cantelli, P(X_n ≠ Y_n i.o.) = 0.

Equivalently, this means that P(X_n = Y_n eventually) = 1. So for a.e. ω, there exists N(ω) such that X_n(ω) = Y_n(ω) for all n ≥ N(ω). Therefore,

Σ_{n=N(ω)}^∞ Y_n = Σ_{n=N(ω)}^∞ X_n

And since Σ Y_n converges a.s. as shown above, its tail converges a.s. Thus the tail of Σ X_n converges a.s., so that the entire series converges a.s.

3.3 Strong law of large numbers

We begin by proving the important Kronecker's lemma and give some motivation for the proof.

Lemma 3.13. (Cesàro's lemma) Let {a_n} be a sequence of strictly positive real numbers with a_n ↑ ∞ (set a_0 = 0). Let {x_n} be a convergent sequence of real numbers. Then:

(1/a_n) Σ_{k=1}^n (a_k − a_{k−1}) x_k → lim x_n

Proof. Let ɛ > 0. Since x_n is convergent, we can choose N such that

x_k > lim x_n − ɛ whenever k ≥ N

Then split the sum into the portion up to N and the portion beyond N, and apply the above inequality:

lim inf_n (1/a_n) Σ_{k=1}^n (a_k − a_{k−1}) x_k
≥ lim inf_n { (1/a_n) Σ_{k=1}^N (a_k − a_{k−1}) x_k + ((a_n − a_N)/a_n)(lim x_n − ɛ) }
= 0 + (lim x_n − ɛ)

where the last step follows from the fact that a_n ↑ ∞ while a_N and the first sum are fixed and finite. This is true for all ɛ > 0, so lim inf ≥ lim x_n. To show that lim sup ≤ lim x_n, follow the same argument except choose N such that x_k < lim x_n + ɛ whenever k ≥ N.

Lemma 3.14. (Kronecker's lemma) Let {Y_n}_{n≥1} be a real-valued sequence and let {a_n}_{n≥1} be a sequence of strictly positive real numbers with a_n ↑ ∞. If Σ_n Y_n/a_n converges, then S_n/a_n → 0, where S_n = Y_1 + ... + Y_n.
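Before proving it, Kronecker's lemma can be sanity-checked numerically. A hypothetical deterministic example with a_n = n and y_n = cos n: the series Σ cos(n)/n converges (Dirichlet's test), so the lemma predicts (y_1 + ... + y_n)/n → 0.

```python
import math

# Partial sums of cos(n) are bounded (|sum| <= 1/sin(1/2) ~ 2.1), so the
# averaged sums S_n / n should shrink like 1/n, as Kronecker's lemma predicts.
checkpoints = (1_000, 1_000_000)
s = 0.0
ratios = {}
for n in range(1, checkpoints[-1] + 1):
    s += math.cos(n)
    if n in checkpoints:
        ratios[n] = abs(s) / n

print(ratios)
```

Since |Σ_{k≤n} cos k| never exceeds about 2.1, the ratio at n = 10^6 is below about 2·10^-6, three orders of magnitude smaller than the bound at n = 1000.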