Martingale Theory for Finance


Tusheng Zhang

October 27, 2015

Contents: 1. Introduction. 2. Probability spaces and $\sigma$-fields. 3. Integration with respect to a probability measure. 4. Conditional expectation. 5. Martingales.

5 Martingales

For a family $\{X_1, X_2, \ldots, X_n\}$ of random variables, denote by $\sigma(X_1, X_2, \ldots, X_n)$ the smallest $\sigma$-field containing the events of the form $\{\omega : a < X_k(\omega) < b\}$, $k = 1, \ldots, n$, for all choices of $a, b$. We call $\sigma(X_1, X_2, \ldots, X_n)$ the $\sigma$-field generated by $X_1, X_2, \ldots, X_n$. Random variables determined by $\sigma(X_1, X_2, \ldots, X_n)$ are functions of $X_1, X_2, \ldots, X_n$.

To introduce the notion of martingales we begin with an example. Consider a series of games decided by the tosses of a coin, in which in each round we either win 1 with probability $p$ or lose 1 with probability $q = 1 - p$. Let $X_i$ denote the net gain in the $i$-th round. Then $X_i$, $i = 1, 2, \ldots$ are independent random variables with $P(X_i = 1) = p$, $P(X_i = -1) = q$, and so $E(X_i) = p - q$. Our total net gain (possibly negative) after the $n$-th round is given by $S_0 = 0$ and

$$S_n = X_1 + X_2 + \cdots + X_n, \qquad n = 1, 2, \ldots$$

Let $\mathcal{F}_n = \sigma(X_1, X_2, \ldots, X_n)$ denote the $\sigma$-field generated by $X_1, X_2, \ldots, X_n$. Then $\mathcal{F}_n \subseteq \mathcal{F}_{n+1}$, and $\mathcal{F}_n$ can be regarded as the history of the games up to time $n$ (the $n$-th round). Let us now compute the expected gain after the $(n+1)$-th round given the history up to time $n$. We have

$$E[S_{n+1} \mid \mathcal{F}_n] = E[S_n + X_{n+1} \mid \mathcal{F}_n] = E[S_n \mid \mathcal{F}_n] + E[X_{n+1} \mid \mathcal{F}_n] = S_n + E[X_{n+1}],$$

where we used the fact that $S_n = X_1 + X_2 + \cdots + X_n$ is determined by $\mathcal{F}_n$ and $X_{n+1}$ is independent of $\mathcal{F}_n$. Thus

$$E[S_{n+1} \mid \mathcal{F}_n] = S_n + p - q \ \begin{cases} = S_n & \text{if the game is fair, i.e. } p = q = \frac{1}{2}, \\ > S_n & \text{if } p > q, \\ < S_n & \text{if } p < q. \end{cases}$$

In the first case, where the game is fair, $\{S_n, n \ge 0\}$ is called a martingale. It is called a submartingale in the second case and a supermartingale in the third case.
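The identity above can be checked numerically. Below is a minimal simulation sketch, not part of the original notes; the parameter values ($p = 0.6$, 20 rounds, 100,000 paths) and all variable names are illustrative choices. Grouping the $(n+1)$-th increment by the current value of $S_n$ approximates conditioning on the history, and every group shows a mean of $p - q$, matching $E[S_{n+1} \mid \mathcal{F}_n] = S_n + (p - q)$.

```python
import numpy as np

# Simulate the coin-toss game: each step is +1 w.p. p and -1 w.p. q = 1 - p.
rng = np.random.default_rng(0)
p, n_rounds, n_paths = 0.6, 20, 100_000          # illustrative choices
steps = rng.choice([1, -1], p=[p, 1 - p], size=(n_paths, n_rounds))
S = np.cumsum(steps, axis=1)                     # columns are S_1, ..., S_20

n = 10                                           # look at the step S_11 - S_10
increments = S[:, n] - S[:, n - 1]
print("overall mean increment:", increments.mean())   # ~ p - q = 0.2

# The mean increment is ~ p - q whatever the current position S_10 is,
# i.e. the conditional expectation does not depend on the history.
for s in [-2, 0, 2]:
    mask = S[:, n - 1] == s
    print(f"given S_10 = {s}: mean increment ~ {increments[mask].mean():.3f}")
```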

Definition 5.1 A sequence of random variables $Z_0, Z_1, Z_2, \ldots, Z_n, \ldots$ is said to be a martingale (respectively submartingale, supermartingale) with respect to an increasing sequence $\mathcal{F}_0 \subseteq \mathcal{F}_1 \subseteq \mathcal{F}_2 \subseteq \cdots$ of $\sigma$-fields if

(i) $Z_n$ is determined by $\mathcal{F}_n$;

(ii) $Z_n$ is integrable, i.e. $E[|Z_n|] < \infty$;

(iii) $E[Z_{n+1} \mid \mathcal{F}_n] = Z_n$ (respectively $E[Z_{n+1} \mid \mathcal{F}_n] \ge Z_n$, $E[Z_{n+1} \mid \mathcal{F}_n] \le Z_n$).

Remarks.

1. The notion of martingales is a mathematical formulation of fair games.

2. The typical choice of $\mathcal{F}_n$ is $\mathcal{F}_n = \sigma(Z_1, Z_2, \ldots, Z_n)$, the $\sigma$-field generated by $Z_1, Z_2, \ldots, Z_n$.

3. If $Z_n$, $n \ge 0$ is a martingale, then $E[Z_n] = E[E[Z_{n+1} \mid \mathcal{F}_n]] = E[Z_{n+1}]$: the expectation remains constant for all $n$.

4. If $Z_n$, $n \ge 0$ is a martingale, then

$$E[Z_{n+2} \mid \mathcal{F}_n] = E\big[E[Z_{n+2} \mid \mathcal{F}_{n+1}] \mid \mathcal{F}_n\big] = E[Z_{n+1} \mid \mathcal{F}_n] = Z_n.$$

More generally, it holds that for any $m > n$, $E[Z_m \mid \mathcal{F}_n] = Z_n$.

Examples. (1) If $X_1, X_2, \ldots, X_n, \ldots$ are independent, integrable random variables with $E[X_i] = 0$ for all $i$, then $S_0 = 0$, $S_n = X_1 + X_2 + \cdots + X_n$, $n \ge 1$, is a martingale with respect to $\mathcal{F}_n = \sigma(X_1, X_2, \ldots, X_n)$. Indeed, since $S_n$ is a function of $X_1, X_2, \ldots, X_n$, it is $\mathcal{F}_n$-determined, and it is integrable as a finite sum of integrable random variables. We are left to check condition (iii). In fact,

$$E[S_{n+1} \mid \mathcal{F}_n] = E[S_n + X_{n+1} \mid \mathcal{F}_n] = E[S_n \mid \mathcal{F}_n] + E[X_{n+1} \mid \mathcal{F}_n] = S_n + E[X_{n+1}] = S_n.$$

So (iii) holds and $S_n$, $n \ge 0$ is a martingale.
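As a quick numerical check of Example (1), again an illustrative sketch with arbitrarily chosen mean-zero uniform steps: since $S_n$ is a martingale, Remark 3 says $E[S_n] = E[S_0] = 0$ for every $n$.

```python
import numpy as np

# Mean-zero steps X_i ~ Uniform[-1, 1]; the sample mean of S_n should stay
# near 0 for every n, reflecting the constant expectation of a martingale.
rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, size=(200_000, 8))
S = np.cumsum(X, axis=1)
print(np.round(S.mean(axis=0), 3))   # all eight entries close to 0
```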

(2) Let $Y$ be an integrable r.v. and $\{\mathcal{F}_n, n \ge 1\}$ a sequence of increasing $\sigma$-fields. Define $Z_n = E[Y \mid \mathcal{F}_n]$, $n \ge 1$. Then $\{Z_n, n \ge 1\}$ is a martingale w.r.t. $\{\mathcal{F}_n, n \ge 1\}$.

We need to show that $\{Z_n, n \ge 1\}$ satisfies the definition of a martingale. By the definition of conditional expectation, $Z_n$ is $\mathcal{F}_n$-determined. For (ii), we notice that $|Z_n| \le E[|Y| \mid \mathcal{F}_n]$, hence $E[|Z_n|] \le E[|Y|] < \infty$. Let us now check (iii). By the tower property of conditional expectation,

$$E[Z_{n+1} \mid \mathcal{F}_n] = E\big[E[Y \mid \mathcal{F}_{n+1}] \mid \mathcal{F}_n\big] = E[Y \mid \mathcal{F}_n] = Z_n.$$

This proves (iii).

(3) If $Y_1, Y_2, \ldots, Y_n, \ldots$ are independent, integrable r.v.'s with $a_i = E(Y_i) \neq 0$ for all $i$, then

$$Z_n = \frac{Y_1 Y_2 \cdots Y_n}{a_1 a_2 \cdots a_n}, \qquad n = 1, 2, \ldots$$

is a martingale w.r.t. $\mathcal{F}_n = \sigma(Y_1, Y_2, \ldots, Y_n)$, $n \ge 1$. Since $Z_n$ is a function of $Y_1, Y_2, \ldots, Y_n$, $Z_n$ is determined by $\mathcal{F}_n$, and $Z_n$ is integrable because the $Y_i$ are integrable and independent. We now check (iii). We have

$$E[Z_{n+1} \mid \mathcal{F}_n] = E\Big[\frac{Y_1 Y_2 \cdots Y_n}{a_1 a_2 \cdots a_n} \cdot \frac{Y_{n+1}}{a_{n+1}} \,\Big|\, \mathcal{F}_n\Big] = \frac{Y_1 Y_2 \cdots Y_n}{a_1 a_2 \cdots a_n}\, E\Big[\frac{Y_{n+1}}{a_{n+1}} \,\Big|\, \mathcal{F}_n\Big] = Z_n\, E\Big[\frac{Y_{n+1}}{a_{n+1}}\Big] = Z_n \cdot \frac{a_{n+1}}{a_{n+1}} = Z_n.$$

Example 5.2 Let $\{S_n\}_{n \ge 0}$ be the net gain process in a series of fair games, so $S_0 = 0$, $S_n = X_1 + X_2 + \cdots + X_n$, $n \ge 1$, with $P(X_i = 1) = \frac{1}{2}$, $P(X_i = -1) = \frac{1}{2}$. We already know that $\{S_n\}_{n \ge 0}$ is a martingale w.r.t. $\mathcal{F}_n = \sigma(X_1, X_2, \ldots, X_n)$, $n \ge 1$. Prove that $\{Y_n = S_n^2 - n, n \ge 0\}$ is also a martingale w.r.t. $\mathcal{F}_n$.

Proof. Since $Y_n$ is a function of $X_1, X_2, \ldots, X_n$, $Y_n$ is $\mathcal{F}_n$-measurable. As $|S_n| \le n$, we have $|Y_n| \le n^2 + n$, so $Y_n$ is integrable. It remains to show that $E[Y_{n+1} \mid \mathcal{F}_n] = Y_n$. To this end, writing

$$Y_{n+1} = S_{n+1}^2 - (n+1) = (S_n + X_{n+1})^2 - (n+1) = S_n^2 - n + 2 S_n X_{n+1} + X_{n+1}^2 - 1 = Y_n + 2 S_n X_{n+1} + X_{n+1}^2 - 1,$$

we have

$$E[Y_{n+1} \mid \mathcal{F}_n] = E[Y_n \mid \mathcal{F}_n] + E[2 S_n X_{n+1} \mid \mathcal{F}_n] + E[X_{n+1}^2 \mid \mathcal{F}_n] - 1$$
$$= Y_n + 2 S_n E[X_{n+1} \mid \mathcal{F}_n] + E[X_{n+1}^2] - 1 = Y_n + 2 S_n E[X_{n+1}] + E[X_{n+1}^2] - 1 = Y_n + 0 + 1 - 1 = Y_n.$$
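A numerical sanity check of Example 5.2 (an illustrative sketch, not from the notes): since $Y_n = S_n^2 - n$ is a martingale with $Y_0 = 0$, we should find $E[S_n^2] = n$ for every $n$.

```python
import numpy as np

# Fair +-1 walk: E[S_n^2] - n should be ~ 0 for each n = 1, ..., 10.
rng = np.random.default_rng(2)
steps = rng.choice([1, -1], size=(200_000, 10))
S = np.cumsum(steps, axis=1)
print(np.round((S ** 2).mean(axis=0) - np.arange(1, 11), 3))
```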

Example 5.3 Let $X_1, X_2, \ldots, X_n, \ldots$ be a sequence of independent random variables with $P(X_i = 1) = p$, $P(X_i = -1) = q = 1 - p$. Set $S_n = X_1 + X_2 + \cdots + X_n$ and define

$$Z_n = \left(\frac{q}{p}\right)^{S_n}, \qquad n \ge 1.$$

Show that $\{Z_n, n \ge 1\}$ is a martingale w.r.t. $\mathcal{F}_n = \sigma(X_1, X_2, \ldots, X_n)$, $n \ge 1$.

Proof. Since $Z_n$ is a function of $X_1, X_2, \ldots, X_n$, $Z_n$ is $\mathcal{F}_n$-measurable. Since $|S_n| \le n$,

$$Z_n \le \left(\frac{q}{p}\right)^{n} + \left(\frac{p}{q}\right)^{n},$$

so $Z_n$ is integrable. Let us check (iii):

$$E[Z_{n+1} \mid \mathcal{F}_n] = E\Big[\Big(\frac{q}{p}\Big)^{S_{n+1}} \,\Big|\, \mathcal{F}_n\Big] = E\Big[\Big(\frac{q}{p}\Big)^{S_n} \Big(\frac{q}{p}\Big)^{X_{n+1}} \,\Big|\, \mathcal{F}_n\Big] = \Big(\frac{q}{p}\Big)^{S_n} E\Big[\Big(\frac{q}{p}\Big)^{X_{n+1}} \,\Big|\, \mathcal{F}_n\Big]$$
$$= Z_n\, E\Big[\Big(\frac{q}{p}\Big)^{X_{n+1}}\Big] = Z_n \Big\{\Big(\frac{q}{p}\Big)^{1} P(X_{n+1} = 1) + \Big(\frac{q}{p}\Big)^{-1} P(X_{n+1} = -1)\Big\} = Z_n \Big\{\frac{q}{p}\, p + \frac{p}{q}\, q\Big\} = Z_n (q + p) = Z_n.$$
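The martingale of Example 5.3 can also be checked by simulation (an illustrative sketch; $p = 0.7$ is an arbitrary choice): constancy of the expectation forces $E[Z_n] = (q/p)^{S_0} = 1$ for all $n$, even though the walk itself drifts.

```python
import numpy as np

# Biased +-1 walk with p = 0.7; Z_n = (q/p)^{S_n} should have mean ~ 1 at every n.
rng = np.random.default_rng(3)
p = 0.7
q = 1 - p
steps = rng.choice([1, -1], p=[p, q], size=(200_000, 12))
S = np.cumsum(steps, axis=1)
Z = (q / p) ** S
print(np.round(Z.mean(axis=0), 3))   # each column close to 1.0
```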

Stopping times

Suppose we play a series of games by tossing a fair coin. As we know, the net gain $S_n$, $n \ge 1$ is a martingale. If we quit at a fixed time $n$, then $E(S_n) = 0$, which is not very exciting. However, we could instead stop playing as soon as our net gain reaches 100. This amounts to saying that we stop at the random time

$$T = \min\{n : S_n = 100\}.$$

Such random times are called stopping times. Here is the definition.

Definition 5.4 Let $\{\mathcal{F}_n, n \ge 1\}$ be an increasing sequence of $\sigma$-fields. A random variable $T(\omega)$ taking values in the set $\{0, 1, 2, 3, \ldots\}$ (possibly $+\infty$) is called a stopping time (or optional time) w.r.t. $\{\mathcal{F}_n, n \ge 1\}$ if for every $n$, $\{T \le n\} \in \mathcal{F}_n$.

Example 5.5 Let $S_n = X_1 + X_2 + \cdots + X_n$, $n \ge 1$ be the net gain process in a series of games, and set $\mathcal{F}_n = \sigma(X_1, X_2, \ldots, X_n)$, $n \ge 1$. Define

$$T = \min\{n : S_n = 100\}.$$

Then $T$ is a stopping time w.r.t. $\{\mathcal{F}_n, n \ge 1\}$. In fact, for every $n$,

$$\{T \le n\} = \bigcup_{k=1}^{n} \{S_k = 100\} \in \mathcal{F}_n.$$

Stopped processes

Let $T$ be a stopping time and let $Z_n$, $n \ge 1$ be a sequence of random variables. Set $T \wedge n = \min(T, n)$ and define

$$\hat{Z}_n = Z_{T \wedge n} = \begin{cases} Z_n & \text{if } n \le T(\omega), \\ Z_{T(\omega)} & \text{if } n > T(\omega). \end{cases}$$

$Z_{T \wedge n}$, $n \ge 1$ is called the process $Z$ stopped at the stopping time $T$.

Theorem 5.6 If $Z_0, Z_1, \ldots, Z_n, \ldots$ is a martingale w.r.t. $\{\mathcal{F}_n, n \ge 1\}$, then the stopped process $Y_n = Z_{T \wedge n}$ is also a martingale w.r.t. $\{\mathcal{F}_n, n \ge 1\}$. In particular, $E[Z_{T \wedge n}] = E[Z_0]$.

Proof. Observe that

$$Y_{n+1} = Z_{T \wedge (n+1)} = Z_{T \wedge n} + I_{\{T \ge n+1\}}(Z_{n+1} - Z_n) = Y_n + I_{\{T \ge n+1\}}(Z_{n+1} - Z_n).$$

In fact, if $T \le n$, both sides are equal to $Z_T$, while if $T \ge n+1$, both sides are equal to $Z_{n+1}$. Thus we have

$$E[Y_{n+1} \mid \mathcal{F}_n] = E[Y_n + I_{\{T \ge n+1\}}(Z_{n+1} - Z_n) \mid \mathcal{F}_n].$$

Since $\{T \ge n+1\} = (\{T \le n\})^c \in \mathcal{F}_n$, it follows that

$$E[Y_n + I_{\{T \ge n+1\}}(Z_{n+1} - Z_n) \mid \mathcal{F}_n] = Y_n + I_{\{T \ge n+1\}}\, E[Z_{n+1} - Z_n \mid \mathcal{F}_n] = Y_n + 0 = Y_n,$$

which completes the proof.
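Theorem 5.6 is easy to test numerically. The sketch below (illustrative; the threshold 3 and horizon 50 are arbitrary) stops a fair walk when it first reaches $+3$ and confirms $E[Z_{T \wedge n}] = E[Z_0] = 0$.

```python
import numpy as np

# Fair walk S_0 = 0, S_1, ..., S_50; T = first time the walk hits +3.
rng = np.random.default_rng(4)
n, paths = 50, 100_000
steps = rng.choice([1, -1], size=(paths, n))
S = np.concatenate([np.zeros((paths, 1), int), np.cumsum(steps, axis=1)], axis=1)

hit = S >= 3                                           # +-1 steps: first >= 3 is first == 3
T = np.where(hit.any(axis=1), hit.argmax(axis=1), n)   # first hitting time, else n
stopped = S[np.arange(paths), np.minimum(T, n)]        # Z_{T ^ n} with n = 50
print("E[Z_{T ^ n}] ~", stopped.mean())                # close to 0
```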

If $Z_n$, $n \ge 0$ is a martingale, then we know that $E[Z_n] = E[Z_{n-1}] = \cdots = E[Z_0]$. The question now is what happens when the deterministic time $n$ is replaced by a stopping time $T$: is $E[Z_T] = E[Z_0]$ still true? Before answering the question, let us look at one example. Let $S_n$, $n \ge 0$ denote the capital gain in a series of fair games and define $T = \min\{n : S_n = 10\}$, the time at which the capital reaches 10. Then

$$E[S_T] = E[10] = 10 \neq E[S_0] = 0.$$

But the following theorem holds.

Theorem 5.7 (Doob's optional stopping theorem). Let $Z_0, Z_1, \ldots$ be a martingale and $T$ an almost surely finite stopping time. In each of the following three cases, we have $E[Z_T] = E[Z_0]$.

Case (i): $T$ is bounded, i.e. there is an integer $m$ such that $T(\omega) \le m$.

Case (ii): The sequence $Z_{T \wedge n}$, $n \ge 0$ is bounded, in the sense that there is a constant $K$ such that $|Z_{T \wedge n}| \le K$ for all $n$.

Case (iii): $E[T] < \infty$ and the steps $Z_n - Z_{n-1}$ are bounded, i.e. there is a constant $C$ such that $|Z_n - Z_{n-1}| \le C$ for all $n$.

Proof. The proof is an application of the dominated convergence theorem. First of all, we note that $Z_{T \wedge n} \to Z_T$ as $n \to \infty$, since $T$ is almost surely finite. Since the stopped process $Z_{T \wedge n}$ is a martingale, we always have

$$E[Z_{T \wedge n}] = E[Z_0].$$

Case (i): Since $T \le m$, we have $T \wedge m = T$ and so $E[Z_T] = E[Z_{T \wedge m}] = E[Z_0]$.

Case (ii): Since $|Z_{T \wedge n}| \le K$ for all $n$, it follows from the dominated convergence theorem that

$$E[Z_T] = \lim_{n \to \infty} E[Z_{T \wedge n}] = E[Z_0].$$

Case (iii): Write

$$Z_{T \wedge n} = (Z_{T \wedge n} - Z_{T \wedge (n-1)}) + (Z_{T \wedge (n-1)} - Z_{T \wedge (n-2)}) + \cdots + (Z_{T \wedge 1} - Z_{T \wedge 0}) + Z_0.$$

Consequently, since at most $T \wedge n$ of the increments are nonzero and each is bounded by $C$,

$$|Z_{T \wedge n}| \le |Z_{T \wedge n} - Z_{T \wedge (n-1)}| + \cdots + |Z_{T \wedge 1} - Z_{T \wedge 0}| + |Z_0| \le (T \wedge n)C + |Z_0| \le CT + |Z_0|.$$

The right-hand side is integrable because $E[T] < \infty$, so by virtue of the dominated convergence theorem,

$$E[Z_T] = \lim_{n \to \infty} E[Z_{T \wedge n}] = E[Z_0].$$
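The counterexample and case (i) can both be seen in a simulation (an illustrative sketch; a finite horizon of 2,000 steps stands in for infinite time, which slightly biases the first estimate). Conditioned on stopping, $S_T = 10$ by construction, so $E[S_T] = 10 \neq 0$ once $T < \infty$ a.s.; truncating at a bounded time $T \wedge m$ restores a zero mean.

```python
import numpy as np

# Fair +-1 walk; T = min{n : S_n = 10} is a.s. finite but unbounded (E[T] = inf).
rng = np.random.default_rng(5)
paths, horizon, target = 10_000, 2_000, 10
steps = rng.choice(np.array([1, -1], dtype=np.int8), size=(paths, horizon))
S = np.cumsum(steps, axis=1, dtype=np.int32)

hit = (S == target)
reached = hit.any(axis=1)
print("paths hitting 10 within the horizon:", reached.mean())
# In the limit every path hits 10, and S_T = 10 on each of them.

m = 50                                   # truncation: a bounded stopping time
T = np.where(reached, hit.argmax(axis=1), horizon - 1)
print("E[S_{T ^ m}] ~", S[np.arange(paths), np.minimum(T, m)].mean())   # ~ 0
```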

Application of Doob's optional stopping theorem: the gambler's ruin problem

Gamblers A and B play a series of games against each other in which a fair coin is tossed repeatedly. In each game gambler A wins or loses 1 according as the toss results in a head or a tail. The initial capital of gambler A is $a$ and that of gambler B is $b$, and they continue playing until one of them is ruined. Determine the probability that A will be ruined and also the expected number of games played.

Let $\hat{S}_n$ be the fortune of gambler A after the $n$-th game. Then

$$\hat{S}_n = a + X_1 + X_2 + \cdots + X_n = a + S_n,$$

where $X_i$, $i = 1, 2, \ldots$ are independent r.v.'s with $P(X_i = 1) = \frac{1}{2}$, $P(X_i = -1) = \frac{1}{2}$. The game stops at the time at which gambler A or B is ruined, i.e. at

$$T = \min\{n : \hat{S}_n = 0 \text{ or } \hat{S}_n = a + b\} = \min\{n : S_n = -a \text{ or } S_n = b\}.$$

As discussed before, $\{S_n, n \ge 0\}$ forms a martingale. Since $S_T = -a$ or $S_T = b$, we have

(1) $P(S_T = -a) + P(S_T = b) = 1.$

Note that $\{\text{gambler A is ruined}\} = \{S_T = -a\}$. Since $|S_{T \wedge n}| \le a + b$ for all $n \ge 1$, it follows from Doob's theorem (case (ii)) that $E[S_T] = E[S_0] = 0$, which is

(2) $E[S_T] = (-a)\,P(S_T = -a) + b\,P(S_T = b) = 0.$

Solving (1) and (2) together gives

$$P(S_T = -a) = \frac{b}{a+b}, \qquad P(S_T = b) = \frac{a}{a+b}.$$

Next we find the expected number of games $E(T)$. To this end, define $Y_n = S_n^2 - n$, $n \ge 0$. In Example 5.2 we showed that $Y_n$ is a martingale, so the stopped process $Y_{T \wedge n} = S_{T \wedge n}^2 - (T \wedge n)$ is also a martingale. Thus we have

$$E[Y_{T \wedge n}] = E[S_{T \wedge n}^2] - E[T \wedge n] = 0.$$

Letting $n \to \infty$, monotone convergence on $E[T \wedge n]$ and bounded convergence on $E[S_{T \wedge n}^2]$ (recall $S_{T \wedge n}^2 \le (a+b)^2$) yield

$$E[T] = \lim_{n \to \infty} E[T \wedge n] = \lim_{n \to \infty} E[S_{T \wedge n}^2] = E(S_T^2) = (-a)^2 \frac{b}{a+b} + b^2 \frac{a}{a+b} = ab.$$
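A direct simulation of the ruin problem (an illustrative sketch; $a = 3$, $b = 5$ chosen arbitrarily) reproduces both $P(\text{A ruined}) = b/(a+b)$ and $E[T] = ab$.

```python
import numpy as np

rng = np.random.default_rng(6)
a, b, trials = 3, 5, 20_000
ruined, durations = 0, []
for _ in range(trials):
    s, t = 0, 0
    while -a < s < b:                       # play until S_n = -a (A ruined) or b
        s += 1 if rng.random() < 0.5 else -1
        t += 1
    ruined += (s == -a)
    durations.append(t)
print("P(A ruined) ~", ruined / trials, " theory:", b / (a + b))   # 0.625
print("E[T] ~", float(np.mean(durations)), " theory:", a * b)      # 15
```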

Example 5.8 Consider a simple random walk $\{S_n, n \ge 0\}$ with $0 < S_0 = k < N$, for which each step is rightwards with probability $0 < p < 1$ and leftwards with probability $q = 1 - p$; that is, $S_n = S_0 + X_1 + \cdots + X_n$ with $P(X_i = 1) = p$, $P(X_i = -1) = q$. Assume $p \neq q$. Use Doob's optional stopping theorem to find the probability that the random walk hits 0 before hitting $N$.

Solution. Write $X_1, X_2, \ldots, X_n, \ldots$ for the independent steps of the walk, so that $S_0 = k$ and $S_n = k + X_1 + \cdots + X_n$. Define

$$Z_n = \left(\frac{q}{p}\right)^{S_n}, \qquad n \ge 0.$$

As we have seen before (Example 5.3), $\{Z_n, n \ge 0\}$ is a martingale w.r.t. $\{\mathcal{F}_n = \sigma(X_1, X_2, \ldots, X_n), n \ge 0\}$. Let

$$T = \min\{n : S_n = 0 \text{ or } S_n = N\}$$

be the first time at which the random walk hits 0 or $N$. Then $S_T = 0$ means that the random walk hits 0 before reaching $N$, and $S_T = N$ means that it hits $N$ before reaching 0. Since

$$|Z_{T \wedge n}| = \left(\frac{q}{p}\right)^{S_{T \wedge n}} \le \max_{0 \le l \le N} \left(\frac{q}{p}\right)^{l} =: M,$$

Doob's optional stopping theorem (case (ii)) gives

$$E[Z_T] = E[Z_0] = \left(\frac{q}{p}\right)^{k}.$$

On the other hand,

$$E(Z_T) = \left(\frac{q}{p}\right)^{0} P(S_T = 0) + \left(\frac{q}{p}\right)^{N} P(S_T = N) = P(S_T = 0) + \left(\frac{q}{p}\right)^{N} \big(1 - P(S_T = 0)\big).$$

Thus we have

$$P(S_T = 0) + \left(\frac{q}{p}\right)^{N} \big(1 - P(S_T = 0)\big) = \left(\frac{q}{p}\right)^{k}.$$

Solving this equation for $P(S_T = 0)$ gives

$$P(S_T = 0) = \frac{(q/p)^k - (q/p)^N}{1 - (q/p)^N}.$$
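The formula can again be checked by simulation (an illustrative sketch; $p = 0.6$, $k = 4$, $N = 10$ are arbitrary choices consistent with $p \neq q$).

```python
import numpy as np

rng = np.random.default_rng(7)
p, k, N, trials = 0.6, 4, 10, 10_000
q = 1 - p
hits0 = 0
for _ in range(trials):
    s = k
    while 0 < s < N:                        # walk until it hits 0 or N
        s += 1 if rng.random() < p else -1
    hits0 += (s == 0)
r = q / p
print("simulated:", hits0 / trials, " theory:", (r**k - r**N) / (1 - r**N))
```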

Example 5.9 Let $X_1, \ldots, X_n, \ldots$ be independent, integrable, non-negative random variables with the same mean $E(X_i) = \mu$. Let $T$ be a stopping time w.r.t. $\{\mathcal{F}_n = \sigma(X_1, X_2, \ldots, X_n), n \ge 0\}$ such that $E[T] < \infty$. Show that

$$E\Big[\sum_{i=1}^{T} X_i\Big] = E[T]\,\mu.$$

Solution. Let $Y_0 = 0$ and $Y_n = \sum_{i=1}^{n} X_i - n\mu$. Then $\{Y_n, n \ge 0\}$ is a martingale. Indeed,

$$E[Y_{n+1} \mid \mathcal{F}_n] = E\Big[\sum_{i=1}^{n} X_i - n\mu + X_{n+1} - \mu \,\Big|\, \mathcal{F}_n\Big] = E[Y_n + X_{n+1} - \mu \mid \mathcal{F}_n] = Y_n + E(X_{n+1}) - \mu = Y_n.$$

In particular, the stopped process $Y_{T \wedge n}$ is also a martingale. This yields

$$E[Y_{T \wedge n}] = E\Big[\sum_{i=1}^{T \wedge n} X_i - (T \wedge n)\mu\Big] = 0.$$

Since the $X_i$ are non-negative, both $\sum_{i=1}^{T \wedge n} X_i$ and $(T \wedge n)\mu$ increase with $n$, and by the monotone convergence theorem we arrive at

$$E[T]\,\mu = \lim_{n \to \infty} E[(T \wedge n)\mu] = \lim_{n \to \infty} E\Big[\sum_{i=1}^{T \wedge n} X_i\Big] = E\Big[\sum_{i=1}^{T} X_i\Big].$$

Example 5.10 If $Z_n$, $n \ge 1$ is a martingale w.r.t. $\{\mathcal{F}_n, n \ge 1\}$ such that each $|Z_n|^p$ is integrable, prove that $|Z_n|^p$, $n \ge 1$ is a submartingale for all $p \ge 1$.

Solution. Since $Z_n$, $n \ge 1$ is a martingale, $E[Z_{n+1} \mid \mathcal{F}_n] = Z_n$. By the conditional Jensen inequality (the function $x \mapsto |x|^p$ is convex for $p \ge 1$), it follows that

$$|Z_n|^p = \big|E[Z_{n+1} \mid \mathcal{F}_n]\big|^p \le E[|Z_{n+1}|^p \mid \mathcal{F}_n].$$

Therefore $|Z_n|^p$, $n \ge 1$ is a submartingale.
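Example 5.9 is Wald's identity, and it is instructive to test it with a stopping rule that genuinely depends on the data (an illustrative sketch; the exponential distribution, the mean $\mu = 2$, and the rule "stop the first time $X_n > 3$" are all arbitrary choices, the rule being a legitimate stopping time since it depends only on $X_1, \ldots, X_n$).

```python
import numpy as np

rng = np.random.default_rng(8)
mu, trials = 2.0, 20_000
totals, times = np.zeros(trials), np.zeros(trials)
for j in range(trials):
    s, n = 0.0, 0
    while True:
        x = rng.exponential(mu)             # X_i >= 0 with mean mu
        s += x
        n += 1
        if x > 3:                           # stopping rule: T = min{n : X_n > 3}
            break
    totals[j], times[j] = s, n
print("E[sum_{i<=T} X_i] ~", totals.mean())
print("E[T] * mu        ~", times.mean() * mu)   # the two should agree
```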

Doob's maximal inequality and the martingale convergence theorem

Theorem 5.11 Let $X = (X_n, n \ge 0)$ be a martingale such that $E[|X_n|^p] < \infty$, $n = 0, 1, \ldots$ for some $p \ge 1$. Then for every $N$ and every $\lambda > 0$,

$$P\Big(\max_{0 \le n \le N} |X_n| \ge \lambda\Big) \le \frac{E[|X_N|^p]}{\lambda^p},$$

and if $p > 1$,

$$E\Big[\max_{0 \le n \le N} |X_n|^p\Big] \le \Big(\frac{p}{p-1}\Big)^p E[|X_N|^p].$$

Proof (of the first inequality). Let

$$T = \inf\{n : |X_n| \ge \lambda\} \wedge N.$$

Then $T$ is a bounded stopping time. Since $|X_n|^p$, $n \ge 1$ is a submartingale (Example 5.10), we have in particular $E[|X_T|^p] \le E[|X_N|^p]$. Note that

$$\Big\{\max_{0 \le n \le N} |X_n| \ge \lambda\Big\} \subseteq \{|X_T| \ge \lambda\} \quad \text{and} \quad \Big\{\max_{0 \le n \le N} |X_n| < \lambda\Big\} \subseteq \{T = N\}.$$

We have

$$E[|X_N|^p] \ge E[|X_T|^p] = \int_{\{\max_{0 \le n \le N} |X_n| \ge \lambda\}} |X_T|^p \, dP + \int_{\{\max_{0 \le n \le N} |X_n| < \lambda\}} |X_T|^p \, dP \ge \lambda^p\, P\Big(\max_{0 \le n \le N} |X_n| \ge \lambda\Big).$$

This immediately gives

$$P\Big(\max_{0 \le n \le N} |X_n| \ge \lambda\Big) \le \frac{E[|X_N|^p]}{\lambda^p}.$$

For a real-valued $(\mathcal{F}_n)$-adapted process $(X_n)_{n \ge 0}$ and an interval $[a, b]$, define a sequence of stopping times $\tau_n$ as follows:

$$\tau_1 = \min\{n : X_n \le a\}, \qquad \tau_2 = \min\{n \ge \tau_1 : X_n \ge b\}, \quad \ldots,$$
$$\tau_{2k+1} = \min\{n \ge \tau_{2k} : X_n \le a\}, \qquad \tau_{2k+2} = \min\{n \ge \tau_{2k+1} : X_n \ge b\}, \quad \ldots$$

We set

$$U_N^X(a, b)(\omega) = \max\{k : \tau_{2k}(\omega) \le N\}.$$

Then $U_N^X(a, b)$ is the number of upcrossings of the interval $[a, b]$ by $(X_n)_{n=0}^N$.
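The $\tau_k$ construction translates directly into code. The sketch below (illustrative, not from the notes) counts upcrossings of an interval by alternately waiting for the path to drop to $a$ and then rise to $b$.

```python
import numpy as np

def upcrossings(path, a, b):
    """Number of upcrossings of [a, b], following the tau_k construction."""
    count, below = 0, False
    for x in path:
        if not below and x <= a:
            below = True         # tau_{2k+1}: the path has dropped to a or lower
        elif below and x >= b:
            below = False        # tau_{2k+2}: an upcrossing of [a, b] is completed
            count += 1
    return count

rng = np.random.default_rng(9)
path = np.cumsum(rng.choice([1, -1], size=500))
print("upcrossings of [-2, 2]:", upcrossings(path, -2, 2))
```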

Theorem 5.12 Let $(X_n)$ be a supermartingale. We have

$$P(U_N^X(a, b) > j) \le \frac{1}{b-a} \int_{\{U_N^X(a,b) = j\}} (X_N - a)^- \, dP, \tag{5.1}$$

$$E[U_N^X(a, b)] \le \frac{1}{b-a}\, E[(X_N - a)^-]. \tag{5.2}$$

Proof. We can assume that the process is stopped at $N$ and that $a = 0$; otherwise consider the process $(X_{n \wedge N} - a)$. Set

$$S = \tau_{2j+1} \wedge (N+1), \qquad T = \tau_{2(j+1)} \wedge (N+1).$$

Then $\{S \le N\} = \{\tau_{2j+1} \le N\}$, and $X_S = X_{\tau_{2j+1}} \le 0$ on $\{S \le N\}$. Note that $\{\tau_{2j+2} \le N\}$ is the event that the process $(X_n)_{n=0}^N$ has at least $j+1$ upcrossings of the interval $[0, b]$. Hence

$$\{U_N^X(0, b) > j\} = \{\tau_{2j+2} \le N\} = \{S \le N,\ X_T \ge b\},$$

while

$$\{S \le N,\ X_T < b\} = \{S \le N,\ T = N+1\} \subseteq \{U_N^X(0, b) = j\}.$$

Thus, using optional sampling for the supermartingale $X$ at the bounded stopping times $S \le T$ (which gives $\int_{\{S \le N\}} X_T \, dP \le \int_{\{S \le N\}} X_S \, dP$, the event $\{S \le N\}$ belonging to $\mathcal{F}_S$),

$$b\,P(U_N^X(0, b) > j) = \int_{\{U_N^X(0,b) > j\}} b \, dP \le \int_{\{S \le N,\ X_T \ge b\}} X_T \, dP = \int_{\{S \le N\}} X_T \, dP - \int_{\{S \le N,\ X_T < b\}} X_T \, dP$$
$$\le \int_{\{S \le N\}} X_S \, dP - \int_{\{S \le N,\ T = N+1\}} X_N \, dP \le 0 + \int_{\{S \le N,\ T = N+1\}} X_N^- \, dP \le \int_{\{U_N^X(0,b) = j\}} X_N^- \, dP.$$

(On $\{S \le N,\ X_T < b\}$ we have $T = N+1$ and, the process being stopped at $N$, $X_T = X_N$.) Summing this inequality over $j = 0, 1, 2, \ldots$ we obtain

$$b\, E[U_N^X(0, b)] = b \sum_{j \ge 0} P(U_N^X(0, b) > j) \le E[(X_N)^-], \quad \text{i.e.} \quad E[U_N^X(0, b)] \le \frac{1}{b}\, E[(X_N)^-]. \tag{5.3}$$

Theorem 5.13 Let $X = (X_n, n \ge 0)$ be a martingale such that $\sup_n E[|X_n|^p] < \infty$ for some $p \ge 1$. Then

$$X_\infty(\omega) = \lim_{n \to \infty} X_n(\omega)$$

exists almost surely.

Proof. For $a < b$, set

$$U^X(a, b) = \lim_{N \to \infty} U_N^X(a, b),$$

the total number of upcrossings of $[a, b]$ by the process $(X_n)_{n \ge 0}$. By Theorem 5.12 and monotone convergence, we have

$$E[U^X(a, b)] = \lim_{N \to \infty} E[U_N^X(a, b)] \le \frac{1}{b-a} \sup_N E[(X_N - a)^-] < \infty.$$

This yields that $P(W_{a,b}) = 0$, where $W_{a,b} = \{U^X(a, b) = \infty\}$. Set

$$V_{a,b} = \{\liminf_{n \to \infty} X_n < a,\ \limsup_{n \to \infty} X_n > b\}.$$

Then $V_{a,b} \subseteq W_{a,b}$, and hence $P(V_{a,b}) = 0$. Since

$$\Big\{\liminf_{n \to \infty} X_n < \limsup_{n \to \infty} X_n\Big\} = \bigcup_{a < b,\ a, b \in \mathbb{Q}} V_{a,b},$$

where $\mathbb{Q}$ denotes the set of rational numbers, we deduce that

$$P\Big(\Big\{\liminf_{n \to \infty} X_n < \limsup_{n \to \infty} X_n\Big\}\Big) = 0.$$

Hence

$$P\big(\{\lim_{n \to \infty} X_n \text{ exists}\}\big) = 1.$$

Example 5.14 Let $\{\mathcal{F}_n, n \ge 0\}$ be a sequence of increasing $\sigma$-fields and let $Z$ be an integrable random variable. Set $X_n = E[Z \mid \mathcal{F}_n]$, $n \ge 0$. Explain why the limit $X_\infty = \lim_{n \to \infty} X_n$ exists.

Solution. We know from previous examples that $X_n$, $n \ge 0$ is a martingale. Moreover, we have

$$\sup_n E[|X_n|] \le \sup_n E\big[E[|Z| \mid \mathcal{F}_n]\big] = E[|Z|] < \infty.$$

By Theorem 5.13, $X_\infty = \lim_{n \to \infty} X_n$ exists.

Example 5.15 Let $X_1, X_2, \ldots, X_n, \ldots$ be independent random variables with $E[X_i] = \mu_i$ and $\mathrm{Var}(X_i) = \sigma_i^2$. If $\sum_i |\mu_i| < \infty$ and $\sum_i \sigma_i^2 < \infty$, show that $\sum_i X_i(\omega)$ converges almost surely.

Solution. Set $Y_n = \sum_{i=1}^{n} (X_i - \mu_i)$ and $\mathcal{F}_n = \sigma(X_1, X_2, \ldots, X_n)$. It is easy to verify that $Y_n$, $n \ge 1$ is a martingale. Furthermore,

$$\sup_n E[Y_n^2] = \sup_n \sum_{i=1}^{n} E[(X_i - \mu_i)^2] = \sum_{i=1}^{\infty} \sigma_i^2 < \infty.$$

It follows from the martingale convergence theorem that

$$\lim_{n \to \infty} Y_n = \lim_{n \to \infty} \sum_{i=1}^{n} (X_i - \mu_i) \quad \text{exists.}$$

Since by assumption $\lim_{n \to \infty} \sum_{i=1}^{n} \mu_i$ exists, we conclude that

$$\sum_{i=1}^{\infty} X_i(\omega) = \lim_{n \to \infty} \sum_{i=1}^{n} X_i \quad \text{exists.}$$
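To close, Example 5.15 can be visualised numerically (an illustrative sketch; the choices $X_i \sim N(\mu_i, \sigma_i^2)$ with $\mu_i = 2^{-i}$ and $\sigma_i = 1/i$ are arbitrary but satisfy both summability conditions). Each simulated path of partial sums settles down to a limit, as the theorem predicts.

```python
import numpy as np

rng = np.random.default_rng(10)
n = 2_000
i = np.arange(1, n + 1)
mu, sigma = 2.0 ** (-i), 1.0 / i          # sum |mu_i| and sum sigma_i^2 both finite
for _ in range(3):                        # three independent sample paths
    X = rng.normal(mu, sigma)             # one realisation of X_1, ..., X_n
    partial = np.cumsum(X)
    print(np.round(partial[[9, 99, 999, 1999]], 3))   # the tail flattens out
```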