Martingale Theory for Finance

Tusheng Zhang

October 27, 2015

1 Introduction
2 Probability spaces and σ-fields
3 Integration with respect to a probability measure
4 Conditional expectation
5 Martingales

For a family {X_1, X_2, ..., X_n} of random variables, denote by σ(X_1, X_2, ..., X_n) the smallest σ-field containing the events of the form {ω : a < X_k(ω) < b}, k = 1, ..., n, for all choices of a, b. σ(X_1, X_2, ..., X_n) is called the σ-field generated by X_1, X_2, ..., X_n. Random variables determined by σ(X_1, X_2, ..., X_n) are functions of X_1, X_2, ..., X_n.

To introduce the notion of martingales we begin with an example. Consider a series of games decided by the tosses of a coin, in which in each round we either win 1 with probability p or lose 1 with probability q = 1 − p. Let X_i denote the net gain in the i-th round. Then X_i, i = 1, 2, ... are independent random variables with

P(X_i = 1) = p,  P(X_i = −1) = q,

and so E(X_i) = p − q. Our total net gain (possibly negative) after the n-th round is given by S_0 = 0 and

S_n = X_1 + X_2 + ... + X_n,  n = 1, 2, ...

Let F_n = σ(X_1, X_2, ..., X_n) denote the σ-field generated by X_1, X_2, ..., X_n. Then F_n ⊂ F_{n+1}, and F_n can be regarded as the history of the games up to time n (the n-th round). Let us now compute the average gain after the (n+1)-th round given the history up to time n. We have

E[S_{n+1} | F_n] = E[S_n + X_{n+1} | F_n]
= E[S_n | F_n] + E[X_{n+1} | F_n] = S_n + E[X_{n+1}],

where we used the fact that S_n = X_1 + X_2 + ... + X_n is determined by F_n and X_{n+1} is independent of F_n. Thus,

E[S_{n+1} | F_n] = S_n + p − q  { = S_n if the game is fair, i.e. p = q = 1/2;  > S_n if p > q;  < S_n if p < q. }

In the first case the game is fair, and {S_n, n ≥ 0} is called a martingale. It is called a submartingale in the second case and a supermartingale in the third case.

Definition 5.1 A sequence of random variables Z_0, Z_1, Z_2, ..., Z_n, ... is said to be a martingale (submartingale, or supermartingale) with respect to an increasing sequence F_0 ⊂ F_1 ⊂ F_2 ⊂ ... of σ-fields if
(i) Z_n is determined by F_n,
(ii) Z_n is integrable, i.e., E[|Z_n|] < ∞,
(iii) E[Z_{n+1} | F_n] = Z_n (respectively E[Z_{n+1} | F_n] ≥ Z_n, E[Z_{n+1} | F_n] ≤ Z_n).

Remarks.
1. The notion of martingales is a mathematical formulation of fair games.
2. The typical choice of F_n is F_n = σ(Z_1, Z_2, ..., Z_n), the σ-field generated by Z_1, Z_2, ..., Z_n.
3. If Z_n, n ≥ 0 is a martingale, then E[Z_n] = E[E[Z_{n+1} | F_n]] = E[Z_{n+1}]. The expectation remains constant for all n ≥ 0.
4. If Z_n, n ≥ 0 is a martingale, then we have

E[Z_{n+2} | F_n] = E[E[Z_{n+2} | F_{n+1}] | F_n] = E[Z_{n+1} | F_n] = Z_n.

More generally, it holds that for any m > n, E[Z_m | F_n] = Z_n.

Examples. (1) If X_0, X_1, X_2, ..., X_n, ... are independent, integrable random variables with E[X_i] = 0 for all i, then S_0 = 0,

S_n = X_1 + X_2 + ... + X_n,  n ≥ 1,

is a martingale with respect to F_n = σ(X_1, X_2, ..., X_n). Since S_n is a function of X_1, X_2, ..., X_n, it is F_n-determined, and it is integrable as a finite sum of integrable random variables. We are left to check the third condition (iii). In fact,

E[S_{n+1} | F_n]
= E[S_n + X_{n+1} | F_n] = E[S_n | F_n] + E[X_{n+1} | F_n] = S_n + E[X_{n+1}] = S_n.

So (iii) is true and S_n, n ≥ 0 is a martingale.

(2) Let Y be an integrable random variable and {F_n, n ≥ 1} a sequence of increasing σ-fields. Define Z_n = E[Y | F_n], n ≥ 1. Then {Z_n, n ≥ 1} is a martingale w.r.t. {F_n, n ≥ 1}.

We need to show that {Z_n, n ≥ 1} satisfies the definition of a martingale. By the definition of the conditional expectation, Z_n is F_n-determined. For (ii), we notice that |Z_n| ≤ E[|Y| | F_n], hence E[|Z_n|] ≤ E[|Y|] < ∞. Let us now check (iii). By the tower property of the conditional expectation,

E[Z_{n+1} | F_n] = E[E[Y | F_{n+1}] | F_n] = E[Y | F_n] = Z_n.

This proves (iii).

(3) If Y_1, Y_2, ..., Y_n, ... are independent, integrable random variables with a_i = E(Y_i) ≠ 0 for all i, then

Z_n = (Y_1 Y_2 ... Y_n)/(a_1 a_2 ... a_n),  n = 1, 2, ...

is a martingale w.r.t. F_n = σ(Y_1, Y_2, ..., Y_n), n ≥ 1. Since Z_n is a function of Y_1, Y_2, ..., Y_n, Z_n is determined by F_n. Z_n is integrable because the Y_i are integrable and independent. We now check (iii). We have

E[Z_{n+1} | F_n] = E[(Y_1 Y_2 ... Y_n)/(a_1 a_2 ... a_n) · Y_{n+1}/a_{n+1} | F_n]
= (Y_1 Y_2 ... Y_n)/(a_1 a_2 ... a_n) E[Y_{n+1}/a_{n+1} | F_n]
= (Y_1 Y_2 ... Y_n)/(a_1 a_2 ... a_n) E[Y_{n+1}/a_{n+1}]
= (Y_1 Y_2 ... Y_n)/(a_1 a_2 ... a_n) = Z_n.

Example 5.2 Let {S_n}_{n≥0} be the net gain process in a series of fair games, so that S_0 = 0, S_n = X_1 + X_2 + ... + X_n, n ≥ 1, and P(X_i = 1) = 1/2, P(X_i = −1) = 1/2. We already know that {S_n}_{n≥0} is a martingale w.r.t. F_n = σ(X_1, X_2, ..., X_n), n ≥ 1. Prove that {Y_n = S_n² − n, n ≥ 0} is also a martingale w.r.t. F_n = σ(X_1, X_2, ..., X_n), n ≥ 1.

Proof. Since Y_n is a function of X_1, X_2, ..., X_n, Y_n is F_n-measurable. As |Y_n| ≤ n + n², Y_n is integrable. It remains to show that E[Y_{n+1} | F_n] = Y_n. To this end, writing

Y_{n+1} = S_{n+1}² − (n + 1) = (S_n + X_{n+1})² − (n + 1)
= S_n² − n + 2 S_n X_{n+1} + X_{n+1}² − 1 = Y_n + 2 S_n X_{n+1} + X_{n+1}² − 1,

we have

E[Y_{n+1} | F_n] = E[Y_n | F_n] + E[2 S_n X_{n+1} | F_n] + E[X_{n+1}² | F_n] − 1
= Y_n + 2 S_n E[X_{n+1} | F_n] + E[X_{n+1}²] − 1
= Y_n + 2 S_n E[X_{n+1}] + E[X_{n+1}²] − 1
= Y_n + 0 + 1 − 1 = Y_n.

Example 5.3 Let X_1, X_2, ..., X_n, ... be a sequence of independent random variables with P(X_i = 1) = p, P(X_i = −1) = q. Set S_n = X_1 + X_2 + ... + X_n and define

Z_n = (q/p)^{S_n},  n ≥ 1.

Show that {Z_n, n ≥ 1} is a martingale w.r.t. F_n = σ(X_1, X_2, ..., X_n), n ≥ 1.

Proof. Since Z_n is a function of X_1, X_2, ..., X_n, Z_n is F_n-measurable. Since |Z_n| ≤ (q/p)^n + (p/q)^n, Z_n is integrable. Let us check (iii):

E[Z_{n+1} | F_n] = E[(q/p)^{S_{n+1}} | F_n]
= E[(q/p)^{S_n} (q/p)^{X_{n+1}} | F_n]
= (q/p)^{S_n} E[(q/p)^{X_{n+1}} | F_n]
= Z_n E[(q/p)^{X_{n+1}}]
= Z_n {(q/p)^1 P(X_{n+1} = 1) + (q/p)^{−1} P(X_{n+1} = −1)}
= Z_n {(q/p) p + (p/q) q} = Z_n (q + p) = Z_n.

Stopping times

Suppose we play a series of games by tossing a fair coin. As we know, the net gain S_n, n ≥ 1 is a martingale. If we quit at a fixed time n, then E(S_n) = 0, which is not very exciting. However, we can stop playing as soon as our net gain reaches 100. This amounts to saying that we will stop at the random time

T = min{n : S_n = 100}.

Such random times are called stopping times. Here is the definition.
Definition 5.4 Let {F_n, n ≥ 1} be an increasing sequence of σ-fields. A random variable T(ω) taking values in the set {0, 1, 2, 3, ...} is called a stopping time (or optional time) w.r.t. {F_n, n ≥ 1} if for every n, {T ≤ n} ∈ F_n.

Example 5.5 Let S_n = X_1 + X_2 + ... + X_n, n ≥ 1 be the net gain process in a series of games. Set F_n = σ(X_1, X_2, ..., X_n), n ≥ 1. Define T = min{n : S_n = 100}. Then T is a stopping time w.r.t. {F_n, n ≥ 1}. In fact, for every n,

{T ≤ n} = ∪_{k=1}^n {S_k = 100} ∈ F_n.

Stopped processes

Let T be a stopping time and let Z_n, n ≥ 1 be a sequence of random variables. Set T ∧ n = min(T, n). Define

Ẑ_n = Z_{T∧n} = { Z_n if n ≤ T(ω);  Z_{T(ω)} if n > T(ω). }

Z_{T∧n}, n ≥ 1 is the process Z stopped at the stopping time T.

Theorem 5.6 If Z_0, Z_1, ..., Z_n, ... is a martingale w.r.t. {F_n, n ≥ 1}, then the stopped process Y_n = Z_{T∧n} is also a martingale w.r.t. {F_n, n ≥ 1}. In particular, E[Z_{T∧n}] = E[Z_0].

Proof. Observe that

Y_{n+1} = Z_{T∧(n+1)} = Z_{T∧n} + I_{{T ≥ n+1}}(Z_{n+1} − Z_n) = Y_n + I_{{T ≥ n+1}}(Z_{n+1} − Z_n).

In fact, if T ≤ n, the left side equals the right side, both being Z_T; while if T ≥ n + 1, the left side again equals the right side, both being Z_{n+1}. Thus we have

E[Y_{n+1} | F_n] = E[Y_n + I_{{T ≥ n+1}}(Z_{n+1} − Z_n) | F_n].

Since {T ≥ n + 1} = ({T ≤ n})^c ∈ F_n, it follows that

E[Y_n + I_{{T ≥ n+1}}(Z_{n+1} − Z_n) | F_n] = Y_n + I_{{T ≥ n+1}} E[Z_{n+1} − Z_n | F_n] = Y_n + 0,
which completes the proof.

If Z_n, n ≥ 0 is a martingale, then we know that E[Z_n] = E[Z_{n−1}] = ... = E[Z_0]. Now the question is what will happen if the deterministic time n is replaced by a stopping time T. Namely, will E[Z_T] = E[Z_0] still be true? Before answering the question, let us look at one example. Let S_n, n ≥ 0 denote the net gain in a series of fair games. Define T = min{n : S_n = 10}, the time at which the gain reaches 10. Then

E[S_T] = 10 ≠ E[S_0] = 0.

But the following theorem holds.

Theorem 5.7 (Doob's optional stopping theorem). Let Z_0, Z_1, ... be a martingale and T an almost surely finite stopping time. In each of the following three cases, we have E[Z_T] = E[Z_0].
Case (i): T is bounded, i.e., there is an integer m such that T(ω) ≤ m.
Case (ii): The sequence Z_{T∧n}, n ≥ 0 is bounded, in the sense that there is a constant K such that |Z_{T∧n}| ≤ K for all n.
Case (iii): E[T] < ∞ and the steps Z_n − Z_{n−1} are bounded, i.e., there is a constant C such that |Z_n − Z_{n−1}| ≤ C for all n.

Proof. The proof is an application of the dominated convergence theorem. First of all, we note that Z_{T∧n} → Z_T as n → ∞. Since the stopped process Z_{T∧n} is a martingale, we always have

E[Z_{T∧n}] = E[Z_0].

Case (i): Since T ≤ m, we have T ∧ m = T and so E[Z_T] = E[Z_{T∧m}] = E[Z_0].

Case (ii): Since |Z_{T∧n}| ≤ K for all n, it follows from the dominated convergence theorem that

E[Z_T] = lim_{n→∞} E[Z_{T∧n}] = E[Z_0].

Case (iii): Write

Z_{T∧n} = (Z_{T∧n} − Z_{T∧(n−1)}) + (Z_{T∧(n−1)} − Z_{T∧(n−2)}) + ... + (Z_2 − Z_1) + (Z_1 − Z_0) + Z_0.

Consequently,

|Z_{T∧n}| ≤ |Z_{T∧n} − Z_{T∧(n−1)}| + |Z_{T∧(n−1)} − Z_{T∧(n−2)}| + ... + |Z_2 − Z_1| + |Z_1 − Z_0| + |Z_0|
≤ C + C + ... + C + |Z_0| ≤ (T ∧ n) C + |Z_0| ≤ C T + |Z_0|.

Since E[CT + |Z_0|] < ∞, by virtue of the dominated convergence theorem we have

E[Z_T] = lim_{n→∞} E[Z_{T∧n}] = E[Z_0].

Application of Doob's optional stopping theorem: the gambler's ruin problem

Gamblers A and B play a series of games against each other in which a fair coin is tossed repeatedly. In each game gambler A wins or loses 1 according as the toss results in a head or a tail respectively. The initial capital of gambler A is a and that of gambler B is b, and they continue playing until one of them is ruined. Determine the probability that A will be ruined and also the expected number of games played.

Let Ŝ_n be the fortune of gambler A after the n-th game. Then

Ŝ_n = a + X_1 + X_2 + ... + X_n = a + S_n,

where X_i, i = 1, 2, ... are independent random variables with P(X_i = 1) = 1/2, P(X_i = −1) = 1/2. The game will stop at the time gambler A or B is ruined, i.e., the game stops at

T = min{n : Ŝ_n = 0 or Ŝ_n = a + b} = min{n : S_n = −a or S_n = b}.

As discussed before, {S_n, n ≥ 0} forms a martingale. Since S_T = −a or S_T = b, we have

(1)  P(S_T = −a) + P(S_T = b) = 1.

Note that {gambler A is ruined} = {S_T = −a}. Since |S_{T∧n}| ≤ a + b for all n ≥ 1, it follows from Doob's theorem that E[S_T] = E[S_0] = 0, which is

(2)  (−a) P(S_T = −a) + b P(S_T = b) = 0.

Solve (1) and (2) together to get

P(S_T = −a) = b/(a + b),  P(S_T = b) = a/(a + b).

Next we will find the expected number of games E(T). To this end, define Y_n = S_n² − n, n ≥ 0. In Example 5.2 we showed that Y_n is a martingale. So the stopped process Y_{T∧n} = S_{T∧n}² − (T ∧ n) is also a martingale. Thus we have

E[Y_{T∧n}] = E[S_{T∧n}²] − E[T ∧ n] = 0.
This yields, by the monotone convergence theorem applied to T ∧ n and the dominated convergence theorem applied to the bounded sequence S_{T∧n}², that

E[T] = lim_{n→∞} E[T ∧ n] = lim_{n→∞} E[S_{T∧n}²] = E(S_T²)
= (−a)² P(S_T = −a) + b² P(S_T = b)
= a² b/(a + b) + b² a/(a + b) = ab.

Example 5.8 Consider a simple random walk {S_n, n ≥ 0} with 0 < S_0 = k < N, for which each step is rightwards with probability 0 < p < 1 and leftwards with probability q = 1 − p, i.e., S_n = S_0 + X_1 + ... + X_n with P(X_i = 1) = p, P(X_i = −1) = q. Assume p ≠ q. Use Doob's optional stopping theorem to find the probability that the random walk hits 0 before hitting N.

Solution. Write X_1, X_2, ..., X_n, ... for the independent steps of the walk. Then S_0 = k, S_n = k + X_1 + ... + X_n with P(X_i = 1) = p, P(X_i = −1) = q. Define

Z_n = (q/p)^{S_n},  n ≥ 0.

Then, as we have seen before, {Z_n, n ≥ 0} is a martingale w.r.t. {F_n = σ(X_1, X_2, ..., X_n), n ≥ 0}. Let

T = min{n : S_n = 0 or S_n = N}

be the first time at which the random walk hits 0 or N. Then S_T = 0 means that the random walk hits 0 before reaching N, and S_T = N implies that the random walk hits N before reaching 0. Since

|Z_{T∧n}| = (q/p)^{S_{T∧n}} ≤ max_{0 ≤ l ≤ N} (q/p)^l = M,

by Doob's optional stopping theorem,

E[Z_T] = E[Z_0] = (q/p)^k.

On the other hand,

E(Z_T) = (q/p)^0 P(S_T = 0) + (q/p)^N P(S_T = N) = P(S_T = 0) + (q/p)^N (1 − P(S_T = 0)).

Thus we have

P(S_T = 0) + (q/p)^N (1 − P(S_T = 0)) = (q/p)^k.

Solve this equation for P(S_T = 0) to get

P(S_T = 0) = ((q/p)^k − (q/p)^N) / (1 − (q/p)^N).
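Both stopping-time calculations above are easy to test by simulation. The following is a minimal Monte Carlo sketch (the function names and the particular values a = 3, b = 7, k = 4, N = 10, p = 0.55 are my own choices for illustration, not from the notes): it estimates the fair-game ruin probability and expected duration, and the biased walk's probability of hitting 0 before N, for comparison with the formulas P(ruin) = b/(a+b), E[T] = ab, and ((q/p)^k − (q/p)^N)/(1 − (q/p)^N).

```python
import random

def fair_ruin(a, b, n_paths, seed=1):
    """Fair gambler's ruin: estimate P(A is ruined) and E[T] by simulation."""
    rng = random.Random(seed)
    ruined, steps_total = 0, 0
    for _ in range(n_paths):
        s, steps = 0, 0
        while -a < s < b:                 # play until S_n = -a or S_n = b
            s += 1 if rng.random() < 0.5 else -1
            steps += 1
        ruined += (s == -a)               # A is ruined when S_T = -a
        steps_total += steps
    return ruined / n_paths, steps_total / n_paths

def biased_hit_zero(k, N, p, n_paths, seed=2):
    """Estimate P(walk from k hits 0 before N); steps are +1 w.p. p, -1 w.p. q."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_paths):
        s = k
        while 0 < s < N:
            s += 1 if rng.random() < p else -1
        hits += (s == 0)
    return hits / n_paths

# Fair game, a = 3, b = 7: theory gives P(ruin) = b/(a+b) = 0.7 and E[T] = ab = 21.
p_ruin, mean_T = fair_ruin(3, 7, 20000)

# Biased walk, k = 4, N = 10, p = 0.55: compare with the optional-stopping formula.
k, N, p = 4, 10, 0.55
r = (1 - p) / p                            # r = q/p, assumed != 1
exact = (r ** k - r ** N) / (1 - r ** N)
est = biased_hit_zero(k, N, p, 20000)
```

With 20000 sample paths the estimates agree with the martingale answers to within Monte Carlo error.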
Example 5.9 Let X_1, ..., X_n, ... be independent, integrable, non-negative random variables with the same mean E(X_i) = μ. Let T be a stopping time w.r.t. {F_n = σ(X_1, X_2, ..., X_n), n ≥ 0} such that E[T] < ∞. Show that

E[∑_{i=1}^T X_i] = E[T] μ.

Solution. Let Y_0 = 0 and Y_n = ∑_{i=1}^n X_i − nμ. Then {Y_n, n ≥ 0} is a martingale. Indeed,

E[Y_{n+1} | F_n] = E[∑_{i=1}^n X_i − nμ + X_{n+1} − μ | F_n] = E[Y_n + X_{n+1} − μ | F_n] = Y_n + E(X_{n+1}) − μ = Y_n.

In particular, the stopped process Y_{T∧n} is also a martingale. This yields that

E[Y_{T∧n}] = E[∑_{i=1}^{T∧n} X_i − (T ∧ n)μ] = 0.

By the monotone convergence theorem we arrive at

E[T] μ = lim_{n→∞} E[(T ∧ n)μ] = lim_{n→∞} E[∑_{i=1}^{T∧n} X_i] = E[∑_{i=1}^T X_i].

Example 5.10 If Z_n, n ≥ 1 is a martingale w.r.t. {F_n, n ≥ 1} such that E[|Z_n|^p] < ∞ for all n, prove that |Z_n|^p, n ≥ 1 is a submartingale for all p ≥ 1.

Solution. Since Z_n, n ≥ 1 is a martingale, E[Z_{n+1} | F_n] = Z_n. By the conditional Jensen inequality (x ↦ |x|^p is convex), it follows that

|Z_n|^p = |E[Z_{n+1} | F_n]|^p ≤ E[|Z_{n+1}|^p | F_n].

Therefore |Z_n|^p, n ≥ 1 is a submartingale.

Doob's maximal inequality and martingale convergence theorem

Theorem 5.11 Let X = (X_n, n ≥ 0) be a martingale such that E[|X_n|^p] < ∞, n = 0, 1, ..., for some p ≥ 1. Then for every N and every λ > 0,

P(max_{0≤n≤N} |X_n| ≥ λ) ≤ E[|X_N|^p] / λ^p,

and if p > 1,

E[max_{0≤n≤N} |X_n|^p] ≤ (p/(p − 1))^p E[|X_N|^p].
Proof. Let T = inf{n : |X_n| ≥ λ} ∧ N. Then T is a bounded stopping time. Since |X_n|^p, n ≥ 1 is a submartingale, we have in particular E[|X_T|^p] ≤ E[|X_N|^p]. Note that

{max_{0≤n≤N} |X_n| ≥ λ} ⊂ {|X_T| ≥ λ}  and  {max_{0≤n≤N} |X_n| < λ} ⊂ {T = N}.

We have

E[|X_N|^p] ≥ E[|X_T|^p] = ∫_{{max_{0≤n≤N} |X_n| ≥ λ}} |X_T|^p dP + ∫_{{max_{0≤n≤N} |X_n| < λ}} |X_T|^p dP ≥ λ^p P(max_{0≤n≤N} |X_n| ≥ λ).

This immediately gives

P(max_{0≤n≤N} |X_n| ≥ λ) ≤ E[|X_N|^p] / λ^p.

For a real-valued (F_n)-adapted process (X_n)_{n≥0} and an interval [a, b], define a sequence of stopping times τ_n as follows:

τ_1 = min{n : X_n ≤ a},  τ_2 = min{n ≥ τ_1 : X_n ≥ b},  ...,
τ_{2k+1} = min{n ≥ τ_{2k} : X_n ≤ a},  τ_{2k+2} = min{n ≥ τ_{2k+1} : X_n ≥ b},  ...

We set

U_N^X(a, b)(ω) = max{k : τ_{2k}(ω) ≤ N}.

Then U_N^X(a, b) is the number of upcrossings of the interval [a, b] by (X_n)_{n=0}^N.

Theorem 5.12 Let (X_n) be a supermartingale. We have

P(U_N^X(a, b) > j) ≤ (1/(b − a)) ∫_{{U_N^X(a,b) = j}} (X_N − a)^− dP,  (5.1)

E[U_N^X(a, b)] ≤ (1/(b − a)) E[(X_N − a)^−].  (5.2)
Proof. We can assume that the process is stopped at N and that a = 0; otherwise consider the process (X_{n∧N} − a). Set

S = τ_{2j+1} ∧ (N + 1),  T = τ_{2(j+1)} ∧ (N + 1).

Then {S ≤ N} = {τ_{2j+1} ≤ N}, and X_S = X_{τ_{2j+1}} ≤ 0 on {S ≤ N}. Note that {τ_{2j+2} ≤ N} is the event that the process (X_n)_{n=0}^N has at least j + 1 upcrossings of the interval [0, b]. Hence we have

{U_N^X(0, b) > j} = {τ_{2j+2} ≤ N} = {S ≤ N, X_T ≥ b},

while on the remaining part of {S ≤ N} the (j + 1)-th upcrossing is not completed, so that

{S ≤ N, X_T < b} = {S ≤ N, T = N + 1} ⊂ {U_N^X(0, b) = j}.

Thus, using optional stopping for supermartingales (∫_{{S≤N}} X_T dP ≤ ∫_{{S≤N}} X_S dP, since S ≤ T are bounded stopping times and {S ≤ N} ∈ F_S), and X_T = X_{N+1} = X_N on {T = N + 1} because the process is stopped at N,

b P(U_N^X(0, b) > j) = ∫_{{U_N^X(0,b) > j}} b dP ≤ ∫_{{S≤N, X_T≥b}} X_T dP
= ∫_{{S≤N}} X_T dP − ∫_{{S≤N, T=N+1}} X_N dP
≤ ∫_{{S≤N}} X_S dP − ∫_{{S≤N, T=N+1}} X_N dP
≤ 0 − ∫_{{S≤N, T=N+1}} X_N dP ≤ ∫_{{U_N^X(0,b) = j}} (X_N)^− dP.

This proves (5.1). Summing the above inequality from j = 0 to ∞ and using E[U_N^X(0, b)] = ∑_{j≥0} P(U_N^X(0, b) > j), we obtain

E[U_N^X(0, b)] ≤ (1/b) E[(X_N)^−],  (5.3)

which is (5.2) in the case a = 0.

Theorem 5.13 Let X = (X_n, n ≥ 0) be a martingale such that sup_n E[|X_n|^p] < ∞ for some p ≥ 1. Then

X_∞(ω) = lim_{n→∞} X_n(ω)

exists almost surely.

Proof. For a < b, set

U^X(a, b) = lim_{N→∞} U_N^X(a, b).

U^X(a, b) is the total number of upcrossings of [a, b] by the process (X_n)_{n≥0}. By Theorem 5.12, we have

E[U^X(a, b)] = lim_{N→∞} E[U_N^X(a, b)] ≤ (1/(b − a)) sup_N E[(X_N − a)^−] < ∞.

This yields that P(W_{a,b}) = 0, where W_{a,b} = {U^X(a, b) = ∞}. Set

V_{a,b} = {lim inf_{n→∞} X_n < a, lim sup_{n→∞} X_n > b}.
Then V_{a,b} ⊂ W_{a,b}, and hence P(V_{a,b}) = 0. Since

{lim inf X_n < lim sup X_n} = ∪_{a<b, a,b ∈ Q} V_{a,b},

where Q denotes the set of rational numbers, we deduce that

P({lim inf X_n < lim sup X_n}) = 0.

Hence

P({lim_{n→∞} X_n exists}) = 1.

Example 5.14 Let {F_n, n ≥ 0} be a sequence of increasing σ-fields and let Z be an integrable random variable. Set X_n = E[Z | F_n], n ≥ 0. Explain why the limit X_∞ = lim_{n→∞} X_n exists.

Solution. We know from previous examples that X_n, n ≥ 0 is a martingale. Moreover, we have

sup_n E[|X_n|] ≤ sup_n E[E[|Z| | F_n]] = E[|Z|] < ∞.

By Theorem 5.13, X_∞ = lim_{n→∞} X_n exists almost surely.

Example 5.15 Let X_1, X_2, ..., X_n, ... be independent random variables with E[X_i] = μ_i, Var(X_i) = σ_i². If ∑_{i=1}^∞ |μ_i| < ∞ and ∑_{i=1}^∞ σ_i² < ∞, show that ∑_{i=1}^∞ X_i(ω) converges almost surely.

Solution. Set Y_n = ∑_{i=1}^n (X_i − μ_i) and F_n = σ(X_1, X_2, ..., X_n). It is easy to verify that Y_n, n ≥ 1 is a martingale. Furthermore, by independence,

sup_n E[Y_n²] = sup_n ∑_{i=1}^n E[(X_i − μ_i)²] = ∑_{i=1}^∞ σ_i² < ∞.

It follows from the martingale convergence theorem that

lim_{n→∞} Y_n = lim_{n→∞} ∑_{i=1}^n (X_i − μ_i)  exists almost surely.

Since by assumption lim_{n→∞} ∑_{i=1}^n μ_i exists, we conclude that

∑_{i=1}^∞ X_i(ω) = lim_{n→∞} ∑_{i=1}^n X_i  exists almost surely.
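Example 5.15 can be illustrated numerically. In the sketch below the particular choice X_i = ±1/i with probability 1/2 each is mine, purely for illustration: it gives μ_i = 0 and σ_i² = 1/i², so ∑ σ_i² < ∞ and the example guarantees almost sure convergence of the random series ∑ X_i. On a simulated path the partial sums indeed settle down.

```python
import random

# Independent steps X_i = +1/i or -1/i with probability 1/2 each, so that
# mu_i = 0 and sigma_i^2 = 1/i^2. Since sum(1/i^2) < infinity, Example 5.15
# says the random series sum_i X_i converges almost surely.
rng = random.Random(0)
partial = 0.0
history = []
for i in range(1, 100001):
    partial += (1.0 / i) if rng.random() < 0.5 else -(1.0 / i)
    history.append(partial)

# On this sample path the tail of the partial-sum sequence is nearly constant,
# reflecting convergence: the remaining variance past n = 50000 is about
# sum_{i>50000} 1/i^2, roughly 2e-5.
tail_spread = max(history[50000:]) - min(history[50000:])
```

Plotting `history` shows large early fluctuations that die out as the step sizes 1/i shrink; `tail_spread` is tiny compared with the early oscillations.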