Martingale Theory for Finance


Tusheng Zhang

October 27

Contents:
1. Introduction
2. Probability spaces and σ-fields
3. Integration with respect to a probability measure
4. Conditional expectation
5. Martingales

5 Martingales

For a family $\{X_1, X_2, \ldots, X_n\}$ of random variables, denote by $\sigma(X_1, X_2, \ldots, X_n)$ the smallest σ-field containing the events of the form $\{\omega;\ a < X_k(\omega) < b\}$, $k = 1, \ldots, n$, for all choices of $a, b$. $\sigma(X_1, X_2, \ldots, X_n)$ is called the σ-field generated by $X_1, X_2, \ldots, X_n$. Random variables determined by $\sigma(X_1, X_2, \ldots, X_n)$ are functions of $X_1, X_2, \ldots, X_n$.

To introduce the notion of martingales we begin with an example. Consider a series of games decided by the tosses of a coin, in which in each round we either win 1 with probability $p$ or lose 1 with probability $q = 1 - p$. Let $X_i$ denote the net gain in the $i$-th round. Then $X_i$, $i = 1, 2, \ldots$ are independent random variables with
$$P(X_i = 1) = p, \quad P(X_i = -1) = q,$$
and so $E(X_i) = p - q$. Our total net gain (possibly negative) after the $n$-th round is given by $S_0 = 0$ and
$$S_n = X_1 + X_2 + \cdots + X_n, \quad n = 1, 2, \ldots$$
Let $F_n = \sigma(X_1, X_2, \ldots, X_n)$ denote the σ-field generated by $X_1, X_2, \ldots, X_n$. Then $F_n \subseteq F_{n+1}$, and $F_n$ can be regarded as the history of the games up to time $n$ (the $n$-th round). Let us now compute the average gain after the $(n+1)$-th round given the history up to time $n$. We have
$$E[S_{n+1} \mid F_n] = E[S_n + X_{n+1} \mid F_n]$$

$$= E[S_n \mid F_n] + E[X_{n+1} \mid F_n] = S_n + E[X_{n+1}],$$
where we used the fact that $S_n = X_1 + X_2 + \cdots + X_n$ is determined by $F_n$ and $X_{n+1}$ is independent of $F_n$. Thus,
$$E[S_{n+1} \mid F_n] = S_n + p - q \ \begin{cases} = S_n, & \text{if the game is fair, i.e. } p = q = \tfrac{1}{2}, \\ > S_n, & \text{if } p > q, \\ < S_n, & \text{if } p < q. \end{cases}$$
In the first case, the game is fair, and $\{S_n, n \geq 0\}$ is called a martingale. It is called a submartingale in the second case and a supermartingale in the third case.

Definition 5.1. A sequence of random variables $Z_0, Z_1, Z_2, \ldots, Z_n, \ldots$ is said to be a martingale (submartingale, or supermartingale) with respect to an increasing sequence $F_0 \subseteq F_1 \subseteq F_2 \subseteq \cdots$ of σ-fields if
(i) $Z_n$ is determined by $F_n$,
(ii) $Z_n$ is integrable, i.e., $E[|Z_n|] < \infty$,
(iii) $E[Z_{n+1} \mid F_n] = Z_n$ (respectively $E[Z_{n+1} \mid F_n] \geq Z_n$ for a submartingale, $E[Z_{n+1} \mid F_n] \leq Z_n$ for a supermartingale).

Remarks.
1. The notion of martingales is a mathematical formulation of fair games.
2. The typical choice of $F_n$ is $F_n = \sigma(Z_1, Z_2, \ldots, Z_n)$, the σ-field generated by $Z_1, Z_2, \ldots, Z_n$.
3. If $Z_n$, $n \geq 0$, is a martingale, then
$$E[Z_n] = E[E[Z_{n+1} \mid F_n]] = E[Z_{n+1}].$$
The expectation remains constant for all $n$.
4. If $Z_n$, $n \geq 0$, is a martingale, then we have
$$E[Z_{n+2} \mid F_n] = E[E[Z_{n+2} \mid F_{n+1}] \mid F_n] = E[Z_{n+1} \mid F_n] = Z_n.$$
More generally, it holds that for any $m > n$, $E[Z_m \mid F_n] = Z_n$.

Examples.
(1) If $X_1, X_2, \ldots, X_n, \ldots$ are independent, integrable random variables with $E[X_i] = 0$ for all $i$, then $S_0 = 0$, $S_n = X_1 + X_2 + \cdots + X_n$, $n \geq 1$, is a martingale with respect to $F_n = \sigma(X_1, X_2, \ldots, X_n)$. Since $S_n$ is a function of $X_1, X_2, \ldots, X_n$, it is $F_n$-determined. We are left to check the third condition (iii). In fact, $E[S_{n+1} \mid F_n]$

$$= E[S_n + X_{n+1} \mid F_n] = E[S_n \mid F_n] + E[X_{n+1} \mid F_n] = S_n + E[X_{n+1}] = S_n.$$
So (iii) is true and $S_n$, $n \geq 0$, is a martingale.

(2) Let $Y$ be an integrable r.v. and $\{F_n, n \geq 1\}$ a sequence of increasing σ-fields. Define $Z_n = E[Y \mid F_n]$, $n \geq 1$. Then $\{Z_n, n \geq 1\}$ is a martingale w.r.t. $\{F_n, n \geq 1\}$. We need to show that $\{Z_n, n \geq 1\}$ satisfies the definition of a martingale. By the definition of the conditional expectation, $Z_n$ is $F_n$-determined. For (ii), we notice that $|Z_n| \leq E[|Y| \mid F_n]$, hence $E[|Z_n|] \leq E[|Y|] < \infty$. Let us now check (iii). By the tower property of the conditional expectation,
$$E[Z_{n+1} \mid F_n] = E[E[Y \mid F_{n+1}] \mid F_n] = E[Y \mid F_n] = Z_n.$$
This proves (iii).

(3) If $Y_1, Y_2, \ldots, Y_n, \ldots$ are independent, integrable r.v.'s with $a_i = E(Y_i) \neq 0$ for all $i$, then
$$Z_n = \frac{Y_1 Y_2 \cdots Y_n}{a_1 a_2 \cdots a_n}, \quad n = 1, 2, \ldots$$
is a martingale w.r.t. $F_n = \sigma(Y_1, Y_2, \ldots, Y_n)$, $n \geq 1$. Since $Z_n$ is a function of $Y_1, Y_2, \ldots, Y_n$, $Z_n$ is determined by $F_n$. $Z_n$ is integrable because the $Y_i$ are integrable and independent. We now check (iii). We have
$$E[Z_{n+1} \mid F_n] = E\Big[\frac{Y_1 Y_2 \cdots Y_n}{a_1 a_2 \cdots a_n}\,\frac{Y_{n+1}}{a_{n+1}} \,\Big|\, F_n\Big] = \frac{Y_1 Y_2 \cdots Y_n}{a_1 a_2 \cdots a_n}\, E\Big[\frac{Y_{n+1}}{a_{n+1}} \,\Big|\, F_n\Big] = \frac{Y_1 Y_2 \cdots Y_n}{a_1 a_2 \cdots a_n}\, E\Big[\frac{Y_{n+1}}{a_{n+1}}\Big] = \frac{Y_1 Y_2 \cdots Y_n}{a_1 a_2 \cdots a_n} = Z_n.$$

Example 5.2. Let $\{S_n\}_{n \geq 0}$ be the net gain process in a series of fair games. Then $S_0 = 0$, $S_n = X_1 + X_2 + \cdots + X_n$, $n \geq 1$, and $P(X_i = 1) = \frac{1}{2}$, $P(X_i = -1) = \frac{1}{2}$. We already know that $\{S_n\}_{n \geq 0}$ is a martingale w.r.t. $F_n = \sigma(X_1, X_2, \ldots, X_n)$, $n \geq 1$. Prove that $\{Y_n = S_n^2 - n, n \geq 0\}$ is also a martingale w.r.t. $F_n = \sigma(X_1, X_2, \ldots, X_n)$, $n \geq 1$.

Proof. Since $Y_n$ is a function of $X_1, X_2, \ldots, X_n$, $Y_n$ is $F_n$-measurable. As $|Y_n| \leq n + n^2$, $Y_n$ is integrable. It remains to show that $E[Y_{n+1} \mid F_n] = Y_n$. To this end, writing
$$Y_{n+1} = S_{n+1}^2 - (n+1) = (S_n + X_{n+1})^2 - (n+1)$$

$$= S_n^2 - n + 2 S_n X_{n+1} + X_{n+1}^2 - 1 = Y_n + 2 S_n X_{n+1} + X_{n+1}^2 - 1,$$
we have
$$E[Y_{n+1} \mid F_n] = E[Y_n \mid F_n] + E[2 S_n X_{n+1} \mid F_n] + E[X_{n+1}^2 \mid F_n] - 1$$
$$= Y_n + 2 S_n E[X_{n+1} \mid F_n] + E[X_{n+1}^2] - 1 = Y_n + 2 S_n E[X_{n+1}] + E[X_{n+1}^2] - 1 = Y_n + 0 + 1 - 1 = Y_n.$$

Example 5.3. Let $X_1, X_2, \ldots, X_n, \ldots$ be a sequence of independent random variables with $P(X_i = 1) = p$, $P(X_i = -1) = q$. Set $S_n = X_1 + X_2 + \cdots + X_n$ and define
$$Z_n = \Big(\frac{q}{p}\Big)^{S_n}, \quad n \geq 1.$$
Show that $\{Z_n, n \geq 1\}$ is a martingale w.r.t. $F_n = \sigma(X_1, X_2, \ldots, X_n)$, $n \geq 1$.

Proof. Since $Z_n$ is a function of $X_1, X_2, \ldots, X_n$, $Z_n$ is $F_n$-measurable. Since $|Z_n| \leq (\frac{q}{p})^n + (\frac{p}{q})^n$, $Z_n$ is integrable. Let us check (iii):
$$E[Z_{n+1} \mid F_n] = E\Big[\Big(\frac{q}{p}\Big)^{S_{n+1}} \Big|\, F_n\Big] = E\Big[\Big(\frac{q}{p}\Big)^{S_n}\Big(\frac{q}{p}\Big)^{X_{n+1}} \Big|\, F_n\Big] = \Big(\frac{q}{p}\Big)^{S_n} E\Big[\Big(\frac{q}{p}\Big)^{X_{n+1}} \Big|\, F_n\Big] = Z_n\, E\Big[\Big(\frac{q}{p}\Big)^{X_{n+1}}\Big]$$
$$= Z_n \Big\{ \Big(\frac{q}{p}\Big) P(X_{n+1} = 1) + \Big(\frac{q}{p}\Big)^{-1} P(X_{n+1} = -1) \Big\} = Z_n \Big\{ \frac{q}{p}\,p + \frac{p}{q}\,q \Big\} = Z_n (q + p) = Z_n.$$

Stopping times

Suppose we play a series of games by tossing a fair coin. As we know, the net gain at time $n$, $S_n$, $n \geq 1$, is a martingale. If we quit at a fixed time $n$, then $E(S_n) = 0$. It is not very exciting. However, we can stop playing as soon as our net gain reaches 100. This amounts to saying that we will stop at the random time
$$T = \min\{n;\ S_n = 100\}.$$
Such random times are called stopping times. Here is the definition.

Definition 5.4. Let $\{F_n, n \geq 1\}$ be an increasing sequence of σ-fields. A random variable $T(\omega)$ taking values in the set $\{0, 1, 2, 3, \ldots\}$ is called a stopping time (or optional time) w.r.t. $\{F_n, n \geq 1\}$ if for every $n$, $\{T \leq n\} \in F_n$.

Example 5.5. Let $S_n = X_1 + X_2 + \cdots + X_n$, $n \geq 1$, be the net gain process in a series of games. Set $F_n = \sigma(X_1, X_2, \ldots, X_n)$, $n \geq 1$. Define
$$T = \min\{n;\ S_n = 100\}.$$
Then $T$ is a stopping time w.r.t. $\{F_n, n \geq 1\}$. In fact, for every $n$,
$$\{T \leq n\} = \bigcup_{k=1}^{n} \{S_k = 100\} \in F_n.$$

Stopped processes

Let $T$ be a stopping time and let $Z_n$, $n \geq 1$, be a sequence of random variables. Set $T \wedge n = \min(T, n)$. Define
$$\hat{Z}_n = Z_{T \wedge n} = \begin{cases} Z_n & \text{if } n \leq T(\omega), \\ Z_{T(\omega)} & \text{if } n > T(\omega). \end{cases}$$
$Z_{T \wedge n}$, $n \geq 1$, is the stopped process of $Z$ at the stopping time $T$.

Theorem 5.6. If $Z_0, Z_1, \ldots, Z_n, \ldots$ is a martingale w.r.t. $\{F_n, n \geq 1\}$, then the stopped process $Y_n = Z_{T \wedge n}$ is also a martingale w.r.t. $\{F_n, n \geq 1\}$. In particular, $E[Z_{T \wedge n}] = E[Z_0]$.

Proof. Observe that
$$Y_{n+1} = Z_{T \wedge (n+1)} = Z_{T \wedge n} + I_{\{T \geq n+1\}}(Z_{n+1} - Z_n) = Y_n + I_{\{T \geq n+1\}}(Z_{n+1} - Z_n).$$
In fact, if $T \leq n$, the left side is equal to the right side, both being $Z_T$; while if $T \geq n+1$, the left side is again equal to the right side, both being $Z_{n+1}$. Thus we have
$$E[Y_{n+1} \mid F_n] = E[Y_n + I_{\{T \geq n+1\}}(Z_{n+1} - Z_n) \mid F_n].$$
Since $\{T \geq n+1\} = (\{T \leq n\})^c \in F_n$, it follows that
$$E[Y_n + I_{\{T \geq n+1\}}(Z_{n+1} - Z_n) \mid F_n] = Y_n + I_{\{T \geq n+1\}}\, E[Z_{n+1} - Z_n \mid F_n] = Y_n + 0,$$

which completes the proof.

If $Z_n$, $n \geq 0$, is a martingale, then we know that $E[Z_n] = E[Z_{n-1}] = \cdots = E[Z_0]$. Now the question is what will happen if the deterministic time $n$ is replaced by a stopping time $T$. Namely, will $E[Z_T] = E[Z_0]$ still be true? Before answering the question, let us look at one example. Let $S_n$, $n \geq 0$, denote the capital gain in a series of fair games. Define $T = \min\{n;\ S_n = 10\}$, the time at which the capital reaches 10. Then
$$E[S_T] = E[10] = 10 \neq E[S_0] = 0,$$
so the answer is negative in general. But the following theorem holds.

Theorem 5.7 (Doob's optional stopping theorem). Let $Z_0, Z_1, \ldots$ be a martingale and $T$ an almost surely finite stopping time. In each of the following three cases, we have $E[Z_T] = E[Z_0]$.
Case (i): $T$ is bounded, i.e., there is an integer $m$ such that $T(\omega) \leq m$.
Case (ii): The sequence $Z_{T \wedge n}$, $n \geq 0$, is bounded, in the sense that there is a constant $K$ such that $|Z_{T \wedge n}| \leq K$ for all $n$.
Case (iii): $E[T] < \infty$ and the steps $Z_n - Z_{n-1}$ are bounded, i.e., there is a constant $C$ such that $|Z_n - Z_{n-1}| \leq C$ for all $n$.

Proof. The proof is an application of the dominated convergence theorem. First of all, we note that $Z_{T \wedge n} \to Z_T$ as $n \to \infty$. Since the stopped process $Z_{T \wedge n}$ is a martingale, we always have
$$E[Z_{T \wedge n}] = E[Z_0].$$

Case (i): Since $T \leq m$, we have $T \wedge m = T$ and so $E[Z_T] = E[Z_{T \wedge m}] = E[Z_0]$.

Case (ii): Since $|Z_{T \wedge n}| \leq K$ for all $n$, it follows from the dominated convergence theorem that
$$E[Z_T] = \lim_{n \to \infty} E[Z_{T \wedge n}] = E[Z_0].$$

Case (iii): Write
$$Z_{T \wedge n} = (Z_{T \wedge n} - Z_{T \wedge n - 1}) + (Z_{T \wedge n - 1} - Z_{T \wedge n - 2}) + \cdots + (Z_2 - Z_1) + (Z_1 - Z_0) + Z_0.$$
Consequently,
$$|Z_{T \wedge n}| \leq |Z_{T \wedge n} - Z_{T \wedge n - 1}| + |Z_{T \wedge n - 1} - Z_{T \wedge n - 2}| + \cdots + |Z_2 - Z_1| + |Z_1 - Z_0| + |Z_0|$$

$$\leq C + C + \cdots + C + |Z_0| \leq (T \wedge n)C + |Z_0| \leq CT + |Z_0|.$$
Since $E[CT + |Z_0|] < \infty$, by virtue of the dominated convergence theorem we have
$$E[Z_T] = \lim_{n \to \infty} E[Z_{T \wedge n}] = E[Z_0].$$

Application of Doob's optional stopping theorem: Gambler's ruin problem

Gamblers A and B play a series of games against each other in which a fair coin is tossed repeatedly. In each game gambler A wins or loses 1 according as the toss results in a head or a tail respectively. The initial capital of gambler A is $a$ and that of gambler B is $b$, and they continue playing until one of them is ruined. Determine the probability that A will be ruined and also the expected number of games played.

Let $\hat{S}_n$ be the fortune of gambler A after the $n$-th game. Then
$$\hat{S}_n = a + X_1 + X_2 + \cdots + X_n = a + S_n,$$
where $X_i$, $i = 1, 2, \ldots$ are independent r.v.'s with $P(X_i = 1) = \frac{1}{2}$, $P(X_i = -1) = \frac{1}{2}$. The game will stop at the time gambler A or B is ruined, i.e., the game stops at
$$T = \min\{n;\ \hat{S}_n = 0 \text{ or } \hat{S}_n = a + b\} = \min\{n;\ S_n = -a \text{ or } S_n = b\}.$$
As discussed before, $\{S_n, n \geq 0\}$ forms a martingale. Since $S_T = -a$ or $S_T = b$, we have
$$(1)\quad P(S_T = -a) + P(S_T = b) = 1.$$
Note that $\{\text{Gambler A is ruined}\} = \{S_T = -a\}$. Since $|S_{T \wedge n}| \leq a + b$ for all $n \geq 1$, it follows from Doob's theorem that $E[S_T] = E[S_0] = 0$, which is
$$(2)\quad E[S_T] = (-a)\,P(S_T = -a) + b\,P(S_T = b) = 0.$$
Solving (1) and (2) together, we get
$$P(S_T = -a) = \frac{b}{a+b}, \quad P(S_T = b) = \frac{a}{a+b}.$$
Next we will find the expected number of games $E(T)$. To this end, define $Y_n = S_n^2 - n$, $n \geq 0$. In Example 5.2 we showed (check) that $Y_n$ is a martingale. So the stopped process $Y_{T \wedge n} = S_{T \wedge n}^2 - (T \wedge n)$ is also a martingale. Thus we have
$$E[Y_{T \wedge n}] = E[S_{T \wedge n}^2] - E[T \wedge n] = 0.$$

This yields, by the monotone convergence theorem (applied to $T \wedge n$) and the dominated convergence theorem (applied to $S_{T \wedge n}^2$, which is bounded by $(a+b)^2$), that
$$E[T] = \lim_{n \to \infty} E[T \wedge n] = \lim_{n \to \infty} E[S_{T \wedge n}^2] = E(S_T^2) = (-a)^2 P(S_T = -a) + b^2 P(S_T = b) = \frac{a^2 b}{a+b} + \frac{a b^2}{a+b} = ab.$$

Example 5.8. Consider a simple random walk $\{S_n, n \geq 0\}$ with $0 < S_0 = k < N$, for which each step is rightwards with probability $0 < p < 1$ and leftwards with probability $q = 1 - p$, i.e., $S_n = S_0 + X_1 + \cdots + X_n$ with $P(X_i = 1) = p$, $P(X_i = -1) = q$. Assume $p \neq q$. Use Doob's optional stopping theorem to find the probability that the random walk hits 0 before hitting $N$.

Solution. Write $X_1, X_2, \ldots, X_n, \ldots$ for the independent steps of the walk. Then $S_0 = k$, $S_n = k + X_1 + \cdots + X_n$ with $P(X_i = 1) = p$, $P(X_i = -1) = q$. Define $Z_n = (\frac{q}{p})^{S_n}$, $n \geq 0$. Then, as we have seen before, $\{Z_n, n \geq 0\}$ is a martingale w.r.t. $\{F_n = \sigma(X_1, X_2, \ldots, X_n), n \geq 0\}$. Let
$$T = \min\{n;\ S_n = 0 \text{ or } S_n = N\}$$
be the first time at which the random walk hits 0 or $N$. Then $S_T = 0$ means that the random walk hits 0 before reaching $N$, and $S_T = N$ means that the random walk hits $N$ before reaching 0. Since
$$Z_{T \wedge n} = \Big(\frac{q}{p}\Big)^{S_{T \wedge n}} \leq \max_{0 \leq l \leq N} \Big(\frac{q}{p}\Big)^{l} = M,$$
by Doob's optional stopping theorem,
$$E[Z_T] = E[Z_0] = \Big(\frac{q}{p}\Big)^{k}.$$
On the other hand,
$$E(Z_T) = \Big(\frac{q}{p}\Big)^{0} P(S_T = 0) + \Big(\frac{q}{p}\Big)^{N} P(S_T = N) = P(S_T = 0) + \Big(\frac{q}{p}\Big)^{N}\big(1 - P(S_T = 0)\big).$$
Thus we have
$$P(S_T = 0) + \Big(\frac{q}{p}\Big)^{N}\big(1 - P(S_T = 0)\big) = \Big(\frac{q}{p}\Big)^{k}.$$
Solving this equation for $P(S_T = 0)$ gives
$$P(S_T = 0) = \frac{(\frac{q}{p})^{k} - (\frac{q}{p})^{N}}{1 - (\frac{q}{p})^{N}}.$$
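The closed forms obtained above via optional stopping can be sanity-checked numerically by verifying that each satisfies the corresponding first-step (one-step conditioning) equation with the right boundary values. The following sketch does exactly that; all function names and the particular parameter values are ours, not from the notes.

```python
def ruin_prob(j, M):
    # Fair game: probability that a gambler with capital j (opponent has M - j)
    # is eventually ruined.  Optional stopping gave (M - j) / M; with j = a,
    # M = a + b this is b / (a + b).
    return (M - j) / M

def expected_duration(j, M):
    # Fair game: expected number of games, j * (M - j)  (= a*b when j = a).
    return j * (M - j)

def hit_zero_prob(k, N, p):
    # Biased walk (p != 1/2) started at k: probability of hitting 0 before N,
    # the formula from Example 5.8 with r = q/p.
    r = (1 - p) / p
    return (r**k - r**N) / (1 - r**N)

M, N, p = 9, 7, 0.6
q = 1 - p

# Boundary values.
assert ruin_prob(0, M) == 1 and ruin_prob(M, M) == 0
assert abs(hit_zero_prob(0, N, p) - 1) < 1e-12 and abs(hit_zero_prob(N, N, p)) < 1e-12

for j in range(1, M):
    # Fair game first-step equations: u(j) = (u(j-1)+u(j+1))/2 and
    # m(j) = 1 + (m(j-1)+m(j+1))/2.
    assert abs(ruin_prob(j, M) - 0.5 * (ruin_prob(j - 1, M) + ruin_prob(j + 1, M))) < 1e-12
    assert abs(expected_duration(j, M)
               - (1 + 0.5 * (expected_duration(j - 1, M) + expected_duration(j + 1, M)))) < 1e-12

for k in range(1, N):
    # Biased walk first-step equation: h(k) = q*h(k-1) + p*h(k+1).
    assert abs(hit_zero_prob(k, N, p)
               - (q * hit_zero_prob(k - 1, N, p) + p * hit_zero_prob(k + 1, N, p))) < 1e-12

print("all first-step identities check out")
```

The check is exact (up to floating-point rounding) rather than Monte Carlo: a function satisfying the first-step equation together with the boundary conditions is the unique solution, so passing these assertions confirms the formulas.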

Example 5.9. Let $X_1, \ldots, X_n, \ldots$ be independent, integrable, non-negative random variables with the same mean $E(X_i) = \mu$. Let $T$ be a stopping time w.r.t. $\{F_n = \sigma(X_1, X_2, \ldots, X_n), n \geq 0\}$ such that $E[T] < \infty$. Show that
$$E\Big[\sum_{i=1}^{T} X_i\Big] = E[T]\,\mu.$$

Solution. Let $Y_0 = 0$ and $Y_n = \sum_{i=1}^{n} X_i - n\mu$. Then $\{Y_n, n \geq 0\}$ is a martingale. Indeed,
$$E[Y_{n+1} \mid F_n] = E\Big[\sum_{i=1}^{n} X_i - n\mu + X_{n+1} - \mu \,\Big|\, F_n\Big] = E[Y_n + X_{n+1} - \mu \mid F_n] = Y_n + E(X_{n+1}) - \mu = Y_n.$$
In particular, the stopped process $Y_{T \wedge n}$ is also a martingale. This yields that
$$E[Y_{T \wedge n}] = E\Big[\sum_{i=1}^{T \wedge n} X_i - (T \wedge n)\mu\Big] = 0.$$
By the monotone convergence theorem we arrive at
$$E[T]\,\mu = \lim_{n \to \infty} E[(T \wedge n)\mu] = \lim_{n \to \infty} E\Big[\sum_{i=1}^{T \wedge n} X_i\Big] = E\Big[\sum_{i=1}^{T} X_i\Big].$$

Example 5.10. If $Z_n$, $n \geq 1$, is a martingale w.r.t. $\{F_n, n \geq 1\}$ such that $E[|Z_n|^p] < \infty$ for all $n$, prove that $|Z_n|^p$, $n \geq 1$, is a submartingale for all $p \geq 1$.

Solution. Since $Z_n$, $n \geq 1$, is a martingale, $E[Z_{n+1} \mid F_n] = Z_n$. Since $x \mapsto |x|^p$ is convex for $p \geq 1$, the conditional Jensen inequality gives
$$|Z_n|^p = |E[Z_{n+1} \mid F_n]|^p \leq E[|Z_{n+1}|^p \mid F_n].$$
Therefore $|Z_n|^p$, $n \geq 1$, is a submartingale.

Doob's maximal inequality and the martingale convergence theorem

Theorem 5.11. Let $X = (X_n, n \geq 0)$ be a martingale such that $E[|X_n|^p] < \infty$, $n = 0, 1, \ldots$, for some $p \geq 1$. Then for every $N$,
$$P\Big(\max_{0 \leq n \leq N} |X_n| \geq \lambda\Big) \leq \frac{E[|X_N|^p]}{\lambda^p},$$
and if $p > 1$,
$$E\Big[\max_{0 \leq n \leq N} |X_n|^p\Big] \leq \Big(\frac{p}{p-1}\Big)^p E[|X_N|^p].$$
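Before turning to the proof, the first inequality can be verified exhaustively for a small instance of the fair random walk (a martingale with $E[S_N^2] = N$, so the $p = 2$ case applies). This is a sketch in exact rational arithmetic; the function name and parameters are ours.

```python
from itertools import product
from fractions import Fraction

def maximal_inequality_check(N, lam):
    # Exhaustively compute, for the fair +-1 random walk over N steps,
    #   LHS = P(max_{0<=n<=N} |S_n| >= lam)   and   RHS = E[S_N^2] / lam^2,
    # enumerating all 2^N equally likely paths with exact fractions.
    prob_max = Fraction(0)
    second_moment = Fraction(0)
    weight = Fraction(1, 2**N)      # each path has probability 2^{-N}
    for steps in product((1, -1), repeat=N):
        s, running_max = 0, 0
        for x in steps:
            s += x
            running_max = max(running_max, abs(s))
        if running_max >= lam:
            prob_max += weight
        second_moment += weight * s * s
    return prob_max, second_moment / lam**2

lhs, rhs = maximal_inequality_check(N=10, lam=4)
assert lhs <= rhs                   # Doob's maximal inequality, p = 2
print(lhs, "<=", rhs)
```

Here `rhs` comes out to $10/16 = 5/8$ (since $E[S_{10}^2] = 10$), and the enumerated probability is indeed no larger, as the theorem guarantees.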

Proof. Let
$$T = \inf\{n;\ |X_n| \geq \lambda\} \wedge N.$$
$T$ is a bounded stopping time. Since $|X_n|^p$, $n \geq 1$, is a submartingale, we have in particular $E[|X_T|^p] \leq E[|X_N|^p]$. Note that
$$\Big\{\max_{0 \leq n \leq N} |X_n| \geq \lambda\Big\} \subseteq \{|X_T| \geq \lambda\} \quad \text{and} \quad \Big\{\max_{0 \leq n \leq N} |X_n| < \lambda\Big\} \subseteq \{T = N\}.$$
We have
$$E[|X_N|^p] \geq E[|X_T|^p] = \int_{\{\max_{0 \leq n \leq N} |X_n| \geq \lambda\}} |X_T|^p \, dP + \int_{\{\max_{0 \leq n \leq N} |X_n| < \lambda\}} |X_T|^p \, dP \geq \lambda^p\, P\Big(\max_{0 \leq n \leq N} |X_n| \geq \lambda\Big).$$
This immediately gives
$$P\Big(\max_{0 \leq n \leq N} |X_n| \geq \lambda\Big) \leq \frac{E[|X_N|^p]}{\lambda^p}.$$

For a real-valued $(F_n)$-adapted process $(X_n)_{n \geq 0}$ and an interval $[a, b]$, define a sequence of stopping times $\tau_n$ as follows:
$$\tau_1 = \min\{n;\ X_n \leq a\}, \quad \tau_2 = \min\{n \geq \tau_1;\ X_n \geq b\}, \ldots,$$
$$\tau_{2k+1} = \min\{n \geq \tau_{2k};\ X_n \leq a\}, \quad \tau_{2k+2} = \min\{n \geq \tau_{2k+1};\ X_n \geq b\}, \ldots$$
We set
$$U_N^X(a, b)(\omega) = \max\{k;\ \tau_{2k}(\omega) \leq N\}.$$
Then $U_N^X(a, b)$ is the number of upcrossings of the interval $[a, b]$ by $(X_n)_{n=0}^{N}$.

Theorem 5.12. Let $(X_n)$ be a supermartingale. We have
$$P(U_N^X(a, b) > j) \leq \frac{1}{b-a} \int_{\{U_N^X(a,b) = j\}} (X_N - a)^- \, dP, \tag{5.1}$$
$$E[U_N^X(a, b)] \leq \frac{1}{b-a}\, E[(X_N - a)^-]. \tag{5.2}$$
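The upcrossing count $U_N^X(a, b)$ defined above translates directly into code, which makes (5.2) easy to check on simulated supermartingale paths. A minimal sketch (the function name is ours): scan the path, wait until it drops to $a$ or below, then count one upcrossing as soon as it next reaches $b$ or above.

```python
def upcrossings(path, a, b):
    # Count completed upcrossings of [a, b]: each passage of the path from
    # a value <= a to a later value >= b counts exactly once.
    count = 0
    below = False            # True once the path has hit level <= a
    for x in path:
        if not below:
            if x <= a:
                below = True     # an upcrossing attempt starts here
        else:
            if x >= b:
                count += 1       # upcrossing completed
                below = False
    return count

# The path 2, 0, 3, -1, 4 upcrosses the interval [0, 3] twice.
print(upcrossings([2, 0, 3, -1, 4], 0, 3))   # 2
```

Averaging `upcrossings` over many simulated paths of a supermartingale and comparing with $E[(X_N - a)^-]/(b-a)$ gives a numerical illustration of (5.2).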

Proof. We can assume that the process is stopped at $N$ and also that $a = 0$; otherwise consider the process $(X_{n \wedge N} - a)$. Set
$$S = \tau_{2j+1} \wedge (N+1), \quad T = \tau_{2(j+1)} \wedge (N+1).$$
Then $\{S \leq N\} = \{\tau_{2j+1} \leq N\}$, and $X_S = X_{\tau_{2j+1}} \leq 0$ on $\{S \leq N\}$. Note that $\{\tau_{2j+2} \leq N\}$ is the event that the process $(X_n)_{n=0}^{N}$ has at least $j+1$ upcrossings of the interval $[0, b]$. Hence we have
$$\{U_N^X(0, b) > j\} = \{\tau_{2j+2} \leq N\} = \{S \leq N,\ X_T \geq b\},$$
while
$$\{S \leq N,\ X_T < b\} \subseteq \{S \leq N,\ T = N+1\} \subseteq \{U_N^X(0, b) = j\}.$$
Thus,
$$b\,P(U_N^X(0, b) > j) = \int_{\{S \leq N,\ X_T \geq b\}} b \, dP \leq \int_{\{S \leq N,\ X_T \geq b\}} X_T \, dP = \int_{\{S \leq N\}} X_T \, dP - \int_{\{S \leq N,\ X_T < b\}} X_T \, dP.$$
By the optional stopping theorem for supermartingales,
$$\int_{\{S \leq N\}} X_T \, dP \leq \int_{\{S \leq N\}} X_S \, dP \leq 0,$$
and on $\{S \leq N,\ X_T < b\} \subseteq \{T = N+1\}$ we have $X_T = X_N$ (the process being stopped at $N$), so
$$b\,P(U_N^X(0, b) > j) \leq -\int_{\{S \leq N,\ T = N+1\}} X_N \, dP \leq \int_{\{U_N^X(0,b) = j\}} X_N^- \, dP.$$
Adding the above inequality over $j = 0, 1, 2, \ldots$ we obtain
$$E[U_N^X(0, b)] = \sum_{j \geq 0} P(U_N^X(0, b) > j) \leq \frac{1}{b}\, E[X_N^-]. \tag{5.3}$$

Theorem 5.13. Let $X = (X_n, n \geq 0)$ be a martingale such that $\sup_n E[|X_n|^p] < \infty$ for some $p \geq 1$. Then
$$X_\infty(\omega) = \lim_{n \to \infty} X_n(\omega)$$
exists almost surely.

Proof. For $a < b$, set
$$U^X(a, b) = \lim_{N \to \infty} U_N^X(a, b),$$
the total number of upcrossings of $[a, b]$ by the process $(X_n)_{n \geq 0}$. By Theorem 5.12 (a martingale is in particular a supermartingale), we have
$$E[U^X(a, b)] = \lim_{N \to \infty} E[U_N^X(a, b)] \leq \frac{1}{b-a} \sup_N E[(X_N - a)^-] < \infty.$$
This yields that $P(W_{a,b}) = 0$, where $W_{a,b} = \{U^X(a, b) = \infty\}$. Set
$$V_{a,b} = \{\liminf_{n \to \infty} X_n < a,\ \limsup_{n \to \infty} X_n > b\}.$$

Then $V_{a,b} \subseteq W_{a,b}$, and hence $P(V_{a,b}) = 0$. Since
$$\{\liminf_n X_n < \limsup_n X_n\} = \bigcup_{a < b,\ a, b \in \mathbb{Q}} V_{a,b},$$
where $\mathbb{Q}$ denotes the set of rational numbers, we deduce that
$$P(\{\liminf_n X_n < \limsup_n X_n\}) = 0.$$
Hence
$$P(\{\lim_{n \to \infty} X_n \text{ exists}\}) = 1.$$

Example 5.14. Let $\{F_n, n \geq 0\}$ be a sequence of increasing σ-fields. Let $Z$ be an integrable random variable. Set $X_n = E[Z \mid F_n]$, $n \geq 0$. Explain why the limit $X_\infty = \lim_{n \to \infty} X_n$ exists.

Solution. We know from previous examples that $X_n$, $n \geq 0$, is a martingale. Moreover, we have
$$\sup_n E[|X_n|] \leq \sup_n E[E[|Z| \mid F_n]] = E[|Z|] < \infty.$$
By Theorem 5.13, $X_\infty = \lim_{n \to \infty} X_n$ exists.

Example 5.15. Let $X_1, X_2, \ldots, X_n, \ldots$ be independent random variables with $E[X_i] = \mu_i$, $\mathrm{Var}(X_i) = \sigma_i^2$. If $\sum_i \mu_i$ converges and $\sum_i \sigma_i^2 < \infty$, show that $\sum_i X_i(\omega)$ converges almost surely.

Solution. Set $Y_n = \sum_{i=1}^{n} (X_i - \mu_i)$ and $F_n = \sigma(X_1, X_2, \ldots, X_n)$. It is easy to verify that $Y_n$, $n \geq 1$, is a martingale. Furthermore,
$$\sup_n E[Y_n^2] = \sup_n \sum_{i=1}^{n} E[(X_i - \mu_i)^2] = \sum_{i=1}^{\infty} \sigma_i^2 < \infty.$$
It follows from the martingale convergence theorem that
$$\lim_{n \to \infty} Y_n = \lim_{n \to \infty} \sum_{i=1}^{n} (X_i - \mu_i)$$
exists. Since by assumption $\lim_{n \to \infty} \sum_{i=1}^{n} \mu_i$ exists, we conclude that
$$\lim_{n \to \infty} \sum_{i=1}^{n} X_i$$
exists.
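Example 5.15 can be illustrated numerically. A minimal sketch, with a distribution of our choosing: take independent $X_i = \pm 1/i^2$ with probability $1/2$ each, so $\mu_i = 0$ and $\sum_i \sigma_i^2 = \sum_i 1/i^4 < \infty$. The partial sums then converge almost surely; for this particular choice the tail beyond index $n$ is even bounded deterministically by $\sum_{i > n} 1/i^2$.

```python
import random

def partial_sums(n_terms, rng):
    # Partial sums S_n = sum_{i<=n} X_i with X_i = +-1/i^2, each sign
    # chosen independently with probability 1/2.
    s, out = 0.0, []
    for i in range(1, n_terms + 1):
        s += rng.choice((1.0, -1.0)) / i**2
        out.append(s)
    return out

rng = random.Random(0)
sums = partial_sums(2000, rng)

# Regardless of the random signs, |S_2000 - S_1000| <= sum_{i=1001}^{2000} 1/i^2,
# which is less than 1/1000, so the partial sums have visibly settled down.
assert abs(sums[-1] - sums[999]) < 1e-3
print("S_1000 =", round(sums[999], 6), " S_2000 =", round(sums[-1], 6))
```

The assertion holds for every seed, since the tail bound is deterministic; only the limiting value itself is random, in line with the almost-sure (rather than deterministic) convergence the example proves.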


More information

8 Laws of large numbers

8 Laws of large numbers 8 Laws of large numbers 8.1 Introduction We first start with the idea of standardizing a random variable. Let X be a random variable with mean µ and variance σ 2. Then Z = (X µ)/σ will be a random variable

More information

Notes 18 : Optional Sampling Theorem

Notes 18 : Optional Sampling Theorem Notes 18 : Optional Sampling Theorem Math 733-734: Theory of Probability Lecturer: Sebastien Roch References: [Wil91, Chapter 14], [Dur10, Section 5.7]. Recall: DEF 18.1 (Uniform Integrability) A collection

More information

Wahrscheinlichkeitstheorie Prof. Schweizer Additions to the Script

Wahrscheinlichkeitstheorie Prof. Schweizer Additions to the Script Wahrscheinlichkeitstheorie Prof. Schweizer Additions to the Script Thomas Rast WS 06/07 Warning: We are sure there are lots of mistakes in these notes. Use at your own risk! Corrections and other feedback

More information

(A n + B n + 1) A n + B n

(A n + B n + 1) A n + B n 344 Problem Hints and Solutions Solution for Problem 2.10. To calculate E(M n+1 F n ), first note that M n+1 is equal to (A n +1)/(A n +B n +1) with probability M n = A n /(A n +B n ) and M n+1 equals

More information

STA205 Probability: Week 8 R. Wolpert

STA205 Probability: Week 8 R. Wolpert INFINITE COIN-TOSS AND THE LAWS OF LARGE NUMBERS The traditional interpretation of the probability of an event E is its asymptotic frequency: the limit as n of the fraction of n repeated, similar, and

More information

1 Gambler s Ruin Problem

1 Gambler s Ruin Problem Coyright c 2017 by Karl Sigman 1 Gambler s Ruin Problem Let N 2 be an integer and let 1 i N 1. Consider a gambler who starts with an initial fortune of $i and then on each successive gamble either wins

More information

Lecture 6 Random walks - advanced methods

Lecture 6 Random walks - advanced methods Lecture 6: Random wals - advanced methods 1 of 11 Course: M362K Intro to Stochastic Processes Term: Fall 2014 Instructor: Gordan Zitovic Lecture 6 Random wals - advanced methods STOPPING TIMES Our last

More information

1 IEOR 6712: Notes on Brownian Motion I

1 IEOR 6712: Notes on Brownian Motion I Copyright c 005 by Karl Sigman IEOR 67: Notes on Brownian Motion I We present an introduction to Brownian motion, an important continuous-time stochastic process that serves as a continuous-time analog

More information

STOR 635 Notes (S13)

STOR 635 Notes (S13) STOR 635 Notes (S13) Jimmy Jin UNC-Chapel Hill Last updated: 1/14/14 Contents 1 Measure theory and probability basics 2 1.1 Algebras and measure.......................... 2 1.2 Integration................................

More information

Introduction to Stochastic Processes

Introduction to Stochastic Processes 18.445 Introduction to Stochastic Processes Lecture 1: Introduction to finite Markov chains Hao Wu MIT 04 February 2015 Hao Wu (MIT) 18.445 04 February 2015 1 / 15 Course description About this course

More information

Probability and Measure

Probability and Measure Chapter 4 Probability and Measure 4.1 Introduction In this chapter we will examine probability theory from the measure theoretic perspective. The realisation that measure theory is the foundation of probability

More information

1 Stat 605. Homework I. Due Feb. 1, 2011

1 Stat 605. Homework I. Due Feb. 1, 2011 The first part is homework which you need to turn in. The second part is exercises that will not be graded, but you need to turn it in together with the take-home final exam. 1 Stat 605. Homework I. Due

More information

Exercises Measure Theoretic Probability

Exercises Measure Theoretic Probability Exercises Measure Theoretic Probability 2002-2003 Week 1 1. Prove the folloing statements. (a) The intersection of an arbitrary family of d-systems is again a d- system. (b) The intersection of an arbitrary

More information

arxiv: v3 [math.pr] 5 Jun 2011

arxiv: v3 [math.pr] 5 Jun 2011 arxiv:0905.0254v3 [math.pr] 5 Jun 2011 Lévy s zero-one law in game-theoretic probability Glenn Shafer, Vladimir Vovk, and Akimichi Takemura The Game-Theoretic Probability and Finance Project Working Paper

More information

Deep Learning for Computer Vision

Deep Learning for Computer Vision Deep Learning for Computer Vision Lecture 3: Probability, Bayes Theorem, and Bayes Classification Peter Belhumeur Computer Science Columbia University Probability Should you play this game? Game: A fair

More information

Markov Chains CK eqns Classes Hitting times Rec./trans. Strong Markov Stat. distr. Reversibility * Markov Chains

Markov Chains CK eqns Classes Hitting times Rec./trans. Strong Markov Stat. distr. Reversibility * Markov Chains Markov Chains A random process X is a family {X t : t T } of random variables indexed by some set T. When T = {0, 1, 2,... } one speaks about a discrete-time process, for T = R or T = [0, ) one has a continuous-time

More information

Random Variables. Statistics 110. Summer Copyright c 2006 by Mark E. Irwin

Random Variables. Statistics 110. Summer Copyright c 2006 by Mark E. Irwin Random Variables Statistics 110 Summer 2006 Copyright c 2006 by Mark E. Irwin Random Variables A Random Variable (RV) is a response of a random phenomenon which is numeric. Examples: 1. Roll a die twice

More information

SIMPLE RANDOM WALKS: IMPROBABILITY OF PROFITABLE STOPPING

SIMPLE RANDOM WALKS: IMPROBABILITY OF PROFITABLE STOPPING SIMPLE RANDOM WALKS: IMPROBABILITY OF PROFITABLE STOPPING EMILY GENTLES Abstract. This paper introduces the basics of the simple random walk with a flair for the statistical approach. Applications in biology

More information

1 Basics of probability theory

1 Basics of probability theory Examples of Stochastic Optimization Problems In this chapter, we will give examples of three types of stochastic optimization problems, that is, optimal stopping, total expected (discounted) cost problem,

More information

r=1 I r of intervals I r should be the sum of their lengths:

r=1 I r of intervals I r should be the sum of their lengths: m3f33chiii Chapter III: MEASURE-THEORETIC PROBABILITY 1. Measure The language of option pricing involves that of probability, which in turn involves that of measure theory. This originated with Henri LEBESGUE

More information

Lecture 22 Girsanov s Theorem

Lecture 22 Girsanov s Theorem Lecture 22: Girsanov s Theorem of 8 Course: Theory of Probability II Term: Spring 25 Instructor: Gordan Zitkovic Lecture 22 Girsanov s Theorem An example Consider a finite Gaussian random walk X n = n

More information

On the definition and properties of p-harmonious functions

On the definition and properties of p-harmonious functions On the definition and properties of p-harmonious functions University of Pittsburgh, UBA, UAM Workshop on New Connections Between Differential and Random Turn Games, PDE s and Image Processing Pacific

More information

4 Branching Processes

4 Branching Processes 4 Branching Processes Organise by generations: Discrete time. If P(no offspring) 0 there is a probability that the process will die out. Let X = number of offspring of an individual p(x) = P(X = x) = offspring

More information

Advanced Probability Theory (Math541)

Advanced Probability Theory (Math541) Advanced Probability Theory (Math541) Instructor: Kani Chen (Classic)/Modern Probability Theory (1900-1960) Instructor: Kani Chen (HKUST) Advanced Probability Theory (Math541) 1 / 17 Primitive/Classic

More information

4 Sums of Independent Random Variables

4 Sums of Independent Random Variables 4 Sums of Independent Random Variables Standing Assumptions: Assume throughout this section that (,F,P) is a fixed probability space and that X 1, X 2, X 3,... are independent real-valued random variables

More information

Chapter 11 Advanced Topic Stochastic Processes

Chapter 11 Advanced Topic Stochastic Processes Chapter 11 Advanced Topic Stochastic Processes CHAPTER OUTLINE Section 1 Simple Random Walk Section 2 Markov Chains Section 3 Markov Chain Monte Carlo Section 4 Martingales Section 5 Brownian Motion Section

More information

MATH 418: Lectures on Conditional Expectation

MATH 418: Lectures on Conditional Expectation MATH 418: Lectures on Conditional Expectation Instructor: r. Ed Perkins, Notes taken by Adrian She Conditional expectation is one of the most useful tools of probability. The Radon-Nikodym theorem enables

More information

Week 12-13: Discrete Probability

Week 12-13: Discrete Probability Week 12-13: Discrete Probability November 21, 2018 1 Probability Space There are many problems about chances or possibilities, called probability in mathematics. When we roll two dice there are possible

More information

JUSTIN HARTMANN. F n Σ.

JUSTIN HARTMANN. F n Σ. BROWNIAN MOTION JUSTIN HARTMANN Abstract. This paper begins to explore a rigorous introduction to probability theory using ideas from algebra, measure theory, and other areas. We start with a basic explanation

More information

Probability and Measure

Probability and Measure Probability and Measure Robert L. Wolpert Institute of Statistics and Decision Sciences Duke University, Durham, NC, USA Convergence of Random Variables 1. Convergence Concepts 1.1. Convergence of Real

More information

Random Processes. DS GA 1002 Probability and Statistics for Data Science.

Random Processes. DS GA 1002 Probability and Statistics for Data Science. Random Processes DS GA 1002 Probability and Statistics for Data Science http://www.cims.nyu.edu/~cfgranda/pages/dsga1002_fall17 Carlos Fernandez-Granda Aim Modeling quantities that evolve in time (or space)

More information

1 Sequences of events and their limits

1 Sequences of events and their limits O.H. Probability II (MATH 2647 M15 1 Sequences of events and their limits 1.1 Monotone sequences of events Sequences of events arise naturally when a probabilistic experiment is repeated many times. For

More information

ADVANCED PROBABILITY: SOLUTIONS TO SHEET 1

ADVANCED PROBABILITY: SOLUTIONS TO SHEET 1 ADVANCED PROBABILITY: SOLUTIONS TO SHEET 1 Last compiled: November 6, 213 1. Conditional expectation Exercise 1.1. To start with, note that P(X Y = P( c R : X > c, Y c or X c, Y > c = P( c Q : X > c, Y

More information

1 Martingales. Martingales. (Ω, B, P ) is a probability space.

1 Martingales. Martingales. (Ω, B, P ) is a probability space. Martingales January 8, 206 Debdee Pati Martingales (Ω, B, P ) is a robability sace. Definition. (Filtration) filtration F = {F n } n 0 is a collection of increasing sub-σfields such that for m n, we have

More information

ECE534, Spring 2018: Solutions for Problem Set #4 Due Friday April 6, 2018

ECE534, Spring 2018: Solutions for Problem Set #4 Due Friday April 6, 2018 ECE534, Spring 2018: s for Problem Set #4 Due Friday April 6, 2018 1. MMSE Estimation, Data Processing and Innovations The random variables X, Y, Z on a common probability space (Ω, F, P ) are said to

More information

Probability, Random Processes and Inference

Probability, Random Processes and Inference INSTITUTO POLITÉCNICO NACIONAL CENTRO DE INVESTIGACION EN COMPUTACION Laboratorio de Ciberseguridad Probability, Random Processes and Inference Dr. Ponciano Jorge Escamilla Ambrosio pescamilla@cic.ipn.mx

More information

Lecture Notes 5 Convergence and Limit Theorems. Convergence with Probability 1. Convergence in Mean Square. Convergence in Probability, WLLN

Lecture Notes 5 Convergence and Limit Theorems. Convergence with Probability 1. Convergence in Mean Square. Convergence in Probability, WLLN Lecture Notes 5 Convergence and Limit Theorems Motivation Convergence with Probability Convergence in Mean Square Convergence in Probability, WLLN Convergence in Distribution, CLT EE 278: Convergence and

More information

Exercises Measure Theoretic Probability

Exercises Measure Theoretic Probability Exercises Measure Theoretic Probability Chapter 1 1. Prove the folloing statements. (a) The intersection of an arbitrary family of d-systems is again a d- system. (b) The intersection of an arbitrary family

More information

Applications of Optimal Stopping and Stochastic Control

Applications of Optimal Stopping and Stochastic Control Applications of and Stochastic Control YRM Warwick 15 April, 2011 Applications of and Some problems Some technology Some problems The secretary problem Bayesian sequential hypothesis testing the multi-armed

More information

Advanced Probability

Advanced Probability Advanced Probability University of Cambridge, Part III of the Mathematical Tripos Michaelmas Term 2006 Grégory Miermont 1 1 CNRS & Laboratoire de Mathématique, Equipe Probabilités, Statistique et Modélisation,

More information

Stochastic Processes MIT, fall 2011 Day by day lecture outline and weekly homeworks. A) Lecture Outline Suggested reading

Stochastic Processes MIT, fall 2011 Day by day lecture outline and weekly homeworks. A) Lecture Outline Suggested reading Stochastic Processes 18445 MIT, fall 2011 Day by day lecture outline and weekly homeworks A) Lecture Outline Suggested reading Part 1: Random walk on Z Lecture 1: thursday, september 8, 2011 Presentation

More information

1 Variance of a Random Variable

1 Variance of a Random Variable Indian Institute of Technology Bombay Department of Electrical Engineering Handout 14 EE 325 Probability and Random Processes Lecture Notes 9 August 28, 2014 1 Variance of a Random Variable The expectation

More information

1. Let X and Y be independent exponential random variables with rate α. Find the densities of the random variables X 3, X Y, min(x, Y 3 )

1. Let X and Y be independent exponential random variables with rate α. Find the densities of the random variables X 3, X Y, min(x, Y 3 ) 1 Introduction These problems are meant to be practice problems for you to see if you have understood the material reasonably well. They are neither exhaustive (e.g. Diffusions, continuous time branching

More information

Part IA Probability. Definitions. Based on lectures by R. Weber Notes taken by Dexter Chua. Lent 2015

Part IA Probability. Definitions. Based on lectures by R. Weber Notes taken by Dexter Chua. Lent 2015 Part IA Probability Definitions Based on lectures by R. Weber Notes taken by Dexter Chua Lent 2015 These notes are not endorsed by the lecturers, and I have modified them (often significantly) after lectures.

More information