Notes 15 : UI Martingales

Math 733 - Fall 2013
Lecturer: Sebastien Roch

References: [Wil91, Chapters 13, 14], [Dur10, Sections 5.5, 5.6, 5.7].

1 Uniform Integrability

We give a characterization of L¹ convergence. First note:

LEM 15.1 Let Y ∈ L¹. For all ε > 0, there is K > 0 such that E[|Y|; |Y| > K] < ε.

Proof: Immediate by applying (MON) to E[|Y|; |Y| ≤ K].

What we need is for this condition to hold uniformly over the sequence:

DEF 15.2 (Uniform Integrability) A collection C of RVs on (Ω, F, P) is uniformly integrable (UI) if: for all ε > 0, there is K > 0 such that

    E[|X|; |X| > K] < ε,  for all X ∈ C.

THM 15.3 (Necessary and Sufficient Condition for L¹ Convergence) Let {X_n} ⊆ L¹ and X ∈ L¹. Then X_n → X in L¹ if and only if:

    (i) X_n → X in probability, and
    (ii) {X_n} is UI.

Before giving the proof, we look at a few examples.

EX 15.4 (L¹-boundedness is not sufficient) Suppose C is UI and X ∈ C. Note that

    E|X| ≤ E[|X|; |X| > K] + E[|X|; |X| ≤ K] ≤ ε + K < +∞,

so UI implies L¹-boundedness. But the converse is not true, by our last example (take n² > K).
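
Numerical aside (not in the original notes): a minimal check of EX 15.4, assuming the earlier example referred to is the standard one, X_n = n² 1_{(0,1/n²)} on ([0,1], Lebesgue). Each X_n has E|X_n| = 1, so the family is L¹-bounded, but E[|X_n|; |X_n| > K] equals 1 whenever n² > K, so no single K works uniformly.

    # EX 15.4 numerically: X_n = n^2 on (0, 1/n^2), 0 elsewhere (hypothetical
    # stand-in for "our last example").  E|X_n| = 1 for all n (L^1-bounded),
    # yet E[|X_n|; |X_n| > K] = n^2 * (1/n^2) = 1 as soon as n^2 > K.

    def tail_expectation(n: int, K: float) -> float:
        """E[|X_n|; |X_n| > K], computed exactly for this example."""
        return 1.0 if n * n > K else 0.0

    for K in [10.0, 100.0, 10_000.0]:
        n = int(K ** 0.5) + 1  # smallest n with n^2 > K
        print(f"K={K:8.0f}: E[|X_n|; |X_n|>K] = {tail_expectation(n, K)} at n={n}")
    # prints 1.0 in every case: the supremum over n never drops below 1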

EX 15.5 (L^p-bounded RVs) But L^p-boundedness works for p > 1. Let C be L^p-bounded, say E[|X|^p] ≤ A < +∞ for all X ∈ C, and let X ∈ C. Then

    E[|X|; |X| > K] ≤ E[K^{1−p} |X|^p; |X| > K] ≤ K^{1−p} A → 0,

as K → +∞.

EX 15.6 (Dominated RVs) Assume there is Y ∈ L¹ such that |X| ≤ Y for all X ∈ C. Then

    E[|X|; |X| > K] ≤ E[Y; |X| > K] ≤ E[Y; Y > K],

and apply the lemma above.

2 Proof of main theorem

Proof: We start with the "if" part. Fix ε > 0. We want to show that, for n large enough, E|X_n − X| ≤ ε. It is natural to truncate at K in order to apply the UI property. Let

    φ_K(x) = sgn(x) (|x| ∧ K).

Then

    E|X_n − X| ≤ E|φ_K(X_n) − X_n| + E|φ_K(X) − X| + E|φ_K(X_n) − φ_K(X)|
              ≤ E[|X_n|; |X_n| > K] + E[|X|; |X| > K] + E|φ_K(X_n) − φ_K(X)|.

For K large enough, the first term is ≤ ε/3 by UI and the second term is ≤ ε/3 by the lemma above. Check, by case analysis, that

    |φ_K(x) − φ_K(y)| ≤ |x − y|,

so φ_K(X_n) → φ_K(X) in probability. By bounded convergence for convergence in probability (below), the third term is ≤ ε/3 for n large enough, and the claim is proved.

LEM 15.7 (Bounded convergence theorem, convergence in probability version) Let |X_n| ≤ K < +∞ for all n and X_n → X in probability. Then

    E|X_n − X| → 0.

Proof (sketch): By

    P[|X| ≥ K + m⁻¹] ≤ P[|X_n − X| ≥ m⁻¹] → 0,

it follows that P[|X| ≤ K] = 1. Fix ε > 0. Then

    E|X_n − X| = E[|X_n − X|; |X_n − X| > ε/2] + E[|X_n − X|; |X_n − X| ≤ ε/2]
               ≤ 2K P[|X_n − X| > ε/2] + ε/2 < ε,

for n large enough.
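
The case analysis behind |φ_K(x) − φ_K(y)| ≤ |x − y| can also be sanity-checked numerically; here is a small sketch (illustration only, not a proof).

    # Random check that phi_K(x) = sgn(x) * min(|x|, K) is 1-Lipschitz,
    # the key fact used to pass convergence in probability through the
    # truncation.

    import math
    import random

    def phi(x: float, K: float) -> float:
        return math.copysign(min(abs(x), K), x)

    random.seed(0)
    K, worst = 2.0, 0.0
    for _ in range(100_000):
        x, y = random.uniform(-10, 10), random.uniform(-10, 10)
        if x != y:
            worst = max(worst, abs(phi(x, K) - phi(y, K)) / abs(x - y))
    print(f"largest ratio |phi_K(x)-phi_K(y)|/|x-y| observed: {worst:.6f}")  # <= 1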

Proof of the "only if" part. Suppose X_n → X in L¹. L¹ convergence implies convergence in probability, so the first claim follows. For the second claim, fix ε > 0 and let N be large enough so that

    E|X_n − X| ≤ ε,  for all n > N.

We can choose K large enough so that

    E[|X_n|; |X_n| > K] < ε,  for all n ≤ N,

because there is only a finite number of such n. So we only need to worry about n > N. To use the L¹ convergence, it is natural to write

    E[|X_n|; |X_n| > K] ≤ E[|X_n − X|; |X_n| > K] + E[|X|; |X_n| > K].

The first term is ≤ ε. The issue with the second term is that we cannot apply the lemma above, because the event involves X_n rather than X. In fact, a stronger version of the lemma holds:

LEM 15.8 (Absolute continuity) Let X ∈ L¹. For all ε > 0, there is δ > 0 such that P[F] < δ implies E[|X|; F] < ε.

Proof: Argue by contradiction. Suppose there are ε > 0 and events F_n such that P[F_n] ≤ 2⁻ⁿ and

    E[|X|; F_n] ≥ ε.

By (BC),

    P[H] := P[F_n i.o.] = 0.

By reverse Fatou (applied to |X| 1_H = lim sup |X| 1_{F_n}),

    E[|X|; H] ≥ ε,

a contradiction.

To conclude, note that

    P[|X_n| > K] ≤ E|X_n| / K ≤ (E|X| + sup_n E|X_n − X|) / K < δ,

uniformly in n, for K large enough, so that E[|X|; |X_n| > K] < ε by the lemma. We are done.
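
LEM 15.8 can be visualized by Monte Carlo. The worst event of probability δ for E[|X|; F] is F = {|X| > (1 − δ)-quantile}, so averaging the largest δ-fraction of samples estimates sup over F with P[F] ≤ δ of E[|X|; F]. A sketch with a hypothetical heavy-tailed but integrable X (Pareto with index 1.5):

    # Monte Carlo illustration of LEM 15.8 (absolute continuity): for
    # integrable X, E[|X|; F] -> 0 uniformly over events F as P[F] -> 0.
    # X is Pareto(1.5): P[X > x] = x^(-1.5) for x >= 1, with E[X] = 3.

    import random

    random.seed(1)
    N, alpha = 10**6, 1.5
    xs = sorted(((1 - random.random()) ** (-1 / alpha) for _ in range(N)),
                reverse=True)

    for delta in [0.1, 0.01, 0.001]:
        k = int(delta * N)
        worst = sum(xs[:k]) / N  # ~ E[|X|; F] for the worst F with P[F] = delta
        print(f"delta={delta:6.3f}:  sup_F E[|X|; F] ~ {worst:.4f}")
    # the estimates decrease toward 0 as delta -> 0, as the lemma asserts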

3 UI MGs

THM 15.9 (Convergence of UI MGs) Let M be a UI MG. Then M_n → M_∞ a.s. and in L¹. Moreover,

    M_n = E[M_∞ | F_n],  for all n.

Proof: UI implies L¹-boundedness, so M_n → M_∞ a.s. by the martingale convergence theorem. By the necessary and sufficient condition (THM 15.3), we also have L¹ convergence. Now note that, for all r ≥ n and F ∈ F_n, we know E[M_r | F_n] = M_n, or

    E[M_r; F] = E[M_n; F],

by definition of the CE. We can take a limit by L¹ convergence. More precisely,

    |E[M_r; F] − E[M_∞; F]| ≤ E[|M_r − M_∞|; F] ≤ E|M_r − M_∞| → 0,

as r → ∞. So, plugging above,

    E[M_∞; F] = E[M_n; F],

and E[M_∞ | F_n] = M_n.
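
To make the closure property M_n = E[M_∞ | F_n] concrete, here is a minimal numerical sketch (not part of the notes). Take U uniform on [0, 1], F_n = σ(first n binary digits of U) and M_n = E[g(U) | F_n], which is the average of g over the dyadic interval of length 2⁻ⁿ containing U; the test function g and the quadrature grid are hypothetical choices for illustration.

    # M_n = E[g(U) | F_n] for the dyadic filtration: a UI martingale with
    # M_infinity = g(U), illustrating THM 15.9 numerically.

    import math
    import random

    def g(u: float) -> float:
        return math.sin(2 * math.pi * u) + u * u

    def M(u: float, n: int, grid: int = 1000) -> float:
        """Average of g over the dyadic interval of length 2^-n containing u."""
        left = math.floor(u * 2**n) / 2**n
        h = 2**-n / grid
        return sum(g(left + (i + 0.5) * h) for i in range(grid)) / grid

    random.seed(2)
    u = random.random()
    for n in [1, 2, 4, 8, 16]:
        print(f"n={n:2d}:  M_n = {M(u, n):+.6f}   (M_infinity = {g(u):+.6f})")
    # M_n converges to g(u), and at every stage M_n = E[M_infinity | F_n]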

4 Applications I

THM (Lévy's upward theorem) Let Z ∈ L¹ and define M_n = E[Z | F_n]. Then M is a UI MG and

    M_n → M_∞ = E[Z | F_∞],

a.s. and in L¹.

Proof: M is a MG by (TOWER). We first show it is UI:

LEM Let X ∈ L¹(Ω, F, P). Then

    {E[X | G] : G is a sub-σ-field of F}

is UI.

Proof: We use the absolute continuity lemma again. Let Y = E[X | G] ∈ G. Since {|Y| > K} ∈ G, by (JENSEN)

    E[|Y|; |Y| > K] = E[|E[X | G]|; |Y| > K] ≤ E[E[|X| | G]; |Y| > K] = E[|X|; |Y| > K].

By Markov and (JENSEN) again,

    P[|Y| > K] ≤ E|Y| / K ≤ E|X| / K < δ,

for K large enough (uniformly in G), so E[|X|; |Y| > K] < ε by the absolute continuity lemma. And we are done.

In particular, we have convergence a.s. and in L¹ to some M_∞ ∈ F_∞. It remains to identify the limit. Let Y = E[Z | F_∞] ∈ F_∞. By dividing into negative and positive parts, we may assume Z ≥ 0. We want to show, for all F ∈ F_∞,

    E[Z; F] = E[M_∞; F].

By the Uniqueness Lemma, it suffices to prove the equality for all F ∈ ∪_n F_n. If F ∈ F_n ⊆ F_∞, then by (TOWER)

    E[Z; F] = E[Y; F] = E[M_n; F] = E[M_∞; F].

The first equality is by definition of Y; the second equality is by definition of M_n; the third equality is from our main theorem. Hence Y = M_∞.

THM (Lévy's 0-1 law) Let A ∈ F_∞. Then

    P[A | F_n] → 1_A.

Proof: Immediate (take Z = 1_A above).

COR (Kolmogorov's 0-1 law) Let X_1, X_2, ... be iid RVs. Recall that the tail σ-field is

    T = ∩_n T_n = ∩_n σ(X_{n+1}, X_{n+2}, ...).

If A ∈ T then P[A] ∈ {0, 1}.

Proof: Since A ∈ T_n is independent of F_n = σ(X_1, ..., X_n),

    P[A | F_n] = P[A],  for all n.

By Lévy's 0-1 law, P[A] = 1_A ∈ {0, 1}.
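
Lévy's 0-1 law can be watched along a single path. In the hypothetical finite-horizon example below, S is a simple symmetric random walk, A = {S_100 > 0} ∈ F_∞, and P[A | F_n] is computed exactly from the binomial distribution of the remaining steps; it drifts to 0 or 1, matching 1_A.

    # P[A | F_n] -> 1_A for A = {S_H > 0}, S a simple symmetric random walk.

    import math
    import random

    H = 100  # horizon defining the event A

    def cond_prob(s: int, n: int) -> float:
        """P[S_H > 0 | S_n = s]: the remaining H - n steps are fair +-1 flips."""
        m = H - n
        return sum(math.comb(m, k) * 0.5**m      # k = number of +1 steps
                   for k in range(m + 1) if s + 2 * k - m > 0)

    random.seed(3)
    path = [0]
    for _ in range(H):
        path.append(path[-1] + random.choice((-1, 1)))

    for n in [0, 25, 50, 75, 90, 100]:
        print(f"n={n:3d}:  P[A | F_n] = {cond_prob(path[n], n):.4f}")
    print("1_A =", int(path[H] > 0))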

5 Applications II

THM (Lévy's Downward Thm) Let Z ∈ L¹(Ω, F, P) and let {G_{-n}}_{n≥0} be a collection of σ-fields such that

    G_{-∞} = ∩_k G_{-k} ⊆ ... ⊆ G_{-n} ⊆ ... ⊆ G_{-1} ⊆ F.

Define M_{-n} = E[Z | G_{-n}]. Then

    M_{-n} → M_{-∞} = E[Z | G_{-∞}],

a.s. and in L¹.

Proof: We apply the same argument as in the Martingale Convergence Thm. Let α < β in Q and

    Λ_{α,β} = {ω : lim inf M_{-n} < α < β < lim sup M_{-n}}.

Note that

    Λ := {ω : M_{-n} does not converge} = {ω : lim inf M_{-n} < lim sup M_{-n}} = ∪_{α<β∈Q} Λ_{α,β}.

Let U_N[α, β] be the number of upcrossings of [α, β] between times −N and −1. Then, by the Upcrossing Lemma applied to the MG M_{-N}, ..., M_{-1},

    (β − α) E U_N[α, β] ≤ |α| + E|M_{-1}| ≤ |α| + E|Z|.

By (MON), U_N[α, β] ↑ U_∞[α, β], and

    (β − α) E U_∞[α, β] ≤ |α| + E|Z| < +∞,

so that P[U_∞[α, β] = ∞] = 0. Since

    Λ_{α,β} ⊆ {U_∞[α, β] = ∞},

we have P[Λ_{α,β}] = 0. By countability, P[Λ] = 0. Therefore we have convergence a.s.

By the lemma in the previous section, {M_{-n}} is UI and hence we have L¹ convergence as well. Finally, for all G ∈ G_{-∞} ⊆ G_{-n},

    E[Z; G] = E[M_{-n}; G].

Take the limit n → +∞ and use the L¹ convergence to conclude that E[Z; G] = E[M_{-∞}; G], i.e., M_{-∞} = E[Z | G_{-∞}].
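
The quantity U_N[α, β] in the proof is easy to compute for a concrete path; here is a small counting routine (illustration only).

    # Count completed upcrossings of [a, b]: excursions from <= a up to >= b.
    # This is the path functional controlled by the Upcrossing Lemma.

    def upcrossings(path, a: float, b: float) -> int:
        count, below = 0, False
        for x in path:
            if x <= a:
                below = True
            elif x >= b and below:
                count += 1
                below = False
        return count

    print(upcrossings([0, -1, 2, 0, -2, 3, 1], a=-0.5, b=1.5))  # -> 2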

5.1 Law of large numbers

An application:

THM (Strong Law; Martingale Proof) Let X_1, X_2, ... be iid RVs with E[X_1] = μ and E|X_1| < +∞. Let S_n = Σ_{i≤n} X_i. Then

    n⁻¹ S_n → μ,

a.s. and in L¹.

Proof: Let

    G_{-n} = σ(S_n, S_{n+1}, S_{n+2}, ...) = σ(S_n, X_{n+1}, X_{n+2}, ...),

and note that, for 1 ≤ i ≤ n,

    E[X_1 | G_{-n}] = E[X_1 | S_n] = E[X_i | S_n] = E[n⁻¹ S_n | S_n] = n⁻¹ S_n,

by symmetry. By Lévy's Downward Thm,

    n⁻¹ S_n → E[X_1 | G_{-∞}],

a.s. and in L¹. But the limit must be trivial by Kolmogorov's 0-1 law (it is measurable with respect to the tail σ-field), and by the L¹ convergence we must have E[X_1 | G_{-∞}] = μ.
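
Both ingredients of this proof can be checked by simulation: the convergence of n⁻¹ S_n along a path, and the symmetry step E[X_1 | S_n] = n⁻¹ S_n. A sketch with hypothetical Bernoulli(0.3) increments:

    # (1) S_n / n settles near mu = 0.3 along one long path;
    # (2) E[X_1 | S_n] = S_n / n, checked by grouping samples on the value of S_n.

    import random
    from collections import defaultdict

    random.seed(4)
    p, n = 0.3, 10

    s = 0
    for i in range(1, 10**6 + 1):
        s += 1 if random.random() < p else 0
        if i in (10**2, 10**4, 10**6):
            print(f"S_n/n at n={i:>7}: {s / i:.4f}")

    stats = defaultdict(lambda: [0, 0])  # S_n -> [count, sum of X_1]
    for _ in range(200_000):
        sample = [1 if random.random() < p else 0 for _ in range(n)]
        rec = stats[sum(sample)]
        rec[0] += 1
        rec[1] += sample[0]
    for sn in sorted(stats):
        cnt, tot = stats[sn]
        if cnt > 5000:
            print(f"S_n={sn}: E[X_1|S_n] ~ {tot / cnt:.3f}  vs  S_n/n = {sn / n:.3f}")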

5.2 Hewitt-Savage

DEF Let X_1, X_2, ... be iid RVs. Let E_n be the σ-field generated by events invariant under permutations of the X's that leave X_{n+1}, X_{n+2}, ... unchanged. The exchangeable σ-field is E = ∩_m E_m.

THM (Hewitt-Savage 0-1 law) Let X_1, X_2, ... be iid RVs. If A ∈ E then P[A] ∈ {0, 1}.

Proof: The idea of the proof is to show that A is independent of itself. Indeed, we then have

    0 = P[A] − P[A ∩ A] = P[A] − P[A] P[A] = P[A](1 − P[A]),

so P[A] ∈ {0, 1}. Since A ∈ E and A ∈ F_∞, it suffices to show that E is independent of F_n for every n (by the π-λ theorem). WTS: for every bounded φ and every B ∈ E,

    E[φ(X_1, ..., X_k); B] = E[φ(X_1, ..., X_k)] P[B] = E[E[φ(X_1, ..., X_k)]; B],

or, equivalently,

    Y := E[φ(X_1, ..., X_k) | E] = E[φ(X_1, ..., X_k)].

It suffices to show that Y is independent of F_k. Indeed, by the L² characterization of conditional expectation and independence,

    0 = E[(φ(X_1, ..., X_k) − Y) Y] = E[φ(X_1, ..., X_k)] E[Y] − E[Y²] = E[Y]² − E[Y²] = −Var[Y],

and Y is a.s. constant, equal to E[Y] = E[φ(X_1, ..., X_k)].

1. Since φ is bounded, it is integrable and Lévy's Downward Thm implies

    E[φ(X_1, ..., X_k) | E_n] → E[φ(X_1, ..., X_k) | E].

2. We make φ exchangeable by averaging over all configurations and taking a limit as n → +∞. Define

    A_n(φ) = (1 / (n)_k) Σ_{1 ≤ i_1 ≠ ··· ≠ i_k ≤ n} φ(X_{i_1}, ..., X_{i_k}),

where (n)_k = n(n−1)···(n−k+1). Note that, by symmetry,

    A_n(φ) = E[A_n(φ) | E_n] = E[φ(X_1, ..., X_k) | E_n] → E[φ(X_1, ..., X_k) | E].

3. The reason we did this is that the first k X's now have little influence on this quantity, so the limit is independent of them. Indeed, the terms involving X_1 contribute at most

    (1 / (n)_k) Σ_{∃ j : i_j = 1} |φ(X_{i_1}, ..., X_{i_k})| ≤ k (n−1)_{k−1} sup|φ| / (n)_k = (k/n) sup|φ| → 0,

so that the limit of A_n(φ) is independent of X_1, i.e.,

    E[φ(X_1, ..., X_k) | E] ∈ σ(X_2, X_3, ...),

and by induction

    Y = E[φ(X_1, ..., X_k) | E] ∈ σ(X_{k+1}, X_{k+2}, ...).

Hence Y is independent of F_k, as required.
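
Steps 2 and 3 of the proof can be seen numerically: the symmetrized average A_n(φ) is close to E[φ(X_1, ..., X_k)], and perturbing X_1 moves it by at most (k/n) sup|φ|. A sketch with a hypothetical bounded test function φ:

    # A_n(phi): average of phi over ordered k-tuples of distinct indices,
    # as in the Hewitt-Savage proof.  For iid U(0,1) and phi = 1{a + b > 1},
    # E[phi] = 1/2, and changing X_1 barely moves A_n(phi).

    import itertools
    import random

    def A_n(xs, phi, k: int) -> float:
        tuples = list(itertools.permutations(range(len(xs)), k))
        return sum(phi(*(xs[i] for i in t)) for t in tuples) / len(tuples)

    random.seed(5)
    k, n = 2, 60
    phi = lambda a, b: float(a + b > 1)
    xs = [random.random() for _ in range(n)]

    print("A_n(phi)               :", round(A_n(xs, phi, k), 4))
    print("A_n with X_1 perturbed :", round(A_n([123.0] + xs[1:], phi, k), 4))
    print("E[phi] (exact)         : 0.5")
    # the two averages differ by at most (k/n) sup|phi| ~ 0.033 here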

6 Optional Sampling

6.1 Review: Stopping times

Recall:

DEF A random variable T : Ω → Z⁺ := {0, 1, ..., +∞} is called a stopping time if

    {T = n} ∈ F_n,  for all n ∈ Z⁺.

EX Let {A_n} be an adapted process and B ∈ B. Then

    T = inf{n ≥ 0 : A_n ∈ B}

is a stopping time (see the sketch at the end of this subsection).

THM Let {M_n} be a MG and T a stopping time. Then M_T is integrable and

    E[M_T] = E[M_0],

if one of the following holds:

1. T is bounded.
2. M is bounded and T is a.s. finite.
3. E[T] < +∞ and M has bounded increments.
4. M is UI. (This one is new. The proof follows from the Optional Sampling Theorem below.)

DEF (F_T) Let T be a stopping time. Denote by F_T the set of all events F such that

    F ∩ {T = n} ∈ F_n,  for all n ∈ Z⁺.
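
A causal implementation of the hitting-time example above, to emphasize why it is a stopping time: deciding {T = n} only requires A_0, ..., A_n. (By contrast, the last visit sup{n : A_n ∈ B} needs the entire path and is not a stopping time in general.) Sketch only:

    # T = inf{n >= 0 : A_n in B} inspects the path one step at a time,
    # which is exactly the stopping-time property {T = n} in F_n.

    def hitting_time(path, B) -> float:
        for n, a in enumerate(path):
            if a in B:
                return n
        return float("inf")  # T = +infinity if B is never hit

    print(hitting_time([5, 3, 7, 2, 7], B={7}))  # -> 2
    print(hitting_time([5, 3, 1], B={7}))        # -> inf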

6.2 More on the σ-field F_T

The following lemmas help clarify the definition of F_T:

LEM F_T = F_n if T ≡ n; F_T = F_∞ if T ≡ ∞; and F_T ⊆ F_∞ for any stopping time T.

Proof: In the first case, note that F ∩ {T = k} is empty if k ≠ n and is F if k = n. So if F ∈ F_T then F = F ∩ {T = n} ∈ F_n, and if F ∈ F_n then F = F ∩ {T = n} ∈ F_n (while F ∩ {T = k} = ∅ ∈ F_k for k ≠ n), so we have proved both inclusions. This works also for n = ∞. For the third claim, note that

    F = ∪_{n ∈ Z⁺} (F ∩ {T = n}) ∈ F_∞.

LEM If X is adapted and T is a stopping time, then X_T ∈ F_T (where we assume that X_∞ ∈ F_∞, e.g., X_∞ = lim inf X_n).

Proof: For B ∈ B,

    {X_T ∈ B} ∩ {T = n} = {X_n ∈ B} ∩ {T = n} ∈ F_n.

LEM If S, T are stopping times, then F_{S∧T} ⊆ F_T.

Proof: Let F ∈ F_{S∧T}. Note that

    F ∩ {T = n} = ∪_{k ≤ n} [(F ∩ {S∧T = k}) ∩ {T = n}] ∈ F_n.

Indeed, the expression in parentheses is in F_k ⊆ F_n and {T = n} ∈ F_n.

6.3 Optional Sampling Theorem (OST)

THM (Optional Sampling Theorem) If M is a UI MG and S, T are stopping times with S ≤ T a.s., then E|M_T| < +∞ and

    E[M_T | F_S] = M_S.

Proof: Since M is UI, there is M_∞ ∈ L¹ such that M_n → M_∞ a.s. and in L¹. We prove a more general claim:

LEM E[M_∞ | F_T] = M_T.

Indeed, we then get the theorem by (TOWER) and (JENSEN): since F_S ⊆ F_T (by the previous lemma, as S = S∧T),

    E[M_T | F_S] = E[E[M_∞ | F_T] | F_S] = E[M_∞ | F_S] = M_S.

Proof (of the lemma): Wlog we assume M_∞ ≥ 0, so that M_n = E[M_∞ | F_n] ≥ 0 for all n. Let F ∈ F_T. Then (trivially)

    E[M_∞; F ∩ {T = ∞}] = E[M_T; F ∩ {T = ∞}],

so STS

    E[M_∞; F ∩ {T < +∞}] = E[M_T; F ∩ {T < +∞}].

In fact, by (MON), STS

    E[M_∞; F ∩ {T ≤ k}] = E[M_T; F ∩ {T ≤ k}] = E[M_{T∧k}; F ∩ {T ≤ k}],  for all k.

To conclude, we make two observations:

1. F ∩ {T ≤ k} ∈ F_{T∧k}. Indeed, if n ≤ k,

    (F ∩ {T ≤ k}) ∩ {T∧k = n} = F ∩ {T = n} ∈ F_n,

and if n > k, it is ∅ ∈ F_n.

2. E[M_∞ | F_{T∧k}] = M_{T∧k}. Since E[M_∞ | F_k] = M_k, STS E[M_k | F_{T∧k}] = M_{T∧k}. But note that, if G ∈ F_{T∧k},

    E[M_k; G] = Σ_{l ≤ k} E[M_k; G ∩ {T∧k = l}] = Σ_{l ≤ k} E[M_l; G ∩ {T∧k = l}] = E[M_{T∧k}; G],

since G ∩ {T∧k = l} ∈ F_l. Combining the two observations gives E[M_∞; F ∩ {T ≤ k}] = E[M_{T∧k}; F ∩ {T ≤ k}], as required.
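
The UI hypothesis here (and condition 4 of the earlier theorem) is not cosmetic. A standard cautionary example, sketched numerically below with hypothetical parameters: for the simple symmetric RW M and T = inf{n : M_n = 1}, T is a.s. finite and M_T = 1, yet E[M_0] = 0. The stopped martingale fails to be UI, and indeed E[M_{T∧N}] stays at 0 while P[T ≤ N] → 1.

    # Monte Carlo: E[M_(T ^ N)] = 0 for every N (optional stopping at a
    # bounded time), even though M_T = 1 a.s.  The missing mass sits on the
    # rare paths still far below 0 at time N, exactly the escape of mass
    # that uniform integrability forbids.

    import random

    random.seed(6)
    N, reps = 10_000, 2_000
    stopped_sum, hits = 0, 0
    for _ in range(reps):
        m = 0
        for _ in range(N):
            m += random.choice((-1, 1))
            if m == 1:
                break
        stopped_sum += m          # equals 1 if stopped, else M_N (often << 0)
        hits += (m == 1)

    print(f"P[T <= {N}] ~ {hits / reps:.3f}")           # close to 1
    print(f"E[M_(T ^ N)] ~ {stopped_sum / reps:+.3f}")  # close to 0, not 1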

6.4 Example: Biased RW

DEF The asymmetric simple RW with parameter 1/2 < p < 1 is the process {S_n}_{n≥0} with S_0 = 0 and S_n = Σ_{k≤n} X_k, where the X_k's are iid in {−1, +1} with P[X_1 = +1] = p. Let q = 1 − p.

Let φ(x) = (q/p)^x and ψ_n(x) = x − (p − q) n.

THM Let {S_n} be as above. Let a < 0 < b. Define T_x = inf{n ≥ 0 : S_n = x}. Then:

1. We have

    P[T_a < T_b] = (φ(b) − φ(0)) / (φ(b) − φ(a)).

In particular, P[T_a < +∞] = 1/φ(a) and P[T_b < +∞] = 1.

2. We have E[T_b] = b / (2p − 1).
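
Before turning to the proof, a quick Monte Carlo sanity check of both formulas (illustration only; the parameters p = 0.6, a = −3, b = 5 are hypothetical choices):

    # Simulate the biased walk and compare with the exact formulas
    # P[T_a < T_b] = (phi(b) - phi(0)) / (phi(b) - phi(a)), phi(x) = (q/p)^x,
    # and E[T_b] = b / (2p - 1).

    import random

    random.seed(7)
    p, a, b = 0.6, -3, 5
    q = 1 - p
    phi = lambda x: (q / p) ** x

    hits_a, tb_total, reps = 0, 0, 20_000
    for _ in range(reps):
        s, t = 0, 0
        while a < s < b:          # run until the walk exits (a, b)
            s += 1 if random.random() < p else -1
            t += 1
        hits_a += (s == a)
        while s < b:              # continue to b; a.s. finite since p > 1/2
            s += 1 if random.random() < p else -1
            t += 1
        tb_total += t             # t = T_b, the first time the walk reaches b

    print(f"P[T_a < T_b]: simulated {hits_a / reps:.4f}, "
          f"formula {(phi(b) - phi(0)) / (phi(b) - phi(a)):.4f}")
    print(f"E[T_b]:       simulated {tb_total / reps:.2f}, "
          f"formula {b / (2 * p - 1):.2f}")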

Proof: There are two MGs here:

    E[φ(S_n) | F_{n−1}] = p (q/p)^{S_{n−1}+1} + q (q/p)^{S_{n−1}−1} = (q + p) (q/p)^{S_{n−1}} = φ(S_{n−1}),

and

    E[ψ_n(S_n) | F_{n−1}] = p [S_{n−1} + 1 − (p−q) n] + q [S_{n−1} − 1 − (p−q) n] = S_{n−1} − (p−q)(n−1) = ψ_{n−1}(S_{n−1}).

Let N = T_a ∧ T_b. Now note that φ(S_{N∧n}) is a bounded MG; therefore, applying the MG property at time n and taking limits as n → ∞ (using (DOM)),

    φ(0) = E[φ(S_N)] = P[T_a < T_b] φ(a) + P[T_a > T_b] φ(b),

where we need to prove that N < +∞ a.s. Indeed, since (b − a) consecutive −1-steps always take us out of (a, b),

    P[N > n(b − a)] ≤ (1 − q^{b−a})^n,

so that

    E[N] = Σ_{k≥0} P[N > k] ≤ Σ_n (b − a)(1 − q^{b−a})^n < +∞.

In particular, N < +∞ a.s. Rearranging the formula above gives the first result. (For P[T_a < +∞], take b → +∞ and use monotonicity; for P[T_b < +∞] = 1, take a → −∞ instead.)

For the second result, note that T_b ∧ n is bounded, so that

    0 = E[ψ_{T_b∧n}(S_{T_b∧n})] = E[S_{T_b∧n}] − (p − q) E[T_b ∧ n].

By (MON), E[T_b ∧ n] ↑ E[T_b]. Finally, using

    P[inf_n S_n ≤ a] = P[T_a < +∞] = 1/φ(a),

and the fact that inf_n S_n ≤ 0, shows that E|inf_n S_n| < +∞. Hence we can use (DOM) with |S_{T_b∧n}| ≤ max{b, −inf_n S_n} to get E[S_{T_b∧n}] → E[S_{T_b}] = b (recall T_b < +∞ a.s.), so that (p − q) E[T_b] = b, that is, E[T_b] = b / (2p − 1).

References

[Dur10] Rick Durrett. Probability: theory and examples. Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge University Press, Cambridge, 2010.

[Wil91] David Williams. Probability with martingales. Cambridge Mathematical Textbooks. Cambridge University Press, Cambridge, 1991.
