Lecture 2
1 Martingales

We now introduce some fundamental tools in martingale theory, which are useful in controlling the fluctuation of martingales.

1.1 Doob's inequality

We have the following maximal inequality of Doob, which allows us to control the fluctuation of the whole path $(X_i)_{0\le i\le n}$ by controlling the fluctuation of the end point $X_n$, when $X$ is a (sub-)martingale.

Theorem 1.1 [Doob's maximal inequality] Let $(X_i)_{i\in\mathbb N}$ be a sub-martingale w.r.t. a filtration $(\mathcal F_i)_{i\in\mathbb N}$. Let $S_n := \max_{1\le i\le n} X_i$ be the running maximum of $X_i$. Then for any $l>0$,
$$P(S_n\ge l)\ \le\ \frac1l\,E\big[X_n^+\,1_{\{S_n\ge l\}}\big]\ \le\ \frac1l\,E[X_n^+], \qquad (1.1)$$
where $X_n^+ := X_n\vee 0$. In particular, if $X_i$ is a martingale and $M_n := \max_{1\le i\le n}|X_i|$, then
$$P(M_n\ge l)\ \le\ \frac1l\,E\big[|X_n|\,1_{\{M_n\ge l\}}\big]\ \le\ \frac1l\,E[|X_n|]. \qquad (1.2)$$

Proof. Let $\tau_l := \inf\{i\ge 1 : X_i\ge l\}$. Then
$$P(S_n\ge l) = \sum_{i=1}^n P(\tau_l = i). \qquad (1.3)$$
For each $1\le i\le n$,
$$P(\tau_l = i) = E\big[1_{\{X_i\ge l\}}1_{\{\tau_l=i\}}\big] \le \frac1l\,E\big[X_i^+ 1_{\{\tau_l=i\}}\big]. \qquad (1.4)$$
Note that $\{\tau_l=i\}\in\mathcal F_i$, and $X_i^+$ is a sub-martingale because $X_i$ itself is a sub-martingale while $\varphi(x)=x^+$ is an increasing convex function. Therefore
$$E\big[X_n^+ 1_{\{\tau_l=i\}}\,\big|\,\mathcal F_i\big] = 1_{\{\tau_l=i\}}\,E[X_n^+\,|\,\mathcal F_i] \ge 1_{\{\tau_l=i\}}\,X_i^+,$$
and hence
$$E\big[X_i^+ 1_{\{\tau_l=i\}}\big] \le E\big[X_n^+ 1_{\{\tau_l=i\}}\big].$$
Substituting this inequality into (1.4) and then summing over $1\le i\le n$ then yields (1.1). (1.2) follows by applying (1.1) to the sub-martingale $|X_i|$.

Corollary 1.2 Let $X$, $S$ and $M$ be as in Theorem 1.1. Then for any $p\ge 1$, we have
$$P(S_n\ge l) \le \frac{1}{l^p}\,E\big[(X_n^+)^p\,1_{\{S_n\ge l\}}\big] \le \frac{1}{l^p}\,E[(X_n^+)^p], \qquad (1.5)$$
$$P(M_n\ge l) \le \frac{1}{l^p}\,E\big[|X_n|^p\,1_{\{M_n\ge l\}}\big] \le \frac{1}{l^p}\,E[|X_n|^p]. \qquad (1.6)$$
Proof. Apply Theorem 1.1 respectively to the sub-martingales $(X_n^+)^p$ and $|X_n|^p$.

The bound we have on $P(S_n\ge l)$ allows us to obtain bounds on the moments of $S_n^+$.

Corollary 1.3 [Doob's $L^p$ maximal inequality] Let $X_i$ and $S_i$ be as in Theorem 1.1. Then for $p>1$, we have
$$E[(S_n^+)^p] \le \Big(\frac{p}{p-1}\Big)^p\,E[(X_n^+)^p]. \qquad (1.7)$$

Proof. Note that by the layer-cake representation of an integral, we have
$$E[(S_n^+)^p] = \int_0^\infty P\big((S_n^+)^p > t\big)\,dt = p\int_0^\infty l^{p-1}\,P(S_n\ge l)\,dl \le p\int_0^\infty l^{p-2}\,E\big[X_n^+\,1_{\{S_n\ge l\}}\big]\,dl$$
$$= p\,E\Big[X_n^+\int_0^{S_n^+} l^{p-2}\,dl\Big] = \frac{p}{p-1}\,E\big[X_n^+(S_n^+)^{p-1}\big] \le \frac{p}{p-1}\,E[(X_n^+)^p]^{\frac1p}\,E[(S_n^+)^p]^{\frac{p-1}{p}},$$
where the last inequality is Hölder's. If $E[(S_n^+)^p]<\infty$, then (1.7) follows immediately. Otherwise, we can first replace $S_n^+$ by $S_n^+\wedge N$ and repeat the above estimates, and then send $N\to\infty$ and apply the monotone convergence theorem.

Example 1.4 Let $X_n = \sum_{i=1}^n \xi_i$ be a random walk with $X_0 = 0$, where $(\xi_i)_{i\in\mathbb N}$ are i.i.d. random variables with $E[\xi_1]=0$ and $E[\xi_1^2]=\sigma^2<\infty$. Then $X_n$ is a square-integrable martingale. Let $S_n := \max_{0\le i\le n}|X_i|$. By Doob's inequalities, we have
$$P\Big(\frac{S_n}{\sqrt n}\ge a\Big) \le \frac{E[X_n^2]}{na^2} = \frac{\sigma^2}{a^2} \qquad\text{and}\qquad E\Big[\Big(\frac{S_n}{\sqrt n}\Big)^2\Big] \le \frac{4\,E[X_n^2]}{n} = 4\sigma^2. \qquad (1.8)$$

Exercise 1.5 Show that when the $\xi_i$ are bounded, the tail bound for $S_n$ in (1.8) can be improved to a Gaussian tail bound as in the Azuma-Hoeffding inequality.

Exercise 1.6 Doob's $L^p$ maximal inequality fails for $p=1$. Indeed, try to construct a non-negative martingale $X_n$ with $E[X_n]=1$, and yet $\sup_{n\in\mathbb N} E[S_n]=\infty$. To get a bound on $E[S_n^+]$, we need a bit more than $E[X_n^+]<\infty$.

Exercise 1.7 Let $X_n$ be a sub-martingale and let $S_n = \max_{1\le i\le n} X_i$. By mimicking the proof of Doob's $L^p$ maximal inequality and by using $a\log b \le a\log a + b/e$ for $a,b>0$, show that
$$E[S_n^+] \le \frac{1 + E\big[X_n^+\log^+(X_n^+)\big]}{1 - e^{-1}},$$
where $\log^+ x := \max\{0,\log x\}$.

1.2 Stopping times

It is often useful to stop a stochastic process at a time which is determined from past observations of the process. Such times are called stopping times.

Definition 1.8 [Stopping time] Given $(\Omega,\mathcal F,P)$, let $(\mathcal F_n)_{n\in\mathbb N}$ be a filtration in $\mathcal F$. A $\{0\}\cup\mathbb N\cup\{\infty\}$-valued random variable $\tau$ is called a stopping time w.r.t. $(\mathcal F_n)_{n\in\mathbb N}$ if for every $n\ge 0$, the event $\{\omega\in\Omega : \tau(\omega)\le n\}\in\mathcal F_n$.
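To make Definition 1.8 concrete, here is a small Python sketch (our own illustration, not from the notes; the function name `first_passage` is ours) of a first passage time for a discrete path: whether $\{\tau\le n\}$ holds is decided by inspecting the path only up to time $n$, which is exactly the stopping-time property.

```python
def first_passage(path, a):
    """First passage time of level a: the smallest n with path[n] >= a,
    or None if the level is never reached (i.e. tau = infinity)."""
    for n, x in enumerate(path):
        if x >= a:
            return n
    return None

# Whether {tau <= n} holds is decided by path[0..n] alone, so tau is a
# stopping time for the filtration generated by the process.
assert first_passage([0, 1, 3, 1, 5], a=3) == 2
assert first_passage([0, 1, 2], a=10) is None
```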
Remark 1.9 Intuitively, a stopping time $\tau$ is a decision on when to stop the stochastic process $(X_n)_{n\in\mathbb N}$, using only information up to that time. Examples of stopping times include: $\tau := k$ for some fixed $k\ge 0$; $\tau := \inf\{n\ge 0 : X_n\ge a\}$ for some $a\in\mathbb R$, i.e., the first passage time of level $a$ for a process $(X_n)_{n\in\mathbb N}$ adapted to $(\mathcal F_n)_{n\in\mathbb N}$; $\tau := \tau_1\wedge\tau_2$ or $\tau := \tau_1\vee\tau_2$, the minimum or maximum of two stopping times $\tau_1$ and $\tau_2$.

Definition 1.10 [Stopped σ-field] Let $\tau$ be a stopping time w.r.t. the filtration $(\mathcal F_n)_{n\in\mathbb N}$. The stopped σ-field $\mathcal F_\tau$ associated with $\tau$ is defined to be
$$\mathcal F_\tau := \big\{A\in\mathcal F : A\cap\{\omega : \tau(\omega)\le n\}\in\mathcal F_n \text{ for all } n\ge 0\big\}. \qquad (1.9)$$

Remark 1.11 $\mathcal F_\tau$ can be interpreted as the information available up to the stopping time $\tau$. For each event $A\in\mathcal F_\tau$, we can determine whether it has occurred or not based on what we have observed about the process up to (and including) time $\tau$.

Exercise 1.12 Verify that $\mathcal F_\tau$ is indeed a σ-algebra, and show that the definitions of stopping times and stopped σ-fields are unchanged if we replace $\tau(\omega)\le n$ by $\tau(\omega)=n$, but not by $\tau(\omega)<n$.

Theorem 1.13 [Stopped martingales are martingales] Let $(X_n)_{n\in\mathbb N}$ be a martingale and $\tau$ a stopping time. Then the stopped martingale $(X_{n\wedge\tau})_{n\in\mathbb N}$ is also a martingale. More generally, if $\theta$ is another stopping time and $\theta\le\tau$, then $X_{n\wedge\tau} - X_{n\wedge\theta}$ is a martingale. If $X$ is a sub/super-martingale, then $X_{n\wedge\tau} - X_{n\wedge\theta}$ is also a sub/super-martingale.

Proof. We will verify that $X_{n\wedge\tau} - X_{n\wedge\theta}$ is a super-martingale when $X$ is a super-martingale. The rest then follows. Note that
$$X_{n\wedge\tau} - X_{n\wedge\theta} = \sum_{i=1}^n 1_{\{\theta< i\le\tau\}}\,(X_i - X_{i-1}),$$
where $\{\theta < i\le\tau\}\in\mathcal F_{i-1}$ since $\{i\le\tau\} = \{\tau\le i-1\}^c\in\mathcal F_{i-1}$ and $\{\theta<i\} = \{\theta\le i-1\}\in\mathcal F_{i-1}$. Therefore $X_{n\wedge\tau} - X_{n\wedge\theta}$ is a martingale transform of $X_n$, and
$$E[X_{n\wedge\tau} - X_{n\wedge\theta}\,|\,\mathcal F_{n-1}] = X_{(n-1)\wedge\tau} - X_{(n-1)\wedge\theta} + 1_{\{\theta<n\le\tau\}}\,E[X_n - X_{n-1}\,|\,\mathcal F_{n-1}] \le X_{(n-1)\wedge\tau} - X_{(n-1)\wedge\theta}.$$
Therefore $X_{n\wedge\tau} - X_{n\wedge\theta}$ is a super-martingale when $X_n$ is a super-martingale.
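Theorem 1.13 implies in particular that $E[X_{n\wedge\tau}] = E[X_0]$ for every $n$. The following Monte Carlo sketch (our own illustration; the sample sizes, seed and tolerance are arbitrary choices) checks this for a $\pm1$ random walk stopped on first exit from $(-5,5)$.

```python
import random

def stopped_walk_mean(n=100, level=5, n_paths=4000, seed=1):
    """Empirical mean of X_{n ^ tau} for a +/-1 random walk started at 0,
    where tau is the first time |X| reaches `level`.  Theorem 1.13 gives
    E[X_{n ^ tau}] = E[X_0] = 0 exactly."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n_paths):
        x = 0
        for _ in range(n):
            if abs(x) >= level:   # tau has already occurred: the path is frozen
                break
            x += 1 if rng.random() < 0.5 else -1
        total += x
    return total / n_paths

m = stopped_walk_mean()
assert abs(m) < 0.3   # equal to 0 up to Monte Carlo noise
```

Since $|X_{n\wedge\tau}|\le 5$ here, the standard error of the empirical mean over 4000 paths is well below the tolerance used.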
If the price of a stock evolves in time as a martingale, then Theorem 1.13 tells us that no matter when we decide to buy and sell the stock, as long as our strategy is based only on past observations, the expected payoff will be zero.

1.3 Upcrossing inequality, almost sure martingale convergence, and Polya's urn

As an application of the notion of stopped martingales, we prove the upcrossing inequality. Let $(X_i)_{0\le i\le n}$ be a super-martingale adapted to the filtration $(\mathcal F_i)_{0\le i\le n}$ on a probability space $(\Omega,\mathcal F,P)$. Let $a<b$. An upcrossing by $X$ over the interval $(a,b)$ is a pair of indices $0\le k<l\le n$ with $X_k\le a$ and $X_l\ge b$. We are interested in the number $U_n$ of complete upcrossings $X$ makes before time $n$. Define recursively
$$\tau_1 := \inf\{i : X_i\le a\}, \qquad \tau_2 := \inf\{i\ge\tau_1 : X_i\ge b\}, \qquad\dots$$
$$\tau_{2k+1} := \inf\{i\ge\tau_{2k} : X_i\le a\}, \qquad \tau_{2k+2} := \inf\{i\ge\tau_{2k+1} : X_i\ge b\}, \qquad\dots,$$
where the infimum of an empty set is taken to be $\infty$. Note that the $\tau_i$ are all stopping times, and the number of completed upcrossings before time $n$ is given by $U_n = \max\{k : \tau_{2k}\le n\}$.

Theorem 1.14 [Upcrossing inequality] Let $(X_i)_{0\le i\le n}$ be a (super-)martingale and let $U_n$ be the number of complete upcrossings over $(a,b)$ defined as above. Then
$$E[U_n] \le \frac{E[(a-X_n)^+]}{b-a} \le \frac{|a| + E[|X_n|]}{b-a}. \qquad (1.10)$$

Proof. By Theorem 1.13, $(X_{i\wedge\tau_{2k}} - X_{i\wedge\tau_{2k-1}})_{0\le i\le n}$ is a super-martingale for each $1\le k\le n$. Therefore
$$E\Big[\sum_{k=1}^n\big(X_{n\wedge\tau_{2k}} - X_{n\wedge\tau_{2k-1}}\big)\Big] \le 0. \qquad (1.11)$$
On the other hand, note that
$$X_{n\wedge\tau_{2k}} - X_{n\wedge\tau_{2k-1}} = \begin{cases} X_{\tau_{2k}} - X_{\tau_{2k-1}} \ge b-a & \text{if } 1\le k\le U_n,\\ X_n - X_{\tau_{2U_n+1}} \ge X_n - a & \text{if } k = U_n+1 \text{ and } \tau_{2U_n+1}\le n,\\ X_n - X_n = 0 & \text{if } k = U_n+1 \text{ and } \tau_{2U_n+1} > n,\\ X_n - X_n = 0 & \text{if } k\ge U_n+2. \end{cases}$$
Therefore
$$\sum_{k=1}^n\big(X_{n\wedge\tau_{2k}} - X_{n\wedge\tau_{2k-1}}\big) \ge (b-a)\,U_n + 1_{\{\tau_{2U_n+1}\le n\}}\,(X_n - a).$$
Taking expectation and combining with (1.11) then yields
$$(b-a)\,E[U_n] \le E\big[(a-X_n)\,1_{\{\tau_{2U_n+1}\le n\}}\big] \le E[(a-X_n)^+], \qquad (1.12)$$
and hence (1.10). To summarize, if $X$ is a super-martingale, then the gain of at least $b-a$ from each completed upcrossing has to be balanced out by a loss elsewhere along the path, which explains why $E[U_n]$ is controlled by $E[|X_n|]$.

Remark 1.15 Note that the numbers of upcrossings and downcrossings differ by at most 1. When $X_n$ is a sub-martingale, the expected number of upcrossings can be bounded in terms of $E[(X_n-a)^+]$. See Theorem 4.(2.9) in Durrett [1].

A consequence of the upcrossing inequality is the almost sure martingale convergence theorem for $L^1$-bounded martingales.
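The count $U_n$ defined via the alternating stopping times above is easy to compute mechanically for a concrete path. The Python helper below is our own illustration (the name `count_upcrossings` is not from the notes): it tracks whether the path is currently seeking a level $\le a$ or a level $\ge b$, exactly mirroring the recursion for $\tau_1,\tau_2,\dots$.

```python
def count_upcrossings(path, a, b):
    """Number of completed upcrossings of (a, b) by `path`, following the
    alternating stopping times tau_1, tau_2, ...: first reach a level <= a,
    then a level >= b, and repeat."""
    count = 0
    seeking_low = True
    for x in path:
        if seeking_low and x <= a:
            seeking_low = False          # an odd tau has occurred
        elif not seeking_low and x >= b:
            count += 1                   # an even tau has occurred
            seeking_low = True
    return count

# Two complete upcrossings of (1, 2): down to 0, up to 3, down to 0, up to 3.
assert count_upcrossings([2, 0, 1, 3, 0, 3], a=1, b=2) == 2
assert count_upcrossings([5, 6, 7], a=1, b=2) == 0
```

Averaging this count over simulated super-martingale paths and comparing with (1.10) is a good sanity check on the inequality.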
Theorem 1.16 [Martingale convergence theorem] Let $(X_n)_{n\in\mathbb N}$ be a (super-)martingale with $\sup_n E[|X_n|]<\infty$. Then $X_\infty := \lim_{n\to\infty}X_n$ exists almost surely, and $E[|X_\infty|]<\infty$. If $X_n$ is a sub-martingale, then the condition becomes $\sup_n E[X_n^+]<\infty$.

Proof. By the upcrossing inequality, for any $a<b$,
$$\sup_{n\in\mathbb N} E[U_n] \le \sup_{n\in\mathbb N}\frac{|a| + E[|X_n|]}{b-a} \le \frac{|a| + \sup_{n\in\mathbb N}E[|X_n|]}{b-a} < \infty. \qquad (1.13)$$
Since $U_n$ almost surely increases to a limit $U(a,b)$, which is the total number of upcrossings by $X$ over $(a,b)$, by Fatou's Lemma $E[U(a,b)]<\infty$ and hence $U(a,b)<\infty$ almost surely. Thus, almost surely, $U(a,b)<\infty$ for all $a<b\in\mathbb Q$, which implies that
$$\limsup_{n\to\infty} X_n = \liminf_{n\to\infty} X_n,$$
and hence $X_\infty := \lim_{n\to\infty}X_n$ exists in $[-\infty,\infty]$ almost surely. By Fatou's Lemma, $E[|X_\infty|] \le \liminf_n E[|X_n|] < \infty$ by assumption. Similarly, in the sub-martingale case, by Fatou and the fact that $E[X_n]\ge E[X_1]$,
$$E[X_\infty^-] \le \liminf_{n\to\infty} E[X_n^-] = \liminf_{n\to\infty}\big(E[X_n^+] - E[X_n]\big) \le \liminf_{n\to\infty}\big(E[X_n^+] - E[X_1]\big) < \infty.$$
Combined with $E[X_\infty^+]\le\liminf_n E[X_n^+]<\infty$, therefore $E[|X_\infty|]<\infty$.

Corollary 1.17 [Almost sure convergence of a non-negative super-martingale] If $(X_n)_{n\in\mathbb N}$ is a non-negative super-martingale, then $X_\infty := \lim_{n\to\infty}X_n$ exists a.s., and $E[X_\infty]\le E[X_1]$.

Remark 1.18 In Corollary 1.17, we could have $E[X_\infty] < E[X_1]$, since part of the measure $X_n\,dP$ could escape to $\infty$. For example, let $X_n = X_0 + \sum_{i=1}^n\xi_i$ be a symmetric simple random walk on $\mathbb Z$ starting from $X_0 = 1$, where the $\xi_i$ are i.i.d. with $P(\xi_i = 1) = P(\xi_i = -1) = \frac12$. Let $\tau := \inf\{n : X_n = 0\}$ be the first hitting time of the origin. Then $X_{n\wedge\tau}$ is a non-negative martingale which converges to a limit $X_\infty$ almost surely. The only possible limit for $X_{n\wedge\tau}$ is either $0$ or $\infty$. Since $E[X_\infty]\le E[X_1] = 1$ by Corollary 1.17, we must have $X_\infty = 0$ almost surely. Note that this also implies that $\tau<\infty$ almost surely, so that the symmetric simple random walk always visits $0$, which is a property called recurrence.

Exercise 1.19 Show by example that a non-negative sub-martingale need not converge almost surely.
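The recurrence conclusion of Remark 1.18 can be illustrated numerically: with a long but finite horizon, nearly every simulated walk started at 1 has already visited 0. The sketch below is our own illustration; the horizon, path count and seed are arbitrary choices.

```python
import random

def fraction_hitting_zero(start=1, horizon=10_000, n_paths=500, seed=2):
    """Fraction of symmetric simple random walks started at `start` that
    visit 0 within `horizon` steps.  Recurrence (Remark 1.18) says this
    fraction tends to 1 as the horizon grows."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_paths):
        x = start
        for _ in range(horizon):
            if x == 0:
                hits += 1
                break
            x += 1 if rng.random() < 0.5 else -1
        # paths that did not reach 0 within the horizon are not counted
    return hits / n_paths

frac = fraction_hitting_zero()
assert frac > 0.9
```

Note that while $\tau<\infty$ almost surely, $E[\tau]=\infty$ for this walk, so a small fraction of paths will still be above 0 at any fixed horizon.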
As an application of Corollary 1.17, we have the following dichotomy between convergence and unbounded oscillation for martingales with bounded increments.

Theorem 1.20 Let $(X_n)_{n\in\mathbb N}$ be a martingale with $|X_{n+1}-X_n|\le M<\infty$ a.s. for all $n\ge 0$. Then almost surely, either $\lim_{n\to\infty}X_n$ exists and is finite, or $\limsup_{n\to\infty}X_n = \infty$ and $\liminf_{n\to\infty}X_n = -\infty$.
Proof. For $L<0$, let $\tau_L := \inf\{n : X_n\le L\}$, which is a stopping time. By assumption, $X_{\tau_L}\ge L-M$. Therefore $X_{n\wedge\tau_L} - L + M$ is a non-negative martingale, which converges a.s. to a finite limit. In particular, on the event $\{\tau_L=\infty\}$, $\lim_{n\to\infty}X_n$ exists and is finite. Letting $L\to-\infty$, it follows that on the event $\{\liminf_{n\to\infty}X_n > -\infty\}$, $\lim_{n\to\infty}X_n$ exists and is finite. Applying the same argument to $-X_n$ implies that $\lim_{n\to\infty}X_n$ exists and is finite on the event $\{\limsup_{n\to\infty}X_n < \infty\}$. The theorem then follows.

Exercise 1.21 Construct an example where the conclusion in Theorem 1.20 fails if the bounded increment assumption is removed. (Hint: Let $X_n = \sum_{i=1}^n\xi_i$, where the $\xi_i$ are independent with mean $0$ but not i.i.d., such that $X_n\to-\infty$ almost surely.)

Recall that the Borel-Cantelli lemma states: If $(A_n)_{n\in\mathbb N}\subset\mathcal F$ satisfy $\sum_n P(A_n)<\infty$, then almost surely, $(A_n)_{n\in\mathbb N}$ occurs only finitely many times. If the $(A_n)_{n\in\mathbb N}$ are all independent, then $\sum_n P(A_n)=\infty$ guarantees that almost surely, $(A_n)_{n\in\mathbb N}$ occurs infinitely often. Using Theorem 1.20, we give a proof of the second Borel-Cantelli lemma which allows dependence of events.

Corollary 1.22 [Second Borel-Cantelli Lemma] Let $(\mathcal F_n)_{n\ge 0}$ be a filtration on the probability space $(\Omega,\mathcal F,P)$ with $\mathcal F_0 = \{\emptyset,\Omega\}$. Let $A_n\in\mathcal F_n$ for $n\ge 1$. Then
$$\Big\{\omega : \sum_{n=1}^\infty 1_{A_n}(\omega) = \infty\Big\} = \Big\{\omega : \sum_{n=1}^\infty P(A_n\,|\,\mathcal F_{n-1}) = \infty\Big\} \qquad (1.14)$$
modulo a set of probability $0$. When the $A_n$ are independent, we retrieve the classic Borel-Cantelli lemma.

Proof. Let $X_n := \sum_{i=1}^n\big(1_{A_i}(\omega) - P(A_i\,|\,\mathcal F_{i-1})\big)$. Note that $X_n$ is a martingale with bounded increments. Therefore by Theorem 1.20, either $X_n$ converges to a finite limit, in which case $\sum_{n=1}^\infty 1_{A_n}(\omega) = \infty$ if and only if $\sum_{n=1}^\infty P(A_n\,|\,\mathcal F_{n-1}) = \infty$; or $X_n$ oscillates between $\pm\infty$, in which case we have $\sum_{n=1}^\infty 1_{A_n}(\omega) = \sum_{n=1}^\infty P(A_n\,|\,\mathcal F_{n-1}) = \infty$.

We now study the example of Polya's urn using martingale techniques.

Example 1.23 [Polya's urn] Let an urn initially contain $b$ black balls and $w$ white balls. Each time, we pick a ball from the urn with uniform probability, and we put back in the urn 2 balls of the same color.
Obviously the number of balls in the urn increases by one at each time step. A natural question is: what is the fraction $X_n$ of black balls in the urn after time step $n$, and what is the asymptotic distribution of $X_n$ as $n\to\infty$? Note that $X_0 = \frac{b}{b+w}$.

It turns out that $X_n$ is a martingale. To check this, assume that out of the $b+w+n$ balls at time $n$, $j$ balls are black and $k = b+w+n-j$ balls are white. Then $X_n = \frac{j}{j+k}$, and
$$E[X_{n+1}\,|\,X_1,\dots,X_n] = \frac{j}{j+k}\cdot\frac{j+1}{j+k+1} + \frac{k}{j+k}\cdot\frac{j}{j+k+1} = \frac{j}{j+k} = X_n.$$
Since $X_n$ is non-negative, by the martingale convergence theorem, almost surely $X_n$ converges to a limit $X_\infty\in[0,1]$.
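A quick simulation of Polya's urn (our own sketch; the run length, number of runs and seed are arbitrary) illustrates the martingale property: for $b=w=1$ the empirical mean of $X_n$ stays at $X_0 = \frac12$, even though individual runs converge to very different limits spread over $[0,1]$.

```python
import random

def polya_fractions(b=1, w=1, steps=200, n_runs=3000, seed=3):
    """Final fractions X_n of black balls over independent runs of
    Polya's urn started with b black and w white balls."""
    rng = random.Random(seed)
    fractions = []
    for _ in range(n_runs):
        black, white = b, w
        for _ in range(steps):
            if rng.random() < black / (black + white):
                black += 1    # drew black: return it together with one more black
            else:
                white += 1    # drew white: return it together with one more white
        fractions.append(black / (black + white))
    return fractions

xs = polya_fractions()
mean = sum(xs) / len(xs)
# Martingale property: E[X_n] = X_0 = 1/2, while for b = w = 1 the limit is
# uniform on [0, 1], so individual runs spread over the whole interval.
assert abs(mean - 0.5) < 0.05
assert min(xs) < 0.2 and max(xs) > 0.8
```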
Polya's urn has the special property of exchangeability. If $Y_n$ denotes the indicator of the event that the ball drawn at the $n$-th time step is black, then the distribution of $(Y_n)_{n\in\mathbb N}$ is invariant under finite permutations of the indices. In particular, if $(Y_i)_{1\le i\le n}$ and $(\tilde Y_i)_{1\le i\le n}$ are two different realizations of Polya's urn up to time $n$ with $\sum_{i=1}^n Y_i = \sum_{i=1}^n\tilde Y_i = m$, then observe that
$$P\big((Y_i)_{1\le i\le n}\big) = P\big((\tilde Y_i)_{1\le i\le n}\big) = \frac{\prod_{i=1}^m(b+i-1)\,\prod_{i=1}^{n-m}(w+i-1)}{\prod_{i=1}^n(b+w+i-1)}.$$
Assume $b=w=1$ for simplicity. Then after the $n$-th time step, for each $0\le j\le n$,
$$P\Big(X_n = \frac{j+1}{n+2}\Big) = \binom{n}{j}\,\frac{j!\,(n-j)!}{(n+1)!} = \frac{1}{n+1}.$$
Therefore $X_\infty$ is uniformly distributed on $[0,1]$. Check that for general $b,w>0$, $X_\infty$ follows the beta distribution with parameters $b,w$ and density
$$\frac{\Gamma(b+w)}{\Gamma(b)\Gamma(w)}\,x^{b-1}(1-x)^{w-1}, \qquad (1.15)$$
where $\Gamma(x)$ is the gamma function.

References

[1] R. Durrett, Probability: Theory and Examples, 2nd edition, Duxbury Press, Belmont, California.
[2] S.R.S. Varadhan, Probability Theory, Courant Lecture Notes 7, American Mathematical Society, Providence, Rhode Island.
More informationMarkov Chains CK eqns Classes Hitting times Rec./trans. Strong Markov Stat. distr. Reversibility * Markov Chains
Markov Chains A random process X is a family {X t : t T } of random variables indexed by some set T. When T = {0, 1, 2,... } one speaks about a discrete-time process, for T = R or T = [0, ) one has a continuous-time
More informationOn Pólya Urn Scheme with Infinitely Many Colors
On Pólya Urn Scheme with Infinitely Many Colors DEBLEENA THACKER Indian Statistical Institute, New Delhi Joint work with: ANTAR BANDYOPADHYAY, Indian Statistical Institute, New Delhi. Genaralization of
More information1 Math 285 Homework Problem List for S2016
1 Math 85 Homework Problem List for S016 Note: solutions to Lawler Problems will appear after all of the Lecture Note Solutions. 1.1 Homework 1. Due Friay, April 8, 016 Look at from lecture note exercises:
More informationLecture 21: Expectation of CRVs, Fatou s Lemma and DCT Integration of Continuous Random Variables
EE50: Probability Foundations for Electrical Engineers July-November 205 Lecture 2: Expectation of CRVs, Fatou s Lemma and DCT Lecturer: Krishna Jagannathan Scribe: Jainam Doshi In the present lecture,
More information(6, 4) Is there arbitrage in this market? If so, find all arbitrages. If not, find all pricing kernels.
Advanced Financial Models Example sheet - Michaelmas 208 Michael Tehranchi Problem. Consider a two-asset model with prices given by (P, P 2 ) (3, 9) /4 (4, 6) (6, 8) /4 /2 (6, 4) Is there arbitrage in
More informationProbability and Measure
Chapter 4 Probability and Measure 4.1 Introduction In this chapter we will examine probability theory from the measure theoretic perspective. The realisation that measure theory is the foundation of probability
More information4 Expectation & the Lebesgue Theorems
STA 7: Probability & Measure Theory Robert L. Wolpert 4 Expectation & the Lebesgue Theorems Let X and {X n : n N} be random variables on the same probability space (Ω,F,P). If X n (ω) X(ω) for each ω Ω,
More informationSTA205 Probability: Week 8 R. Wolpert
INFINITE COIN-TOSS AND THE LAWS OF LARGE NUMBERS The traditional interpretation of the probability of an event E is its asymptotic frequency: the limit as n of the fraction of n repeated, similar, and
More informationLecture 6. 2 Recurrence/transience, harmonic functions and martingales
Lecture 6 Classification of states We have shown that all states of an irreducible countable state Markov chain must of the same tye. This gives rise to the following classification. Definition. [Classification
More informationADVANCED PROBABILITY: SOLUTIONS TO SHEET 1
ADVANCED PROBABILITY: SOLUTIONS TO SHEET 1 Last compiled: November 6, 213 1. Conditional expectation Exercise 1.1. To start with, note that P(X Y = P( c R : X > c, Y c or X c, Y > c = P( c Q : X > c, Y
More informationStochastic Processes in Discrete Time
Stochastic Processes in Discrete Time May 5, 2005 Contents Basic Concepts for Stochastic Processes 3. Conditional Expectation....................................... 3.2 Stochastic Processes, Filtrations
More informationRisk-Minimality and Orthogonality of Martingales
Risk-Minimality and Orthogonality of Martingales Martin Schweizer Universität Bonn Institut für Angewandte Mathematik Wegelerstraße 6 D 53 Bonn 1 (Stochastics and Stochastics Reports 3 (199, 123 131 2
More informationAdmin and Lecture 1: Recap of Measure Theory
Admin and Lecture 1: Recap of Measure Theory David Aldous January 16, 2018 I don t use bcourses: Read web page (search Aldous 205B) Web page rather unorganized some topics done by Nike in 205A will post
More informationRENEWAL THEORY STEVEN P. LALLEY UNIVERSITY OF CHICAGO. X i
RENEWAL THEORY STEVEN P. LALLEY UNIVERSITY OF CHICAGO 1. RENEWAL PROCESSES A renewal process is the increasing sequence of random nonnegative numbers S 0,S 1,S 2,... gotten by adding i.i.d. positive random
More informationMinimal Supersolutions of Backward Stochastic Differential Equations and Robust Hedging
Minimal Supersolutions of Backward Stochastic Differential Equations and Robust Hedging SAMUEL DRAPEAU Humboldt-Universität zu Berlin BSDEs, Numerics and Finance Oxford July 2, 2012 joint work with GREGOR
More informationLecture 1 Measure concentration
CSE 29: Learning Theory Fall 2006 Lecture Measure concentration Lecturer: Sanjoy Dasgupta Scribe: Nakul Verma, Aaron Arvey, and Paul Ruvolo. Concentration of measure: examples We start with some examples
More informationStochastic Processes II/ Wahrscheinlichkeitstheorie III. Lecture Notes
BMS Basic Course Stochastic Processes II/ Wahrscheinlichkeitstheorie III Michael Scheutzow Lecture Notes Technische Universität Berlin Sommersemester 218 preliminary version October 12th 218 Contents
More information1 Stat 605. Homework I. Due Feb. 1, 2011
The first part is homework which you need to turn in. The second part is exercises that will not be graded, but you need to turn it in together with the take-home final exam. 1 Stat 605. Homework I. Due
More informationExercise Exercise Homework #6 Solutions Thursday 6 April 2006
Unless otherwise stated, for the remainder of the solutions, define F m = σy 0,..., Y m We will show EY m = EY 0 using induction. m = 0 is obviously true. For base case m = : EY = EEY Y 0 = EY 0. Now assume
More informationDETERMINISTIC AND STOCHASTIC SELECTION DYNAMICS
DETERMINISTIC AND STOCHASTIC SELECTION DYNAMICS Jörgen Weibull March 23, 2010 1 The multi-population replicator dynamic Domain of analysis: finite games in normal form, G =(N, S, π), with mixed-strategy
More informationn! (k 1)!(n k)! = F (X) U(0, 1). (x, y) = n(n 1) ( F (y) F (x) ) n 2
Order statistics Ex. 4. (*. Let independent variables X,..., X n have U(0, distribution. Show that for every x (0,, we have P ( X ( < x and P ( X (n > x as n. Ex. 4.2 (**. By using induction or otherwise,
More informationStochastic integral. Introduction. Ito integral. References. Appendices Stochastic Calculus I. Geneviève Gauthier.
Ito 8-646-8 Calculus I Geneviève Gauthier HEC Montréal Riemann Ito The Ito The theories of stochastic and stochastic di erential equations have initially been developed by Kiyosi Ito around 194 (one of
More informationarxiv: v1 [math.pr] 6 Jan 2014
Recurrence for vertex-reinforced random walks on Z with weak reinforcements. Arvind Singh arxiv:40.034v [math.pr] 6 Jan 04 Abstract We prove that any vertex-reinforced random walk on the integer lattice
More information