Chapter 2 Martingales

2.1 Definition and examples

In this chapter we introduce and study a very important class of stochastic processes: the so-called martingales. Martingales arise naturally in many branches of the theory of stochastic processes. In particular, they are very helpful tools in the study of BM. In this section, the index set $T$ is an arbitrary interval of $\mathbb{Z}_+$ or of $\mathbb{R}_+$.

Definition 2.1.1 An $(\mathcal{F}_t)$-adapted, real-valued process $M$ is called a martingale (with respect to the filtration $(\mathcal{F}_t)$) if
i) $E|M_t| < \infty$ for all $t \in T$;
ii) $E(M_t \mid \mathcal{F}_s) = M_s$ a.s. for all $s \le t$.
If property (ii) holds with $\ge$ (resp. $\le$) instead of $=$, then $M$ is called a submartingale (resp. supermartingale).

Intuitively, a martingale is a process that is constant on average: given all information up to time $s$, the best guess for the value of the process at time $t \ge s$ is simply the current value $M_s$. In particular, property (ii) implies that $EM_t = EM_0$ for all $t \in T$. Likewise, a submartingale is a process that increases on average, and a supermartingale decreases on average. Clearly, $M$ is a submartingale if and only if $-M$ is a supermartingale, and $M$ is a martingale if and only if it is both a submartingale and a supermartingale.

The basic properties of conditional expectations give us the following result and examples. Review BN 7, Conditional expectations.

N.B. Let $T = \mathbb{Z}_+$. The tower property implies that the (sub-, super-)martingale property (ii) is implied by
(ii') $E(M_{n+1} \mid \mathcal{F}_n) = M_n$ (resp. $\ge$, $\le$) a.s. for all $n \in \mathbb{Z}_+$.

Example 2.1.2 Let $X_n$, $n = 1, 2, \ldots$, be a sequence of i.i.d. real-valued integrable random variables and take e.g. the filtration $\mathcal{F}_n = \sigma(X_1, \ldots, X_n)$. Then $M_n = \sum_{k=1}^n X_k$ is a martingale if $EX_1 = 0$, a submartingale if $EX_1 > 0$ and a supermartingale if $EX_1 < 0$. The process $M = (M_n)_n$ can be viewed as a random walk on the real line. If $EX_1 = 0$ and $X_1$ is moreover square integrable, then $\tilde{M}_n = M_n^2 - n\,EX_1^2$ is also a martingale.
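The random-walk example is easy to check numerically. The following minimal Python sketch (my own illustration, not part of the original notes; the step distribution and sample sizes are arbitrary choices) estimates $EM_n$ and $E(M_n^2 - n\,EX_1^2)$ by Monte Carlo; both should stay near $EM_0 = 0$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Example 2.1.2 numerically: for centred i.i.d. steps with EX_1 = 0 and
# EX_1^2 = 1, both M_n and M_n^2 - n are martingales, so their means stay at 0.
n_paths, n_steps = 100_000, 50
X = rng.choice([-1.0, 1.0], size=(n_paths, n_steps))
M = X.cumsum(axis=1)

print(M[:, -1].mean())                   # ~ 0  (martingale: EM_n = EM_0)
print((M[:, -1] ** 2 - n_steps).mean())  # ~ 0  (EM_n^2 = n * EX_1^2)
```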

Example 2.1.3 (Doob martingale) Suppose that $X$ is an integrable random variable and $(\mathcal{F}_t)_{t \in T}$ a filtration. For $t \in T$, define $M_t = E(X \mid \mathcal{F}_t)$, or, more precisely, let $M_t$ be a version of $E(X \mid \mathcal{F}_t)$. Then $M = (M_t)_{t \in T}$ is an $(\mathcal{F}_t)$-martingale and $M$ is uniformly integrable (see Exercise 2.1). Review BN 8, Uniform integrability.

Example 2.1.4 Suppose that $M$ is a martingale and that $\varphi$ is a convex function such that $E|\varphi(M_t)| < \infty$ for all $t \in T$. Then the process $\varphi(M)$ is a submartingale. The same is true if $M$ is a submartingale and $\varphi$ is an increasing, convex function (see Exercise 2.2).

BM generates many examples of martingales. The most important ones are given in the following example.

Example 2.1.5 Let $W$ be a BM. Then the following processes are martingales with respect to the same filtration:
i) $W$ itself;
ii) $W_t^2 - t$;
iii) for every $a \in \mathbb{R}$ the process $\exp\{aW_t - a^2 t/2\}$.
You are asked to prove this in Exercise 2.3.

Example 2.1.6 Let $N$ be a Poisson process with rate $\lambda$. Then $(N_t - \lambda t)_t$ is a martingale.

In the next section we first develop the theory for discrete-time martingales. The generalisation to continuous time is discussed in Section 2.3. In Section 2.4 we continue our study of BM.

2.2 Discrete-time martingales

In this section we restrict ourselves to martingales (and filtrations) that are indexed by (a subinterval of) $\mathbb{Z}_+$. We will assume the underlying filtered probability space $(\Omega, \mathcal{F}, (\mathcal{F}_n)_n, P)$ to be fixed. Note that as a consequence, it only makes sense to consider $\mathbb{Z}_+$-valued stopping times. In discrete time, $\tau$ is a stopping time with respect to the filtration $(\mathcal{F}_n)_{n \in \mathbb{Z}_+}$ if $\{\tau \le n\} \in \mathcal{F}_n$ for all $n \in \mathbb{Z}_+$.

2.2.1 Martingale transforms

If the value of a process at time $n$ is already known at time $n-1$, we call the process predictable. The precise definition is as follows.

Definition 2.2.1 We call a discrete-time process $X$ predictable with respect to the filtration $(\mathcal{F}_n)_n$ if $X_n$ is $\mathcal{F}_{n-1}$-measurable for every $n$.

In the following definition we introduce discrete-time integrals, a useful tool in martingale theory.

Definition 2.2.2 Let $M$ and $X$ be two discrete-time processes. We define the process $X \cdot M$ by $(X \cdot M)_0 = 0$ and, for $n \ge 1$,
$$(X \cdot M)_n = \sum_{k=1}^n X_k (M_k - M_{k-1}).$$
We call $X \cdot M$ the discrete integral of $X$ with respect to $M$. If $M$ is a (sub-, super-)martingale, it is often called the martingale transform of $M$ by $X$.

One can view martingale transforms as a discrete version of the Itô integral. The predictability plays a crucial role in the construction of the Itô integral.

The following lemma explains why these integrals are so useful: the integral of a predictable process with respect to a martingale is again a martingale.

Lemma 2.2.3 Let $X$ be a predictable process such that for all $n$ there exists a constant $K_n$ with $|X_1|, \ldots, |X_n| \le K_n$. If $M$ is a martingale, then $X \cdot M$ is a martingale. If $M$ is a submartingale (resp. a supermartingale) and $X$ is non-negative, then $X \cdot M$ is a submartingale (resp. supermartingale) as well.

Proof. Put $Y = X \cdot M$. Clearly $Y$ is adapted. Since $|X_k| \le K_n$ for $k \le n$, we have $E|Y_n| \le 2K_n \sum_{k \le n} E|M_k| < \infty$. Now suppose first that $M$ is a submartingale and $X$ is non-negative. Then a.s.
$$E(Y_n \mid \mathcal{F}_{n-1}) = E(Y_{n-1} + X_n(M_n - M_{n-1}) \mid \mathcal{F}_{n-1}) = Y_{n-1} + X_n E(M_n - M_{n-1} \mid \mathcal{F}_{n-1}) \ge Y_{n-1}.$$
Consequently, $Y$ is a submartingale. If $M$ is a martingale, the last inequality is an equality, irrespective of the sign of $X_n$. This implies that $Y$ is then a martingale as well.

Using this lemma, it is easy to see that a stopped (sub-, super-)martingale is again a (sub-, super-)martingale.

Theorem 2.2.4 Let $M$ be a (sub-, super-)martingale and $\tau$ an $(\mathcal{F}_n)$-stopping time. Then the stopped process $M^\tau$ is a (sub-, super-)martingale as well.

Proof. Define the process $X$ by $X_n = 1_{\{\tau \ge n\}}$. Verify that $M^\tau = M_0 + X \cdot M$. Since $\tau$ is a stopping time, we have $\{\tau \ge n\} = \{\tau \le n-1\}^c \in \mathcal{F}_{n-1}$. Hence the process $X$ is predictable. It is also a bounded process, and so the statement follows from the preceding lemma.

We will also give a direct proof. First note that $E|M^\tau_t| = E|M_{t \wedge \tau}| \le \sum_{n=0}^t E|M_n| < \infty$ for $t \in T$. Write
$$M^\tau_t = M_{t \wedge \tau} = \Big( \sum_{n=0}^{t-1} 1_{\{\tau = n\}} + 1_{\{\tau \ge t\}} \Big) M_{t \wedge \tau} = \sum_{n=0}^{t-1} M_n 1_{\{\tau = n\}} + M_t 1_{\{\tau \ge t\}}.$$

Taking conditional expectations yields
$$E(M^\tau_t \mid \mathcal{F}_{t-1}) = \sum_{n=0}^{t-1} M_n 1_{\{\tau = n\}} + 1_{\{\tau \ge t\}} E(M_t \mid \mathcal{F}_{t-1}),$$
since $\{\tau \ge t\} \in \mathcal{F}_{t-1}$. The rest follows immediately.

The following result can be viewed as a first version of the so-called optional sampling theorem. The general version will be discussed in Section 2.2.5.

Theorem 2.2.5 Let $M$ be a (sub)martingale and let $\sigma, \tau$ be two stopping times such that $\sigma \le \tau \le K$ for some constant $K > 0$. Then
$$E(M_\tau \mid \mathcal{F}_\sigma) = (\ge)\, M_\sigma \quad \text{a.s.} \qquad (2.2.1)$$
An adapted integrable process $M$ is a martingale if and only if $EM_\tau = EM_\sigma$ for any pair of bounded stopping times $\sigma \le \tau$.

Proof. Suppose first that $M$ is a martingale. Define the predictable process $X_n = 1_{\{\tau \ge n\}} - 1_{\{\sigma \ge n\}}$. Note that $X_n \ge 0$ a.s.! Hence $X \cdot M = M^\tau - M^\sigma$. By Lemma 2.2.3 the process $X \cdot M$ is a martingale, hence $E(M_{\tau \wedge n} - M_{\sigma \wedge n}) = E(X \cdot M)_n = 0$ for all $n$. Since $\sigma \le \tau \le K$ a.s., it follows that $EM_\tau = EM_{\tau \wedge K} = EM_{\sigma \wedge K} = EM_\sigma$.

Now we take $A \in \mathcal{F}_\sigma$ and define the truncated random times
$$\sigma_A = \sigma 1_A + K 1_{A^c}, \qquad \tau_A = \tau 1_A + K 1_{A^c}. \qquad (2.2.2)$$
By definition of $\mathcal{F}_\sigma$ it holds for every $n$ that
$$\{\sigma_A \le n\} = (A \cap \{\sigma \le n\}) \cup (A^c \cap \{K \le n\}) \in \mathcal{F}_n,$$
and so $\sigma_A$ is a stopping time. Similarly, $\tau_A$ is a stopping time, and clearly $\sigma_A \le \tau_A \le K$ a.s. By the first part of the proof it follows that $EM_{\sigma_A} = EM_{\tau_A}$, in other words
$$\int_A M_\sigma \, dP + \int_{A^c} M_K \, dP = \int_A M_\tau \, dP + \int_{A^c} M_K \, dP, \qquad (2.2.3)$$
by which $\int_A M_\sigma \, dP = \int_A M_\tau \, dP$. Since $A \in \mathcal{F}_\sigma$ is arbitrary, $E(M_\tau \mid \mathcal{F}_\sigma) = M_\sigma$ a.s. (recall that $M_\sigma$ is $\mathcal{F}_\sigma$-measurable, cf. Lemma 1.6.14).

Conversely, let $M$ be an adapted process with $EM_\sigma = EM_\tau$ for each bounded pair of stopping times $\sigma \le \tau$. Take $\sigma = n-1$ and $\tau = n$ in the preceding and use the truncated stopping times $\sigma_A$ and $\tau_A$ as in (2.2.2) for $A \in \mathcal{F}_{n-1}$. Then (2.2.3) for $A \in \mathcal{F}_{n-1}$ and the stopping times $\sigma_A$ and $\tau_A$ implies that $E(M_n \mid \mathcal{F}_{n-1}) = M_{n-1}$ a.s. In other words, $M$ is a martingale.

If $M$ is a submartingale, the same reasoning applies, but with inequalities instead of equalities. As in the previous lemma, we will also give a direct proof of (2.2.1). First note that
$$E(M_K \mid \mathcal{F}_n) \ge M_n \ \text{a.s.} \iff E M_K 1_F \ge E M_n 1_F \ \text{for all } F \in \mathcal{F}_n \qquad (2.2.4)$$

(this follows from Exercise 2.7(a)). We will first show that
$$E(M_K \mid \mathcal{F}_\sigma) \ge M_\sigma \quad \text{a.s.} \qquad (2.2.5)$$
Similarly to (2.2.4), it is sufficient to show that $E 1_F M_\sigma \le E 1_F M_K$ for all $F \in \mathcal{F}_\sigma$. Now,
$$E 1_F M_\sigma = E 1_F \Big( \sum_{n=0}^K 1_{\{\sigma = n\}} + 1_{\{\sigma > K\}} \Big) M_\sigma = \sum_{n=0}^K E 1_{F \cap \{\sigma = n\}} M_n \le \sum_{n=0}^K E 1_{F \cap \{\sigma = n\}} M_K = E 1_F \Big( \sum_{n=0}^K 1_{\{\sigma = n\}} + 1_{\{\sigma > K\}} \Big) M_K = E 1_F M_K.$$
In the second and fourth equalities we have used that $E 1_{\{\sigma > K\}} M_\sigma = E 1_{\{\sigma > K\}} M_K = 0$, since $P\{\sigma > K\} = 0$. In the third step, the inequality, we have used (2.2.4) and the fact that $F \cap \{\sigma = n\} \in \mathcal{F}_n$ (why?). This shows the validity of (2.2.5).

Apply (2.2.5) to the stopped process $M^\tau$. This yields
$$E(M^\tau_K \mid \mathcal{F}_\sigma) \ge M^\tau_\sigma.$$
Now note that $M^\tau_K = M_\tau$ a.s. and $M^\tau_\sigma = M_\sigma$ a.s. (why?). This shows (2.2.1). Note that we may in fact allow $\sigma \le \tau \le K$ to hold only a.s.; later on we will need $\sigma \le \tau$ everywhere.

2.2.2 Inequalities

Markov's inequality implies that if $M$ is a discrete-time process, then $\lambda P\{|M_n| \ge \lambda\} \le E|M_n|$ for all $n \in \mathbb{Z}_+$ and $\lambda > 0$. Doob's classical submartingale inequality states that for submartingales we have a much stronger result.

Theorem 2.2.6 (Doob's submartingale inequality) Let $M$ be a submartingale. For all $\lambda > 0$ and $n \in \mathbb{N}$,
$$\lambda P\{\max_{k \le n} M_k \ge \lambda\} \le E M_n 1_{\{\max_{k \le n} M_k \ge \lambda\}} \le E|M_n|.$$

Proof. Define $\tau = n \wedge \inf\{k : M_k \ge \lambda\}$. This is a stopping time (see Lemma 1.6.7) with $\tau \le n$. By Theorem 2.2.5 we have $EM_n \ge EM_\tau$. It follows that
$$EM_n \ge EM_\tau = E M_\tau 1_{\{\max_{k \le n} M_k \ge \lambda\}} + E M_\tau 1_{\{\max_{k \le n} M_k < \lambda\}} \ge \lambda P\{\max_{k \le n} M_k \ge \lambda\} + E M_n 1_{\{\max_{k \le n} M_k < \lambda\}}.$$
This yields the first inequality. The second one is obvious.
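As a quick numerical sanity check of Theorem 2.2.6 (my own illustration, not part of the notes; the walk, threshold and sample sizes are arbitrary), one can apply the inequality to the submartingale $|M|$ for a simple random walk $M$:

```python
import numpy as np

rng = np.random.default_rng(1)

# Doob's submartingale inequality for |M|, M a simple random walk:
# lam * P(max_{k<=n} |M_k| >= lam) <= E|M_n|.
n, lam, n_paths = 100, 25, 50_000
M = rng.choice([-1.0, 1.0], size=(n_paths, n)).cumsum(axis=1)
lhs = lam * (np.abs(M).max(axis=1) >= lam).mean()
rhs = np.abs(M[:, -1]).mean()        # ~ sqrt(2n/pi) for the simple walk
print(lhs, "<=", rhs)
```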

Theorem 2.2.7 (Doob's $L^p$ inequality) If $M$ is a martingale or a non-negative submartingale and $p > 1$, then for all $n \in \mathbb{N}$,
$$E \max_{k \le n} |M_k|^p \le \Big( \frac{p}{p-1} \Big)^p E|M_n|^p,$$
provided $M$ is in $L^p$.

Proof. Define $M^* = \max_{k \le n} |M_k|$ and assume that $M$ is defined on the underlying probability space $(\Omega, \mathcal{F}, P)$. We have for any $m \in \mathbb{N}$
$$E(M^* \wedge m)^p = \int_\Omega (M^*(\omega) \wedge m)^p \, dP(\omega) = \int_\Omega \int_0^{M^*(\omega) \wedge m} p x^{p-1} \, dx \, dP(\omega) = \int_\Omega \int_0^m p x^{p-1} 1_{\{M^* \ge x\}}(\omega) \, dx \, dP(\omega) = \int_0^m p x^{p-1} P\{M^* \ge x\} \, dx, \qquad (2.2.6)$$
where we have used Fubini's theorem in the last equality (non-negative integrand!). By the conditional Jensen inequality, $|M|$ is a submartingale, and so we can apply Doob's submartingale inequality to estimate $P\{M^* \ge x\}$:
$$P\{M^* \ge x\} \le \frac{E(|M_n| 1_{\{M^* \ge x\}})}{x}.$$
Inserting this in (2.2.6) gives
$$E(M^* \wedge m)^p \le \int_0^m p x^{p-2} E(|M_n| 1_{\{M^* \ge x\}}) \, dx = p \int_\Omega |M_n(\omega)| \int_0^{M^*(\omega) \wedge m} x^{p-2} \, dx \, dP(\omega) = \frac{p}{p-1} E\big( |M_n| (M^* \wedge m)^{p-1} \big).$$
By Hölder's inequality it follows, with $p^{-1} + q^{-1} = 1$, that
$$E(M^* \wedge m)^p \le \frac{p}{p-1} (E|M_n|^p)^{1/p} \big( E(M^* \wedge m)^{(p-1)q} \big)^{1/q}.$$
Since $p > 1$ we have $q = p/(p-1)$, so that
$$E(M^* \wedge m)^p \le \frac{p}{p-1} (E|M_n|^p)^{1/p} \big( E(M^* \wedge m)^p \big)^{(p-1)/p}.$$
Now take $p$th powers on both sides and cancel common factors. Then
$$E(M^* \wedge m)^p \le \Big( \frac{p}{p-1} \Big)^p E|M_n|^p.$$
The proof is completed by letting $m$ tend to infinity.
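For $p = 2$ and a simple random walk, Doob's $L^2$ inequality says $E(\max_{k \le n} |M_k|)^2 \le 4\,EM_n^2 = 4n$. A minimal Monte Carlo probe (again my own illustration with arbitrary parameters):

```python
import numpy as np

rng = np.random.default_rng(2)

# Doob's L^2 inequality for a simple random walk:
# E (max_{k<=n} |M_k|)^2 <= 4 E M_n^2.
n, n_paths = 400, 20_000
M = rng.choice([-1.0, 1.0], size=(n_paths, n)).cumsum(axis=1)
lhs = (np.abs(M).max(axis=1) ** 2).mean()
rhs = 4 * (M[:, -1] ** 2).mean()     # = 4n in expectation
print(lhs, "<=", rhs)                # lhs lies between n and 4n
```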

2.2.3 Doob decomposition

An adapted, integrable process $X$ can always be written as the sum of a martingale and a predictable process. This is called the Doob decomposition of the process $X$.

Theorem 2.2.8 Let $X$ be an adapted, integrable process. There exist a martingale $M$ and a predictable process $A$ such that $A_0 = M_0 = 0$ and $X = X_0 + M + A$. The processes $M$ and $A$ are a.s. unique. The process $X$ is a submartingale if and only if $A$ is a.s. increasing (i.e. $P\{A_n \le A_{n+1}\} = 1$ for all $n$).

Proof. Suppose first that there exist a martingale $M$ and a predictable process $A$ such that $A_0 = M_0 = 0$ and $X = X_0 + M + A$. The martingale property of $M$ and the predictability of $A$ show that a.s.
$$E(X_n - X_{n-1} \mid \mathcal{F}_{n-1}) = A_n - A_{n-1}. \qquad (2.2.7)$$
Since $A_0 = 0$ it follows that
$$A_n = \sum_{k=1}^n E(X_k - X_{k-1} \mid \mathcal{F}_{k-1}) \qquad (2.2.8)$$
for $n \ge 1$, and hence $M_n = X_n - A_n - X_0$. This shows that $M$ and $A$ are a.s. unique. Conversely, given a process $X$, (2.2.8) defines a predictable process $A$, and it is easily seen that the process $M$ defined by $M = X - A - X_0$ is a martingale. This proves the existence of the decomposition. Equation (2.2.7) shows that $X$ is a submartingale if and only if $A$ is increasing.

An important application of the Doob decomposition is the following.

Corollary Let $M$ be a martingale with $EM_n^2 < \infty$ for all $n$. Then there exists an a.s. unique predictable, increasing process $A$ with $A_0 = 0$ such that $M^2 - A$ is a martingale. Moreover, the random variable $A_n - A_{n-1}$ is a version of the conditional variance of $M_n$ given $\mathcal{F}_{n-1}$, i.e.
$$A_n - A_{n-1} = E\big( (M_n - E(M_n \mid \mathcal{F}_{n-1}))^2 \mid \mathcal{F}_{n-1} \big) = E\big( (M_n - M_{n-1})^2 \mid \mathcal{F}_{n-1} \big) \quad \text{a.s.}$$
It follows that a Pythagoras theorem holds for square-integrable martingales:
$$EM_n^2 = EM_0^2 + \sum_{k=1}^n E(M_k - M_{k-1})^2.$$
The process $A$ is called the predictable quadratic variation process of $M$ and is often denoted by $\langle M \rangle$.

Proof. By conditional Jensen, it follows that $M^2$ is a submartingale, hence Theorem 2.2.8 applies. The only thing left to prove is the statement about the conditional variance. Since $M$ is a martingale, we have a.s.
$$E((M_n - M_{n-1})^2 \mid \mathcal{F}_{n-1}) = E(M_n^2 - 2M_n M_{n-1} + M_{n-1}^2 \mid \mathcal{F}_{n-1}) = E(M_n^2 \mid \mathcal{F}_{n-1}) - 2M_{n-1} E(M_n \mid \mathcal{F}_{n-1}) + M_{n-1}^2 = E(M_n^2 \mid \mathcal{F}_{n-1}) - M_{n-1}^2 = E(M_n^2 - M_{n-1}^2 \mid \mathcal{F}_{n-1}) = A_n - A_{n-1}.$$
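The construction (2.2.8) is completely explicit whenever the one-step conditional expectations can be evaluated. A small sketch (my illustration, assuming the simple-random-walk setting): for $X_n = S_n^2$ with $S$ a simple random walk, $E(X_n - X_{n-1} \mid \mathcal{F}_{n-1}) = 1$, so (2.2.8) gives $A_n = n$ and $M_n = S_n^2 - n$, in line with Example 2.1.2 and the corollary above ($\langle S \rangle_n = n$).

```python
import numpy as np

rng = np.random.default_rng(3)

# Doob decomposition of X_n = S_n^2 for a simple random walk S:
# E(X_n - X_{n-1} | F_{n-1}) = ((S+1)^2 + (S-1)^2)/2 - S^2 = 1,
# so A_n = n and M_n = S_n^2 - n is the martingale part.
n = 200
S = np.concatenate([[0], rng.choice([-1, 1], size=n).cumsum()])
X = S.astype(float) ** 2

# Predictable increments from the exact one-step conditional mean
incr = ((S[:-1] + 1) ** 2 + (S[:-1] - 1) ** 2) / 2 - S[:-1] ** 2
A = np.concatenate([[0.0], incr.cumsum()])     # here A_n = n exactly
M = X - X[0] - A

print(np.allclose(A, np.arange(n + 1)))        # True
print(M[-1], S[-1] ** 2 - n)                   # equal: M_n = S_n^2 - n
```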

Using the Doob decomposition in combination with the submartingale inequality yields the following result.

Theorem 2.2.9 Let $X$ be a sub- or supermartingale. For all $\lambda > 0$ and $n \in \mathbb{Z}_+$,
$$\lambda P\{\max_{k \le n} |X_k| \ge 3\lambda\} \le 4E|X_0| + 3E|X_n|.$$

Proof. Suppose that $X$ is a submartingale. By the Doob decomposition theorem there exist a martingale $M$ and an increasing, predictable process $A$ such that $M_0 = A_0 = 0$ and $X = X_0 + M + A$. By the triangle inequality and the fact that $A$ is increasing,
$$P\{\max_{k \le n} |X_k| \ge 3\lambda\} \le P\{|X_0| \ge \lambda\} + P\{\max_{k \le n} |M_k| \ge \lambda\} + P\{A_n \ge \lambda\}.$$
Hence, by Markov's inequality and the submartingale inequality ($|M|$ is a submartingale!),
$$\lambda P\{\max_{k \le n} |X_k| \ge 3\lambda\} \le E|X_0| + E|M_n| + EA_n.$$
Since $M_n = X_n - X_0 - A_n$, the right-hand side is bounded by $2E|X_0| + E|X_n| + 2EA_n$. We know that $A_n$ is given by (2.2.8). Taking expectations in the latter expression shows that $EA_n = EX_n - EX_0 \le E|X_n| + E|X_0|$. This completes the proof.

2.2.4 Convergence theorems

Let $M$ be a supermartingale and consider a compact interval $[a, b] \subset \mathbb{R}$. The number of upcrossings of $[a, b]$ that the process makes up to time $n$ is the number of times that the process passes from a level below $a$ to a level above $b$. The precise definition is as follows.

Definition 2.2.10 The number $U_n[a, b]$ is the largest value $k \in \mathbb{Z}_+$ such that there exist $0 \le s_1 < t_1 < s_2 < \cdots < s_k < t_k \le n$ with $M_{s_i} < a$ and $M_{t_i} > b$, $i = 1, \ldots, k$.

First we define the limit σ-algebra
$$\mathcal{F}_\infty = \sigma\Big( \bigcup_n \mathcal{F}_n \Big).$$

Lemma 2.2.11 (Doob's upcrossing lemma) Let $M$ be a supermartingale. Then for all $a < b$, the number of upcrossings $U_n[a, b]$ of the interval $[a, b]$ by $M$ up to time $n$ is an $\mathcal{F}_n$-measurable random variable and satisfies
$$(b - a) E U_n[a, b] \le E(M_n - a)^-.$$
The total number of upcrossings $U_\infty[a, b]$ is $\mathcal{F}_\infty$-measurable.

Proof. Check yourself that $U_n[a, b]$ is $\mathcal{F}_n$-measurable and that $U_\infty[a, b]$ is $\mathcal{F}_\infty$-measurable. Consider the bounded, predictable process $X$ given by $X_0 = 1_{\{M_0 < a\}}$ and
$$X_n = 1_{\{X_{n-1} = 1\}} 1_{\{M_{n-1} \le b\}} + 1_{\{X_{n-1} = 0\}} 1_{\{M_{n-1} < a\}}, \quad n \ge 1.$$

Define $Y = X \cdot M$. The process $X$ equals $0$ until $M$ drops below the level $a$, then stays $1$ until $M$ gets above $b$, etc. So every completed upcrossing of $[a, b]$ increases the value of $Y$ by at least $b - a$. If the last upcrossing has not yet been completed at time $n$, then this may reduce $Y$ by at most $(M_n - a)^-$. Hence
$$Y_n \ge (b - a) U_n[a, b] - (M_n - a)^-. \qquad (2.2.9)$$
By Lemma 2.2.3 the process $Y = X \cdot M$ is a supermartingale. In particular, $EY_n \le EY_0 = 0$. The proof is completed by taking expectations on both sides of (2.2.9).

Observe that the upcrossing lemma implies for a supermartingale $M$ that is bounded in $L^1$ (i.e. $\sup_n E|M_n| < \infty$) that $EU_\infty[a, b] < \infty$ for all $a < b$. In particular, the total number $U_\infty[a, b]$ of upcrossings of the interval $[a, b]$ is almost surely finite. The proof of the classical martingale convergence theorem is now straightforward.

Theorem 2.2.12 (Doob's martingale convergence theorem) If $M$ is a supermartingale that is bounded in $L^1$, then $M_n$ converges a.s. to a finite $\mathcal{F}_\infty$-measurable limit $M_\infty$ as $n \to \infty$, with $E|M_\infty| < \infty$.

Proof. Assume that $M$ is defined on the underlying probability space $(\Omega, \mathcal{F}, P)$. Suppose that $M(\omega)$ does not converge to a limit in $[-\infty, \infty]$. Then there exist two rationals $a < b$ such that
$$\liminf M_n(\omega) < a < b < \limsup M_n(\omega).$$
In particular, we must have $U_\infty[a, b](\omega) = \infty$. By Doob's upcrossing lemma, $P\{U_\infty[a, b] = \infty\} = 0$. Now note that
$$A := \{\omega : M(\omega) \text{ does not converge to a limit in } [-\infty, \infty]\} \subseteq \bigcup_{a, b \in \mathbb{Q}:\, a < b} \{\omega : U_\infty[a, b](\omega) = \infty\}.$$
Hence $P\{A\} \le \sum_{a, b \in \mathbb{Q}:\, a < b} P\{U_\infty[a, b] = \infty\} = 0$. This implies that $M_n$ a.s. converges to a limit $M_\infty$ in $[-\infty, \infty]$. Moreover, in view of Fatou's lemma,
$$E|M_\infty| = E \liminf |M_n| \le \liminf E|M_n| \le \sup_n E|M_n| < \infty.$$
It follows that $M_\infty$ is a.s. finite and integrable. Note that $M_n$ is $\mathcal{F}_n$-measurable, hence $\mathcal{F}_\infty$-measurable. Since $M_\infty = \lim_n M_n$ is the limit of $\mathcal{F}_\infty$-measurable maps, it is $\mathcal{F}_\infty$-measurable as well (see MTP lecture notes).

If the supermartingale $M$ is not only bounded in $L^1$ but also uniformly integrable, then in addition to a.s. convergence we have convergence in $L^1$. Moreover, in that case the extended sequence $M_1, M_2, \ldots, M_\infty$ is again a supermartingale.

Theorem 2.2.13 Let $M$ be a supermartingale that is bounded in $L^1$. Then $M_n \to M_\infty$ in $L^1$ as $n \to \infty$ if and only if $\{M_n : n \in \mathbb{Z}_+\}$ is uniformly integrable, where $M_\infty$ is the integrable and $\mathcal{F}_\infty$-measurable a.s. limit. In that case
$$E(M_\infty \mid \mathcal{F}_n) \le M_n \quad \text{a.s.} \qquad (2.2.10)$$
If in addition $M$ is a martingale, then there is equality in (2.2.10); in other words, $M$ is a Doob martingale.

Proof. By virtue of Theorem 2.2.12, $M_n \to M_\infty$ a.s. for a finite random variable $M_\infty$. BN Theorem 8.5 then implies the first statement. To prove the second statement, suppose that $M_n \to M_\infty$ in $L^1$. Since $M$ is a supermartingale, we have
$$E 1_A M_m \le E 1_A M_n, \quad A \in \mathcal{F}_n, \ m \ge n. \qquad (2.2.11)$$
Since
$$|E 1_A M_m - E 1_A M_\infty| \le E|1_A(M_m - M_\infty)| \le E|M_m - M_\infty|,$$
it follows directly that $E 1_A M_m \to E 1_A M_\infty$. Taking the limit $m \to \infty$ in (2.2.11) yields
$$E 1_A M_\infty \le E 1_A M_n, \quad A \in \mathcal{F}_n.$$
This implies (see Exercise 2.7(a)) that $E(M_\infty \mid \mathcal{F}_n) \le M_n$ a.s.

Hence uniformly integrable martingales (which are automatically bounded in $L^1$) are Doob martingales. On the other hand, let $X$ be an $\mathcal{F}$-measurable, integrable random variable and let $(\mathcal{F}_n)_n$ be a filtration. Then (Example 2.1.3) $E(X \mid \mathcal{F}_n)$ is a uniformly integrable Doob martingale; by uniform integrability, it is bounded in $L^1$. For Doob martingales, we can identify the limit explicitly in terms of the limit σ-algebra $\mathcal{F}_\infty$.

Theorem 2.2.14 (Lévy's upward theorem) Let $X$ be an integrable random variable, defined on a probability space $(\Omega, \mathcal{F}, P)$, and let $(\mathcal{F}_n)_n$ be a filtration, $\mathcal{F}_n \subseteq \mathcal{F}$ for all $n$. Then, as $n \to \infty$,
$$E(X \mid \mathcal{F}_n) \to E(X \mid \mathcal{F}_\infty), \quad \text{a.s. and in } L^1.$$

Proof. The process $M_n = E(X \mid \mathcal{F}_n)$ is uniformly integrable (see Example 2.1.3), hence bounded in $L^1$ (explain!). By Theorem 2.2.13, $M_n \to M_\infty$ a.s. and in $L^1$ as $n \to \infty$, with $M_\infty$ integrable and $\mathcal{F}_\infty$-measurable. It remains to show that $M_\infty = E(X \mid \mathcal{F}_\infty)$ a.s. Note that
$$E 1_A M_\infty = E 1_A M_n = E 1_A X, \quad A \in \mathcal{F}_n, \qquad (2.2.12)$$
where we have used Theorem 2.2.13 for the first equality and the definition of $M_n$ for the second. First assume that $X \ge 0$; then $M_n = E(X \mid \mathcal{F}_n) \ge 0$ a.s. (see BN Lemma 7.2(iv)), hence $M_\infty \ge 0$ a.s. As in the construction of the conditional expectation, we will associate measures with $X$ and $M_\infty$ and show that they agree on a π-system generating $\mathcal{F}_\infty$. Define measures $Q_1$ and $Q_2$ on $(\Omega, \mathcal{F}_\infty)$ by
$$Q_1(A) = E 1_A X, \qquad Q_2(A) = E 1_A M_\infty.$$
(Check that these are indeed measures.) By virtue of (2.2.12), $Q_1$ and $Q_2$ agree on the π-system (algebra) $\bigcup_n \mathcal{F}_n$. Moreover, $Q_1(\Omega) = Q_2(\Omega)\ (= EX)$, since $\Omega \in \mathcal{F}_n$. By virtue of BN Lemma 1.1, $Q_1$ and $Q_2$ agree on $\sigma(\bigcup_n \mathcal{F}_n)$. This implies, by the definition of conditional expectation, that $M_\infty = E(X \mid \mathcal{F}_\infty)$ a.s. Finally we consider the case of a general $\mathcal{F}$-measurable $X$. Then $X = X^+ - X^-$ is the difference of two non-negative $\mathcal{F}$-measurable functions $X^+$ and $X^-$; use the linearity of conditional expectation.
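Lévy's upward theorem can be made concrete with a small sketch (my own illustration; the uniform variable and dyadic filtration are assumptions for the example): take $X = U$ uniform on $[0,1]$ and let $\mathcal{F}_n$ be generated by the first $n$ binary digits of $U$. Then $E(X \mid \mathcal{F}_n)$ is the midpoint of the dyadic interval of length $2^{-n}$ containing $U$, and it converges to $U$, which is $\mathcal{F}_\infty$-measurable.

```python
import numpy as np

rng = np.random.default_rng(4)

# Levy's upward theorem: E(U | first n binary digits of U) is the midpoint
# of the dyadic interval containing U, and converges to U as n -> infinity.
U = rng.random(5)
for n in [1, 2, 5, 10, 20]:
    M_n = (np.floor(2 ** n * U) + 0.5) / 2 ** n    # E(U | F_n)
    print(n, np.abs(M_n - U).max())                # error <= 2^-(n+1) -> 0
```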

The message here is that one cannot know more than what one can observe. We will also need the corresponding result for decreasing families of σ-algebras. If we have a filtration of the form $(\mathcal{F}_{-n})_{n \in \mathbb{Z}_+}$, i.e. a collection of σ-algebras such that $\mathcal{F}_{-(n+1)} \subseteq \mathcal{F}_{-n}$, then we define
$$\mathcal{F}_{-\infty} = \bigcap_n \mathcal{F}_{-n}.$$

Theorem 2.2.15 (Lévy–Doob downward theorem) Let $(\mathcal{F}_{-n} : n \in \mathbb{Z}_+)$ be a collection of σ-algebras such that $\mathcal{F}_{-(n+1)} \subseteq \mathcal{F}_{-n}$ for every $n$, and let $M = (\ldots, M_{-2}, M_{-1})$ be a supermartingale, i.e. $E(M_{-m} \mid \mathcal{F}_{-n}) \le M_{-n}$ a.s. for all $n \ge m \ge 1$. If $\sup_n EM_{-n} < \infty$, then the process $M$ is uniformly integrable and the limit
$$M_{-\infty} = \lim_{n \to \infty} M_{-n}$$
exists a.s. and in $L^1$. Moreover,
$$E(M_{-n} \mid \mathcal{F}_{-\infty}) \le M_{-\infty} \quad \text{a.s.} \qquad (2.2.13)$$
If $M$ is a martingale, we have equality in (2.2.13) and in particular $M_{-\infty} = E(M_{-1} \mid \mathcal{F}_{-\infty})$.

Proof. For every $n \in \mathbb{Z}_+$ the upcrossing inequality applied to the supermartingale $(M_{-n}, M_{-(n-1)}, \ldots, M_{-1})$ yields $(b - a) E U_n[a, b] \le E(M_{-1} - a)^-$ for every $a < b$. By a similar reasoning as in the proof of Theorem 2.2.12, we see that the limit $M_{-\infty} = \lim_{n \to \infty} M_{-n}$ exists and is finite almost surely.

Next, we would like to show uniform integrability. For all $K > 0$ and $n \in \mathbb{Z}_+$ we have
$$\int_{\{M_{-n} > K\}} M_{-n} \, dP = EM_{-n} - \int_{\{|M_{-n}| \le K\}} M_{-n} \, dP - \int_{\{M_{-n} < -K\}} M_{-n} \, dP.$$
The sequence $EM_{-n}$ is non-decreasing in $n$ and bounded, hence the limit $\lim_n EM_{-n}$ exists (as a finite number). For arbitrary $\varepsilon > 0$, there exists $m \in \mathbb{Z}_+$ such that $EM_{-n} \le EM_{-m} + \varepsilon$ for all $n \ge m$. Together with the supermartingale property this implies for all $n \ge m$
$$\int_{\{M_{-n} > K\}} M_{-n} \, dP \le EM_{-m} - \int_{\{|M_{-n}| \le K\}} M_{-m} \, dP - \int_{\{M_{-n} < -K\}} M_{-m} \, dP + \varepsilon.$$
Hence, to prove uniform integrability, in view of BN Lemma 8.1 it is sufficient to show that we can make $P\{|M_{-n}| > K\}$ arbitrarily small for all $n$ simultaneously. By Chebyshev's inequality, it suffices to show that $\sup_n E|M_{-n}| < \infty$. To this end, consider the process $M^- = \max\{-M, 0\}$. With $g : \mathbb{R} \to \mathbb{R}$ given by $g(x) = \max\{x, 0\}$, one has $M^- = g(-M)$. The function $g$ is a non-decreasing, convex function.

Since $-M$ is a submartingale, it follows that $M^-$ is a submartingale (see Example 2.1.4). In particular, $EM^-_{-n} \le EM^-_{-1}$ for all $n \in \mathbb{Z}_+$. It follows that
$$E|M_{-n}| = EM_{-n} + 2EM^-_{-n} \le \sup_k EM_{-k} + 2EM^-_{-1}.$$
Consequently,
$$P\{|M_{-n}| > K\} \le \frac{1}{K} \Big( \sup_k EM_{-k} + 2EM^-_{-1} \Big).$$
Indeed, $M$ is uniformly integrable, and the limit $M_{-\infty}$ therefore exists in $L^1$ as well. Suppose that $M$ is a martingale. Then $M_{-n} = E(M_{-1} \mid \mathcal{F}_{-n})$ a.s. The rest follows in a similar manner as in the proof of the Lévy upward theorem.

Note that the downward theorem includes the downward version of Theorem 2.2.14 as a special case. Indeed, if $X$ is an integrable random variable and $\mathcal{F}_{-1} \supseteq \mathcal{F}_{-2} \supseteq \cdots \supseteq \bigcap_n \mathcal{F}_{-n} = \mathcal{F}_{-\infty}$ is a decreasing sequence of σ-algebras, then
$$E(X \mid \mathcal{F}_{-n}) \to E(X \mid \mathcal{F}_{-\infty}), \quad n \to \infty, \ \text{a.s. and in } L^1.$$
This is generalised in the following corollary to Theorems 2.2.14 and 2.2.15, which will be useful in the sequel.

Corollary 2.2.16 Suppose that $X_n \to X$ a.s. and that $|X_n| \le Y$ a.s. for all $n$, where $Y$ is an integrable random variable. Moreover, suppose that $\mathcal{F}_1 \subseteq \mathcal{F}_2 \subseteq \cdots$ (resp. $\mathcal{F}_1 \supseteq \mathcal{F}_2 \supseteq \cdots$) is an increasing (resp. decreasing) sequence of σ-algebras. Then
$$E(X_n \mid \mathcal{F}_n) \to E(X \mid \mathcal{F}_\infty) \quad \text{a.s.},$$
where $\mathcal{F}_\infty = \sigma(\bigcup_n \mathcal{F}_n)$ (resp. $\mathcal{F}_\infty = \bigcap_n \mathcal{F}_n$). In the case of an increasing sequence of σ-algebras, the corollary is known as Hunt's lemma.

Proof. For $m \in \mathbb{Z}_+$, put $U_m = \inf_{n \ge m} X_n$ and $V_m = \sup_{n \ge m} X_n$. Since $X_m \to X$ a.s., necessarily $V_m - U_m \to 0$ a.s. as $m \to \infty$. Furthermore, $|V_m - U_m| \le 2Y$. Dominated convergence then implies that $E(V_m - U_m) \to 0$ as $m \to \infty$. Fix $\varepsilon > 0$ and choose $m$ so large that $E(V_m - U_m) < \varepsilon$. For $n \ge m$ we have
$$U_m \le X_n \le V_m \quad \text{a.s.} \qquad (2.2.14)$$
Consequently,
$$E(U_m \mid \mathcal{F}_n) \le E(X_n \mid \mathcal{F}_n) \le E(V_m \mid \mathcal{F}_n) \quad \text{a.s.}$$
The processes on the left and right are martingales that satisfy the conditions of the upward (resp. downward) theorem. Letting $n$ tend to $\infty$, we obtain
$$E(U_m \mid \mathcal{F}_\infty) \le \liminf E(X_n \mid \mathcal{F}_n) \le \limsup E(X_n \mid \mathcal{F}_n) \le E(V_m \mid \mathcal{F}_\infty) \quad \text{a.s.} \qquad (2.2.15)$$
It follows that
$$0 \le E\big( \limsup E(X_n \mid \mathcal{F}_n) - \liminf E(X_n \mid \mathcal{F}_n) \big) \le E\big( E(V_m \mid \mathcal{F}_\infty) - E(U_m \mid \mathcal{F}_\infty) \big) = E(V_m - U_m) < \varepsilon.$$
Letting $\varepsilon \downarrow 0$ yields that $\limsup E(X_n \mid \mathcal{F}_n) = \liminf E(X_n \mid \mathcal{F}_n)$ a.s., and so $E(X_n \mid \mathcal{F}_n)$ converges a.s. We wish to identify the limit. Let $n \to \infty$ in (2.2.14). Then $U_m \le X \le V_m$ a.s. Hence
$$E(U_m \mid \mathcal{F}_\infty) \le E(X \mid \mathcal{F}_\infty) \le E(V_m \mid \mathcal{F}_\infty) \quad \text{a.s.} \qquad (2.2.16)$$

Equations (2.2.15) and (2.2.16) imply that both $\lim E(X_n \mid \mathcal{F}_n)$ and $E(X \mid \mathcal{F}_\infty)$ are a.s. between $E(U_m \mid \mathcal{F}_\infty)$ and $E(V_m \mid \mathcal{F}_\infty)$. Consequently,
$$E\big| \lim E(X_n \mid \mathcal{F}_n) - E(X \mid \mathcal{F}_\infty) \big| \le E\big( E(V_m \mid \mathcal{F}_\infty) - E(U_m \mid \mathcal{F}_\infty) \big) = E(V_m - U_m) < \varepsilon.$$
By letting $\varepsilon \downarrow 0$ we obtain that $\lim_n E(X_n \mid \mathcal{F}_n) = E(X \mid \mathcal{F}_\infty)$ a.s.

2.2.5 Optional sampling theorems

Theorem 2.2.5 implies for a martingale $M$ and two bounded stopping times $\sigma \le \tau$ that $E(M_\tau \mid \mathcal{F}_\sigma) = M_\sigma$. The following theorem extends this result.

Theorem 2.2.17 (Optional sampling theorem) Let $M$ be a uniformly integrable (super)martingale. Then the family of random variables $\{M_\tau : \tau \text{ a finite stopping time}\}$ is uniformly integrable and for all stopping times $\sigma \le \tau$ we have
$$E(M_\tau \mid \mathcal{F}_\sigma) = (\le)\, M_\sigma \quad \text{a.s.}$$

Proof. We will only prove the martingale statement; for the proof in the case of a supermartingale see Exercise 2.14. By Theorem 2.2.13, $M_\infty = \lim_{n \to \infty} M_n$ exists a.s. and in $L^1$, and $E(M_\infty \mid \mathcal{F}_n) = M_n$ a.s. Now let $\tau$ be an arbitrary stopping time and $n \in \mathbb{Z}_+$. Since $\tau \wedge n \le n$, we have $\mathcal{F}_{\tau \wedge n} \subseteq \mathcal{F}_n$. By the tower property, it follows for every $n$ that
$$E(M_\infty \mid \mathcal{F}_{\tau \wedge n}) = E(E(M_\infty \mid \mathcal{F}_n) \mid \mathcal{F}_{\tau \wedge n}) = E(M_n \mid \mathcal{F}_{\tau \wedge n}) \quad \text{a.s.}$$
By Theorem 2.2.5 we a.s. have $E(M_n \mid \mathcal{F}_{\tau \wedge n}) = M_{\tau \wedge n}$. Now let $n$ tend to infinity. Then the right-hand side converges a.s. to $M_\tau$. By the upward convergence theorem, the left-hand side converges a.s. and in $L^1$ to $E(M_\infty \mid \mathcal{G})$, where
$$\mathcal{G} = \sigma\Big( \bigcup_n \mathcal{F}_{\tau \wedge n} \Big).$$
Therefore
$$E(M_\infty \mid \mathcal{G}) = M_\tau \quad \text{a.s.} \qquad (2.2.17)$$
We have to show that $\mathcal{G}$ can be replaced by $\mathcal{F}_\tau$. Take $A \in \mathcal{F}_\tau$. Then
$$E 1_A M_\infty = E 1_{A \cap \{\tau < \infty\}} M_\infty + E 1_{A \cap \{\tau = \infty\}} M_\infty.$$
By virtue of Exercise 2.6, relation (2.2.17) implies that $E 1_{A \cap \{\tau < \infty\}} M_\infty = E 1_{A \cap \{\tau < \infty\}} M_\tau$. Trivially, $E 1_{A \cap \{\tau = \infty\}} M_\infty = E 1_{A \cap \{\tau = \infty\}} M_\tau$.

Combination yields
$$E 1_A M_\infty = E 1_A M_\tau, \quad A \in \mathcal{F}_\tau.$$
We conclude that $E(M_\infty \mid \mathcal{F}_\tau) = M_\tau$ a.s. The first statement of the theorem follows from BN Lemma 8.4. The second statement follows from the tower property and the fact that $\mathcal{F}_\sigma \subseteq \mathcal{F}_\tau$.

For the equality $E(M_\tau \mid \mathcal{F}_\sigma) = M_\sigma$ a.s. in the preceding theorem to hold, it is necessary that $M$ is uniformly integrable. There exist (positive) martingales that are bounded in $L^1$ but not uniformly integrable, for which the equality fails in general (see Exercise 2.26)! For non-negative supermartingales without additional integrability properties we only have an inequality.

Theorem 2.2.18 Let $M$ be a non-negative supermartingale and let $\sigma \le \tau$ be stopping times. Then
$$E(M_\tau \mid \mathcal{F}_\sigma) \le M_\sigma \quad \text{a.s.}$$

Proof. First note that $M$ is bounded in $L^1$, and so it converges a.s. Fix $n \in \mathbb{Z}_+$. The stopped supermartingale $M^{\tau \wedge n}$ is a supermartingale again (cf. Theorem 2.2.4); check that it is uniformly integrable. Precisely as in the proof of the preceding theorem we find that
$$E(M_{\tau \wedge n} \mid \mathcal{F}_\sigma) \le M^{\tau \wedge n}_\sigma = M_{\sigma \wedge n} \quad \text{a.s.}$$
Since the limit $M_\infty$ exists, we have $M_\tau 1_{\{\tau = \infty\}} = M_\infty 1_{\{\tau = \infty\}}$. By conditional Fatou,
$$E(M_\tau \mid \mathcal{F}_\sigma) = E(\liminf_n M_{\tau \wedge n} \mid \mathcal{F}_\sigma) \le \liminf_n E(M_{\tau \wedge n} \mid \mathcal{F}_\sigma) \le \liminf_n M_{\sigma \wedge n} = M_\sigma \quad \text{a.s.}$$
This proves the result.

2.2.6 Law of large numbers

We have already pointed out that martingales are a generalisation of sums of i.i.d. random variables with zero expectation. For such sums we have the Law of Large Numbers, the Central Limit Theorem and the Law of the Iterated Logarithm. The question is then: do these laws also apply to martingales, and if so, what sort of conditions do we need to impose? Here we discuss one of the simplest versions. To this end we need some preparation.

Theorem 2.2.19 Let $S = (S_n = \sum_{k=1}^n X_k)_{n = 1, 2, \ldots}$ be a martingale with respect to the filtration $(\mathcal{F}_n)_{n = 1, 2, \ldots}$. Assume that $ES_n = 0$ and that $ES_n^2 < \infty$ for all $n$. Then $S_n$ converges a.s. on the set $\{\sum_{k=1}^\infty E(X_k^2 \mid \mathcal{F}_{k-1}) < \infty\}$.

Proof. Let $\mathcal{F}_0$ be the trivial σ-algebra. Fix $K > 0$ and let $\tau = \min\{n : \sum_{k=1}^{n+1} E(X_k^2 \mid \mathcal{F}_{k-1}) > K\}$ if such an $n$ exists; otherwise let $\tau = \infty$. Clearly $\tau$ is a stopping time (check yourself), and $S^\tau$ is a martingale.

Note that $S^\tau_n = S_{\tau \wedge n} = \sum_{k=1}^n 1_{\{\tau \ge k\}} X_k$. Using the martingale property and the fact that $\{\tau \ge k\} \in \mathcal{F}_{k-1}$, we obtain
$$E(S^\tau_n)^2 = E \sum_{k=1}^n 1_{\{\tau \ge k\}} X_k^2 = \sum_{k=1}^n E\, E(1_{\{\tau \ge k\}} X_k^2 \mid \mathcal{F}_{k-1}) = \sum_{k=1}^n E 1_{\{\tau \ge k\}} E(X_k^2 \mid \mathcal{F}_{k-1}) = E \sum_{k=1}^{\tau \wedge n} E(X_k^2 \mid \mathcal{F}_{k-1}) \le K.$$
By Jensen's inequality, $(E|S^\tau_n|)^2 \le E(S^\tau_n)^2 \le K$, so $\sup_n E|S^\tau_n| \le \sqrt{K}$. As a consequence, $S^\tau$ is a martingale that is bounded in $L^1$. By the martingale convergence theorem, it converges a.s. to an integrable limit $S_\infty$, say. Thus $S_n$ converges a.s. on the event $\{\tau = \infty\}$, in other words, on the event $\{\sum_{k=1}^\infty E(X_k^2 \mid \mathcal{F}_{k-1}) \le K\}$. Let $K \to \infty$.

Without proof we now recall a simple but effective lemma.

Lemma 2.2.20 (Kronecker's lemma) Let $\{x_n\}_{n \ge 1}$ be a sequence of real numbers such that $\sum_n x_n$ converges. Let $\{b_n\}_n$ be a non-decreasing sequence of positive constants with $b_n \to \infty$ as $n \to \infty$. Then $b_n^{-1} \sum_{k=1}^n b_k x_k \to 0$ as $n \to \infty$.

This allows us to formulate the following version of a martingale Law of Large Numbers.

Theorem 2.2.21 Let $(S_n = \sum_{k=1}^n X_k)_{n \ge 1}$ be a martingale with respect to the filtration $(\mathcal{F}_n)_{n \ge 1}$. Then $\sum_{k=1}^n X_k / k$ converges a.s. on the set $\{\sum_{k=1}^\infty k^{-2} E(X_k^2 \mid \mathcal{F}_{k-1}) < \infty\}$. Hence $S_n / n \to 0$ a.s. on the set $\{\sum_{k=1}^\infty k^{-2} E(X_k^2 \mid \mathcal{F}_{k-1}) < \infty\}$.

Proof. Combine Theorem 2.2.19 and Lemma 2.2.20, applied to $x_k = X_k / k$ and $b_k = k$.

2.3 Continuous-time martingales

In this section we consider general martingales indexed by a subset $T$ of $\mathbb{R}_+$. If the martingale $M = (M_t)_{t \ge 0}$ has nice sample paths, for instance right-continuous ones, then $M$ can be approximated accurately by a discrete-time martingale: simply choose a countable dense subset $\{t_n\}$ of the index set $T$ and compare the continuous-time martingale $M$ with the discrete-time martingale $(M_{t_n})_n$. This simple idea allows us to transfer many of the discrete-time results to the continuous-time setting.

2.3.1 Upcrossings in continuous time

For a continuous-time process $X$ we define the number of upcrossings of the interval $[a, b]$ in the bounded set of time points $T \subseteq \mathbb{R}_+$ as follows. For a finite set $F = \{t_1, \ldots, t_n\} \subseteq T$ we define $U_F[a, b]$ as the number of upcrossings of $[a, b]$ of the discrete-time process $(X_{t_i})_{i = 1, \ldots, n}$ (see Definition 2.2.10). We put
$$U_T[a, b] = \sup\{U_F[a, b] : F \subseteq T, \ F \text{ finite}\}.$$
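The discrete upcrossing count of Definition 2.2.10, which defines each $U_F[a, b]$ above, is straightforward to compute pathwise. The following sketch (my own illustration, not from the notes; the drifted walk and all parameters are arbitrary choices) also estimates both sides of Lemma 2.2.11 by Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(5)

def upcrossings(path, a, b):
    # Count completed upcrossings of [a, b] as in Definition 2.2.10.
    count, seen_below = 0, path[0] < a
    for x in path[1:]:
        if seen_below and x > b:
            count, seen_below = count + 1, False
        elif not seen_below and x < a:
            seen_below = True
    return count

# Monte Carlo check of (b - a) E U_n[a,b] <= E (M_n - a)^-  for the
# supermartingale M = random walk with downward drift.
a, b, n, n_paths = -2.0, 2.0, 200, 5_000
steps = rng.choice([-1.0, 1.0], size=(n_paths, n), p=[0.525, 0.475])
M = np.hstack([np.zeros((n_paths, 1)), steps.cumsum(axis=1)])
lhs = (b - a) * np.mean([upcrossings(p, a, b) for p in M])
rhs = np.mean(np.maximum(a - M[:, -1], 0.0))   # E (M_n - a)^-
print(lhs, "<=", rhs)
```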

Doob's upcrossing lemma has the following extension.

Lemma 2.3.1 Let $M$ be a supermartingale and let $T \subseteq \mathbb{R}_+$ be a countable, bounded set. Then for all $a < b$, the number of upcrossings $U_T[a, b]$ of the interval $[a, b]$ by $M$ satisfies
$$(b - a) E U_T[a, b] \le \sup_{t \in T} E(M_t - a)^-.$$

Proof. Let $T_n$ be a nested sequence of finite sets such that $U_T[a, b] = \lim_n U_{T_n}[a, b]$. For every $n$, the discrete-time upcrossing inequality states that $(b - a) E U_{T_n}[a, b] \le E(M_{t_n} - a)^-$, where $t_n$ is the largest element of $T_n$. By the conditional version of Jensen's inequality, the process $(M - a)^-$ is a submartingale (see Example 2.1.4). In particular, the function $t \mapsto E(M_t - a)^-$ is increasing, so
$$E(M_{t_n} - a)^- = \sup_{t \in T_n} E(M_t - a)^-.$$
So for every $n$ we have the inequality
$$(b - a) E U_{T_n}[a, b] \le \sup_{t \in T_n} E(M_t - a)^-.$$
The proof is completed by letting $n$ tend to infinity. Hence $U_T[a, b]$ is a.s. finite! By Doob's upcrossing Lemma 2.2.11, $U_T[a, b]$ is $\mathcal{F}_t$-measurable if $t = \sup\{s : s \in T\}$.

2.3.2 Regularisation

We always consider the processes under consideration to be defined on an underlying filtered probability space $(\Omega, \mathcal{F}, (\mathcal{F}_t)_{t \in T}, P)$. Recall that $\mathcal{F}_\infty = \sigma(\mathcal{F}_t, t \in T)$. We will assume that $T = \mathbb{R}_+$. As shorthand notation, if we write $\lim_{q \downarrow t}$ ($\lim_{q \uparrow t}$), we mean the limit along non-increasing (non-decreasing) rational sequences converging to $t$; the same convention holds for $\limsup$ and $\liminf$.

Theorem 2.3.2 Let $M$ be a supermartingale. Then there exists a set $\Omega_s \in \mathcal{F}_s$ of probability 1 such that for all $\omega \in \Omega_s$ the limits
$$\lim_{q \uparrow t} M_q(\omega) \quad \text{and} \quad \lim_{q \downarrow t} M_q(\omega)$$
exist and are finite for every $t \in (0, s]$ and $t \in [0, s)$ respectively, for any $s$. (Is the set of discontinuities of each path $M(\omega) = (M_t(\omega))_{t \in \mathbb{R}_+}$, $\omega \in \Omega$, at most countable? This is not (yet) clear.)

Proof. We give the proof for $s = \infty$. Fix $n \in \mathbb{Z}_+$ and let $a < b$, $a, b \in \mathbb{Q}$. By virtue of Lemma 2.3.1 there exists a set $\Omega_{n,a,b} \in \mathcal{F}_n$ of probability 1 such that $U_{[0,n] \cap \mathbb{Q}}[a, b](\omega) < \infty$ for all $\omega \in \Omega_{n,a,b}$.

Put
$$\Omega_n = \bigcap_{a < b,\ a, b \in \mathbb{Q}} \Omega_{n,a,b}.$$
Then $\Omega_n \in \mathcal{F}_n$. Now let $t < n$ and suppose that $\lim_{q \downarrow t} M_q(\omega)$ does not exist for some $\omega \in \Omega_n$. Then there exist $a < b$, $a, b \in \mathbb{Q}$, such that
$$\liminf_{q \downarrow t} M_q(\omega) < a < b < \limsup_{q \downarrow t} M_q(\omega).$$
Hence $U_{[0,n] \cap \mathbb{Q}}[a, b](\omega) = \infty$, a contradiction. It follows that $\lim_{q \downarrow t} M_q(\omega)$ exists for all $\omega \in \Omega_n$ and all $t \in [0, n)$. A similar argument applies to the left limits: $\lim_{q \uparrow t} M_q(\omega)$ exists for all $\omega \in \Omega_n$ and all $t \in (0, n]$. It follows that on $\Omega' = \bigcap_n \Omega_n$ these limits exist in $[-\infty, \infty]$ for all $t > 0$ in the case of left limits and for all $t \ge 0$ in the case of right limits. Note that $\Omega' \in \mathcal{F}_\infty$ and $P\{\Omega'\} = 1$.

We still have to show that the limits are in fact finite. Fix $t \in T$ and $n > t$. Let $Q_n = [0, n] \cap \mathbb{Q}$ and let $Q_{m,n}$ be a nested sequence of sets of finitely many rational numbers increasing to $Q_n$, all containing $0$ and $n$. Then $(M_s)_{s \in Q_{m,n}}$ is a discrete-time supermartingale. By virtue of Theorem 2.2.9,
$$\lambda P\{\max_{s \in Q_{m,n}} |M_s| \ge 3\lambda\} \le 4E|M_0| + 3E|M_n|.$$
Letting $m \to \infty$ and then $\lambda \to \infty$, by virtue of the monotone convergence theorem for sets,
$$\sup_{s \in Q_n} |M_s| < \infty \quad \text{a.s.}$$
This implies that the limits are finite. Put $\Omega''_n = \{\omega : \sup_{s \in Q_n} |M_s(\omega)| < \infty\}$. By the above, $\Omega''_n \in \mathcal{F}_n$ and $P\{\Omega''_n\} = 1$. Hence $\Omega'' := \bigcap_n \Omega''_n$ is a set of probability 1 belonging to $\mathcal{F}_\infty$. Finally, set $\Omega^* = \Omega' \cap \Omega''$. This is a set of probability 1, belonging to $\mathcal{F}_\infty$.

Corollary 2.3.3 There exists an $\mathcal{F}_\infty$-measurable set $\Omega^*$ with $P\{\Omega^*\} = 1$, such that every sample path of a right-continuous supermartingale is cadlag on $\Omega^*$.

Proof. See Exercise 2.21.

Our aim is now to construct a modification of a supermartingale that is itself a supermartingale with a.s. cadlag sample paths, under suitable conditions. To this end, read LN 1.6, Definitions 1.6.2 (right-continuity of a filtration) and 1.6.3 (usual conditions). Given a supermartingale $M$, define for every $t \ge 0$
$$M_{t+}(\omega) = \lim_{q \downarrow t,\, q \in \mathbb{Q}} M_q(\omega) \ \text{if this limit exists, and } M_{t+}(\omega) = 0 \ \text{otherwise.}$$
The random variables $M_{t+}$ are well defined by Theorem 2.3.2. By inspection of the proof of that theorem, one can check that $M_{t+}$ is $\mathcal{F}_{t+}$-measurable. We have the following result concerning the process $(M_{t+})_{t \ge 0}$.

Lemma 2.3.4 Let $M$ be a supermartingale.
i) Then $E|M_{t+}| < \infty$ for every $t$ and
$$E(M_{t+} \mid \mathcal{F}_t) \le M_t \quad \text{a.s.}$$
If in addition $t \mapsto EM_t$ is right-continuous, then this inequality is an equality.
ii) The process $(M_{t+})_{t \ge 0}$ is a supermartingale with respect to the filtration $(\mathcal{F}_{t+})_{t \ge 0}$, and it is a martingale if $M$ is a martingale.

Proof. Fix $t \ge 0$ and let $q_n \downarrow t$ be a sequence of rational numbers decreasing to $t$. Then the process $(M_{q_n})_n$ is a backward discrete-time supermartingale as considered in Theorem 2.2.15, with $\sup_n EM_{q_n} \le EM_t < \infty$. By that theorem, $M_{t+}$ is integrable, and $M_{q_n} \to M_{t+}$ in $L^1$. As in the proof of Theorem 2.2.13, $L^1$-convergence allows us to take the limit $n \to \infty$ in the inequality $E(M_{q_n} \mid \mathcal{F}_t) \le M_t$ a.s., yielding
$$E(M_{t+} \mid \mathcal{F}_t) \le M_t \quad \text{a.s.}$$
$L^1$-convergence also implies that $EM_{q_n} \to EM_{t+}$. So if $s \mapsto EM_s$ is right-continuous, then $EM_{t+} = \lim_n EM_{q_n} = EM_t$. But this implies (cf. Exercise 2.7(b)) that $E(M_{t+} \mid \mathcal{F}_t) = M_t$ a.s.

To prove the final statement, let $s < t$ and let $q'_n \downarrow s$ be a sequence of rational numbers decreasing to $s$. Then a.s.
$$E(M_{t+} \mid \mathcal{F}_{q'_n}) = E(E(M_{t+} \mid \mathcal{F}_t) \mid \mathcal{F}_{q'_n}) \le E(M_t \mid \mathcal{F}_{q'_n}) \le M_{q'_n},$$
with equality if $M$ is a martingale. The right-hand side converges to $M_{s+}$ as $n \to \infty$. The process $E(M_{t+} \mid \mathcal{F}_{q'_n})$ is a backward martingale satisfying the conditions of Theorem 2.2.15. Hence $E(M_{t+} \mid \mathcal{F}_{q'_n})$ converges to $E(M_{t+} \mid \mathcal{F}_{s+})$ a.s.

We can now prove the main regularisation theorem for supermartingales with respect to filtrations satisfying the usual conditions. The idea is the following. In essence we want to ensure a priori that all sample paths are cadlag (that the cadlag paths form a measurable set w.r.t. each σ-algebra $\mathcal{F}_t$). In view of Corollary 2.3.3 this requires completing $\mathcal{F}_0$ with all $P$-null sets in $\mathcal{F}$. On the other hand, we want to make out of $(M_{t+})_t$ a cadlag modification of $M$; this is guaranteed if the filtration involved is right-continuous. Can one enlarge a given filtration to obtain a filtration satisfying the usual conditions? If so, can this procedure destroy properties of interest, like the supermartingale property or independence properties? For some details on this issue see BN 9.

Theorem 2.3.5 (Doob's regularity theorem) Let $M$ be a supermartingale with respect to the filtration $(\mathcal{F}_t)_{t \ge 0}$ satisfying the usual conditions. Then $M$ has a cadlag modification $\tilde{M}$ (such that $\{M_t \ne \tilde{M}_t\}$ is a null set contained in $\mathcal{F}_0$) if and only if $t \mapsto EM_t$ is right-continuous. In that case $\tilde{M}$ is a supermartingale with respect to $(\mathcal{F}_t)_{t \ge 0}$ as well. If $M$ is a martingale, then $\tilde{M}$ is a martingale.

Proof. By Theorem 2.3.2 and the fact that $(\mathcal{F}_t)$ satisfies the usual conditions, there exists an event $\Omega^* \in \mathcal{F}_0$ (!) of probability 1 on which the limits
$$M_{t-} = \lim_{q \uparrow t} M_q, \qquad M_{t+} = \lim_{q \downarrow t} M_q$$
exist for every $t$. Define the process $\tilde{M}$ by $\tilde{M}_t(\omega) = M_{t+}(\omega) 1_{\Omega^*}(\omega)$ for $t \ge 0$. Then $\tilde{M}_t = M_{t+}$ a.s., and they differ at most on a null set contained in $\mathcal{F}_0$. Since $M_{t+}$ is $\mathcal{F}_{t+}$-measurable, we have that $\tilde{M}_t$ is $\mathcal{F}_{t+}$-measurable. It follows that $\tilde{M}_t = E(M_{t+} \mid \mathcal{F}_{t+})$ a.s. By right-continuity of the filtration, right-continuity of the map $t \mapsto EM_t$ and the preceding lemma, we get that a.s.
$$\tilde{M}_t = E(M_{t+} \mid \mathcal{F}_{t+}) = E(M_{t+} \mid \mathcal{F}_t) = M_t.$$
In other words, $\tilde{M}$ is a modification of $M$. Right-continuity of the filtration implies further that $\tilde{M}$ is adapted to $(\mathcal{F}_t)$. The process $\tilde{M}$ is cadlag as well as a supermartingale (see Exercise 2.22). For the converse, see Exercise 2.23.

Corollary 2.3.6 A martingale with respect to a filtration that satisfies the usual conditions has a cadlag modification, which is a martingale w.r.t. the same filtration.

We will next give two examples showing what can go wrong without right-continuity of the filtration, or without continuity of the expectation as a function of time.

Example Let $\Omega = \{-1, 1\}$, $\mathcal{F}_t = \{\Omega, \emptyset\}$ for $t \le 1$ and $\mathcal{F}_t = \{\Omega, \emptyset, \{1\}, \{-1\}\}$ for $t > 1$. Let $P(\{1\}) = P(\{-1\}) = 1/2$. Note that $(\mathcal{F}_t)$ is not right-continuous, since $\mathcal{F}_1 \ne \mathcal{F}_{1+}$! Define $Y_t(\omega) = 0$ for $t \le 1$ and $Y_t(\omega) = \omega$ for $t > 1$, and $X_t(\omega) = 0$ for $t < 1$ and $X_t(\omega) = \omega$ for $t \ge 1$. Now $Y = (Y_t)_t$ is a martingale, but it is not right-continuous, whereas $X = (X_t)_t$ is a right-continuous process. One does have that $EY_t = 0$ is a right-continuous function of $t$. Moreover, $Y_{t+} = X_t$ and $P\{X_1 = Y_1\} = 0$. Hence $X$ is not a cadlag modification of $Y$, and in particular $Y$ cannot have a cadlag modification. By Lemma 2.3.4 it follows that $E(X_t \mid \mathcal{F}_t) = Y_t$, so that $X$ cannot be a martingale w.r.t. the filtration $(\mathcal{F}_t)_t$. By the same lemma, $X$ is a right-continuous martingale w.r.t. $(\mathcal{F}_{t+})_t$. On the other hand, $Y$ is not a martingale w.r.t. $(\mathcal{F}_{t+})_t$!

Example Let $\Omega = \{-1, 1\}$, $\mathcal{F}_t = \{\Omega, \emptyset\}$ for $t < 1$ and $\mathcal{F}_t = \{\Omega, \emptyset, \{1\}, \{-1\}\}$ for $t \ge 1$. Let $P(\{1\}) = P(\{-1\}) = 1/2$. Define $Y_t(\omega) = 0$ for $t \le 1$, while for $t > 1$ set $Y_t(1) = 1 - t$ and $Y_t(-1) = -1$; similarly, $X_t(\omega) = 0$ for $t < 1$, while for $t \ge 1$ set $X_t(1) = 1 - t$ and $X_t(-1) = -1$. In this case the filtration $(\mathcal{F}_t)_t$ is right-continuous, and $Y$ and $X$ are both supermartingales w.r.t. $(\mathcal{F}_t)_t$, while $t \mapsto EY_t$ is not right-continuous at $t = 1$. Furthermore, $X_t = \lim_{q \downarrow t} Y_q$ for all $t \ge 0$, but $P\{X_1 = Y_1\} < 1$ and hence $X$ is not a modification of $Y$.

2.3.3 Convergence theorems

In view of the results of the previous section, we will only consider everywhere right-continuous martingales from this point on. Under this assumption, many of the discrete-time theorems can be generalised to continuous time.

Theorem 2.3.7 Let $M$ be a right-continuous supermartingale that is bounded in $L^1$. Then $M_t$ converges a.s. to a finite $\mathcal{F}_\infty$-measurable limit $M_\infty$ as $t \to \infty$, with $E|M_\infty| < \infty$.

Proof. The first step is to show that we may restrict ourselves to limits along rational time sequences; in other words, that $M_t \to M_\infty$ a.s. as $t \to \infty$ if and only if
$$\lim_{q \to \infty,\, q \in \mathbb{Q}} M_q = M_\infty \quad \text{a.s.} \qquad (2.3.1)$$
To prove the non-trivial implication in this assertion, assume that (2.3.1) holds. Fix $\varepsilon > 0$ and $\omega \in \Omega$ for which $M_q(\omega) \to M_\infty(\omega)$. Then there exists a number $a = a_{\omega, \varepsilon} > 0$ such that $|M_q(\omega) - M_\infty(\omega)| < \varepsilon$ for all rational $q > a$. Now let $t > a$ be arbitrary. Since $M$ is right-continuous, there exists a rational $q' > t$ such that $|M_{q'}(\omega) - M_t(\omega)| < \varepsilon$. By the triangle inequality, it follows that
$$|M_t(\omega) - M_\infty(\omega)| \le |M_{q'}(\omega) - M_\infty(\omega)| + |M_t(\omega) - M_{q'}(\omega)| < 2\varepsilon.$$
This proves that $M_t(\omega) \to M_\infty(\omega)$ as $t \to \infty$.

To prove convergence to a finite, $\mathcal{F}_\infty$-measurable, integrable limit, we may thus assume that $M$ is indexed by the countable set $\mathbb{Q}_+$. The proof can now be finished by arguing as in the proof of Theorem 2.2.12, replacing Doob's discrete-time upcrossing inequality by Lemma 2.3.1.

Corollary 2.3.8 A non-negative, right-continuous supermartingale $M$ converges a.s., as $t \to \infty$, to a finite, integrable, $\mathcal{F}_\infty$-measurable random variable.

Proof. Simple consequence of Theorem 2.3.7.

The following continuous-time extension of Theorem 2.2.13 can be derived by reasoning as in discrete time. The only slight difference is that for a continuous-time process $X$, $L^1$-convergence of $X_t$ as $t \to \infty$ need not imply that $X$ is UI.

Theorem 2.3.9 Let $M$ be a right-continuous supermartingale that is bounded in $L^1$.
i) If $M$ is uniformly integrable, then $M_t \to M_\infty$ a.s. and in $L^1$, and
$$E(M_\infty \mid \mathcal{F}_t) \le M_t \quad \text{a.s.},$$
with equality if $M$ is a martingale.
ii) If $M$ is a martingale and $M_t \to M_\infty$ in $L^1$ as $t \to \infty$, then $M$ is uniformly integrable.

Proof. See Exercise 2.24.
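The uniform integrability assumption in Theorem 2.3.9 has real content: a positive martingale can be bounded in $L^1$ and converge a.s. without converging in $L^1$. The exponential martingale of Example 2.1.5 is the standard example, and the following sketch (my own illustration; the discretisation step and horizon are arbitrary) makes the failure of uniform integrability visible.

```python
import numpy as np

rng = np.random.default_rng(6)

# M_t = exp(W_t - t/2) is a positive martingale with E M_t = 1 for all t,
# yet M_t -> 0 a.s. (since W_t - t/2 -> -infinity).  So M is bounded in L^1
# but not UI: the a.s. limit M_infty = 0 is not the L^1 limit (E M_infty = 0).
T, n, n_paths = 30.0, 1_000, 5_000
dt = T / n
W = np.sqrt(dt) * rng.standard_normal((n_paths, n)).cumsum(axis=1)
t = dt * np.arange(1, n + 1)
M = np.exp(W - t / 2)

print(np.median(M[:, -1]))        # ~ exp(-T/2): essentially 0
print((M[:, -1] < 1e-3).mean())   # most paths are already near 0
print(M[:, -1].mean())            # theoretical value is 1, but the sample
                                  # mean is far smaller: rare huge paths
                                  # carry the mean, the signature of non-UI
```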

2.3.4 Inequalities

Doob's submartingale inequality and $L^p$-inequality are easily extended to the setting of general right-continuous (sub)martingales.

Theorem 2.3.10 (Doob's submartingale inequality) Let $M$ be a right-continuous submartingale. Then for all $\lambda > 0$ and $t \ge 0$,
$$P\{\sup_{s \le t} M_s \ge \lambda\} \le \frac{1}{\lambda} E|M_t|.$$

Proof. Let $T$ be a countable, dense subset of $[0, t]$ and choose an increasing sequence of finite subsets $T_n \subseteq T$ such that $t \in T_n$ for every $n$ and $T_n \uparrow T$ as $n \to \infty$. By right-continuity of $M$ we have
$$\sup_n \max_{s \in T_n} M_s = \sup_{s \in T} M_s = \sup_{s \in [0, t]} M_s.$$
This implies that $\{\max_{s \in T_n} M_s > c\} \uparrow \{\sup_{s \in T} M_s > c\}$, and so, by monotone convergence of sets, $P\{\max_{s \in T_n} M_s > c\} \to P\{\sup_{s \in T} M_s > c\}$. By the discrete-time version of the submartingale inequality, for each $m > 0$ sufficiently large,
$$P\{\sup_{s \in [0, t]} M_s > \lambda - \tfrac{1}{m}\} = P\{\sup_{s \in T} M_s > \lambda - \tfrac{1}{m}\} = \lim_n P\{\max_{s \in T_n} M_s > \lambda - \tfrac{1}{m}\} \le \frac{1}{\lambda - 1/m} E|M_t|.$$
Let $m$ tend to infinity.

By exactly the same reasoning, we can generalise the $L^p$-inequality to continuous time.

Theorem 2.3.11 (Doob's $L^p$-inequality) Let $M$ be a right-continuous martingale or a right-continuous, non-negative submartingale. Then for all $p > 1$ and $t \ge 0$,
$$E \sup_{s \le t} |M_s|^p \le \Big( \frac{p}{p-1} \Big)^p E|M_t|^p.$$

2.3.5 Optional sampling

We will now discuss the continuous-time version of the optional sampling theorem.

Theorem 2.3.12 (Optional sampling theorem) Let $M$ be a right-continuous, uniformly integrable supermartingale. Then for all stopping times $\sigma \le \tau$, the random variables $M_\tau$ and $M_\sigma$ are integrable and
$$E(M_\tau \mid \mathcal{F}_\sigma) \le M_\sigma \quad \text{a.s.},$$
with equality if $M$ is a martingale.

Proof. By Lemma 1.6.16 there exist stopping times $\sigma_n$ and $\tau_n$ taking only finitely many values, such that $\sigma_n \le \tau_n$, and $\sigma_n \downarrow \sigma$, $\tau_n \downarrow \tau$. By the discrete-time optional sampling theorem applied to the supermartingale $(M_{k/2^n})_{k \in \mathbb{Z}_+}$ (it is uniformly integrable!),
$$E(M_{\tau_n} \mid \mathcal{F}_{\sigma_n}) \le M_{\sigma_n} \quad \text{a.s.}$$
Since $\sigma \le \sigma_n$, it holds that $\mathcal{F}_\sigma \subseteq \mathcal{F}_{\sigma_n}$. It follows that
$$E(M_{\tau_n} \mid \mathcal{F}_\sigma) = E\big( E(M_{\tau_n} \mid \mathcal{F}_{\sigma_n}) \mid \mathcal{F}_\sigma \big) \le E(M_{\sigma_n} \mid \mathcal{F}_\sigma) \quad \text{a.s.} \qquad (2.3.2)$$
Similarly, $E(M_{\tau_n} \mid \mathcal{F}_{\tau_{n+1}}) \le M_{\tau_{n+1}}$ a.s. Hence $(M_{\tau_n})_n$ is a backward supermartingale in the sense of the Lévy–Doob downward theorem 2.2.15. Since $\sup_n EM_{\tau_n} \le EM_0$, this theorem implies that $(M_{\tau_n})_n$ is uniformly integrable and converges a.s. and in $L^1$. By right-continuity, $M_{\tau_n} \to M_\tau$ a.s., hence $M_{\tau_n} \to M_\tau$ in $L^1$. Similarly, $M_{\sigma_n} \to M_\sigma$ in $L^1$. Now take $A \in \mathcal{F}_\sigma$. By equation (2.3.2) it holds that
$$\int_A M_{\tau_n} \, dP \le \int_A M_{\sigma_n} \, dP.$$
By $L^1$-convergence, letting $n$ tend to infinity, this yields
$$\int_A M_\tau \, dP \le \int_A M_\sigma \, dP.$$
This completes the proof.

So far we have not yet addressed the question whether stopped (sub-, super-)martingales are again (sub-, super-)martingales in continuous time. The next theorem prepares the way to prove this.

Theorem 2.3.13 A right-continuous, adapted process $M$ is a supermartingale (resp. a martingale) if and only if for all bounded stopping times $\sigma \le \tau$, the random variables $M_\tau$ and $M_\sigma$ are integrable and $EM_\tau \le EM_\sigma$ (resp. $EM_\tau = EM_\sigma$).

Proof. Suppose that $M$ is a supermartingale. Since $\tau$ is bounded, there exists a constant $K > 0$ such that $\tau \le K$ a.s. As in the construction in the proof of Lemma 1.6.16, there exist stopping times $\tau_n \downarrow \tau$ and $\sigma_n \downarrow \sigma$ that are bounded by $K$ and take finitely many values; in particular $\tau_n, \sigma_n \in D_n = \{K k 2^{-n} : k = 0, 1, \ldots, 2^n\}$. Note that $(D_n)_n$ is an increasing sequence of sets. By bounded optional stopping for discrete-time supermartingales, we have
$$E(M_{\tau_n} \mid \mathcal{F}_{\tau_{n+1}}) \le M_{\tau_{n+1}} \quad \text{a.s.}$$
(consider $M$ restricted to the discrete time points $\{K k 2^{-(n+1)} : k \in \mathbb{Z}_+\}$). We can apply the Lévy–Doob downward theorem 2.2.15 to obtain that $(M_{\tau_n})_n$ is a uniformly integrable supermartingale, converging a.s. and in $L^1$ to an integrable limit, as in the proof of the previous theorem. By right-continuity the limit is $M_\tau$. Analogously, we obtain that $M_\sigma$ is integrable. Bounded optional stopping for discrete-time supermartingales similarly yields $E(M_{\tau_n} \mid \mathcal{F}_{\sigma_n}) \le M_{\sigma_n}$ a.s. Taking expectations and using $L^1$-convergence proves that $EM_\tau \le EM_\sigma$.

The reverse statement is proved by arguing as in the proof of Theorem 2.2.5. Let $s \le t$ and let $A \in \mathcal{F}_s$. Choose the stopping times $\sigma = s$ and $\tau = 1_A t + 1_{A^c} s$. Then $EM_\tau \le EM_\sigma$ reduces to $\int_A M_t \, dP \le \int_A M_s \, dP$, which is the supermartingale property.
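Theorem 2.3.13 is also the statement one verifies in simulations of optional stopping. A classical sanity check (an illustration of mine, not from the notes; all parameters are arbitrary): for the simple random walk and the exit time of an interval, $EM_\tau = EM_0 = 0$ identifies the exit probabilities.

```python
import numpy as np

rng = np.random.default_rng(7)

# Optional stopping for the simple random walk M and tau = first exit time
# of (-a, b).  M^tau is a bounded martingale, so E M_tau = 0, i.e.
# b * P(hit b first) - a * (1 - P(hit b first)) = 0, giving P = a/(a+b).
a, b, n_paths = 3, 7, 10_000
hits_b = 0
for _ in range(n_paths):
    m = 0
    while -a < m < b:
        m += 1 if rng.random() < 0.5 else -1
    hits_b += (m == b)
print(hits_b / n_paths, "vs", a / (a + b))   # both ~ 0.3
```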

Corollary 2.3.14 If $M$ is a right-continuous (super)martingale and $\tau$ is a stopping time, then the stopped process $M^\tau$ is a (super)martingale as well.

Proof. Note that $M^\tau$ is right-continuous, and by Lemmas 1.6.13 and 1.6.15 it is adapted. Let $\sigma \le \xi$ be bounded stopping times. By applying the previous theorem to the supermartingale $M$, and using that $\sigma \wedge \tau \le \xi \wedge \tau$ are bounded stopping times, we find that
$$EM^\tau_\sigma = EM_{\tau \wedge \sigma} \ge EM_{\tau \wedge \xi} = EM^\tau_\xi.$$
Since $\sigma$ and $\xi$ were arbitrary bounded stopping times, another application of the previous theorem yields the desired result.

Just as in discrete time, the assumption of uniform integrability is crucial for the optional sampling theorem. If this condition is dropped, we only have an inequality in general. Theorem 2.2.18 carries over to continuous time by using the same arguments as in the proof of Theorem 2.3.12.

Theorem 2.3.15 Let $M$ be a right-continuous, non-negative supermartingale and let $\sigma \le \tau$ be stopping times. Then
$$E(M_\tau \mid \mathcal{F}_\sigma) \le M_\sigma \quad \text{a.s.}$$

A consequence of this result is that non-negative right-continuous supermartingales stay at zero once they have hit it.

Corollary 2.3.16 Let $M$ be a non-negative, right-continuous supermartingale and define $\tau = \inf\{t : M_t = 0 \text{ or } M_{t-} = 0\}$, where $M_{t-} = \limsup_{s \uparrow t} M_s$. Then $P\{M_{\tau + t} = 0 \text{ for all } t \ge 0 \mid \tau < \infty\} = 1$.

Proof. Non-negative supermartingales are bounded in $L^1$, so by Theorem 2.3.7, $M$ converges a.s. to an integrable limit $M_\infty$, say. Note that $\limsup_{s \uparrow t} M_s = 0$ implies that $\lim_{s \uparrow t} M_s$ exists and equals $0$. Hence, by right-continuity, $\tau(\omega) \le t$ if and only if $\inf\{M_q(\omega) : q \in \mathbb{Q} \cap [0, t]\} = 0$ or $M_t(\omega) = 0$. Hence $\tau$ is a stopping time. Define $\tau_n = \inf\{t : M_t < n^{-1}\}$. Then $\tau_n$ is a stopping time with $\tau_n \le \tau$. Furthermore, for all $q \in \mathbb{Q}_+$, $\tau + q$ is a stopping time. Then by the foregoing theorem,
$$EM_{\tau + q} \le EM_{\tau_n} \le \frac{1}{n} P\{\tau_n < \infty\} + EM_\infty 1_{\{\tau_n = \infty\}}.$$
On the other hand, since $\tau_n = \infty$ implies $\tau + q = \infty$, we have $EM_\infty 1_{\{\tau_n = \infty\}} \le EM_\infty 1_{\{\tau + q = \infty\}}$. Combination yields, for all $n \in \mathbb{Z}_+$ and all $q \in \mathbb{Q}_+$,
$$EM_{\tau + q} 1_{\{\tau + q < \infty\}} = EM_{\tau + q} - EM_\infty 1_{\{\tau + q = \infty\}} \le \frac{1}{n} P\{\tau_n < \infty\}.$$

Taking the limit $n \to \infty$ yields $EM_{\tau + q} 1_{\{\tau + q < \infty\}} = 0$ for all $q \in \mathbb{Q}_+$. By the non-negativity of $M$ we get that $M_{\tau + q} 1_{\{\tau + q < \infty\}} = 0$ a.s. for all $q \in \mathbb{Q}_+$. But then also
$$P\Big\{ \bigcap_{q \in \mathbb{Q}_+} \{M_{\tau + q} = 0\} \;\Big|\; \tau < \infty \Big\} = 1.$$
By right-continuity,
$$\bigcap_{q \in \mathbb{Q}_+} \{M_{\tau + q} = 0\} \cap \{\tau < \infty\} = \{M_{\tau + t} = 0 \text{ for all } t \ge 0, \ \tau < \infty\}.$$
Note that this set belongs to $\mathcal{F}$!

2.4 Applications to Brownian motion

In this section we apply the developed theory to the study of Brownian motion.

2.4.1 Quadratic variation

The following result extends the result of Exercise 1.15 of Chapter 1.

Theorem 2.4.1 Let $W$ be a Brownian motion and fix $t > 0$. For $n \in \mathbb{Z}_+$, let $\pi_n$ be a partition of $[0, t]$ given by $0 = t^n_0 \le t^n_1 \le \cdots \le t^n_{k_n} = t$, and suppose that the mesh $\|\pi_n\| = \max_k |t^n_k - t^n_{k-1}|$ tends to zero as $n \to \infty$. Then
$$\sum_k (W_{t^n_k} - W_{t^n_{k-1}})^2 \xrightarrow{L^2} t, \quad n \to \infty.$$
If the partitions are nested, we also have
$$\sum_k (W_{t^n_k} - W_{t^n_{k-1}})^2 \xrightarrow{\text{a.s.}} t, \quad n \to \infty.$$

Proof. For the first statement see Exercise 1.15 in Chapter 1. To prove the second one, denote the sum by $X_n$ and put $\mathcal{F}_n = \sigma(X_n, X_{n+1}, \ldots)$. Then $\mathcal{F}_{n+1} \subseteq \mathcal{F}_n$ for every $n \in \mathbb{Z}_+$. Now suppose that we can show that $E(X_n \mid \mathcal{F}_{n+1}) = X_{n+1}$ a.s. Then, since $\sup_n EX_n < \infty$, the Lévy–Doob downward theorem 2.2.15 implies that $X_n$ converges a.s. to a finite limit $X_\infty$. By the first statement of the theorem, the $X_n$ converge in probability to $t$. Hence we must have $X_\infty = t$ a.s.

So it remains to prove that $E(X_n \mid \mathcal{F}_{n+1}) = X_{n+1}$ a.s. Without loss of generality, we assume that the number of elements of the partition $\pi_n$ equals $n$. In that case, there exists a sequence $(t_n)$ such that the partition $\pi_n$ has the numbers $t_1, \ldots, t_n$ as its division points: the point $t_n$ is added to $\pi_{n-1}$ to form the next partition $\pi_n$. Now fix $n$ and consider the process $W^*$ defined by
$$W^*_s = W_{s \wedge t_{n+1}} - (W_s - W_{s \wedge t_{n+1}}).$$
By Exercise 1.12 of Chapter 1, $W^*$ is again a BM. For $W^*$, denote the sums analogous to $X_k$ by $X^*_k$. Then it is easily seen for $k \ge n + 1$ that $X^*_k = X_k$. Moreover, it holds that $X^*_n - X^*_{n+1} = -(X_n - X_{n+1})$ (check!). Since both $W$ and $W^*$ are BMs, the sequences $(X_1, X_2, \ldots)$ and $(X^*_1, X^*_2, \ldots)$ have the same distribution. It follows that a.s.
$$E(X_n - X_{n+1} \mid \mathcal{F}_{n+1}) = E(X_n - X_{n+1} \mid X_{n+1}, X_{n+2}, \ldots) = E(X^*_n - X^*_{n+1} \mid X^*_{n+1}, X^*_{n+2}, \ldots) = E(X_{n+1} - X_n \mid X_{n+1}, X_{n+2}, \ldots) = -E(X_n - X_{n+1} \mid \mathcal{F}_{n+1}).$$

This implies that $E(X_n - X_{n+1} \mid \mathcal{F}_{n+1}) = 0$ a.s.

A real-valued function $f$ is said to be of finite variation on an interval $[a, b]$ if there exists a finite number $K > 0$ such that for every finite partition $a = t_0 < \cdots < t_n = b$ of $[a, b]$ it holds that
$$\sum_k |f(t_k) - f(t_{k-1})| < K.$$
Roughly speaking, this means that the graph of the function $f$ on $[a, b]$ has finite length. Theorem 2.4.1 shows that the sample paths of BM have positive, finite quadratic variation. This has the following consequence.

Corollary 2.4.2 Almost every sample path of BM has unbounded variation on every interval.

Proof. Fix $t > 0$. Let $\pi_n$ be nested partitions of $[0, t]$ given by $0 = t^n_0 \le t^n_1 \le \cdots \le t^n_{k_n} = t$ and suppose that the mesh $\|\pi_n\| = \max_k |t^n_k - t^n_{k-1}|$ tends to $0$ as $n \to \infty$. Then
$$\sum_k (W_{t^n_k} - W_{t^n_{k-1}})^2 \le \max_k |W_{t^n_k} - W_{t^n_{k-1}}| \sum_k |W_{t^n_k} - W_{t^n_{k-1}}|.$$
By the uniform continuity of Brownian sample paths, the first factor on the right-hand side converges to zero a.s. as $n \to \infty$. Hence, if the Brownian motion had finite variation on $[0, t]$ with positive probability, then $\sum_k (W_{t^n_k} - W_{t^n_{k-1}})^2$ would converge to $0$ with positive probability. This contradicts Theorem 2.4.1.

2.4.2 Exponential inequality

Let $W$ be a Brownian motion. We have the following exponential inequality for the tail of the running maximum of Brownian motion.

Theorem 2.4.3 For every $t \ge 0$ and $\lambda > 0$,
$$P\Big\{ \sup_{s \le t} W_s \ge \lambda \Big\} \le e^{-\lambda^2 / 2t} \quad \text{and} \quad P\Big\{ \sup_{s \le t} |W_s| \ge \lambda \Big\} \le 2 e^{-\lambda^2 / 2t}.$$

Proof. For $a > 0$ consider the exponential martingale $M$ defined by $M_t = \exp\{aW_t - a^2 t/2\}$ (see Example 2.1.5). Observe that
$$P\Big\{ \sup_{s \le t} W_s \ge \lambda \Big\} \le P\Big\{ \sup_{s \le t} M_s \ge e^{a\lambda - a^2 t/2} \Big\}.$$
By the submartingale inequality, the probability on the right-hand side is bounded by
$$e^{a^2 t/2 - a\lambda} EM_t = e^{a^2 t/2 - a\lambda} EM_0 = e^{a^2 t/2 - a\lambda}.$$
This holds for every $a > 0$; minimising the exponent $a^2 t/2 - a\lambda$ over $a$ (the minimum is attained at $a = \lambda/t$) yields the bound $e^{-\lambda^2/2t}$, which is the first inequality. The second inequality follows by applying the first one to the Brownian motions $W$ and $-W$.
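The two bounds of Theorem 2.4.3 are easy to compare with simulation (a sketch of my own under the stated assumptions; note that discretising BM slightly underestimates the running maximum):

```python
import numpy as np

rng = np.random.default_rng(8)

# Theorem 2.4.3: P(sup_{s<=t} W_s >= lam) <= exp(-lam^2 / (2 t)).
t, lam, n, n_paths = 1.0, 2.0, 1_000, 10_000
dt = t / n
W = np.sqrt(dt) * rng.standard_normal((n_paths, n)).cumsum(axis=1)

print((W.max(axis=1) >= lam).mean())   # ~ 2 (1 - Phi(2)) = 0.0455 (reflection)
print(np.exp(-lam ** 2 / (2 * t)))     # 0.1353, the exponential upper bound
```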