MSH7 - APPLIED PROBABILITY AND STOCHASTIC CALCULUS


ANDREW TULLOCH

Contents

1. Lecture 1 - Tuesday 1 March
2. Lecture 2 - Thursday 3 March
2.1. Concepts of convergence
3. Lecture 3 - Tuesday 8 March
4. Lecture 4 - Thursday 10 March
5. Lecture 5 - Tuesday 15 March
5.1. Properties of paths of Brownian motion
6. Lecture 6 - Thursday 17 March
7. Lecture 7 - Tuesday 22 March
7.1. Construction of Brownian motion
8. Lecture 8 - Thursday 24 March
9. Lecture 9 - Tuesday 29 March
10. Lecture 10 - Thursday 31 March
11. Lecture 11 - Tuesday 5 April
12. Lecture 12 - Thursday 7 April
13. Lecture 13 - Tuesday 12 April
14. Lecture 14 - Thursday 14 April
15. Lecture 15 - Tuesday 19 April
16. Lecture 16 - Thursday 21 April
17. Lecture 17 - Tuesday 3 May
17.1. Stochastic integrals for martingales
18. Lecture 18 - Tuesday 10 May
18.1. Itô integration for continuous semimartingales
18.2. Stochastic differential equations
19. Lecture 19 - Thursday 12 May
20. Lecture 20 - Tuesday 17 May
20.1. Numerical methods for stochastic differential equations
20.2. Applications to mathematical finance
20.3. Martingale method
21. Lecture 21 - Thursday 19 May
21.1. Change of Measure
22. Lecture 22 - Tuesday 24 May
22.1. Black-Scholes model
23. Lecture 23 - Thursday 26 May
24. Lecture 24 - Tuesday 31 May
25. Lecture 25 - Thursday 2 June

1. Lecture 1 - Tuesday 1 March

Definition 1.1 (Finite-dimensional distribution). The finite-dimensional distributions of a stochastic process X are the joint distributions of (X_{t_1}, X_{t_2}, ..., X_{t_n}) for t_1 < ... < t_n.

Definition 1.2 (Equality in distribution). Two random variables X and Y are equal in distribution if P(X ≤ α) = P(Y ≤ α) for all α ∈ R. We write X =_d Y.

Definition 1.3 (Strictly stationary). A stochastic process X is strictly stationary if (X_{t_1}, X_{t_2}, ..., X_{t_n}) =_d (X_{t_1+h}, X_{t_2+h}, ..., X_{t_n+h}) for all t_i and all h ≥ 0.

Definition 1.4 (Weakly stationary). A stochastic process X is weakly stationary if E(X_t) = E(X_{t+h}) and Cov(X_t, X_s) = Cov(X_{t+h}, X_{s+h}) for all t, s, h.

Lemma 1.5. If E(X_t^2) < ∞ for all t, then strictly stationary implies weakly stationary.

Example 1.6. The stochastic process {X_t} with the X_t all IID is strictly stationary. The stochastic process {W_t} with W_t ~ N(0, t) and W_t − W_s independent of W_s (for s < t) is neither strictly nor weakly stationary.

Definition 1.7 (Stationary increments). A stochastic process has stationary increments if X_t − X_s =_d X_{t+h} − X_{s+h} for all s, t, h.

2. Lecture 2 - Thursday 3 March

Example 2.1. Let X_n, n ≥ 1 be IID random variables. Consider the stochastic process {S_n} where S_n = Σ_{j=1}^n X_j. Then {S_n} has stationary increments.

2.1. Concepts of convergence. There are three major concepts of convergence of random variables: convergence in distribution, convergence in probability, and almost sure convergence.

Definition 2.2 (Convergence in distribution). X_n →_d X if P(X_n ≤ x) → P(X ≤ x) at all continuity points x of the limit distribution function.

Definition 2.3 (Convergence in probability). X_n →_p X if P(|X_n − X| ≥ ϵ) → 0 for all ϵ > 0.

Definition 2.4 (Almost sure convergence). X_n →_{a.s.} X if, except on a null set A, X_n(ω) → X(ω); that is, P(lim_{n→∞} X_n = X) = 1.
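As a quick numerical illustration of convergence in probability (this sketch is not part of the original notes; the sample sizes, ϵ = 0.05, and all names below are our own), the sample mean of IID Uniform(0, 1) variables satisfies P(|X̄_n − 1/2| ≥ ϵ) → 0 as n grows:

```python
import numpy as np

# Empirical check that sample means of IID Uniform(0,1) variables
# converge in probability to the mean 1/2 (weak law of large numbers).
rng = np.random.default_rng(0)
eps = 0.05
reps = 2000  # independent replications per sample size

def deviation_prob(n):
    """Empirical estimate of P(|sample mean of n uniforms - 1/2| >= eps)."""
    means = rng.uniform(0.0, 1.0, size=(reps, n)).mean(axis=1)
    return float(np.mean(np.abs(means - 0.5) >= eps))

p10, p1000 = deviation_prob(10), deviation_prob(1000)
print(p10, p1000)  # the deviation probability shrinks as n grows
```

Note this only demonstrates convergence in probability; almost sure convergence is a statement about whole sample paths and cannot be seen from marginal deviation probabilities alone.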

Definition 2.5 (σ-field generated by X). Let X be a random variable defined on a probability space (Ω, F, P). We call σ(X) the σ-field generated by X, given by σ(X) = { X^{-1}(B) : B ∈ B }, where X^{-1}(B) = {ω : X(ω) ∈ B} and B is the Borel σ-field on R.

Definition 2.6 (Conditional probability). We have P(A | B) = P(A ∩ B)/P(B) if P(B) > 0.

Definition 2.7 (Naive conditional expectation). We have E(X | B) = E(X I_B)/P(B).

Definition 2.8 (Conditional density). Let g(x, y) be the joint density of X and Y, and let g_Y(y) = ∫_R g(x, y) dx be the marginal density of Y. Then g_{X|Y=y}(x) = g(x, y)/g_Y(y) defines the conditional density given Y = y. Finally, we define E(X | Y = y) = ∫_R x g_{X|Y=y}(x) dx.

Definition 2.9 (Conditional expectation). Let (Ω, F, P) be a probability space, let A be a sub-σ-field of F, and let X be a random variable with E(|X|) < ∞. We define E(X | A) to be a random variable Z such that
(i) Z is A-measurable,
(ii) E(X I_A) = E(Z I_A) for all A ∈ A.

Proposition 2.10 (Properties of the conditional expectation). Consider Z = E(X | Y) = E(X | σ(Y)).
- If T is σ(Y)-measurable, then E(XT | Y) = T E(X | Y) a.s.
- If T is independent of Y, then E(T | Y) = E(T).
- E(X) = E(E(X | Y)) (the tower property).

3. Lecture 3 - Tuesday 8 March

Definition 3.1 (Martingale). Let {X_t, t ≥ 0} be right-continuous with left-hand limits (lim_{s↑t} X_s exists), and let {F_t, t ≥ 0} be a filtration. Then X is called a martingale with respect to F_t if
(i) X is adapted to F_t, i.e. X_t is F_t-measurable,
(ii) E(|X_t|) < ∞ for all t,
(iii) E(X_t | F_s) = X_s a.s. for all s ≤ t.

Example 3.2. Let X_n be IID with E(X_n) = 0. Then {S_k, k ≥ 0}, where S_k = Σ_{i=0}^k X_i, is a martingale.

Example 3.3. An independent-increment process {X_t, t ≥ 0} with E(X_t) = 0 and E(|X_t|) < ∞ is a martingale with respect to F_t = σ{X_s, s ≤ t}.

Definition 3.4 (Gaussian process). Let {X_t, t ≥ 0} be a stochastic process. If the finite-dimensional distributions are multivariate normal, that is, (X_{t_1}, ..., X_{t_m}) ~ N(µ, Σ) for all t_1, ..., t_m, then we call X a Gaussian process.

Definition 3.5 (Markov process). A continuous-time process X is a Markov process if for all t, each A ∈ σ(X_s, s > t) and B ∈ σ(X_s, s < t), we have P(A | X_t, B) = P(A | X_t).

Definition 3.6 (Diffusion process). Consider the stochastic differential equation dX_t = µ(t, x) dt + σ(t, x) dB_t. A diffusion process is a path-continuous, strong Markov process such that
lim_{h↓0} h^{-1} E(X_{t+h} − X_t | X_t = x) = µ(t, x),
lim_{h↓0} h^{-1} E([X_{t+h} − X_t − hµ(t, X_t)]^2 | X_t = x) = σ^2(t, x).

Definition 3.7 (Path-continuous). A process is path-continuous if lim_{s→t} X_s = X_t for all t.

Definition 3.8 (Lévy process). Let {X_t, t ≥ 0} be a stochastic process. We call X a Lévy process if
(i) X_0 = 0 a.s.,
(ii) it has stationary and independent increments,
(iii) X is stochastically continuous, that is, for all s, t, ϵ > 0, lim_{s→t} P(|X_s − X_t| ≥ ϵ) = 0. Equivalently, X_s →_p X_t as s → t.

Example 3.9 (Poisson process). Let (N(t), t ≥ 0) be a stochastic process. We call N a Poisson process if the following all hold:
(i) N(0) = 0,
(ii) N has independent increments,
(iii) for all s, t ≥ 0, P(N(t + s) − N(s) = n) = e^{−λt} (λt)^n / n!, n = 0, 1, 2, ...

The Poisson process is stochastically continuous: for 0 < ϵ < 1,
P(|N(t) − N(s)| ≥ ϵ) = P(|N(t − s) − N(0)| ≥ ϵ) = 1 − P(N(t − s) = 0) = 1 − e^{−λ(t−s)} → 0 as t → s.

The Poisson process is not path-continuous, however: for any s, ϵ > 0 and 0 < δ < 1,
P(sup_{|t−s| ≤ ϵ} |N(t) − N(s)| > δ) ≥ P(N(s + ϵ) − N(s) ≥ 1) = 1 − e^{−λϵ} > 0.

4. Lecture 4 - Thursday 10 March

Definition 4.1 (Self-similar process). A process X is self-similar if for any t_1, t_2, ..., t_n and any c > 0 there exists an H such that
(X_{ct_1}, X_{ct_2}, ..., X_{ct_n}) =_d (c^H X_{t_1}, c^H X_{t_2}, ..., c^H X_{t_n}).
We call H the Hurst index.

Example 4.2 (Fractional process). (1 − B)^d X_t = ϵ_t, with ϵ_t a martingale difference, B X_t = X_{t−1} the backshift operator, and 0 < d < 1.

Definition 4.3 (Brownian motion). Let {B_t, t ≥ 0} be a stochastic process. We call B a Brownian motion if the following hold:
(i) B_0 = 0 a.s.,
(ii) {B_t} has stationary, independent increments,
(iii) for any t > 0, B_t ~ N(0, t),
(iv) the path t ↦ B_t is continuous almost surely, i.e. P(lim_{s→t} B_s = B_t for all t) = 1.

Definition 4.4 (Alternative formulation of Brownian motion). A process {B_t, t ≥ 0} is a Brownian motion if and only if
- {B_t, t ≥ 0} is a Gaussian process,
- E(B_t) = 0 and E(B_s B_t) = min(s, t),
- the process {B_t} is path-continuous.
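The moment conditions in Definition 4.4 can be sanity-checked by simulation (our own sketch, not part of the notes; the times s, t and sample size are arbitrary choices): sampling (B_s, B_t) through independent increments, the sample variance of B_t should be close to t and the sample covariance close to min(s, t).

```python
import numpy as np

# Sample (B_s, B_t) jointly for s < t using the independent increment
# B_t - B_s ~ N(0, t - s), then check Var(B_t) = t and Cov(B_s, B_t) = s.
rng = np.random.default_rng(1)
N, s, t = 200_000, 0.3, 0.7
B_s = np.sqrt(s) * rng.standard_normal(N)            # B_s ~ N(0, s)
B_t = B_s + np.sqrt(t - s) * rng.standard_normal(N)  # add the increment

var_t = float(B_t.var())
cov_st = float(np.mean(B_s * B_t))  # E(B_s) = 0, so this estimates Cov(B_s, B_t)
print(var_t, cov_st)                # expect roughly 0.7 and 0.3
```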

Proof. (⇒) For all t_1 < ... < t_m, we have (B_{t_1}, B_{t_2} − B_{t_1}, ..., B_{t_m} − B_{t_{m−1}}) ~ N(0, Σ) as B has stationary, independent increments. Hence (B_{t_1}, B_{t_2}, ..., B_{t_m}), a linear transformation of this vector, is normally distributed, so B is a Gaussian process. A direct computation gives E(B_t) = 0 and, for s ≤ t, E(B_s B_t) = E(B_s^2) + E(B_s (B_t − B_s)) = s = min(s, t). (⇐) To prove. □

Corollary. Let B_t be a Brownian motion. Then so are the following:
- {B_{t+t_0} − B_{t_0}, t ≥ 0},
- {−B_t, t ≥ 0},
- {c B_{t/c^2}, t ≥ 0}, for c ≠ 0,
- {t B_{1/t}, t > 0}, with the value 0 at t = 0.

Proof. Here we prove that {X_t} = {t B_{1/t}} is a Brownian motion. Consider Σ_{i=1}^n α_i X_{t_i}. By a telescoping argument, the process is Gaussian (each X_{t_i} can be written as a sum of increments B_{t_1}, B_{t_2} − B_{t_1}, etc.). We can also easily show that E(X_t) = 0 and E(X_t X_s) = ts · min(1/t, 1/s) = min(s, t), as required.

It remains to show that lim_{t↓0} X_t = 0 a.s. We must show
P(∩_{n≥1} ∪_{m≥1} {sup_{0<t<1/m} |X_t| ≤ 1/n}) = 1.
By path continuity the suprema may be taken over rational t, so this probability depends only on the finite-dimensional distributions of X. Since X has the same finite-dimensional distributions as a Brownian motion B (both Gaussian with the same mean and covariance), the probability equals
P(∩_{n≥1} ∪_{m≥1} {sup_{0<t<1/m} |B_t| ≤ 1/n}),
which is clearly one, by continuity of B at 0. □

5. Lecture 5 - Tuesday 15 March

Theorem 5.1 (Markov property of Brownian motion). For t_1 < ... < t_n < t, we have
P(B_t ≤ x | B_{t_1} = x_1, ..., B_{t_n} = x_n) = P(B_t ≤ x | B_{t_n} = x_n) = Φ((x − x_n)/√(t − t_n)).

Theorem 5.2. The joint density of (B_{t_1}, ..., B_{t_n}) is given by
g(x_1, ..., x_n) = Π_{j=1}^n f(x_j − x_{j−1}, t_j − t_{j−1}), where f(x, t) = (1/√(2πt)) e^{−x²/(2t)}
and t_0 = 0, x_0 = 0.

Theorem 5.3 (Density and distribution of the Brownian bridge). Let t_1 < t < t_2. Then
(B_t | B_{t_1} = a, B_{t_2} = b) ~ N(a + (b − a)(t − t_1)/(t_2 − t_1), (t_2 − t)(t − t_1)/(t_2 − t_1)).
The conditional density of B_t given B_{t_1} = a, B_{t_2} = b is the ratio of joint densities g_{t_1,t,t_2}(a, x, b)/g_{t_1,t_2}(a, b).

Theorem 5.4 (Joint distribution of B_s and B_t). For s < t,
P(B_s ≤ x, B_t ≤ y) = P(B_s ≤ x, B_t − B_s ≤ y − B_s)
= ∫_{−∞}^x (1/√(2πs)) e^{−z_1²/(2s)} ∫_{−∞}^{y−z_1} (1/√(2π(t−s))) e^{−z_2²/(2(t−s))} dz_2 dz_1.

5.1. Properties of paths of Brownian motion.

Definition 5.5 (Variation). Let g be a real function. The variation of g over an interval [a, b] is defined as
V_g([a, b]) = lim_{δ→0} Σ_j |g(t_j) − g(t_{j−1})|,
where δ is the mesh of the partition a = t_0 < ... < t_n = b of [a, b].

Definition 5.6 (Quadratic variation). The quadratic variation of a function g over an interval [a, b] is defined by
[g, g]([a, b]) = lim_{δ→0} Σ_j |g(t_j) − g(t_{j−1})|².

Theorem 5.7 (Non-differentiability of Brownian motion). Paths of Brownian motion are continuous almost surely, by definition. Considering differentiability, we claim:
(i) Brownian motion is non-differentiable almost surely: lim sup_{t→s} |B_t − B_s|/|t − s| = ∞ a.s.
(ii) Brownian motion has infinite variation on any interval [a, b], that is, V_B([a, b]) = lim_{δ→0} Σ_j |B(t_j) − B(t_{j−1})| = ∞ a.s.

(iii) Brownian motion has quadratic variation t on [0, t], that is,
[B, B]([0, t]) = lim_{δ→0} Σ_j |B(t_j) − B(t_{j−1})|² = t,
with convergence in L² (and hence in probability).

Proof. First, Brownian motion is 1/2-self-similar: (B_{ct_1}, ..., B_{ct_n}) =_d (c^{1/2} B_{t_1}, ..., c^{1/2} B_{t_n}), since B_{ct} ~ N(0, ct) = c^{1/2} N(0, t), as required.

Now suppose X is H-self-similar with stationary increments for some 0 < H < 1, with X_0 = 0. Then for any fixed t_0,
lim sup_{t→t_0} |X_t − X_{t_0}|/|t − t_0| = ∞ a.s.
Indeed, take t_n ↓ t_0; for any M,
P(lim sup_{t→t_0} |X_t − X_{t_0}|/|t − t_0| ≥ M) ≥ lim_n P(|X_{t_n} − X_{t_0}| ≥ M|t_n − t_0|) = lim_n P(|X_{t_n − t_0}| ≥ M|t_n − t_0|) = lim_n P(|X_1| ≥ M|t_n − t_0|^{1−H}) = 1,
using stationary increments and then self-similarity. For Brownian motion (H = 1/2) the last limit reads lim_n P(|N(0, 1)| ≥ M(t_n − t_0)^{1/2}) = 1, as required.

Next, assume V_B([0, t]) < ∞ almost surely. Consider Q_n = Σ_{j=1}^n |B_{t_j} − B_{t_{j−1}}|². Then
Q_n ≤ max_{j≤n} |B_{t_j} − B_{t_{j−1}}| · Σ_j |B_{t_j} − B_{t_{j−1}}| ≤ max_{j≤n} |B_{t_j} − B_{t_{j−1}}| · V_B([0, t]) → 0
as the mesh δ → 0, because B is uniformly continuous on [0, t]. This contradicts [B, B]([0, t]) = t > 0, so V_B([0, t]) = ∞. □
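A numerical companion to (ii) and (iii) (our own sketch, not from the notes; the grid sizes are arbitrary): on a partition of [0, 1] the sum of squared Brownian increments stays near t = 1, while the sum of absolute increments blows up as the same path is refined.

```python
import numpy as np

# Simulate Brownian increments on a fine grid of [0, 1] and compare the
# quadratic variation with the (divergent) first variation.
rng = np.random.default_rng(2)
n = 200_000
dB = np.sqrt(1.0 / n) * rng.standard_normal(n)  # increments, mesh 1/n

quad_var = float(np.sum(dB ** 2))     # should be close to t = 1
var_fine = float(np.sum(np.abs(dB)))  # grows like sqrt(2n/pi) as n grows
# Coarsen the SAME path to a grid 100 times coarser by summing increments:
var_coarse = float(np.sum(np.abs(dB.reshape(-1, 100).sum(axis=1))))
print(quad_var, var_coarse, var_fine)  # variation increases under refinement
```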

Proof. We now show that E((Q_n − t)²) → 0 as n → ∞. In fact, we have
E(Q_n) = Σ_j E(B_{t_j} − B_{t_{j−1}})² = Σ_j (t_j − t_{j−1}) = t.

We now show L² convergence. Write Y_j = |B_{t_j} − B_{t_{j−1}}|² − E(|B_{t_j} − B_{t_{j−1}}|²); the Y_j are independent and centred, so
E(Q_n − t)² = E((Q_n − E(Q_n))²) = Σ_j E(Y_j²) ≤ Σ_j (t_j − t_{j−1})² E(N(0, 1)^4) ≤ C δ t → 0
as the mesh δ → 0. Thus we have convergence in L². □

6. Lecture 6 - Thursday 17 March

Theorem 6.1 (Martingales related to Brownian motion). Let B_t, t ≥ 0 be a Brownian motion. Then the following are martingales with respect to F_t = σ(B_s, s ≤ t):
(1) B_t, t ≥ 0,
(2) B_t² − t, t ≥ 0,
(3) for any u, e^{uB_t − u²t/2}.

Proof. (1) is simple. (2) We know that E(|B_t²|) is finite for any t, and we can easily show E(B_t² − t | F_s) = B_s² − s a.s. □

Theorem 6.2 (Lévy's characterisation). Let X_t be a continuous martingale such that X_t² − t is also a martingale. Then X_t is a Brownian motion.

Definition 6.3 (Hitting time). Let T_α = inf{t ≥ 0 : B_t = α}.
(1) If α = 0, then T_0 = 0.
(2) If α > 0, then by the reflection principle
P(T_α ≤ t) = 2P(B_t ≥ α) = (2/√(2πt)) ∫_α^∞ e^{−x²/(2t)} dx.

We clearly have T_α > t ⟺ sup_{s≤t} B_s < α.

7. Lecture 7 - Tuesday 22 March

Theorem 7.1 (Arcsine law). Let B_t be a Brownian motion and 0 < a < b. Then
P(B_t = 0 for at least one t ∈ [a, b]) = (2/π) arccos √(a/b).

Example 7.2 (Processes derived from Brownian motion).
(1) Brownian bridge: X_t = B_t − tB_1, t ∈ [0, 1]. Consider IID data X_1, ..., X_n with distribution F, and the empirical distribution F_n(x) = (1/n) Σ_{j=1}^n 1{X_j ≤ x}. One can then prove that √n (F_n(x) − F(x)) converges in distribution to the Brownian bridge evaluated at F(x).
(2) Brownian motion with drift: X_t = µt + σB_t. This is a Gaussian process with E(X_t) = µt and Cov(X_t, X_s) = σ² min(s, t).
(3) Geometric Brownian motion: X_t = X_0 e^{µt + σB_t}. This is not a Gaussian process.
(4) Higher-dimensional Brownian motion: B_t = (B_t^1, ..., B_t^n), where the B^i are independent Brownian motions.

7.1. Construction of Brownian motion. Let S_n be a random walk with IID steps of unit variance, and define the rescaled process
B̂_t^n = (S_{[nt]} − E(S_{[nt]}))/√n at the grid points t = i/n,
linearly interpolated otherwise. Then one can prove that B̂^n converges in distribution to a Brownian motion B on [0, 1] (Donsker's invariance principle).

Definition 7.3 (Stochastic integral). We now turn to defining expressions of the form ∫ X_t dY_t, with X_t, Y_t stochastic processes.

Definition 7.4 (∫_0^t f(B_s) ds). Recall that ∫_0^t f(s) ds exists if f is bounded and continuous except on a set of Lebesgue measure zero. Thus, if f is bounded we can set
∫_0^t f(B_s) ds = lim_{δ→0} Σ_j f(B_{y_j})(t_j − t_{j−1}), with y_j ∈ [t_{j−1}, t_j].

We now seek to find ∫_0^1 B_s ds. Consider Q_n = Σ_j B_{y_j}(t_j − t_{j−1}). As a sum of normal variables, Q_n ~ N(µ_n, σ_n²). The limit X = ∫_0^1 B_s ds is therefore normally distributed with mean 0, and
E(X²) = E(∫_0^1 B_s ds ∫_0^1 B_t dt) = ∫_0^1 ∫_0^1 E(B_s B_t) ds dt = ∫_0^1 ∫_0^1 min(s, t) ds dt = 1/3.

Hence ∫_0^1 B_t dt ~ N(0, 1/3).

8. Lecture 8 - Thursday 24 March

Recall
∫_0^1 f(B_s) ds = lim_{δ→0} Σ_j f(B_{y_j})(t_{j+1} − t_j), and ∫_0^1 B_s ds = lim_{δ→0} Σ_j B_{y_j}(t_{j+1} − t_j) ~ N(0, 1/3).

Consider now the Riemann-Stieltjes integral I = ∫_0^1 f dg. We have the following:

Theorem 8.1. I exists if
(1) the functions f, g are not discontinuous at the same point x, and
(2) f is continuous and g has bounded variation, or
(2') f has finite p-variation and g has finite q-variation, where 1/p + 1/q > 1.
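The N(0, 1/3) claim can be checked by Monte Carlo (our own sketch, not from the notes; the discretisation with n = 256 steps and N = 20,000 paths is an arbitrary choice):

```python
import numpy as np

# Approximate int_0^1 B_s ds by a Riemann sum over each simulated path and
# check that the sample mean is near 0 and the sample variance near 1/3.
rng = np.random.default_rng(3)
N, n = 20_000, 256
dB = np.sqrt(1.0 / n) * rng.standard_normal((N, n))
B = np.cumsum(dB, axis=1)   # path values at grid points 1/n, ..., 1
integrals = B.mean(axis=1)  # Riemann sum approximation of the integral
print(float(integrals.mean()), float(integrals.var()))  # ~0 and ~1/3
```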

For any p, we define the p-variation by lim_{δ→0} Σ_j |f(t_j) − f(t_{j−1})|^p.

Theorem 8.2. If J = ∫_0^1 f(t) dg(t) exists for every continuous f, then g must have finite variation.

Theorem 8.3. B_s has bounded p-variation for any p ≥ 2 and unbounded q-variation for any q < 2.

Proof. For p ≥ 2 we can write p = 2 + (p − 2). Then we have
Σ_j |B_{t_j} − B_{t_{j−1}}|^p ≤ (max_j |B_{t_j} − B_{t_{j−1}}|)^{p−2} Σ_j |B_{t_j} − B_{t_{j−1}}|²,
and the right-hand side converges (to t for p = 2 and to 0 for p > 2), hence the p-variation exists and is finite. □

Corollary. Thus ∫_0^1 f dB_t is well defined if f has finite variation (setting q = 1, p = 2 gives 1/p + 1/q = 3/2 > 1).

Consider ∫_0^1 B_s dB_s — is this a Riemann-Stieltjes integral? Consider
Δ_{1n} = Σ_j B_{t_j}(B_{t_{j+1}} − B_{t_j}), Δ_{2n} = Σ_j B_{t_{j+1}}(B_{t_{j+1}} − B_{t_j}).

We have that
Δ_{2n} − Δ_{1n} = Σ_j (B_{t_{j+1}} − B_{t_j})² → 1
and
Δ_{1n} + Δ_{2n} = Σ_j (B_{t_{j+1}}² − B_{t_j}²) = B_1².
Thus we know Δ_{2n} → (B_1² + 1)/2 and Δ_{1n} → (B_1² − 1)/2: the two endpoint choices give different limits, so the Riemann-Stieltjes integral does not exist.

Definition 8.4 (Itô integral). The Itô integral is defined by evaluating the integrand at f(B_{t_j}), the left-hand endpoint of each partition interval [t_j, t_{j+1}).

9. Lecture 9 - Tuesday 29 March

Definition 9.1 (Itô integral). Consider ∫_0^1 f(s) dB_s, where f(s) is a real function and B_s a Brownian motion. We define the integral in two steps.
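The Δ_{1n} / Δ_{2n} computation above can be reproduced numerically for one simulated path (our own sketch, not from the notes): the two endpoint choices differ by the quadratic variation ≈ 1, and the left-endpoint (Itô) sum is close to (B_1² − 1)/2.

```python
import numpy as np

# Left- vs right-endpoint Riemann sums for int_0^1 B dB on one path.
rng = np.random.default_rng(4)
n = 100_000
dB = np.sqrt(1.0 / n) * rng.standard_normal(n)
B = np.concatenate([[0.0], np.cumsum(dB)])  # B_{t_0} = 0, ..., B_{t_n} = B_1

left = float(np.sum(B[:-1] * dB))   # Delta_1n: evaluate at left endpoints
right = float(np.sum(B[1:] * dB))   # Delta_2n: evaluate at right endpoints
B1 = float(B[-1])
print(right - left, left, (B1 ** 2 - 1) / 2)  # gap ~ 1; left sum ~ (B_1^2 - 1)/2
```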

(1) If f(s) = Σ_{j=0}^{m−1} c_j 1_{[t_j, t_{j+1})}(s) is a step function, define
I(f) = ∫_0^1 f(s) dB_s = Σ_{j=0}^{m−1} ∫_{t_j}^{t_{j+1}} f(s) dB_s = Σ_{j=0}^{m−1} c_j (B_{t_{j+1}} − B_{t_j}).

(2) If f ∈ L²([0, 1]), let f_n be a sequence of step functions such that f_n → f in L²([0, 1]). Then define I(f) as the limit in the sense that E(I(f_n) − I(f))² → 0, or let I(f) = lim_n I(f_n) in probability.

Remark. If f, g are step functions, then αI(f) + βI(g) = I(αf + βg).

Remark. If f is a step function, then I(f) ~ N(0, ∫_0^1 f²(s) ds).

Proof. I(f) = Σ_{j=0}^{m−1} c_j (B_{t_{j+1}} − B_{t_j}) ~ N(0, σ²), where σ² = Σ_{j=0}^{m−1} c_j² (t_{j+1} − t_j), by independence of the increments. □

Theorem 9.2. I(f) is well defined (independent of the choice of f_n).

Proof. Let f_n, g_m → f in L²([0, 1]). We then need only compute Δ_{n,m} = E(I(f_n) − I(g_m))² as m, n → ∞. In fact,
Δ_{n,m} = E(I(f_n − g_m))² = ∫_0^1 (f_n − g_m)² dx ≤ 2 ∫_0^1 (f_n − f)² dx + 2 ∫_0^1 (g_m − f)² dx → 0
as n, m → ∞. □

Remark. If f is continuous and of bounded variation, then
∫_0^1 f(s) dB_s = (R.S.) ∫_0^1 f(s) dB_s = lim_{δ→0} Σ_j f(t_j)(B_{t_{j+1}} − B_{t_j}).

Proof. Case 1). Linearity passes to the limit:
αI(f) + βI(g) = lim_n [αI(f_n) + βI(g_n)] = lim_n I(αf_n + βg_n) = I(αf + βg),
since αf_n + βg_n → αf + βg in L²([0, 1]).

Case 2). With I(f) = lim_n I(f_n) in probability and I(f_n) ~ N(0, σ_n²), it suffices to show σ_n² = ∫_0^1 f_n² dx → ∫_0^1 f² dx, so that
I(f_n) ~ N(0, ∫_0^1 f_n²(s) ds) → N(0, ∫_0^1 f²(s) ds).
In fact,
∫_0^1 f_n² dx = ∫_0^1 (f_n − f + f)² dx = ∫_0^1 f² dx + ∫_0^1 (f_n − f)² dx + 2 ∫_0^1 (f_n − f) f dx → ∫_0^1 f² dx,
as the last two terms tend to zero by L² convergence and Hölder's inequality. □

Remark. (R.S.) ∫_0^1 f(s) dB_s exists if f is of bounded variation.

Remark. If f is continuous then ∫_0^1 f² dx < ∞ and f_n(t) = Σ_j f(t_j) 1_{[t_j, t_{j+1})}(t) → f(t) in L²([0, 1]). Thus
I(f) = lim_{δ→0} I(f_n) = lim_{δ→0} Σ_{j=0}^{m−1} f(t_j)(B_{t_{j+1}} − B_{t_j}).

We have to prove ∫_0^1 (f_n − f)² dx → 0 if f is continuous; this follows from uniform continuity.

10. Lecture 10 - Thursday 31 March

We have (Itô) ∫_0^1 f(s) dB_s ~ N(0, ∫_0^1 f(t)² dt) if f is continuous and of finite variation. In this case, we can write
(R.S.) ∫_0^1 f(s) dB_s = (Itô) ∫_0^1 f(s) dB_s = lim_{δ→0} Σ_j f(t_j)(B_{t_{j+1}} − B_{t_j}).

Example 10.1. We have
∫_0^1 (1 − t) dB_t = (Itô) ∫_0^1 (1 − t) dB_t ~ N(0, ∫_0^1 (1 − t)² dt) = N(0, 1/3),
and by integrating by parts,
∫_0^1 (1 − t) dB_t = (1 − t)B_t |_0^1 − ∫_0^1 B_t d(1 − t) = 0 + ∫_0^1 B_t dt ~ N(0, 1/3).

Now consider ∫_0^1 X_s dB_s, where we now allow X_s to be a stochastic process. Write F_t = σ(B_s, s ≤ t). Let Π be the collection of processes X such that
(1) X_s is adapted to F_s, that is, for any s, X_s is F_s-measurable,
(2) ∫_0^1 X_s² ds < ∞ almost surely.

Let Π̄ = {X ∈ Π : ∫_0^1 E(X_s²) ds < ∞}. Then Π̄ ⊂ Π. For example, let X_s = e^{B_s²}. Then
E(X_s²) = E(e^{2B_s²}) = (1 − 4s)^{−1/2} for s < 1/4, and E(X_s²) = ∞ for s ≥ 1/4,
so X ∈ Π but X ∉ Π̄.

Definition 10.2 (Itô integral for stochastic integrands). We proceed in two steps.
(1) Let X_s = Σ_{j=0}^{n−1} ζ_j 1_{[t_j, t_{j+1})}(s), where ζ_j is F_{t_j}-measurable. Then I(X) = Σ_j ζ_j (B_{t_{j+1}} − B_{t_j}).
(2) If X ∈ Π, there exists a sequence X^n ∈ Π of step processes with ∫_0^1 |X_s^n − X_s|² ds → 0

as n → ∞, in probability (or in L² if X ∈ Π̄).

Proof. We show this only for X_s continuous; the general case can be found in Hui-Hsiung Kuo, Introduction to Stochastic Integration (p. 65). As X_s is continuous, it is in Π. Choose
X_s^n = Σ_{j=0}^{n−1} X_{t_j} 1_{[t_j, t_{j+1})}(s).
Then X^n ∈ Π and X_s^n → X_s for any s ∈ (0, 1). If X ∈ Π̄, we can also bound E(|X_s^n − X_s|²) uniformly, so by the dominated convergence theorem lim_n ∫_0^1 E(|X_s^n − X_s|²) ds = 0. □

Finally, if X ∈ Π, the Itô integral ∫_0^1 X_s dB_s is defined as I(X) = lim_n I(X^n), in probability, or in L² if X ∈ Π̄.

11. Lecture 11 - Tuesday 5 April

Let I(X) = ∫_0^1 X_s dB_s in the Itô sense. We then require
(1) X_s is F_s = σ{B_t, t ≤ s}-measurable,
(2) ∫_0^1 X_s² ds < ∞ almost surely.

Then I(X) = lim_n I(X^n), where X^n is a sequence of step processes converging to X in the sense that ∫_0^1 (X_s^n − X_s)² ds → 0.

We then show that this definition is independent of the sequence of step processes: for step processes X^n and Y^m approximating X, we have
∫_0^1 (X_s^n − Y_s^m)² ds ≤ 2 ∫_0^1 (Y_s^m − X_s)² ds + 2 ∫_0^1 (X_s^n − X_s)² ds → 0.

Theorem 11.1 (Properties of the Itô integral). (1) For any α, β ∈ R and X, Y ∈ Π,
I(αX + βY) = αI(X) + βI(Y).

(2) For any X ∈ Π̄, we have
E(I(X)) = 0, E(I²(X)) = ∫_0^1 E(X_s²) ds (the Itô isometry).
If X, Y ∈ Π̄, then E(I(X)I(Y)) = ∫_0^1 E(X_s Y_s) ds.

(3) If X_s is continuous, then I(X) = lim_{δ→0} Σ_j X_{t_j}(B_{t_{j+1}} − B_{t_j}) in probability.

(4) If X is continuous and of finite variation with ∫_0^1 |B_s| |dX_s| < ∞, then
(R.S.) ∫_0^1 X_s dB_s = (Itô) ∫_0^1 X_s dB_s = X_1 B_1 − X_0 B_0 − ∫_0^1 B_s dX_s.

Proposition 11.2. We now show why we require
(1) X_s is F_s = σ{B_t, t ≤ s}-measurable,
(2) ∫_0^1 X_s² ds < ∞ almost surely.

Proof. Motivation for (1): for a step process X_s = Σ_j ζ_j 1_{[t_j, t_{j+1})} with ζ_j F_{t_j}-measurable, I(X) = Σ_j ζ_j (B_{t_{j+1}} − B_{t_j}) and
E(I(X)) = Σ_j E(ζ_j (B_{t_{j+1}} − B_{t_j})) = Σ_j E(E(ζ_j (B_{t_{j+1}} − B_{t_j}) | F_{t_j})) = Σ_j E(ζ_j E(B_{t_{j+1}} − B_{t_j} | F_{t_j})) = 0.

Motivation for (2):
E(I²(X)) = E(Σ_j ζ_j (B_{t_{j+1}} − B_{t_j}))²
= 2 Σ_{i<j} E(E(ζ_i ζ_j (B_{t_{j+1}} − B_{t_j})(B_{t_{i+1}} − B_{t_i}) | F_{t_j})) + Σ_j E(ζ_j² (B_{t_{j+1}} − B_{t_j})²)
= Σ_j E(ζ_j²)(t_{j+1} − t_j) = ∫_0^1 E(X_t²) dt,
where the cross terms vanish by conditioning.

We now take step processes X^n ∈ Π̄ such that ∫_0^1 E(X_s^n − X_s)² ds → 0. We have
E(I²(X)) = E(I(X) − I(X^n) + I(X^n))² = E(I²(X^n)) + 2E(I(X^n)(I(X) − I(X^n))) + E(I(X) − I(X^n))².
By Cauchy-Schwarz, the middle term tends to zero, and by definition, the third term tends to zero. Thus we have
lim_n E(I²(X^n)) = E(I²(X)).

By definition, I(X) = lim_n I(X^n) = lim_{δ→0} Σ_j X_{t_j}(B_{t_{j+1}} − B_{t_j}), and we proved the required result in lectures.

We now show that the R.S. integral exists if X is continuous and of finite variation with ∫_0^1 |B_s| |dX_s| < ∞, so that our integration by parts formula holds. We have
(R.S.) ∫_0^1 X_s dB_s = lim_{δ→0} Σ_j X_{y_j}(B_{t_{j+1}} − B_{t_j}) = lim_{δ→0} Σ_j X_{t_j}(B_{t_{j+1}} − B_{t_j}),
which is our Itô integral by definition.
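The isometry E(I²(X)) = ∫_0^1 E(X_s²) ds can be checked by Monte Carlo for X_s = B_s, where the right-hand side is ∫_0^1 s ds = 1/2 (our own sketch, not from the notes; grid and sample sizes are arbitrary):

```python
import numpy as np

# Approximate the Ito integral int_0^1 B dB by its left-endpoint sums over
# many paths and check E(I) ~ 0 and E(I^2) ~ 1/2 (Ito isometry).
rng = np.random.default_rng(5)
N, n = 20_000, 512
dB = np.sqrt(1.0 / n) * rng.standard_normal((N, n))
# B at the LEFT endpoint of each subinterval (starting from B_0 = 0):
B_left = np.hstack([np.zeros((N, 1)), np.cumsum(dB, axis=1)[:, :-1]])
ito = np.sum(B_left * dB, axis=1)  # one Ito sum per path
print(float(ito.mean()), float(np.mean(ito ** 2)))  # ~0 and ~0.5
```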

Definition 11.3 (Itô process). Suppose Y_t = ∫_0^t X_s dB_s, t ≥ 0, is well defined for X ∈ Π. Then Y_t is an Itô process. To show a process Y_t is an Itô process, we need to show that X_s is adapted and that ∫_0^t X_s² ds < ∞ a.s.

Theorem 11.4. Y_t is continuous (except on a null set) and of infinite variation: roughly,
|Y_{t_{j+1}} − Y_{t_j}| = |∫_{t_j}^{t_{j+1}} X_s dB_s| ≥ C min_s |X_s| · |B_{t_{j+1}} − B_{t_j}|,
and Σ_j |B_{t_{j+1}} − B_{t_j}| → ∞.

12. Lecture 12 - Thursday 7 April

From before, consider the Itô process Y_t = ∫_0^t X_s dB_s.

Lemma 12.1. E(∫_s^t X_u dB_u | F_s) = 0.

Proof. Let X_u = Σ_{j=0}^{n−1} ζ_j 1_{[t_j, t_{j+1})}(u) on [s, t]. Then
E(∫_s^t X_u dB_u | F_s) = Σ_j E(ζ_j (B_{t_{j+1}} − B_{t_j}) | F_s)
= Σ_j E(E(ζ_j (B_{t_{j+1}} − B_{t_j}) | F_{t_j}) | F_s)
= Σ_j E(ζ_j E(B_{t_{j+1}} − B_{t_j} | F_{t_j}) | F_s) = 0,
where the second equality follows from the fact that (F_t) is an increasing family of σ-fields (the tower property, since s ≤ t_j), and the final one from the fact that B_{t_{j+1}} − B_{t_j} is independent of F_{t_j}. □

Definition 12.2 (Local martingale). A process Y_t is a local martingale if there exists a sequence of stopping times τ_n ↑ ∞, n ≥ 1, such that (Y_{min(t, τ_n)}, F_t) is a martingale for each n.

Proposition 12.3. We have the following.
(1) If X ∈ Π̄, that is, ∫_0^1 E(X_s²) ds < ∞, then (Y_t, F_t) is a martingale.
(2) If X ∈ Π, that is, ∫_0^1 X_s² ds < ∞ a.s., then (Y_t, F_t) is a local martingale.
(3) For deterministic f satisfying ∫_0^t f²(z) dz < ∞, Y_t = ∫_0^t f(s) dB_s

is a Gaussian process.

Proof. Y_t is trivially F_t-measurable, and E(|Y_t|) < ∞ since E(Y_t²) = ∫_0^t E(X_s²) ds < ∞ by assumption. For s ≤ t,
E(Y_t | F_s) = E(∫_0^s X_u dB_u + ∫_s^t X_u dB_u | F_s) = E(Y_s | F_s) + E(∫_s^t X_u dB_u | F_s) = Y_s,
by the previous lemma.

More carefully: assuming X ∈ Π̄, there exist step processes X^n with ∫_0^1 E(X_u^n − X_u)² du → 0 as n → ∞. Set Y_t^n = ∫_0^t X_u^n dB_u. Then
∫_s^t X_u dB_u = Y_t − Y_s = (Y_t − Y_t^n) + (Y_t^n − Y_s^n) + (Y_s^n − Y_s).
Since E(Y_t^n − Y_s^n | F_s) = 0 for each step process,
Z = E(∫_s^t X_u dB_u | F_s) = E(Y_t − Y_t^n | F_s) + E(Y_s^n − Y_s | F_s).
We only need to prove E(Z²) = 0, which implies Z = 0 almost surely. Indeed,
E(Z²) ≤ 2E(Y_t − Y_t^n)² + 2E(Y_s − Y_s^n)² → 0
by definition of Y_t^n. □

13. Lecture 13 - Tuesday 12 April

Theorem 13.1. Let f be continuous, with θ_j between B_{t_j} and B_{t_{j+1}}. Then
lim_{δ→0} Σ_j f(θ_j)(B_{t_{j+1}} − B_{t_j})² = ∫_0^t f(B_s) ds in probability.

Proof. Let Q_n = Σ_j |f(θ_j) − f(B_{t_j})| |B_{t_{j+1}} − B_{t_j}|². Note that Σ_j f(B_{t_j})(B_{t_{j+1}} − B_{t_j})² → ∫_0^t f(B_s) ds in probability, so it only remains to show that Q_n → 0 in probability. We have
Q_n ≤ max_{1≤j≤n} |f(θ_j) − f(B_{t_j})| Σ_j |B_{t_{j+1}} − B_{t_j}|² → 0 · t = 0
in probability, from the quadratic variation of the Brownian motion and the uniform continuity of f on compacts. □

Theorem 13.2. Let Y_t = ∫_0^t X_s dB_s be an Itô process. Then
lim_{δ→0} Σ_j (Y_{t_{j+1}} − Y_{t_j})² = ∫_0^1 X_s² ds in probability,
i.e. the quadratic variation of Y on [0, 1] is ∫_0^1 X_s² ds.

Proof. We only prove the case where ∫_0^1 E(X_s²) ds is finite. In this case, there exist step processes X^n with ∫_0^1 E(X_s^n − X_s)² ds → 0 as n → ∞; in fact, we can let X_s^n = Σ_j X_{t_j} 1_{[t_j, t_{j+1})}(s). Let Y_t^n = ∫_0^t X_s^n dB_s, so that
Y_{t_{j+1}}^n − Y_{t_j}^n = ∫_{t_j}^{t_{j+1}} X_s^n dB_s = X_{t_j}(B_{t_{j+1}} − B_{t_j}).
Therefore,
Σ_j (Y_{t_{j+1}} − Y_{t_j})² = Σ_j (Y_{t_{j+1}}^n − Y_{t_j}^n + Z_{1j} − Z_{2j})²,
where Z_{1j} = Y_{t_{j+1}} − Y_{t_{j+1}}^n and Z_{2j} = Y_{t_j} − Y_{t_j}^n. Continuing,
Σ_j (Y_{t_{j+1}} − Y_{t_j})² = Σ_j (Y_{t_{j+1}}^n − Y_{t_j}^n)² + error = Σ_j X_{t_j}²(B_{t_{j+1}} − B_{t_j})² + error → ∫_0^1 X_s² ds,
provided the error term goes to zero. Writing A_n = Σ_j (Y_{t_{j+1}}^n − Y_{t_j}^n)², the error R_n satisfies
|R_n| ≤ 2 Σ_j Z_{1j}² + 2 Σ_j Z_{2j}² + 2(Σ_j Z_{1j}²)^{1/2} A_n^{1/2} + 2(Σ_j Z_{2j}²)^{1/2} A_n^{1/2} + 2(Σ_j Z_{1j}²)^{1/2}(Σ_j Z_{2j}²)^{1/2},

and as Σ_j Z_{ij}² → 0 in probability, we have our result. □

Theorem 13.3 (Itô's first formula). If f is twice continuously differentiable, then for any t,
f(B_t) − f(B_s) = ∫_s^t f'(B_u) dB_u + (1/2) ∫_s^t f''(B_u) du.

Proof. Let s = t_0 < t_1 < ... < t_n = t. We have
f(B_t) − f(B_s) = Σ_j [f(B_{t_{j+1}}) − f(B_{t_j})].
Applying Taylor's expansion to f(B_{t_{j+1}}) − f(B_{t_j}) we get
f(B_{t_{j+1}}) − f(B_{t_j}) = f'(B_{t_j})(B_{t_{j+1}} − B_{t_j}) + (1/2) f''(θ_j)(B_{t_{j+1}} − B_{t_j})²,
with θ_j between B_{t_j} and B_{t_{j+1}}, and so
f(B_t) − f(B_s) = lim_{δ→0} Σ_j f'(B_{t_j})(B_{t_{j+1}} − B_{t_j}) + lim_{δ→0} (1/2) Σ_j f''(θ_j)(B_{t_{j+1}} − B_{t_j})²
= ∫_s^t f'(B_u) dB_u + (1/2) ∫_s^t f''(B_u) du,
using Theorem 13.1 for the second limit. □

14. Lecture 14 - Thursday 14 April

Definition 14.1 (Covariation). The covariation of two stochastic processes X_t, Y_t is defined as
[X, Y](t) = (1/4)([X + Y, X + Y](t) − [X − Y, X − Y](t)),
where [·, ·] is the quadratic variation previously defined.

Definition 14.2 (Stochastic differential equation). Let X_t be an Itô process,
X_t = X_a + ∫_a^t µ(s) ds + ∫_a^t σ(s) dB_s.
We write dX_t = µ(t) dt + σ(t) dB_t. By convention, we write dX_t dY_t = d[X, Y](t); in particular, (dY_t)² = d[Y, Y](t).

Theorem 14.3. Let Y_t be path-continuous, and let X_t have finite variation. Then [X, Y](t) = 0.
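For f(x) = x², Itô's first formula reads B_t² = 2 ∫_0^t B_u dB_u + t, and at the level of partial sums the identity B_1² = 2 Σ_j B_{t_j} ΔB_j + Σ_j (ΔB_j)² holds exactly by telescoping, with Σ_j (ΔB_j)² ≈ t playing the role of the (1/2) ∫ f''(B_u) du term. A small numerical check (our own sketch, not from the notes):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 100_000
dB = np.sqrt(1.0 / n) * rng.standard_normal(n)  # Brownian increments, mesh 1/n
B = np.concatenate([[0.0], np.cumsum(dB)])      # path at t_0 = 0, ..., t_n = 1

ito_sum = float(np.sum(B[:-1] * dB))  # left-endpoint sum for int_0^1 B dB
qv = float(np.sum(dB ** 2))           # quadratic variation, ~ t = 1
lhs = float(B[-1] ** 2)               # f(B_1) - f(B_0) with f(x) = x^2
print(lhs, 2 * ito_sum + qv)          # the two sides agree exactly
```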

Proof. We have
[X, Y](t) = (1/4)([X + Y, X + Y] − [X − Y, X − Y]) = lim_{δ→0} Σ_j (X_{t_{j+1}} − X_{t_j})(Y_{t_{j+1}} − Y_{t_j}) ≤ lim_{δ→0} max_j |Y_{t_{j+1}} − Y_{t_j}| Σ_j |X_{t_{j+1}} − X_{t_j}| = 0,
as Y_t is path-continuous (so max_j |Y_{t_{j+1}} − Y_{t_j}| → 0) and X_t has finite variation. □

Corollary. From this theorem, we then have dB_t dt = 0, (dt)² = 0, (dB_t)² = dt.

Corollary. For an Itô process X_t given above, we then have d[X, X](t) = dX_t dX_t = (µ(t) dt + σ(t) dB_t)² = σ²(t) dt.

Corollary. If f is twice continuously differentiable, then [f(B), B](t) = ∫_0^t f'(B_s) ds.

Proof. From Itô's formula, we have df(B_t) = f'(B_t) dB_t + (1/2) f''(B_t) dt. Then from the above,
d[f(B), B](t) = df(B_t) dB_t = f'(B_t)(dB_t)² = f'(B_t) dt. □

Theorem 14.4 (Itô's lemma). Let dX_t = µ(t) dt + σ(t) dB_t. By Taylor's theorem,
df(t, X_t) = ∂f/∂t (t, X_t) dt + ∂f/∂x (t, X_t) dX_t + (1/2) ∂²f/∂x² (t, X_t) (dX_t)²
= ∂f/∂t dt + ∂f/∂x (µ(t) dt + σ(t) dB_t) + (1/2) ∂²f/∂x² σ²(t) dt
= (∂f/∂t + µ(t) ∂f/∂x + (1/2) σ²(t) ∂²f/∂x²) dt + σ(t) ∂f/∂x dB_t,
since the mixed terms dX_t dt and (dt)² vanish.

Example 14.5. Let $f(B_t) = e^{B_t}$. Then
$$df(B_t) = e^{B_t}\,dB_t + \frac{1}{2} e^{B_t}\,dt.$$

Theorem 14.6. Assume $\int_0^T f^2(t)\,dt < \infty$. Let
$$X_t = \int_0^t f(s)\,dB_s - \frac{1}{2}\int_0^t f^2(s)\,ds.$$
Then $Y_t = e^{X_t}$ is a martingale with respect to $\mathcal{F}_t = \sigma(B_s, s \le t)$.

Proof. We have
$$dY_t = e^{X_t}\,dX_t + \frac{1}{2} e^{X_t}\,(dX_t)^2 = e^{X_t}\Big[f(t)\,dB_t - \frac{1}{2} f^2(t)\,dt\Big] + \frac{1}{2} e^{X_t} f^2(t)\,dt = e^{X_t} f(t)\,dB_t,$$
and thus
$$Y_t = 1 + \int_0^t e^{X_s} f(s)\,dB_s,$$
which is a martingale from previous work.

15. Lecture 15 - Tuesday 19 March

Theorem 15.1 (Multivariate Itô's formula). Let $B_1(t), \dots, B_m(t)$ be independent Brownian motions. Consider the Itô processes $X_t^i$, $i = 1, \dots, n$, with
$$dX_t^i = b_i(t)\,dt + \sum_{k=1}^m \sigma_{ki}(t)\,dB_k(t).$$
Suppose $f(t, x_1, \dots, x_n)$ is a continuous function of its arguments and has continuous partial derivatives $\frac{\partial f}{\partial t}$, $\frac{\partial f}{\partial x_i}$, $\frac{\partial^2 f}{\partial x_i \partial x_j}$. Then
$$df(t, X_t^1, \dots, X_t^n) = \frac{\partial f}{\partial t}\,dt + \sum_{i=1}^n \frac{\partial f}{\partial x_i}\,dX_t^i + \frac{1}{2}\sum_{i,j} \frac{\partial^2 f}{\partial x_i \partial x_j}\,dX_t^i\,dX_t^j,$$
where
$$dX_t^i\,dX_t^j = \sum_{k,s=1}^m \sigma_{ki}\sigma_{sj}\,dB_k(t)\,dB_s(t) = \sum_{k=1}^m \sigma_{ki}\sigma_{kj}\,dt,$$
as $d[B_i, B_j](t) = 0$ when $B_i, B_j$ are independent ($i \ne j$).

Example 15.2. Let
$$dX_t = \mu_1(t)\,dt + \sigma_1(t)\,dB_1(t), \qquad dY_t = \mu_2(t)\,dt + \sigma_2(t)\,dB_2(t)$$

with $B_1, B_2$ independent. Then by independence $d[X, Y](t) = dX_t\,dY_t = 0$, and by the product rule,
$$d(X_t Y_t) = Y_t\,dX_t + X_t\,dY_t + d[X, Y](t) = Y_t\,dX_t + X_t\,dY_t.$$

Example 15.3. Let
$$dX_t = \mu_1(t)\,dt + \sigma_1(t)\,dB_1(t), \qquad dY_t = \mu_2(t)\,dt + \sigma_2(t)\,dB_1(t),$$
both driven by the same Brownian motion $B_1$. Then
$$d(X_t Y_t) = Y_t\,dX_t + X_t\,dY_t + \sigma_1(t)\sigma_2(t)\,dt.$$
In particular, if $\mu_1 = \mu_2 = 0$, then
$$X_t Y_t = X_0 Y_0 + \int_0^t \big[\sigma_1(s) Y_s + \sigma_2(s) X_s\big]\,dB_s + \int_0^t \sigma_1(s)\sigma_2(s)\,ds.$$
Thus $Z_t = X_t Y_t - \int_0^t \sigma_1(s)\sigma_2(s)\,ds$ is a martingale.

Theorem 15.4 (Tanaka's formula). We have
$$|B_t - a| = |a| + \int_0^t \operatorname{sgn}(B_s - a)\,dB_s + L(t, a),$$
where
$$L(t, a) = \lim_{\epsilon \downarrow 0} \frac{1}{2\epsilon} \int_0^t 1_{\{|B_s - a| \le \epsilon\}}\,ds$$
is the local time of $B$ at level $a$.

Proof. Let
$$f_\epsilon(x) = \begin{cases} |x - a|, & |x - a| > \epsilon, \\[2pt] \dfrac{\epsilon}{2} + \dfrac{(x - a)^2}{2\epsilon}, & |x - a| \le \epsilon. \end{cases}$$
Then we have
$$f'_\epsilon(x) = \begin{cases} 1, & x > a + \epsilon, \\ \dfrac{x - a}{\epsilon}, & |x - a| \le \epsilon, \\ -1, & x < a - \epsilon, \end{cases} \qquad\qquad f''_\epsilon(x) = \begin{cases} 0, & |x - a| > \epsilon, \\ \dfrac{1}{\epsilon}, & |x - a| \le \epsilon. \end{cases}$$
Then by Itô's formula, we have
$$f_\epsilon(B_t) = f_\epsilon(0) + \int_0^t f'_\epsilon(B_s)\,dB_s + \frac{1}{2}\int_0^t f''_\epsilon(B_s)\,ds$$

and
$$\frac{1}{2}\int_0^t f''_\epsilon(B_s)\,ds = \frac{1}{2\epsilon}\int_0^t 1_{\{|B_s - a| \le \epsilon\}}\,ds \to L(t, a) \qquad \text{as } \epsilon \to 0.$$
Obviously $f_\epsilon(0) \to |a|$. Note that
$$\int_0^t \big(f'_\epsilon(B_s) - \operatorname{sgn}(B_s - a)\big)^2\,ds = \int_0^t \Big(\frac{B_s - a}{\epsilon} - \operatorname{sgn}(B_s - a)\Big)^2 1_{\{|B_s - a| \le \epsilon\}}\,ds \to 0 \quad \text{a.s.},$$
so $\int_0^t f'_\epsilon(B_s)\,dB_s \to \int_0^t \operatorname{sgn}(B_s - a)\,dB_s$.

Theorem 15.5.
$$L(t, a) = \int_0^t \delta_a(B_s)\,ds.$$

Theorem 15.6 (occupation time formula). If $f$ is integrable on $\mathbb{R}$, then
$$\int_{\mathbb{R}} L(t, s) f(s)\,ds = \int_0^t f(B_s)\,ds.$$

16. Lecture 16 - Thursday 21 March

Definition 16.1 (Linear cointegration). Consider two non-stationary time series $X_t$ and $Y_t$. If there exist coefficients $\alpha$ and $\beta$ such that $\alpha X_t + \beta Y_t = u_t$ with $u_t$ stationary, then we say that $X_t$ and $Y_t$ are cointegrated.

Definition 16.2 (Nonlinear cointegration). $X_t$ and $Y_t$ are nonlinearly cointegrated if $Y_t - f(X_t) = u_t$ is stationary, with $f(\cdot)$ a nonlinear function.

17. Lecture 17 - Tuesday 3 May

17.1. Stochastic integrals for martingales. We now seek to define stochastic integrals with respect to processes other than Brownian motion.

Example 17.1. Let $X_t = \int_0^t Y_s\,dB_s$, and thus $dX_s = Y_s\,dB_s$. Then
$$Z_t = \int_0^t Y_s\,dX_s = \int_0^t Y_s^2\,dB_s.$$

Reference: Revuz and Yor, Continuous Martingales and Brownian Motion.

Definition 17.2 (Martingale). A martingale $M_t$ with respect to a filtration $\mathcal{F}_t$ satisfies:
(1) $M_t$ is adapted to $\mathcal{F}_t$.
(2) $E(|M_t|) < \infty$.

(3) $E(M_t \mid \mathcal{F}_s) = M_s$ a.s. for $s \le t$.
A process is a submartingale if (3) is replaced with $E(M_t \mid \mathcal{F}_s) \ge M_s$. A process is a supermartingale if (3) is replaced with $E(M_t \mid \mathcal{F}_s) \le M_s$.

Example 17.3. Let $N_t$ be a Poisson process with intensity $\lambda$. Then
(1) $N_t$ is a submartingale with respect to its natural filtration.
(2) $N_t - \lambda t$ is a martingale with respect to the natural filtration.

Theorem 17.4. If $\int_0^t E(H^2(s))\,ds < \infty$ and $H(s)$ is adapted to $\mathcal{F}_s = \sigma(B_u, u \le s)$, then
$$Y_t = \int_0^t H(s)\,dB_s, \qquad t \ge 0,$$
is a continuous, square integrable martingale, that is, $E(Y_t^2) < \infty$.

Theorem 17.5 (Martingale representation). Let $M_t$ be a continuous, square integrable martingale with respect to $\mathcal{F}_t$. Then there exists an adapted process $H(s)$ such that $\int_0^t E(H^2(s))\,ds < \infty$ and
$$M_t = M_0 + \int_0^t H(s)\,dB_s,$$
where $B_t$ is a Brownian motion with respect to $\mathcal{F}_t$.

Theorem 17.6 (Lévy characterization). $M_t$, $t \ge 0$, is a Brownian motion under a probability measure $Q$ if and only if it is a continuous local martingale under $Q$ with $[M, M](t) = t$ for all $t \ge 0$.

Proof (sketch). A continuous local martingale is of the form $M_t = M_0 + \int_0^t H(s)\,dB_s$. Then
$$[M, M](t) = \int_0^t H^2(s)\,ds = t \implies H^2(s) = 1 \text{ a.s.} \implies M_t - M_0 = \int_0^t H(s)\,dB_s \text{ is a Brownian motion.}$$

Theorem 17.7 (Change of time). Let $M_t$, $t \ge 0$, be a continuous local martingale such that $[M, M](t) \to \infty$. Let $\tau_t = \inf\{s : [M, M](s) > t\}$. Then $M(\tau_t)$ is a Brownian motion. Moreover, $M(t) = B([M, M](t))$, $t \ge 0$. This is an application of the change of time method.

Example 17.8. $B_t$ is a Brownian motion, and then $Y_t = B_t^2 - t$ is a martingale. We have $dY_t = H(s)\,dB_s$ with $H(s) = 2B_s$. Thus,
$$B_t^2 - t = 2\int_0^t B_s\,dB_s.$$
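A quick simulation supporting Example 17.3 (my sketch, not from the notes): the compensated process $\tilde N_t = N_t - \lambda t$ has mean $0$, and its variance is $\lambda t$, consistent with $N_t - \lambda t$ being a martingale started at $0$.

```python
import numpy as np

rng = np.random.default_rng(5)
lam, t, n_paths = 2.0, 1.0, 200_000

N_t = rng.poisson(lam * t, n_paths)   # N_t ~ Poisson(lambda * t)
comp = N_t - lam * t                  # compensated process at time t

print(comp.mean(), comp.var())        # approximately 0 and lambda * t
```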

Definition 17.9 (Predictability of a stochastic process). A stochastic process $X_t$, $t \ge 0$, is said to be predictable with respect to $\mathcal{F}_t$ if $X_t \in \mathcal{F}_{t-}$ for all $t$, where
$$\mathcal{F}_{t+} = \bigcap_{h > 0} \mathcal{F}_{t+h}, \qquad \mathcal{F}_{t-} = \sigma\Big(\bigcup_{h > 0} \mathcal{F}_{t-h}\Big).$$

Example 17.10. Let $g_t$ be the step process
$$g_t = \sum_{j} \zeta_j 1_{[t_j, t_{j+1})}(t), \qquad \zeta_j \in \mathcal{F}_{t_j}.$$
Then $g_t$ is not predictable. Let instead
$$g_t = \sum_{j} \zeta_j 1_{(t_j, t_{j+1}]}(t).$$
Then $g_t$ is predictable (it is left continuous).

Example 17.11. Let $N_t$ be a Poisson process. Then $N_{t-}$ is predictable, but $N_t$ is not predictable.

From now on, assume $M_t$, $t \ge 0$, is a right continuous, square integrable martingale with left hand limits.

Lemma 17.12. $M_t^2$ is a submartingale.

Proof. Using $E(M_t - M_s \mid \mathcal{F}_s) = 0$,
$$E(M_t^2 \mid \mathcal{F}_s) = E\big(M_s^2 + 2M_s(M_t - M_s) + (M_t - M_s)^2 \mid \mathcal{F}_s\big) = M_s^2 + E\big((M_t - M_s)^2 \mid \mathcal{F}_s\big) \ge M_s^2.$$

Theorem 17.13 (Doob-Meyer decomposition). By Doob-Meyer we can write
$$M_t^2 = L_t + A_t,$$
where $L_t$ is a martingale, and $A_t$ is a predictable, right continuous, increasing process such that $A_0 = 0$ and $E(A_t) < \infty$ for all $t$. $A_t$ is called the compensator of $M_t^2$, and is denoted by $\langle M, M \rangle(t)$.

Example 17.14. Consider $B_t^2 = (B_t^2 - t) + t$. Then $\langle B, B \rangle(t) = t = [B, B](t)$.

Theorem 17.15. If $M_t$ is continuous then $[M, M](t) = \langle M, M \rangle(t)$.
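Example 17.14 can be illustrated numerically (an illustrative sketch of mine): pathwise, the quadratic variation $\sum_j (\Delta B_j)^2$ concentrates at $t$, and across paths the compensated process $B_t^2 - t$ has mean zero.

```python
import numpy as np

rng = np.random.default_rng(1)
n_steps, n_paths, t = 500, 20_000, 1.0
dt = t / n_steps

dB = rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps))
qv = (dB ** 2).sum(axis=1)       # pathwise quadratic variation [B, B](t)
B_t = dB.sum(axis=1)             # terminal values B_t

print(qv.mean(), qv.std())       # mean near t, small spread
print((B_t ** 2 - t).mean())     # compensator: E(B_t^2 - t) = 0
```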

Example 17.16. Let $N_t$ be a Poisson process with intensity $\lambda$. Then we know $\tilde N_t = N_t - \lambda t$ is a martingale. We may prove $\tilde N_t^2 - \lambda t$ is a martingale, that is, $\langle \tilde N, \tilde N \rangle(t) = \lambda t$. However, since the jumps of $\tilde N$ all have size 1,
$$[\tilde N, \tilde N](t) = N_t = \lambda t + \tilde N_t \ne \langle \tilde N, \tilde N \rangle(t).$$

Example 17.17. $X_t = \int_0^t f(s)\,dB_s$ is a continuous martingale. Thus
$$[X, X](t) = \int_0^t f^2(s)\,ds = \langle X, X \rangle(t).$$

Theorem 17.18. If $M_t$ is a continuous, square integrable martingale, then $M_t^2 - [M, M](t)$ is a martingale, and so
$$[M, M](t) = \langle M, M \rangle(t) + \text{martingale},$$
which implies (for $M_0 = 0$)
$$E[M, M](t) = E\langle M, M \rangle(t) = E(M_t^2).$$

We now turn to defining integrals such as
$$\int_0^t X_s\,dM_s,$$
where $M_s$ is a martingale.

Definition 17.19. Let $L^2_{\text{pred}}$ be the space of all predictable stochastic processes $X_s$ satisfying the condition
$$E\int_0^t X_s^2\,d\langle M, M \rangle(s) < \infty.$$
Then the integral $\int_0^t X_s\,dM_s$ is defined as before in two steps.
(1) If $X_s \in L^2_{\text{pred}}$ is a step process, $X_s = \sum_{j=1}^n \zeta_j 1_{(t_j, t_{j+1}]}(s)$, define
$$I(X) = \sum_j \zeta_j (M_{t_{j+1}} - M_{t_j}).$$
(2) For all $X_s \in L^2_{\text{pred}}$, there exists a sequence of step processes $X^n_s$ such that $X^n_s \to X_s$ in $L^2$. Define $I(X)$ as the limit, in the sense that $E(I(X) - I(X^n))^2 \to 0$.

Proposition 17.20. Properties of the integral.

(1) If $M_s$ is a (local) martingale, then
$$\int_0^t f(s)\,dM_s$$
is a (local) martingale.

Proof idea: $E\big(\int_s^t f(u)\,dM_u \mid \mathcal{F}_s\big) = 0$.

(2) If $M_t$ is a square integrable martingale and $f$ satisfies
$$E\Big(\int_0^t f^2(s)\,d\langle M, M \rangle(s)\Big) < \infty,$$
then $I(f) = \int_0^t f(s)\,dM_s$ is square integrable with $E(I(f)) = 0$, and
$$E(I^2(f)) = E\int_0^t f^2(s)\,d\langle M, M \rangle(s).$$
In particular, if $M(s) = \int_0^s \sigma(u)\,dB_u$, then
$$\int_0^t X_s\,dM_s = \int_0^t X_s \sigma(s)\,dB_s,$$
provided $E\int_0^t X_s^2 \sigma^2(s)\,ds < \infty$ and $\int_0^t \sigma^2(s)\,ds < \infty$.

(3) If $X_t = \int_0^t f(s)\,dM_s$ and $M_s$ is a continuous, square integrable martingale, then
$$[X, X](t) = \int_0^t f^2(s)\,d[M, M](s) = \langle X, X \rangle(t).$$

18. Lecture 18 - Tuesday 10 May

Let $M_t$ be a martingale. Then $M_t^2 - [M, M](t)$ is a martingale, and
$$M_t^2 = \text{martingale} + \langle M, M \rangle(t).$$
Recall that if $M_t$ is a continuous square integrable martingale, then $\langle M, M \rangle(t) = [M, M](t)$. Generally speaking,
$$[M, M](t) = \langle M, M \rangle(t) + \text{martingale}.$$

Theorem 18.1. If
$$\int_0^t f^2(s)\,d\langle M, M \rangle(s) < \infty$$

a.s., then
$$Y_t = \int_0^t f(s)\,dM_s$$
is well defined, with
$$E(Y_t^2) = E\int_0^t f^2(s)\,d\langle M, M \rangle(s) = E\int_0^t f^2(s)\,d[M, M](s),$$
and
$$\langle Y, Y \rangle(t) = [Y, Y](t) = \int_0^t f^2(s)\,d[M, M](s) \qquad \text{if } M_t \text{ is continuous.}$$

Proof. By Itô's formula, we have $dY_t = f(t)\,dM_t$. Hence
$$d[Y, Y](t) = dY_t\,dY_t = f^2(t)\,dM_t\,dM_t = f^2(t)\,d[M, M](t), \qquad [Y, Y](t) = \int_0^t f^2(s)\,d[M, M](s).$$
Again by Itô's formula,
$$Y_t^2 = Y_0^2 + 2\int_0^t Y_s\,dY_s + [Y, Y](t) = 2\int_0^t Y_s f(s)\,dM_s + [Y, Y](t).$$
Since $\int_0^t Y_s f(s)\,dM_s$ is a martingale, $Y_t^2 - [Y, Y](t)$ is a martingale, and therefore
$$\langle Y, Y \rangle(t) = Y_t^2 - 2\int_0^t Y_s f(s)\,dM_s = [Y, Y](t).$$

18.1. Itô's integration for continuous semimartingales.

Definition 18.2. $(X_t, \mathcal{F}_t)$ is a continuous semimartingale if
$$X_t = M_t + A_t,$$
where $M_t$ is a martingale and $A_t$ is a continuous adapted process of bounded variation ($\lim_{\delta \to 0} \sum_j |A_{t_{j+1}} - A_{t_j}| < \infty$).
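The isometry $E(Y_t^2) = E\int_0^t f^2(s)\,d\langle M, M\rangle(s)$ of Theorem 18.1 (and property (2) above) can be checked numerically for $M = B$ (my sketch, not from the notes): with $f(s) = s$ on $[0, 1]$, $E\big(\int_0^1 s\,dB_s\big)^2 = \int_0^1 s^2\,ds = 1/3$.

```python
import numpy as np

rng = np.random.default_rng(7)
n_steps, n_paths, T = 500, 20_000, 1.0
dt = T / n_steps
s = np.linspace(0.0, T, n_steps, endpoint=False)   # left endpoints of the grid

dB = rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps))
I = (s * dB).sum(axis=1)        # Ito integral of f(s) = s against B, per path

print(I.mean(), (I ** 2).mean())   # approximately 0 and 1/3
```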

Definition 18.3 (Integrals for semimartingales).
$$\int_0^t c_s\,dX_s = \int_0^t c_s\,dM_s + \int_0^t c_s\,dA_s,$$
and since $A_t$ has bounded variation, the second integral is defined in the Riemann-Stieltjes (R.S.) sense.

Theorem 18.4 (Itô's formula). Let $X_t$ be a continuous semimartingale. Let $f(x)$ be twice continuously differentiable. Then
$$f(X_t) = f(X_0) + \int_0^t f'(X_s)\,dX_s + \frac{1}{2}\int_0^t f''(X_s)\,d[X, X](s).$$
Proof. Partition the interval $[0, t]$, and use a Taylor expansion to express
$$f(x) = f(x_0) + f'(x_0)(x - x_0) + \frac{1}{2} f''(x_0)(x - x_0)^2 + \dots$$

18.2. Stochastic differential equations. Consider the equation
$$dX_t = \mu(t, X_t)\,dt + \sigma(t, X_t)\,dB_t.$$
We seek to solve for a function $f(t, x)$ such that $X_t = f(t, B_t)$. Such an $f(t, B_t)$ is a solution to the stochastic differential equation.

Definition 18.5 (Strong solution).
$$X_t = X_0 + \int_0^t \mu(s, X_s)\,ds + \int_0^t \sigma(s, X_s)\,dB_s.$$

19. Lecture 19 - Thursday 12 May

Theorem 19.1 (Existence and uniqueness). Let
$$dX_t = a(t, X_t)\,dt + b(t, X_t)\,dB_t.$$
Assume $E(X_0^2) < \infty$, $X_0$ is independent of $(B_s)$, and there exists a constant $C > 0$ such that
(1) $|a(t, x)| + |b(t, x)| \le C(1 + |x|)$ (linear growth);
(2) $a(t, x)$, $b(t, x)$ satisfy the Lipschitz condition in $x$, i.e.
$$|a(t, x) - a(t, y)| + |b(t, x) - b(t, y)| \le C|x - y|$$
for all $t \in (0, T)$.
Then there exists a unique (strong) solution.

Example 19.2. Let
$$dX_t = c_1 X_t\,dt + c_2 X_t\,dB_t,$$
with $c_1, c_2$ constants (geometric Brownian motion).
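For Example 19.2, a sketch of mine: the SDE $dX_t = c_1 X_t\,dt + c_2 X_t\,dB_t$ has the explicit solution $X_t = X_0 \exp\big((c_1 - \tfrac{1}{2}c_2^2)t + c_2 B_t\big)$ (apply Itô's formula to $\log X_t$), so $E(X_t) = X_0 e^{c_1 t}$, which Monte Carlo confirms.

```python
import numpy as np

rng = np.random.default_rng(2)
c1, c2, X0, t, n = 0.1, 0.3, 1.0, 1.0, 200_000

B_t = rng.normal(0.0, np.sqrt(t), n)
X_t = X0 * np.exp((c1 - 0.5 * c2 ** 2) * t + c2 * B_t)   # exact GBM solution

print(X_t.mean(), X0 * np.exp(c1 * t))   # sample mean vs E(X_t) = X0 e^{c1 t}
```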

Example 19.3. Let
$$dX_t = [c_1(t) X_t + c_2(t)]\,dt + [\sigma_1(t) X_t + \sigma_2(t)]\,dB_t,$$
so that $a(t, x) = c_1(t)x + c_2(t)$ and $b(t, x) = \sigma_1(t)x + \sigma_2(t)$ satisfy the conditions of Theorem 19.1. Just follow Kuo p. 233. Let
$$H_t = e^{-Y_t}, \qquad Y_t = \int_0^t \sigma_1(s)\,dB_s + \int_0^t \Big(c_1(s) - \frac{1}{2}\sigma_1^2(s)\Big)\,ds.$$
Then by the Itô product formula, we have
$$d(H_t X_t) = H_t\big(dX_t - c_1(t) X_t\,dt - \sigma_1(t) X_t\,dB_t - \sigma_1(t)\sigma_2(t)\,dt\big).$$
Then by the definition of $X_t$, we obtain
$$d(H_t X_t) = H_t\big(\sigma_2(t)\,dB_t + c_2(t)\,dt - \sigma_1(t)\sigma_2(t)\,dt\big),$$
which can be integrated to yield
$$H_t X_t = X_0 + \int_0^t H_s \sigma_2(s)\,dB_s + \int_0^t H_s\big(c_2(s) - \sigma_1(s)\sigma_2(s)\big)\,ds.$$
Dividing both sides by $H_t$ we obtain our solution $X_t$.

Theorem 19.4. The solution to the linear stochastic differential equation
$$dX_t = [c_1(t) X_t + c_2(t)]\,dt + [\sigma_1(t) X_t + \sigma_2(t)]\,dB_t, \qquad X_0 = C,$$
is given by
$$X_t = C e^{Y_t} + \int_0^t e^{Y_t - Y_s} \sigma_2(s)\,dB_s + \int_0^t e^{Y_t - Y_s}\big(c_2(s) - \sigma_1(s)\sigma_2(s)\big)\,ds,$$
where
$$Y_t = \int_0^t \sigma_1(s)\,dB_s + \int_0^t \Big(c_1(s) - \frac{1}{2}\sigma_1^2(s)\Big)\,ds.$$

20. Lecture 20 - Tuesday 17 May

20.1. Numerical methods for stochastic differential equations.

Theorem 20.1 (Euler scheme). For the stochastic differential equation $dX_t = a(X_t)\,dt + b(X_t)\,dB_t$, we simulate $X_t$ on a grid $t_0 < t_1 < \dots$ according to
$$X_{t_j} = X_{t_{j-1}} + a(X_{t_{j-1}})\,\Delta t_j + b(X_{t_{j-1}})\,\Delta B_{t_j},$$
where $\Delta t_j = t_j - t_{j-1}$ and $\Delta B_{t_j} = B_{t_j} - B_{t_{j-1}}$.

Theorem 20.2 (Milstein scheme). For the stochastic differential equation $dX_t = a(X_t)\,dt + b(X_t)\,dB_t$,

we simulate $X_t$ according to
$$X_{t_j} = X_{t_{j-1}} + a(X_{t_{j-1}})\,\Delta t_j + b(X_{t_{j-1}})\,\Delta B_{t_j} + \frac{1}{2}\,b(X_{t_{j-1}})\,b'(X_{t_{j-1}})\big((\Delta B_{t_j})^2 - \Delta t_j\big).$$

20.2. Applications to mathematical finance.

20.3. Martingale method. Consider a market with risky security $S_t$ and riskless security $\beta_t$.

Definition 20.3 (Contingent claim). A random variable $C_T : \Omega \to \mathbb{R}$ that is $\mathcal{F}_T$-measurable is called a contingent claim. If $C_T$ is $\sigma(S_T)$-measurable it is path-independent.

Definition 20.4 (Strategy). Let $a_t$ represent the number of units of $S_t$ held, and $b_t$ the number of units of $\beta_t$. If $a_t, b_t$ are $\mathcal{F}_t$-adapted, then $(a_t, b_t)$ is a strategy in our market model. Our strategy value $V_t$ at time $t$ is
$$V_t = a_t S_t + b_t \beta_t.$$

Definition 20.5 (Self-financing strategy). A strategy $(a_t, b_t)$ is self-financing if
$$dV_t = a_t\,dS_t + b_t\,d\beta_t.$$
The intuition is that we make one investment at $t = 0$, and after that only rebalance between $S_t$ and $\beta_t$.

Definition 20.6 (Admissible strategy). $(a_t, b_t)$ is an admissible strategy if it is self-financing and $V_t \ge 0$ for all $0 \le t \le T$.

Definition 20.7 (Arbitrage). An arbitrage is an admissible strategy such that $V_0 = 0$, $V_T \ge 0$ and $P(V_T > 0) > 0$. Alternatively, an arbitrage is a trading strategy with $V_0 = 0$, $V_T \ge 0$ and $E(V_T) > 0$.

Definition 20.8 (Attainable claim). A contingent claim $C_T$ is said to be attainable if there exists an admissible strategy $(a_t, b_t)$ such that $V_T = C_T$. In this case, the portfolio is said to replicate the claim. By the law of one price, $C_t = V_t$ at all $t$.

Definition 20.9 (Complete market). The market is said to be complete if every contingent claim is attainable.

Theorem 20.10 (Harrison and Pliska). Let $P$ denote the real world measure of the underlying asset price $S_t$. If the market is arbitrage free, there exists an equivalent measure $P^*$ such that the discounted asset price $\hat S_t$ and every discounted attainable claim $\hat C_t$ are $P^*$-martingales. Further, if the market is complete, then $P^*$ is unique. In mathematical terms,
$$C_t = \beta_t\,E^*\big(\beta_T^{-1} C_T \mid \mathcal{F}_t\big).$$
$P^*$ is called the equivalent martingale measure (EMM) or the risk-neutral measure.
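The numerical schemes of Theorems 20.1 and 20.2 can be compared on geometric Brownian motion $dX_t = aX_t\,dt + bX_t\,dB_t$, whose exact solution $X_T = X_0\exp\big((a - \tfrac{1}{2}b^2)T + bB_T\big)$ is available. This sketch (mine, not from the notes; parameter values are illustrative) drives both schemes with the same Brownian increments, so the Milstein strong error should be visibly smaller.

```python
import numpy as np

rng = np.random.default_rng(8)
a, b, X0, T = 0.1, 0.4, 1.0, 1.0
n_steps, n_paths = 100, 2_000
dt = T / n_steps

dB = rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps))
exact = X0 * np.exp((a - 0.5 * b ** 2) * T + b * dB.sum(axis=1))

eul = np.full(n_paths, X0)
mil = np.full(n_paths, X0)
for j in range(n_steps):
    dBj = dB[:, j]
    eul = eul + a * eul * dt + b * eul * dBj
    # Milstein adds (1/2) b(X) b'(X) ((dB)^2 - dt); here b(x) = b*x, so b'(x) = b
    mil = mil + a * mil * dt + b * mil * dBj + 0.5 * b ** 2 * mil * (dBj ** 2 - dt)

err_eul = np.mean(np.abs(eul - exact))
err_mil = np.mean(np.abs(mil - exact))
print(err_eul, err_mil)   # Milstein's strong error is the smaller of the two
```

This reflects the standard strong convergence orders: $1/2$ for Euler, $1$ for Milstein.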

21. Lecture 21 - Thursday 19 May

For a trading strategy $(a_t, b_t)$, the value $V_t$ of a self-financing portfolio satisfies
$$V_t = V_0 + \int_0^t a_s\,dS_s + \int_0^t b_s\,d\beta_s,$$
where $\beta_t$ is the riskless asset.

To price an attainable claim $X$, let $(a_t, b_t)$ be a trading strategy with value $V_t$ that replicates $X$. Then to avoid arbitrage, the value of $X$ at time $t = 0$ is given by $V_0$.

21.1. Change of Measure. Let $(\Omega, \mathcal{F}, P)$ be a probability space.

Definition 21.1 (Equivalent measure). Let $P$ and $Q$ be measures on $(\Omega, \mathcal{F})$. If for every $A \in \mathcal{F}$, $P(A) = 0$ implies $Q(A) = 0$, we say $Q$ is absolutely continuous with respect to $P$ and write $Q \ll P$. If moreover $P(A) = 0$ if and only if $Q(A) = 0$, we say the measures $P$ and $Q$ are equivalent.

Theorem 21.2 (Radon-Nikodym). Let $Q \ll P$. Then there exists a random variable $\lambda$ such that $\lambda \ge 0$, $E_P(\lambda) = 1$ and
$$Q(A) = \int_A \lambda\,dP = E_P(\lambda 1_A)$$
for any $A \in \mathcal{F}$; $\lambda$ is $P$-almost surely unique. Conversely, if there exists $\lambda$ such that $\lambda \ge 0$, $E_P(\lambda) = 1$, then defining
$$Q(A) = \int_A \lambda\,dP,$$
$Q$ is a probability measure and $Q \ll P$. Consequently, if $Q \ll P$, then
$$E_Q(Z) = E_P(\lambda Z)$$
whenever $E_Q(|Z|) < \infty$. The random variable $\lambda$ is called the density of $Q$ with respect to $P$, and is denoted by
$$\lambda = \frac{dQ}{dP}.$$

Example 21.3. Let $X \sim N(0, 1)$ and $Y = X + \mu \sim N(\mu, 1)$ under the probability $P$. Then there exists a $Q$ equivalent to $P$ such that $Y \sim N(0, 1)$ under $Q$.

Proof. We have
$$P(X \in A) = \frac{1}{\sqrt{2\pi}} \int_A e^{-t^2/2}\,dt.$$
Define
$$Q(X \in A) = \int_{\{X \in A\}} e^{-\mu X - \mu^2/2}\,dP = \frac{1}{\sqrt{2\pi}} \int_A e^{-\mu x - \mu^2/2}\,e^{-x^2/2}\,dx = \frac{1}{\sqrt{2\pi}} \int_A e^{-(\mu + x)^2/2}\,dx.$$
Then $Q \ll P$, $P \ll Q$, and
$$\lambda = \frac{dQ}{dP} = e^{-\mu X - \mu^2/2}$$
satisfies the conditions of the Radon-Nikodym theorem. Under $Q$, $X \sim N(-\mu, 1)$, so $Y = X + \mu \sim N(0, 1)$; for instance,
$$E_Q(Y) = E_P((X + \mu)\lambda) = \frac{1}{\sqrt{2\pi}} \int_{\mathbb{R}} (x + \mu)\,e^{-(\mu + x)^2/2}\,dx = 0.$$

22. Lecture 22 - Tuesday 24 May

Theorem 22.1. Let $\lambda(t)$, $0 \le t \le T$, be a positive martingale with respect to $\mathcal{F}_t$ such that $E_P(\lambda(T)) = 1$. Define a new probability measure $Q$ by
$$Q(A) = \int_A \lambda(T)\,dP.$$
Then $Q \ll P$, and for any random variable $X$ we have $E_Q(X) = E_P(\lambda(T) X)$ and
$$E_Q(X \mid \mathcal{F}_t) = \frac{E_P(\lambda(T) X \mid \mathcal{F}_t)}{E_P(\lambda(T) \mid \mathcal{F}_t)} = \frac{E_P(\lambda(T) X \mid \mathcal{F}_t)}{\lambda(t)} \quad \text{a.s.} \tag{*}$$
If $X \in \mathcal{F}_t$, then for any $s \le t$, we have
$$E_Q(X \mid \mathcal{F}_s) = E_P\Big(\frac{\lambda(t) X}{\lambda(s)} \,\Big|\, \mathcal{F}_s\Big). \tag{**}$$

Consequently, a process $S(t)$ is a $Q$-martingale if and only if $S(t)\lambda(t)$ is a $P$-martingale. $(***)$

Proof. $(*)$ First note that $E_P(\lambda(T) \mid \mathcal{F}_t) = \lambda(t)$, since $\lambda$ is a $P$-martingale, and that the denominator is nonzero $Q$-a.s.:
$$Q\big(E_P(\lambda(T) \mid \mathcal{F}_t) = 0\big) = E_P\big(\lambda(T)\,1_{\{E_P(\lambda(T) \mid \mathcal{F}_t) = 0\}}\big) = 0.$$
Write $Z = E_P(\lambda(T) X \mid \mathcal{F}_t)/\lambda(t)$; $Z$ is $\mathcal{F}_t$-measurable, and for any $A \in \mathcal{F}_t$,
$$E_Q(Z 1_A) = E_P(\lambda(T) Z 1_A) = E_P\big(E_P(\lambda(T) \mid \mathcal{F}_t)\,Z 1_A\big) = E_P\big(E_P(\lambda(T) X \mid \mathcal{F}_t)\,1_A\big) = E_P(\lambda(T) X 1_A) = E_Q(X 1_A),$$
so $Z = E_Q(X \mid \mathcal{F}_t)$.

$(**)$ For $X \in \mathcal{F}_t$ and $s \le t$,
$$E_P\Big(\frac{\lambda(t) X}{\lambda(s)} \,\Big|\, \mathcal{F}_s\Big) = \frac{1}{\lambda(s)} E_P(\lambda(t) X \mid \mathcal{F}_s) = \frac{1}{\lambda(s)} E_P\big(E_P(\lambda(T) X \mid \mathcal{F}_t) \mid \mathcal{F}_s\big) = \frac{1}{\lambda(s)} E_P(\lambda(T) X \mid \mathcal{F}_s) = E_Q(X \mid \mathcal{F}_s),$$
because of $(*)$.

$(***)$ By $(**)$,
$$E_Q(S(t) \mid \mathcal{F}_u) = S(u) \iff E_P\Big(\frac{\lambda(t) S(t)}{\lambda(u)} \,\Big|\, \mathcal{F}_u\Big) = S(u) \iff E_P(\lambda(t) S(t) \mid \mathcal{F}_u) = \lambda(u) S(u),$$
as required.

Theorem 22.2 (Girsanov for drifted Brownian motion). Let $B_t$, $0 \le t \le T$, be a Brownian motion under $P$. Let $S(t) = B_t + \mu t$, $0 \le t \le T$. Then there exists a $Q$ equivalent to $P$ such that $S(t)$ is a $Q$-Brownian motion, with
$$\lambda(T) = \frac{dQ}{dP} = e^{-\mu B_T - \frac{1}{2}\mu^2 T}.$$
Note that $S(t)$ is not a martingale under $P$, but it is a martingale under $Q$.
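A Monte Carlo sketch of Theorem 22.2 (mine, not from the notes): reweighting samples by $\lambda(T) = e^{-\mu B_T - \mu^2 T/2}$, the drifted endpoint $S_T = B_T + \mu T$ behaves like a centred $N(0, T)$ variable under $Q$.

```python
import numpy as np

rng = np.random.default_rng(10)
mu, T, n = 0.5, 1.0, 300_000

B_T = rng.normal(0.0, np.sqrt(T), n)          # Brownian endpoint under P
S_T = B_T + mu * T                            # drifted process at time T
lam = np.exp(-mu * B_T - 0.5 * mu ** 2 * T)   # Girsanov density dQ/dP

# Under Q: E(S_T) = 0 and E(S_T^2) = T, computed as P-expectations with weight lam
print((lam * S_T).mean(), (lam * S_T ** 2).mean())
```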

Proof. Under $Q$,
$$Q(S(0) = 0) = \int_{\{S(0) = 0\}} \lambda(T)\,dP = 1.$$
By $(***)$, $S(t)$ is a $Q$-martingale if and only if $S(t)\lambda(t)$ is a $P$-martingale. But
$$X_t = S(t)\lambda(t) = (B_t + \mu t)\,e^{-\mu B_t - \frac{1}{2}\mu^2 t}$$
is indeed a $P$-martingale (by Itô's formula, $dX_t = \lambda(t)\big(1 - \mu S(t)\big)\,dB_t$). Finally, note that
$$[S, S](t) = [B, B](t) = t,$$
so by Lévy's characterization (Theorem 17.6), $S(t)$ is a $Q$-Brownian motion.

22.1. Black-Scholes model.

Definition 22.3 (Black-Scholes model). The Black-Scholes model assumes the risky asset $S_t$ follows the diffusion process given by
$$\frac{dS_t}{S_t} = \mu\,dt + \sigma\,dB_t,$$
and the riskless asset follows the diffusion
$$\frac{d\beta_t}{\beta_t} = r\,dt.$$
Define the discounted processes as follows:
$$\hat S_t = \frac{S_t}{\beta_t}, \qquad \hat V_t = \frac{V_t}{\beta_t}, \qquad \hat C_t = \frac{C_t}{\beta_t}.$$

23. Lecture 23 - Thursday 26 May

Lemma 23.1.
(a) By a simple application of Itô's lemma,
$$\frac{d\hat S_t}{\hat S_t} = (\mu - r)\,dt + \sigma\,dB_t.$$
(b) $\hat S_t$ is a $Q$-martingale with
$$\lambda = \frac{dQ}{dP} = e^{-q B_T - \frac{1}{2} q^2 T}, \qquad q = \frac{\mu - r}{\sigma}.$$
(c) Note that
$$\frac{d\hat S_t}{\hat S_t} = \sigma\,d\Big(B_t + \frac{\mu - r}{\sigma}\,t\Big) = \sigma\,d\hat B_t,$$
where $\hat B_t = B_t + q t$ is a Brownian motion under $Q$.
(d) $d\hat S_t = \sigma \hat S_t\,d\hat B_t$.

(e) In a finite market, where $S_t$ takes only finitely many values, "$\hat S_t$ is a $Q$-martingale" is a necessary condition for no-arbitrage.
(f) Note that
$$\frac{dS_t}{S_t} = \mu\,dt + \sigma\,dB_t = r\,dt + \sigma\,d\hat B_t.$$

Theorem 23.2. A value process $V_t$ is self-financing if and only if the discounted value process $\hat V_t$ satisfies $d\hat V_t = a_t\,d\hat S_t$, and hence is a $Q$-martingale. That is,
$$d\hat V_t = a_t\,d\hat S_t \iff dV_t = a_t\,dS_t + b_t\,d\beta_t \iff V_t = V_0 + \int_0^t a_s\,dS_s + \int_0^t b_s\,d\beta_s \iff V_t \text{ is self-financing.}$$

Proof. By Itô's formula, using $d\beta_t = r\beta_t\,dt$, we have
$$d\hat V_t = e^{-rt}\,dV_t - r e^{-rt} V_t\,dt = e^{-rt}(a_t\,dS_t + b_t\,d\beta_t) - r e^{-rt}(a_t S_t + b_t \beta_t)\,dt = a_t\big(e^{-rt}\,dS_t - r S_t e^{-rt}\,dt\big) = a_t\,d\hat S_t.$$

Theorem 23.3. In the Black-Scholes model, there are no arbitrage opportunities.

Proof. For any admissible trading strategy $(a_t, b_t)$, the discounted value process $\hat V_t$ is a $Q$-martingale. So if $V_0 = 0$, then
$$E_Q(\hat V_T) = E_Q(\hat V_T \mid \mathcal{F}_0) = \hat V_0 = 0.$$
Since $\hat V_T \ge 0$, this implies $Q(\hat V_T > 0) = 0$, which implies $P(V_T > 0) = 0$ (as $P$ and $Q$ are equivalent), which implies $E_P(V_T) = 0$; hence there is no arbitrage.

Theorem 23.4. For any self-financing strategy,
$$V_t = a_t S_t + b_t \beta_t = V_0 + \int_0^t a_u\,dS_u + \int_0^t b_u\,d\beta_u.$$
And so a strategy is self-financing if and only if
$$S_t\,da_t + \beta_t\,db_t + d[a, S](t) = 0.$$
We now consider several cases.
(1) If $a_t$ is of bounded variation, then $[a, S](t) = 0$. Hence
$$S_t\,da_t + \beta_t\,db_t = 0,$$

which implies
$$db_t = -\frac{S_t}{\beta_t}\,da_t.$$
Hence $da_t\,db_t = -\frac{S_t}{\beta_t}(da_t)^2 \le 0$: the two positions move in opposite directions (stock purchases are financed by selling the bond, and vice versa).
(2) If $a_t$ is a semimartingale, $a_t = a_t^{(2)} + A_t$, then $b_t$ must also be a semimartingale, $b_t = b_t^{(2)} + B_t$, where $a_t^{(2)}, b_t^{(2)}$ are the martingale parts and $A_t, B_t$ are of bounded variation.

24. Lecture 24 - Tuesday 31 May

Theorem 24.1. Given a claim $C_T$, under the self-financing assumption the replicating value process is the $Q$-martingale (after discounting)
$$V_t = E_Q\big(e^{-r(T-t)} C_T \mid \mathcal{F}_t\big), \qquad \mathcal{F}_t = \sigma(B_s, s \le t).$$
In particular, we have
$$V_0 = E_Q(e^{-rT} C_T).$$
Proof. For any $Q$-martingale $\hat V_t$, we have $\hat V_t = E_Q(\hat V_T \mid \mathcal{F}_t)$. Then $V_t = E_Q(e^{-r(T-t)} C_T \mid \mathcal{F}_t)$, as $V_T = C_T$.

Theorem 24.2. A claim $C_T$ is attainable (there exists a trading strategy replicating the claim) if there is an admissible strategy $(a_t, b_t)$ with gains process
$$G(t) = \int_0^t a_u\,dS_u + \int_0^t b_u\,d\beta_u$$
such that $V_t = V_0 + G(t) \ge 0$ for $0 \le t \le T$ and $V_T = C_T$.

Theorem 24.3. Suppose that $C_T$ is a non-negative random variable, $C_T \in \mathcal{F}_T$ and $E_Q(C_T^2) < \infty$, where $Q$ is defined as before. Then
(a) The claim is replicable.
(b) $V_t = E_Q\big(e^{-r(T-t)} C_T \mid \mathcal{F}_t\big)$, where $\hat C_T = e^{-rT} C_T$. In particular, $V_0 = E_Q(e^{-rT} C_T) = E_Q(\hat C_T)$.

Theorem 24.4. Let $\hat V_t = E_Q(\hat C_T \mid \mathcal{F}_t)$. Using the martingale representation theorem, there exists an adapted process $H(s)$ such that
$$\hat V_t = \hat V_0 + \int_0^t H(s)\,d\hat B_s, \qquad d\hat V_t = H(t)\,d\hat B_t.$$

On the other hand, $d\hat V_t = a_t\,d\hat S_t = a_t \sigma \hat S_t\,d\hat B_t$. Hence we obtain our required result,
$$a_t = \frac{H(t)}{\sigma \hat S_t},$$
and then solve $b_t = \hat V_t - a_t \hat S_t$.

Example 24.5. Let $C_T = f(S_T)$. Then $V_t = E_Q\big(e^{-r(T-t)} f(S_T) \mid \mathcal{F}_t\big)$. Since $\mathcal{F}_t = \sigma(B_s, s \le t) = \sigma(\hat B_s, s \le t)$ and $\hat S_t$ is a $Q$-martingale, we have
$$\hat S_t = \hat S_0\,e^{-\frac{\sigma^2}{2} t + \sigma \hat B_t},$$
and so
$$S_T = e^{rT} \hat S_T = S_t\,e^{r(T-t)}\,e^{-\frac{\sigma^2}{2}(T-t) + \sigma(\hat B_T - \hat B_t)}.$$
Then
$$V_t = E_Q\Big[e^{-r(T-t)} f\Big(S_t\,e^{(r - \frac{\sigma^2}{2})(T-t) + \sigma(\hat B_T - \hat B_t)}\Big) \,\Big|\, \mathcal{F}_t\Big] = F(t, S_t),$$
where, with $\varphi(z) = \frac{1}{\sqrt{2\pi}}\,e^{-z^2/2}$,
$$F(t, x) = e^{-r(T-t)} \int_{\mathbb{R}} f\Big(x\,e^{(r - \frac{\sigma^2}{2})(T-t) + \sigma z \sqrt{T-t}}\Big)\,\varphi(z)\,dz.$$

Example 24.6. In particular, if $f(y) = (y - K)^+$, then with $\theta = T - t$ we obtain
$$F(t, x) = e^{-r\theta} \int_{-d_2}^{\infty} \Big(x\,e^{(r - \frac{\sigma^2}{2})\theta + \sigma z \sqrt{\theta}} - K\Big)\,\varphi(z)\,dz = x\,\Phi(d_2 + \sigma\sqrt{\theta}) - K e^{-r\theta}\,\Phi(d_2) = x\,\Phi(d_1) - K e^{-r\theta}\,\Phi(d_2),$$
where $\Phi$ is the standard normal distribution function and
$$d_1 = \frac{\log\frac{x}{K} + \big(r + \frac{\sigma^2}{2}\big)\theta}{\sigma\sqrt{\theta}}, \qquad d_2 = d_1 - \sigma\sqrt{\theta}.$$

Theorem 24.7 (Black-Scholes model summary). $V_t = a_t S_t + b_t \beta_t$, where
$$\frac{dS_t}{S_t} = \mu\,dt + \sigma\,dB_t, \qquad \frac{d\beta_t}{\beta_t} = r\,dt.$$

(a) $\hat S_t = e^{-rt} S_t$ is a $Q$-martingale, where
$$\frac{dQ}{dP} = e^{-q B_T - \frac{1}{2} q^2 T}, \qquad q = \frac{\mu - r}{\sigma}.$$
Then $\hat B_t = B_t + \frac{\mu - r}{\sigma}\,t$ is a $Q$-Brownian motion, and $d\hat S_t = \sigma \hat S_t\,d\hat B_t$.
(b) $V_t$ is self-financing if $(a_t, b_t)$ satisfies
$$S_t\,da_t + \beta_t\,db_t + d[a, S](t) = 0,$$
which then implies $\hat V_t = e^{-rt} V_t$ is a $Q$-martingale.
(c) There are no arbitrage opportunities in the Black-Scholes model.

25. Lecture 25 - Thursday 2 June

Theorem 25.1 (Feynman-Kac formula).
(1) Suppose the function $F(t, x)$ solves the boundary value problem
$$\frac{\partial F}{\partial t}(t, x) + \mu(t, x)\,\frac{\partial F}{\partial x}(t, x) + \frac{1}{2}\sigma^2(t, x)\,\frac{\partial^2 F}{\partial x^2}(t, x) = 0,$$
such that $F(T, x) = \Psi(x)$.
(2) Let $S_t$ be a solution of the SDE
$$dS_t = \mu(t, S_t)\,dt + \sigma(t, S_t)\,dB_t, \tag{$*$}$$
where $B_t$ is a $Q$-Brownian motion, and let $\mathcal{F}_t = \sigma(B_s, s \le t)$.
(3) Assume
$$\int_0^T E\Big[\Big(\sigma(t, S_t)\,\frac{\partial F}{\partial x}(t, S_t)\Big)^2\Big]\,dt < \infty.$$
Then
$$F(t, S_t) = E_Q(\Psi(S_T) \mid \mathcal{F}_t) = E_Q(F(T, S_T) \mid \mathcal{F}_t).$$

Proof. It is enough to show that $F(t, S_t)$ is a martingale with respect to $\mathcal{F}_t$ under $Q$. By Itô's lemma, we have
$$dF(t, S_t) = \frac{\partial F}{\partial t}\,dt + \frac{\partial F}{\partial x}\,dS_t + \frac{1}{2}\frac{\partial^2 F}{\partial x^2}\,(dS_t)^2 = \Big[\frac{\partial F}{\partial t} + \mu(t, S_t)\frac{\partial F}{\partial x} + \frac{\sigma^2(t, S_t)}{2}\frac{\partial^2 F}{\partial x^2}\Big]dt + \frac{\partial F}{\partial x}\,\sigma(t, S_t)\,dB_t = \frac{\partial F}{\partial x}\,\sigma(t, S_t)\,dB_t,$$
which is a $Q$-martingale by the integrability assumption in (3).
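A small check of the Feynman-Kac representation (my sketch, with illustrative parameters): for $dS_t = \sigma\,dB_t$ with constant $\sigma$ and $\Psi(x) = x^2$, the solution of $F_t + \tfrac{1}{2}\sigma^2 F_{xx} = 0$, $F(T, x) = x^2$, is $F(t, x) = x^2 + \sigma^2(T - t)$, and indeed $E\big(\Psi(S_T) \mid S_0 = x_0\big) = x_0^2 + \sigma^2 T = F(0, x_0)$.

```python
import numpy as np

rng = np.random.default_rng(12)
sigma, T, x0, n = 0.3, 1.0, 1.5, 200_000

S_T = x0 + sigma * rng.normal(0.0, np.sqrt(T), n)   # S_T for dS = sigma dB

fk_lhs = (S_T ** 2).mean()         # Monte Carlo E[Psi(S_T)]
fk_rhs = x0 ** 2 + sigma ** 2 * T  # F(0, x0) from the PDE solution
print(fk_lhs, fk_rhs)
```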

Theorem 25.2 (General Feynman-Kac formula). Let $S_t$ be a solution of the SDE $(*)$. Assume that there is a solution to the PDE
$$\frac{\partial F}{\partial t}(t, x) + \mu(t, x)\,\frac{\partial F}{\partial x}(t, x) + \frac{1}{2}\sigma^2(t, x)\,\frac{\partial^2 F}{\partial x^2}(t, x) = r(t, x)\,F(t, x).$$
Then
$$F(t, S_t) = E_Q\Big(e^{-\int_t^T r(u, S_u)\,du}\,F(T, S_T) \,\Big|\, \mathcal{F}_t\Big).$$

Proof. Again by Itô's lemma,
$$dF(t, S_t) = \Big(\frac{\partial F}{\partial t} + \mu\,\frac{\partial F}{\partial x} + \frac{\sigma^2}{2}\,\frac{\partial^2 F}{\partial x^2}\Big)dt + \frac{\partial F}{\partial x}\,\sigma(t, S_t)\,dB_t = r(t, S_t)\,F(t, S_t)\,dt + dM_t,$$
where $M_t = \int_0^t \frac{\partial F}{\partial x}(u, S_u)\,\sigma(u, S_u)\,dB_u$. Hence, writing $X_\nu = F(\nu, S_\nu)$, we have $dX_\nu = r(\nu, S_\nu) X_\nu\,d\nu + dM_\nu$ and
$$d\Big[e^{-\int_t^\nu r(u, S_u)\,du}\,X_\nu\Big] = e^{-\int_t^\nu r(u, S_u)\,du}\,dM_\nu.$$
Integrating from $t$ to $T$,
$$e^{-\int_t^T r(u, S_u)\,du}\,F(T, S_T) = F(t, S_t) + \int_t^T e^{-\int_t^\nu r(u, S_u)\,du}\,dM_\nu.$$
Taking $E_Q(\,\cdot \mid \mathcal{F}_t)$, the stochastic integral term has zero conditional expectation, and so we obtain our result,
$$E_Q\Big(e^{-\int_t^T r(u, S_u)\,du}\,F(T, S_T) \,\Big|\, \mathcal{F}_t\Big) = F(t, S_t).$$
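As a closing illustration (mine, with illustrative parameter values): with constant short rate $r$, the discounted expectation of Theorem 25.2 applied to the call payoff reproduces the Black-Scholes formula of Example 24.6, and a direct risk-neutral Monte Carlo agrees with the closed form.

```python
import math
import numpy as np

def norm_cdf(d):
    # standard normal distribution function via the error function
    return 0.5 * (1.0 + math.erf(d / math.sqrt(2.0)))

x, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0

# closed form: F(0, x) = x Phi(d1) - K e^{-rT} Phi(d2)
d1 = (math.log(x / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
d2 = d1 - sigma * math.sqrt(T)
bs = x * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

# risk-neutral Monte Carlo: S_T = x exp((r - sigma^2/2) T + sigma sqrt(T) Z)
rng = np.random.default_rng(11)
Z = rng.normal(0.0, 1.0, 400_000)
S_T = x * np.exp((r - 0.5 * sigma ** 2) * T + sigma * math.sqrt(T) * Z)
mc = math.exp(-r * T) * np.maximum(S_T - K, 0.0).mean()

print(bs, mc)   # the two prices agree to Monte Carlo accuracy
```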