MSH7 - APPLIED PROBABILITY AND STOCHASTIC CALCULUS. Contents


ANDREW TULLOCH

Contents

1. Lecture 1 - Tuesday 1 March
2. Lecture 2 - Thursday 3 March
2.1. Concepts of convergence
3. Lecture 3 - Tuesday 8 March
4. Lecture 4 - Thursday 10 March
5. Lecture 5 - Tuesday 15 March
5.1. Properties of paths of Brownian motion
6. Lecture 6 - Thursday 17 March
7. Lecture 7 - Tuesday 22 March
7.1. Construction of Brownian motion
8. Lecture 8 - Thursday 24 March
9. Lecture 9 - Tuesday 29 March
10. Lecture 10 - Thursday 31 March
11. Lecture 11 - Tuesday 5 March
12. Lecture 12 - Thursday 7 March
13. Lecture 13 - Tuesday 12 March
14. Lecture 14 - Thursday 14 March
15. Lecture 15 - Tuesday 19 March
16. Lecture 16 - Thursday 21 March
17. Lecture 17 - Tuesday 3 May
17.1. Stochastic integrals for martingales
18. Lecture 18 - Tuesday 10 May
18.1. Itô's integration for continuous semimartingales
18.2. Stochastic differential equations
19. Lecture 19 - Thursday 12 May
20. Lecture 20 - Tuesday 17 May
20.1. Numerical methods for stochastic differential equations
20.2. Applications to mathematical finance
Martingale method
21. Lecture 21 - Thursday 19 May
21.1. Change of Measure
22. Lecture 22 - Tuesday 24 May
22.1. Black-Scholes model
23. Lecture 23 - Thursday 26 May
24. Lecture 24 - Tuesday 31 May
25. Lecture 25 - Thursday 2 June

1. Lecture 1 - Tuesday 1 March

Definition 1.1 (Finite dimensional distribution). The finite dimensional distributions of a stochastic process $X$ are the joint distributions of $(X_{t_1}, X_{t_2}, \dots, X_{t_n})$.

Definition 1.2 (Equality in distribution). Two random variables $X$ and $Y$ are equal in distribution if $P(X \le \alpha) = P(Y \le \alpha)$ for all $\alpha \in \mathbb{R}$. We write $X \stackrel{d}{=} Y$.

Definition 1.3 (Strictly stationary). A stochastic process $X$ is strictly stationary if $(X_{t_1}, X_{t_2}, \dots, X_{t_n}) \stackrel{d}{=} (X_{t_1+h}, X_{t_2+h}, \dots, X_{t_n+h})$ for all $t_i, h \ge 0$.

Definition 1.4 (Weakly stationary). A stochastic process $X$ is weakly stationary if $E(X_t) = E(X_{t+h})$ and $\mathrm{Cov}(X_t, X_s) = \mathrm{Cov}(X_{t+h}, X_{s+h})$ for all $t, s, h \ge 0$.

Lemma 1.5. If $E(X_t^2) < \infty$, then strictly stationary implies weakly stationary.

Example 1.6. The stochastic process $\{X_t\}$ with the $X_t$ all IID is strictly stationary. The stochastic process $\{W_t\}$ with $W_t \sim N(0, t)$ and $W_t - W_s$ independent of $W_s$ (for $s < t$) is neither strictly nor weakly stationary.

Definition 1.7 (Stationary increments). A stochastic process has stationary increments if $X_t - X_s \stackrel{d}{=} X_{t-h} - X_{s-h}$ for all $s, t, h$.

2. Lecture 2 - Thursday 3 March

Example 2.1. Let $X_n$, $n \ge 1$, be IID random variables. Consider the stochastic process $\{S_n\}$ where $S_n = \sum_{j=1}^n X_j$. Then $\{S_n\}$ has stationary increments.

2.1. Concepts of convergence. There are three major concepts of convergence of random variables:
- convergence in distribution,
- convergence in probability,
- almost sure convergence.

Definition 2.2 (Convergence in distribution). $X_n \stackrel{d}{\to} X$ if $P(X_n \le x) \to P(X \le x)$ for all $x$ at which the limiting distribution function is continuous.

Definition 2.3 (Convergence in probability). $X_n \stackrel{p}{\to} X$ if $P(|X_n - X| \ge \epsilon) \to 0$ for all $\epsilon > 0$.

Definition 2.4 (Almost sure convergence). $X_n \stackrel{a.s.}{\to} X$ if, except on a null set $A$, $X_n \to X$, that is, $\lim_{n\to\infty} X_n = X$; hence $P(\lim_{n\to\infty} X_n = X) = 1$.

Definition 2.5 ($\sigma$-field generated by $X$). Let $X$ be a random variable defined on a probability space $(\Omega, \mathcal{F}, P)$. We call $\sigma(X)$ the $\sigma$-field generated by $X$, and we have $\sigma(X) = \{X^{-1}(B) : B \in \mathcal{B}\}$, where $X^{-1}(B) = \{\omega : X(\omega) \in B\}$ and $\mathcal{B}$ is the Borel $\sigma$-field on $\mathbb{R}$.

Definition 2.6 (Conditional probability). We have $P(A \mid B) = \frac{P(A, B)}{P(B)}$ if $P(B) > 0$.

Definition 2.7 (Naive conditional expectation). We have $E(X \mid B) = \frac{E(X I_B)}{P(B)}$.

Definition 2.8 (Conditional density). Let $g(x, y)$ be the joint density function for $X$ and $Y$. Then we have $g_Y(y) = \int_{\mathbb{R}} g(x, y)\,dx$. We also have
$$g_{X \mid Y = y}(x) = \frac{g(x, y)}{g_Y(y)},$$
which defines the conditional density given $Y = y$. Finally, we define $E(X \mid Y = y) = \int_{\mathbb{R}} x\,g_{X \mid Y = y}(x)\,dx$.

Definition 2.9 (Conditional expectation). Let $(\Omega, \mathcal{F}, P)$ be a probability space. Let $\mathcal{A}$ be a sub-$\sigma$-field of $\mathcal{F}$. Let $X$ be a random variable such that $E(|X|) < \infty$. We define $E(X \mid \mathcal{A})$ to be a random variable $Z$ such that
(i) $Z$ is $\mathcal{A}$-measurable,
(ii) $E(X I_A) = E(Z I_A)$ for all $A \in \mathcal{A}$.

Proposition 2.10 (Properties of the conditional expectation). Consider $Z = E(X \mid Y) = E(X \mid \sigma(Y))$.
- If $T$ is $\sigma(Y)$-measurable, then $E(XT \mid Y) = T\,E(X \mid Y)$ a.s.
- If $T$ is independent of $Y$, then $E(T \mid Y) = E(T)$.
- $E(X) = E(E(X \mid T))$.

3. Lecture 3 - Tuesday 8 March

Definition 3.1 (Martingale). Let $\{X_t, t \ge 0\}$ be right-continuous with left-hand limits (i.e. $\lim_{s \uparrow t} X_s$ exists). Let $\{\mathcal{F}_t, t \ge 0\}$ be a filtration. Then $X$ is called a martingale with respect to $\mathcal{F}_t$ if
(i) $X$ is adapted to $\mathcal{F}_t$, i.e. $X_t$ is $\mathcal{F}_t$-measurable,
(ii) $E(|X_t|) < \infty$ for all $t$,
(iii) $E(X_t \mid \mathcal{F}_s) = X_s$ a.s. for all $s \le t$.

Example 3.2. Let $X_n$ be IID with $E(X_n) = 0$. Then $\{S_k, k \ge 0\}$, where $S_k = \sum_{i=0}^k X_i$, is a martingale.

Example 3.3. An independent increment process $\{X_t, t \ge 0\}$ with $E(X_t) = 0$ and $E(|X_t|) < \infty$ is a martingale with respect to $\mathcal{F}_t = \sigma\{X_s, s \le t\}$.

Definition 3.4 (Gaussian process). Let $\{X_t, t \ge 0\}$ be a stochastic process. If the finite dimensional distributions are multivariate normal, that is, $(X_{t_1}, \dots, X_{t_m}) \sim N(\mu, \Sigma)$ for all $t_1, \dots, t_m$, then we call $X_t$ a Gaussian process.

Definition 3.5 (Markov process). A continuous time process $X$ is a Markov process if for all $t$, each $A \in \sigma(X_s, s > t)$ and $B \in \sigma(X_s, s < t)$, we have $P(A \mid X_t, B) = P(A \mid X_t)$.

Definition 3.6 (Diffusion process). Consider the stochastic differential equation $dX_t = \mu(t, x)\,dt + \sigma(t, x)\,dB_t$. A diffusion process is a path-continuous, strong Markov process such that
$$\lim_{h \to 0} h^{-1} E(X_{t+h} - X_t \mid X_t = x) = \mu(t, x),$$
$$\lim_{h \to 0} h^{-1} E\big([X_{t+h} - X_t - h\mu(t, x)]^2 \mid X_t = x\big) = \sigma^2(t, x).$$

Definition 3.7 (Path-continuous). A process is path-continuous if $\lim_{s \to t} X_s = X_t$.

Definition 3.8 (Lévy process). Let $\{X_t, t \ge 0\}$ be a stochastic process. We call $X$ a Lévy process if
(i) $X_0 = 0$ a.s.,
(ii) it has stationary and independent increments,
(iii) $X$ is stochastically continuous, that is, for all $s, t \ge 0$ and $\epsilon > 0$,
$$\lim_{s \to t} P(|X_s - X_t| \ge \epsilon) = 0.$$
Equivalently, $X_s \stackrel{p}{\to} X_t$ as $s \to t$.

Example 3.9 (Poisson process). Let $(N(t), t \ge 0)$ be a stochastic process. We call $N(t)$ a Poisson process if the following all hold:
(i) $N(0) = 0$,
(ii) $N$ has independent increments,
(iii) for all $s, t \ge 0$,
$$P(N(t + s) - N(s) = n) = \frac{e^{-\lambda t}(\lambda t)^n}{n!}, \qquad n = 0, 1, 2, \dots$$
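As a quick illustration of this definition, the sketch below simulates a Poisson process from IID Exponential($\lambda$) inter-arrival times and checks the distribution of $N(t)$ empirically. This is a minimal sketch; the rate `lam` and horizon `t` are arbitrary choices, not values from the notes.

```python
import numpy as np

rng = np.random.default_rng(0)

def poisson_path(lam, t, rng):
    """Arrival times of a Poisson process of rate lam on [0, t],
    built from IID Exponential(lam) inter-arrival times."""
    arrivals = []
    s = rng.exponential(1 / lam)
    while s <= t:
        arrivals.append(s)
        s += rng.exponential(1 / lam)
    return np.array(arrivals)

lam, t = 2.0, 5.0
counts = np.array([len(poisson_path(lam, t, rng)) for _ in range(20000)])

# N(t) should be Poisson(lam * t): compare empirical mean and variance.
print(counts.mean(), counts.var(), lam * t)  # all approximately 10
```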

The Poisson process is stochastically continuous - that is, for $0 < \epsilon \le 1$,
$$P(|N(t) - N(s)| \ge \epsilon) = P(|N(t - s) - N(0)| \ge \epsilon) = 1 - P(N(t - s) = 0) = 1 - e^{-\lambda(t - s)} \to 0$$
as $t \to s$.

The Poisson process is not path-continuous: $P(\lim_{t \to s} |N(t) - N(s)| = 0) \ne 1$, since for any $0 < \delta < 1$,
$$P\Big(\sup_{s \le t \le s + 1} |N(t) - N(s)| > \delta\Big) \ge P\big(N(s + 1) - N(s) \ge 1\big) > 0.$$

4. Lecture 4 - Thursday 10 March

Definition 4.1 (Self-similar process). A process $X$ is self-similar if there exists an $H > 0$ such that for any $t_1, t_2, \dots, t_n$ and any $c > 0$,
$$(X_{ct_1}, X_{ct_2}, \dots, X_{ct_n}) \stackrel{d}{=} (c^H X_{t_1}, c^H X_{t_2}, \dots, c^H X_{t_n}).$$
We call $H$ the Hurst index.

Example 4.2 (Fractional process). $(1 - B)^d X_t = \epsilon_t$, where $\epsilon_t$ is a martingale difference, $B$ is the backshift operator $BX_t = X_{t-1}$, and $0 < d < 1$.

Definition 4.3 (Brownian motion). Let $\{B_t, t \ge 0\}$ be a stochastic process. We call $B_t$ a Brownian motion if the following hold:
(i) $B_0 = 0$ a.s.
(ii) $\{B_t\}$ has stationary, independent increments.
(iii) For any $t > 0$, $B_t \sim N(0, t)$.
(iv) The path $t \mapsto B_t$ is continuous almost surely, i.e. $P(\lim_{s \to t} B_s = B_t) = 1$.

Definition 4.4 (Alternative formulation of Brownian motion). A process $\{B_t, t \ge 0\}$ is a Brownian motion if and only if
- $\{B_t, t \ge 0\}$ is a Gaussian process,
- $E(B_t) = 0$ and $E(B_s B_t) = \min(s, t)$,
- the process $\{B_t\}$ is path-continuous.

7 MSH7 - APPLIED PROBABILITY AND STOCHASTIC CALCULUS 7 Proof. ( ) For all t 1,..., t m, we have (B tm B tm 1,..., B t2 B t1 ) N(, Σ) as B t has stationary, independent increments. Hence, (B t1, B t2,,..., B tn ) is normally distributed. Thus B t is a Gaussian process. We can also show that E(B t ) = and E(B s B t ) = min(t, s). ( ) TO PROVE Corollary. Let B t be a Brownian motion. The so are the following: {B t+t B t, t } { B t, t } {cb t/c 2, t, c } {tb 1/t, t } Proof. Here, we prove that {X t } = {tb 1/t } is a Brownian motion. Consider n α i X ti. Then by a telescoping argument, we know that the process is Gaussian (can be written as a sum of X t1, X t2 X t1, etc). We can also easily show that E(X t ) = and E(X t X s ) = min(s, t) as required. We now show that lim t X. We must show t = a.s. Fix ϵ P { X t 1 n } = 1 n=1 m=1 <t< 1 m However, as X t has the same distribution as B t (as they are both Gaussian with same mean and covariance), we have that this is equivalent to P { B t 1 n } n=1 m=1 <t< 1 m which is clearly one. 5. Lecture 5 - Tuesday 15 March Theorem 5.1 (Properties of the Brownian motion). We have P(B t x B t = x,..., B tn = x n ) = P(B t x B s = x s ) = Φ( x x s t s ) Theorem 5.2. The joint density of (B t1,..., B tn is given by n g(x 1,..., x n ) = f(x tj x tj 1, t j t 1 ) where f(x, t) = 1 2πt e x2 2t
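The factorisation of the joint density into independent Gaussian increment densities is also the usual way Brownian paths are simulated: independent $N(0, t_j - t_{j-1})$ increments are cumulated along a partition. A minimal sketch (grid size, horizon and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)

def brownian_path(n_steps, T, rng):
    """Brownian motion on [0, T] sampled on a uniform grid,
    built by cumulating independent N(0, dt) increments."""
    dt = T / n_steps
    increments = rng.normal(0.0, np.sqrt(dt), size=n_steps)
    return np.concatenate([[0.0], np.cumsum(increments)])

# Check the marginal distribution B_1 ~ N(0, 1) across many paths.
endpoints = np.array([brownian_path(1000, 1.0, rng)[-1] for _ in range(10000)])
print(endpoints.mean(), endpoints.var())  # approximately 0 and 1
```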

Theorem 5.3 (Density and distribution of the Brownian bridge). Let $t_1 < t < t_2$. Then
$$(B_t \mid B_{t_1} = a, B_{t_2} = b) \sim N\left(a + \frac{(b - a)(t - t_1)}{t_2 - t_1},\; \frac{(t_2 - t)(t - t_1)}{t_2 - t_1}\right).$$
The density of $B_t \mid B_{t_1} = a, B_{t_2} = b$ is given as
$$\frac{g_{t_1, t, t_2}(a, x, b)}{g_{t_1, t_2}(a, b)},$$
the ratio of the joint density of $(B_{t_1}, B_t, B_{t_2})$ to that of $(B_{t_1}, B_{t_2})$.

Theorem 5.4 (Joint distribution of $B_t$ and $B_s$). For $s < t$ we have
$$P(B_s \le x, B_t \le y) = P(B_s \le x, B_t - B_s \le y - B_s) = \int_{-\infty}^{x} \frac{1}{\sqrt{2\pi s}} e^{-z_1^2/2s} \int_{-\infty}^{y - z_1} \frac{1}{\sqrt{2\pi(t - s)}} e^{-z_2^2/2(t - s)}\,dz_2\,dz_1.$$

5.1. Properties of paths of Brownian motion.

Definition 5.5 (Variation). Let $g$ be a real function. Then the variation of $g$ over an interval $[a, b]$ is defined as
$$V_g([a, b]) = \lim_{\delta \to 0} \sum_j |g(t_j) - g(t_{j-1})|,$$
where $\delta$ is the mesh size of the partition $a = t_0 < t_1 < \dots < t_n = b$ of $[a, b]$.

Definition 5.6 (Quadratic variation). The quadratic variation of a function $g$ over an interval $[a, b]$ is defined by
$$[g, g]([a, b]) = \lim_{\delta \to 0} \sum_j |g(t_j) - g(t_{j-1})|^2.$$

Theorem 5.7 (Non-differentiability of Brownian motion). Paths of Brownian motion are continuous almost surely, by definition. Consider now the differentiability of $B_t$. We claim that Brownian motion is non-differentiable almost surely, that is,
$$\lim_{t \to s} \frac{|B_t - B_s|}{t - s} = \infty \quad \text{a.s.}$$
More precisely:
(i) Brownian motion is, almost surely, not differentiable at any $t$.
(ii) Brownian motion has infinite variation on any interval $[a, b]$, that is,
$$V_B([a, b]) = \lim_{\delta \to 0} \sum_j |B(t_j) - B(t_{j-1})| = \infty \quad \text{a.s.}$$

9 MSH7 - APPLIED PROBABILITY AND STOCHASTIC CALCULUS 9 (iii) Brownian motion has quadratic variation t on [, t], that is [B, B]([, t]) = lim B(t j ) B(t j 1 ) 2 = t a.s. Proof. We know that (B ct1,..., B ctn ) d = (c H B t1,..., c H B tn ). This is true as B ct N(, ct) = c 1/2 N(, t) as required. Now suppose X t is H-self-similar with stationary increments for some < H < 1 with X =. Then, for any fixed t, we have X t X t lim = a.s. t t t t Consider X t X t P(lim sup M) t t t t which by stationary increments, is equal to X t P(lim sup M) = lim t t t P( B tn B t n t n t k n M) and as the RHS goes to zero, we have > lim n P( B t n B t t n t M) = lim n P( N(, 1) M (t n t ) 1/2 ) P( N(, 1) M (t n t ) 1/2 ) 1 as required. Now, Assume V B ([, t]) < almost surely. Consider Q n = n B t j B tj 1 2. Then we have Let. Then Q n max j n B t j B tj 1 lim Q n lim max B t j B tj 1 j n B tj B tj 1 B tj B tj 1 lim max j n B t j B tj 1 V B ([, t]) V B ([, t]) = because B s is uniformly continuous on [, t]. This is a contradiction to V B ([, t]) <.
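A numerical illustration of the infinite variation and quadratic variation claims: on finer and finer partitions of $[0, 1]$, the sum of absolute increments of a simulated Brownian path keeps growing, while the sum of squared increments settles near $t = 1$. A minimal sketch (the resolutions are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)

# One Brownian path on [0, 1], sampled very finely.
n_fine = 2**18
dB = rng.normal(0.0, np.sqrt(1.0 / n_fine), size=n_fine)
B = np.concatenate([[0.0], np.cumsum(dB)])

for n in [2**8, 2**12, 2**16, 2**18]:
    # Restrict the path to a coarser partition with n intervals.
    idx = np.linspace(0, n_fine, n + 1).astype(int)
    inc = np.diff(B[idx])
    variation = np.abs(inc).sum()   # grows without bound as n increases
    quad_var = (inc**2).sum()       # approaches t = 1
    print(n, round(variation, 2), round(quad_var, 4))
```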

Proof. We now show that $E((Q_n - t)^2) \to 0$ as $n \to \infty$. In fact, we have
$$E(Q_n) = \sum_j E(B_{t_j} - B_{t_{j-1}})^2 = \sum_j (t_j - t_{j-1}) = t.$$
We now show $L^2$ convergence. With $Y_j = |B_{t_j} - B_{t_{j-1}}|^2 - E(|B_{t_j} - B_{t_{j-1}}|^2)$ we have
$$E(Q_n - t)^2 = E\big((Q_n - E(Q_n))^2\big) = E\Big(\big(\sum_j Y_j\big)^2\Big) = \sum_j E(Y_j^2) \le \sum_j |t_j - t_{j-1}|^2\,E|N(0, 1)|^4 \le C\,\delta\,t \to 0$$
as $\delta \to 0$. Thus we have convergence in $L^2$.

6. Lecture 6 - Thursday 17 March

Theorem 6.1 (Martingales related to Brownian motion). Let $B_t$, $t \ge 0$, be a Brownian motion. Then the following are martingales with respect to $\mathcal{F}_t = \sigma(B_s, s \le t)$:
(1) $B_t$, $t \ge 0$;
(2) $B_t^2 - t$, $t \ge 0$;
(3) for any $u$, $e^{uB_t - u^2 t/2}$.

Proof. (1) is simple. (2) We know that $E(B_t^2)$ is finite for any $t$. We can also easily show $E(B_t^2 - t \mid \mathcal{F}_s) = B_s^2 - s$ a.s.

Theorem 6.2. Let $X_t$ be a continuous martingale such that $X_t^2 - t$ is also a martingale. Then $X_t$ is a Brownian motion.

Definition 6.3 (Hitting time). Let $T_\alpha = \inf\{t \ge 0 : B_t = \alpha\}$.
(1) If $\alpha = 0$, $T_0 = 0$.
(2) If $\alpha > 0$, then
$$P(T_\alpha \le t) = 2P(B_t \ge \alpha) = \frac{2}{\sqrt{2\pi t}} \int_\alpha^\infty e^{-x^2/2t}\,dx.$$
We clearly have $T_\alpha > t \iff \sup_{s \le t} B_s < \alpha$.
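A Monte Carlo check of the formula $P(T_\alpha \le t) = 2P(B_t \ge \alpha)$ in Definition 6.3; a minimal sketch (a discrete grid slightly under-counts hits, so the empirical value sits a little below the exact one, and the parameters are arbitrary choices):

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(3)

alpha, t, n_steps, n_paths = 1.0, 1.0, 2000, 5000
dt = t / n_steps

# Simulate paths and record whether the running maximum reaches alpha.
increments = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
paths = np.cumsum(increments, axis=1)
hit = (paths.max(axis=1) >= alpha).mean()

# Reflection principle: P(T_alpha <= t) = 2 P(B_t >= alpha).
exact = 2 * (1 - 0.5 * (1 + erf(alpha / sqrt(2 * t))))
print(hit, exact)  # both close to about 0.317
```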

7. Lecture 7 - Tuesday 22 March

Theorem 7.1 (Arcsine law). Let $B_t$ be a Brownian motion. Then
$$P(B_t = 0 \text{ for at least one } t \in [a, b]) = \frac{2}{\pi}\arccos\sqrt{\frac{a}{b}}.$$

Example 7.2 (Processes derived from Brownian motion).
(1) Brownian bridge: $X_t = B_t - tB_1$, $t \in [0, 1]$. Consider $X \sim F(x)$, with data $X_1, \dots, X_n$. Our empirical distribution is $F_n(x) = \frac{1}{n}\sum_{j=1}^n 1_{\{X_j \le x\}}$. We can then prove that $\sqrt{n}\,(F_n(x) - F(x))$ converges in distribution to the Brownian bridge $X_t$ with $t = F(x) \in (0, 1)$.
(2) Diffusion process: $X_t = \mu t + \sigma B_t$. This is a Gaussian process with $E(X_t) = \mu t$ and $\mathrm{Cov}(X_t, X_s) = \sigma^2\min(s, t)$.
(3) Geometric Brownian motion: $X_t = X_0 e^{\mu t + \sigma B_t}$. This is not a Gaussian process.
(4) Higher dimensional Brownian motion: $B_t = (B_t^1, \dots, B_t^n)$, where the $B^i$ are independent Brownian motions.

7.1. Construction of Brownian motion. Define a stochastic process
$$\hat{B}^n_t = \frac{S_{[nt]} - E(S_{[nt]})}{\sqrt{n}},$$
with $B^n_t = \hat{B}^n_t$ if $t = i/n$, and $B^n_t$ defined by interpolation otherwise. Then we can prove that $B^n_t \Rightarrow B_t$ on $[0, 1]$.

Definition 7.3 (Stochastic integral). We now turn to defining expressions of the form $\int_0^t X_s\,dY_s$ with $X_t$, $Y_t$ stochastic processes.

Definition 7.4 ($\int_0^1 f(B_s)\,ds$). The integral $\int f(t)\,dt$ exists if $f$ is bounded and continuous, except on a set of Lebesgue measure zero. Thus, we can set
$$\int_0^1 f(B_s)\,ds = \lim_{\delta \to 0} \sum_j f(B_{y_j})(t_j - t_{j-1}), \qquad y_j \in [t_{j-1}, t_j],$$
if $f(x)$ is bounded.

We now seek to find $\int_0^1 B_s\,ds$. Consider $Q_n = \sum_j B_{y_j}(t_j - t_{j-1})$. As a sum of normal variables, we know that $Q_n \sim N(\mu_n, \sigma_n^2)$. Since $X = \int_0^1 B_s\,ds$ is normally distributed with mean $0$, we have
$$E(X^2) = E\Big(\int_0^1 B_s\,ds\Big)^2 = \int_0^1 \int_0^1 E(B_s B_t)\,ds\,dt = \int_0^1 \int_0^1 \min(s, t)\,ds\,dt = \frac{1}{3}.$$

8. Lecture 8 - Thursday 24 March

Recall
$$\int_0^1 f(B_s)\,ds = \lim_{\delta \to 0} \sum_j f(B_{y_j})(t_{j+1} - t_j), \qquad \int_0^1 B_s\,ds = \lim_{\delta \to 0} \sum_j B_{y_j}(t_{j+1} - t_j) \sim N\Big(0, \tfrac{1}{3}\Big).$$

Consider $I = \int X_s\,dY_s$; regarded as a Riemann-Stieltjes integral $\int f\,dg$, we have the following:

Theorem 8.1. $I$ exists if
(1) the functions $f$, $g$ are not discontinuous at the same point $x$, and
(2) $f$ is continuous and $g$ has bounded variation, or
(2') $f$ has finite $p$-variation and $g$ has finite $q$-variation, where $1/p + 1/q > 1$.
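A quick Monte Carlo check that $\int_0^1 B_s\,ds$ has mean $0$ and variance $1/3$, as computed above (grid and sample sizes are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(4)

n_steps, n_paths = 500, 10000
dt = 1.0 / n_steps

# Approximate the time integral of each Brownian path by a Riemann sum.
increments = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
B = np.cumsum(increments, axis=1)
integrals = B.sum(axis=1) * dt

print(integrals.mean(), integrals.var())  # approximately 0 and 1/3
```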

For any $p$, we define the $p$-variation of $f$ by
$$\lim_{\delta \to 0} \sum_j |f(t_j) - f(t_{j-1})|^p.$$

Theorem 8.2. If $J = \int f(t)\,dg(t)$ exists for every continuous $f$, then $g$ must have finite variation.

Theorem 8.3. $B_s$ has bounded $p$-variation for any $p \ge 2$ and unbounded $q$-variation for any $q < 2$.

Proof. We can write (for $p \ge 2$) $p = 2 + (p - 2)$. Then we have
$$\sum_j |B_{t_j} - B_{t_{j-1}}|^p \le \Big(\max_j |B_{t_j} - B_{t_{j-1}}|\Big)^{p-2} \sum_j |B_{t_j} - B_{t_{j-1}}|^2,$$
and the right-hand side tends to $t$ for $p = 2$ and to $0$ for $p > 2$; hence the $p$-variation exists.

Corollary. Thus $\int_0^1 f\,dB_t$ is well defined if $f$ has finite variation (as setting $q = 1$, $p \ge 2$ gives $1/p + 1/q > 1$).

Consider $\int_0^1 B_s\,dB_s$ - is this a Riemann-Stieltjes integral? Consider
$$\Delta_{1n} = \sum_j B_{t_j}(B_{t_{j+1}} - B_{t_j}), \qquad \Delta_{2n} = \sum_j B_{t_{j+1}}(B_{t_{j+1}} - B_{t_j}).$$
We have
$$\Delta_{2n} - \Delta_{1n} = \sum_j (B_{t_{j+1}} - B_{t_j})^2 \to 1$$
and
$$\Delta_{1n} + \Delta_{2n} = \sum_j (B_{t_{j+1}}^2 - B_{t_j}^2) = B_1^2.$$
Thus we know
$$\Delta_{2n} \to \tfrac{1}{2}(B_1^2 + 1), \qquad \Delta_{1n} \to \tfrac{1}{2}(B_1^2 - 1).$$

Definition 8.4 (Itô integral). The Itô integral is defined by evaluating $f(B_{t_j})$, the left-hand endpoint of each partition interval $[t_j, t_{j+1})$.
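The dependence on the endpoint convention is easy to see numerically; a minimal sketch comparing the left- and right-endpoint sums for $\int_0^1 B_s\,dB_s$ on one simulated path:

```python
import numpy as np

rng = np.random.default_rng(5)

n = 100000
dB = rng.normal(0.0, np.sqrt(1.0 / n), size=n)
B = np.concatenate([[0.0], np.cumsum(dB)])

left = np.sum(B[:-1] * dB)    # Ito convention: left endpoints
right = np.sum(B[1:] * dB)    # right endpoints

print(left, 0.5 * (B[-1]**2 - 1))   # approximately equal
print(right - left, 1.0)            # quadratic variation of B on [0, 1]
```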

9. Lecture 9 - Tuesday 29 March

Definition 9.1 (Itô integral). Consider $\int_0^1 f(s)\,dB_s$, where $f(s)$ is a real function and $B_s$ a Brownian motion. We define the integral in two steps.
(1) If $f(s) = \sum_{j=1}^m c_j 1_{[t_j, t_{j+1})}(s)$ is a step function, define
$$I(f) = \int_0^1 f(s)\,dB_s = \sum_{j=1}^m \int_{t_j}^{t_{j+1}} f(s)\,dB_s = \sum_{j=1}^m c_j (B_{t_{j+1}} - B_{t_j}).$$
(2) If $f \in L^2([0, 1])$, let $f_n$ be a sequence of step functions such that $f_n \to f$ in $L^2([0, 1])$. Then define $I(f)$ to be the limit of the $I(f_n)$, in the sense that $E(I(f_n) - I(f))^2 \to 0$, or let $I(f) = \lim_{n\to\infty} I(f_n)$ in probability.

Remark. If $f(x)$, $g(x)$ are given step functions then $\alpha I(f) + \beta I(g) = I(\alpha f + \beta g)$.

Remark. If $f(x)$ is a step function, then $I(f) \sim N(0, \int_0^1 f^2(s)\,ds)$.

Proof. $I(f) = \sum_{j=1}^m c_j(B_{t_{j+1}} - B_{t_j}) \sim N(0, \sigma^2)$ where $\sigma^2 = \sum_{j=1}^m c_j^2(t_{j+1} - t_j)$.

Theorem 9.2. $I(f)$ is well defined (independent of the choice of $f_n$).

Proof. Let $f_n, g_n \to f$ in $L^2([0, 1])$. We then need only compute $\Delta_{n,m} = E(I(f_n) - I(g_m))^2$ as $m, n \to \infty$. In fact,
$$\Delta_{n,m} = E(I(f_n - g_m))^2 = \int_0^1 (f_n - g_m)^2\,dx \le 2\int_0^1 (f_n - f)^2\,dx + 2\int_0^1 (g_m - f)^2\,dx \to 0$$
as $n, m \to \infty$.

15 MSH7 - APPLIED PROBABILITY AND STOCHASTIC CALCULUS 15 Remark. If f is continuous bounded variation, then f(s) db s = (R.S.) ) 1 f(s) db s Proof. Case 1). = lim δ f(t j )B(t j+1 B tj ) αi(f) + βi(g) = lim n [αi(f n) + βi(g n ) = lim n [I(αf n + βg n )] = I(αf + βg) and thus αf n + βg n αf + βg L 2 ([, 1]). Case 2). If I(f) = lim δ I(f n ) in probability. Then if σ 2 n f 2 (x) dx. In fact, as f 2 n dx = = I(f n ) N(, σ 2 n) N(, N(, (f n f + f) 2 dx f 2 dx + f 2 dx f 2 n(s) ds) f 2 (s) ds) (f n f) 2 dx + as other terms tend to zero by L 2 convergence and Hölder s inequality. Remark. (R.S.) f(s) db s exists if f is of bounded variation. Remark. If f is continuous then f 2 dx < and f n (t) = f(t j )I [tj,t j+1 ) f(t) in L 2 ([, 1]) Thus, I(f) = lim I(f n ) δ m = lim f(t j )(B tj+1 B tj ) δ (f n f)f dx

We have to prove that $\int_0^1 (f_n - f)^2\,dx \to 0$ if $f$ is continuous.

10. Lecture 10 - Thursday 31 March

We have
$$(\text{Itô})\int_0^1 f(s)\,dB_s \sim N\Big(0, \int_0^1 f(t)^2\,dt\Big)$$
if $f$ is continuous and of finite variation. In this case, we can write
$$(\text{R.S.})\int_0^1 f(s)\,dB_s = (\text{Itô})\int_0^1 f(s)\,dB_s = \lim_{\delta \to 0}\sum_j f(t_j)(B_{t_{j+1}} - B_{t_j}).$$

Example 10.1. We have
$$\int_0^1 (1 - t)\,dB_t = (\text{Itô})\int_0^1 (1 - t)\,dB_t \sim N\Big(0, \int_0^1 (1 - t)^2\,dt\Big) = N\Big(0, \tfrac{1}{3}\Big),$$
and by integrating by parts, we have
$$\int_0^1 (1 - t)\,dB_t = (1 - t)B_t\Big|_0^1 - \int_0^1 B_t\,d(1 - t) = 0 + \int_0^1 B_t\,dt \sim N\Big(0, \tfrac{1}{3}\Big).$$

Now consider $\int_0^1 X_s\,dB_s$, where we now allow $X_s$ to be a stochastic process. Write $\mathcal{F}_t = \sigma(B_s, s \le t)$. Let $\pi$ be the collection of $X_s$ such that
(1) $X_s$ is adapted to $\mathcal{F}_s$, that is, for any $s$, $X_s$ is $\mathcal{F}_s$-measurable;
(2) $\int_0^1 X_s^2\,ds < \infty$ almost surely.
Let $\bar{\pi} = \{X_s : X_s \in \pi, \int_0^1 E(X_s^2)\,ds < \infty\}$. Then $\bar{\pi} \subset \pi$. Let $X_s = e^{B_s^2}$. Then
$$E(X_s^2) = E(e^{2B_s^2}) = \frac{1}{\sqrt{1 - 4s}} < \infty \text{ for } s < \tfrac{1}{4}, \qquad = \infty \text{ for } s \ge \tfrac{1}{4}.$$

Definition 10.2 (Itô integral for stochastic integrands). We proceed in two steps.
(1) Let $X_s = \sum_{j=1}^n \zeta_j 1_{[t_j, t_{j+1})}(s)$ where $\zeta_j$ is $\mathcal{F}_{t_j}$-measurable. Then $I(X) = \sum_j \zeta_j(B_{t_{j+1}} - B_{t_j})$.
(2) If $X \in \pi$, there exists a sequence $X^n \in \pi$ of step processes with $\int_0^1 |X_s^n - X_s|^2\,ds \to 0$

17 MSH7 - APPLIED PROBABILITY AND STOCHASTIC CALCULUS 17 as n in probability or in L 2 ([, 1]) if X s π. Proof. We show only for X s continuous, the general case can be found in Hu-Hsing Kuo (p.65). As X s is continuous, then it is in π. Then choose Xs n = X + j = 1 n X tj 1 [tj,t j+1) Then X s pi and for any s (, 1). We can also show that E( X n s X s 2 ) E( X n s X s 2 ) < and so by the dominated convergence theorem, we have lim n E( X n s X s 2 ) ds =. Finally if X s π, the Itô integral X s db s is defined as in probability or in L 2 if X π. I(X) = lim δ I(X n ) 11. Lecture 11 - Tuesday 5 March Let I(X) = X s db s in the Itô sense. We then require (1) X s is F s = σ{b t, t s}-measurable (2) X2 s ds < almost surely. Then I(X) = lim n I(X n ) where X n is a sequence of step functions converging to X in L 2, that is, (X n s X s ) n ds We then show that this definition is independent of the sequence of step functions. For any Y s step process, we have (Y n s Y m s ) 2 dx 2 Theorem 11.1 (Properties of the Itô integral). (1) For any α, β R, X, Y π, (Y m s X s ) 2 ds + 2 I(αX + βy ) = αi(x) + βi(y ) (X n s X s ) 2 dx

18 MSH7 - APPLIED PROBABILITY AND STOCHASTIC CALCULUS 18 (2) For any X s π, we have If X π, Y s π, then (3) If X s continuous then E(I(X)) =, E(I 2 (X)) = E(I(X)I(Y )) = I(X) = lim δ E(X s Y s ) ds E(X 2 s ) ds X tj (B tj+1 B tj ) in probability. (4) If X is continuous and of finite variation and (R.S.) = (Itô ) B s dx s < then X s db s = X 1 B 1 X B B s dx s Proposition We now show why we require (1) X s is F s = σ{b t, t s}-measurable (2) X2 s ds < almost surely. Proof. Motivation for (1) X s = ζ j 1 (tj,t j+1) I(X) = ζ j (B tj+1 B tj ) E(I(X)) = = = E(ζ j (B tj+1 B tj )) E(E(ζ j (B tj+1 B tj ) F tj )) E(ζ j E(B tj+1 B tj F tj ))

19 MSH7 - APPLIED PROBABILITY AND STOCHASTIC CALCULUS 19 Motivation for (2) E(I 2 (X)) = E( ζ j (B tj+1 B tj )) 2 = 2 i<j E(E(ζ i ζ j (B tj+1 B tj )(B ti+1 B ti )) F ti ) + = = E(ζj 2 )(t j+1 t j ) E(X 2 t ) dt E(ζj 2 (B tj+1 B tj ) 2 ) We now show that there X n s π. such that E(Xn s X s ) 2 ds. We have E(I 2 (X)) = E(I(X)) I(X n ) + E(I(X n s )) 2 = E(I 2 (X n )) + 2E(I(X n )(I(X) I(X n ))) + E(I(X) I(X n )) 2 By Cauchy-Swartz, the middle term tens do zero, and by definition, the third term tends to zero. Thus we have lim n E(I2 (X n )) = E(I 2 (X)) Let Xs n be step processes. We now show By definition, we have E(X n s X s ) 2 I(X) = lim n I(Xn ) = lim and we proved the required result in lectures. E(X n s X s ) 2 ds δ X tj (B tj+1 B tj ). We now show that the (R.S.) integral exists if X is continuous and of finite variation, and B s dx s <, the our integration by parts formula holds. We have (R.S.) which is our Itô integral by definition. X s db s = lim δ = lim δ X yj (B tj+1 B tj ) X tj (B tj+1 B tj )

20 MSH7 - APPLIED PROBABILITY AND STOCHASTIC CALCULUS 2 Definition 11.3 (Itô process). Suppose Y t = X s db s, t is well defined, for X s π. Then Y t is an Itô processes. To show a process Y t is an Itô process, we need to show that X s ds)m a.s. and X s 2 ds < a.s. Theorem We have that Y t is continuous (except on a null set), is of infinite variation, as j+1 Y tj+1 Y tj = X s db s t j C min X s B tj+1 B tj s n B tj+1 B tj 12. Lecture 12 - Thursday 7 March From before, consider the Itô process Y t = X s db s. Lemma E( s X udb u F s ) =. Proof. Let X u = n ζ j1 [tj,t j+1). Then E( s X u db u F s ) = E(ζ j (B tj+1 B tj ) F s ) = = = E(E(ζ j (B tj+1 B tj ) F s ) F tj ) E(E(ζ j (B tj+1 B tj ) F tj ) F s ) where the third equality follows from the fact that (F t ) is an increasing sequence of σ-fields and the final follows from the fact that B tj+1 B tj is independent of F tj. Definition 12.2 (Local martingale). A process Y t is a local martingale if there exists a sequence of stopping times τ n, n 1 such that Y min(t,τ),ft is a martingale. Proposition We have the following. (1) If X s π, that is, E(X2 s ) ds <, then (Y t, F t ) is a martingale. (2) If X s π, that is X2 s ds < a.s., then (Y t, F t ) is a local martingale. (3) For f(x) satisfying f 2 (z) dz <., we have Y t = f(s) db s

21 MSH7 - APPLIED PROBABILITY AND STOCHASTIC CALCULUS 21 is a Gaussian process. Proof. We have Y t is trivially F t -measurable. We have E( Y t ) < a.s. as E(Yt 2 ) = E(X2 s ) ds < by assumption. We have from the previous lemma. E(Y t F s ) = E( s X u db u + = E(Y s F s ) + E( = Y s Now assuming X s π, the exists an X n s as n. Set Y n t = Xn u db u. Then s X u db u = Y t Y s s s X u db u F s ) X u db u F s ) a step process such that E(X n u X u ) 2 du = Y t Y n t + Y n t Y n s + Y N s Y s Then for each n, we have have Z = E( s X u db u ) = E(Y t Yt n F s ) + E(Ys n Y s F s ) We only need to prove E( Z 2 ) which implies E(Z 2 ) = and thus Z = almost surely. We by definition of Y n t. E(Z 2 ) 2E(Y t Y n t ) 2 + 2E(Y s Y n s ) Lecture 13 - Tuesday 12 March Theorem Let f be continuous. Then lim f(θ j )(B tj+1 B tj ) 2 = δ f(b s ) ds

22 MSH7 - APPLIED PROBABILITY AND STOCHASTIC CALCULUS 22 Proof. Let Q n = f(θ j ) f(b tj ) B tj+1 B tj 2 Note that n f(b t j )(B tj+1 B tj ) 2 f(b s) ds in probability. It only needs to show that Q n in probability. We have Q n max 1 j n f(θ j) f(b tj ) B tj+1 B tj 2 t = in probability from the quadratic variance of the brownian motion Theorem Let f be bounded on [, 1]. Then lim f(x tj )(B tj+1 B tj ) 2 = δ f(x s ds) Proof. We only prove for cases where E(X2 s ) ds is finite. In this case, there exists X n s step process such that as n. E(X n s X s ) 2 ds In fact, we can let Xs n = n X t j 1 [tj,t j+1]. Let Y n t = Then Yt n j+1 Yt n j = j+1 t j Xs n db s. Therefore, (Y tj+1 Y tj ) 2 = X n s db s (Y tj+1 Yt n j + Z 1j + Z 2j ) where Z 1j = Y tj+1 Y n t j+1 and Z 2j = Y tj = Y n t j. Continuing, we have = = (Y tj+1 Yt n j ) 2 + error Xt 2 j (B tj+1 B tj ) 2 + error if the error term goes to zero. We have X 2 s ds R n 2 Z 2 1j + 2 Z 2 2j + 2 Z 1j Y n t j+1 Y n t j + 2 Z 2j Y n t j+1 Y n t j + 2 Z 1j Z 2j + 2( Z 2 1j) 1/2 A 1/2 n + 2( Z 2 2j) 1/2 A 1/2 n + 2( Z 2 1j) 1/2 ( Z 2 2j) 1/2

Proof. We have
$$[X, Y](t) = \tfrac{1}{4}\big([X + Y, X + Y](t) - [X - Y, X - Y](t)\big) = \lim_{\delta \to 0}\sum_j (X_{t_{j+1}} - X_{t_j})(Y_{t_{j+1}} - Y_{t_j})$$
$$\le \lim_{\delta \to 0} \max_j |Y_{t_{j+1}} - Y_{t_j}| \sum_j |X_{t_{j+1}} - X_{t_j}| = 0,$$
as $Y_t$ is path continuous and $X_t$ has finite variation.

Corollary. From this theorem, we then have
$$dB_t\,dt = 0, \qquad (dt)^2 = 0, \qquad (dB_t)^2 = dt.$$

Corollary. For an Itô process $X_t$ given above, we then have
$$d[X, X](t) = dX_t\,dX_t = (\mu(t)\,dt + \sigma(t)\,dB_t)^2 = \sigma^2(t)\,dt.$$

Corollary. If $f(x)$ has a continuous second derivative, then
$$[f(B_t), B_t](t) = \int_0^t f'(B_s)\,ds.$$

Proof. From Itô's formula, we have $df(B_t) = f'(B_t)\,dB_t + \tfrac{1}{2} f''(B_t)\,dt$. Then from above, we have
$$d[f(B_t), B_t](t) = df(B_t)\,dB_t = f'(B_t)\,dt.$$

Theorem 14.4 (Itô's lemma). By Taylor's theorem, we have
$$df(t, X_t) = \frac{\partial f(t, X_t)}{\partial t}\,dt + \frac{\partial f(t, x)}{\partial x}\Big|_{x = X_t} dX_t + \frac{1}{2}\frac{\partial^2 f(t, x)}{\partial x^2}\Big|_{x = X_t} (dX_t)^2 + \frac{\partial^2 f(t, x)}{\partial x\,\partial t}\Big|_{x = X_t} dX_t\,dt$$
$$= \frac{\partial f(t, X_t)}{\partial t}\,dt + \frac{\partial f(t, x)}{\partial x}\Big|_{x = X_t}(\mu(t)\,dt + \sigma(t)\,dB_t) + \frac{1}{2}\frac{\partial^2 f(t, x)}{\partial x^2}\Big|_{x = X_t}\sigma^2(t)\,dt$$
$$= \left(\frac{\partial f}{\partial t} + \frac{\partial f}{\partial x}\mu(t) + \frac{1}{2}\frac{\partial^2 f}{\partial x^2}\sigma^2(t)\right)dt + \frac{\partial f}{\partial x}\sigma(t)\,dB_t.$$
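As a sanity check of Itô's formula, take $f(x) = x^2$ (no $t$-dependence), so that $f(B_t) - f(B_0) = 2\int_0^t B_s\,dB_s + t$; a minimal numerical sketch:

```python
import numpy as np

rng = np.random.default_rng(6)

t, n = 1.0, 200000
dB = rng.normal(0.0, np.sqrt(t / n), size=n)
B = np.concatenate([[0.0], np.cumsum(dB)])

ito_integral = np.sum(B[:-1] * dB)   # left-endpoint (Ito) sum of B dB
lhs = B[-1]**2                       # f(B_t) - f(B_0) with f(x) = x^2
rhs = 2 * ito_integral + t           # Ito's formula prediction

print(lhs, rhs)  # agree up to discretisation error
```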

24 MSH7 - APPLIED PROBABILITY AND STOCHASTIC CALCULUS 24 Proof. We have [X, Y ](t) = 1 ([X + Y, X + Y ] [X Y, X Y ]) 4 = lim (X tj+1 X tj )(Y tj+1 Y tj ) δ lim δ max Y tj+1 Y tj X tj+1 X tj as Y t is path continuous and X t has finite variation. Corollary. From this theorem, we then have db t dt =, (dt) 2 =, (db t ) 2 = dt Corollary. For an Itô process X t given above, we then have d[x, X](t) = dx t dx t = (µ(t)dt + σ(t) db t ) 2 = σ 2 (t) dt Corollary. If f(x) has a twice continuous derivative, then [f(b t ), B t ](t) = Proof. From Itô s formula, we have f (B s ) ds df(b t ) = f (B t ) db t f (B t ) dt Then from above, we have d[f(b t ), B t ](t) = df(b t ) db t = f (B t )dt Theorem 14.4 (Itô s lemma). By Taylor s theorem, we have df(t, X t ) = f(t, X t) f(t, x) dt + t x dx t f(t, x) x=xt 2 x 2 (dx t ) 2 x=xt f(t, x) 2 x t dx t dt x=xt = f(t, X t) f(t, x) dt + t x (µ(t) dt + σ(t) db t ) f(t, x) x=xt 2 x 2 σ 2 dt x=xt ( f = t + f x µ(t) ) f 2 x 2 σ2 (t) dt + f x σ(t) db t

25 MSH7 - APPLIED PROBABILITY AND STOCHASTIC CALCULUS 25 Example Let f(b t ) = e Bt. Then df(b t ) = e Bt db t ebt dt Theorem Assume T f 2 (t) dt <. Let X t = f(s) db s 1 t 2 f 2 (s) ds. Then Y t = e Xt is a martingale with respect to F t = σ(b s, s t). Proof. We have dy t = e X t dx t ex t (dx t ) 2 = e Xt [f(t) db t 1 2 f 2 (s) dt] ext f 2 dt = e X t f(t) db t and thus Y t = exs f(s) db s which is a martingale from previous work. 15. Lecture 15 - Tuesday 19 March Theorem 15.1 (Multivariate Itô s formula). Let B j (t) be a sequence of independent Brownian motions. Consider the Itô processes X i t, with dx i t = b i (t) dt + m σ ki (t) db k (t) Suppose f(t, x 1,..., x n ) is a continuous function of its components and has continuous partial derivatives f t, f f x i, 2 x i x j. Then We have df(t, X 1 t,..., X m t k=1 ) = f m t dt + f dxt i + 1 x i 2 dx i t dx j t = = i=1 m k,s=1 i, σ ki σ sj db k (t) db s (t) σ ki σ kj dt+ k=1 as d[b i, B j ](t) = when B i, B j are independent Example Let dx t = µ 1 (t)dt + σ 1 (t) db 1 (t) dy t = µ 2 (t)dt + σ 2 (t) db 2 (t) 2 f x i x j dx i t dx j t

26 MSH7 - APPLIED PROBABILITY AND STOCHASTIC CALCULUS 26 with B 1, B 2 independent. Then and by independence, d[x, Y ](t) = dx t dy t =. d(x t Y t ) = Y t dx t + X t dy t + d[x, Y ](t) Example Let Then In particular, if µ 1 = µ 2 =, then X t Y t = dx t = µ 1 (t)dt + σ 1 (t) db 1 (t) dy t = µ 2 (t)dt + σ 2 (t) db 1 (t). d(x t Y t ) = Y t dx t + X t dy t + σ 1 (t)σ 2 (t) dt. [σ 1 (s)y s + σ 2 (s)x s ] db s + Thus, Z t = X t Y t σ 1(s)σ 2 (s) dt is a martingale. Theorem 15.4 (Tanaka s formula). We have σ 1 (s)σ 2 (s) dt where B t a = a + 1 L(t, a) = lim ϵ> 2ϵ sgn(b s a) db s + L(t, a) 1 Bs a ϵ ds Proof. Let Then we have and Then by Itô s formula, we have f ϵ (B t ) = f ϵ () + x a ϵ 2 x a > ϵ f ϵ (x) = 1 (x a)2 x a ϵ 2ϵ 1 x > a + ϵ f ϵ(x) = 1 ϵ (x a) x a ϵ 1 x < a ϵ f ϵ (x) = x a > ϵ 1 x a ϵ ϵ f ϵ(b s) db s f ϵ (B s ) ds

27 MSH7 - APPLIED PROBABILITY AND STOCHASTIC CALCULUS 27 and Obviously 1 lim ϵ 2 f ϵ (B s ) ds = 1 ϵ Note that f ϵ(b s ) sgn(b s a) 2 ds = lim f ϵ() = a ϵ B s a ϵ 1 Bs a ϵ ds = L(t, a) 1 ϵ (B s a) sgn(b s a) 2 ds a.s. Theorem L(t, a) = Theorem If f is integrable on R, then L(t, s)f(s) ds = δ a (B s ) ds f(b s ) ds 16. Lecture 16 - Thursday 21 March Definition 16.1 (Linear cointegration). Consider two non stationary time series X t and Y t. If there exist coefficients α and β such that αx t + βy t = u t with u t stationary, then we say that X t and Y t are cointegrated. Definition 16.2 (Nonlinear cointegration). If Y t f(x t ) = u t is stationary, with f( ) a nonlinear function. 17. Lecture 17 - Tuesday 3 May Stochastic integrals for martingales. We now seek to define stochastic integrals with respect to processes other than Brownian motion. Example Let X t = Y s db s, and thus dx s = Y s db s. Then Z t = Y s dx s = Y s Y s db s Revus and Yov - Continuous Martingale and Brownian Motion Definition 17.2 (Martingale). A martingale with respect (1) M t adapted to F t. (2) E( M t ) <.

28 MSH7 - APPLIED PROBABILITY AND STOCHASTIC CALCULUS 28 (3) E(M t F s ) = M s a.s. A process is a submartingale if (3) is replace with E(M t F s ) M s. A process is a supermartingale if (3) is replaced with E(M t F s ) M s. Example Let N t be a poisson process with intensity λ. Then (1) N t is a submartingale with respect to the natural filtration. (2) N t λt is a martingale with respect to the natural filtration. Theorem If E(H2 (s)) ds < and H(s) is adapted to F s = σ(b t, t s), then Y t = H(s) db s, t is a continuous, square integrable martingale - that is, E(Y 2 t < ). Theorem Let M t be a continuous, square integrable martingale with respect to F t. Then there exists an adapted process H(s) such that E(H2 (s)) ds < and M t = M + where B t is a Brownian motion with respect to F t. H(s) db s Theorem M t, t is a Brownian motion if and only if it is a local continuous martingale with [M, M](t) = t, t under some probability measure Q. Proof. A local continuous martingale is of the form M t = M + H(s) db s. Then we have [M, M](t) = H 2 (s) ds = t H(s) = 1a.s. M t = B t. Theorem Let M t, t e a continuous local martingale such that [M, M](t). Let τ t = inf{s : [M, M](s) t}. Then M(τ t ) is a Brownian motion. Moreover, M(t) = B([M, M](t)), t. This is an application of the change of time method. Example B t is a Brownian motion - and then Y t = Bt 2 t is a martingale. We have dy t = H(s) db s = 2B s db s. Thus, B 2 t t = 2 B s db s.

29 MSH7 - APPLIED PROBABILITY AND STOCHASTIC CALCULUS 29 Definition 17.9 (Predictability of a stochastic process). A stochastic process X t, t is said to be predictable with respect to F t if X t F t for all t, where ( ) F t = F t+h, F t = σ F t h. h h> Example Let g t be a step process, with g t = ζ j 1 [tj,t j+1 )(t) Then g t is not predictable. Let g t be a step process, with g t = Then g t is predictable. i=1 ζ j 1 (tj,t j+1 ](t) i=1 Example Let N t be a Poisson process. Then N t is predictable, but N t is not predictable. From now on, assume M t, t is right continuous, square integrable martingale with left hand limits. Lemma M 2 t is a submartingale. Proof. E(M 2 t F s ) = E(M 2 s + 2(M t M s ) + (M t M s ) 2 F s ) = M 2 s + E((M t M s ) 2 F s ) M 2 s Theorem (Doob-Myer decomposition). By Doob-Myer we can write M 2 t = L t + A t where L t is a martingale, and A t is a predictable process, right continuous, and increasing, such that A =, E(A t ) <, t. A t is called the compensator of Mt 2, and is denoted by M, M (t). Example Consider Bt 2 = Bt 2 t + t. Then B, B (t) = t = [B, B](t) Theorem If M t is continuous then [M, M](t) = M, M (t).

30 MSH7 - APPLIED PROBABILITY AND STOCHASTIC CALCULUS 3 Example Let N t be a Poisson process. Then we know Ñt = N t λt is a martingale. We may prove Ñt 2 λt is a martingale, that is, Ñ, Ñ (t) = λt. However, [Ñ, Ñ](t) = λt + Ñt Ñ, Ñ (t) Example X t = f(s) db s is a continuous martingale. Thus, [X, X](t) = f 2 (s) ds = X, X (t) Theorem If M t is a continuous, square integrable martingale, then Mt 2 [M, M](t) is a martingale, and so which implies [M, M](t) = M, M (t) + martingale E[M, M](t) = E M, M (t) = EM 2 t We now turn to defining integrals such as where M s is a martingale. X s dm s Definition Let L 2 pred be the space of all predictable stochastic process X s satisfying the condition X 2 s d M, M (s) <. Then the integral X s dms is defined as before in two steps. (1) If X s L 2 pred and X s = n ζ j1 [tj,t j+1). Define I(X) = ζ j (M tj+1 M tj ). (2) For all X s L 2 pred, there exists a sequence of step process Xn s such that X n s X s in L 2. Define I(x) to be the limit in such situations such that E(I(X) I(X n )) 2. Proposition Properties of the integral.

31 MSH7 - APPLIED PROBABILITY AND STOCHASTIC CALCULUS 31 (1) If M s is a (local) martingale, then is a (local) martingale. f(s) dm s Proof. E( s f(u) dm u F s ) = (2) If M t is a square integrable martingale and satisfies then E( f 2 (s) d M, M (t)) < I(f) = f(s) dm s is square integrable with E(I(f)) =, and E(I 2 (f)) = f 2 (s) d M, M (s). In particular, if M(s) = s σ(u) db u, then X s dm s = X s σ(s) db s provided X2 s σ 2 (s) ds < and σ2 (s) ds (3) If X t = f(x) dm s and M s is a continuous, square integrable martingale, then [X, X](t) = f 2 (s) d[m, M](s) = X, X (t) 18. Lecture 18 - Tudsay 1 May Let M t be a martingale. Then M 2 t [M, M](t) is a martingale, and M 2 t = martingale + M, M (t) Recall that if M t is a continuous square integrable martingale, then M, M (t) = [M, M](t) Generally speaking, Theorem If [M, M](t) = M, M (t) + martingale t f 2 (s) d M, M (s) <

32 MSH7 - APPLIED PROBABILITY AND STOCHASTIC CALCULUS 32 a.s. then is well defined, and and EY 2 t = = Y t = f(s) dm s f 2 (s) d M, M (t) f 2 (s) d[m, M](t) Y, Y (t) = [Y, Y ](t) = if M t -continuous f 2 (s) d[m, M](s). Proof. By Itô s formula, we also have Hence dy t = f(t) dm t d[y, Y ](t) = dy t dy t [Y, Y ](t) = = f 2 (t) dm t dm t = f 2 (t) d[m, M](t) Y 2 t = Y dy 2 t = 2 f 2 (s) d[m, M](s) = Y, Y (t) Y s dy s + 1 [Y, Y ](t) Y s f(s) dm s + Y, Y (t) = 2Y t f(t) dm t + d Y, Y (t) Y, Y (t) = Y 2 t 2 Y s f(s) dm s = [Y, Y ](t) since Y sf(s) dm s is a martingale Itô s integration for continuous semimartingales. Definition Let (X t, F t ) be a continuous semimartingale. Then X t = M t + A t n where M t is a martingale and A t is a continuous adapted process of bounded variation (lim δ A t j+1 A tj < ).

33 MSH7 - APPLIED PROBABILITY AND STOCHASTIC CALCULUS 33 Definition 18.3 (Integrals for semimartingales). c s dx s = c s dm s + c s da s and since A t has bounded variation, the second integral is defined in the Riemann-Stiljes (R.S.) sense. Theorem 18.4 (Itô s formula). Let X t be a continuous semimartingale. Let f(x) have twice continuous derivatives. Then f(x t ) = f(x ) + f (X s ) dx s f (X s ) d[x, X](s) Proof. Partition the interval [, t], and use a Taylor expansion to express f(x) = f(x ) + f (x )(x x ) f (x )(x x ) Stochastic differential equations. Consider the equation We seek to solve for a function f(t, x) such that dx t = σ(t, X t ) db t = µ(t, X t ) dt X t = f(t, B t ). Such an f(t, B t ) is a solution to the stochastic differential equation. Definition 18.5 (Strong solution). X t = X + σ(s, X s) db s + µ(s, B s) ds 19. Lecture 19 - Thursday 12 May Theorem Let dx t = a(t, X t ) dt + b(t, X t ) db t Assume EX <. X is independent of B s and there exists a constant c > such that (1) a(t, x) + b(t, x) C(1 + x ). (2) a(t, x), b(t, x) satisfy the Lipschitz condition in x, i.e. a(t, x) a(t, y) + b(t, x) b(t, y) C x y for all t (, T ). Then there exists a unique (strong) solution. Example Let with c 1, c 2 constants. dx t = c 1 X t dt + c 2 X t db t,

Example. Let
$$dX_t = [c_1(t)X_t + c_2(t)]\,dt + [\sigma_1(t)X_t + \sigma_2(t)]\,dB_t.$$
Let $a(t, x) = c_1(t)x + c_2(t)$, $b(t, x) = \sigma_1(t)x + \sigma_2(t)$, and follow Kuo. Let $H_t = e^{-Y_t}$, where
$$Y_t = \int_0^t \sigma_1(s)\,dB_s + \int_0^t \Big(c_1(s) - \tfrac{1}{2}\sigma_1^2(s)\Big)\,ds.$$
Then by the Itô product formula, we have
$$d(H_t X_t) = H_t\big(dX_t - c_1(t)X_t\,dt - \sigma_1(t)X_t\,dB_t - \sigma_1(t)\sigma_2(t)\,dt\big).$$
Then by the definition of $X_t$, we obtain
$$d(H_t X_t) = H_t\big(\sigma_2(t)\,dB_t + c_2(t)\,dt - \sigma_1(t)\sigma_2(t)\,dt\big),$$
which can be integrated to yield
$$H_t X_t = C + \int_0^t H_s \sigma_2(s)\,dB_s + \int_0^t H_s\big(c_2(s) - \sigma_1(s)\sigma_2(s)\big)\,ds.$$
Dividing both sides by $H_t$ we obtain our solution $X_t$.

Theorem. The solution to the linear stochastic differential equation
$$dX_t = [c_1(t)X_t + c_2(t)]\,dt + [\sigma_1(t)X_t + \sigma_2(t)]\,dB_t$$
is given by
$$X_t = Ce^{Y_t} + \int_0^t e^{Y_t - Y_s}\sigma_2(s)\,dB_s + \int_0^t e^{Y_t - Y_s}\big(c_2(s) - \sigma_1(s)\sigma_2(s)\big)\,ds,$$
where
$$Y_t = \int_0^t \sigma_1(s)\,dB_s + \int_0^t \Big(c_1(s) - \tfrac{1}{2}\sigma_1^2(s)\Big)\,ds.$$

20. Lecture 20 - Tuesday 17 May

20.1. Numerical methods for stochastic differential equations.

Theorem 20.1 (Euler's method). For the stochastic differential equation $dX_t = a(X_t)\,dt + b(X_t)\,dB_t$, we simulate $X_t$ according to
$$X_{t_j} = X_{t_{j-1}} + a(X_{t_{j-1}})\,\Delta t_j + b(X_{t_{j-1}})\,\Delta B_{t_j}.$$

Theorem 20.2 (Milstein scheme). For the stochastic differential equation $dX_t = a(X_t)\,dt + b(X_t)\,dB_t$, we simulate $X_t$ according to
$$X_{t_j} = X_{t_{j-1}} + a(X_{t_{j-1}})\,\Delta t_j + b(X_{t_{j-1}})\,\Delta B_{t_j} + \tfrac{1}{2}\,b\,b'(X_{t_{j-1}})\big(\Delta B_{t_j}^2 - \Delta t_j\big).$$
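A minimal sketch of both schemes for the geometric Brownian motion $dX_t = c_1 X_t\,dt + c_2 X_t\,dB_t$ of the earlier example, compared with the exact solution $X_t = X_0 e^{(c_1 - c_2^2/2)t + c_2 B_t}$; the coefficients and step count below are arbitrary choices, not values from the notes.

```python
import numpy as np

rng = np.random.default_rng(7)

# Arbitrary example coefficients for dX = c1*X dt + c2*X dB, X_0 = x0.
c1, c2, x0, T, n = 0.1, 0.4, 1.0, 1.0, 1000
dt = T / n
dB = rng.normal(0.0, np.sqrt(dt), size=n)

x_e = x_m = x0
for db in dB:
    # Euler: X_j = X_{j-1} + a dt + b dB
    x_e = x_e + c1 * x_e * dt + c2 * x_e * db
    # Milstein adds 0.5 * b * b' * (dB^2 - dt); here b(x) = c2*x, so b'(x) = c2.
    x_m = x_m + c1 * x_m * dt + c2 * x_m * db + 0.5 * (c2 * x_m) * c2 * (db**2 - dt)

x_exact = x0 * np.exp((c1 - 0.5 * c2**2) * T + c2 * dB.sum())
print(x_e, x_m, x_exact)  # Milstein is typically the closer of the two
```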

20.2. Applications to mathematical finance.

Martingale method. Consider a market with risky security $S_t$ and riskless security $\beta_t$.

Definition 20.3 (Contingent claim). A random variable $C_T : \Omega \to \mathbb{R}$, $\mathcal{F}_T$-measurable, is called a contingent claim. If $C_T$ is $\sigma(S_T)$-measurable it is path-independent.

Definition 20.4 (Strategy). Let $a_t$ represent the number of units of $S_t$, and $b_t$ the number of units of $\beta_t$. If $a_t$, $b_t$ are $\mathcal{F}_t$-adapted, then they are strategies in our market model. Our strategy value $V_t$ at time $t$ is $V_t = a_t S_t + b_t \beta_t$.

Definition 20.5 (Self-financing strategy). A strategy $(a_t, b_t)$ is self-financing if $dV_t = a_t\,dS_t + b_t\,d\beta_t$. The intuition is that we make one investment at $t = 0$, and after that only rebalance between $S_t$ and $\beta_t$.

Definition 20.6 (Admissible strategy). $(a_t, b_t)$ is an admissible strategy if it is self-financing and $V_t \ge 0$ for all $t \le T$.

Definition 20.7 (Arbitrage). An arbitrage is an admissible strategy such that $V_0 = 0$, $V_T \ge 0$ and $P(V_T > 0) > 0$. Alternatively, an arbitrage is a trading strategy with $V_0 = 0$ and $E(V_T) > 0$.

Definition 20.8 (Attainable claim). A contingent claim $C_T$ is said to be attainable if there exists an admissible strategy $(a_t, b_t)$ such that $V_T = C_T$. In this case, the portfolio is said to replicate the claim. By the law of one price, $C_t = V_t$ at all $t$.

Definition 20.9 (Complete). The market is said to be complete if every contingent claim is attainable.

Theorem 20.10 (Harrison and Pliska). Let $P$ denote the real world measure of the underlying asset price $X_t$. If the market is arbitrage free, there exists an equivalent measure $P^*$ such that the discounted asset price $\hat{X}_t$ and every discounted attainable claim $\hat{C}_t$ are $P^*$-martingales. Further, if the market is complete, then $P^*$ is unique. In mathematical terms,
$$C_t = \beta_t\,E^*\big(\beta_T^{-1} C_T \mid \mathcal{F}_t\big).$$
$P^*$ is called the equivalent martingale measure (EMM) or the risk-neutral measure.

36 MSH7 - APPLIED PROBABILITY AND STOCHASTIC CALCULUS Lecture 21 - Thursday 19 May For a trading strategy (a t, b t ), then the value V t satisfies V t = V + where B s is the riskless asset. α s ds s + b s dβ s To price an attainable option X, let (a t, b t ) be a trading strategy with value V t that replicates X. Then to avoid arbitrage, the value of X at time t = is given by V Change of Measure. Let (Ω, F, P) be a probability space. Definition 21.1 (Equivalent measure). Let P and Q be measures on (Ω, F). Then for any A F, if P(A) = Q(A) = then we say the measures P and Q are equivalent. If P(A) = Q(A) =, we write Q << P. Theorem 21.2 (Radon-Nikodyn). Let Q << P. Then there exists a random variable λ such that λ, E P (λ) = 1 and Q(A) = for any A F. λ is P=almost surely unique. A dp = E p (λ1 A ) Conversely, if there exists λ such that λ 1, E P (λ) = 1, then defining Q(A) = λ dp and then Q is a probability measure and Q << P. Consequently, if Q << P, then whenever E Q ( Z ) <. A E Q (Z) = E P (λz) The random variable λ is called the density of Q with respect to P, and denoted by λ = dq dp Example Let X N(, 1) and Y N(µ, 1) under probability P. Then there exists a Q such that Q is equivalent to P and Y N(, 1) under Q.

37 MSH7 - APPLIED PROBABILITY AND STOCHASTIC CALCULUS 37 Proof. P(X A) = 1 2π Define Q(A) = A = 1 2π = 1 2π e t2 2 A dt µ2 µx e 2 dp A A µ2 µx e 2 e x2 2 dx e (µ+x)2 2 dx Then Q << P, P << Q and let λ = dq dp = µ 2 e µx 2 Then λ satisfies the conditions of Radon-Nikodyn theorem. Then we have E Q (Y ) = E P ((X + µ)λ) µ2 µx = (X + µ)e 2 dp = 1 2π (x + µ)e (µ+x)2 2 dx = 22. Lecture 22 - Tuesday 24 May Theorem Let λ(t), t T be a positive martingale with respect to F t such that E P (λ(t )) = 1. Define a new probability measure Q by Q(A) = λ(t ) dp Then Q << P and for any random variable X, we have A E Q (X) = E P (λ(t )X) E Q (X F t ) = E P ( λ(t )X λ(t) F t ) = E P(λ(T )X F t ) E P (λ(t ) F t ) a.s. and if X F t, then for any s t, we have ( ) λ(t)x E Q (X F s ) = E P λ(s) F s ( ) ( )

38 MSH7 - APPLIED PROBABILITY AND STOCHASTIC CALCULUS 38 Consequently a process S(t) is a Q-martingale if and only if S(t)λ(t) ( ) is a P-martingale Proof. ( ). We have Q(E P (λ(t )) F t = ) = E p (λ(t )1 EP (λ(t ) F t=)) = We have ( ). We have ( ) ( EP (λ(t )X F t ) E Q E P (λ(t ) F t ) 1 A = E Q λ(t) E ) P(λ(T )X F t ) E P (λ(t ) F t ) 1 A = E P (E P (λ(t )X F t )1 A ) = E P (λ(t )X1 A ) = E Q (X1 A ) ( ) λ(t)x 1 E P λ(s) F s λ(s) E P(λ(t)X F s ) = 1 λ(s) E P (E P (λ(t )X F t ) F s ) = 1 λ(s) E P(λ(T )X F s ) = E Q (X F s ) because of ( ). ( ). We have E Q (S(t) F u ) = S(u) E P ( λ(t)s(t) λ(u) F u ) = S(u) E P (λ(t)s(t) F u ) = λ(u)s(u) as required. Theorem Let B s, s T be a Brownian motion under P. Let S(t) = B t + µ t, u. Then there exists a Q equivalent to P such thats(t) is a Q-Brownian motion and λ(t ) = dq dp = e µb T 1 2 µ 2T. Note that S(t) is not a martingale under P, but it is a martingale under Q.

Proof. Under $Q$,
$$Q(B_0 = 0) = \int_{\{B_0 = 0\}} \lambda(T)\,dP = 1.$$
$S(t)$ is a $Q$-martingale if and only if $S(t)\lambda(t)$ is a $P$-martingale. But we have
$$X_t = S(t)\lambda(t) = (B_t + \mu t)\,e^{-\mu B_t - \frac{1}{2}\mu^2 t}$$
is a martingale. Finally, note that $[S, S](t) = [B, B](t) = t$.

22.1. Black-Scholes model.

Definition 22.3 (Black-Scholes model). The Black-Scholes model assumes the risky asset $S_t$ follows the diffusion process given by
$$\frac{dS_t}{S_t} = \mu\,dt + \sigma\,dB_t$$
and the riskless asset follows the diffusion
$$\frac{d\beta_t}{\beta_t} = r\,dt.$$
Define the discounted processes as follows:
$$\hat{S}_t = \frac{S_t}{\beta_t}, \qquad \hat{V}_t = \frac{V_t}{\beta_t}, \qquad \hat{C}_t = \frac{C_t}{\beta_t}.$$

23. Lecture 23 - Thursday 26 May

Lemma.
(a) By a simple application of Itô's lemma,
$$\frac{d\hat{S}_t}{\hat{S}_t} = (\mu - r)\,dt + \sigma\,dB_t.$$
(b) $\hat{S}_t$ is a $Q$-martingale with
$$\lambda = \frac{dQ}{dP} = e^{-qB_T - \frac{1}{2}q^2 T}, \qquad q = \frac{\mu - r}{\sigma}.$$
(c) Note that
$$\frac{d\hat{S}_t}{\hat{S}_t} = \sigma\,d\Big(B_t + \frac{\mu - r}{\sigma}t\Big) = \sigma\,d\hat{B}_t,$$
where $\hat{B}_t = B_t + qt$ is a Brownian motion under $Q$.
(d) $d\hat{S}_t = \sigma\hat{S}_t\,d\hat{B}_t$.

40 MSH7 - APPLIED PROBABILITY AND STOCHASTIC CALCULUS 4 (e) In a finite market, where S t takes only finitely many values, Ŝt is a Q-martingale is a necessary condition for no-arbitrage. (f) Note that ds t S t = µ dt + σ db t = r dt + σ d ˆB t Theorem A value process V t is self financing if and only if the discounted value process ˆV t is a Q-martingale. d ˆV t = a t dŝt dv t = αds t + b t dβ t V t = V + a s ds s + b s dβ s V t is self financing. Proof. By Itô s formula, we have d ˆV t = e rt dv t re rt V t dt = e rt (a t ds t + b t dβ t ) re rt (a t S t + b t β t ) dt a t (e rt ds t re rt dt) = a t dŝt Theorem In the Black-Scholes model, there are no arbitrage opportunities. Proof. For any admissible trading strategy (a t, b t ), we have that the discounted value process ˆV t is a Q-martingale. So if V =, then E( ˆV ) =, and we have E Q ( ˆV T ) = E Q ( ˆV T F ) = ˆV = which implies Q( ˆV T > ) =, which implies P(V T > ) =, which implies that E P (V T ) =, which then implies no arbitrage. Theorem For any self financing strategy, V t = a t S t + b t β t = V + And so a strategy is self financing if a u ds u + S t da t + β t db t + d[a, S](t) = We now consider several cases. (1) If a t is of bounded variation, then [a, S](t) =. Hence S t da t + β t db t =, b u dβ u

41 MSH7 - APPLIED PROBABILITY AND STOCHASTIC CALCULUS 41 which implies Hence da t db t <. db t = S t β t da t (2) If a t is a semi-martingale a t = a 2 t +A t, then b t must be a semi-martingale, where b t = b 2 t +B t where a 2 t, b 2 t are the martingale parts and A t, B t are of bounded variation. 24. Lecture 24 - Tuesday 31 May Theorem Given a claim C T under the self-financing assumption, there exists a Q-martingale such that In particular, we have V t = E Q ( e r(t t)c T F t ), F t = σ(b s, s t). V = E Q (e rt C T ) Proof. For any Q-martingale ˆV t, we have ˆV t = E Q ( ˆV T F t ). Then V t = E Q (e r(t t) C T F t ), as V T = C T. Theorem A claim is attainable (there exists a trading strategy replicating the claim), that is, G(t) = V t = V + G(t) a u ds u + b u dβ u V T, V T = C T Theorem Suppose that C T is a non-negative random variable, C t F T and E Q (C 2 T f < ), where Q is defined as before. Then (a) The claim is replicable. (b) where ĈT = e rt C T. V t = E Q ( e r(t t) C T F t ) In particular, V = E Q (e rt C T ) = E Q (ĈT ). Theorem Assume ˆV t = E Q (ĈT F t ). exists an adapted process H(s) such that ˆV t = E Q (ĈT F t ) Using the martingale representation theorem, there ˆV t = ˆV H(s) d ˆB s d ˆV t = H(t) d ˆB t.

On the other hand, $d\hat{V}_t = a_t\,d\hat{S}_t = a_t\sigma\hat{S}_t\,d\hat{B}_t$. Hence we obtain our required result,
$$a_t = \frac{H(t)}{\sigma\hat{S}_t},$$
and we can then solve for $b_t$.

Example. Let $C_T = f(S_T)$. Then $V_t = E_Q\big(e^{-r(T-t)}f(S_T) \mid \mathcal{F}_t\big)$. Since $\mathcal{F}_t = \sigma(B_s, s \le t) = \sigma(\hat{B}_s, s \le t)$, and $\hat{S}_t$ is a $Q$-martingale, we have
$$\hat{S}_t = \hat{S}_0\,e^{-\frac{\sigma^2}{2}t + \sigma\hat{B}_t}$$
and so
$$S_T = e^{rT}\hat{S}_T = \hat{S}_t\,e^{rT} e^{-\frac{\sigma^2}{2}(T-t) + \sigma(\hat{B}_T - \hat{B}_t)}.$$
Then
$$V_t = E_Q\Big[e^{-r(T-t)} f\big(e^{rT}\hat{S}_t\,e^{-\frac{\sigma^2}{2}(T-t) + \sigma(\hat{B}_T - \hat{B}_t)}\big)\Big],$$
and so $V_t = F(t, S_t)$, where
$$F(t, x) = E_Q\Big[e^{-r(T-t)} f\big(x\,e^{(r - \frac{\sigma^2}{2})(T-t) + \sigma\sqrt{T-t}\,Z}\big)\Big] = e^{-r(T-t)}\int_{\mathbb{R}} f\big(x\,e^{(r - \frac{\sigma^2}{2})(T-t) + \sigma\sqrt{T-t}\,z}\big)\,\varphi(z)\,dz,$$
with $Z \sim N(0, 1)$ and $\varphi(z) = \frac{1}{\sqrt{2\pi}}e^{-z^2/2}$.

Example. In particular, if $f(y) = (y - K)^+$, then with $\theta = T - t$ we obtain
$$F(t, x) = e^{-r\theta}\int_{-d_2}^{\infty}\big(x\,e^{(r - \frac{\sigma^2}{2})\theta + \sigma\sqrt{\theta}\,z} - K\big)\varphi(z)\,dz = x\,\Phi(d_2 + \sigma\sqrt{\theta}) - Ke^{-r\theta}\,\Phi(d_2) = x\,\Phi(d_1) - Ke^{-r\theta}\,\Phi(d_2),$$
where
$$d_1 = \frac{\log\left(\frac{x}{K}\right) + \left(r + \frac{\sigma^2}{2}\right)\theta}{\sigma\sqrt{\theta}}, \qquad d_2 = d_1 - \sigma\sqrt{\theta}.$$
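A direct implementation of this closed-form call price, as a minimal sketch (the numerical inputs are arbitrary example values):

```python
from math import log, sqrt, exp, erf

def norm_cdf(z):
    return 0.5 * (1 + erf(z / sqrt(2)))

def bs_call(x, K, r, sigma, theta):
    """Black-Scholes price F(t, x) of a European call with time to
    maturity theta = T - t, spot x, strike K, rate r, volatility sigma."""
    d1 = (log(x / K) + (r + 0.5 * sigma**2) * theta) / (sigma * sqrt(theta))
    d2 = d1 - sigma * sqrt(theta)
    return x * norm_cdf(d1) - K * exp(-r * theta) * norm_cdf(d2)

print(bs_call(x=100, K=100, r=0.05, sigma=0.2, theta=1.0))  # about 10.45
```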

Theorem 24.7 (Black-Scholes model summary). $V_t = a_t S_t + b_t\beta_t$, where
$$\frac{dS_t}{S_t} = \mu\,dt + \sigma\,dB_t, \qquad \frac{d\beta_t}{\beta_t} = r\,dt.$$
(a) $\hat{S}_t = e^{-rt}S_t$ is a $Q$-martingale, where $\frac{dQ}{dP} = e^{-qB_T - \frac{1}{2}q^2 T}$ and $q = \frac{\mu - r}{\sigma}$. Then $\hat{B}_t = B_t + \frac{\mu - r}{\sigma}t$ is a $Q$-Brownian motion, and $d\hat{S}_t = \sigma\hat{S}_t\,d\hat{B}_t$.
(b) $V_t$ is self-financing if $(a_t, b_t)$ satisfies $S_t\,da_t + \beta_t\,db_t + d[a, S](t) = 0$, which then implies $\hat{V}_t$ is a $Q$-martingale, where $\hat{V}_t = e^{-rt}V_t$.
(c) There are no arbitrage opportunities in the Black-Scholes model.

25. Lecture 25 - Thursday 2 June

Theorem 25.1 (Feynman-Kac formula).
(1) Suppose the function $F(t, x)$ solves the boundary value problem
$$\frac{\partial F(t, x)}{\partial t} + \mu(t, x)\frac{\partial F(t, x)}{\partial x} + \frac{1}{2}\sigma^2(t, x)\frac{\partial^2 F(t, x)}{\partial x^2} = 0$$
such that $F(T, x) = \Psi(x)$.
(2) Let $S_t$ be a solution of the SDE
$$dS_t = \mu(t, S_t)\,dt + \sigma(t, S_t)\,dB_t \tag{$*$}$$
where $B_t$ is a $Q$-Brownian motion.
(3) Assume
$$\int_0^T E\Big(\sigma(t, S_t)^2\Big(\frac{\partial F(t, S_t)}{\partial x}\Big)^2\Big)\,dt < \infty.$$
Then
$$F(t, S_t) = E_Q(\Psi(S_T) \mid \mathcal{F}_t) = E_Q(F(T, S_T) \mid \mathcal{F}_t),$$
where $\mathcal{F}_t = \sigma(B_s, s \le t)$.

Proof. It is enough to show that $F(t, S_t)$ is a martingale with respect to $\mathcal{F}_t$ under $Q$. By Itô's lemma, we have
$$dF(t, S_t) = \frac{\partial F(t, S_t)}{\partial t}\,dt + \frac{\partial F(t, S_t)}{\partial x}\,dS_t + \frac{1}{2}\frac{\partial^2 F(t, S_t)}{\partial x^2}(dS_t)^2$$
$$= \Big[\frac{\partial F}{\partial t} + \mu(t, S_t)\frac{\partial F}{\partial x} + \frac{\sigma^2(t, S_t)}{2}\frac{\partial^2 F}{\partial x^2}\Big]\,dt + \frac{\partial F(t, S_t)}{\partial x}\sigma(t, S_t)\,dB_t$$
$$= \frac{\partial F(t, S_t)}{\partial x}\sigma(t, S_t)\,dB_t,$$
which is a $Q$-martingale.
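The representation $F(t, S_t) = E_Q(\Psi(S_T) \mid \mathcal{F}_t)$ also suggests pricing by simulation under $Q$; a minimal sketch for the call of the previous example, which can be compared with the closed form above (same arbitrary example inputs):

```python
import numpy as np

rng = np.random.default_rng(8)

x, K, r, sigma, theta, n_paths = 100.0, 100.0, 0.05, 0.2, 1.0, 200000

# Under Q, S_T = x * exp((r - sigma^2/2) * theta + sigma * sqrt(theta) * Z).
Z = rng.normal(size=n_paths)
S_T = x * np.exp((r - 0.5 * sigma**2) * theta + sigma * np.sqrt(theta) * Z)

payoff = np.maximum(S_T - K, 0.0)
price = np.exp(-r * theta) * payoff.mean()
print(price)  # close to the Black-Scholes value of about 10.45
```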

44 MSH7 - APPLIED PROBABILITY AND STOCHASTIC CALCULUS 44 Theorem 25.2 (General Feyman-Kac formula). Let S t be a solution of the SDE ( ). Assume that there is a solution to the PDE Then F (t, x) t Proof. Again by Itô s lemma, df (t, S t ) = F (t, x) + µ(t, x) + 1 x 2 σ2 (t, x) 2 F (t, x) x 2 = r(t, x)f (t, x). F (t, S t ) = E Q ( e T t r(u,s u) du F (T, S T ) F t ) ( F t + µ F x + σ2 2 ) F 2 x 2 dt + F x σ(t, S t) db t = r(t, x)f (t, S t ) dt + dm t where M t = F x σ(u, S u) db u. Hence we have df (t, S t ) = r(t, S t )X t dt + dm t d [e ] ν t r(u,su) du X ν = e ν t r(u,su) du dm ν ( ) e T t r(u,s u) du F (T, S T ) = F (t, S t ) + e ν t r(u,s u) du dm ν ( ) t ( E Q e ) T t r(u,su) du F (T, S T ) F t = F (t, S t ) ( ) ) and so we obtain our result, T + E Q ( T e ν t r(u,su) du F t x σ(ν, S ν) db ν F t }{{} = F (t, S t ) = E Q ( e T t r(u,su) du F (T, S T ) F t )

I forgot to mention last time: in the Ito formula for two standard processes, putting

I forgot to mention last time: in the Ito formula for two standard processes, putting I forgot to mention last time: in the Ito formula for two standard processes, putting dx t = a t dt + b t db t dy t = α t dt + β t db t, and taking f(x, y = xy, one has f x = y, f y = x, and f xx = f yy

More information

1. Stochastic Processes and filtrations

1. Stochastic Processes and filtrations 1. Stochastic Processes and 1. Stoch. pr., A stochastic process (X t ) t T is a collection of random variables on (Ω, F) with values in a measurable space (S, S), i.e., for all t, In our case X t : Ω S

More information

PROBABILITY: LIMIT THEOREMS II, SPRING HOMEWORK PROBLEMS

PROBABILITY: LIMIT THEOREMS II, SPRING HOMEWORK PROBLEMS PROBABILITY: LIMIT THEOREMS II, SPRING 218. HOMEWORK PROBLEMS PROF. YURI BAKHTIN Instructions. You are allowed to work on solutions in groups, but you are required to write up solutions on your own. Please

More information

Bernardo D Auria Stochastic Processes /12. Notes. March 29 th, 2012

Bernardo D Auria Stochastic Processes /12. Notes. March 29 th, 2012 1 Stochastic Calculus Notes March 9 th, 1 In 19, Bachelier proposed for the Paris stock exchange a model for the fluctuations affecting the price X(t) of an asset that was given by the Brownian motion.

More information

MA8109 Stochastic Processes in Systems Theory Autumn 2013

MA8109 Stochastic Processes in Systems Theory Autumn 2013 Norwegian University of Science and Technology Department of Mathematical Sciences MA819 Stochastic Processes in Systems Theory Autumn 213 1 MA819 Exam 23, problem 3b This is a linear equation of the form

More information

Solution for Problem 7.1. We argue by contradiction. If the limit were not infinite, then since τ M (ω) is nondecreasing we would have

Solution for Problem 7.1. We argue by contradiction. If the limit were not infinite, then since τ M (ω) is nondecreasing we would have 362 Problem Hints and Solutions sup g n (ω, t) g(ω, t) sup g(ω, s) g(ω, t) µ n (ω). t T s,t: s t 1/n By the uniform continuity of t g(ω, t) on [, T], one has for each ω that µ n (ω) as n. Two applications

More information

The concentration of a drug in blood. Exponential decay. Different realizations. Exponential decay with noise. dc(t) dt.

The concentration of a drug in blood. Exponential decay. Different realizations. Exponential decay with noise. dc(t) dt. The concentration of a drug in blood Exponential decay C12 concentration 2 4 6 8 1 C12 concentration 2 4 6 8 1 dc(t) dt = µc(t) C(t) = C()e µt 2 4 6 8 1 12 time in minutes 2 4 6 8 1 12 time in minutes

More information

Stochastic Calculus February 11, / 33

Stochastic Calculus February 11, / 33 Martingale Transform M n martingale with respect to F n, n =, 1, 2,... σ n F n (σ M) n = n 1 i= σ i(m i+1 M i ) is a Martingale E[(σ M) n F n 1 ] n 1 = E[ σ i (M i+1 M i ) F n 1 ] i= n 2 = σ i (M i+1 M

More information

Stochastic Differential Equations.

Stochastic Differential Equations. Chapter 3 Stochastic Differential Equations. 3.1 Existence and Uniqueness. One of the ways of constructing a Diffusion process is to solve the stochastic differential equation dx(t) = σ(t, x(t)) dβ(t)

More information

Brownian Motion. 1 Definition Brownian Motion Wiener measure... 3

Brownian Motion. 1 Definition Brownian Motion Wiener measure... 3 Brownian Motion Contents 1 Definition 2 1.1 Brownian Motion................................. 2 1.2 Wiener measure.................................. 3 2 Construction 4 2.1 Gaussian process.................................

More information

Bernardo D Auria Stochastic Processes /10. Notes. Abril 13 th, 2010

Bernardo D Auria Stochastic Processes /10. Notes. Abril 13 th, 2010 1 Stochastic Calculus Notes Abril 13 th, 1 As we have seen in previous lessons, the stochastic integral with respect to the Brownian motion shows a behavior different from the classical Riemann-Stieltjes

More information

Filtrations, Markov Processes and Martingales. Lectures on Lévy Processes and Stochastic Calculus, Braunschweig, Lecture 3: The Lévy-Itô Decomposition

Filtrations, Markov Processes and Martingales. Lectures on Lévy Processes and Stochastic Calculus, Braunschweig, Lecture 3: The Lévy-Itô Decomposition Filtrations, Markov Processes and Martingales Lectures on Lévy Processes and Stochastic Calculus, Braunschweig, Lecture 3: The Lévy-Itô Decomposition David pplebaum Probability and Statistics Department,

More information

Solutions to the Exercises in Stochastic Analysis

Solutions to the Exercises in Stochastic Analysis Solutions to the Exercises in Stochastic Analysis Lecturer: Xue-Mei Li 1 Problem Sheet 1 In these solution I avoid using conditional expectations. But do try to give alternative proofs once we learnt conditional

More information

Brownian Motion and Stochastic Calculus

Brownian Motion and Stochastic Calculus ETHZ, Spring 17 D-MATH Prof Dr Martin Larsson Coordinator A Sepúlveda Brownian Motion and Stochastic Calculus Exercise sheet 6 Please hand in your solutions during exercise class or in your assistant s

More information

p 1 ( Y p dp) 1/p ( X p dp) 1 1 p

p 1 ( Y p dp) 1/p ( X p dp) 1 1 p Doob s inequality Let X(t) be a right continuous submartingale with respect to F(t), t 1 P(sup s t X(s) λ) 1 λ {sup s t X(s) λ} X + (t)dp 2 For 1 < p

More information

Applications of Ito s Formula

Applications of Ito s Formula CHAPTER 4 Applications of Ito s Formula In this chapter, we discuss several basic theorems in stochastic analysis. Their proofs are good examples of applications of Itô s formula. 1. Lévy s martingale

More information

Exercises. T 2T. e ita φ(t)dt.

Exercises. T 2T. e ita φ(t)dt. Exercises. Set #. Construct an example of a sequence of probability measures P n on R which converge weakly to a probability measure P but so that the first moments m,n = xdp n do not converge to m = xdp.

More information

Brownian motion. Samy Tindel. Purdue University. Probability Theory 2 - MA 539

Brownian motion. Samy Tindel. Purdue University. Probability Theory 2 - MA 539 Brownian motion Samy Tindel Purdue University Probability Theory 2 - MA 539 Mostly taken from Brownian Motion and Stochastic Calculus by I. Karatzas and S. Shreve Samy T. Brownian motion Probability Theory

More information

n E(X t T n = lim X s Tn = X s

n E(X t T n = lim X s Tn = X s Stochastic Calculus Example sheet - Lent 15 Michael Tehranchi Problem 1. Let X be a local martingale. Prove that X is a uniformly integrable martingale if and only X is of class D. Solution 1. If If direction:

More information

(A n + B n + 1) A n + B n

(A n + B n + 1) A n + B n 344 Problem Hints and Solutions Solution for Problem 2.10. To calculate E(M n+1 F n ), first note that M n+1 is equal to (A n +1)/(A n +B n +1) with probability M n = A n /(A n +B n ) and M n+1 equals

More information

(B(t i+1 ) B(t i )) 2

(B(t i+1 ) B(t i )) 2 ltcc5.tex Week 5 29 October 213 Ch. V. ITÔ (STOCHASTIC) CALCULUS. WEAK CONVERGENCE. 1. Quadratic Variation. A partition π n of [, t] is a finite set of points t ni such that = t n < t n1

More information

HJB equations. Seminar in Stochastic Modelling in Economics and Finance January 10, 2011

HJB equations. Seminar in Stochastic Modelling in Economics and Finance January 10, 2011 Department of Probability and Mathematical Statistics Faculty of Mathematics and Physics, Charles University in Prague petrasek@karlin.mff.cuni.cz Seminar in Stochastic Modelling in Economics and Finance

More information

Lecture 4: Ito s Stochastic Calculus and SDE. Seung Yeal Ha Dept of Mathematical Sciences Seoul National University

Lecture 4: Ito s Stochastic Calculus and SDE. Seung Yeal Ha Dept of Mathematical Sciences Seoul National University Lecture 4: Ito s Stochastic Calculus and SDE Seung Yeal Ha Dept of Mathematical Sciences Seoul National University 1 Preliminaries What is Calculus? Integral, Differentiation. Differentiation 2 Integral

More information

Lecture 22 Girsanov s Theorem

Lecture 22 Girsanov s Theorem Lecture 22: Girsanov s Theorem of 8 Course: Theory of Probability II Term: Spring 25 Instructor: Gordan Zitkovic Lecture 22 Girsanov s Theorem An example Consider a finite Gaussian random walk X n = n

More information

STOCHASTIC CALCULUS JASON MILLER AND VITTORIA SILVESTRI

STOCHASTIC CALCULUS JASON MILLER AND VITTORIA SILVESTRI STOCHASTIC CALCULUS JASON MILLER AND VITTORIA SILVESTRI Contents Preface 1 1. Introduction 1 2. Preliminaries 4 3. Local martingales 1 4. The stochastic integral 16 5. Stochastic calculus 36 6. Applications

More information

Lecture 12: Diffusion Processes and Stochastic Differential Equations

Lecture 12: Diffusion Processes and Stochastic Differential Equations Lecture 12: Diffusion Processes and Stochastic Differential Equations 1. Diffusion Processes 1.1 Definition of a diffusion process 1.2 Examples 2. Stochastic Differential Equations SDE) 2.1 Stochastic

More information

Question 1. The correct answers are: (a) (2) (b) (1) (c) (2) (d) (3) (e) (2) (f) (1) (g) (2) (h) (1)

Question 1. The correct answers are: (a) (2) (b) (1) (c) (2) (d) (3) (e) (2) (f) (1) (g) (2) (h) (1) Question 1 The correct answers are: a 2 b 1 c 2 d 3 e 2 f 1 g 2 h 1 Question 2 a Any probability measure Q equivalent to P on F 2 can be described by Q[{x 1, x 2 }] := q x1 q x1,x 2, 1 where q x1, q x1,x

More information

Optimal Stopping Problems and American Options

Optimal Stopping Problems and American Options Optimal Stopping Problems and American Options Nadia Uys A dissertation submitted to the Faculty of Science, University of the Witwatersrand, in fulfilment of the requirements for the degree of Master

More information

PROBABILITY: LIMIT THEOREMS II, SPRING HOMEWORK PROBLEMS

PROBABILITY: LIMIT THEOREMS II, SPRING HOMEWORK PROBLEMS PROBABILITY: LIMIT THEOREMS II, SPRING 15. HOMEWORK PROBLEMS PROF. YURI BAKHTIN Instructions. You are allowed to work on solutions in groups, but you are required to write up solutions on your own. Please

More information

Stochastic Differential Equations

Stochastic Differential Equations CHAPTER 1 Stochastic Differential Equations Consider a stochastic process X t satisfying dx t = bt, X t,w t dt + σt, X t,w t dw t. 1.1 Question. 1 Can we obtain the existence and uniqueness theorem for

More information

Feller Processes and Semigroups

Feller Processes and Semigroups Stat25B: Probability Theory (Spring 23) Lecture: 27 Feller Processes and Semigroups Lecturer: Rui Dong Scribe: Rui Dong ruidong@stat.berkeley.edu For convenience, we can have a look at the list of materials

More information

A D VA N C E D P R O B A B I L - I T Y

A D VA N C E D P R O B A B I L - I T Y A N D R E W T U L L O C H A D VA N C E D P R O B A B I L - I T Y T R I N I T Y C O L L E G E T H E U N I V E R S I T Y O F C A M B R I D G E Contents 1 Conditional Expectation 5 1.1 Discrete Case 6 1.2

More information

CIMPA SCHOOL, 2007 Jump Processes and Applications to Finance Monique Jeanblanc

CIMPA SCHOOL, 2007 Jump Processes and Applications to Finance Monique Jeanblanc CIMPA SCHOOL, 27 Jump Processes and Applications to Finance Monique Jeanblanc 1 Jump Processes I. Poisson Processes II. Lévy Processes III. Jump-Diffusion Processes IV. Point Processes 2 I. Poisson Processes

More information

1 Brownian Local Time

1 Brownian Local Time 1 Brownian Local Time We first begin by defining the space and variables for Brownian local time. Let W t be a standard 1-D Wiener process. We know that for the set, {t : W t = } P (µ{t : W t = } = ) =

More information

A new approach for investment performance measurement. 3rd WCMF, Santa Barbara November 2009

A new approach for investment performance measurement. 3rd WCMF, Santa Barbara November 2009 A new approach for investment performance measurement 3rd WCMF, Santa Barbara November 2009 Thaleia Zariphopoulou University of Oxford, Oxford-Man Institute and The University of Texas at Austin 1 Performance

More information

Verona Course April Lecture 1. Review of probability

Verona Course April Lecture 1. Review of probability Verona Course April 215. Lecture 1. Review of probability Viorel Barbu Al.I. Cuza University of Iaşi and the Romanian Academy A probability space is a triple (Ω, F, P) where Ω is an abstract set, F is

More information

Stochastic Analysis I S.Kotani April 2006

Stochastic Analysis I S.Kotani April 2006 Stochastic Analysis I S.Kotani April 6 To describe time evolution of randomly developing phenomena such as motion of particles in random media, variation of stock prices and so on, we have to treat stochastic

More information

ELEMENTS OF PROBABILITY THEORY

ELEMENTS OF PROBABILITY THEORY ELEMENTS OF PROBABILITY THEORY Elements of Probability Theory A collection of subsets of a set Ω is called a σ algebra if it contains Ω and is closed under the operations of taking complements and countable

More information

Stochastic Integration and Continuous Time Models

Stochastic Integration and Continuous Time Models Chapter 3 Stochastic Integration and Continuous Time Models 3.1 Brownian Motion The single most important continuous time process in the construction of financial models is the Brownian motion process.

More information

Lecture 17 Brownian motion as a Markov process

Lecture 17 Brownian motion as a Markov process Lecture 17: Brownian motion as a Markov process 1 of 14 Course: Theory of Probability II Term: Spring 2015 Instructor: Gordan Zitkovic Lecture 17 Brownian motion as a Markov process Brownian motion is

More information

On the submartingale / supermartingale property of diffusions in natural scale

On the submartingale / supermartingale property of diffusions in natural scale On the submartingale / supermartingale property of diffusions in natural scale Alexander Gushchin Mikhail Urusov Mihail Zervos November 13, 214 Abstract Kotani 5 has characterised the martingale property

More information

Lecture 21 Representations of Martingales

Lecture 21 Representations of Martingales Lecture 21: Representations of Martingales 1 of 11 Course: Theory of Probability II Term: Spring 215 Instructor: Gordan Zitkovic Lecture 21 Representations of Martingales Right-continuous inverses Let

More information

Stability of Stochastic Differential Equations

Stability of Stochastic Differential Equations Lyapunov stability theory for ODEs s Stability of Stochastic Differential Equations Part 1: Introduction Department of Mathematics and Statistics University of Strathclyde Glasgow, G1 1XH December 2010

More information

Optimal investment strategies for an index-linked insurance payment process with stochastic intensity

Optimal investment strategies for an index-linked insurance payment process with stochastic intensity for an index-linked insurance payment process with stochastic intensity Warsaw School of Economics Division of Probabilistic Methods Probability space ( Ω, F, P ) Filtration F = (F(t)) 0 t T satisfies

More information

Stochastic Calculus and Black-Scholes Theory MTH772P Exercises Sheet 1

Stochastic Calculus and Black-Scholes Theory MTH772P Exercises Sheet 1 Stochastic Calculus and Black-Scholes Theory MTH772P Exercises Sheet. For ξ, ξ 2, i.i.d. with P(ξ i = ± = /2 define the discrete-time random walk W =, W n = ξ +... + ξ n. (i Formulate and prove the property

More information

Part III Stochastic Calculus and Applications

Part III Stochastic Calculus and Applications Part III Stochastic Calculus and Applications Based on lectures by R. Bauerschmidt Notes taken by Dexter Chua Lent 218 These notes are not endorsed by the lecturers, and I have modified them often significantly

More information

Some Terminology and Concepts that We will Use, But Not Emphasize (Section 6.2)

Some Terminology and Concepts that We will Use, But Not Emphasize (Section 6.2) Some Terminology and Concepts that We will Use, But Not Emphasize (Section 6.2) Statistical analysis is based on probability theory. The fundamental object in probability theory is a probability space,

More information

Topics in fractional Brownian motion

Topics in fractional Brownian motion Topics in fractional Brownian motion Esko Valkeila Spring School, Jena 25.3. 2011 We plan to discuss the following items during these lectures: Fractional Brownian motion and its properties. Topics in

More information

1. Stochastic Process

1. Stochastic Process HETERGENEITY IN QUANTITATIVE MACROECONOMICS @ TSE OCTOBER 17, 216 STOCHASTIC CALCULUS BASICS SANG YOON (TIM) LEE Very simple notes (need to add references). It is NOT meant to be a substitute for a real

More information

Continuous Time Finance

Continuous Time Finance Continuous Time Finance Lisbon 2013 Tomas Björk Stockholm School of Economics Tomas Björk, 2013 Contents Stochastic Calculus (Ch 4-5). Black-Scholes (Ch 6-7. Completeness and hedging (Ch 8-9. The martingale

More information

(6, 4) Is there arbitrage in this market? If so, find all arbitrages. If not, find all pricing kernels.

(6, 4) Is there arbitrage in this market? If so, find all arbitrages. If not, find all pricing kernels. Advanced Financial Models Example sheet - Michaelmas 208 Michael Tehranchi Problem. Consider a two-asset model with prices given by (P, P 2 ) (3, 9) /4 (4, 6) (6, 8) /4 /2 (6, 4) Is there arbitrage in

More information

Reflected Brownian Motion

Reflected Brownian Motion Chapter 6 Reflected Brownian Motion Often we encounter Diffusions in regions with boundary. If the process can reach the boundary from the interior in finite time with positive probability we need to decide

More information

Mathematical Methods for Neurosciences. ENS - Master MVA Paris 6 - Master Maths-Bio ( )

Mathematical Methods for Neurosciences. ENS - Master MVA Paris 6 - Master Maths-Bio ( ) Mathematical Methods for Neurosciences. ENS - Master MVA Paris 6 - Master Maths-Bio (2014-2015) Etienne Tanré - Olivier Faugeras INRIA - Team Tosca November 26th, 2014 E. Tanré (INRIA - Team Tosca) Mathematical

More information

Jump Processes. Richard F. Bass

Jump Processes. Richard F. Bass Jump Processes Richard F. Bass ii c Copyright 214 Richard F. Bass Contents 1 Poisson processes 1 1.1 Definitions............................. 1 1.2 Stopping times.......................... 3 1.3 Markov

More information

FE 5204 Stochastic Differential Equations

FE 5204 Stochastic Differential Equations Instructor: Jim Zhu e-mail:zhu@wmich.edu http://homepages.wmich.edu/ zhu/ January 20, 2009 Preliminaries for dealing with continuous random processes. Brownian motions. Our main reference for this lecture

More information

STAT 331. Martingale Central Limit Theorem and Related Results

STAT 331. Martingale Central Limit Theorem and Related Results STAT 331 Martingale Central Limit Theorem and Related Results In this unit we discuss a version of the martingale central limit theorem, which states that under certain conditions, a sum of orthogonal

More information

Stochastic integration. P.J.C. Spreij

Stochastic integration. P.J.C. Spreij Stochastic integration P.J.C. Spreij this version: April 22, 29 Contents 1 Stochastic processes 1 1.1 General theory............................... 1 1.2 Stopping times...............................

More information

Theoretical Tutorial Session 2

Theoretical Tutorial Session 2 1 / 36 Theoretical Tutorial Session 2 Xiaoming Song Department of Mathematics Drexel University July 27, 216 Outline 2 / 36 Itô s formula Martingale representation theorem Stochastic differential equations

More information

Errata for Stochastic Calculus for Finance II Continuous-Time Models September 2006

Errata for Stochastic Calculus for Finance II Continuous-Time Models September 2006 1 Errata for Stochastic Calculus for Finance II Continuous-Time Models September 6 Page 6, lines 1, 3 and 7 from bottom. eplace A n,m by S n,m. Page 1, line 1. After Borel measurable. insert the sentence

More information

A Short Introduction to Diffusion Processes and Ito Calculus

A Short Introduction to Diffusion Processes and Ito Calculus A Short Introduction to Diffusion Processes and Ito Calculus Cédric Archambeau University College, London Center for Computational Statistics and Machine Learning c.archambeau@cs.ucl.ac.uk January 24,

More information

Brownian Motion. An Undergraduate Introduction to Financial Mathematics. J. Robert Buchanan. J. Robert Buchanan Brownian Motion

Brownian Motion. An Undergraduate Introduction to Financial Mathematics. J. Robert Buchanan. J. Robert Buchanan Brownian Motion Brownian Motion An Undergraduate Introduction to Financial Mathematics J. Robert Buchanan 2010 Background We have already seen that the limiting behavior of a discrete random walk yields a derivation of

More information

Lecture 4: Introduction to stochastic processes and stochastic calculus

Lecture 4: Introduction to stochastic processes and stochastic calculus Lecture 4: Introduction to stochastic processes and stochastic calculus Cédric Archambeau Centre for Computational Statistics and Machine Learning Department of Computer Science University College London

More information

Survival Analysis: Counting Process and Martingale. Lu Tian and Richard Olshen Stanford University

Survival Analysis: Counting Process and Martingale. Lu Tian and Richard Olshen Stanford University Survival Analysis: Counting Process and Martingale Lu Tian and Richard Olshen Stanford University 1 Lebesgue-Stieltjes Integrals G( ) is a right-continuous step function having jumps at x 1, x 2,.. b f(x)dg(x)

More information

A Class of Fractional Stochastic Differential Equations

A Class of Fractional Stochastic Differential Equations Vietnam Journal of Mathematics 36:38) 71 79 Vietnam Journal of MATHEMATICS VAST 8 A Class of Fractional Stochastic Differential Equations Nguyen Tien Dung Department of Mathematics, Vietnam National University,

More information

STAT Sample Problem: General Asymptotic Results

STAT Sample Problem: General Asymptotic Results STAT331 1-Sample Problem: General Asymptotic Results In this unit we will consider the 1-sample problem and prove the consistency and asymptotic normality of the Nelson-Aalen estimator of the cumulative

More information

Functional Limit theorems for the quadratic variation of a continuous time random walk and for certain stochastic integrals

Functional Limit theorems for the quadratic variation of a continuous time random walk and for certain stochastic integrals Functional Limit theorems for the quadratic variation of a continuous time random walk and for certain stochastic integrals Noèlia Viles Cuadros BCAM- Basque Center of Applied Mathematics with Prof. Enrico

More information

On a class of stochastic differential equations in a financial network model

On a class of stochastic differential equations in a financial network model 1 On a class of stochastic differential equations in a financial network model Tomoyuki Ichiba Department of Statistics & Applied Probability, Center for Financial Mathematics and Actuarial Research, University

More information

Universal examples. Chapter The Bernoulli process

Universal examples. Chapter The Bernoulli process Chapter 1 Universal examples 1.1 The Bernoulli process First description: Bernoulli random variables Y i for i = 1, 2, 3,... independent with P [Y i = 1] = p and P [Y i = ] = 1 p. Second description: Binomial

More information

µ X (A) = P ( X 1 (A) )

µ X (A) = P ( X 1 (A) ) 1 STOCHASTIC PROCESSES This appendix provides a very basic introduction to the language of probability theory and stochastic processes. We assume the reader is familiar with the general measure and integration

More information

The Pedestrian s Guide to Local Time

The Pedestrian s Guide to Local Time The Pedestrian s Guide to Local Time Tomas Björk, Department of Finance, Stockholm School of Economics, Box 651, SE-113 83 Stockholm, SWEDEN tomas.bjork@hhs.se November 19, 213 Preliminary version Comments

More information

Stochastic calculus without probability: Pathwise integration and functional calculus for functionals of paths with arbitrary Hölder regularity

Stochastic calculus without probability: Pathwise integration and functional calculus for functionals of paths with arbitrary Hölder regularity Stochastic calculus without probability: Pathwise integration and functional calculus for functionals of paths with arbitrary Hölder regularity Rama Cont Joint work with: Anna ANANOVA (Imperial) Nicolas

More information

Stochastic Calculus (Lecture #3)

Stochastic Calculus (Lecture #3) Stochastic Calculus (Lecture #3) Siegfried Hörmann Université libre de Bruxelles (ULB) Spring 2014 Outline of the course 1. Stochastic processes in continuous time. 2. Brownian motion. 3. Itô integral:

More information

STAT331 Lebesgue-Stieltjes Integrals, Martingales, Counting Processes

STAT331 Lebesgue-Stieltjes Integrals, Martingales, Counting Processes STAT331 Lebesgue-Stieltjes Integrals, Martingales, Counting Processes This section introduces Lebesgue-Stieltjes integrals, and defines two important stochastic processes: a martingale process and a counting

More information

Infinite-dimensional methods for path-dependent equations

Infinite-dimensional methods for path-dependent equations Infinite-dimensional methods for path-dependent equations (Università di Pisa) 7th General AMaMeF and Swissquote Conference EPFL, Lausanne, 8 September 215 Based on Flandoli F., Zanco G. - An infinite-dimensional

More information

Predicting the Time of the Ultimate Maximum for Brownian Motion with Drift

Predicting the Time of the Ultimate Maximum for Brownian Motion with Drift Proc. Math. Control Theory Finance Lisbon 27, Springer, 28, 95-112 Research Report No. 4, 27, Probab. Statist. Group Manchester 16 pp Predicting the Time of the Ultimate Maximum for Brownian Motion with

More information

1.1 Definition of BM and its finite-dimensional distributions

1.1 Definition of BM and its finite-dimensional distributions 1 Brownian motion Brownian motion as a physical phenomenon was discovered by botanist Robert Brown as he observed a chaotic motion of particles suspended in water. The rigorous mathematical model of BM

More information

Robust Mean-Variance Hedging via G-Expectation

Robust Mean-Variance Hedging via G-Expectation Robust Mean-Variance Hedging via G-Expectation Francesca Biagini, Jacopo Mancin Thilo Meyer Brandis January 3, 18 Abstract In this paper we study mean-variance hedging under the G-expectation framework.

More information

Uniformly Uniformly-ergodic Markov chains and BSDEs

Uniformly Uniformly-ergodic Markov chains and BSDEs Uniformly Uniformly-ergodic Markov chains and BSDEs Samuel N. Cohen Mathematical Institute, University of Oxford (Based on joint work with Ying Hu, Robert Elliott, Lukas Szpruch) Centre Henri Lebesgue,

More information

Introduction Optimality and Asset Pricing

Introduction Optimality and Asset Pricing Introduction Optimality and Asset Pricing Andrea Buraschi Imperial College Business School October 2010 The Euler Equation Take an economy where price is given with respect to the numéraire, which is our

More information

LECTURE 2: LOCAL TIME FOR BROWNIAN MOTION

LECTURE 2: LOCAL TIME FOR BROWNIAN MOTION LECTURE 2: LOCAL TIME FOR BROWNIAN MOTION We will define local time for one-dimensional Brownian motion, and deduce some of its properties. We will then use the generalized Ray-Knight theorem proved in

More information

Stochastic Calculus for Finance II - some Solutions to Chapter VII

Stochastic Calculus for Finance II - some Solutions to Chapter VII Stochastic Calculus for Finance II - some Solutions to Chapter VII Matthias hul Last Update: June 9, 25 Exercise 7 Black-Scholes-Merton Equation for the up-and-out Call) i) We have ii) We first compute

More information

A Change of Variable Formula with Local Time-Space for Bounded Variation Lévy Processes with Application to Solving the American Put Option Problem 1

A Change of Variable Formula with Local Time-Space for Bounded Variation Lévy Processes with Application to Solving the American Put Option Problem 1 Chapter 3 A Change of Variable Formula with Local Time-Space for Bounded Variation Lévy Processes with Application to Solving the American Put Option Problem 1 Abstract We establish a change of variable

More information

The multidimensional Ito Integral and the multidimensional Ito Formula. Eric Mu ller June 1, 2015 Seminar on Stochastic Geometry and its applications

The multidimensional Ito Integral and the multidimensional Ito Formula. Eric Mu ller June 1, 2015 Seminar on Stochastic Geometry and its applications The multidimensional Ito Integral and the multidimensional Ito Formula Eric Mu ller June 1, 215 Seminar on Stochastic Geometry and its applications page 2 Seminar on Stochastic Geometry and its applications

More information

Random Variables. Random variables. A numerically valued map X of an outcome ω from a sample space Ω to the real line R

Random Variables. Random variables. A numerically valued map X of an outcome ω from a sample space Ω to the real line R In probabilistic models, a random variable is a variable whose possible values are numerical outcomes of a random phenomenon. As a function or a map, it maps from an element (or an outcome) of a sample

More information

Hardy-Stein identity and Square functions

Hardy-Stein identity and Square functions Hardy-Stein identity and Square functions Daesung Kim (joint work with Rodrigo Bañuelos) Department of Mathematics Purdue University March 28, 217 Daesung Kim (Purdue) Hardy-Stein identity UIUC 217 1 /

More information

An Introduction to Stochastic Calculus

An Introduction to Stochastic Calculus An Introduction to Stochastic Calculus Haijun Li lih@math.wsu.edu Department of Mathematics Washington State University Weeks 3-4 Haijun Li An Introduction to Stochastic Calculus Weeks 3-4 1 / 22 Outline

More information

Poisson random measure: motivation

Poisson random measure: motivation : motivation The Lévy measure provides the expected number of jumps by time unit, i.e. in a time interval of the form: [t, t + 1], and of a certain size Example: ν([1, )) is the expected number of jumps

More information

On pathwise stochastic integration

On pathwise stochastic integration On pathwise stochastic integration Rafa l Marcin Lochowski Afican Institute for Mathematical Sciences, Warsaw School of Economics UWC seminar Rafa l Marcin Lochowski (AIMS, WSE) On pathwise stochastic

More information

Gaussian, Markov and stationary processes

Gaussian, Markov and stationary processes Gaussian, Markov and stationary processes Gonzalo Mateos Dept. of ECE and Goergen Institute for Data Science University of Rochester gmateosb@ece.rochester.edu http://www.ece.rochester.edu/~gmateosb/ November

More information

Optional Stopping Theorem Let X be a martingale and T be a stopping time such

Optional Stopping Theorem Let X be a martingale and T be a stopping time such Plan Counting, Renewal, and Point Processes 0. Finish FDR Example 1. The Basic Renewal Process 2. The Poisson Process Revisited 3. Variants and Extensions 4. Point Processes Reading: G&S: 7.1 7.3, 7.10

More information

A Barrier Version of the Russian Option

A Barrier Version of the Russian Option A Barrier Version of the Russian Option L. A. Shepp, A. N. Shiryaev, A. Sulem Rutgers University; shepp@stat.rutgers.edu Steklov Mathematical Institute; shiryaev@mi.ras.ru INRIA- Rocquencourt; agnes.sulem@inria.fr

More information

Stochastic Processes II/ Wahrscheinlichkeitstheorie III. Lecture Notes

Stochastic Processes II/ Wahrscheinlichkeitstheorie III. Lecture Notes BMS Basic Course Stochastic Processes II/ Wahrscheinlichkeitstheorie III Michael Scheutzow Lecture Notes Technische Universität Berlin Sommersemester 218 preliminary version October 12th 218 Contents

More information

A Concise Course on Stochastic Partial Differential Equations

A Concise Course on Stochastic Partial Differential Equations A Concise Course on Stochastic Partial Differential Equations Michael Röckner Reference: C. Prevot, M. Röckner: Springer LN in Math. 1905, Berlin (2007) And see the references therein for the original

More information

Convergence at first and second order of some approximations of stochastic integrals

Convergence at first and second order of some approximations of stochastic integrals Convergence at first and second order of some approximations of stochastic integrals Bérard Bergery Blandine, Vallois Pierre IECN, Nancy-Université, CNRS, INRIA, Boulevard des Aiguillettes B.P. 239 F-5456

More information

The main purpose of this chapter is to prove the rst and second fundamental theorem of asset pricing in a so called nite market model.

The main purpose of this chapter is to prove the rst and second fundamental theorem of asset pricing in a so called nite market model. 1 2. Option pricing in a nite market model (February 14, 2012) 1 Introduction The main purpose of this chapter is to prove the rst and second fundamental theorem of asset pricing in a so called nite market

More information

MULTIDIMENSIONAL WICK-ITÔ FORMULA FOR GAUSSIAN PROCESSES

MULTIDIMENSIONAL WICK-ITÔ FORMULA FOR GAUSSIAN PROCESSES MULTIDIMENSIONAL WICK-ITÔ FORMULA FOR GAUSSIAN PROCESSES D. NUALART Department of Mathematics, University of Kansas Lawrence, KS 6645, USA E-mail: nualart@math.ku.edu S. ORTIZ-LATORRE Departament de Probabilitat,

More information

Problem 1. Construct a filtered probability space on which a Brownian motion W and an adapted process X are defined and such that

Problem 1. Construct a filtered probability space on which a Brownian motion W and an adapted process X are defined and such that Stochatic Calculu Example heet 4 - Lent 5 Michael Tehranchi Problem. Contruct a filtered probability pace on which a Brownian motion W and an adapted proce X are defined and uch that dx t = X t t dt +

More information

The Cameron-Martin-Girsanov (CMG) Theorem

The Cameron-Martin-Girsanov (CMG) Theorem The Cameron-Martin-Girsanov (CMG) Theorem There are many versions of the CMG Theorem. In some sense, there are many CMG Theorems. The first version appeared in ] in 944. Here we present a standard version,

More information

Man Kyu Im*, Un Cig Ji **, and Jae Hee Kim ***

Man Kyu Im*, Un Cig Ji **, and Jae Hee Kim *** JOURNAL OF THE CHUNGCHEONG MATHEMATICAL SOCIETY Volume 19, No. 4, December 26 GIRSANOV THEOREM FOR GAUSSIAN PROCESS WITH INDEPENDENT INCREMENTS Man Kyu Im*, Un Cig Ji **, and Jae Hee Kim *** Abstract.

More information

for all f satisfying E[ f(x) ] <.

for all f satisfying E[ f(x) ] <. . Let (Ω, F, P ) be a probability space and D be a sub-σ-algebra of F. An (H, H)-valued random variable X is independent of D if and only if P ({X Γ} D) = P {X Γ}P (D) for all Γ H and D D. Prove that if

More information