Stochastic Calculus (Lecture of February 11, 2009)

Martingale Transform

Let $M_n$ be a martingale with respect to $\mathcal{F}_n$, $n = 0, 1, 2, \ldots$, and let $\sigma_n$ be $\mathcal{F}_n$-measurable. Then the martingale transform
$$(\sigma \cdot M)_n = \sum_{i=0}^{n-1} \sigma_i (M_{i+1} - M_i)$$
is again a martingale:
$$E[(\sigma \cdot M)_n \mid \mathcal{F}_{n-1}] = E\Big[\sum_{i=0}^{n-1} \sigma_i (M_{i+1} - M_i) \,\Big|\, \mathcal{F}_{n-1}\Big] = \sum_{i=0}^{n-2} \sigma_i (M_{i+1} - M_i) + \sigma_{n-1} E[M_n - M_{n-1} \mid \mathcal{F}_{n-1}] = (\sigma \cdot M)_{n-1}.$$
$(\sigma \cdot M)_n$ is the fortune at time $n$ using strategy $\sigma$. The stochastic integral $\int \sigma\, dB$ is the continuous-time version, based on the martingale $B(t)$.
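
As a quick sanity check, here is a minimal discrete-time simulation sketch (the simple random walk $M$ and the sign-dependent strategy $\sigma$ are illustrative assumptions, not taken from the slides): the empirical mean of the martingale transform stays near zero, as it must for a martingale started at $0$.

    import numpy as np

    rng = np.random.default_rng(0)
    n_paths, n_steps = 100_000, 50

    steps = rng.choice([-1.0, 1.0], size=(n_paths, n_steps))   # fair-coin martingale increments
    M = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(steps, axis=1)], axis=1)

    transform = np.zeros(n_paths)
    for i in range(n_steps):
        sigma_i = np.where(M[:, i] >= 0, 1.0, 2.0)             # previsible: depends only on F_i
        transform += sigma_i * (M[:, i + 1] - M[:, i])

    print("E[(sigma.M)_n] ~", transform.mean())                 # close to 0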

Note one can also define stochastic integrals with respect to continuous martingales $M(t)$ other than Brownian motion. The only thing that changes is that we use the quadratic variation
$$\langle M, M \rangle_t = \lim \sum_i \big(M(t_{i+1}) - M(t_i)\big)^2.$$
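
A small numerical illustration (assuming Brownian motion is simulated by independent Gaussian increments on a fine grid): the sum of squared increments over $[0, t]$ concentrates near $t$, which is the quadratic variation of Brownian motion.

    import numpy as np

    rng = np.random.default_rng(1)
    t, n = 2.0, 100_000
    dB = rng.normal(0.0, np.sqrt(t / n), size=n)                # increments of B on [0, t]
    print("sum of squared increments:", np.sum(dB**2), "  t =", t)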

Itô's Lemma. Let $f(x)$ be twice continuously differentiable. Then
$$df(B) = f'(B)\,dB + \tfrac12 f''(B)\,dt.$$
Proof. First of all we can assume without loss of generality that $f$, $f'$ and $f''$ are all uniformly bounded, for if we can establish the lemma in the uniformly bounded case, we can approximate $f$ by $f_n$ so that all the corresponding derivatives are bounded and converge to those of $f$ uniformly on compact sets.

Let $s = t_0 < t_1 < t_2 < \cdots < t_n = t$. We have
$$f(B(t)) - f(B(s)) = \sum_{j=0}^{n-1} \big[f(B(t_{j+1})) - f(B(t_j))\big]
= \sum_{j=0}^{n-1} f'(B(t_j))\big(B(t_{j+1}) - B(t_j)\big) + \frac12 \sum_{j=0}^{n-1} f''(B(\xi_j))\big(B(t_{j+1}) - B(t_j)\big)^2, \qquad \xi_j \in [t_j, t_{j+1}].$$
Let the width of the partition go to zero. By definition of the stochastic integral,
$$\sum_{j=0}^{n-1} f'(B(t_j))\big(B(t_{j+1}) - B(t_j)\big) \to \int_s^t f'\,dB.$$

Finally we want to show that
$$\sum_{j=0}^{n-1} f''(B(\xi_j))\big(B(t_{j+1}) - B(t_j)\big)^2 \to \int_s^t f''(B(u))\,du.$$
It's a lot like the computation of the quadratic variation. Suppose we had $f''(B(t_j))$ inside instead of $f''(B(\xi_j))$:
$$E\Big[\sum_{j=0}^{n-1} f''(B(t_j))\Big(\big(B(t_{j+1}) - B(t_j)\big)^2 - (t_{j+1} - t_j)\Big)\Big]^2
= \sum_{i,j=0}^{n-1} E\big[f''(B(t_i))X_i\, f''(B(t_j))X_j\big], \qquad X_j = \big(B(t_{j+1}) - B(t_j)\big)^2 - (t_{j+1} - t_j).$$
Suppose $i < j$:
$$E\big[f''(B(t_i))X_i\, f''(B(t_j))X_j\big] = E\Big[E\big[f''(B(t_i))X_i\, f''(B(t_j))X_j \mid \mathcal{F}(t_j)\big]\Big] = E\Big[f''(B(t_i))X_i\, f''(B(t_j))\,E\big[X_j \mid \mathcal{F}(t_j)\big]\Big] = 0.$$

Hence
$$E\Big[\sum_{j=0}^{n-1} f''(B(t_j))\Big(\big(B(t_{j+1}) - B(t_j)\big)^2 - (t_{j+1} - t_j)\Big)\Big]^2
= \sum_{j=0}^{n-1} E\Big[\big(f''(B(t_j))\big)^2 \Big[\big(B(t_{j+1}) - B(t_j)\big)^2 - (t_{j+1} - t_j)\Big]^2\Big] \to 0,$$
so
$$\sum_{j=0}^{n-1} f''(B(t_j))\big(B(t_{j+1}) - B(t_j)\big)^2 \;\xrightarrow{L^2}\; \int_s^t f''(B(u))\,du.$$

We still have to show
$$\sum_{j=0}^{n-1} \big(f''(B(\xi_j)) - f''(B(t_j))\big)\big(B(t_{j+1}) - B(t_j)\big)^2 \to 0.$$
Taking expectations and using Cauchy-Schwarz we get
$$\sum_{j=0}^{n-1} E\Big[\big|f''(B(\xi_j)) - f''(B(t_j))\big|\,\big(B(t_{j+1}) - B(t_j)\big)^2\Big]
\le \sum_{j=0}^{n-1} \sqrt{E\big[\big(f''(B(\xi_j)) - f''(B(t_j))\big)^2\big]}\,\sqrt{E\big[\big(B(t_{j+1}) - B(t_j)\big)^4\big]}
= C \sum_{j=0}^{n-1} \sqrt{E\big[\big(f''(B(\xi_j)) - f''(B(t_j))\big)^2\big]}\,(t_{j+1} - t_j) \to 0$$
by continuity. So we have proved that
$$f(B(t)) - f(B(s)) = \int_s^t f'(B(u))\,dB(u) + \frac12 \int_s^t f''(B(u))\,du,$$
which is Itô's formula.
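
Here is a short numerical check of the formula just proved, under the assumptions that $f(x) = \sin(x)$ and the stochastic integral is approximated by a left-endpoint sum; the mean-square gap between the two sides is small and shrinks with the mesh.

    import numpy as np

    rng = np.random.default_rng(2)
    t, n, n_paths = 1.0, 4000, 200
    dt = t / n
    dB = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n))
    B = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(dB, axis=1)], axis=1)

    lhs = np.sin(B[:, -1]) - np.sin(B[:, 0])                    # f(B_t) - f(B_0), f = sin
    ito_sum = np.sum(np.cos(B[:, :-1]) * dB, axis=1)            # int f'(B) dB, left endpoints
    correction = 0.5 * dt * np.sum(-np.sin(B[:, :-1]), axis=1)  # (1/2) int f''(B) dt
    print("mean square error:", np.mean((lhs - ito_sum - correction) ** 2))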

1. In differential notation Itô's formula reads $df(B) = f'(B)\,dB + \frac12 f''(B)\,dt$. The Taylor series is $df(B) = \sum_{n=1}^\infty \frac{1}{n!} f^{(n)}(B)\,(dB)^n$. In normal calculus we would have $(dB)^n = 0$ for $n \ge 2$, but because of the finite quadratic variation of Brownian paths we have $(dB)^2 = dt$, while still $(dB)^n = 0$ for $n \ge 3$.
2. If the function $f$ depends on $t$ as well as $B(t)$, the formula is
$$df(t, B(t)) = \frac{\partial f}{\partial t}(t, B(t))\,dt + \frac{\partial f}{\partial x}(t, B(t))\,dB(t) + \frac12 \frac{\partial^2 f}{\partial x^2}(t, B(t))\,dt.$$
The proof is about the same as in the special case above.

Exponential Martingales

$$M_t = e^{\lambda B_t - \frac12 \lambda^2 t}$$
Itô's formula:
$$df(t, B(t)) = \Big(\frac{\partial f}{\partial t} + \frac12 \frac{\partial^2 f}{\partial x^2}\Big)(t, B(t))\,dt + \frac{\partial f}{\partial x}(t, B(t))\,dB(t).$$
With $f(t, x) = e^{\lambda x - \frac12 \lambda^2 t}$,
$$\frac{\partial f}{\partial t} + \frac12 \frac{\partial^2 f}{\partial x^2} = 0, \qquad \frac{\partial f}{\partial x} = \lambda f,$$
so
$$M_t = 1 + \int_0^t \lambda\, e^{\lambda B_s - \frac12 \lambda^2 s}\,dB_s.$$
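
A quick Monte Carlo sanity check (the parameter values are arbitrary assumptions): since $M_t$ is a martingale with $M_0 = 1$, its expectation should be $1$.

    import numpy as np

    rng = np.random.default_rng(3)
    lam, t = 1.5, 2.0
    B_t = rng.normal(0.0, np.sqrt(t), size=1_000_000)
    print("E[M_t] ~", np.mean(np.exp(lam * B_t - 0.5 * lam**2 * t)))   # should be close to 1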

Geometric Brownian motion

$$S_t = S_0\, e^{(\mu - \frac{\sigma^2}{2})t + \sigma B_t}$$
is called geometric (or exponential) Brownian motion. $S_t > 0$, so it is (Samuelson) a better model of stock prices than $B_t$ (Bachelier). $\mu$ = drift, $\sigma$ = volatility. Itô's formula
$$df(t, B(t)) = \Big(\frac{\partial f}{\partial t} + \frac12 \frac{\partial^2 f}{\partial x^2}\Big)(t, B(t))\,dt + \frac{\partial f}{\partial x}(t, B(t))\,dB(t)$$
with $f(t, x) = e^{(\mu - \frac{\sigma^2}{2})t + \sigma x}$ gives
$$\frac{\partial f}{\partial t} + \frac12 \frac{\partial^2 f}{\partial x^2} = \mu f, \qquad \frac{\partial f}{\partial x} = \sigma f,$$
so
$$dS_t = \mu S_t\,dt + \sigma S_t\,dB_t.$$
Sometimes people write $\frac{dS_t}{S_t} = \mu\,dt + \sigma\,dB_t$, but note that $\frac{dS_t}{S_t} \ne d\log S_t$.
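
A small sketch, assuming illustrative parameter values, comparing an Euler-Maruyama discretization of $dS_t = \mu S_t\,dt + \sigma S_t\,dB_t$ with the exact solution driven by the same Brownian increments.

    import numpy as np

    rng = np.random.default_rng(4)
    mu, sigma, S0, t, n = 0.1, 0.3, 1.0, 1.0, 10_000
    dt = t / n
    dB = rng.normal(0.0, np.sqrt(dt), size=n)

    S = S0
    for k in range(n):                        # Euler-Maruyama step
        S += mu * S * dt + sigma * S * dB[k]

    exact = S0 * np.exp((mu - 0.5 * sigma**2) * t + sigma * dB.sum())
    print("Euler-Maruyama:", S, "  exact:", exact)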

Local time

$f$ a continuous function on $\mathbb{R}_+$:
$$L_t(x) = \int_0^t \delta_x(f(s))\,ds = \lim_{\epsilon \to 0}\,(2\epsilon)^{-1}\,\big|\{s \le t : |f(s) - x| \le \epsilon\}\big|, \qquad \int_0^t 1_A(f(s))\,ds = \int_A L_t(x)\,dx.$$
If $f \in C^1$,
$$L_t(x) = \sum_{s_i \in [0, t]:\, f(s_i) = x} |f'(s_i)|^{-1},$$
which is discontinuous in $t$. Itô's lemma applied to $|B_t - x|$ gives Tanaka's formula for Brownian local time:
$$L_t(x) = |B_t - x| - |B_0 - x| - \int_0^t \mathrm{sgn}(B_s - x)\,dB_s.$$
In particular, $L_t(x)$ is continuous in $t$ a.s.
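
A rough single-path illustration of Tanaka's formula at $x = 0$ (assumptions: $B_0 = 0$, a fine grid, a small $\epsilon$, left-endpoint sums): the occupation-time approximation of $L_t(0)$ and the right-hand side of Tanaka's formula should come out close.

    import numpy as np

    rng = np.random.default_rng(5)
    t, n, eps = 1.0, 200_000, 0.01
    dt = t / n
    dB = rng.normal(0.0, np.sqrt(dt), size=n)
    B = np.concatenate([[0.0], np.cumsum(dB)])

    occupation = np.sum(np.abs(B[:-1]) <= eps) * dt / (2 * eps)          # (2 eps)^{-1} |{s: |B_s| <= eps}|
    tanaka = np.abs(B[-1]) - np.abs(B[0]) - np.sum(np.sign(B[:-1]) * dB)  # |B_t| - |B_0| - int sgn(B) dB
    print("occupation-time local time:", occupation, "  Tanaka:", tanaka)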

But the second derivative of $|x|$ is not a bounded function, so no fair!!!
Proof. Apply Itô's lemma to $f_\epsilon$ with $f_\epsilon'' = (2\epsilon)^{-1}\,1_{[-\epsilon, \epsilon]}$:
$$(2\epsilon)^{-1}\,\big|\{s \le t : |B_s| \le \epsilon\}\big| = f_\epsilon(B_t) - f_\epsilon(B_0) - \int_0^t f_\epsilon'(B_s)\,dB_s.$$
Letting $\epsilon \to 0$ (and translating by $x$),
$$L_t(x) = |B_t - x| - |B_0 - x| - \int_0^t \mathrm{sgn}(B_s - x)\,dB_s.$$

Feynman-Kac formula

$V$ a nice function (say bounded). Suppose $u \in C^{1,2}$ solves
$$\frac{\partial u}{\partial t} = \frac12 \frac{\partial^2 u}{\partial x^2} + Vu, \qquad u(0, x) = u_0(x), \qquad \int |u_0(x)|\, e^{-x^2/2t}\,dx < \infty.$$
Then
$$u(t, x) = E_x\Big[e^{\int_0^t V(B(s))\,ds}\, u_0(B(t))\Big].$$
Proof. For $0 \le s \le t$ let $Z(s) = u(t - s, B(s))\,e^{\int_0^s V(B(u))\,du}$. By Itô's lemma
$$Z(t) - Z(0) = \int_0^t \Big\{-\frac{\partial u}{\partial s} + \frac12 \frac{\partial^2 u}{\partial x^2} + Vu\Big\}(t - s, B(s))\,e^{\int_0^s V(B(u))\,du}\,ds + \int_0^t \frac{\partial u}{\partial x}(t - s, B(s))\,e^{\int_0^s V(B(u))\,du}\,dB(s).$$
The $ds$ term vanishes because $u$ solves the equation, so $Z$ is a martingale and $E_x[Z(t)] = E_x[Z(0)]$, which is the formula.
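
A Monte Carlo sanity check of the Feynman-Kac representation in a case where everything is explicit (assumptions: constant potential $V \equiv c$ and $u_0(x) = x^2$, for which $u(t, x) = e^{ct}(x^2 + t)$ solves the equation).

    import numpy as np

    rng = np.random.default_rng(6)
    c, t, x = 0.5, 1.0, 0.7
    B_t = x + rng.normal(0.0, np.sqrt(t), size=1_000_000)
    mc = np.mean(np.exp(c * t) * B_t**2)       # exp(int_0^t V ds) = exp(c t) exactly here
    print("Monte Carlo:", mc, "  exact:", np.exp(c * t) * (x**2 + t))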

1. If $V = V(t, x)$:
$$u(t, x) = E_x\Big[e^{\int_0^t V(t - s,\, B(s))\,ds}\, u_0(B(t))\Big].$$
2. Historical remark. Feynman's thesis was that the solution $u$ of the Schrödinger equation
$$\frac{\partial u}{\partial t} = i\Big[\frac12 \frac{\partial^2 u}{\partial x^2} + Vu\Big]$$
should have the representation
$$u(x, t) = \Big\langle e^{\,i\int_0^t V(f_s)\,ds \,-\, i\int_0^t \frac{\dot f_s^2}{2}\,ds}\, u_0(f_t)\Big\rangle,$$
where $\langle\,\cdot\,\rangle$ is supposed to be an average over functions starting at $x$. Kac pointed out that it is rigorous if $i \to 1$.

Arcsine law

$$\xi(t) = \frac1t \int_0^t 1_{[0,\infty)}(B(s))\,ds = \text{the fraction of time that Brownian motion is positive up to time } t,$$
$$P(\xi(t) \le a) = \begin{cases} 0, & a < 0; \\ \frac{2}{\pi}\arcsin\sqrt{a}, & 0 \le a \le 1; \\ 1, & a > 1. \end{cases}$$
Simple explanation why the distribution of $\xi(t)$ is independent of $t$: by Brownian scaling, $\tilde B(s) = \frac{1}{\sqrt t}B(ts)$ is again a Brownian motion, so
$$\xi(t) = \int_0^1 1_{[0,\infty)}(B(ts))\,ds = \int_0^1 1_{[0,\infty)}\Big(\tfrac{1}{\sqrt t}B(ts)\Big)\,ds = \int_0^1 1_{[0,\infty)}(\tilde B(s))\,ds \;\overset{d}{=}\; \xi(1).$$
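
A quick simulation (approximating Brownian motion by a random walk with Gaussian steps) comparing the empirical distribution of the fraction of time spent positive with the arcsine law.

    import numpy as np

    rng = np.random.default_rng(7)
    n_paths, n_steps = 20_000, 2_000
    B = np.cumsum(rng.normal(size=(n_paths, n_steps)), axis=1)
    xi = np.mean(B >= 0, axis=1)                       # fraction of time positive, per path

    for a in (0.1, 0.25, 0.5, 0.9):
        print(a, np.mean(xi <= a), 2 / np.pi * np.arcsin(np.sqrt(a)))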

Proof. By Feynman-Kac, if we can find a nice solution of
$$\frac{\partial u}{\partial t} = \frac12 \frac{\partial^2 u}{\partial x^2} - u\,1_{[0,\infty)}, \qquad u(0, x) = 1,$$
then
$$u(t, x) = E_x\Big[e^{-\int_0^t 1_{[0,\infty)}(B(s))\,ds}\Big] \qquad\text{and}\qquad u(t, 0) = \int_0^1 e^{-at}\,dP(\xi \le a).$$
For $\alpha > 0$ let
$$\phi_\alpha(x) = \alpha \int_0^\infty u(t, x)\,e^{-\alpha t}\,dt, \qquad -\frac12 \phi_\alpha'' + \big(\alpha + 1_{[0,\infty)}\big)\phi_\alpha = \alpha,$$
$$\phi_\alpha(x) = \begin{cases} \dfrac{\alpha}{\alpha+1} + A\,e^{x\sqrt{2(\alpha+1)}} + B\,e^{-x\sqrt{2(\alpha+1)}}, & x \ge 0, \\[1ex] 1 + C\,e^{x\sqrt{2\alpha}} + D\,e^{-x\sqrt{2\alpha}}, & x \le 0. \end{cases}$$
Since $0 \le u \le 1$ we have $0 \le \phi_\alpha \le 1$, so $A = D = 0$.

Proof continued. Matching at $0$, $\phi_\alpha(0-) = \phi_\alpha(0+)$ and $\phi_\alpha'(0-) = \phi_\alpha'(0+)$, gives
$$B = \frac{\alpha^{1/2}}{(1+\alpha)\big(\sqrt{\alpha} + \sqrt{\alpha+1}\big)}, \qquad C = -\frac{1}{(1+\alpha)^{1/2}\big(\sqrt{\alpha} + \sqrt{\alpha+1}\big)}, \qquad \phi_\alpha(0) = \sqrt{\frac{\alpha}{\alpha+1}}.$$
By Fubini's theorem this reads
$$E\Big[\frac{\alpha}{\alpha + \xi}\Big] = \int_0^\infty E\big[e^{-t\xi}\,\alpha e^{-\alpha t}\big]\,dt = \sqrt{\frac{\alpha}{\alpha+1}}, \qquad\text{or, with } \gamma = 1/\alpha,\qquad \int_0^1 \frac{1}{1 + \gamma a}\,dP(\xi \le a) = \frac{1}{\sqrt{1 + \gamma}}.$$
Looking up a table of transforms we find
$$dP(\xi \le a) = \frac{1}{\pi}\frac{1}{\sqrt{a(1-a)}}\,da, \qquad 0 \le a \le 1,$$
which is the density of the arcsine distribution.

Stochastic differential equations

$\sigma(x, t)$, $b(x, t)$ measurable.
Definition. A stochastic process $X_t$ is a solution of the stochastic differential equation
$$dX_t = b(X_t, t)\,dt + \sigma(X_t, t)\,dB_t, \qquad X_0 = x_0,$$
on $[0, T]$ if $X_t$ is progressively measurable with respect to $\mathcal{F}_t$,
$$\int_0^T |b(X_t, t)|\,dt < \infty, \qquad \int_0^T |\sigma(X_t, t)|^2\,dt < \infty \quad\text{a.s.},$$
and
$$X_t = x_0 + \int_0^t b(X_s, s)\,ds + \int_0^t \sigma(X_s, s)\,dB_s, \qquad 0 \le t \le T.$$
The main point is that $\sigma(\omega, t) = \sigma(X_t, t)$, $b(\omega, t) = b(X_t, t)$.
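
A minimal Euler-Maruyama sketch for approximating a solution of $dX_t = b(X_t, t)\,dt + \sigma(X_t, t)\,dB_t$ on $[0, T]$; the particular coefficients in the example call (a mean-reverting drift and constant noise) are illustrative assumptions, not from the slides.

    import numpy as np

    def euler_maruyama(b, sigma, x0, T, n, rng):
        """Return the Euler-Maruyama approximation of X on a grid of n steps."""
        dt = T / n
        X = np.empty(n + 1)
        X[0] = x0
        for k in range(n):
            t = k * dt
            dB = rng.normal(0.0, np.sqrt(dt))
            X[k + 1] = X[k] + b(X[k], t) * dt + sigma(X[k], t) * dB
        return X

    rng = np.random.default_rng(8)
    path = euler_maruyama(b=lambda x, t: -x, sigma=lambda x, t: 0.5,
                          x0=1.0, T=5.0, n=5000, rng=rng)
    print(path[-1])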

If $B(t)$ is a $d$-dimensional Brownian motion and $f(t, x)$ is a function on $[0, \infty) \times \mathbb{R}^d$ which has one continuous derivative in $t$ and two continuous derivatives in $x$, then Itô's formula reads
$$df(t, B(t)) = \frac{\partial f}{\partial t}(t, B(t))\,dt + \nabla f(t, B(t)) \cdot dB(t) + \frac12 \Delta f(t, B(t))\,dt.$$

Bessel process ($d = 2$)

Let $B_t = (B_t^1, B_t^2)$ be 2d Brownian motion starting at $0$, and let $r_t = |B_t| = \sqrt{(B_t^1)^2 + (B_t^2)^2}$. By Itô's lemma,
$$dr_t = \frac{B^1}{|B|}\,dB^1 + \frac{B^2}{|B|}\,dB^2 + \frac12 \frac{1}{|B|}\,dt.$$
This is not a stochastic differential equation.

Let
$$Y(t) = \int_0^t \frac{B^1}{|B|}\,dB^1 + \int_0^t \frac{B^2}{|B|}\,dB^2$$
and let $f(t, y)$ be a smooth function. Use Itô's lemma. Intuitively
$$df(t, Y_t) = \partial_t f\,dt + \partial_y f\,dY + \tfrac12 \partial_y^2 f\,(dY)^2,$$
$$(dY)^2 = \Big(\frac{B^1}{|B|}\,dB^1 + \frac{B^2}{|B|}\,dB^2\Big)^2 = \Big(\frac{B^1}{|B|}\Big)^2 (dB^1)^2 + 2\,\frac{B^1 B^2}{|B|^2}\,dB^1\,dB^2 + \Big(\frac{B^2}{|B|}\Big)^2 (dB^2)^2 = dt,$$
so
$$f(t, Y_t) = f(0, Y_0) + \int_0^t \big(\partial_t f + \tfrac12 \partial_y^2 f\big)(s, Y_s)\,ds + \int_0^t \partial_y f\,\frac{B^1}{|B|}\,dB^1 + \int_0^t \partial_y f\,\frac{B^2}{|B|}\,dB^2.$$

Itô's lemma. If $dX_t = \sigma(t, X_t)\,dB_t + b(t, X_t)\,dt$, then
$$f(t, X_t) = f(0, X_0) + \int_0^t \Big\{\frac{\partial}{\partial s} f(s, X_s) + Lf(s, X_s)\Big\}\,ds + \sum_{i,j=1}^d \int_0^t \sigma_{ij}(s, X_s)\,\frac{\partial f}{\partial x_i}(s, X_s)\,dB_s^j,$$
where
$$Lf(t, x) = \frac12 \sum_{i,j=1}^d a_{ij}(t, x)\,\frac{\partial^2 f}{\partial x_i \partial x_j}(t, x) + \sum_{i=1}^d b_i(t, x)\,\frac{\partial f}{\partial x_i}(t, x), \qquad a_{ij} = \sum_{k=1}^d \sigma_{ik}\sigma_{jk}, \quad a = \sigma\sigma^T.$$

$$dr_t = \frac{B^1}{|B|}\,dB^1 + \frac{B^2}{|B|}\,dB^2 + \frac12 \frac{1}{|B|}\,dt = dY_t + \frac12 r_t^{-1}\,dt.$$
From
$$f(t, Y_t) = f(0, Y_0) + \int_0^t \big(\partial_t f + \tfrac12 \partial_y^2 f\big)(s, Y_s)\,ds + \int_0^t \partial_y f\,\frac{B^1}{|B|}\,dB^1 + \int_0^t \partial_y f\,\frac{B^2}{|B|}\,dB^2,$$
in particular $e^{\lambda Y_t - \lambda^2 t/2}$ is a martingale, so $Y_t$ is a Brownian motion. Therefore
$$dr_t = dY_t + \frac12 r_t^{-1}\,dt$$
is a stochastic differential equation for $r_t$ driven by the new Brownian motion $Y_t$.
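
A numerical check of the computation above (with the assumption that the planar Brownian motion starts at $(1, 0)$ rather than at $0$, to keep $|B|$ away from zero on the grid): integrating $dr = \frac{B^1}{|B|}dB^1 + \frac{B^2}{|B|}dB^2 + \frac{dt}{2|B|}$ along a simulated path should reproduce $|B_t|$.

    import numpy as np

    rng = np.random.default_rng(9)
    t, n = 1.0, 100_000
    dt = t / n
    dB = rng.normal(0.0, np.sqrt(dt), size=(2, n))
    start = np.array([[1.0], [0.0]])
    B = np.concatenate([start, start + np.cumsum(dB, axis=1)], axis=1)

    r_direct = np.hypot(B[0], B[1])            # |B| on the grid
    r = r_direct[0]
    for k in range(n):                         # left-endpoint Ito sums for dr
        norm = r_direct[k]
        r += (B[0, k] * dB[0, k] + B[1, k] * dB[1, k]) / norm + dt / (2 * norm)

    print("integrated r_t:", r, "  |B_t|:", r_direct[-1])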

Itô's lemma:
$$f(t, X_t) - f(0, X_0) = \int_0^t \Big\{\frac{\partial}{\partial s} + L\Big\} f(s, X_s)\,ds + \int_0^t \nabla f(s, X_s) \cdot \sigma\,dB_s.$$
Proof.
$$f(t, X_t) - f(0, X_0) = \sum_i \big[f(t_{i+1}, X_{t_{i+1}}) - f(t_i, X_{t_i})\big]$$
$$= \sum_i \Big[\frac{\partial f}{\partial t}(t_i, X_{t_i})(t_{i+1} - t_i) + \nabla f(t_i, X_{t_i}) \cdot (X_{t_{i+1}} - X_{t_i}) + \frac12 \sum_{j,k=1}^d \frac{\partial^2 f}{\partial x_j \partial x_k}(t_i, X_{t_i})\big(X^j_{t_{i+1}} - X^j_{t_i}\big)\big(X^k_{t_{i+1}} - X^k_{t_i}\big)\Big] + \text{higher order terms}.$$

Proof continued.
$$\sum_i \frac{\partial f}{\partial t}(t_i, X_{t_i})(t_{i+1} - t_i) \to \int_0^t \frac{\partial f}{\partial t}(s, X_s)\,ds,$$
$$\sum_i \nabla f(t_i, X_{t_i}) \cdot (X_{t_{i+1}} - X_{t_i}) = \sum_i \nabla f(t_i, X_{t_i}) \cdot \Big(\int_{t_i}^{t_{i+1}} \sigma(s, X_s)\,dB_s\Big) + \sum_i \nabla f(t_i, X_{t_i}) \cdot \Big(\int_{t_i}^{t_{i+1}} b(s, X_s)\,ds\Big) \to \int_0^t \nabla f \cdot \sigma\,dB + \int_0^t \nabla f \cdot b\,ds,$$
$$\sum_i \frac{\partial^2 f}{\partial x_j \partial x_k}(t_i, X_{t_i})\big(X^j_{t_{i+1}} - X^j_{t_i}\big)\big(X^k_{t_{i+1}} - X^k_{t_i}\big) \to \int_0^t \frac{\partial^2 f}{\partial x_j \partial x_k}(s, X_s)\,a_{jk}(s, X_s)\,ds.$$

Proof continued. To show the last convergence, i.e.
$$\sum_i g(t_i, X_{t_i})\big(X^j_{t_{i+1}} - X^j_{t_i}\big)\big(X^k_{t_{i+1}} - X^k_{t_i}\big) \to \int_0^t g(s, X_s)\,a_{jk}(s, X_s)\,ds,$$
let
$$Z(t_i, t_{i+1}) = \Big(\sum_l \int_{t_i}^{t_{i+1}} \sigma_{jl}(s, X_s)\,dB_s^l\Big)\Big(\sum_m \int_{t_i}^{t_{i+1}} \sigma_{km}(s, X_s)\,dB_s^m\Big) - \sum_l \int_{t_i}^{t_{i+1}} \sigma_{jl}\sigma_{kl}(s, X_s)\,ds.$$
Then
$$E\big[|Z(t_i, t_{i+1})|^2\big] = O\big((t_{i+1} - t_i)^2\big).$$

Proof continued.
$$\sum_i g(t_i, X_{t_i}) \int_{t_i}^{t_{i+1}} a_{jk}(s, X_s)\,ds \to \int_0^t g(s, X_s)\,a_{jk}(s, X_s)\,ds,$$
while
$$E\Big[\Big(\sum_i g(t_i, X_{t_i})\,Z(t_i, t_{i+1})\Big)^2\Big] = \sum_{i,j} E\big[g(t_i, X_{t_i})Z(t_i, t_{i+1})\,g(t_j, X_{t_j})Z(t_j, t_{j+1})\big].$$
For $i < j$,
$$E\Big[E\big[g(t_i, X_{t_i})Z(t_i, t_{i+1})\,g(t_j, X_{t_j})Z(t_j, t_{j+1}) \mid \mathcal{F}_{t_j}\big]\Big] = 0;$$
for $i = j$,
$$E\Big[E\big[g^2(t_i, X_{t_i})Z^2(t_i, t_{i+1}) \mid \mathcal{F}_{t_i}\big]\Big] = E\Big[g^2(t_i, X_{t_i})\,E\big[Z^2(t_i, t_{i+1}) \mid \mathcal{F}_{t_i}\big]\Big] = O\big((t_{i+1} - t_i)^2\big).$$

$$f(t, X_t) - f(0, X_0) - \int_0^t \Big\{\frac{\partial}{\partial s} + L\Big\} f(s, X_s)\,ds = \int_0^t \nabla f(s, X_s) \cdot \sigma\,dB_s,$$
$$L = \frac12 \sum_{i,j=1}^d a_{ij}(t, x)\,\frac{\partial^2}{\partial x_i \partial x_j} + \sum_{i=1}^d b_i(t, x)\,\frac{\partial}{\partial x_i} = \text{generator}.$$
Hence
$$M_t = f(t, X_t) - \int_0^t \Big\{\frac{\partial}{\partial s} + L\Big\} f(s, X_s)\,ds$$
is a martingale. With $X_s = x$,
$$0 = E\Big[f(t, X_t) - f(s, X_s) - \int_s^t \Big\{\frac{\partial}{\partial u} + L\Big\} f(u, X_u)\,du \,\Big|\, \mathcal{F}_s\Big] = \int f(t, y)\,p(s, x, t, y)\,dy - f(s, x) - \int_s^t \int \Big\{\frac{\partial}{\partial u} + L\Big\} f(u, y)\,p(s, x, u, y)\,dy\,du.$$

For any $f$,
$$0 = \int f(t, y)\,p(s, x, t, y)\,dy - f(s, x) - \int_s^t \int \Big\{\frac{\partial}{\partial u} + L\Big\} f(u, y)\,p(s, x, u, y)\,dy\,du.$$
Fokker-Planck (Forward) Equation
$$\frac{\partial}{\partial t} p(s, x, t, y) = \frac12 \sum_{i,j=1}^d \frac{\partial^2}{\partial y_i \partial y_j}\big(a_{ij}(t, y)\,p(s, x, t, y)\big) - \sum_{i=1}^d \frac{\partial}{\partial y_i}\big(b_i(t, y)\,p(s, x, t, y)\big) = L_y^*\, p(s, x, t, y),$$
$$\lim_{t \downarrow s} p(s, x, t, y) = \delta(y - x).$$

Kolmogorov (Backward) Equation
$$-\frac{\partial}{\partial s} p(s, x, t, y) = \frac12 \sum_{i,j=1}^d a_{ij}(s, x)\,\frac{\partial^2 p(s, x, t, y)}{\partial x_i \partial x_j} + \sum_{i=1}^d b_i(s, x)\,\frac{\partial p(s, x, t, y)}{\partial x_i} = L_x\, p(s, x, t, y),$$
$$\lim_{s \uparrow t} p(s, x, t, y) = \delta(y - x).$$

Proof. Let $f(x)$ be smooth and let $u$ solve
$$-\frac{\partial u}{\partial s} = L_s u, \quad s < t, \qquad u(t, x) = f(x).$$
By Itô's formula, $u(s, X(s))$ is a martingale up to time $t$, so
$$u(s, x) = E_{s,x}[u(s, X(s))] = E_{s,x}[u(t, X(t))] = \int f(z)\,p(s, x, t, z)\,dz.$$
Let $f_n(z)$ be smooth functions tending to $\delta(y - z)$. We get in the limit that $p$ satisfies the backward equation.

Example. Brownian motion, $d = 1$:
$$L = \frac12 \frac{\partial^2}{\partial x^2}.$$
Forward:
$$\frac{\partial p(s, x, t, y)}{\partial t} = \frac12 \frac{\partial^2 p(s, x, t, y)}{\partial y^2}, \quad t > s, \qquad p(s, x, s, y) = \delta(y - x).$$
Backward:
$$-\frac{\partial p(s, x, t, y)}{\partial s} = \frac12 \frac{\partial^2 p(s, x, t, y)}{\partial x^2}, \quad s < t, \qquad p(t, x, t, y) = \delta(y - x).$$
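
A small finite-difference illustration of the forward equation (assumptions: an explicit scheme, a narrow Gaussian standing in for the initial delta, crude boundary handling on a wide box): evolving $p_t = \frac12 p_{yy}$ recovers the Gaussian transition density $N(x, t - s)$.

    import numpy as np

    x, t = 0.0, 0.5
    y = np.linspace(-5, 5, 401)
    dy = y[1] - y[0]
    dt = 0.2 * dy**2                               # stable for the explicit scheme
    p = np.exp(-(y - x)**2 / (2 * 1e-3))           # narrow Gaussian approximating delta at x
    p /= np.sum(p) * dy                            # normalize to unit mass on the grid

    for _ in range(int(t / dt)):
        lap = (np.roll(p, -1) - 2 * p + np.roll(p, 1)) / dy**2
        lap[0] = lap[-1] = 0.0                     # crude boundary handling on a wide box
        p = p + 0.5 * dt * lap

    exact = np.exp(-(y - x)**2 / (2 * t)) / np.sqrt(2 * np.pi * t)
    print("max abs difference:", np.max(np.abs(p - exact)))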

Example. Ornstein-Uhlenbeck process:
$$L = \frac{\sigma^2}{2}\frac{\partial^2}{\partial x^2} - \alpha x \frac{\partial}{\partial x}.$$
Forward:
$$\frac{\partial p(s, x, t, y)}{\partial t} = \frac{\sigma^2}{2}\frac{\partial^2 p(s, x, t, y)}{\partial y^2} + \frac{\partial}{\partial y}\big(\alpha y\, p(s, x, t, y)\big), \quad t > s, \qquad p(s, x, s, y) = \delta(y - x).$$
Backward:
$$-\frac{\partial p(s, x, t, y)}{\partial s} = \frac{\sigma^2}{2}\frac{\partial^2 p(s, x, t, y)}{\partial x^2} - \alpha x\,\frac{\partial p(s, x, t, y)}{\partial x}, \quad s < t, \qquad p(t, x, t, y) = \delta(y - x).$$
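
A simulation check for the Ornstein-Uhlenbeck example, using the known Gaussian transition law $X_t \sim N\big(x\,e^{-\alpha t},\, \sigma^2(1 - e^{-2\alpha t})/(2\alpha)\big)$ (a standard fact, not derived on these slides): an Euler-Maruyama simulation of $dX = -\alpha X\,dt + \sigma\,dB$ should reproduce these moments.

    import numpy as np

    rng = np.random.default_rng(10)
    alpha, sigma, x0, t, n, n_paths = 1.0, 0.7, 2.0, 1.0, 2_000, 100_000
    dt = t / n

    X = np.full(n_paths, x0)
    for _ in range(n):                             # Euler-Maruyama for dX = -alpha X dt + sigma dB
        X += -alpha * X * dt + sigma * rng.normal(0.0, np.sqrt(dt), size=n_paths)

    print("simulated mean/var:", X.mean(), X.var())
    print("exact mean/var:    ", x0 * np.exp(-alpha * t),
          sigma**2 * (1 - np.exp(-2 * alpha * t)) / (2 * alpha))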