Stochastic Integration.


Chapter 2. Stochastic Integration.

2.1 Brownian Motion as a Martingale

$P$ is the Wiener measure on $(\Omega, \mathcal{B})$ where $\Omega = C[0,T]$ and $\mathcal{B}$ is the Borel $\sigma$-field on $\Omega$. In addition we denote by $\mathcal{B}_t$ the $\sigma$-field generated by $x(s)$ for $0 \le s \le t$. It is easy to see that $x(t)$ is a martingale with respect to $(\Omega, \mathcal{B}_t, P)$, i.e. for each $t > s$ in $[0,T]$

$$E^P\{x(t) \mid \mathcal{B}_s\} = x(s) \quad \text{a.e. } P \tag{2.1}$$

and so is $x^2(t) - t$. In other words

$$E^P\{x^2(t) - t \mid \mathcal{B}_s\} = x^2(s) - s \quad \text{a.e. } P \tag{2.2}$$

The proof is rather straightforward. We write $x(t) = x(s) + Z$ where $Z = x(t) - x(s)$ is a random variable independent of the past history $\mathcal{B}_s$ and is distributed as a Gaussian random variable with mean $0$ and variance $t - s$. Therefore $E^P\{Z \mid \mathcal{B}_s\} = 0$ and $E^P\{Z^2 \mid \mathcal{B}_s\} = t - s$ a.e. $P$. Conversely,

Theorem 2.1 (Lévy's theorem). If $P$ is a measure on $(C[0,T], \mathcal{B})$ such that $P[x(0) = 0] = 1$ and the functions $x(t)$ and $x^2(t) - t$ are martingales with respect to $(C[0,T], \mathcal{B}_t, P)$, then $P$ is the Wiener measure.

Proof. The proof is based on the observation that a Gaussian distribution is determined by its first two moments. But that the distribution is Gaussian is a consequence of the fact that the paths are almost surely continuous, and not part of our assumptions. The actual proof is carried out by establishing that for each real number $\lambda$

$$X_\lambda(t) = \exp\Big[\lambda x(t) - \frac{\lambda^2}{2}\, t\Big] \tag{2.3}$$

is a martingale with respect to $(C[0,T], \mathcal{B}_t, P)$. Once this is established it is elementary to compute

$$E^P\Big\{\exp\big[\lambda\big(x(t) - x(s)\big)\big] \,\Big|\, \mathcal{B}_s\Big\} = \exp\Big[\frac{\lambda^2}{2}(t - s)\Big]$$

which shows that we have a Gaussian process with independent increments with the two matching moments. The proof of (2.3) is more or less the same as proving the central limit theorem. In order to prove (2.3) we can assume without loss of generality that $s = 0$ and will show that

$$E^P\Big\{\exp\Big[\lambda x(t) - \frac{\lambda^2}{2}\, t\Big]\Big\} = 1 \tag{2.4}$$

To this end let us define successively $\tau_{0,\epsilon} = 0$ and

$$\tau_{k+1,\epsilon} = \min\Big[\inf\big\{s : s \ge \tau_{k,\epsilon},\ |x(s) - x(\tau_{k,\epsilon})| \ge \epsilon\big\},\ t,\ \tau_{k,\epsilon} + \epsilon\Big]$$

Then each $\tau_{k,\epsilon}$ is a stopping time and eventually $\tau_{k,\epsilon} = t$ by continuity of paths. The continuity of paths also guarantees that $|x(\tau_{k+1,\epsilon}) - x(\tau_{k,\epsilon})| \le \epsilon$. We write

$$x(t) = \sum_k \big[x(\tau_{k+1,\epsilon}) - x(\tau_{k,\epsilon})\big], \qquad t = \sum_k \big[\tau_{k+1,\epsilon} - \tau_{k,\epsilon}\big]$$

To establish (2.4) we calculate the quantity on the left hand side as

$$\lim_{n \to \infty} E^P\Big\{\exp\Big[\lambda \sum_{k \le n}\big(x(\tau_{k+1,\epsilon}) - x(\tau_{k,\epsilon})\big) - \frac{\lambda^2}{2} \sum_{k \le n}\big(\tau_{k+1,\epsilon} - \tau_{k,\epsilon}\big)\Big]\Big\}$$

and show that it is equal to 1. Let us consider the $\sigma$-field $\mathcal{F}_k = \mathcal{B}_{\tau_{k,\epsilon}}$ and the quantity

$$q_k(\omega) = E^P\Big\{\exp\Big[\lambda\big(x(\tau_{k+1,\epsilon}) - x(\tau_{k,\epsilon})\big) - \frac{\lambda^2}{2}\big(\tau_{k+1,\epsilon} - \tau_{k,\epsilon}\big)\Big] \,\Big|\, \mathcal{F}_k\Big\}$$

Clearly, if we use the Taylor expansion and the fact that $x(t)$ as well as $x^2(t) - t$ are martingales,

$$|q_k(\omega) - 1| \le C\, E^P\big[\,|\lambda|^3\, |x(\tau_{k+1,\epsilon}) - x(\tau_{k,\epsilon})|^3 + \lambda^4\, \big(\tau_{k+1,\epsilon} - \tau_{k,\epsilon}\big)^2 \,\big|\, \mathcal{F}_k\big] \le C_\lambda\, \epsilon\, E^P\big[\,|x(\tau_{k+1,\epsilon}) - x(\tau_{k,\epsilon})|^2 + \big(\tau_{k+1,\epsilon} - \tau_{k,\epsilon}\big) \,\big|\, \mathcal{F}_k\big] = 2\, C_\lambda\, \epsilon\, E^P\big[\tau_{k+1,\epsilon} - \tau_{k,\epsilon} \,\big|\, \mathcal{F}_k\big]$$

In particular for some constant $C$ depending on $\lambda$,

$$q_k(\omega) \le E^P\big\{\exp\big[C\epsilon\,\big(\tau_{k+1,\epsilon} - \tau_{k,\epsilon}\big)\big] \,\big|\, \mathcal{F}_k\big\}$$

and by induction

$$\limsup_{n \to \infty} E^P\Big\{\exp\Big[\lambda \sum_{k \le n}\big(x(\tau_{k+1,\epsilon}) - x(\tau_{k,\epsilon})\big) - \frac{\lambda^2}{2}\sum_{k \le n}\big(\tau_{k+1,\epsilon} - \tau_{k,\epsilon}\big)\Big]\Big\} \le \exp\big[C\epsilon\, t\big]$$
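The moment identities driving this proof, $E[Z] = 0$, $E[Z^2] = t - s$, and the exponential moment behind (2.3), can be checked with a quick Monte Carlo sketch; the sample size, seed and the value of $\lambda$ below are illustrative choices, not part of the text.

```python
import numpy as np

# Sketch: sample the increment Z = x(t) - x(s) ~ N(0, t - s) and check
# E[Z] = 0, E[Z^2] = t - s, and E[exp(lam Z)] = exp(lam^2 (t - s)/2),
# the three expectations behind (2.1)-(2.3).
rng = np.random.default_rng(0)
s, t, lam, n = 0.5, 2.0, 0.7, 400_000
Z = rng.normal(0.0, np.sqrt(t - s), size=n)

mean_Z = Z.mean()                # ~ 0           (martingale property)
mean_Z2 = (Z**2).mean()          # ~ t - s       (x^2(t) - t martingale)
mgf = np.exp(lam * Z).mean()     # ~ exp(lam^2 (t - s)/2), cf. (2.3)
print(mean_Z, mean_Z2, mgf)
```

All three sample averages agree with the exact values up to the usual $O(n^{-1/2})$ Monte Carlo error.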

Since $\epsilon > 0$ is arbitrary, this proves one half of (2.4). Notice that in any case $\sup_\omega |q_k(\omega) - 1| \le C\epsilon$. Hence we have the lower bound

$$q_k(\omega) \ge E^P\big\{\exp\big[-C\epsilon\,\big(\tau_{k+1,\epsilon} - \tau_{k,\epsilon}\big)\big] \,\big|\, \mathcal{F}_k\big\}$$

which can be used to prove the other half. This completes the proof of the theorem.

Exercise 2.1. Why does Theorem 2.1 fail for the process $x(t) = N(t) - t$ where $N(t)$ is the standard Poisson process with rate 1?

Remark 2.1. One can use the martingale inequality in order to estimate the probability $P\{\sup_{0 \le s \le t} x(s) \ge \ell\}$. For $\lambda > 0$, by Doob's inequality

$$P\Big[\sup_{0 \le s \le t} x(s) \ge \ell\Big] \le P\Big[\sup_{0 \le s \le t} \Big(\lambda x(s) - \frac{\lambda^2}{2}\, s\Big) \ge \lambda\ell - \frac{\lambda^2}{2}\, t\Big] = P\Big[\sup_{0 \le s \le t} \exp\Big[\lambda x(s) - \frac{\lambda^2}{2}\, s\Big] \ge \exp\Big[\lambda\ell - \frac{\lambda^2}{2}\, t\Big]\Big] \le \exp\Big[-\lambda\ell + \frac{\lambda^2}{2}\, t\Big]$$

Optimizing over $\lambda > 0$, we obtain

$$P\Big[\sup_{0 \le s \le t} x(s) \ge \ell\Big] \le \exp\Big[-\frac{\ell^2}{2t}\Big]$$

and by symmetry

$$P\Big[\sup_{0 \le s \le t} |x(s)| \ge \ell\Big] \le 2\exp\Big[-\frac{\ell^2}{2t}\Big]$$

The estimate is not too bad, because by the reflection principle

$$P\Big[\sup_{0 \le s \le t} x(s) \ge \ell\Big] = 2\, P\big[x(t) \ge \ell\big] = \frac{2}{\sqrt{2\pi t}} \int_\ell^\infty \exp\Big[-\frac{x^2}{2t}\Big]\, dx$$

Exercise 2.2. One can use the estimate above to prove the result of Paul Lévy:

$$P\Big[\limsup_{\delta \to 0}\ \sup_{\substack{0 \le s, t \le 1 \\ |s - t| \le \delta}} \frac{|x(s) - x(t)|}{\sqrt{2\,\delta \log \frac{1}{\delta}}} \le 1\Big] = 1$$

We had an exercise in the previous section that established the lower bound. Let us concentrate on the upper bound. If we define

$$\Delta_\delta(\omega) = \sup_{\substack{0 \le s, t \le 1 \\ |s - t| \le \delta}} |x(s) - x(t)|$$

first check that it is sufficient to prove that for any $\rho < 1$ and $a > \sqrt{2}$,

$$\sum_n P\Big[\Delta_{\rho^n}(\omega) \ge a \sqrt{n \rho^n \log \rho^{-1}}\Big] < \infty \tag{2.5}$$

To estimate $\Delta_{\rho^n}(\omega)$ it is sufficient to estimate $\sup_{t \in I_j} |x(t) - x(t_j)|$ for $k_\epsilon\, \rho^{-n}$ overlapping intervals $I_j$ of the form $[t_j, t_j + (1+\epsilon)\rho^n]$, with length $(1+\epsilon)\rho^n$. For each $\epsilon > 0$, $k_\epsilon \simeq \epsilon^{-1}$ is a constant such that any interval $[s, t]$ of length no larger than $\rho^n$ is completely contained in some $I_j$ with $t_j \le s \le t_j + \epsilon\rho^n$. Then

$$\Delta_{\rho^n}(\omega) \le \sup_j \sup_{t \in I_j} |x(t) - x(t_j)| + \sup_j \sup_{t_j \le s \le t_j + \epsilon\rho^n} |x(s) - x(t_j)|$$

Therefore, for any $a = a_1 + a_2$,

$$P\Big[\Delta_{\rho^n}(\omega) \ge a\sqrt{n \rho^n \log \tfrac{1}{\rho}}\Big] \le P\Big[\sup_j \sup_{t \in I_j} |x(t) - x(t_j)| \ge a_1 \sqrt{n \rho^n \log \tfrac{1}{\rho}}\Big] + P\Big[\sup_j \sup_{t_j \le s \le t_j + \epsilon\rho^n} |x(s) - x(t_j)| \ge a_2 \sqrt{n \rho^n \log \tfrac{1}{\rho}}\Big] \le k_\epsilon\, \rho^{-n}\Big[2\exp\Big(-\frac{a_1^2\, n \rho^n \log \frac{1}{\rho}}{2(1+\epsilon)\rho^n}\Big) + 2\exp\Big(-\frac{a_2^2\, n \rho^n \log \frac{1}{\rho}}{2\epsilon\rho^n}\Big)\Big]$$

Since $a > \sqrt{2}$, we can pick $a_1 > \sqrt{2}$ and $a_2 > 0$. For $\epsilon > 0$ sufficiently small, (2.5) is easily verified.

2.2 Brownian Motion as a Markov Process.

Brownian motion is a process with independent increments, and the increment over any interval of length $t$ has the Gaussian distribution with density

$$q(t, y) = \frac{1}{(2\pi t)^{d/2}}\, e^{-\frac{|y|^2}{2t}}$$

It is therefore a Markov process with transition probability

$$p(t, x, y) = q(t, y - x) = \frac{1}{(2\pi t)^{d/2}}\, e^{-\frac{|y - x|^2}{2t}}$$

The operators

$$(T_t f)(x) = \int f(y)\, p(t, x, y)\, dy$$

satisfy $T_t T_s = T_s T_t = T_{t+s}$, i.e. the semigroup property. This is seen to be an easy consequence of the Chapman-Kolmogorov equations

$$\int p(t, x, y)\, p(s, y, z)\, dy = p(t + s, x, z)$$
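Both the tail estimate of Remark 2.1 and the Chapman-Kolmogorov identity lend themselves to quick numerical sketches; all parameters below (paths, steps, grid, seed) are illustrative choices, and the discrete-time maximum slightly undershoots the true supremum.

```python
import numpy as np
from math import erf, exp, sqrt

rng = np.random.default_rng(1)

# (i) Remark 2.1: compare P[sup_{s<=t} x(s) >= l] with the reflection-
# principle value 2 P[x(t) >= l] and the Doob bound exp(-l^2/2t).
t, l, n_paths, n_steps = 1.0, 1.0, 100_000, 1_000
dt = t / n_steps
x = np.zeros(n_paths)
running_max = np.zeros(n_paths)
for _ in range(n_steps):
    x += rng.normal(0.0, np.sqrt(dt), n_paths)
    np.maximum(running_max, x, out=running_max)
p_hit = (running_max >= l).mean()
p_reflect = 1.0 - erf(l / sqrt(2 * t))     # = 2 P[x(t) >= l]
bound = exp(-l**2 / (2 * t))
print(p_hit, p_reflect, bound)

# (ii) Chapman-Kolmogorov in d = 1: the Riemann-sum convolution of
# q(u1, .) and q(u2, .) reproduces q(u1 + u2, .) on the grid.
def q(u, y):
    return np.exp(-y**2 / (2 * u)) / np.sqrt(2 * np.pi * u)

u1, u2 = 0.3, 0.7
y = np.linspace(-10.0, 10.0, 4001)
dy = y[1] - y[0]
conv = np.convolve(q(u1, y), q(u2, y), mode="same") * dy
err = np.max(np.abs(conv - q(u1 + u2, y)))
print(err)
```

The hitting frequency sits just below the reflection-principle value (discretization misses excursions between grid points) and well below the exponential bound, while the convolution error is limited only by the quadrature grid.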

The infinitesimal generator of the semigroup,

$$(Af)(x) = \lim_{t \to 0} \frac{(T_t f)(x) - f(x)}{t},$$

is easily calculated as

$$(Af)(x) = \lim_{t \to 0} \frac{1}{t} \int \big[f(x + y) - f(x)\big]\, q(t, y)\, dy = \lim_{t \to 0} \frac{1}{t} \int \big[f(x + \sqrt{t}\, y) - f(x)\big]\, q(1, y)\, dy = \frac{1}{2}(\Delta f)(x)$$

by expanding $f$ in a Taylor series in $\sqrt{t}$. The term that is linear in $y$ integrates to $0$ and the quadratic term leads to the Laplace operator. The differential equation

$$\frac{dT_t}{dt} = T_t A = A T_t$$

implies that $u(t, x) = (T_t f)(x)$ satisfies the heat equation $u_t = \frac{1}{2}\Delta u$; i.e.

$$\frac{d}{dt} \int f(y)\, p(t, x, y)\, dy = \int \frac{1}{2}(\Delta f)(y)\, p(t, x, y)\, dy$$

In particular, if $E_x$ is expectation with respect to Brownian motion starting from $x$,

$$E_x\big[f(x(t))\big] - f(x) = E_x\Big[\int_0^t \frac{1}{2}(\Delta f)(x(s))\, ds\Big]$$

By the Markov property

$$E_x\Big[f(x(t)) - f(x(s)) - \int_s^t \frac{1}{2}(\Delta f)(x(\tau))\, d\tau \,\Big|\, \mathcal{F}_s\Big] = 0$$

or

$$f(x(t)) - f(x(0)) - \int_0^t \frac{1}{2}(\Delta f)(x(\tau))\, d\tau$$

is a martingale with respect to Brownian motion. It is just one step from here to show that for functions $u(t, x)$ that are smooth,

$$u(t, x(t)) - u(0, x(0)) - \int_0^t \Big[u_t + \frac{1}{2}\Delta u\Big](s, x(s))\, ds \tag{2.6}$$

is a martingale. There are in addition some natural exponential martingales associated with Brownian motion. For instance, for any $\lambda \in R^d$,

$$\exp\Big[\langle \lambda, x(t) - x(0)\rangle - \frac{1}{2}|\lambda|^2\, t\Big]$$

is a martingale. More generally, for any smooth function $u(t, x)$ that is bounded away from $0$,

$$u(t, x(t)) \exp\Big[-\int_0^t \frac{u_t + \frac{1}{2}\Delta u}{u}(s, x(s))\, ds\Big]$$

is a martingale. In particular if

$$u_t + \frac{1}{2}\Delta u + v(t, x)\, u(t, x) = 0$$

then

$$u(t, x(t)) \exp\Big[\int_0^t v(s, x(s))\, ds\Big] \tag{2.7}$$

is a martingale, which is the Feynman-Kac formula. To prove (2.7) from (2.6), we make use of the following elementary lemma.

Lemma 2.2. Suppose $M(t)$ is an almost surely continuous martingale with respect to $(\Omega, \mathcal{F}_t, P)$ and $A(t)$ is a progressively measurable function which is almost surely continuous and of bounded variation in $t$. Then, under the assumption that $\sup_{0 \le s \le t} |M(s)| \cdot \mathrm{Var}_{[0,t]} A(\cdot, \omega)$ is integrable,

$$M(t) A(t) - M(0) A(0) - \int_0^t M(s)\, dA(s)$$

is again a martingale.

Proof. The main step is to see why

$$E\Big[M(t) A(t) - M(0) A(0) - \int_0^t M(s)\, dA(s)\Big] = 0$$

Then the same argument, repeated conditionally, will prove the martingale property. We compute

$$E\big[M(t) A(t) - M(0) A(0)\big] = \lim \sum_j E\big[M(t_j) A(t_j) - M(t_{j-1}) A(t_{j-1})\big] = \lim \sum_j E\big[\big(M(t_j) - M(t_{j-1})\big) A(t_{j-1})\big] + \lim \sum_j E\big[M(t_j)\big(A(t_j) - A(t_{j-1})\big)\big] = \lim \sum_j E\big[M(t_j)\big(A(t_j) - A(t_{j-1})\big)\big] = E\Big[\int_0^t M(s)\, dA(s)\Big]$$

where the first sum vanishes term by term because $A(t_{j-1})$ is $\mathcal{F}_{t_{j-1}}$ measurable and $M$ is a martingale. The limit is over the partition $\{t_j\}$ becoming dense in $[0, t]$; one uses the integrability of $\sup_{0 \le s \le t}|M(s)| \cdot \mathrm{Var}_{[0,t]} A(\cdot)$ and the dominated convergence theorem to complete the proof.
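As a numerical aside on the generator computation of the previous section: for $f(x) = \sin x$ one has $(T_t f)(x) = E[f(x + \sqrt{t}\, Z)] = \sin(x)\, e^{-t/2}$ in closed form, so the difference quotient $(T_t f - f)/t$ should approach $\frac{1}{2} f''(x) = -\frac{1}{2}\sin x$ as $t \to 0$. The point $x_0$ and the quadrature grid below are illustrative choices.

```python
import numpy as np

# Sketch: compute (T_t f)(x0) = E[sin(x0 + sqrt(t) Z)], Z ~ N(0,1), by
# quadrature and watch (T_t f - f)/t approach (1/2) f''(x0) = -sin(x0)/2.
x0 = 0.3
z = np.linspace(-8.0, 8.0, 4001)
dz = z[1] - z[0]
phi = np.exp(-z**2 / 2) / np.sqrt(2 * np.pi)   # standard normal density
ratios = {}
for t in (0.1, 0.01, 0.001):
    Ttf = np.sum(np.sin(x0 + np.sqrt(t) * z) * phi) * dz
    ratios[t] = (Ttf - np.sin(x0)) / t
print(ratios)   # values approach -sin(x0)/2
```

The successive ratios converge linearly in $t$ to $-\sin(0.3)/2 \approx -0.1478$, matching the Taylor-expansion argument above.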

Now, to go from (2.6) to (2.7), we choose

$$M(t) = u(t, x(t)) - u(0, x(0)) - \int_0^t \Big[u_t + \frac{1}{2}\Delta u\Big](s, x(s))\, ds$$

and

$$A(t) = \exp\Big[-\int_0^t \frac{u_t + \frac{1}{2}\Delta u}{u}(s, x(s))\, ds\Big]$$

2.3 Stochastic Integrals

If $y_1, \ldots, y_n$ is a martingale relative to the $\sigma$-fields $\mathcal{F}_j$, and if $e_j(\omega)$ are random functions that are $\mathcal{F}_j$ measurable, then the sequence

$$z_j = \sum_{k=0}^{j-1} e_k(\omega)\big[y_{k+1} - y_k\big]$$

is again a martingale with respect to the $\sigma$-fields $\mathcal{F}_j$, provided the expectations are finite. A computation shows that if

$$a_j(\omega) = E^P\big[(y_{j+1} - y_j)^2 \,\big|\, \mathcal{F}_j\big]$$

then

$$E^P\big[z_j^2\big] = E^P\Big[\sum_{k=0}^{j-1} a_k(\omega)\, e_k^2(\omega)\Big]$$

or more precisely

$$E^P\big[(z_{j+1} - z_j)^2 \,\big|\, \mathcal{F}_j\big] = a_j(\omega)\, e_j^2(\omega) \quad \text{a.e. } P$$

Formally one can write

$$\delta z_j = z_{j+1} - z_j = e_j(\omega)\, \delta y_j = e_j(\omega)\big(y_{j+1} - y_j\big)$$

and $z_j$ is called a martingale transform of $y_j$; the size of $z_n$, measured by its mean square, is exactly equal to $E^P\big[\sum_{j=0}^{n-1} e_j^2(\omega)\, a_j(\omega)\big]$. The stochastic integral is just the continuous analog of this.

Theorem 2.3. Let $y(t)$ be an almost surely continuous martingale relative to $(\Omega, \mathcal{F}_t, P)$ such that $y(0) = 0$ a.e. $P$ and $y^2(t) - \int_0^t a(s, \omega)\, ds$ is again a martingale relative to $(\Omega, \mathcal{F}_t, P)$, where $a(s, \omega)$ is a bounded progressively measurable function. Then for progressively measurable functions $e(\cdot, \cdot)$ satisfying, for every $t > 0$,

$$E^P\Big[\int_0^t e^2(s)\, a(s)\, ds\Big] < \infty$$

the stochastic integral $z(t) = \int_0^t e(s)\, dy(s)$ makes sense as an almost surely continuous martingale with respect to $(\Omega, \mathcal{F}_t, P)$, and

$$z^2(t) - \int_0^t e^2(s)\, a(s)\, ds$$

is again a martingale with respect to $(\Omega, \mathcal{F}_t, P)$. In particular

$$E^P\big[z^2(t)\big] = E^P\Big[\int_0^t e^2(s)\, a(s)\, ds\Big] \tag{2.8}$$

Proof. Step 1. The statements are obvious if $e(s)$ is a constant.

Step 2. Assume that $e(s)$ is a simple function given by $e(s, \omega) = e_j(\omega)$ for $t_j \le s < t_{j+1}$, where $e_j(\omega)$ is $\mathcal{F}_{t_j}$ measurable and bounded, and $0 = t_0 < t_1 < \cdots < t_N < t_{N+1} = \infty$. Then we can define inductively

$$z(t) = z(t_j) + e(t_j, \omega)\big[y(t) - y(t_j)\big]$$

for $t_j \le t \le t_{j+1}$. Clearly $z(t)$ and

$$z^2(t) - \int_0^t e^2(s, \omega)\, a(s, \omega)\, ds$$

are martingales in each interval $[t_j, t_{j+1}]$. Since the definitions match at the end points, the martingale property holds for all $t \ge 0$.

Step 3. If $e_k(s, \omega)$ is a sequence of uniformly bounded progressively measurable functions converging to $e(s, \omega)$ as $k \to \infty$ in such a way that

$$\lim_{k \to \infty} \int_0^t |e_k(s) - e(s)|^2\, a(s)\, ds = 0$$

for every $t > 0$, then, because of the relation (2.8),

$$\lim_{k, k' \to \infty} E^P\big[|z_k(t) - z_{k'}(t)|^2\big] = \lim_{k, k' \to \infty} E^P\Big[\int_0^t |e_k(s) - e_{k'}(s)|^2\, a(s)\, ds\Big] = 0.$$

Combined with Doob's inequality, we conclude the existence of an almost surely continuous martingale $z(t)$ such that

$$\lim_{k \to \infty} E^P\Big[\sup_{0 \le s \le t} |z_k(s) - z(s)|^2\Big] = 0$$

and clearly

$$z^2(t) - \int_0^t e^2(s)\, a(s)\, ds$$

is an $(\Omega, \mathcal{F}_t, P)$ martingale.

Step 4. All we need to worry about now is approximating $e(\cdot, \cdot)$. Any bounded, progressively measurable, almost surely continuous $e(s, \omega)$ can be approximated by $e_k(s, \omega) = e\big(\frac{[ks]}{k}, \omega\big)$, which is piecewise constant and levels off at time $k$. It is trivial to see that for every $t > 0$,

$$\lim_{k \to \infty} \int_0^t |e_k(s) - e(s)|^2\, a(s)\, ds = 0$$

Step 5. Any bounded progressively measurable $e(s, \omega)$ can be approximated by continuous ones by defining

$$e_k(s, \omega) = k \int_{s - \frac{1}{k}}^{s} e(u, \omega)\, du$$

and again it is trivial to see that it works.

Step 6. Finally, if $e(s, \omega)$ is unbounded we can approximate it by truncation, $e_k(s, \omega) = f_k(e(s, \omega))$ where $f_k(x) = x$ for $|x| \le k$ and $0$ otherwise. This completes the proof of the theorem.

Suppose we have an almost surely continuous process $x(t, \omega)$ defined on some $(\Omega, \mathcal{F}_t, P)$, and progressively measurable functions $b(s, \omega)$ and $a(s, \omega)$ with $a \ge 0$, such that

$$y(t, \omega) = x(t, \omega) - x(0, \omega) - \int_0^t b(s, \omega)\, ds \qquad \text{and} \qquad y^2(t, \omega) - \int_0^t a(s, \omega)\, ds$$

are martingales with respect to $(\Omega, \mathcal{F}_t, P)$. The stochastic integral $z(t) = \int_0^t e(s)\, dx(s)$ is defined by

$$z(t) = \int_0^t e(s)\, dx(s) = \int_0^t e(s)\, b(s)\, ds + \int_0^t e(s)\, dy(s)$$

For this to make sense we need, for every $t$,

$$E^P\Big[\int_0^t |e(s)\, b(s)|\, ds\Big] < \infty \qquad \text{and} \qquad E^P\Big[\int_0^t e^2(s)\, a(s)\, ds\Big] < \infty$$
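The isometry (2.8) is easy to sketch in discrete time, exactly in the martingale-transform form that opens this section. Here $y$ is a discretized Brownian motion (so $a \equiv 1$) and the integrand $e(s) = x(s)$ is frozen at the left endpoint of each partition interval as in Step 2; for this choice both sides of (2.8) equal $t^2/2$. The mesh, sample size and seed are illustrative.

```python
import numpy as np

# Sketch of (2.8): z = sum e(t_j) [y(t_{j+1}) - y(t_j)] with e(s) = x(s),
# compared with E[int_0^t e^2(s) a(s) ds] for Brownian motion (a = 1).
rng = np.random.default_rng(2)
t, n_steps, n_paths = 1.0, 500, 100_000
dt = t / n_steps
x = np.zeros(n_paths)
z = np.zeros(n_paths)
int_e2 = np.zeros(n_paths)
for _ in range(n_steps):
    dy = rng.normal(0.0, np.sqrt(dt), n_paths)
    z += x * dy            # integrand evaluated before seeing the increment
    int_e2 += x**2 * dt
    x += dy
lhs = (z**2).mean()        # E[z(t)^2]
rhs = int_e2.mean()        # E[int_0^t e^2(s) a(s) ds]
print(lhs, rhs)            # both ~ t^2/2 = 0.5
```

Evaluating $e$ at the left endpoint is what makes the transform a martingale; moving the evaluation point breaks the isometry, which is the theme of Section 2.4.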

We assume for simplicity that $eb$ and $e^2 a$ are uniformly bounded functions in $t$ and $\omega$. It then follows, for any $\mathcal{F}_0$ measurable $z(0)$, that

$$z(t) = z(0) + \int_0^t e(s)\, dx(s)$$

is again an almost surely continuous process such that

$$y'(t) = z(t) - z(0) - \int_0^t b'(s, \omega)\, ds \qquad \text{and} \qquad y'^2(t) - \int_0^t a'(s, \omega)\, ds$$

are martingales, with $b' = eb$ and $a' = e^2 a$.

Exercise 2.3. If $e$ is such that $eb$ and $e^2 a$ are bounded, then prove directly that the exponentials

$$\exp\Big[\lambda\big(z(t) - z(0)\big) - \lambda \int_0^t e(s)\, b(s)\, ds - \frac{\lambda^2}{2} \int_0^t a(s)\, e^2(s)\, ds\Big]$$

are $(\Omega, \mathcal{F}_t, P)$ martingales.

We can easily do the multidimensional generalization. Let $y(t)$ be a vector valued martingale with $n$ components $y_1(t), \ldots, y_n(t)$ such that

$$y_i(t)\, y_j(t) - \int_0^t a_{i,j}(s, \omega)\, ds$$

are again martingales with respect to $(\Omega, \mathcal{F}_t, P)$. Assume that the progressively measurable functions $\{a_{i,j}(t, \omega)\}$ are symmetric and positive semidefinite for every $t$ and $\omega$, and are uniformly bounded in $t$ and $\omega$. Then the stochastic integral

$$z(t) = z(0) + \int_0^t \langle e(s), dy(s)\rangle = z(0) + \sum_i \int_0^t e_i(s)\, dy_i(s)$$

is well defined for vector valued progressively measurable functions $e(s, \omega)$ such that

$$E^P\Big[\int_0^t \langle e(s), a(s)\, e(s)\rangle\, ds\Big] < \infty$$

In a similar fashion to the scalar case, for any diffusion process $x(t)$ corresponding to $b(s, \omega) = \{b_i(s, \omega)\}$ and $a(s, \omega) = \{a_{i,j}(s, \omega)\}$, and any $e(s, \omega) = \{e_i(s, \omega)\}$ which is progressively measurable and uniformly bounded,

$$z(t) = z(0) + \int_0^t \langle e(s), dx(s)\rangle$$

is well defined and is a diffusion corresponding to the coefficients

$$\tilde{b}(s, \omega) = \langle e(s, \omega), b(s, \omega)\rangle \qquad \text{and} \qquad \tilde{a}(s, \omega) = \langle e(s, \omega), a(s, \omega)\, e(s, \omega)\rangle$$

It is now a simple exercise to define stochastic integrals of the form

$$z(t) = z(0) + \int_0^t \sigma(s, \omega)\, dx(s)$$

where $\sigma(s, \omega)$ is a matrix of dimension $m \times n$ that has the suitable properties of boundedness and progressive measurability. $z(t)$ is easily seen to correspond to the coefficients

$$\tilde{b}(s) = \sigma(s)\, b(s) \qquad \text{and} \qquad \tilde{a}(s) = \sigma(s)\, a(s)\, \sigma^*(s)$$

The analogy here is to linear transformations of Gaussian variables. If $\xi$ is a Gaussian vector in $R^n$ with mean $\mu$ and covariance $A$, and if $\eta = T\xi$ is a linear transformation from $R^n$ to $R^m$, then $\eta$ is again Gaussian in $R^m$ and has mean $T\mu$ and covariance matrix $T A T^*$.

Exercise 2.4. If $x(t)$ is Brownian motion in $R^n$ and $\sigma(s, \omega)$ is a progressively measurable bounded function, then

$$z(t) = \int_0^t \sigma(s, \omega)\, dx(s)$$

is again a Brownian motion in $R^n$ if and only if $\sigma$ is an orthogonal matrix for almost all $s$ (with respect to Lebesgue measure) and $\omega$ (with respect to $P$).

Exercise 2.5. We can mix stochastic and ordinary integrals. If we define

$$z(t) = z(0) + \int_0^t \sigma(s)\, dx(s) + \int_0^t f(s)\, ds$$

where $x(s)$ is a process corresponding to $[b(s), a(s)]$, then $z(t)$ corresponds to

$$\tilde{b}(s) = \sigma(s)\, b(s) + f(s) \qquad \text{and} \qquad \tilde{a}(s) = \sigma(s)\, a(s)\, \sigma^*(s)$$

The analogy is again to affine linear transformations of Gaussians, $\eta = T\xi + \gamma$.

Exercise 2.6. Chain rule. If we transform from $x$ to $z$ and again from $z$ to $w$, it is the same as making a single transformation from $x$ to $w$:

$$dz(s) = \sigma(s)\, dx(s) + f(s)\, ds \qquad \text{and} \qquad dw(s) = \tau(s)\, dz(s) + g(s)\, ds$$

can be rewritten as

$$dw(s) = \tau(s)\,\sigma(s)\, dx(s) + \big[\tau(s)\, f(s) + g(s)\big]\, ds$$
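The Gaussian analogy above is itself easy to check by simulation: a linear map $T$ sends a Gaussian vector with covariance $A$ to one with covariance $T A T^*$. The matrices, sample size and seed below are arbitrary illustrative choices.

```python
import numpy as np

# Sketch of the Gaussian analogy for z(t) = int sigma(s) dx(s):
# eta = T xi has covariance T A T* when Cov(xi) = A.
rng = np.random.default_rng(6)
A = np.array([[2.0, 0.5],
              [0.5, 1.0]])
T = np.array([[1.0, -1.0],
              [0.5,  2.0],
              [0.0,  1.0]])       # maps R^2 -> R^3
xi = rng.multivariate_normal(np.zeros(2), A, size=500_000)
eta = xi @ T.T
emp_cov = np.cov(eta, rowvar=False)
max_err = np.max(np.abs(emp_cov - T @ A @ T.T))
print(max_err)                    # small Monte Carlo error
```

The empirical covariance of $\eta$ matches $T A T^*$ entrywise up to sampling noise, mirroring how $\tilde{a} = \sigma a \sigma^*$ transforms under the stochastic integral.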

2.4 Itô's Formula.

The chain rule in ordinary calculus allows us to compute

$$df(t, x(t)) = f_t(t, x(t))\, dt + \nabla f(t, x(t)) \cdot dx(t)$$

We replace $x(t)$ by a Brownian path, say in one dimension to keep things simple, and for $f$ take the simplest nonlinear function $f(x) = x^2$, which is independent of $t$. We are looking for a formula of the type

$$\beta^2(t) - \beta^2(0) = 2\int_0^t \beta(s)\, d\beta(s) \tag{2.9}$$

We have already defined integrals of the form

$$\int_0^t \beta(s)\, d\beta(s) \tag{2.10}$$

as Itô stochastic integrals. But still, a formula of the type (2.9) cannot possibly hold. The left hand side has expectation $t$ while the right hand side, as a stochastic integral with respect to $\beta(\cdot)$, has mean zero. For Itô's theory it was important to evaluate $\beta(s)$ at the back end of the interval $[t_{j-1}, t_j]$ before multiplying by the increment $(\beta(t_j) - \beta(t_{j-1}))$, to keep things progressively measurable. That meant the stochastic integral (2.10) was approximated by the sums $\sum_j \beta(t_{j-1})(\beta(t_j) - \beta(t_{j-1}))$ over successive partitions of $[0, t]$. We could instead have approximated by sums of the form $\sum_j \beta(t_j)(\beta(t_j) - \beta(t_{j-1}))$. In ordinary calculus, because $\beta(\cdot)$ would be a continuous function of bounded variation in $t$, the difference would be negligible as the partitions became finer, leading to the same answer. But in Itô calculus the difference does not go to $0$. The difference $D_\pi$ is given by

$$D_\pi = \sum_j \beta(t_j)\big(\beta(t_j) - \beta(t_{j-1})\big) - \sum_j \beta(t_{j-1})\big(\beta(t_j) - \beta(t_{j-1})\big) = \sum_j \big(\beta(t_j) - \beta(t_{j-1})\big)^2$$

An easy computation gives

$$E[D_\pi] = t \qquad \text{and} \qquad E\big[(D_\pi - t)^2\big] = 2\sum_j (t_j - t_{j-1})^2$$

which tends to $0$ as the partition is refined. On the other hand, if we are neutral and approximate the integral (2.10) by

$$\sum_j \frac{1}{2}\big(\beta(t_{j-1}) + \beta(t_j)\big)\big(\beta(t_j) - \beta(t_{j-1})\big)$$

then we can simplify and calculate the limit as

$$\lim \sum_j \frac{1}{2}\big(\beta^2(t_j) - \beta^2(t_{j-1})\big) = \frac{1}{2}\big(\beta^2(t) - \beta^2(0)\big)$$

This means that, as we defined it, (2.10) can be calculated as

$$\int_0^t \beta(s)\, d\beta(s) = \frac{1}{2}\big(\beta^2(t) - \beta^2(0) - t\big)$$

or the correct version of (2.9) is

$$\beta^2(t) - \beta^2(0) = 2\int_0^t \beta(s)\, d\beta(s) + t$$

Now we can attempt to calculate $f(\beta(t)) - f(\beta(0))$ for a smooth function $f$ of one variable. Roughly speaking, by a two term Taylor expansion,

$$f(\beta(t)) - f(\beta(0)) = \sum_j \big[f(\beta(t_j)) - f(\beta(t_{j-1}))\big] = \sum_j f'(\beta(t_{j-1}))\big(\beta(t_j) - \beta(t_{j-1})\big) + \frac{1}{2}\sum_j f''(\beta(t_{j-1}))\big(\beta(t_j) - \beta(t_{j-1})\big)^2 + \sum_j O\big(|\beta(t_j) - \beta(t_{j-1})|^3\big)$$

The expected value of the error term is approximately $\sum_j O\big(|t_j - t_{j-1}|^{3/2}\big) = o(1)$, leading to Itô's formula

$$f(\beta(t)) - f(\beta(0)) = \int_0^t f'(\beta(s))\, d\beta(s) + \frac{1}{2}\int_0^t f''(\beta(s))\, ds \tag{2.11}$$

It takes some effort to see that

$$\sum_j f''(\beta(t_{j-1}))\big(\beta(t_j) - \beta(t_{j-1})\big)^2 \to \int_0^t f''(\beta(s))\, ds$$

But the idea is that, because $f''(\beta(s))$ is continuous in $t$, we can pretend that it is locally constant and use the calculation we did for $x^2$, where $f''$ is a constant. While we can make a proof after a careful estimation of all the errors, in fact we do not have to do it. After all, we have already defined the stochastic integral (2.10). We should be able to verify (2.11) by computing the mean square of the difference and showing that it is $0$. In fact we will do it very generally without much effort. We have the tools already.
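Both facts just used, the concentration of $D_\pi = \sum_j (\Delta\beta_j)^2$ at $t$ and formula (2.11) itself, are easy to sketch numerically. The time horizons, mesh sizes, test function $\sin$ and seed below are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(3)

# (i) D_pi over a partition of [0, t] into n equal pieces has mean t and
# variance 2 t^2 / n, so it concentrates at t as the mesh is refined.
t = 2.0
D = {n: np.sum(rng.normal(0.0, np.sqrt(t / n), size=n) ** 2)
     for n in (100, 10_000, 1_000_000)}
print(D)   # values approach t = 2.0 as n grows

# (ii) Ito's formula (2.11) for f(x) = sin(x) along one discretized path:
# the left-endpoint sums for int f'(beta) d beta + (1/2) int f''(beta) ds
# reproduce f(beta(T)) - f(beta(0)) up to discretization error.
T, n = 1.0, 200_000
dt = T / n
beta = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), size=n))])
left = beta[:-1]              # beta(t_{j-1}): progressively measurable
d_beta = np.diff(beta)
rhs = np.sum(np.cos(left) * d_beta) - 0.5 * np.sum(np.sin(left)) * dt
lhs = np.sin(beta[-1]) - np.sin(beta[0])
print(lhs, rhs)
```

Note that the quadratic variation correction is what separates the stochastic chain rule from the ordinary one: dropping the $\frac{1}{2}\int f''$ term in (ii) leaves an $O(1)$ discrepancy, not a discretization error.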

Theorem 2.4. Let $x(t)$ be an almost surely continuous process with values in $R^d$ such that

$$y_i(t) = x_i(t) - x_i(0) - \int_0^t b_i(s, \omega)\, ds \tag{2.12}$$

and

$$y_i(t)\, y_j(t) - \int_0^t a_{i,j}(s, \omega)\, ds \tag{2.13}$$

are martingales for $1 \le i, j \le d$. For any smooth function $u(t, x)$ on $[0, \infty) \times R^d$,

$$u(t, x(t)) - u(0, x(0)) = \int_0^t u_s(s, x(s))\, ds + \int_0^t \langle(\nabla u)(s, x(s)), dx(s)\rangle + \frac{1}{2}\int_0^t \sum_{i,j} a_{i,j}(s, \omega)\, \frac{\partial^2 u}{\partial x_i \partial x_j}(s, x(s))\, ds$$

Proof. Let us define the stochastic process

$$\xi(t) = u(t, x(t)) - u(0, x(0)) - \int_0^t u_s(s, x(s))\, ds - \int_0^t \langle(\nabla u)(s, x(s)), dx(s)\rangle - \frac{1}{2}\int_0^t \sum_{i,j} a_{i,j}(s, \omega)\, \frac{\partial^2 u}{\partial x_i \partial x_j}(s, x(s))\, ds \tag{2.14}$$

We define a $(d+1)$ dimensional process $\tilde{x}(t) = \{u(t, x(t)), x(t)\}$, which is again a process with almost surely continuous paths satisfying relations analogous to (2.12) and (2.13) with coefficients $\tilde{b}$, $\tilde{a}$. If we number the extra coordinate by $0$, then

$$\tilde{b}_i = \begin{cases} \big(u_s + L_{s,\omega} u\big)(s, x(s)) & \text{if } i = 0 \\ b_i(s, \omega) & \text{if } i \ge 1 \end{cases} \qquad\qquad \tilde{a}_{i,j} = \begin{cases} \langle a(s, \omega)\,\nabla u, \nabla u\rangle & \text{if } i = j = 0 \\ \big(a(s, \omega)\,\nabla u\big)_i & \text{if } j = 0,\ i \ge 1 \\ a_{i,j}(s, \omega) & \text{if } i, j \ge 1 \end{cases}$$

The actual computation is interesting and reveals the connection between ordinary calculus, second order operators and Itô calculus. If we want to know the parameters of the process $\tilde{x}(t)$, with coordinates $y = (y_0, y_1, \ldots, y_d)$, then we need to know what to subtract from $v(t, \tilde{x}(t)) - v(0, \tilde{x}(0))$ to obtain a martingale. But $v(t, \tilde{x}(t)) = w(t, x(t))$, where

$$w(t, x) = v(t, u(t, x), x)$$

and if we compute, with $L_{t,\omega} = \sum_i b_i\, \partial_{x_i} + \frac{1}{2}\sum_{i,j} a_{i,j}\, \partial^2_{x_i x_j}$,

$$\big(w_t + L_{t,\omega} w\big)(t, x) = v_t + v_u\Big[u_t + \sum_i b_i\, u_{x_i} + \frac{1}{2}\sum_{i,j} a_{i,j}\, u_{x_i, x_j}\Big] + \sum_i b_i\, v_{x_i} + \frac{1}{2}\, v_{u,u} \sum_{i,j} a_{i,j}\, u_{x_i}\, u_{x_j} + \sum_{i,j} v_{u, x_i}\, a_{i,j}\, u_{x_j} + \frac{1}{2}\sum_{i,j} a_{i,j}\, v_{x_i, x_j}$$

while

$$\tilde{L}_{t,\omega}\, v = v_t + \sum_i \tilde{b}_i(s, \omega)\, v_{y_i} + \frac{1}{2}\sum_{i,j} \tilde{a}_{i,j}(s, \omega)\, v_{y_i, y_j}$$

and the two expressions agree. We can construct stochastic integrals with respect to the $(d+1)$ dimensional process, and $\xi(t)$ defined by (2.14) is again an almost surely continuous process whose parameters can be calculated. After all,

$$\xi(t) = \int_0^t \langle f(s, \omega), d\tilde{x}(s)\rangle + \int_0^t g(s, \omega)\, ds$$

with

$$f_i(s, \omega) = \begin{cases} 1 & \text{if } i = 0 \\ -(\nabla u)_i(s, x(s)) & \text{if } i \ge 1 \end{cases} \qquad \text{and} \qquad g(s, \omega) = -u_s - \frac{1}{2}\sum_{i,j} a_{i,j}(s, \omega)\, \frac{\partial^2 u}{\partial x_i \partial x_j}(s, x(s))$$

Denoting the parameters of $\xi(\cdot)$ by $B(s, \omega)$ and $A(s, \omega)$, we find

$$A(s, \omega) = \langle f(s, \omega), \tilde{a}(s, \omega)\, f(s, \omega)\rangle = \langle a\nabla u, \nabla u\rangle - 2\langle a\nabla u, \nabla u\rangle + \langle a\nabla u, \nabla u\rangle = 0$$

and

$$B(s, \omega) = \langle \tilde{b}, f\rangle + g = \tilde{b}_0(s, \omega) - \langle b(s, \omega), \nabla u(s, x(s))\rangle - u_s(s, x(s)) - \frac{1}{2}\sum_{i,j} a_{i,j}(s, \omega)\, \frac{\partial^2 u}{\partial x_i \partial x_j}(s, x(s)) = 0$$

Now all we are left with is the following

Lemma 2.5. If $\xi(t)$ is a scalar process corresponding to the coefficients $b \equiv 0$ and $a \equiv 0$, then $\xi(t) \equiv \xi(0)$ a.e.

Proof. Just compute

$$E\big[(\xi(t) - \xi(0))^2\big] = E\Big[\int_0^t 0\, ds\Big] = 0$$

This concludes the proof of the theorem.

Exercise 2.7. Itô's formula is a local formula that is valid for almost all paths. If $u$ is a smooth function, i.e. with one continuous $t$ derivative and two continuous $x$ derivatives, (2.11) must still be valid a.e. We cannot do it with moments, because for moments to exist we need control on growth at infinity. But it should not matter. Should it?

Application: Local time in one dimension. Tanaka formula. If $\beta(t)$ is the one dimensional Brownian motion, for any path $\beta(\cdot)$ and any $t$, the occupation measure $L_t(A, \omega)$ is defined by

$$L_t(A, \omega) = m\{s : 0 \le s \le t \ \&\ \beta(s) \in A\}$$

Theorem 2.6. There exists a function $l(t, y, \omega)$ such that, for almost all $\omega$,

$$L_t(A, \omega) = \int_A l(t, y, \omega)\, dy$$

identically in $t$.

Proof. Formally

$$l(t, y, \omega) = \int_0^t \delta(\beta(s) - y)\, ds$$

but we have to make sense out of it. From Itô's formula

$$f(\beta(t)) - f(\beta(0)) = \int_0^t f'(\beta(s))\, d\beta(s) + \frac{1}{2}\int_0^t f''(\beta(s))\, ds$$

If we take $f(x) = |x - y|$, then $f'(x) = \operatorname{sign}(x - y)$ and $\frac{1}{2} f''(x) = \delta(x - y)$. We get the identity

$$|\beta(t) - y| - |\beta(0) - y| - \int_0^t \operatorname{sign}(\beta(s) - y)\, d\beta(s) = \int_0^t \delta(\beta(s) - y)\, ds = l(t, y, \omega)$$

While we have not proved the identity, we can use it to define $l(\cdot, \cdot, \cdot)$. It is now well defined as a continuous function of $t$, for almost all $\omega$ for each $y$, and by Fubini's theorem for almost all $y$ and $\omega$.

Now all we need to do is to check that it works. It is enough to check that for any smooth test function $\phi$ with compact support,

$$\int_R \phi(y)\, l(t, y, \omega)\, dy = \int_0^t \phi(\beta(s))\, ds \tag{2.15}$$

The function

$$\psi(x) = \int_R |x - y|\, \phi(y)\, dy$$

is smooth, and a straightforward calculation shows

$$\psi'(x) = \int_R \operatorname{sign}(x - y)\, \phi(y)\, dy \qquad \text{and} \qquad \psi''(x) = 2\phi(x)$$

It is easy to see that (2.15) is nothing but Itô's formula for $\psi$.

Remark 2.2. One can estimate

$$E\Big[\Big|\int_0^t \big(\operatorname{sign}(\beta(s) - y) - \operatorname{sign}(\beta(s) - z)\big)\, d\beta(s)\Big|^4\Big] \le C\, |y - z|^2$$

and by Garsia-Rodemich-Rumsey or Kolmogorov one can conclude that for each $t$, $l(t, y, \omega)$ is almost surely a continuous function of $y$.

Remark 2.3. With a little more work one can get it to be jointly continuous in $t$ and $y$ for almost all $\omega$.
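The Tanaka identity that defines $l(t, y, \omega)$ can be sketched along a single discretized path by comparing its right hand side with the occupation-measure density $m\{s \le t : |\beta(s) - y| \le h\}/(2h)$. Both estimators are noisy, and every parameter below (level $y$, window $h$, mesh, seed) is an illustrative choice.

```python
import numpy as np

# Sketch of the Tanaka definition of local time along one path:
# |beta(t) - y| - |beta(0) - y| - sum sign(beta - y) d beta, compared with
# the occupation-time density of a small window around the level y.
rng = np.random.default_rng(5)
t, y, h, n = 1.0, 0.2, 0.05, 1_000_000
dt = t / n
beta = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), size=n))])
left = beta[:-1]                  # left endpoints: progressively measurable
d_beta = np.diff(beta)
tanaka = (np.abs(beta[-1] - y) - np.abs(beta[0] - y)
          - np.sum(np.sign(left - y) * d_beta))
occupation = np.sum(np.abs(left - y) <= h) * dt / (2 * h)
print(tanaka, occupation)         # two noisy estimates of l(t, y, .)
```

The two numbers agree up to the window-smoothing and discretization errors; local time is nonnegative, which the Tanaka estimate reproduces up to a small mesh-dependent error.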