Stochastic Differential Equations.


Chapter 3 Stochastic Differential Equations.

3.1 Existence and Uniqueness.

One of the ways of constructing a diffusion process is to solve the stochastic differential equation
\[
dx(t) = \sigma(t, x(t))\, d\beta(t) + b(t, x(t))\, dt\,; \qquad x(0) = x_0 \tag{3.1}
\]
where $x_0 \in \mathbb{R}^d$ is either nonrandom or measurable with respect to $\mathcal{F}_0$. This is of course written as the stochastic integral equation
\[
x(t) = x(0) + \int_0^t \sigma(s, x(s))\, d\beta(s) + \int_0^t b(s, x(s))\, ds \tag{3.2}
\]
If $\sigma(s, x)$ and $b(s, x)$ satisfy the growth conditions
\[
\|\sigma(s, x)\| \le C(1 + \|x\|)\,; \qquad \|b(s, x)\| \le C(1 + \|x\|) \tag{3.3}
\]
and the Lipschitz conditions
\[
\|\sigma(s, x) - \sigma(s, y)\| \le C\|x - y\|\,; \qquad \|b(s, x) - b(s, y)\| \le C\|x - y\| \tag{3.4}
\]
then one can prove existence and uniqueness by a Picard-type iteration scheme.

Theorem 3.1. Given $\sigma, b$ that satisfy (3.3) and (3.4), and given $x_0$ which is $\mathcal{F}_0$-measurable, there is a unique solution $x(t)$ of (3.2) within the class of progressively measurable, almost surely continuous solutions.

Proof. Define iteratively $x_0(t) \equiv x_0$ and
\[
x_n(t) = x_0 + \int_0^t \sigma(s, x_{n-1}(s))\, d\beta(s) + \int_0^t b(s, x_{n-1}(s))\, ds \tag{3.5}
\]
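The Picard iteration (3.5) can also be run numerically on a fixed Brownian path; for Lipschitz coefficients the gap between successive iterates decays factorially, mirroring the estimate in the proof. The following is an illustrative sketch (the constant diffusion $\sigma \equiv 0.5$ and linear drift $b(x) = -x$ are example choices, not coefficients from the notes):

```python
import numpy as np

rng = np.random.default_rng(0)
T, n = 1.0, 1000
dt = T / n
dbeta = rng.normal(0.0, np.sqrt(dt), n)   # Brownian increments on a fixed grid

# Illustrative Lipschitz coefficients; any pair satisfying (3.3)-(3.4) works.
def sigma(s, x):
    return 0.5 * np.ones_like(x)

def b(s, x):
    return -x

x0 = 1.0
t = np.linspace(0.0, T, n + 1)
x = np.full(n + 1, x0)        # zeroth iterate: x_0(t) = x0
gaps = []                     # sup_t |x_{n+1}(t) - x_n(t)| per iteration
for _ in range(8):
    # discrete analogue of (3.5): stochastic + Riemann integrals as cumulative sums
    incr = sigma(t[:-1], x[:-1]) * dbeta + b(t[:-1], x[:-1]) * dt
    x_new = np.concatenate([[x0], x0 + np.cumsum(incr)])
    gaps.append(np.max(np.abs(x_new - x)))
    x = x_new
```

On this example the successive gaps shrink rapidly, consistent with the $C_T^n t^n / n!$ bound.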

If we denote the difference $x_n(t) - x_{n-1}(t)$ by $z_n(t)$, then
\[
z_{n+1}(t) = \int_0^t [\sigma(s, x_n(s)) - \sigma(s, x_{n-1}(s))]\, d\beta(s) + \int_0^t [b(s, x_n(s)) - b(s, x_{n-1}(s))]\, ds
\]
If we limit ourselves to a finite interval $0 \le t \le T$, then
\[
E\Big[\Big|\int_0^t [\sigma(s, x_n(s)) - \sigma(s, x_{n-1}(s))]\, d\beta(s)\Big|^2\Big] \le C\, E\Big[\int_0^t |z_n(s)|^2\, ds\Big]
\]
and
\[
E\Big[\Big|\int_0^t [b(s, x_n(s)) - b(s, x_{n-1}(s))]\, ds\Big|^2\Big] \le C T\, E\Big[\int_0^t |z_n(s)|^2\, ds\Big]
\]
Therefore
\[
E\big[|z_{n+1}(t)|^2\big] \le C_T\, E\Big[\int_0^t |z_n(s)|^2\, ds\Big]
\]
With the help of Doob's inequality one can get
\[
\Delta_{n+1}(t) = E\Big[\sup_{0 \le s \le t} |z_{n+1}(s)|^2\Big] \le C_T \int_0^t \Delta_n(s)\, ds
\]
By induction this yields
\[
\Delta_n(t) \le A\, \frac{C_T^n\, t^n}{n!}
\]
which is sufficient to prove the existence of an almost sure uniform limit $x(t)$ of $x_n(t)$ on bounded intervals $[0, T]$. The limit $x(t)$ is clearly a solution of (3.2). Uniqueness is essentially the same proof. For the difference $z(t)$ of two solutions one quickly establishes
\[
E\big[|z(t)|^2\big] \le C_T\, E\Big[\int_0^t |z(s)|^2\, ds\Big]
\]
which, by Gronwall's inequality, suffices to prove that $z(t) \equiv 0$.

Once we have uniqueness, one should think of $x(t)$ as a map of $x_0$ and the Brownian increments $d\beta$ in the interval $[0, t]$. In particular $x(t)$ is a map of $x(s)$ and the Brownian increments over the interval $[s, t]$. Since $x(s)$ is $\mathcal{F}_s$-measurable, we can conclude that $x(t)$ is a Markov process with transition probability
\[
p(s, x, t, A) = P[x(t; s, x) \in A]
\]
where $x(t; s, x)$ is the solution of (3.2) for $t \ge s$, initialized to start with $x(s) = x$.
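The transition probability can be checked numerically in a case where it is known in closed form. For the Ornstein–Uhlenbeck equation $dx = -x\,dt + d\beta$ (an illustrative choice, not an example from the notes), $p(0, x_0, t, \cdot)$ is Gaussian with mean $x_0 e^{-t}$ and variance $(1 - e^{-2t})/2$, so an Euler–Maruyama simulation of (3.2) should reproduce these moments:

```python
import numpy as np

rng = np.random.default_rng(1)
x0, t_end = 1.0, 1.0
n, N = 400, 100_000            # time steps, sample paths
dt = t_end / n

# Euler-Maruyama for dx = -x dt + dbeta, all N paths at once
x = np.full(N, x0)
for _ in range(n):
    x += -x * dt + rng.normal(0.0, np.sqrt(dt), N)

# exact transition law: N(x0 * e^{-t}, (1 - e^{-2t}) / 2)
mean_exact = x0 * np.exp(-t_end)
var_exact = 0.5 * (1.0 - np.exp(-2.0 * t_end))
```

The empirical mean and variance of the simulated $x(t)$ match the exact transition law up to Monte Carlo and discretization error.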

It is easy to see, by an application of Itô's lemma, that
\[
M(t) = u(t, x(t)) - u(0, x(0)) - \int_0^t \Big[\frac{\partial u}{\partial s}(s, x(s)) + \frac{1}{2}\sum_{i,j} a_{i,j}(s, x(s))\, \frac{\partial^2 u}{\partial x_i\, \partial x_j}(s, x(s)) + \sum_i b_i(s, x(s))\, \frac{\partial u}{\partial x_i}(s, x(s))\Big]\, ds
\]
is a martingale, where $a = \sigma \sigma^*$, i.e.
\[
a_{i,j}(s, x) = \sum_k \sigma_{i,k}(s, x)\, \sigma_{j,k}(s, x)
\]
The process $x(t)$ is then clearly the diffusion process associated with
\[
L_s = \frac{1}{2}\sum_{i,j} a_{i,j}(s, x)\, \frac{\partial^2}{\partial x_i\, \partial x_j} + \sum_i b_i(s, x)\, \frac{\partial}{\partial x_i}
\]

3.2 Smooth dependence on parameters.

If $\sigma$ and $b$ depend smoothly on an additional parameter $\theta$, then we will show that the solution $x(t) = x(t, \theta)$ depends smoothly on the parameter. The idea is to start with the solution
\[
x(t, \theta) = x_0(\theta) + \int_0^t \sigma(s, x(s, \theta), \theta)\, d\beta(s) + \int_0^t b(s, x(s, \theta), \theta)\, ds
\]
Differentiating with respect to $\theta$, and denoting the derivative by $Y$, we get
\[
Y(t, \theta) = y_0(\theta) + \int_0^t \big[\sigma_x(s, x(s, \theta), \theta)\, Y(s, \theta) + \sigma_\theta(s, x(s, \theta), \theta)\big]\, d\beta(s) + \int_0^t \big[b_x(s, x(s, \theta), \theta)\, Y(s, \theta) + b_\theta(s, x(s, \theta), \theta)\big]\, ds
\]
We look at $(x, Y)$ as an enlarged system satisfying
\[
dx = \sigma\, d\beta + b\, dt\,; \qquad dY = [\sigma_x Y + \sigma_\theta]\, d\beta + [b_x Y + b_\theta]\, dt
\]
The solution $Y$ can easily be shown to be the derivative $D_\theta\, x(t, \theta)$. This procedure can be repeated for higher derivatives. We therefore arrive at the following theorem.

Theorem 3.2. Let $\sigma, b$ depend on $\theta$ in such a way that all derivatives of $\sigma$ and $b$ with respect to $\theta$ of order at most $k$ exist and are uniformly bounded. Then the random variables $x(t, \theta)$ have derivatives with respect to $\theta$ up to order $k$, and we can get moment estimates of all orders for these derivatives.
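The enlarged system $(x, Y)$ is straightforward to exercise numerically. In this illustrative sketch (the geometric Brownian motion coefficients are example choices, not from the notes), the parameter is the drift coefficient $\theta$ in $dx = \theta x\, dt + \sigma x\, d\beta$, so $b_\theta = x$, $\sigma_\theta = 0$, and $dY = \sigma Y\, d\beta + (\theta Y + x)\, dt$. Running the pair with Euler–Maruyama and comparing $Y(T)$ against a finite difference in $\theta$ along the same Brownian path:

```python
import numpy as np

rng = np.random.default_rng(2)
theta, sig = 0.1, 0.4           # illustrative GBM coefficients
T, n = 1.0, 1000
dt = T / n
dbeta = rng.normal(0.0, np.sqrt(dt), n)

x0, eps = 1.0, 1e-6
x, x_pert, Y = x0, x0, 0.0      # Y(0) = 0 since x0 does not depend on theta
for db in dbeta:
    # enlarged system: dx = sig*x dbeta + theta*x dt,
    #                  dY = sig*Y dbeta + (theta*Y + x) dt
    Y = Y + sig * Y * db + (theta * Y + x) * dt
    x_pert = x_pert + sig * x_pert * db + (theta + eps) * x_pert * dt
    x = x + sig * x * db + theta * x * dt

fd = (x_pert - x) / eps         # finite-difference derivative, same noise path
```

Because both processes are driven by the same increments, `fd` agrees with the variational solution $Y(T)$ to high accuracy, as Theorem 3.2 predicts.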

Proof. We can go through the iteration scheme in such a manner that the approximations $Y_n$ are the derivatives of $x_n$. Therefore the limit $Y$ has to be the derivative of $x$. The moment estimates, on the other hand, depend only on general results, stated in Lemma 3.3 below, concerning solutions of stochastic differential equations. Successive differentiation produces for the highest derivative $Z = Z_k$ of order $k$ a linear equation of the form
\[
dZ(t) = \sigma_k(t, \omega)\, Z(t)\, d\beta(t) + b_k(t, \omega)\, Z(t)\, dt + c_k(t, \omega)\, d\beta(t) + e_k(t, \omega)\, dt \tag{3.6}
\]
where $\sigma_k$ and $b_k$ involve derivatives of $\sigma(t, x)$ and $b(t, x)$ with respect to $x$ and are uniformly bounded progressively measurable functions. $c_k, e_k$, on the other hand, are polynomials in lower order derivatives with progressively measurable bounded coefficients. One gets estimates on the moments of $Z = Z_k$ by induction on $k$.

Lemma 3.3. Let $\sigma_k, b_k$ be uniformly bounded,
\[
\sup_{\omega,\ 0 \le t \le T} \big[\|\sigma_k(t, \omega)\| + \|b_k(t, \omega)\|\big] \le C(T) < \infty
\]
and let $c_k, e_k$ satisfy moment estimates of the form
\[
\sup_{0 \le t \le T} E\big[\|c_k(t, \omega)\|^r + \|e_k(t, \omega)\|^r\big] \le C_r(T) < \infty
\]
Then the solution $Z(t)$ of (3.6) has finite moments of all orders and
\[
\sup_{0 \le s \le T} E\big[\|Z(s)\|^r\big] \le C_r(T) < \infty
\]

Proof. We consider $u_r(x) = (1 + \|x\|^2)^{\frac{r}{2}}$ and use Itô's formula. We apply Hölder's inequality to separate the terms containing both $Z$ and $c_k$ or $e_k$. This yields
\[
E\big[u_r(Z(t))\big] \le E\big[u_r(Z(0))\big] + A_r \int_0^t E\big[u_r(Z(s))\big]\, ds + \int_0^t B_r(s)\, ds
\]
which is sufficient to provide the estimates. Again Doob's inequality makes it possible to take the supremum inside the expectation without any trouble.

Corollary 3.4. If $x(t)$ is viewed as a function of the starting point $x$, then one can view $x$ as the parameter and conclude that if the coefficients have bounded derivatives of all orders, then the solution $x(t)$ is almost surely an infinitely differentiable function of its starting point.

Remark 3.1.
Since smoothness is a local property, if $\sigma$ and $b$ have at most linear growth, the solution exists for all time without explosion, and then one can modify the coefficients outside a bounded domain without changing much. This implies that without uniform bounds on the derivatives, the solutions $x(t)$ will still depend smoothly on the initial point, but the derivatives may not have moment estimates.

Remark 3.2. This means one can view the solution $u(t, x)$ of the equation
\[
du(t, x) = \sigma(u(t, x))\, d\beta + b(u(t, x))\, dt\,; \qquad u(0, x) = x
\]
as a random flow $u(t): \mathbb{R}^d \to \mathbb{R}^d$. The flow, as we saw, is almost surely smooth.

3.3 Itô and Stratonovich Integrals.

In the definition of the stochastic integral $\eta(t) = \int_0^t f(s)\, dx(s)$ we approximated it by sums of the form
\[
\sum_j f(t_{j-1})\,[x(t_j) - x(t_{j-1})]
\]
always sticking the increments in the future. This allowed the integrand to be more or less arbitrary, so long as it was measurable with respect to the past. This meshed well with the theory of martingales and made estimation easier. Another alternative, symmetric with respect to past and future, is to use the approximation
\[
\sum_j \frac{f(t_{j-1}) + f(t_j)}{2}\,[x(t_j) - x(t_{j-1})]
\]
It is not clear when this limit exists. When it exists it is called the Stratonovich integral and is denoted by $\int f(s) \circ dx(s)$. If $f(s) = f(s, x(s))$, then the difference between the two integrals can be explicitly calculated:
\[
\int_0^t f(s, x(s)) \circ dx(s) = \int_0^t f(s, x(s))\, dx(s) + \frac{1}{2}\int_0^t a(s)\, ds
\]
where
\[
\int_0^t a(s)\, ds = \lim \sum_j [f(t_j, x(t_j)) - f(t_{j-1}, x(t_{j-1}))]\,[x(t_j) - x(t_{j-1})]
\]
If $x(t)$ is just Brownian motion in $\mathbb{R}^d$, then $a(s) = (\nabla f)(s, x(s))$. More generally, if
\[
\lim \sum_j [x_i(t_j) - x_i(t_{j-1})]\,[x_k(t_j) - x_k(t_{j-1})] = \int_0^t a_{i,k}(s)\, ds
\]
then
\[
a(s) = \sum_{i,k} f_{i,k}(s, x(s))\, a_{i,k}(s) = \mathrm{Tr}\big[(Df)(s, x(s))\, a(s)\big]
\]
where $f_{i,k} = \partial f_i / \partial x_k$. Solutions of the Stratonovich equation
\[
dx(t) = \sigma(t, x(t)) \circ d\beta(t) + b(t, x(t))\, dt
\]
can be recast as solutions of the Itô equation
\[
dx(t) = \sigma(t, x(t))\, d\beta(t) + \tilde b(t, x(t))\, dt
\]
with $b$ and $\tilde b$ related by
\[
\tilde b_i(t, x) = b_i(t, x) + \frac{1}{2}\sum_{j,k} \sigma_{j,k}(t, x)\, \frac{\partial \sigma_{i,k}}{\partial x_j}(t, x)
\]
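The two Riemann-sum approximations can be compared directly on the classic example $f(s) = \beta(s)$, $x = \beta$ (an illustrative sketch). The left-point sums converge to the Itô integral $\int_0^T \beta\, d\beta = (\beta(T)^2 - T)/2$, while the symmetric sums give the Stratonovich value $\beta(T)^2/2$; their difference is $\frac{1}{2}\sum_j (\Delta\beta_j)^2 \to T/2$, matching the correction term with $a \equiv 1$:

```python
import numpy as np

rng = np.random.default_rng(3)
T, n = 1.0, 100_000
dt = T / n
db = rng.normal(0.0, np.sqrt(dt), n)
beta = np.concatenate([[0.0], np.cumsum(db)])      # Brownian path on the grid

ito = np.sum(beta[:-1] * db)                       # left-point (Ito) sums
strat = np.sum(0.5 * (beta[:-1] + beta[1:]) * db)  # symmetric (Stratonovich) sums

ito_exact = 0.5 * (beta[-1] ** 2 - T)
strat_exact = 0.5 * beta[-1] ** 2
```

Note that the symmetric sums for this integrand telescope exactly to $\beta(T)^2/2$, while the left-point sums differ from their limit by $(T - \sum_j (\Delta\beta_j)^2)/2$, which vanishes as the mesh shrinks.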

To see the relevance of this, one can try to solve
\[
dx(t) = \sigma(t, x(t)) \circ d\beta(t) + b(t, x(t))\, dt
\]
by approximating $\beta(t)$ by a piecewise linear approximation $\beta^{(n)}(t)$ with derivative $f^{(n)}(t)$. Then we have just ODEs
\[
\frac{dx^{(n)}(t)}{dt} = \sigma(t, x^{(n)}(t))\, f^{(n)}(t) + b(t, x^{(n)}(t))
\]
where $f^{(n)}(\cdot)$ is piecewise constant. An elementary calculation shows that over an interval of constancy $[t, t+h]$,
\[
x_i^{(n)}(t+h) = x_i^{(n)}(t) + \langle \sigma_i(t, x^{(n)}(t)), Z_h \rangle + b_i(t, x^{(n)}(t))\, h + \frac{1}{2}\langle Z_h,\, c_i(t, x^{(n)}(t))\, Z_h \rangle + o(|Z_h|^2)
\]
where $c_i$ is the matrix with entries
\[
(c_i)_{k,l}(t, x) = \sum_j \sigma_{j,k}(t, x)\, \frac{\partial \sigma_{i,l}}{\partial x_j}(t, x)
\]
and $Z_h$ is a Gaussian with mean $0$ and variance $hI$, while $\beta^{(n)}(t+h) = \beta^{(n)}(t) + Z_h$. It is not hard to see that the limit of $x^{(n)}(\cdot)$ exists and the limit solves
\[
dx(t) = \sigma(t, x(t))\, d\beta(t) + b(t, x(t))\, dt + \frac{1}{2}\, c(t, x(t))\, dt
\]
or
\[
dx(t) = \sigma(t, x(t)) \circ d\beta(t) + b(t, x(t))\, dt
\]
It is convenient to consider a vector field
\[
X = \sum_i \sigma_i(x)\, \frac{\partial}{\partial x_i}
\]
and its square
\[
X^2 = \sum_{i,j} \sigma_i(x)\, \sigma_j(x)\, \frac{\partial^2}{\partial x_i\, \partial x_j} + \sum_j c_j(x)\, \frac{\partial}{\partial x_j}
\]
where
\[
c_j(x) = \sum_i \sigma_i(x)\, \frac{\partial \sigma_j(x)}{\partial x_i}
\]
Then the solution of
\[
dx(t) = \sigma(t, x(t)) \circ d\beta(t) + b(t, x(t))\, dt = \sum_i X_i(t, x(t)) \circ d\beta_i(t) + Y(t, x(t))\, dt
\]
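The piecewise-linear limit can be checked numerically on the Stratonovich equation $dx = \sigma x \circ d\beta$, whose exact solution is $x(0)\, e^{\sigma \beta(t)}$ since ordinary calculus applies to Stratonovich equations. This is an illustrative sketch (coefficients chosen for the example): on each interval of constancy the ODE $x' = \sigma x f^{(n)}(t)$ is integrated with small deterministic sub-steps.

```python
import numpy as np

rng = np.random.default_rng(4)
sigma_c = 1.0
T, n, m = 1.0, 400, 200        # n coarse intervals, m ODE sub-steps per interval
h = T / n
db = rng.normal(0.0, np.sqrt(h), n)   # increments Z_h of the approximated path

x = 1.0
for inc in db:
    rate = sigma_c * inc / h          # f^(n)(t) = Z_h / h, constant on the interval
    for _ in range(m):                # deterministic Euler for x' = rate * x
        x += (h / m) * rate * x

# Wong-Zakai limit: the Stratonovich solution x(0) * exp(sigma * beta(T)),
# equivalently the Ito equation dx = (1/2) sigma^2 x dt + sigma x dbeta
exact = np.exp(sigma_c * db.sum())
```

The ODE approximation lands on the Stratonovich solution, not the Itô one; the missing $\frac{1}{2}\sigma^2 x\, dt$ is exactly the correction term $\frac{1}{2} c\, dt$ derived above.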

is a diffusion with generator
\[
L_t = \frac{1}{2}\sum_i X_i(t)^2 + Y(t)
\]
When we change variables, the vector fields change like ordinary first order calculus, so that
\[
\hat L_t = \frac{1}{2}\sum_i \hat X_i(t)^2 + \hat Y(t)
\]
and the Stratonovich solution of
\[
dx(t) = \sigma(t, x(t)) \circ d\beta(t) + b(t, x(t))\, dt
\]
transforms like
\[
dF(x(t)) = DF \circ dx(t) = (DF)(x(t))\,\big[\sigma(t, x(t)) \circ d\beta(t) + b(t, x(t))\, dt\big]
\]
The Itô corrections are made up by the difference between the two integrals.

Remark 3.3. Following up on Remark 3.1, for each $t > 0$ the solution actually maps $\mathbb{R}^d \to \mathbb{R}^d$ as a diffeomorphism. To see this it is best to view it through the Stratonovich equations. Take $t = 1$. If the forward flow is through the vector fields $X_i(t), Y(t)$, the reverse flow is through $-X_i(1-t), -Y(1-t)$, and the reversed noise is $\hat\beta(t) = \beta(1) - \beta(1-t)$. One can see by the piecewise linear approximations that these are actually inverses of each other.

Remark 3.4. One can perhaps consider more general solutions to the stochastic differential equation. On some $(\Omega, \mathcal{F}_t, P)$, one can try to define $x(t)$ and $\beta(t)$, both progressively measurable, that satisfy the relation
\[
x(t) = x(0) + \int_0^t \sigma(s, x(s))\, d\beta(s) + \int_0^t b(s, x(s))\, ds
\]
Here $x(t, \omega)$ may not be measurable with respect to $\mathcal{G}_t$, the sub-$\sigma$-field generated by $\beta(\cdot)$. It is not hard to show, assuming Lipschitz conditions on $\sigma$ and $b$, that the Picard iteration scheme produces a solution of the above equation which is a progressively measurable function of $x(0)$ and the Brownian increments, and any other solution is equal to it.