Stochastic Processes III/ Wahrscheinlichkeitstheorie IV. Lecture Notes

BMS Advanced Course: Stochastic Processes III / Wahrscheinlichkeitstheorie IV. Michael Scheutzow. Lecture Notes, Technische Universität Berlin, Wintersemester 2018/19. Preliminary version of November 28th, 2018.

Contents

1 Girsanov's Theorem
  1.1 Preparation for Girsanov's Theorem
  1.2 Novikov's Condition
  1.3 Application: Brownian Motion with Drift
2 Local Time
3 Weak Solutions and Martingale Problems
  3.1 Weak solutions of stochastic differential equations
  3.2 Weak solutions via Girsanov's theorem
  3.3 Martingale Problems
  3.4 Martingale Problems: Existence of solutions
Bibliography

Chapter 1: Girsanov's Theorem

1.1 Preparation for Girsanov's Theorem

Before stating the important Girsanov theorem we recall Lévy's characterization of Brownian motion and prove the part of the statement which we skipped in WT3. We start with a lemma (see Lemma 6.2.13 in [KS91]). Here, $i$ denotes the imaginary unit $\sqrt{-1}$.

Lemma 1.1. Let $X, Y$ be $\mathbb{R}^d$-valued random variables on $(\Omega,\mathcal{F},\mathbb{P})$ and let $\mathcal{G}$ be a sub-$\sigma$-algebra of $\mathcal{F}$. Assume that for each $u\in\mathbb{R}^d$ we have
$$\mathbb{E}\big[\exp\{i\langle u,Y\rangle\}\mid\mathcal{G}\big]=\mathbb{E}\big[\exp\{i\langle u,X\rangle\}\big]\quad\text{a.s.}$$
Then the laws of $X$ and $Y$ coincide, and $Y$ and $\mathcal{G}$ are independent.

Proof. The first statement is clear, since the characteristic function determines a probability measure uniquely. To see the second statement, we represent the conditional expectation via a regular conditional probability (see the chapter "Bedingte Erwartungen und Wahrscheinlichkeiten" in WT1):
$$\mathbb{E}\big[\exp\{i\langle u,Y\rangle\}\mid\mathcal{G}\big](\omega)=\int_{\mathbb{R}^d}\exp\{i\langle u,y\rangle\}\,Q(\omega,\mathrm{d}y)\quad\text{a.s.}$$
Since a characteristic function is continuous, we can assume that the exceptional set does not depend on $u\in\mathbb{R}^d$. The assumption of the lemma now shows that for almost all $\omega\in\Omega$ the characteristic functions of $Q(\omega,\cdot)$ and of the law of $X$ coincide; therefore the conditional law of $Y$ given $\mathcal{G}$ and the law of $X$ coincide for almost all $\omega\in\Omega$. Hence, for each Borel set $B\subseteq\mathbb{R}^d$, we have $\mathbb{P}(Y\in B\mid\mathcal{G})=\mathbb{P}(X\in B)$, which immediately implies that $Y$ and $\mathcal{G}$ are independent. □

Now we are ready to state Lévy's characterization of Brownian motion.

Theorem 1.2. Let $M_t=(M^1_t,\dots,M^d_t)$ be a continuous $\mathbb{R}^d$-valued local martingale starting at $0$ on the filtered probability space $(\Omega,\mathcal{F},\mathbb{F},\mathbb{P})$. If $\langle M^j,M^k\rangle_t=\delta_{jk}\,t$ for all $j,k\in\{1,\dots,d\}$ and $t\ge 0$, then $M$ is a $d$-dimensional $\mathbb{F}$-Brownian motion.

Proof. We basically follow [KS91], Theorem 3.3.16. We need to show that for $0\le s<t$, the random vector $M_t-M_s$ is independent of $\mathcal{F}_s$ and has distribution $N(0,(t-s)I)$. Thanks to the previous lemma it is enough to prove that for any $u=(u_1,\dots,u_d)\in\mathbb{R}^d$ and $0\le s\le t$ we have
$$\mathbb{E}\big[\exp\{i\langle u,M_t-M_s\rangle\}\mid\mathcal{F}_s\big]=\exp\Big\{-\frac12|u|^2(t-s)\Big\}.$$
Fix $u\in\mathbb{R}^d$ and $s\ge 0$ and let $f(x):=e^{i\langle u,x\rangle}$, $x\in\mathbb{R}^d$. Applying Itô's formula to the real and imaginary parts of $f$, we get
$$e^{i\langle u,M_t\rangle}=e^{i\langle u,M_s\rangle}+i\sum_{j=1}^d u_j\int_s^t e^{i\langle u,M_v\rangle}\,\mathrm{d}M^j_v-\frac12\sum_{j=1}^d u_j^2\int_s^t e^{i\langle u,M_v\rangle}\,\mathrm{d}v. \quad (1.1.1)$$
By Burkholder's inequality (Theorem 2.19 in WT3) with $p=2$ we have, using $\langle M^j\rangle_t=t$,
$$\mathbb{E}\sup_{0\le s\le t}|M^j_s|\le\mathbb{E}\sup_{0\le s\le t}|M^j_s|^2+1\le C_2\,t+1<\infty,$$
showing that $M^j$ is a martingale (Chapter 1 in WT3), which is even square integrable. Further, the real and imaginary parts of $\int_0^t e^{i\langle u,M_v\rangle}\,\mathrm{d}M^j_v$ are square-integrable martingales, since the integrand is bounded, and therefore
$$\mathbb{E}\Big[\int_s^t e^{i\langle u,M_v\rangle}\,\mathrm{d}M^j_v\,\Big|\,\mathcal{F}_s\Big]=0\quad\text{a.s.}$$
For any $A\in\mathcal{F}_s$ we get from (1.1.1)
$$\mathbb{E}\big[e^{i\langle u,M_t-M_s\rangle}\mathbf{1}_A\big]=\mathbb{P}(A)-\frac12|u|^2\int_s^t\mathbb{E}\big[e^{i\langle u,M_v-M_s\rangle}\mathbf{1}_A\big]\,\mathrm{d}v.$$
This integral equation for the function $t\mapsto\mathbb{E}\big[e^{i\langle u,M_t-M_s\rangle}\mathbf{1}_A\big]$ (with $s$ fixed and $t\ge s$) has a unique solution, namely
$$\mathbb{E}\big[e^{i\langle u,M_t-M_s\rangle}\mathbf{1}_A\big]=\mathbb{P}(A)\exp\Big\{-\frac12|u|^2(t-s)\Big\},$$
so the claim follows. □

Remark 1.3. In WT3 we defined the stochastic integral with respect to a continuous local martingale $M$ for integrands $f$ for which there exists a progressive process $g$, square integrable with respect to the Doléans measure $\mu_M$, such that $\|f-g\|_{L^2(\mu_M)}=0$, i.e. $f$ and $g$ are in the same equivalence class in that space. We mention without proof (see e.g. [KS91]) the fact that if the process $t\mapsto\langle M\rangle_t$ is almost surely absolutely continuous, then this holds for every adapted process $f$ in $L^2(\mu_M)$. Note that absolute continuity of $t\mapsto\langle M\rangle_t$ holds in particular if $M$ is a Brownian motion.

Let $(\Omega,\mathcal{F},\mathbb{F},\mathbb{P})$ be a filtered probability space satisfying the usual conditions and let $W=\{W_t=(W^1_t,\dots,W^d_t),\,t\ge 0\}$ be a $d$-dimensional $\mathbb{F}$-Brownian motion on that space. Let $X=\{X_t=(X^1_t,\dots,X^d_t),\,t\ge 0\}$ be progressive (or just jointly measurable and adapted) and satisfy
$$\mathbb{P}\Big(\int_0^T (X^i_t)^2\,\mathrm{d}t<\infty\Big)=1,\quad 1\le i\le d,\ T<\infty.$$

Then the process
$$Z_t:=\exp\Big\{\sum_{i=1}^d\int_0^t X^i_s\,\mathrm{d}W^i_s-\frac12\int_0^t|X_s|^2\,\mathrm{d}s\Big\} \quad (1.1.2)$$
is well defined (here $|x|$ denotes the Euclidean norm of $x\in\mathbb{R}^d$). Writing $N_t:=\sum_{i=1}^d\int_0^t X^i_s\,\mathrm{d}W^i_s$, the formula becomes
$$Z_t=\exp\Big\{N_t-\frac12\langle N\rangle_t\Big\}, \quad (1.1.3)$$
and $Z$ is called the stochastic exponential of $N$ (even in cases where $N$ is an arbitrary continuous local martingale starting at $0$), in short: $Z=\mathcal{E}(N)$. Further,
$$Z_t=1+\sum_{i=1}^d\int_0^t Z_s X^i_s\,\mathrm{d}W^i_s=1+\int_0^t Z_s\,\mathrm{d}N_s. \quad (1.1.4)$$
This follows from Itô's formula applied to (1.1.3). On the other hand, there is only one solution $Z$ of equation (1.1.4) (we will show this in class; hint: let $Z$ and $\bar Z$ be two solutions, define $Y_t:=Z_{t\wedge\tau}-\bar Z_{t\wedge\tau}$ for a suitable stopping time $\tau$ and apply the Burkholder-Davis-Gundy inequality with $p=2$). Note that (1.1.4) shows that the process $Z$ is a continuous local martingale with $Z_0=1$.

We will see later that it is of great interest to find conditions under which $Z$ is even a martingale, in which case $\mathbb{E}Z_t=1$ for all $t\ge 0$. It is not hard to see that a rather strong sufficient condition for $Z$ to be a martingale is that there exists a deterministic function $L(t)$, $t\ge 0$, such that $\mathbb{P}\big(\sup_{s\le t}|X_s|\le L(t)\big)=1$ for every $t\ge 0$. A weaker condition is Novikov's condition, which we will provide in Theorem 1.7. We will see in class that for $Z$ defined as in (1.1.3), the condition $\mathbb{E}Z_t=1$ for all $t\ge 0$ is not only necessary but also sufficient for $Z$ to be a martingale (hint: a nonnegative local martingale is a supermartingale [use Fatou's lemma for conditional expectations], and a supermartingale is a martingale iff its expected value is constant).

If $Z$ is a martingale, then we define, for each $T\in[0,\infty)$, a probability measure $\tilde{\mathbb{P}}_T$ on $(\Omega,\mathcal{F}_T)$ by
$$\tilde{\mathbb{P}}_T(A):=\mathbb{E}\big[\mathbf{1}_A Z_T\big],\quad A\in\mathcal{F}_T.$$
Note that the probability measures $\tilde{\mathbb{P}}_T$ and $\mathbb{P}|_{\mathcal{F}_T}$ are mutually absolutely continuous with density $\mathrm{d}\tilde{\mathbb{P}}_T/\mathrm{d}\mathbb{P}|_{\mathcal{F}_T}=Z_T$. In the following Girsanov theorem, we use the symbol $\mathbb{F}^T$ to denote the restricted filtration $(\mathcal{F}_t,\,t\le T)$.
It should be clear what we mean by an $\mathbb{F}^T$-Brownian motion on $(\Omega,\mathcal{F}_T,\mathbb{F}^T,\tilde{\mathbb{P}}_T)$, so we refrain from providing a precise definition.

Theorem 1.4 (Girsanov 1960). Assume that $Z$ defined by (1.1.2) is a martingale. Define
$$\tilde W^i_t:=W^i_t-\int_0^t X^i_s\,\mathrm{d}s,\quad i\in\{1,\dots,d\},\ t\ge 0.$$
Then, for each fixed $T\ge 0$, the process $\{\tilde W_t,\ t\in[0,T]\}$ is an $\mathbb{F}^T$-Brownian motion on $(\Omega,\mathcal{F}_T,\mathbb{F}^T,\tilde{\mathbb{P}}_T)$.

For the proof we require two lemmas. We denote by $\tilde{\mathbb{E}}_T$ the expectation with respect to $\tilde{\mathbb{P}}_T$.

Lemma 1.5. Let $T>0$ and assume that $Z$ is a martingale. If $0\le s\le t\le T$ and $Y$ is real-valued, $\mathcal{F}_t$-measurable and satisfies $\tilde{\mathbb{E}}_T|Y|<\infty$, then
$$\tilde{\mathbb{E}}_T\big[Y\mid\mathcal{F}_s\big]=\frac{1}{Z_s}\,\mathbb{E}\big[YZ_t\mid\mathcal{F}_s\big],\quad\text{a.s. w.r.t. }\mathbb{P}\text{ and }\tilde{\mathbb{P}}_T.$$

Proof. We have, for $A\in\mathcal{F}_s$,
$$\tilde{\mathbb{E}}_T\big[\mathbf{1}_A Y\big]=\mathbb{E}\big[\mathbf{1}_A YZ_T\big]=\mathbb{E}\big[\mathbf{1}_A Y\,\mathbb{E}[Z_T\mid\mathcal{F}_t]\big]=\mathbb{E}\big[\mathbf{1}_A YZ_t\big]=\mathbb{E}\big[\mathbf{1}_A\,\mathbb{E}[YZ_t\mid\mathcal{F}_s]\big]$$
and
$$\tilde{\mathbb{E}}_T\Big[\mathbf{1}_A\frac{1}{Z_s}\mathbb{E}[YZ_t\mid\mathcal{F}_s]\Big]=\mathbb{E}\Big[\mathbf{1}_A\frac{1}{Z_s}\mathbb{E}[YZ_t\mid\mathcal{F}_s]\,Z_T\Big]=\mathbb{E}\Big[\mathbf{1}_A\frac{1}{Z_s}\mathbb{E}[YZ_t\mid\mathcal{F}_s]\,Z_s\Big]=\mathbb{E}\big[\mathbf{1}_A\,\mathbb{E}[YZ_t\mid\mathcal{F}_s]\big].$$
Comparing the two identities proves the claim. □

In the following proposition, we denote by $\mathcal{M}_{\mathrm{loc},T}$ the set of continuous local $\mathbb{F}$-martingales on $[0,T]$ with initial condition $M_0=0$ with respect to the original measure $\mathbb{P}$, and by $\tilde{\mathcal{M}}_{\mathrm{loc},T}$ the corresponding set with $\mathbb{P}$ replaced by $\tilde{\mathbb{P}}_T$.

Proposition 1.6. Fix $T>0$ and assume that $Z$ defined as in (1.1.2) is a martingale and $M\in\mathcal{M}_{\mathrm{loc},T}$. Then
$$\tilde M_t:=M_t-\sum_{i=1}^d\int_0^t X^i_s\,\mathrm{d}\langle M,W^i\rangle_s,\quad 0\le t\le T,$$
is in $\tilde{\mathcal{M}}_{\mathrm{loc},T}$. If $N\in\mathcal{M}_{\mathrm{loc},T}$ and
$$\tilde N_t:=N_t-\sum_{i=1}^d\int_0^t X^i_s\,\mathrm{d}\langle N,W^i\rangle_s,\quad 0\le t\le T,$$
then
$$\langle\tilde M,\tilde N\rangle_t=\langle M,N\rangle_t,\quad 0\le t\le T,\ \text{a.s.}$$

Proof. We first assume that $M$ and $N$ are bounded martingales with bounded quadratic variation and that $Z$ and $\int_0^t|X_s|^2\,\mathrm{d}s$ are bounded in $t$ and $\omega$ (the general case can then be shown by stopping). The Kunita-Watanabe inequality (WT3, Proposition 2.8 a)) shows that
$$\Big(\int_0^t X^i_s\,\mathrm{d}\langle M,W^i\rangle_s\Big)^2\le\langle M\rangle_t\int_0^t (X^i_s)^2\,\mathrm{d}s,$$
so $\tilde M$ is also bounded. The integration-by-parts formula (WT3, Proposition 2.12) implies
$$Z_t\tilde M_t=\int_0^t Z_s\,\mathrm{d}M_s+\sum_{i=1}^d\int_0^t\tilde M_s X^i_s Z_s\,\mathrm{d}W^i_s,$$
which is a martingale under $\mathbb{P}$. Now Lemma 1.5 immediately implies that $\tilde M\in\tilde{\mathcal{M}}_{\mathrm{loc},T}$. The last claim in the statement of the proposition follows since $M$ and $\tilde M$ (resp. $N$ and $\tilde N$) differ only by a process of locally finite variation. □

Proof of Theorem 1.4. We show that the process $\tilde W$ in the statement of the theorem satisfies the assumptions of Theorem 1.2. Setting $M=W^j$ in the previous proposition, we see that $\tilde M=\tilde W^j$ is in $\tilde{\mathcal{M}}_{\mathrm{loc},T}$. Setting $N=W^k$, we see that
$$\langle\tilde W^j,\tilde W^k\rangle_t=\langle W^j,W^k\rangle_t=\delta_{jk}\,t,\quad 0\le t\le T,\ \text{a.s.},$$
so the claim follows. □
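As a numerical illustration of the martingale property $\mathbb{E}Z_t=1$, the following sketch approximates the stochastic exponential (1.1.2) by an Euler discretization; the bounded deterministic integrand $X_s=\sin(s)$ is an arbitrary illustrative choice (so Novikov's condition holds trivially), and the Monte Carlo mean of $Z_T$ should be close to $1$:

```python
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_steps, T = 20000, 200, 1.0
dt = T / n_steps

# Brownian increments on a grid of [0, T]
dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))

# bounded deterministic integrand X_s = sin(s) (illustrative assumption)
t = np.arange(n_steps) * dt
X = np.sin(t)

N = np.cumsum(X * dW, axis=1)   # N_t = int_0^t X_s dW_s (left-point sums)
qv = np.cumsum(X ** 2) * dt     # <N>_t = int_0^t X_s^2 ds
Z = np.exp(N - 0.5 * qv)        # Z = exp(N - <N>/2), the stochastic exponential

print(abs(Z[:, -1].mean() - 1.0) < 0.02)  # True: E[Z_T] = 1
```

In fact, for a deterministic integrand the discretized $Z$ has mean exactly $1$ for every step size, so the only deviation seen here is Monte Carlo noise.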

1.2 Novikov's Condition

Theorem 1.7. If $N$ is a continuous local martingale with $N_0=0$ such that
$$\mathbb{E}\exp\Big\{\frac12\langle N\rangle_T\Big\}<\infty,\quad T<\infty,$$
then the process $Z$ defined by (1.1.3) is a martingale. In particular, if $X$, $W$ and $Z$ are defined as in (1.1.2) and
$$\mathbb{E}\exp\Big\{\frac12\int_0^T|X_s|^2\,\mathrm{d}s\Big\}<\infty,\quad T<\infty,$$
then $Z$ is a martingale.

Proof. Following [Eb16] we prove the result only under the slightly stronger condition
$$\mathbb{E}\exp\Big\{\frac p2\langle N\rangle_T\Big\}<\infty,\quad T<\infty,$$
for some $p>1$. This simplifies the proof considerably; for the general case see [KS91] or [Bo18]. Since $Z$ is a continuous local martingale, there exists a localizing sequence of stopping times $T_n$, $n\in\mathbb{N}$, such that $T_n\to\infty$ almost surely and $t\mapsto Z_{t\wedge T_n}$ is a continuous martingale for each $n\in\mathbb{N}$. If, for each $t\ge 0$, the sequence $Z_{t\wedge T_n}$ is uniformly integrable, then by Satz 1.39 in WT2 we have $\lim_{n\to\infty}Z_{t\wedge T_n}=Z_t$ in $L^1$, and therefore the martingale property of $Z_{t\wedge T_n}$ is inherited by $Z$ by letting $n\to\infty$.

It remains to show that the sequence $Z_{t\wedge T_n}$ is uniformly integrable for each fixed $t>0$. Let $c>0$ and define $q>1$ such that $\frac1p+\frac1q=1$. Then, using Hölder's inequality, we get
$$\mathbb{E}\big[Z_{t\wedge T_n}\mathbf{1}_{\{Z_{t\wedge T_n}\ge c\}}\big]=\mathbb{E}\Big[\exp\Big\{N_{t\wedge T_n}-\frac p2\langle N\rangle_{t\wedge T_n}\Big\}\exp\Big\{\frac{p-1}2\langle N\rangle_{t\wedge T_n}\Big\}\mathbf{1}_{\{Z_{t\wedge T_n}\ge c\}}\Big]$$
$$\le\Big(\mathbb{E}\exp\Big\{pN_{t\wedge T_n}-\frac{p^2}2\langle N\rangle_{t\wedge T_n}\Big\}\Big)^{1/p}\Big(\mathbb{E}\exp\Big\{q\,\frac{p-1}2\langle N\rangle_{t\wedge T_n}\Big\}\mathbf{1}_{\{Z_{t\wedge T_n}\ge c\}}\Big)^{1/q}\le\Big(\mathbb{E}\exp\Big\{\frac p2\langle N\rangle_t\Big\}\mathbf{1}_{\{Z_{t\wedge T_n}\ge c\}}\Big)^{1/q},$$
where we used $q(p-1)=p$ and the fact that the first factor is at most $1$, since $\exp\{pN-\frac{p^2}2\langle N\rangle\}=\mathcal{E}(pN)$ is a nonnegative supermartingale starting at $1$. To show uniform integrability we have to show that if we first take the supremum over all $n\in\mathbb{N}$ and then the limit as $c\to\infty$, then this limit is $0$. Lemma 1.38 in WT2 shows that it is actually sufficient to take $\limsup_n$ instead of $\sup_{n\in\mathbb{N}}$, since $\mathbb{N}$ is countable. Observe that
$$\limsup_{n\to\infty}\mathbb{E}\big[Z_{t\wedge T_n}\mathbf{1}_{\{Z_{t\wedge T_n}\ge c\}}\big]\le\Big(\mathbb{E}\exp\Big\{\frac p2\langle N\rangle_t\Big\}\mathbf{1}_{\{Z_t\ge c\}}\Big)^{1/q}$$
by dominated convergence. Since $\exp\{\frac p2\langle N\rangle_t\}$ is integrable, the limit as $c\to\infty$ is $0$, so the proof is complete. □

Remark 1.8. The previous theorem is known to be wrong if $1/2$ is replaced by any number $p<1/2$ (see Remark 5.17 in [KS91]), even in the case $d=1$.

1.3 Application: Brownian Motion with Drift

We show how Girsanov's theorem can be employed to compute the density of the first passage times of one-dimensional Brownian motion with drift. For a one-dimensional Brownian motion $W$ and $b>0$ let $\tau_b:=\inf\{t\ge 0: W_t=b\}$ denote the first passage time to level $b$. Note that
$$\mathbb{P}(\tau_b\le t)=\mathbb{P}\Big(\sup_{0\le s\le t}W_s\ge b\Big).$$
We computed the latter quantity in WT2 by applying Donsker's invariance principle. It is therefore easy to show that $\tau_b$ has the density
$$f_b(t)=\frac{b}{\sqrt{2\pi t^3}}\exp\Big\{-\frac{b^2}{2t}\Big\},\quad t>0.$$
Can we also compute the density $f^\mu_b$ of $\tau_b$ for Brownian motion with drift $\mu$? Indeed we can, and Girsanov's theorem helps us do this. Let $\mu\in\mathbb{R}$, let $d=1$ and define $\tilde W_t:=W_t-t\mu$, $t\ge 0$. This corresponds to the choice $X_s\equiv\mu$ in Girsanov's theorem. Clearly, Novikov's condition is satisfied and hence
$$Z_t:=\exp\Big\{\mu W_t-\frac12\mu^2 t\Big\},\quad t\ge 0,$$
is a martingale. Fix $T>0$. By Girsanov's theorem, $\tilde W$ is a Brownian motion on $[0,T]$ under $\tilde{\mathbb{P}}_T$, and hence $W_t=\tilde W_t+\mu t$, $0\le t\le T$, is a Brownian motion with drift $\mu$ under $\tilde{\mathbb{P}}_T$. Denoting by $\tau_b$ the first passage time of $W$ to $b$, we get for $t\in[0,T]$
$$\tilde{\mathbb{P}}_T(\tau_b\le t)=\int\mathbf{1}_{\{\tau_b\le t\}}\,\mathrm{d}\tilde{\mathbb{P}}_T=\int\mathbf{1}_{\{\tau_b\le t\}}Z_T\,\mathrm{d}\mathbb{P}=\int\mathbf{1}_{\{\tau_b\le t\}}Z_{\tau_b}\,\mathrm{d}\mathbb{P}=\mathbb{E}\Big[\mathbf{1}_{\{\tau_b\le t\}}\exp\Big\{\mu b-\frac12\mu^2\tau_b\Big\}\Big]=\int_0^t\exp\Big\{\mu b-\frac12\mu^2 s\Big\}f_b(s)\,\mathrm{d}s,$$
where we applied the optional sampling theorem to the continuous martingale $Z$. It follows that the first passage time $\tau_b$ for Brownian motion with drift $\mu$ has a density given by
$$f^\mu_b(s)=\exp\Big\{\mu b-\frac12\mu^2 s\Big\}f_b(s)=\frac{b}{\sqrt{2\pi s^3}}\exp\Big\{-\frac{(b-\mu s)^2}{2s}\Big\},\quad s>0.$$
Note that $f^\mu_b$ is a true density (i.e. has integral $1$) iff $\mu\ge 0$. If $\mu<0$, then the integral is strictly smaller than $1$, which is due to the fact that Brownian motion with drift $\mu<0$ has a positive probability of never reaching level $b>0$.
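The formula for $f^\mu_b$ can be checked by simulation. The sketch below (parameters and grid sizes are arbitrary illustrative choices) compares a discretely monitored Monte Carlo estimate of $\mathbb{P}(\tau_b\le t)$ for drifted Brownian motion with the numerical integral of $f^\mu_b$ over $(0,t]$:

```python
import numpy as np

rng = np.random.default_rng(1)
b, mu, t = 1.0, 0.5, 1.0
n_paths, n_steps = 20000, 2000
dt = t / n_steps

# Monte Carlo: P(tau_b <= t) for W_t + mu*t, monitored on a fine grid
incr = rng.normal(mu * dt, np.sqrt(dt), size=(n_paths, n_steps))
p_mc = (np.cumsum(incr, axis=1).max(axis=1) >= b).mean()

# Riemann-sum integral of f_b^mu(s) = b / sqrt(2 pi s^3) * exp(-(b - mu s)^2 / (2 s))
s = np.linspace(1e-6, t, 200000)
f = b / np.sqrt(2 * np.pi * s ** 3) * np.exp(-((b - mu * s) ** 2) / (2 * s))
p_int = np.sum(f) * (s[1] - s[0])

print(p_mc, p_int)  # the two estimates agree to roughly 1e-2
```

The Monte Carlo estimate is slightly biased downwards because the path is only monitored on the grid and may cross $b$ between grid points.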

Chapter 2: Local Time

In this chapter we introduce the concept of local time of a real-valued continuous semimartingale $X$. Is $|X|$ a semimartingale? Itô's formula can certainly not be applied to $x\mapsto|x|$, but we will see that the answer is nevertheless yes. To show this we define $f(x)=|x|$, $x\in\mathbb{R}$, approximate $f$ by smoother functions $f_n$, apply Itô's formula to $f_n(X)$ and then take the limit $n\to\infty$. Specifically, we choose $f_n\in C^2(\mathbb{R},\mathbb{R})$ as follows:

(i) $f_n(x)=|x|$ for $|x|\ge 1/n$,
(ii) $f_n(x)=f_n(-x)$, $x\in\mathbb{R}$,
(iii) $f_n''(x)\ge 0$, $x\in\mathbb{R}$.

Hence $f_n$ converges to $f$ uniformly and $f_n'(x)$ converges to $\mathrm{sgn}(x)$ pointwise (we define $\mathrm{sgn}(x)=1$ if $x>0$, $\mathrm{sgn}(x)=-1$ if $x<0$ and $\mathrm{sgn}(0)=0$). In addition,
$$\int_{-1}^{1}f_n''(x)\,\mathrm{d}x=2\quad\text{for all }n.$$
Itô's formula applied to $f_n$ yields
$$f_n(X_t)-f_n(X_0)=\int_0^t f_n'(X_s)\,\mathrm{d}X_s+\frac12\int_0^t f_n''(X_s)\,\mathrm{d}\langle X\rangle_s. \quad (2.0.1)$$
In order to see that the stochastic integral converges as $n\to\infty$ we need the following lemma (note that Lemma 2.16 in WT3 about ucp-convergence does not apply here). We formulate the lemma in a way which meets our demands, but not more.

Lemma 2.1. Let $H^n$, $n\in\mathbb{N}$, be a sequence of uniformly bounded progressive processes, i.e. there exists a number $C\in\mathbb{R}$ such that $|H^n_t(\omega)|\le C$ for all $t\ge 0$, $n\in\mathbb{N}$ and $\omega\in\Omega$, and assume that $H_t(\omega):=\lim_{n\to\infty}H^n_t(\omega)$ exists for all $t\ge 0$ and $\omega\in\Omega$. Let $Y$ be a continuous semimartingale with $Y_0=0$. Then
$$\int_0^{\cdot}H^n_s\,\mathrm{d}Y_s\to\int_0^{\cdot}H_s\,\mathrm{d}Y_s\quad\text{ucp}.$$

Proof. We follow Section 8.3 of [WW90]. Let $Y=M+A$ be the unique decomposition of $Y$ such that $M\in\mathcal{M}_{\mathrm{loc}}$ and $A\in\mathcal{A}$. Clearly we have
$$\int_0^{\cdot}H^n_s\,\mathrm{d}A_s\to\int_0^{\cdot}H_s\,\mathrm{d}A_s\quad\text{ucp}$$
by dominated convergence,

so it remains to show the claim in the case $Y=M$. Let
$$T_k:=\inf\{t\ge 0: |M_t|+\langle M\rangle_t\ge k\}\wedge k,\quad k\in\mathbb{N}.$$
The BDG inequality implies
$$\mathbb{E}\sup_{t\le k}\Big|\int_0^t\mathbf{1}_{[0,T_k]}(s)\big(H^n_s-H_s\big)\,\mathrm{d}M_s\Big|^2\le C_2\,\mathbb{E}\int_0^k\mathbf{1}_{[0,T_k]}(s)\big|H^n_s-H_s\big|^2\,\mathrm{d}\langle M\rangle_s\to 0$$
as $n\to\infty$ by dominated convergence. In particular we have, for fixed $k\in\mathbb{N}$,
$$\int_0^{\cdot}\mathbf{1}_{[0,T_k]}H^n_s\,\mathrm{d}M_s\to\int_0^{\cdot}\mathbf{1}_{[0,T_k]}H_s\,\mathrm{d}M_s\quad\text{ucp}.$$
Letting $k\to\infty$ the claim follows, since $T_k\to\infty$ almost surely (check this!). □

Now we are able to pass to the limit in (2.0.1). The previous lemma (taking $C=1$) shows that
$$\int_0^{\cdot}f_n'(X_s)\,\mathrm{d}X_s\to\int_0^{\cdot}\mathrm{sgn}(X_s)\,\mathrm{d}X_s\quad\text{ucp}.$$
Therefore, the second integral in (2.0.1) converges ucp to a process
$$L_t:=\lim_{n\to\infty}\frac12\int_0^t f_n''(X_s)\,\mathrm{d}\langle X\rangle_s=|X_t|-|X_0|-\int_0^t\mathrm{sgn}(X_s)\,\mathrm{d}X_s. \quad (2.0.2)$$
The first equality shows that $L\in\mathcal{A}^+$. The second representation shows that the process $L$ does not depend on the particular choice of the $f_n$. $L$ is called the local time at $0$ of the semimartingale $X$ (the reason for this name will become clear in the next theorem). Note that we have shown the following formula for $f(x)=|x|$:
$$f(X_t)=f(X_0)+\int_0^t\mathrm{sgn}(X_s)\,\mathrm{d}X_s+L_t,$$
showing that $f(X_t)=|X_t|$ is indeed a continuous semimartingale.

Theorem 2.2. Let $X$ be a continuous semimartingale with local time $L$ given by (2.0.2). Then
$$L_t=\lim_{\varepsilon\downarrow 0}\frac{1}{2\varepsilon}\int_0^t\mathbf{1}_{\{|X_s|\le\varepsilon\}}\,\mathrm{d}\langle X\rangle_s\quad\text{in probability}.$$
In particular, for an $\mathbb{F}$-Brownian motion $X=W$,
$$L_t=\lim_{\varepsilon\downarrow 0}\frac{1}{2\varepsilon}\,\lambda\{s\le t: |W_s|\le\varepsilon\},$$
where $\lambda$ denotes Lebesgue measure on $[0,\infty)$.

Proof. The second claim clearly follows from the first, since $\langle W\rangle_s=s$. To show the first claim we choose the $f_n$ in a particular way, namely such that, in addition to the previous assumptions,
$$\frac n2\,\mathbf{1}_{[-\frac{1}{n+1},\frac{1}{n+1}]}\le\frac12 f_n''\le\frac{n+1}2\,\mathbf{1}_{[-\frac1n,\frac1n]}.$$
Then
$$\frac n2\int_0^t\mathbf{1}_{\{|X_s|\le\frac{1}{n+1}\}}\,\mathrm{d}\langle X\rangle_s\le\frac12\int_0^t f_n''(X_s)\,\mathrm{d}\langle X\rangle_s\le\frac{n+1}2\int_0^t\mathbf{1}_{\{|X_s|\le\frac1n\}}\,\mathrm{d}\langle X\rangle_s,$$
which implies the first claim (check this!). Note that $L_t=\lim_{n\to\infty}\dots$ implies $L_t=\lim_{\varepsilon\downarrow 0}\dots$ by monotonicity. □
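Theorem 2.2 suggests a direct numerical approximation of Brownian local time at $0$. The sketch below (step sizes and the band width $\varepsilon$ are arbitrary choices) estimates $\mathbb{E}L_1$ via the occupation-time formula and compares it with $\mathbb{E}L_1=\mathbb{E}|W_1|=\sqrt{2/\pi}\approx 0.798$, which follows from $L_t=|W_t|-\int_0^t\mathrm{sgn}(W_s)\,\mathrm{d}W_s$ and the fact that the stochastic integral has mean zero:

```python
import numpy as np

rng = np.random.default_rng(2)
n_paths, n_steps, t, eps = 5000, 2000, 1.0, 0.05
dt = t / n_steps

# Brownian paths on [0, 1]
W = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps)), axis=1)

# occupation-time approximation: L_1 ~ (1 / (2 eps)) * Leb{s <= 1 : |W_s| <= eps}
L = (np.abs(W) <= eps).sum(axis=1) * dt / (2 * eps)

print(L.mean())  # should be close to sqrt(2 / pi) ~ 0.798
```

For fixed $\varepsilon>0$ the estimate carries a bias of order $\varepsilon$, so shrinking the band (together with the time step) improves the approximation.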

Chapter 3: Weak Solutions and Martingale Problems

3.1 Weak solutions of stochastic differential equations

We will basically follow the approach presented in [KS91]. Consider the following stochastic differential equation
$$\mathrm{d}X_t=b(t,X_t)\,\mathrm{d}t+\sum_{k=1}^m\sigma_k(t,X_t)\,\mathrm{d}W^k_t, \quad (3.1.1)$$
where $W^1,\dots,W^m$ are independent Brownian motions and $b$ and $\sigma_1,\dots,\sigma_m$ are measurable functions mapping $[0,\infty)\times\mathbb{R}^d$ to $\mathbb{R}^d$. We start with the definition of a weak solution of (3.1.1).

Definition 3.1. A weak solution of (3.1.1) is a tuple $(X,W)$, $(\Omega,\mathcal{F},\mathbb{F},\mathbb{P})$, where

(i) $(\Omega,\mathcal{F},\mathbb{F},\mathbb{P})$ is a filtered probability space (FPS) which satisfies the usual conditions,
(ii) $X$ is a continuous, adapted $\mathbb{R}^d$-valued process and $W$ is an $m$-dimensional $\mathbb{F}$-Brownian motion,
(iii) $\int_0^t\big(|b(s,X_s)|+\sum_{k=1}^m|\sigma_k(s,X_s)|^2\big)\,\mathrm{d}s<\infty$ almost surely for every $t\ge 0$,
(iv) $X_t=X_0+\int_0^t b(s,X_s)\,\mathrm{d}s+\sum_{k=1}^m\int_0^t\sigma_k(s,X_s)\,\mathrm{d}W^k_s$, $0\le t<\infty$, holds almost surely.

Remark 3.2. Occasionally, we fix $T>0$ and consider weak solutions on the interval $[0,T]$ instead of $[0,\infty)$. It should be clear what we mean by this, so we do not provide a formal definition. Note that it might well happen that a weak solution $(X,W)$ exists but that $X$ is not adapted to the augmented filtration generated by $W$ (here augmented means the smallest filtration which contains the filtration generated by $W$ and which satisfies the usual conditions). We will see an example of this kind soon. Note that in WT3 we established a stronger form of solution to

(3.1.1): we showed in Theorem 2.27, under appropriate assumptions (called (H)), that the sde has a unique solution with initial condition $x\in\mathbb{R}^d$ on any FPS carrying an $\mathbb{F}$-Brownian motion $W$. In particular, we can choose the filtration $\mathbb{F}$ to be the augmented filtration generated by $W$. Such a solution is called a strong solution. We will often allow the initial condition $X_0=\xi$ to be random. In this case $\xi$ is $\mathcal{F}_0$-measurable and therefore independent of the filtration generated by $W$. Then a process $X$ on a given FPS $(\Omega,\mathcal{F},\mathbb{F},\mathbb{P})$ with a given $\mathbb{F}$-Brownian motion $W$ and a given $\mathcal{F}_0$-measurable $\xi$ is called a strong solution if $(X,W)$, $(\Omega,\mathcal{F},\mathbb{F},\mathbb{P})$ is a weak solution which is adapted to the augmented filtration generated by $W$ and $\xi$ and which satisfies the initial condition $X_0=\xi$ almost surely.

By definition, every strong solution is a weak solution. We will see that the converse is not true. One may ask which of the two concepts of a solution is the more natural one. There is no clear answer to this question; it depends on which kind of phenomenon one would like to model by an sde. If we think of $W$ as an input to a system and $X$ as the output, then one can argue that the output should be a function of the input, and in this case strong solutions seem more natural. On the other hand, one can take a pragmatic viewpoint by saying that one just wants to describe some random phenomenon via an sde and does not care whether the solution is a function of the input. In this case weak solutions are more appropriate.

After having defined the concept of a weak solution, we define what we mean by uniqueness of solutions.

Definition 3.3. We say that pathwise uniqueness holds for (3.1.1) if, whenever $(X,W)$, $(\Omega,\mathcal{F},\mathbb{F},\mathbb{P})$ and $(\bar X,W)$, $(\Omega,\mathcal{F},\bar{\mathbb{F}},\mathbb{P})$ are two weak solutions on the same probability space $(\Omega,\mathcal{F},\mathbb{P})$ (with possibly different filtrations) such that $\mathbb{P}(X_0=\bar X_0)=1$, then the processes $X$ and $\bar X$ are indistinguishable, i.e. $\mathbb{P}(X_t=\bar X_t,\ 0\le t<\infty)=1$.

Remark 3.4. Some authors define pathwise uniqueness slightly differently by assuming that $\mathbb{F}$ and $\bar{\mathbb{F}}$ coincide. Formally, our definition is stronger, i.e. more restrictive. In fact, the two definitions can be shown to be equivalent (see [IW89], Remark IV.1.3).

Remark 3.5. The proof of Theorem 2.27 in WT3 shows that pathwise uniqueness holds under the assumptions (H) stated there.

Definition 3.6. We say that uniqueness in law (or weak uniqueness) holds for (3.1.1) if, for any two weak solutions $(X,W)$, $(\Omega,\mathcal{F},\mathbb{F},\mathbb{P})$ and $(\bar X,\bar W)$, $(\bar\Omega,\bar{\mathcal{F}},\bar{\mathbb{F}},\bar{\mathbb{P}})$ with the same initial distribution, i.e. $\mathcal{L}(X_0)=\mathcal{L}(\bar X_0)$, the processes $X$ and $\bar X$ have the same law.

The following classical example due to H. Tanaka shows that a weak solution may not be a strong solution and that uniqueness in law does not imply pathwise uniqueness.

Example 3.7. Consider the one-dimensional sde
$$\mathrm{d}X_t=\mathrm{sign}(X_t)\,\mathrm{d}W_t,\quad 0\le t<\infty, \quad (3.1.2)$$
where
$$\mathrm{sign}(x)=\begin{cases}1; & x>0,\\ -1; & x\le 0.\end{cases}$$
If $(X,W)$, $(\Omega,\mathcal{F},\mathbb{F},\mathbb{P})$ is a weak solution, then
$$B_t:=\int_0^t\mathrm{sign}(X_s)\,\mathrm{d}W_s$$

is a continuous local $\mathbb{F}$-martingale starting at $0$ with quadratic variation
$$\langle B\rangle_t=\int_0^t\mathrm{sign}^2(X_s)\,\mathrm{d}s=t.$$
Therefore, $B$ is an $\mathbb{F}$-Brownian motion by Lévy's theorem. Note that $X_t=X_0+B_t$, $t\ge 0$, and that the process $B$ is independent of $\mathcal{F}_0$. Since $X_0$ is $\mathcal{F}_0$-measurable, $B$ and $X_0$ are independent, so the law of $X$ is uniquely determined by that of $X_0$, and weak uniqueness holds for (3.1.2).

Let us now show existence of a weak solution. Let $X$ be an $\mathbb{F}$-Brownian motion on some FPS $(\Omega,\mathcal{F},\mathbb{F},\mathbb{P})$ with $\mathcal{F}_0$-measurable initial condition $X_0$ and define
$$W_t:=\int_0^t\mathrm{sign}(X_s)\,\mathrm{d}X_s.$$
Then $W$ is an $\mathbb{F}$-Brownian motion by Lévy's theorem and
$$\int_0^t\mathrm{sign}(X_s)\,\mathrm{d}W_s=\int_0^t\mathrm{sign}^2(X_s)\,\mathrm{d}X_s=X_t-X_0,$$
so $(X,W)$ is a weak solution!

Next we show that pathwise uniqueness fails in case the initial condition is $X_0=0$. Let $(X,W)$, $(\Omega,\mathcal{F},\mathbb{F},\mathbb{P})$ be the weak solution which we just constructed with $X_0=0$. Then $(-X,W)$, $(\Omega,\mathcal{F},\mathbb{F},\mathbb{P})$ is also a weak solution (in spite of the slight asymmetry in the definition of the function sign)! Therefore, pathwise uniqueness does not hold for (3.1.2).

If $(X,W)$ is any weak solution of (3.1.2) with initial condition $0$, then
$$\int_0^t\mathrm{sign}(X_s)\,\mathrm{d}X_s=\int_0^t\mathrm{sign}^2(X_s)\,\mathrm{d}W_s=W_t,$$
so $W$ is measurable with respect to the filtration generated by $X$. We will see in a few seconds that $X$ is not measurable with respect to the filtration generated by $W$, so $X$ is not a strong solution. Since any strong solution is a weak solution, this shows that no strong solution of (3.1.2) with initial condition $0$ exists.

Since the process $X$ is an $\mathbb{F}$-Brownian motion for some filtration $\mathbb{F}$, it can be represented as
$$|X_t|=\int_0^t\mathrm{sgn}(X_s)\,\mathrm{d}X_s+L_t=\int_0^t\mathrm{sign}(X_s)\,\mathrm{d}X_s+L_t=W_t+L_t$$
by the results of the last chapter, where $L$ is the local time of the semimartingale $X$ at $0$ (the integrals with respect to sgn and sign agree, since $\{s: X_s=0\}$ has Lebesgue measure zero almost surely). The second part of Theorem 2.2 shows that $L$ is not only $\mathbb{F}$-adapted but even adapted to the filtration generated by $(|X_t|)_{t\ge 0}$. Therefore the same holds true for $W$.
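The failure of pathwise uniqueness can be illustrated in a discretized simulation (a sketch; grid sizes are arbitrary): for a Brownian path $X$ with $X_0=0$, the increments of $\int\mathrm{sign}(X_s)\,\mathrm{d}X_s$ computed from $X$ and from $-X$ coincide wherever the path is nonzero (in floating point the path essentially never hits $0$ exactly, apart from the starting point), so $X$ and $-X$ are two distinct solutions driven by the same $W$:

```python
import numpy as np

rng = np.random.default_rng(3)
n_steps, t = 10000, 1.0
dt = t / n_steps

dX = rng.normal(0.0, np.sqrt(dt), size=n_steps)
X = np.concatenate([[0.0], np.cumsum(dX)])  # a Brownian motion with X_0 = 0

def sign(x):
    return np.where(x > 0, 1.0, -1.0)       # sign(0) = -1, as in (3.1.2)

# left-point discretization of W_t = int_0^t sign(X_s) dX_s, from X and from -X
incr_X = sign(X[:-1]) * dX
incr_negX = sign(-X[:-1]) * (-dX)

nonzero = X[:-1] != 0.0                     # increments can differ only where X_s = 0
print(np.allclose(incr_X[nonzero], incr_negX[nonzero]))  # True
```

Only the single grid point with $X_s=0$ exactly (here $s=0$) produces differing increments, mirroring the fact that in continuous time the set $\{s: X_s=0\}$ is $\mathrm{d}s$-null.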
Obviously, the filtration $(\mathcal{G}_t)_{t\ge 0}$ generated by $|X|$ is strictly smaller than that generated by $X$: the event $\{X_1>0\}$, for example, is contained in $\sigma(X_s,\ s\le 1)$ but not in $\mathcal{G}_1$.

3.2 Weak solutions via Girsanov's theorem

The following proposition shows how Girsanov's theorem can be used to show existence of a weak solution to a certain class of stochastic differential equations.

Proposition 3.8. Fix $T>0$ and consider the stochastic differential equation
$$\mathrm{d}X_t=b(t,X_t)\,\mathrm{d}t+\mathrm{d}W_t,\quad 0\le t\le T, \quad (3.2.1)$$
where $W$ is a $d$-dimensional Brownian motion and $b:[0,T]\times\mathbb{R}^d\to\mathbb{R}^d$ is jointly measurable and bounded. Then, for any $x\in\mathbb{R}^d$, (3.2.1) has a weak solution with initial condition $X_0=x$ almost surely.

Proof. Fix $x\in\mathbb{R}^d$ and let $(X_t)_{t\ge 0}$ be a $d$-dimensional $\mathbb{F}$-Brownian motion starting at $x$ on some FPS $(\Omega,\mathcal{F},\mathbb{F},\mathbb{P})$. By Theorem 1.7, the process
$$Z_t:=\exp\Big\{\sum_{j=1}^d\int_0^t b_j(s,X_s)\,\mathrm{d}X^j_s-\frac12\int_0^t|b(s,X_s)|^2\,\mathrm{d}s\Big\},\quad t\ge 0,$$
is a continuous martingale, so Girsanov's theorem implies that under $\mathbb{Q}$ given by $\mathrm{d}\mathbb{Q}/\mathrm{d}\mathbb{P}|_{\mathcal{F}_T}=Z_T$, the process
$$W_t:=X_t-x-\int_0^t b(s,X_s)\,\mathrm{d}s,\quad 0\le t\le T,$$
is an $\mathbb{F}^T$-Brownian motion starting at $0$. Therefore,
$$X_t=x+\int_0^t b(s,X_s)\,\mathrm{d}s+W_t,\quad 0\le t\le T,$$
and hence $(X,W)$, $(\Omega,\mathcal{F}_T,\mathbb{F}^T,\mathbb{Q})$ is a weak solution of (3.2.1) with initial condition $\mathbb{Q}(X_0=x)=1$. □

Remark 3.9. The assumption that $b$ is bounded is stronger than necessary but simplifies the argument. For a more general result which also allows the initial condition to be random, see [KS91], Proposition 5.3.6 and the remark following that proposition. Uniqueness in law for (3.2.1) is discussed in Proposition 5.3.10 in [KS91].

3.3 Martingale Problems

We continue to consider stochastic differential equations of the form
$$\mathrm{d}X_t=b(t,X_t)\,\mathrm{d}t+\sum_{k=1}^m\sigma_k(t,X_t)\,\mathrm{d}W^k_t, \quad (3.3.1)$$
with measurable coefficients as before. The $d\times d$ symmetric and nonnegative definite matrix
$$a_{ij}(t,x):=\sum_{k=1}^m\sigma_k(t,x)_i\,\sigma_k(t,x)_j,\quad i,j\in\{1,\dots,d\},$$
is called the diffusion matrix associated to (3.3.1). We denote by $C^k_c(\mathbb{R}^d)$ the space of real-valued $k$-times continuously differentiable functions on $\mathbb{R}^d$ with compact support, and $C^\infty_c(\mathbb{R}^d):=\bigcap_{k\in\mathbb{N}}C^k_c(\mathbb{R}^d)$. Then the following holds.

Proposition 3.10. Let $(X,W)$, $(\Omega,\mathcal{F},\mathbb{F},\mathbb{P})$ be a weak solution of (3.3.1). Then for all $f\in C^2(\mathbb{R}^d)$, the process
$$M^f_t:=f(X_t)-f(X_0)-\int_0^t\Big(\sum_{i=1}^d b_i(s,X_s)\,\partial_i f(X_s)+\frac12\sum_{i,j=1}^d a_{ij}(s,X_s)\,\partial^2_{ij}f(X_s)\Big)\,\mathrm{d}s$$
is a continuous local $\mathbb{F}$-martingale with $M^f_0=0$. If $b$ and $\sigma_1,\dots,\sigma_m$ are locally bounded in both variables and $f\in C^2_c(\mathbb{R}^d)$, then $M^f$ is a continuous $L^2$-martingale.

Proof. This is a straightforward application of Itô's formula. □

The idea of Stroock and Varadhan [SV79] was to characterize a weak solution as the solution of a (local) martingale problem. Let
$$\mathcal{S}_d:=\{A\in\mathbb{R}^{d\times d}: A\text{ is symmetric and nonnegative definite}\}$$
and let $b:[0,\infty)\times\mathbb{R}^d\to\mathbb{R}^d$ and $a:[0,\infty)\times\mathbb{R}^d\to\mathcal{S}_d$ be measurable. For $t\ge 0$ we define the second order differential operator $\mathcal{A}_t$ by
$$\mathcal{A}_t f(x):=\sum_{i=1}^d b_i(t,x)\,\partial_i f(x)+\frac12\sum_{i,j=1}^d a_{ij}(t,x)\,\partial^2_{ij}f(x)=\langle b(t,x),\nabla f(x)\rangle+\frac12\,\mathrm{trace}\big(a(t,x)\,\mathrm{Hess}_f(x)\big),\quad f\in C^2(\mathbb{R}^d).$$

Definition 3.11. Let $b$ and $a$ be as above. A continuous $\mathbb{R}^d$-valued stochastic process $X=(X_t)_{t\ge 0}$ on some probability space $(\Omega,\mathcal{F},\mathbb{P})$ is called a solution to the local martingale problem for $(a,b)$ (or for the associated family of operators $(\mathcal{A}_t)_{t\ge 0}$) if for each $f\in C^\infty_c(\mathbb{R}^d)$, the process
$$M^f_t:=f(X_t)-f(X_0)-\int_0^t\mathcal{A}_s f(X_s)\,\mathrm{d}s \quad (3.3.2)$$
is a continuous local martingale with respect to the filtration generated by $X$. In short, we say that $X$ solves LMP$(a,b)$. If $X$ has initial distribution $\mu=\mathbb{P}\circ X_0^{-1}$, then we say that $X$ solves LMP$(a,b,\mu)$. If (3.3.2) is a martingale for each $f\in C^\infty_c(\mathbb{R}^d)$, then we say that $X$ solves the associated martingale problem, and write MP$(a,b)$ and MP$(a,b,\mu)$. Finally, we say that the solution to LMP$(a,b,\mu)$ (or MP$(a,b,\mu)$) is unique if for any two solutions $X$ and $\bar X$ (on possibly different spaces) the laws of $X$ and $\bar X$ coincide.

Remark 3.12. If $X$ solves LMP$(a,b,\mu)$, then we can transfer $X$ to the following canonical setup: consider the probability space
$$(\Omega,\mathcal{F},\mathbb{P}):=\big(C([0,\infty),\mathbb{R}^d),\ \mathcal{B}(C([0,\infty),\mathbb{R}^d)),\ \mathcal{L}(X)\big),$$
where $\mathcal{L}(X)$ denotes the law of $X$ and $C([0,\infty),\mathbb{R}^d)$ is equipped with the usual topology (one can easily find a complete metric which makes the space Polish). Define the canonical process $\pi_t(\omega):=\omega_t$, $t\ge 0$. Then $\pi$ has the same law as $X$ and therefore $\pi$ also solves LMP$(a,b,\mu)$. In particular, we can identify a solution $X$ to LMP$(a,b,\mu)$ with the probability measure $\mathcal{L}(X)$ on the measurable space $(C([0,\infty),\mathbb{R}^d),\mathcal{B}(C([0,\infty),\mathbb{R}^d)))$. Analogous statements hold for LMP$(a,b)$, MP$(a,b)$, and MP$(a,b,\mu)$.
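As a numerical sanity check of Definition 3.11, the following sketch simulates an Ornstein-Uhlenbeck-type diffusion $\mathrm{d}X=-X\,\mathrm{d}t+\mathrm{d}W$ (an arbitrary illustrative choice with $a\equiv 1$, $b(x)=-x$) by the Euler scheme and verifies that the Monte Carlo mean of $M^f_t=f(X_t)-f(X_0)-\int_0^t\mathcal{A}f(X_s)\,\mathrm{d}s$ is near $0$ for $f(x)=x^2$, where $\mathcal{A}f(x)=-x f'(x)+\frac12 f''(x)=1-2x^2$:

```python
import numpy as np

rng = np.random.default_rng(4)
n_paths, n_steps, t = 20000, 1000, 1.0
dt = t / n_steps

# Euler scheme for dX = -X dt + dW with X_0 = 0, plus the running integral of Af(X)
X = np.zeros(n_paths)
integral = np.zeros(n_paths)                 # int_0^t Af(X_s) ds with f(x) = x^2
for _ in range(n_steps):
    integral += (1.0 - 2.0 * X ** 2) * dt    # Af(x) = -x * 2x + (1/2) * 2 = 1 - 2x^2
    X += -X * dt + rng.normal(0.0, np.sqrt(dt), size=n_paths)

M = X ** 2 - 0.0 - integral                  # M^f_t = f(X_t) - f(X_0) - int Af(X_s) ds
print(abs(M.mean()) < 0.05)                  # True: E[M^f_t] = 0 for a martingale
```

A mean near zero is of course only a necessary condition for the martingale property; conditional means over sub-intervals could be checked the same way.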
Therefore, we will often use formulations like "Let $\mathbb{P}$ be a solution of LMP$(a, b, \mu)$" if $\mathbb{P}$ is the law of a solution of LMP$(a, b, \mu)$.

We know already from the previous proposition that weak solutions solve the corresponding local martingale problem. Theorem 3.15 shows that the converse is also true. Before we formulate that result, we provide a representation theorem (due to Doob) of an $\mathbb{R}^d$-valued local martingale as a stochastic integral with respect to a Brownian motion on a (possibly enlarged) probability space. We will need that result in the proof of Theorem 3.15, but it is also of interest otherwise.

Definition 3.13. If $(\Omega, \mathcal{F}, \mathbb{F}, \mathbb{P})$ is a FPS and $m \in \mathbb{N}$ is fixed, then we define an $m$-extension $(\bar\Omega, \bar{\mathcal{F}}, \bar{\mathbb{F}}, \bar{\mathbb{P}})$ of $(\Omega, \mathcal{F}, \mathbb{F}, \mathbb{P})$ as follows. Let $W$ be an $m$-dimensional $\hat{\mathbb{F}}$-Brownian motion on a FPS $(\hat\Omega, \hat{\mathcal{F}}, \hat{\mathbb{F}}, \hat{\mathbb{P}})$ and define
\[
\check\Omega := \Omega \times \hat\Omega, \quad \check{\mathcal{F}} := \mathcal{F} \otimes \hat{\mathcal{F}}, \quad \check{\mathcal{F}}_t := \mathcal{F}_t \otimes \hat{\mathcal{F}}_t, \quad \check{\mathbb{P}} := \mathbb{P} \otimes \hat{\mathbb{P}}.
\]
Let $(\bar\Omega, \bar{\mathcal{F}}, \bar{\mathbb{F}}, \bar{\mathbb{P}})$ be the smallest FPS which contains (or extends) $(\check\Omega, \check{\mathcal{F}}, \check{\mathbb{F}}, \check{\mathbb{P}})$ and which satisfies the usual conditions. Define $\tilde W_t(\omega, \hat\omega) := W_t(\hat\omega)$, $t \ge 0$. Note that $\tilde W$ is a $\bar{\mathbb{F}}$-Brownian motion. For any adapted process $A$ on $(\Omega, \mathcal{F}, \mathbb{F}, \mathbb{P})$ we define $\tilde A_t(\omega, \hat\omega) := A_t(\omega)$, $t \ge 0$. Note that $\tilde A$ is $\bar{\mathbb{F}}$-adapted and $\tilde A$ is $\bar{\mathbb{F}}$-progressive if $A$ is $\mathbb{F}$-progressive. We will drop the tildes whenever there is no danger of confusion.

Theorem 3.14. Let $M$ be a continuous $\mathbb{R}^d$-valued local $\mathbb{F}$-martingale with $M_0 = 0$ on some FPS $(\Omega, \mathcal{F}, \mathbb{F}, \mathbb{P})$ such that
\[
\langle M^i, M^j \rangle_t = \int_0^t \sum_{k=1}^m V_s^{ik} V_s^{jk}\,\mathrm{d}s, \quad t \ge 0, \tag{3.3.3}
\]
for progressive processes $(V_t^{ik})_{t \ge 0}$, $1 \le i \le d$, $1 \le k \le m$. Then on any $m$-extension $(\bar\Omega, \bar{\mathcal{F}}, \bar{\mathbb{F}}, \bar{\mathbb{P}})$ there exists an $\mathbb{R}^m$-valued $\bar{\mathbb{F}}$-Brownian motion $B$ such that
\[
M_t^i = \sum_{k=1}^m \int_0^t V_s^{ik}\,\mathrm{d}B_s^k.
\]

Proof. For each $t \ge 0$ we regard $V_t := (V_t^{ik})_{i=1,\dots,d;\,k=1,\dots,m}$ as a linear map from $\mathbb{R}^m$ to $\mathbb{R}^d$ (or a $d \times m$-matrix). Let $N_t \subseteq \mathbb{R}^m$ be its null space and $R_t \subseteq \mathbb{R}^d$ its range. Let $V_t^{-1} : R_t \to N_t^\perp$ be the bijective inverse of the restriction of $V_t$ to $N_t^\perp$, and denote the orthogonal projection from a Euclidean space onto a subspace $U$ by $\pi_U$. Let $W$ be an $m$-dimensional Brownian motion as in the previous definition (in particular, $W$ is independent of $M$) and define
\[
B_t := \int_0^t V_s^{-1} \pi_{R_s}\,\mathrm{d}M_s + \int_0^t \pi_{N_s}\,\mathrm{d}W_s, \quad t \ge 0,
\]
where we interpret $V_s^{-1} \pi_{R_s}$ as an $m \times d$-matrix (or linear map from $\mathbb{R}^d$ to $\mathbb{R}^m$). Note that the integrands are progressive and the stochastic integrals are well-defined! Then the covariation matrix of the continuous local martingale $B$ is
\[
\frac{\mathrm{d}}{\mathrm{d}t} \langle B^k, B^l \rangle = \sum_{i,j=1}^d \big(V_t^{-1} \pi_{R_t}\big)_{ki} \big(V_t^{-1} \pi_{R_t}\big)_{lj}\,\frac{\mathrm{d}}{\mathrm{d}t} \langle M^i, M^j \rangle + \sum_{\nu=1}^m \big(\pi_{N_t}\big)_{k\nu} \big(\pi_{N_t}\big)_{l\nu}.
\]
Inserting (3.3.3), we obtain in matrix notation (with $\cdot^T$ denoting the transpose)
\[
\frac{\mathrm{d}}{\mathrm{d}t} \langle B \rangle_t = V_t^{-1} \pi_{R_t} V_t V_t^T \big(V_t^{-1} \pi_{R_t}\big)^T + \pi_{N_t} \pi_{N_t}^T = \pi_{N_t^\perp} \pi_{N_t^\perp}^T + \pi_{N_t} \pi_{N_t}^T = \pi_{N_t^\perp} + \pi_{N_t} = I_m.
\]
Therefore, Lévy's theorem shows that $B$ is an $m$-dimensional Brownian motion. Further, using the fact that $V_s \pi_{N_s} = 0$ and that for $N \in \mathcal{M}_{\mathrm{loc}}$, $\langle N \rangle = 0$ implies $N = 0$,
\[
\int_0^t V_s\,\mathrm{d}B_s = \int_0^t V_s V_s^{-1} \pi_{R_s}\,\mathrm{d}M_s + \int_0^t V_s \pi_{N_s}\,\mathrm{d}W_s = \int_0^t \pi_{R_s}\,\mathrm{d}M_s = M_t,
\]
so the proof is complete.
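The linear-algebra core of the proof of Theorem 3.14 can be verified numerically (a sketch, with a randomly generated and deliberately rank-deficient matrix $V$): the map $V^{-1}\pi_R$ is exactly the Moore--Penrose pseudoinverse of $V$, $V^{-1}\pi_R V$ is the orthogonal projection onto $N^\perp$, and the matrix identity behind $\frac{\mathrm{d}}{\mathrm{d}t}\langle B \rangle_t = I_m$ holds.

```python
import numpy as np

# Sketch of the algebra in the proof of Theorem 3.14: for a fixed d x m matrix V,
# pinv(V) plays the role of V^{-1} pi_R, P_Nperp := pinv(V) V is the orthogonal
# projection onto N^perp, P_N := I - P_Nperp projects onto the null space N, and
#   pinv(V) (V V^T) pinv(V)^T + P_N P_N^T = I_m,
# which is the computation showing d<B>_t/dt = I_m.

rng = np.random.default_rng(1)
d, m, r = 3, 5, 2                      # deliberately rank-deficient: rank r < d
V = rng.standard_normal((d, r)) @ rng.standard_normal((r, m))

A = np.linalg.pinv(V)                  # plays the role of V^{-1} pi_R
P_Nperp = A @ V                        # projection onto N^perp = (null V)^perp
P_N = np.eye(m) - P_Nperp              # projection onto the null space N

cov_rate = A @ (V @ V.T) @ A.T + P_N @ P_N.T   # "d<B>/dt" in matrix form
print(np.allclose(cov_rate, np.eye(m)))        # True
```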

Theorem 3.15. Let $b : [0,\infty) \times \mathbb{R}^d \to \mathbb{R}^d$ and $a : [0,\infty) \times \mathbb{R}^d \to S^d$ be measurable. Suppose that $X$ is a solution to LMP$(a, b)$ on some probability space $(\Omega, \mathcal{F}, \mathbb{P})$. Let $m \in \mathbb{N}$ and let $\sigma : [0,\infty) \times \mathbb{R}^d \to \mathbb{R}^{d \times m}$ be a measurable function such that $a = \sigma\sigma^T$ (note that such a function always exists when $m \ge d$). Let $\mathbb{F}$ be the complete filtration on $(\Omega, \mathcal{F}, \mathbb{P})$ generated by $X$. Then, on any $m$-extension $(\bar\Omega, \bar{\mathcal{F}}, \bar{\mathbb{F}}, \bar{\mathbb{P}})$ of the FPS $(\Omega, \mathcal{F}, \mathbb{F}, \mathbb{P})$ there exists an $\mathbb{R}^m$-valued $\bar{\mathbb{F}}$-Brownian motion $B$ such that $(X, B, \bar\Omega, \bar{\mathcal{F}}, \bar{\mathbb{F}}, \bar{\mathbb{P}})$ is a weak solution of (3.3.1).

Proof. Fix $i \in \{1, \dots, d\}$. For each $n \in \mathbb{N}$, choose $f_i^{(n)} \in C_c^\infty(\mathbb{R}^d)$ with $f_i^{(n)}(x) = x_i$ for all $|x| \le n$. For each $n \in \mathbb{N}$, choose a localizing sequence $(T_m^{(n)})_{m \in \mathbb{N}}$ of stopping times for the continuous local martingale $M^{f_i^{(n)}}$. We can and will assume that $T_n^{(n)} \le T_{n+1}^{(n+1)}$ for all $n \in \mathbb{N}$. Then $M^{f_i^{(n)}}_{t \wedge T_m^{(n)}}$ is a martingale for every $m, n \in \mathbb{N}$. Define the stopping time $\tau_n := \inf\{t \ge 0 : |X_t| \ge n\}$. Then
\[
M_t^{f_i^{(n)}} = X_t^i - X_0^i - \int_0^t b_i(s, X_s)\,\mathrm{d}s, \quad t \le \tau_n.
\]
Observe that $S_n := \tau_n \wedge T_n^{(n)}$ is an increasing sequence of stopping times with $S_n \to \infty$ almost surely and that
\[
M^{f_i^{(n)}}_{t \wedge S_n} = X^i_{t \wedge S_n} - X_0^i - \int_0^{t \wedge S_n} b_i(s, X_s)\,\mathrm{d}s
\]
is a martingale for each $n$. Consequently,
\[
M_t^i := X_t^i - X_0^i - \int_0^t b_i(s, X_s)\,\mathrm{d}s \tag{3.3.4}
\]
is a continuous local martingale. This formula also shows that $X$ is a continuous semimartingale.

Next, fix $i, j \in \{1, \dots, d\}$ and choose $f_{i,j}^{(n)} \in C_c^\infty(\mathbb{R}^d)$ such that $f_{i,j}^{(n)}(x) = x_i x_j$ whenever $|x| \le n$. As above, one can show that
\[
M_t^{i,j} := X_t^i X_t^j - X_0^i X_0^j - \int_0^t \big( X_s^i\,b_j(s, X_s) + X_s^j\,b_i(s, X_s) + a_{ij}(s, X_s) \big)\,\mathrm{d}s
\]
is a continuous local martingale. Integrating by parts and using (3.3.4), we obtain
\begin{align*}
M_t^{i,j} &= \int_0^t X_s^i\,\mathrm{d}X_s^j + \int_0^t X_s^j\,\mathrm{d}X_s^i + \langle X^i, X^j \rangle_t - \int_0^t \big( X_s^i\,b_j(s, X_s) + X_s^j\,b_i(s, X_s) + a_{ij}(s, X_s) \big)\,\mathrm{d}s \\
&= \int_0^t X_s^i\,\mathrm{d}M_s^j + \int_0^t X_s^j\,\mathrm{d}M_s^i + \langle M^i, M^j \rangle_t - \int_0^t a_{ij}(s, X_s)\,\mathrm{d}s.
\end{align*}
Therefore, the process $\langle M^i, M^j \rangle_t - \int_0^t a_{ij}(s, X_s)\,\mathrm{d}s$ is a continuous local martingale starting at $0$ which is of locally finite variation, so Theorem 1.57 in WT3 implies that the process is $0$ almost surely.

We have shown that the processes $M^i$ are continuous local martingales starting at $0$ such that $\langle M^i, M^j \rangle_t = \int_0^t a_{ij}(s, X_s)\,\mathrm{d}s$. Therefore, the claim follows from Theorem 3.14.
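The identity $\langle M^i, M^j \rangle_t = \int_0^t a_{ij}(s, X_s)\,\mathrm{d}s$ at the end of the proof can be illustrated numerically (a sketch with a constant, illustrative $2 \times 2$ matrix $\sigma$, so that $X = \sigma W$ is itself the local martingale $M$): the realized covariation of the discretized path converges to $a = \sigma\sigma^T$.

```python
import numpy as np

# Sketch: for dX_t = sigma dW_t with constant 2x2 sigma (drift-free, so M = X),
# the realized covariation  sum_k (dX_{t_k})(dX_{t_k})^T  over [0, 1]
# approximates <M^i, M^j>_1 = a_ij with a = sigma sigma^T.

rng = np.random.default_rng(2)
sigma = np.array([[1.0, 0.0], [0.8, 0.6]])
a = sigma @ sigma.T                    # = [[1.0, 0.8], [0.8, 1.0]]

n = 200_000
dW = np.sqrt(1.0 / n) * rng.standard_normal((n, 2))
dX = dW @ sigma.T                      # increments of X = sigma W
realized = dX.T @ dX                   # realized covariation matrix over [0, 1]
print(np.allclose(realized, a, atol=0.05))   # True
```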

Remark 3.16. Note that the previous theorem establishes a one-to-one correspondence between weak solutions and solutions of (local) martingale problems, up to the non-unique choice of the matrix $\sigma$ when $a$ is given. In particular: the sde (3.3.1) with coefficients $b$ and $\sigma$ has a weak solution if and only if LMP$(\sigma\sigma^T, b)$ has a solution, and we have weak uniqueness if and only if LMP$(\sigma\sigma^T, b, \mu)$ has at most one solution $\mathbb{P} \in M_1(C([0,\infty), \mathbb{R}^d))$ for every $\mu \in M_1(\mathbb{R}^d)$. Note that the law of a weak solution depends on $\sigma$ only via $\sigma\sigma^T$. Note also that the local martingale formulation does not say much about pathwise uniqueness. Indeed, for equation (3.1.2), we have $\sigma(x) = \mathrm{sign}(x)$ and therefore $a(x) = 1$. We showed that weak uniqueness holds and that a weak solution exists for any initial law $\mu$. Therefore the associated LMP$(1, 0, \mu)$ has a unique solution, namely standard Brownian motion with initial distribution $\mu$. Note that for the particular square root $\sigma(x) = \mathrm{sign}(x)$ of $a = 1$ pathwise uniqueness does not hold and we do not have a strong solution (at least not if the initial condition is $0$), while for the root $\sigma(x) = 1$ pathwise uniqueness holds and we have a strong solution.

3.4 Martingale Problems: Existence of solutions

We already stated an existence result for the solution of a local martingale problem in case $a \equiv I_d$ using Girsanov's theorem. Our aim in this section is to prove a similar statement for more general functions $a$. Before we formulate and prove an existence result for a solution of LMP$(a, b)$, we present a useful result about weak convergence (see [K02], Theorem 4.27) and then a tightness criterion for probability measures on the space $C([0,\infty), \mathbb{R}^d)$.

Proposition 3.17. Let $E$ and $\tilde E$ be metric spaces, let $\mu, \mu_1, \mu_2, \ldots \in M_1(E)$ such that $\mu_n \to \mu$ weakly, and let $f, f_1, f_2, \ldots : E \to \tilde E$ be Borel-measurable mappings. Assume that there exists a set $C \in \mathcal{B}(E)$ such that $\mu(C) = 1$ and $f_n(x_n) \to f(x)$ whenever $x_n \to x \in C$. Then $\mu_n \circ f_n^{-1} \to \mu \circ f^{-1}$ weakly.

Remark 3.18.
In WT2 we treated the special case in which all functions $f_n$ equal $f$ and $f$ is continuous at $\mu$-almost every point of $E$.

Proof of Proposition 3.17. Fix an open set $G \subseteq \tilde E$ and let $x \in f^{-1}(G) \cap C$ (in case this set is nonempty). Then there exist an open neighborhood $N$ of $x$ and some $m \in \mathbb{N}$ such that $f_k(x') \in G$ for all $k \ge m$ and all $x' \in N$ (otherwise we could find points $x_j \to x$ and indices $k_j \to \infty$ with $f_{k_j}(x_j) \notin G$, contradicting $f_{k_j}(x_j) \to f(x) \in G$). Thus $N \subseteq \bigcap_{k \ge m} f_k^{-1}(G)$, and so
\[
f^{-1}(G) \cap C \subseteq \bigcup_{m \in \mathbb{N}} \Big( \bigcap_{k \ge m} f_k^{-1}(G) \Big)^\circ,
\]
where $A^\circ$ denotes the interior of the set $A$. Using the Portmanteau theorem from WT2, we get
\[
\mu\big(f^{-1}(G)\big) \le \mu\Big( \bigcup_{m} \Big( \bigcap_{k \ge m} f_k^{-1}(G) \Big)^\circ \Big) = \sup_m \mu\Big( \Big( \bigcap_{k \ge m} f_k^{-1}(G) \Big)^\circ \Big) \le \sup_m \liminf_{n \to \infty} \mu_n\Big( \bigcap_{k \ge m} f_k^{-1}(G) \Big) \le \liminf_{n \to \infty} \mu_n\big( f_n^{-1}(G) \big).
\]
Again using the Portmanteau theorem, the claim follows.

Consider the space $C^d := C([0,\infty), \mathbb{R}^d)$ equipped with the metric
\[
\rho(f, g) := \sum_{k=0}^\infty 2^{-k} \Big( \max_{x \in [k, k+1]} |f(x) - g(x)| \wedge 1 \Big).
\]
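The metric $\rho$ can be evaluated numerically up to a truncation error (a sketch; the tail of the series is bounded by $2^{-K}$, and the grid resolution and test functions are illustrative choices):

```python
import numpy as np

# Sketch: truncated evaluation of
#   rho(f, g) = sum_k 2^{-k} ( max_{x in [k, k+1]} |f(x) - g(x)|  min-with  1 ),
# approximating each max on a grid; truncating at K terms is accurate up to 2^{-K}.

def rho(f, g, K=20, pts_per_unit=1000):
    total = 0.0
    for k in range(K):
        x = np.linspace(k, k + 1, pts_per_unit)
        total += 2.0**-k * min(np.max(np.abs(f(x) - g(x))), 1.0)
    return total

f = np.sin
g = lambda x: np.sin(x) + 0.25
print(rho(f, g))   # sup difference is 0.25 on every [k, k+1], so rho = 0.25 * 2 = 0.5
```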

It is easy to see that $(C^d, \rho)$ is complete and separable (and hence Polish) and that a sequence of elements of $C^d$ converges with respect to $\rho$ iff the sequence converges uniformly on every compact subset of $[0,\infty)$. For $t \ge 0$, define the projection map $\pi_t : C^d \to C([0,t], \mathbb{R}^d)$ by $\pi_t(f)(s) := f(s)$, $s \in [0,t]$. Clearly, $\pi_t$ is continuous if $C([0,t], \mathbb{R}^d)$ is equipped with the supremum norm. We define the modulus of continuity of $f \in C^d$ on $[0,t]$ by
\[
w_f^t(h) := \sup\{ |f(r) - f(s)| : r, s \in [0,t],\ |r - s| \le h \}, \quad h > 0.
\]

Proposition 3.19. The set $\Gamma \subseteq M_1(C^d)$ is relatively compact (and hence tight) iff the following two conditions hold:

(i) For each $\varepsilon > 0$ there exists a compact set $K_\varepsilon \subset \mathbb{R}^d$ such that $\mu(\{f \in C^d : f(0) \in K_\varepsilon\}) \ge 1 - \varepsilon$ for every $\mu \in \Gamma$.

(ii) For each $T > 0$, $\varepsilon > 0$ and $\delta > 0$ there exists some $h > 0$ such that
\[
\mu\big(\{f \in C^d : w_f^T(h) \ge \delta\}\big) \le \varepsilon \quad \text{for all } \mu \in \Gamma.
\]

Proof. We will show this in class.

We will use the following sufficient tightness criterion on $C^d$.

Proposition 3.20. Let $Y^i$, $i \in I$, be a family of $C^d$-valued random variables on (possibly different) spaces $(\Omega^i, \mathcal{F}^i, \mathbb{P}^i)$. A sufficient condition for tightness of $(\mathcal{L}(Y^i))_{i \in I}$ is that the following two conditions hold:

(i) For each $\varepsilon > 0$ there exists a compact set $K_\varepsilon \subset \mathbb{R}^d$ such that $\mathbb{P}^i(\{Y_0^i \in K_\varepsilon\}) \ge 1 - \varepsilon$ for every $i \in I$.

(ii) For each $T > 0$ there exist $a_T, b_T, c_T > 0$ such that
\[
\mathbb{E}\,|Y_t^i - Y_s^i|^{a_T} \le c_T\,|t - s|^{1 + b_T}, \quad s, t \in [0,T],\ i \in I.
\]

Proof. Fix $T > 0$ and define $Z_t^i := Y_{tT}^i$, $t \ge 0$. Then, for $s, t \in [0,1]$,
\[
\mathbb{E}\,|Z_t^i - Z_s^i|^{a_T} = \mathbb{E}\,|Y_{tT}^i - Y_{sT}^i|^{a_T} \le c_T\,T^{1 + b_T}\,|t - s|^{1 + b_T}.
\]
Choose $\kappa \in (0, b_T / a_T)$. Then Kolmogorov's continuity theorem (as e.g. in Theorem 5.2 in the WT3 lecture notes) implies
\[
\mathbb{P}^i\big( w_{Y^i}^T(h) \ge \delta \big) = \mathbb{P}^i\big( w_{Z^i}^1(h/T) \ge \delta \big) \le \frac{1}{\delta^{a_T}}\,\mathbb{E}\Big[ \big( w_{Z^i}^1(h/T) \big)^{a_T} \Big] \le \frac{1}{\delta^{a_T}}\,h^{\kappa a_T}\,\xi_T,
\]
for some $\xi_T > 0$ which depends neither on $i \in I$ nor on $\delta$ and $h$, so the claim follows from the previous proposition.

The following is extremely preliminary. Please ignore it!

Theorem 3.21. Let $a : \mathbb{R}^d \to S^d$ and $b : \mathbb{R}^d \to \mathbb{R}^d$ be bounded and continuous and let $x \in \mathbb{R}^d$. Then LMP$(a, b, \delta_x)$ has a solution.
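Before turning to the proof, a quick numerical aside on Proposition 3.20: Brownian motion satisfies condition (ii) with $a_T = 4$, $b_T = 1$, $c_T = 3$, since $B_t - B_s \sim \mathcal{N}(0, |t-s|)$ gives $\mathbb{E}|B_t - B_s|^4 = 3|t-s|^2$. A sketch of a Monte Carlo check (the choice of $s$, $t$ and the sample size are illustrative):

```python
import numpy as np

# Sketch: moment bound for Brownian increments, E|B_t - B_s|^4 = 3 |t - s|^2,
# i.e. Proposition 3.20 (ii) holds with a_T = 4, b_T = 1, c_T = 3.

rng = np.random.default_rng(4)
s, t = 0.3, 0.7
increments = np.sqrt(t - s) * rng.standard_normal(1_000_000)
print((increments**4).mean(), 3 * (t - s)**2)   # both close to 0.48
```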

Proof. The idea of the proof is to approximate the functions $a$ and $b$ by smoother functions $a_n$ and $b_n$ for which we know that LMP$(a_n, b_n, \delta_x)$ has a solution $\mathbb{P}_n \in M_1(C^d)$, and to show that the sequence $(\mathbb{P}_n)_{n \in \mathbb{N}}$ is tight. Since $C^d$ is a Polish space, we know from Prohorov's theorem (WT2) that there exists a subsequence of $(\mathbb{P}_n)_{n \in \mathbb{N}}$ which converges weakly to some $\mathbb{P} \in M_1(C^d)$. Then it remains to show that $\mathbb{P}$ solves LMP$(a, b, \delta_x)$.

We choose $b_n : \mathbb{R}^d \to \mathbb{R}^d$ such that $b_n \in C^\infty(\mathbb{R}^d, \mathbb{R}^d)$ and $b_n \to b$ uniformly on $\mathbb{R}^d$ (it is clear that such a sequence exists). In addition we can and will assume that $\sup_{x \in \mathbb{R}^d} |b_n(x)| \le \sup_{x \in \mathbb{R}^d} |b(x)| =: B$. To define $a_n$, we first choose a continuous map $\sigma : \mathbb{R}^d \to \mathbb{R}^{d \times d}$ such that $a(x) = \sigma(x)\sigma^T(x)$ (note that $\sigma$ is automatically bounded). Denote the $k$-th column of $\sigma$ by $\sigma_k$, $k = 1, \dots, d$, and choose $\sigma_k^{(n)} \in C^\infty(\mathbb{R}^d, \mathbb{R}^d)$ such that $\sigma_k^{(n)} \to \sigma_k$ uniformly on $\mathbb{R}^d$. Define $a_{ij}^{(n)} := \ldots$

It remains to verify that $X$ solves LMP$(a, b, \delta_x)$, which is equivalent to showing that for every $0 \le s < t < \infty$, any $f \in C_c^\infty(\mathbb{R}^d)$ and any bounded continuous $g : C([0,s], \mathbb{R}^d) \to \mathbb{R}$,
\[
\mathbb{E}\Big[ \Big( f(X_t) - f(X_s) - \int_s^t A_r f(X_r)\,\mathrm{d}r \Big)\,g\big( (X_u)_{u \in [0,s]} \big) \Big] = 0.
\]
We write this equation as $\mathbb{E}[\varphi(X)] = 0$, where $\varphi : C([0,t], \mathbb{R}^d) \to \mathbb{R}$. We know that the corresponding equation $\mathbb{E}[\varphi_n(X^n)] = 0$ holds, where $\varphi_n$ is defined like $\varphi$ but with $A_r$ replaced by $A_r^n$ \ldots

Remark 3.22. The previous theorem can be generalized considerably (see for example [K02]): the functions $a$ and $b$ can be allowed to be time-dependent, random, and to depend also on the past values of the solution process. Further, boundedness can be weakened to a linear growth condition, and $\delta_x$ can be replaced by an arbitrary $\mu \in M_1(\mathbb{R}^d)$.
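The smoothing step in the proof can be illustrated numerically (a sketch under the extra assumption that $b$ is uniformly continuous, so that mollification at a fixed scale already converges uniformly; the grid, the mollifier, and the test function $b(x) = |\sin x|$ are all illustrative choices):

```python
import numpy as np

# Sketch of the approximation step in the proof of Theorem 3.21: convolving a
# (here: uniformly continuous, bounded) coefficient b with a mollifier of small
# width gives a smooth approximation whose sup-norm distance to b shrinks with
# the width.  Checked numerically on a grid for b(x) = |sin(x)|.

def mollify(b_vals, dx, width):
    # discrete convolution with a normalized smooth bump supported on [-width, width]
    x = np.arange(-width, width + dx, dx)
    bump = np.exp(-1.0 / np.clip(1 - (x / width) ** 2, 1e-12, None))
    bump /= bump.sum()
    return np.convolve(b_vals, bump, mode="same")

dx = 1e-3
grid = np.arange(-10, 10, dx)
b = np.abs(np.sin(grid))

for width in (0.5, 0.1, 0.02):
    b_n = mollify(b, dx, width)
    err = np.max(np.abs(b_n - b)[2000:-2000])   # ignore boundary effects of mode="same"
    print(width, err)                            # sup-norm error shrinks with the width
```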

Bibliography

[Bo18] A. Bovier, Introduction to stochastic analysis, Course Notes, Bonn, https://www.dropbox.com/s/1rys3xij2fb4ys/stochana.pdf?dl=0, 2018.

[DZ96] G. Da Prato and J. Zabczyk, Ergodicity for Infinite Dimensional Systems, Cambridge Univ. Press, Cambridge, 1996.

[Eb16] A. Eberle, Introduction to stochastic analysis, Course Notes, Bonn, https://wt.iam.uni-bonn.de/eberle/skripten/, 2016.

[EK86] S. N. Ethier and T. G. Kurtz, Markov Processes: Characterization and Convergence, Wiley, New York, 1986.

[IW89] N. Ikeda and S. Watanabe, Stochastic Differential Equations and Diffusion Processes, North-Holland, Amsterdam, 1989.

[K02] O. Kallenberg, Foundations of Modern Probability, second edition, Springer, New York, 2002.

[KS91] I. Karatzas and S. Shreve, Brownian Motion and Stochastic Calculus, second edition, Springer, New York, 1991.

[Pr92] P. Protter, Stochastic Integration and Stochastic Differential Equations, second edition, Springer, New York, 1992.

[Re13] M. v. Renesse, Stochastische Differentialgleichungen, Course Notes, Leipzig, http://www.math.uni-leipzig.de/~renesse, 2013.

[RY94] D. Revuz and M. Yor, Continuous Martingales and Brownian Motion, second edition, Springer, 1994.

[RW194] L. C. G. Rogers and D. Williams, Diffusions, Markov Processes and Martingales, Volume 1: Foundations, second edition, Wiley, 1994.

[RW294] L. C. G. Rogers and D. Williams, Diffusions, Markov Processes and Martingales, Volume 2: Itô Calculus, second edition, Wiley, 1994.

[SV79] D. Stroock and S. Varadhan, Multidimensional Diffusion Processes, Springer, 1979.

[St13] T. Sturm, Stochastische Analysis, Course Notes, Bonn, http://www-wt.iam.uni-bonn.de/~sturm/skript/stochana.pdf, 2013.

[WW90] H. v. Weizsäcker and G. Winkler, Stochastic Integrals, Vieweg, 1990.