Martingale problems and stochastic equations for Markov processes


1. Basics of stochastic processes
2. Markov processes and generators
3. Martingale problems
4. Existence of solutions and forward equations
5. Stochastic integrals for Poisson random measures
6. Weak and strong solutions of stochastic equations
7. Stochastic equations for Markov processes in R^d
8. Convergence for Markov processes characterized by martingale problems

9. Convergence for Markov processes characterized by stochastic differential equations
10. Martingale problems for conditional distributions
11. Equivalence of stochastic equations and martingale problems
12. Genealogies and ordered representations of measure-valued processes
13. Poisson representations
14. Stochastic partial differential equations
15. Information and conditional expectation
16. Technical lemmas
17. Exercises
18. Stochastic analysis exercises
19. References

kurtz/franklect.htm

1. Basics of stochastic processes
Filtrations
Stopping times
Martingales
Optional sampling theorem
Doob's inequalities
Stochastic integrals
Local martingales
Semimartingales
Computing quadratic variations
Covariation
Itô's formula

Conventions and caveats
State spaces are always complete, separable metric spaces (sometimes called Polish spaces), usually denoted (E, r).
All probability spaces are complete.
All identities involving conditional expectations (or conditional probabilities) only hold almost surely (even when I don't say so).
If the filtration {F_t} involved is obvious, I will say "adapted" rather than {F_t}-adapted, "stopping time" rather than {F_t}-stopping time, etc.
All processes are cadlag (right continuous with left limits at each t > 0), unless otherwise noted.
A process is real-valued if that is the only way the formula makes sense.

References
Kurtz, Lecture Notes for Math 735: kurtz/m735.htm
Ethier and Kurtz, Markov Processes: Characterization and Convergence
Protter, Stochastic Integration and Differential Equations, Second Edition

Filtrations
(Ω, F, P) a probability space. Available information is modeled by a sub-σ-algebra of F.
F_t = information available at time t. {F_t} is a filtration if t < s implies F_t ⊂ F_s.
{F_t} is complete if F_0 contains all subsets of sets of probability zero.
A stochastic process X is adapted to {F_t} if X(t) is F_t-measurable for each t ≥ 0.
An E-valued stochastic process X adapted to {F_t} is {F_t}-Markov if
    E[f(X(t+r)) | F_t] = E[f(X(t+r)) | X(t)],  t, r ≥ 0, f ∈ B(E).

Measurability for stochastic processes
A stochastic process is an indexed family of random variables, but if the index set is [0, ∞), then we may want to know more about X(t, ω) than that it is a measurable function of ω for each t. For example, for an R-valued process X, when are
    ∫_a^b X(s, ω) ds  and  X(τ(ω), ω)
random variables?
X is measurable if (t, ω) ∈ [0, ∞) × Ω → X(t, ω) ∈ E is B[0, ∞) × F-measurable.
Lemma 1.1 If X is measurable and ∫_a^b |X(s, ω)| ds < ∞, then ∫_a^b X(s, ω) ds is a random variable. If, in addition, τ is a nonnegative random variable, then X(τ(ω), ω) is a random variable.

Proof. The first part is a standard result for measurable functions on a product space: verify the result for X(s, ω) = 1_A(s) 1_B(ω), A ∈ B[0, ∞), B ∈ F, and apply the Dynkin class theorem to extend the result to 1_C, C ∈ B[0, ∞) × F.
If τ is a nonnegative random variable, then ω ∈ Ω → (τ(ω), ω) ∈ [0, ∞) × Ω is measurable. Consequently, X(τ(ω), ω) is the composition of two measurable functions.

Measurability continued
A stochastic process X is {F_t}-adapted if for all t ≥ 0, X(t) is F_t-measurable.
If X is measurable and adapted, the restriction of X to [0, t] × Ω is B[0, t] × F-measurable, but it may not be B[0, t] × F_t-measurable.
X is progressive if for each t ≥ 0, (s, ω) ∈ [0, t] × Ω → X(s, ω) ∈ E is B[0, t] × F_t-measurable.
Let W = {A ∈ B[0, ∞) × F : A ∩ ([0, t] × Ω) ∈ B[0, t] × F_t, t ≥ 0}. Then W is a σ-algebra and X is progressive if and only if (s, ω) → X(s, ω) is W-measurable. Since pointwise limits of measurable functions are measurable, pointwise limits of progressive processes are progressive.

Stopping times
Let {F_t} be a filtration. τ is an {F_t}-stopping time if and only if {τ ≤ t} ∈ F_t for each t ≥ 0.
If τ is a stopping time, F_τ ≡ {A ∈ F : A ∩ {τ ≤ t} ∈ F_t, t ≥ 0}.
If τ_1 and τ_2 are stopping times with τ_1 ≤ τ_2, then F_{τ_1} ⊂ F_{τ_2}.
If τ_1 and τ_2 are stopping times, then τ_1 and τ_1 ∧ τ_2 are F_{τ_1}-measurable.

A process observed at a stopping time
If X is measurable and τ is a stopping time, then X(τ(ω), ω) is a random variable.
Lemma 1.2 If τ is a stopping time and X is progressive, then X(τ) is F_τ-measurable.
Proof. ω ∈ Ω → (τ(ω) ∧ t, ω) ∈ [0, t] × Ω is measurable as a mapping from (Ω, F_t) to ([0, t] × Ω, B[0, t] × F_t). Consequently,
    ω → X(τ(ω) ∧ t, ω)
is F_t-measurable, and
    {X(τ) ∈ A} ∩ {τ ≤ t} = {X(τ ∧ t) ∈ A} ∩ {τ ≤ t} ∈ F_t.

Right continuous processes
Most of the processes you know are either continuous (e.g., Brownian motion) or right continuous (e.g., Poisson process).
Lemma 1.3 If X is right continuous and adapted, then X is progressive.
Proof. If X is adapted, then
    (s, ω) ∈ [0, t] × Ω → Y_n(s, ω) ≡ X(([ns]+1)/n ∧ t, ω) = Σ_k X((k+1)/n ∧ t, ω) 1_{[k/n, (k+1)/n)}(s)
is B[0, t] × F_t-measurable. By the right continuity of X, Y_n(s, ω) → X(s, ω) on [0, t] × Ω, so (s, ω) ∈ [0, t] × Ω → X(s, ω) is B[0, t] × F_t-measurable and X is progressive.

Examples and properties
Define F_{t+} ≡ ∩_{s>t} F_s. {F_t} is right continuous if F_t = F_{t+} for all t ≥ 0.
If {F_t} is right continuous, then τ is a stopping time if and only if {τ < t} ∈ F_t for all t > 0.
Let X be cadlag and adapted. If K ⊂ E is closed,
    τ_K^h = inf{t : X(t) ∈ K or X(t−) ∈ K}
is a stopping time, but inf{t : X(t) ∈ K} may not be; however, if {F_t} is right continuous and complete, then for any B ∈ B(E), τ_B = inf{t : X(t) ∈ B} is an {F_t}-stopping time. This result is a special case of the debut theorem, a very technical result from set theory. Note that
    {ω : τ_B(ω) < t} = {ω : ∃ s < t, X(s, ω) ∈ B} = proj_Ω {(s, ω) : X(s, ω) ∈ B, s < t}.

Piecewise constant approximations
For ɛ > 0, let τ_0^ɛ = 0 and
    τ_{i+1}^ɛ = inf{t > τ_i^ɛ : r(X(t), X(τ_i^ɛ)) ∨ r(X(t−), X(τ_i^ɛ)) ≥ ɛ}.
Define X^ɛ(t) = X(τ_i^ɛ) for τ_i^ɛ ≤ t < τ_{i+1}^ɛ. Then r(X(t), X^ɛ(t)) ≤ ɛ.
If X is adapted to {F_t}, then the {τ_i^ɛ} are {F_t}-stopping times and X^ɛ is {F_t}-adapted. See Exercise 4.
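A toy sketch of this construction (the sampled path values and ɛ = 0.1 are made-up; a real path would be sampled on a grid): record a new anchor value each time the path has moved at least ɛ from the last anchor, and hold that anchor in between.

```python
def epsilon_skeleton(path, eps):
    """Piecewise-constant approximation: hold the last recorded value
    until the path moves at least eps away, then record a new one."""
    anchor = path[0]
    approx = []
    for x in path:
        if abs(x - anchor) >= eps:  # a time tau_i: moved >= eps from anchor
            anchor = x
        approx.append(anchor)
    return approx

# A made-up sampled path; the skeleton stays within eps of it.
path = [0.0, 0.04, 0.09, 0.13, 0.12, 0.30, 0.31]
approx = epsilon_skeleton(path, 0.1)
print(approx)
```

By construction, the step function is within ɛ of the path at every sample time.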

Martingales
An R-valued stochastic process M adapted to {F_t} is an {F_t}-martingale if
    E[M(t+r) | F_t] = M(t),  t, r ≥ 0.
Every martingale has finite quadratic variation:
    [M]_t = lim Σ (M(t ∧ t_{i+1}) − M(t ∧ t_i))^2,
where 0 = t_0 < t_1 < ⋯, t_i → ∞, and the limit is in probability as max(t_{i+1} − t_i) → 0. More precisely, for ɛ > 0 and T > 0,
    lim P{sup_{t ≤ T} |[M]_t − Σ (M(t ∧ t_{i+1}) − M(t ∧ t_i))^2| > ɛ} = 0.
For standard Brownian motion W, [W]_t = t.
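A Monte Carlo sketch of the last claim, [W]_t = t at t = 1: simulate W on a fine grid and sum the squared increments (the grid size and seed are arbitrary choices).

```python
import random

random.seed(0)

def bm_quadratic_variation(t=1.0, n=20000):
    """Sum of squared Brownian increments over a partition of [0, t]
    with mesh t/n; should be close to t for large n."""
    dt = t / n
    qv = 0.0
    for _ in range(n):
        dw = random.gauss(0.0, dt ** 0.5)
        qv += dw * dw
    return qv

qv = bm_quadratic_variation()
print(qv)  # close to 1.0
```

The variance of the estimate is 2t²/n, so with n = 20000 the result concentrates tightly around t.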

Optional sampling theorem
A real-valued process X is a submartingale if E[|X(t)|] < ∞, t ≥ 0, and
    E[X(t+s) | F_t] ≥ X(t),  t, s ≥ 0.
If τ_1 and τ_2 are stopping times, then
    E[X(t ∧ τ_2) | F_{τ_1}] ≥ X(t ∧ τ_1 ∧ τ_2).
If τ_2 is finite a.s., E[|X(τ_2)|] < ∞, and lim_{t→∞} E[|X(t)| 1_{{τ_2 > t}}] = 0, then
    E[X(τ_2) | F_{τ_1}] ≥ X(τ_1 ∧ τ_2).
Of course, if X is a martingale,
    E[X(t ∧ τ_2) | F_{τ_1}] = X(t ∧ τ_1 ∧ τ_2).

Square integrable martingales
Let M be a martingale satisfying E[M(t)^2] < ∞. Then M(t)^2 − [M]_t is a martingale. In particular, for t > s,
    E[(M(t) − M(s))^2] = E[[M]_t − [M]_s].

Doob's inequalities
Let X be a submartingale. Then for x > 0,
    P{sup_{s≤t} X(s) ≥ x} ≤ x^{-1} E[X(t)^+]
    P{inf_{s≤t} X(s) ≤ −x} ≤ x^{-1} (E[X(t)^+] − E[X(0)])
If X is nonnegative and α > 1, then
    E[sup_{s≤t} X(s)^α] ≤ (α/(α−1))^α E[X(t)^α].
Note that by Jensen's inequality, if M is a martingale, then |M| is a submartingale. In particular, if M is a square integrable martingale, then
    E[sup_{s≤t} |M(s)|^2] ≤ 4 E[M(t)^2].
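An empirical check of the last inequality for a simple ±1 random-walk martingale (path count and horizon are arbitrary): the ratio E[sup M²]/E[M(t)²] should not exceed 4.

```python
import random

random.seed(1)

def doob_ratio(nsteps=200, npaths=2000):
    """Estimate E[sup_k M_k^2] / E[M_n^2] for a +/-1 random walk."""
    sup_sq = 0.0
    end_sq = 0.0
    for _ in range(npaths):
        m, msup = 0, 0
        for _ in range(nsteps):
            m += random.choice((-1, 1))
            msup = max(msup, abs(m))
        sup_sq += msup * msup
        end_sq += m * m
    return (sup_sq / npaths) / (end_sq / npaths)

ratio = doob_ratio()
print(ratio)  # between 1 and 4
```

The ratio is at least 1 by definition of the supremum, and Doob's L² inequality caps its expectation-level analogue at 4.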

Stochastic integrals
Definition 1.4 For cadlag processes X, Y,
    X_− · Y(t) ≡ ∫_0^t X(s−) dY(s) = lim_{max|t_{i+1}−t_i|→0} Σ X(t_i)(Y(t_{i+1} ∧ t) − Y(t_i ∧ t))
whenever the limit exists in probability.
Sample paths of bounded variation: If Y is a finite variation process, the stochastic integral exists (apply the dominated convergence theorem) and
    ∫_0^t X(s−) dY(s) = ∫_{(0,t]} X(s−) α_Y(ds),
where α_Y is the signed measure with
    α_Y(0, t] = Y(t) − Y(0).

Existence for square integrable martingales
If M is a square integrable martingale, then
    E[(M(t+s) − M(t))^2 | F_t] = E[[M]_{t+s} − [M]_t | F_t].
For partitions {t_i} and {r_i},
    E[(Σ X(t_i)(M(t_{i+1} ∧ t) − M(t_i ∧ t)) − Σ X(r_i)(M(r_{i+1} ∧ t) − M(r_i ∧ t)))^2]
      = E[∫_0^t (X(t(s−)) − X(r(s−)))^2 d[M]_s]
      = E[∫_{(0,t]} (X(t(s−)) − X(r(s−)))^2 α_{[M]}(ds)],
where t(s) = t_i for s ∈ [t_i, t_{i+1}) and r(s) = r_i for s ∈ [r_i, r_{i+1}).

Cauchy property
Let X be bounded by a constant. As sup(t_{i+1} − t_i) + sup(r_{i+1} − r_i) → 0, the right side converges to zero by the dominated convergence theorem.
    M_X^{{t_i}}(t) ≡ Σ X(t_i)(M(t_{i+1} ∧ t) − M(t_i ∧ t))
is a square integrable martingale, so
    E[sup_{t≤T} (Σ X(t_i)(M(t_{i+1} ∧ t) − M(t_i ∧ t)) − Σ X(r_i)(M(r_{i+1} ∧ t) − M(r_i ∧ t)))^2]
      ≤ 4 E[∫_{(0,T]} (X(t(s−)) − X(r(s−)))^2 α_{[M]}(ds)].
A completeness argument gives existence of the stochastic integral, and the uniformity implies the integral is cadlag.

Local martingales
Definition 1.5 M is a local martingale if there exist stopping times {τ_n} satisfying τ_1 ≤ τ_2 ≤ ⋯ and τ_n → ∞ a.s. such that M^{τ_n}, defined by M^{τ_n}(t) = M(τ_n ∧ t), is a martingale. M is a local square-integrable martingale if the τ_n can be selected so that M^{τ_n} is square integrable.
{τ_n} is called a localizing sequence for M.
Remark 1.6 If {τ_n} is a localizing sequence for M, and {γ_n} is another sequence of stopping times satisfying γ_1 ≤ γ_2 ≤ ⋯, γ_n → ∞ a.s., then the optional sampling theorem implies that {τ_n ∧ γ_n} is localizing.

Local martingales with bounded jumps
Remark 1.7 If M is a continuous local martingale, then τ_n = inf{t : |M(t)| ≥ n} will be a localizing sequence. More generally, if for some constant c,
    |M(t) − M(t−)| ≤ c,
then
    τ_n = inf{t : |M(t)| ∨ |M(t−)| ≥ n}
will be a localizing sequence. Note that |M^{τ_n}| ≤ n + c, so M is local square integrable.

Semimartingales
Definition 1.8 Y is an {F_t}-semimartingale if and only if Y = M + V, where M is a local square integrable martingale with respect to {F_t} and V is an {F_t}-adapted finite variation process.
In particular, if X is cadlag and adapted and Y is a semimartingale, then ∫ X(s−) dY(s) exists.

Computing quadratic variations
Let ΔZ(t) = Z(t) − Z(t−).
Lemma 1.9 If Y is finite variation, then [Y]_t = Σ_{s≤t} ΔY(s)^2.
Lemma 1.10 If Y is a semimartingale, X is adapted, and Z(t) = ∫_0^t X(s−) dY(s), then
    [Z]_t = ∫_0^t X(s−)^2 d[Y]_s.
Proof. Check first for piecewise constant X and then approximate general X by piecewise constant processes.

Covariation
The covariation of Y_1, Y_2 is defined by
    [Y_1, Y_2]_t ≡ lim Σ_i (Y_1(t_{i+1} ∧ t) − Y_1(t_i ∧ t)) (Y_2(t_{i+1} ∧ t) − Y_2(t_i ∧ t)).

Itô's formula
If f : R → R is C^2 and Y is a semimartingale, then
    f(Y(t)) = f(Y(0)) + ∫_0^t f'(Y(s−)) dY(s) + ∫_0^t (1/2) f''(Y(s)) d[Y]^c_s
              + Σ_{s≤t} (f(Y(s)) − f(Y(s−)) − f'(Y(s−)) ΔY(s)),
where [Y]^c is the continuous part of the quadratic variation, given by
    [Y]^c_t = [Y]_t − Σ_{s≤t} ΔY(s)^2.

Itô's formula for vector-valued semimartingales
If f : R^m → R is C^2, Y_1, ..., Y_m are semimartingales, and Y = (Y_1, ..., Y_m), then, defining [Y_k, Y_l]^c_t = [Y_k, Y_l]_t − Σ_{s≤t} ΔY_k(s) ΔY_l(s),
    f(Y(t)) = f(Y(0)) + Σ_{k=1}^m ∫_0^t ∂_k f(Y(s−)) dY_k(s)
              + Σ_{k,l=1}^m ∫_0^t (1/2) ∂_k ∂_l f(Y(s−)) d[Y_k, Y_l]^c_s
              + Σ_{s≤t} (f(Y(s)) − f(Y(s−)) − Σ_{k=1}^m ∂_k f(Y(s−)) ΔY_k(s)).

Examples
For W standard Brownian motion,
    Z(t) = exp{W(t) − (1/2)t}
         = 1 + ∫_0^t Z(s) d(W(s) − (1/2)s) + (1/2) ∫_0^t Z(s) ds
         = 1 + ∫_0^t Z(s) dW(s).
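Since Z is the stochastic integral of Z against W, it is a martingale with E[Z(t)] = 1. A quick Monte Carlo check at t = 1 (sample size and seed arbitrary):

```python
import math
import random

random.seed(2)

def mean_Z(t=1.0, npaths=50000):
    """Average exp(W(t) - t/2) over independent Gaussian draws W(t)."""
    total = 0.0
    for _ in range(npaths):
        w = random.gauss(0.0, t ** 0.5)
        total += math.exp(w - 0.5 * t)
    return total / npaths

m = mean_Z()
print(m)  # close to 1.0
```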

2. Markov processes and generators
Time homogeneous Markov processes
Markov processes and semigroups
Semigroup generators
Martingale properties
Dynkin's identity
Strongly continuous contraction semigroups
Resolvent operator
Transition functions

Time homogeneous Markov processes
A process X is Markov with respect to a filtration {F_t} provided
    E[f(X(t+r)) | F_t] = E[f(X(t+r)) | X(t)]
for all t, r ≥ 0 and all f ∈ B(E).
The conditional expectation on the right can be written as g_{f,t,r}(X(t)) for a measurable function g_{f,t,r} depending on f, t, and r. If the function can be selected independently of t, that is,
    E[f(X(t+r)) | X(t)] = g_{f,r}(X(t)),
then the Markov process is time homogeneous. A time inhomogeneous Markov process can be made time homogeneous by including time in the state; that is, set Z(t) = (X(t), t).
Note that g_{f,r} will be linear in f, so we can write g_{f,r} = T(r)f, where T(r) is a linear operator on B(E) (the bounded measurable functions on E). The Markov property then implies T(r+s)f = T(r)T(s)f.

Markov processes and semigroups
{T(t) : B(E) → B(E), t ≥ 0} is an operator semigroup if T(t)T(s)f = T(t+s)f.
X is a Markov process with operator semigroup {T(t)} if and only if
    E[f(X(t+s)) | F_t^X] = T(s)f(X(t)),  t, s ≥ 0, f ∈ B(E).
Then
    T(s+r)f(X(t)) = E[f(X(t+s+r)) | F_t^X]
                  = E[E[f(X(t+s+r)) | F_{t+s}^X] | F_t^X]
                  = E[T(r)f(X(t+s)) | F_t^X]
                  = T(s)T(r)f(X(t)).

Semigroup and finite dimensional distributions
Lemma 2.1 If X is a Markov process corresponding to {T(t)}, then the finite dimensional distributions of X are determined by {T(t)} and the distribution of X(0).
Proof. For t_1 ≤ t_2,
    E[f_1(X(t_1)) f_2(X(t_2))] = E[f_1(X(t_1)) T(t_2 − t_1)f_2(X(t_1))]
                               = E[T(t_1)[f_1 T(t_2 − t_1)f_2](X(0))].

Semigroup generators
f is in the domain of the strong generator A of the semigroup if there exists g ∈ B(E) such that
    lim_{t→0+} ‖(T(t)f − f)/t − g‖ = 0.
Then Af ≡ g.
f is in the domain of the weak generator Ã (see Dynkin (1965)) if sup_{t>0} ‖t^{-1}(T(t)f − f)‖ < ∞ and there exists g ∈ B(E) such that
    lim_{t→0+} (T(t)f(x) − f(x))/t = g(x) ≡ Ãf(x),  x ∈ E.
The full generator  (see Ethier and Kurtz (1986)) is
     = {(f, g) ∈ B(E) × B(E) : T(t)f = f + ∫_0^t T(s)g ds, t ≥ 0},
and A ⊂ Ã ⊂ Â.

Martingale properties
Lemma 2.2 If X is a progressive Markov process corresponding to {T(t)} and (f, g) ∈ Â, then
    M_f(t) = f(X(t)) − f(X(0)) − ∫_0^t g(X(s)) ds
is a martingale (not necessarily right continuous).
Proof.
    E[M_f(t+r) − M_f(t) | F_t] = E[f(X(t+r)) − f(X(t)) − ∫_t^{t+r} g(X(s)) ds | F_t]
     = T(r)f(X(t)) − f(X(t)) − ∫_t^{t+r} T(s−t)g(X(t)) ds
     = 0.

Dynkin's identity
Change of notation: simply write Âf for g if (f, g) ∈ Â. If M_f is right continuous, the optional sampling theorem implies
    E[f(X(t ∧ τ))] = E[f(X(0))] + E[∫_0^{t∧τ} Âf(X(s)) ds].

Exit times
Assume D is open and X is right continuous. Let τ_D^h = inf{t : X(t) ∉ D or X(t−) ∉ D}. Write E_x for expectations under the condition that X(0) = x.
Suppose f is bounded and continuous, Âf = 0, and τ_D^h < ∞ a.s. Then
    f(x) = E_x[f(X(τ_D^h))].
If f is bounded and continuous, Âf(x) = −1, x ∈ D, f(y) = 0, y ∉ D, and P{X(τ_D^h) ∈ D} = 0, then
    f(x) = E_x[τ_D^h].

Exit distributions in one dimension
For a one-dimensional diffusion process,
    Lf(x) = (1/2) a(x) f''(x) + b(x) f'(x).
Find f such that Lf(x) = 0 (i.e., solve the linear first order differential equation for f'). Then f(X(t)) is a local martingale.
Fix a < b, and define τ = inf{t : X(t) ∉ (a, b)}. If sup_{a<x<b} |f(x)| < ∞, then E_x[f(X(t ∧ τ))] = f(x). Moreover, if τ < ∞ a.s.,
    E_x[f(X(τ))] = f(x).
Hence f(a) P_x(X(τ) = a) + f(b) P_x(X(τ) = b) = f(x), and therefore the probability of exiting the interval at the right endpoint is given by
    P_x(X(τ) = b) = (f(x) − f(a)) / (f(b) − f(a)).   (2.1)

Exit time
To find conditions under which P_x(τ < ∞) = 1, or more precisely, under which E_x[τ] < ∞, solve Lg = −1. Then
    g(X(t)) − g(X(0)) + t
is a local martingale, and if C = sup_{a<x<b} |g(x)| < ∞,
    E_x[g(X(t ∧ τ))] = g(x) − E_x[t ∧ τ],
so 2C ≥ E_x[t ∧ τ], hence 2C ≥ E_x[τ], which implies τ < ∞ a.s. By (2.1),
    E_x[τ] = g(x) − E_x[g(X(τ))]
           = g(x) − g(b) (f(x) − f(a))/(f(b) − f(a)) − g(a) (f(b) − f(x))/(f(b) − f(a)).
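For standard Brownian motion (a(x) = 1, b(x) = 0), f(x) = x solves Lf = 0 and g(x) = (x − a)(b − x) solves Lg = −1 with g = 0 at the endpoints, so (2.1) gives P_x(X(τ) = b) = (x − a)/(b − a) and E_x[τ] = (x − a)(b − x). A Monte Carlo sketch using an Euler walk (step size, starting point, and path count are arbitrary, and discretization introduces a small overshoot bias):

```python
import math
import random

random.seed(3)

def exit_stats(x=0.3, a=0.0, b=1.0, dt=1e-3, npaths=2000):
    """Simulate Brownian paths from x until they leave (a, b); return the
    fraction exiting at b and the average exit time."""
    hit_b = 0
    total_time = 0.0
    sd = math.sqrt(dt)
    for _ in range(npaths):
        y, t = x, 0.0
        while a < y < b:
            y += random.gauss(0.0, sd)
            t += dt
        hit_b += y >= b
        total_time += t
    return hit_b / npaths, total_time / npaths

p_b, mean_tau = exit_stats()
print(p_b, mean_tau)  # near 0.3 and 0.21
```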

Strongly continuous contraction semigroup
Semigroups associated with Markov processes are contraction semigroups, i.e., ‖T(t)f‖ ≤ ‖f‖, f ∈ B(E).
Let L = {f ∈ B(E) : lim_{t→0+} ‖T(t)f − f‖ = 0}. Then:
D(A) is dense in L.
‖λf − Af‖ ≥ λ‖f‖, f ∈ D(A), λ > 0.
R(λ − A) = L, λ > 0.

The resolvent
Lemma 2.3 For λ > 0 and h ∈ L,
    (λ − A)^{-1} h = ∫_0^∞ e^{−λt} T(t)h dt.
Proof. Let f = ∫_0^∞ e^{−λt} T(t)h dt. Then
    r^{-1}(T(r)f − f) = r^{-1}(∫_0^∞ e^{−λt} T(t+r)h dt − ∫_0^∞ e^{−λt} T(t)h dt)
     = r^{-1}((e^{λr} − 1) ∫_r^∞ e^{−λt} T(t)h dt − ∫_0^r e^{−λt} T(t)h dt)
     → λf − h,
so Af = λf − h, that is, (λ − A)f = h.

Hille-Yosida theorem
Theorem 2.4 The closure of A is the generator of a strongly continuous contraction semigroup on L if and only if:
D(A) is dense in L.
‖λf − Af‖ ≥ λ‖f‖, f ∈ D(A), λ > 0.
R(λ − A) is dense in L.
Proof. Necessity is discussed above. Assuming A is closed (otherwise, replace A by its closure), the conditions imply R(λ − A) = L, and the semigroup is obtained by
    T(t)f = lim_{n→∞} (I − (1/n) A)^{−[nt]} f.
(One must show that the right side is Cauchy.)

Probabilistic interpretation of the limit
If T(t) corresponds to a Markov process X, then
    (I − (1/n) A)^{-1} f(x) = E_x[f(X(Δ/n))],
where Δ is a unit exponential independent of X, and
    (I − (1/n) A)^{−[nt]} f(x) = E_x[f(X((1/n) Σ_{i=1}^{[nt]} Δ_i))],
where the Δ_i are independent unit exponentials.
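A sketch of the first identity for standard Brownian motion with f(x) = x² (my choice of test function): since Af = f''/2, the function g(y) = y² + 1/n satisfies (I − A/n)g = f, so E_x[f(X(Δ/n))] should equal x² + 1/n. The parameters below are arbitrary.

```python
import random

random.seed(5)

def resolvent_mc(x=0.5, n=10, npaths=40000):
    """Monte Carlo estimate of E_x[f(X(Delta/n))] for f(y) = y^2,
    X Brownian motion started at x, Delta a unit exponential."""
    total = 0.0
    for _ in range(npaths):
        tau = random.expovariate(1.0) / n     # exponential clock Delta/n
        w = random.gauss(0.0, tau ** 0.5)     # Brownian increment over tau
        total += (x + w) ** 2
    return total / npaths

val = resolvent_mc()
print(val)  # close to 0.5**2 + 1/10 = 0.35
```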

Transition functions
Definition 2.5 P(t, x, Γ), defined on [0, ∞) × E × B(E), is a transition function if P(·, ·, Γ) is Borel measurable for each Γ ∈ B(E), P(t, x, ·) ∈ P(E) for each (t, x) ∈ [0, ∞) × E, and P satisfies the Chapman-Kolmogorov relation
    P(t+s, x, Γ) = ∫_E P(s, y, Γ) P(t, x, dy).
A Markov process X corresponds to a transition function P provided
    P{X(t) ∈ Γ | X(0) = x} = P(t, x, Γ).
T(t)f(x) = ∫_E f(y) P(t, x, dy) defines a semigroup on B(E).
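The Chapman-Kolmogorov relation can be checked exactly for the two-state chain with jump rates a (0 → 1) and b (1 → 0), whose transition matrix has the closed form below; the rates a = 1, b = 2 and times are arbitrary choices.

```python
import math

def P(t, a=1.0, b=2.0):
    """Transition matrix of the two-state chain with rates a (0->1), b (1->0):
    p01(t) = a/(a+b) * (1 - exp(-(a+b)t)), and symmetrically for p10."""
    e = math.exp(-(a + b) * t)
    p01 = a / (a + b) * (1 - e)
    p10 = b / (a + b) * (1 - e)
    return [[1 - p01, p01], [p10, 1 - p10]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Chapman-Kolmogorov: P(t+s) = P(t) P(s), here with t = 0.3, s = 0.4.
lhs = P(0.7)
rhs = matmul(P(0.3), P(0.4))
print(lhs, rhs)
```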

The resolvent for the full generator
Lemma 2.6 Suppose T(t) : B(E) → B(E) is given by a transition function, T(t)f(x) = ∫_E f(y) P(t, x, dy). For h ∈ B(E), define
    f(x) = ∫_0^∞ e^{−λt} T(t)h(x) dt.
Then (f, λf − h) ∈ Â.
Proof.
    ∫_0^t T(s)(λf − h) ds = λ ∫_0^t ∫_0^∞ e^{−λu} T(s+u)h du ds − ∫_0^t T(s)h ds
     = λ ∫_0^t e^{λs} ∫_s^∞ e^{−λu} T(u)h du ds − ∫_0^t T(s)h ds
     = e^{λt} ∫_t^∞ e^{−λu} T(u)h du − ∫_0^∞ e^{−λu} T(u)h du
     = T(t)f − f.

A convergence lemma
Lemma 2.7 Let E be compact and suppose {f_k} ⊂ C(E) separates points. If {x_n} satisfies lim_n f_k(x_n) exists for every f_k, then lim_n x_n exists.
Proof. If x and x' are limit points of {x_n}, we must have f_k(x) = f_k(x') for all k. But then x = x', since {f_k} separates points.

Feller processes
Lemma 2.8 Assume E is compact, T(t) : C(E) → C(E), and
    lim_{t→0} T(t)f(x) = f(x),  x ∈ E, f ∈ C(E).
If X is a Markov process corresponding to {T(t)}, then X has a modification with cadlag sample paths.
Proof. For h ∈ C(E), f = R_λ h ≡ ∫_0^∞ e^{−λt} T(t)h dt ∈ C(E), so setting g = λf − h,
    f(X(t)) − f(X(0)) − ∫_0^t g(X(s)) ds
is a martingale. By the upcrossing inequality, there exists a set Ω_f ⊂ Ω with P(Ω_f) = 1 such that for ω ∈ Ω_f, lim_{s→t+, s∈Q} f(X(s, ω)) exists for each t ≥ 0 and lim_{s→t−, s∈Q} f(X(s, ω)) exists for each t > 0.
Suppose {h_k, k ≥ 1} ⊂ C(E) is dense. Then {R_λ h_k : λ ∈ Q ∩ (0, ∞), k ≥ 1} separates points in E.

3. Martingale problems
Definition
Equivalent formulations
Uniqueness of 1-dimensional distributions implies uniqueness of fdd
Uniqueness under the Hille-Yosida conditions
Markov property
Quasi-left continuity
kurtz/franklect.htm

Martingale problems: Definition
E: state space (a complete, separable metric space)
A: generator (a linear operator with domain and range in B(E))
µ ∈ P(E)
X is a solution of the martingale problem for (A, µ) if and only if µ = P X(0)^{-1} and there exists a filtration {F_t} such that
    M_f(t) = f(X(t)) − ∫_0^t Af(X(s)) ds
is an {F_t}-martingale for each f ∈ D(A).

Examples of generators
Standard Brownian motion (E = R^d):
    Af = (1/2) Δf,  D(A) = C_c^2(R^d)
Poisson process (E = {0, 1, 2, ...}, D(A) = B(E)):
    Af(k) = λ(f(k+1) − f(k))
Pure jump process (E arbitrary):
    Af(x) = λ(x) ∫_E (f(y) − f(x)) µ(x, dy)
Diffusion (E = R^d, D(A) = C_c^2(R^d)):
    Af(x) = (1/2) Σ_{i,j} a_{ij}(x) ∂²f(x)/∂x_i∂x_j + Σ_i b_i(x) ∂f(x)/∂x_i   (3.1)
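A numerical sketch of the martingale property for the Poisson generator, with my choice of test function f(k) = k²: then Af(k) = λ(2k + 1), so M(t) = N(t)² − ∫₀ᵗ λ(2N(s)+1) ds should have mean zero. The rate λ = 2 and sample sizes are arbitrary.

```python
import random

random.seed(4)

def mean_Mf(lam=2.0, t=1.0, npaths=20000):
    """Average M(t) = N(t)^2 - int_0^t lam*(2N(s)+1) ds over exact Poisson
    paths built from exponential inter-arrival times."""
    total = 0.0
    for _ in range(npaths):
        n, s = 0, 0.0
        integral = 0.0
        while True:
            gap = random.expovariate(lam)
            if s + gap > t:
                integral += (t - s) * lam * (2 * n + 1)
                break
            integral += gap * lam * (2 * n + 1)
            s += gap
            n += 1
        total += n * n - integral
    return total / npaths

m = mean_Mf()
print(m)  # close to 0
```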

Equivalent formulations
Suppose, without loss of generality, that D(A) is closed under addition of constants (A1 = 0). Then the following are equivalent:
a) X is a solution of the martingale problem for (A, µ).
b) P X(0)^{-1} = µ and there exists a filtration {F_t} such that for each λ > 0 and each f ∈ D(A),
    e^{−λt} f(X(t)) + ∫_0^t e^{−λs} (λf(X(s)) − Af(X(s))) ds
is an {F_t}-martingale.
c) P X(0)^{-1} = µ and there exists a filtration {F_t} such that for each f ∈ D(A) with inf_{x∈E} f(x) > 0,
    R_f(t) = (f(X(t))/f(X(0))) exp{−∫_0^t Af(X(s))/f(X(s)) ds}
is an {F_t}-martingale.

Proof. For part (c), assume D(A) ⊂ C_b(E) and X is right continuous. Then
    f(X(t)) exp{−∫_0^t Af(X(s))/f(X(s)) ds}
     = f(X(0)) + ∫_0^t exp{−∫_0^r Af(X(s))/f(X(s)) ds} df(X(r))
       − ∫_0^t f(X(r)) (Af(X(r))/f(X(r))) exp{−∫_0^r Af(X(s))/f(X(s)) ds} dr
     = f(X(0)) + ∫_0^t exp{−∫_0^r Af(X(s))/f(X(s)) ds} dM_f(r),
so if M_f is a martingale, then R_f is a martingale.

Conversely, if R_f is a martingale, then
    M_f(t) = f(X(0)) + f(X(0)) ∫_0^t exp{∫_0^r Af(X(s))/f(X(s)) ds} dR_f(r)
is a martingale.
Note that considering only f that are strictly positive is no restriction, since we can always add a constant to f.

Conditions for the martingale property
Lemma 3.1 For (f, g) ∈ A, h_1, ..., h_m ∈ C(E), and t_1 ≤ t_2 ≤ ⋯ ≤ t_{m+1}, let
    η(Y) ≡ η(Y, (f, g), {h_i}, {t_i}) = (f(Y(t_{m+1})) − f(Y(t_m)) − ∫_{t_m}^{t_{m+1}} g(Y(s)) ds) Π_{i=1}^m h_i(Y(t_i)).
Then Y is a solution of the martingale problem for A if and only if E[η(Y)] = 0 for all such η.
The assertion that Y is a solution of the martingale problem for A is an assertion about the finite dimensional distributions of Y.

Uniqueness of 1-dimensional distributions implies uniqueness of fdd
Theorem 3.2 If any two solutions of the martingale problem for A satisfying P X_1(0)^{-1} = P X_2(0)^{-1} also satisfy P X_1(t)^{-1} = P X_2(t)^{-1} for all t ≥ 0, then the f.d.d. of a solution X are uniquely determined by P X(0)^{-1}.
Proof. If X is a solution of the MGP for A and X_a(t) = X(a + t), then X_a is a solution of the MGP for A. Furthermore, for positive f_i ∈ B(E) and t_1 < t_2 < ⋯ < t_m = a,
    Q(B) = E[1_B(X_a) Π_{i=1}^m f_i(X(t_i))] / E[Π_{i=1}^m f_i(X(t_i))]
defines a probability measure on F = σ(X_a(s), s ≥ 0), and under Q, X_a is a solution of the martingale problem for A with initial distribution
    µ(Γ) = E[1_Γ(X(a)) Π_{i=1}^m f_i(X(t_i))] / E[Π_{i=1}^m f_i(X(t_i))].

Proceeding by induction, fix m and suppose E[Π_{i=1}^m f_i(X(t_i))] is uniquely determined for all t_1 < t_2 < ⋯ < t_m and all f_i. Then µ is uniquely determined, and the one dimensional distributions of X_a under Q are uniquely determined; that is,
    E[f_{m+1}(X(t_{m+1})) Π_{i=1}^m f_i(X(t_i))] / E[Π_{i=1}^m f_i(X(t_i))]
is uniquely determined for t_{m+1} ≥ a. Since a is arbitrary and the denominator is uniquely determined, the numerator is uniquely determined, completing the induction step.

Adding a time component
Lemma 3.3 Suppose that g(t, x) has the property that g(t, ·) ∈ D(A) for each t, and that g, ∂_t g, and Ag are all bounded in t and x and are continuous functions of t. If X is a solution of the martingale problem for A, then
    g(t, X(t)) − ∫_0^t (∂_s g(s, X(s)) + Ag(s, X(s))) ds
is a martingale.

Proof.
    E[g(t+r, X(t+r)) − g(t, X(t)) | F_t]
     = Σ_k E[g(t+s_{k+1}, X(t+s_{k+1})) − g(t+s_k, X(t+s_k)) | F_t]
     = Σ_k E[g(t+s_{k+1}, X(t+s_{k+1})) − g(t+s_{k+1}, X(t+s_k)) | F_t]
       + Σ_k E[g(t+s_{k+1}, X(t+s_k)) − g(t+s_k, X(t+s_k)) | F_t]
     = Σ_k E[∫_{t+s_k}^{t+s_{k+1}} Ag(t+s_{k+1}, X(r)) dr | F_t]
       + Σ_k E[∫_{t+s_k}^{t+s_{k+1}} ∂_r g(r, X(t+s_k)) dr | F_t]
To complete the proof, see Exercise 14.

Uniqueness under the Hille-Yosida conditions
Theorem 3.4 If A satisfies the conditions of Theorem 2.4 and D(A) is separating, then there is at most one solution to the martingale problem.
Proof. If X is a solution of the martingale problem for A, then by Lemma 3.3, for each t > 0 and each f ∈ D(A), T(t−s)f(X(s)), 0 ≤ s ≤ t, is a martingale. This martingale property extends to all f in the closure of D(A). Consequently,
    E[f(X(t)) | F_s] = T(t−s)f(X(s)),  and  E[f(X(t))] = E[T(t)f(X(0))],
which determines the one dimensional distributions, implying uniqueness.

Markov property
Theorem 3.5 Suppose the conclusion of Theorem 3.2 holds. If X is a solution of the martingale problem for A with respect to a filtration {F_t}, then X is Markov with respect to {F_t}.
Proof. Assuming that P(F) > 0, let F ∈ F_r, and for B ∈ F, define
    P_1(B) = E[1_F E[1_B | F_r]] / P(F),   P_2(B) = E[1_F E[1_B | X(r)]] / P(F).
Define Y(t) = X(r+t). Then
    P_1{Y(0) ∈ Γ} = E[1_F E[1_{{Y(0)∈Γ}} | F_r]] / P(F) = E[1_F 1_{{X(r)∈Γ}}] / P(F)
     = E[1_F E[1_{{X(r)∈Γ}} | X(r)]] / P(F) = E[1_F E[1_{{X(r)∈Γ}} | F_r]] / P(F) = P_2{Y(0) ∈ Γ}.
Check that E^{P_1}[η(Y)] = E^{P_2}[η(Y)] = 0 for all η(Y) as in Lemma 3.1.

Therefore
    E[1_F E[f(X(r+t)) | F_r]] = P(F) E^{P_1}[f(Y(t))] = P(F) E^{P_2}[f(Y(t))] = E[1_F E[f(X(r+t)) | X(r)]].
Since F ∈ F_r is arbitrary,
    E[f(X(r+t)) | F_r] = E[f(X(r+t)) | X(r)],
and the Markov property follows.

Cadlag versions
Lemma 3.6 Suppose E is compact and A ⊂ C(E) × B(E). If D(A) is separating, then any solution of the martingale problem for A has a cadlag modification.
Proof. See Lemma 2.8.

Quasi-left continuity
X is quasi-left continuous if and only if for each sequence of stopping times τ_1 ≤ τ_2 ≤ ⋯ such that τ ≡ lim_n τ_n < ∞ a.s.,
    lim_n X(τ_n) = X(τ)  a.s.
Lemma 3.7 Let A ⊂ C(E) × B(E), and suppose that D(A) is separating. Let X be a cadlag solution of the martingale problem for A. Then X is quasi-left continuous.
Proof. For (f, g) ∈ A,
    lim_n f(X(τ_n ∧ t)) = lim_n E[f(X(τ ∧ t)) − ∫_{τ_n∧t}^{τ∧t} g(X(s)) ds | F_{τ_n}]
     = E[f(X(τ ∧ t)) | ∨_n F_{τ_n}].

Since X is cadlag,
    lim_n X(τ_n ∧ t) = X(τ ∧ t)   if τ_n ∧ t = τ ∧ t for n sufficiently large,
    lim_n X(τ_n ∧ t) = X(τ ∧ t−)  if τ_n ∧ t < τ ∧ t for all n.
To complete the proof, see Exercise 6.

Continuity of diffusion processes
Lemma 3.8 Suppose E = R^d and
    Af(x) = (1/2) Σ_{i,j} a_{ij}(x) ∂²f(x)/∂x_i∂x_j + Σ_i b_i(x) ∂f(x)/∂x_i,  D(A) = C_c^2(R^d).
If X is a solution of the martingale problem for A, then X has a modification that is cadlag in R^d ∪ {∞}. If X is cadlag, then X is continuous.
Proof. The existence of a cadlag modification follows by Lemma 3.6. To show continuity, it is enough to show that for f ∈ C_c^∞(R^d), f ∘ X is continuous. To show f ∘ X is continuous, it is enough to show
    lim_{max|t_{i+1}−t_i|→0} Σ (f(X(t_{i+1} ∧ t)) − f(X(t_i ∧ t)))^4 = 0.

From the martingale properties,
    E[(f(X(t+h)) − f(X(t)))^4]
     = ∫_t^{t+h} E[Af^4(X(s)) − 4f(X(t))Af^3(X(s)) + 6f^2(X(t))Af^2(X(s)) − 4f^3(X(t))Af(X(s))] ds.
Check that
    Af^4(x) − 4f(x)Af^3(x) + 6f^2(x)Af^2(x) − 4f^3(x)Af(x) = 0.   (3.2)

4. Existence of solutions and forward equations
Conditions for relative compactness for cadlag processes
Forward equations
Uniqueness of the forward equation under a range condition
Construction of a solution of a martingale problem
Stationary distributions
Echeverria's theorem
Equivalence of the forward equation and the MGP

Conditions for relative compactness
Let (E, r) be a complete, separable metric space, and define q(x, y) = 1 ∧ r(x, y) (so q is an equivalent metric under which E is complete). Let {X_n} be a sequence of cadlag processes with values in E, X_n adapted to {F_t^n}.
Theorem 4.1 Assume the following:
a) For t ∈ T_0, a dense subset of [0, ∞), {X_n(t)} is relatively compact.
b) For T > 0, there exist β > 0 and random variables γ_n(δ, T) such that for 0 ≤ t ≤ T, 0 ≤ u ≤ δ,
    E[q^β(X_n(t+u), X_n(t)) | F_t^n] ≤ E[γ_n(δ, T) | F_t^n]
and lim_{δ→0} lim sup_n E[γ_n(δ, T)] = 0.
Then {X_n} is relatively compact in D_E[0, ∞).

Relative compactness for martingale problems
Theorem 4.2 Let E be compact, and let {A_n} be a sequence of generators. Suppose there exists a dense subset D ⊂ C(E) such that for each f ∈ D there exist f_n ∈ D(A_n) such that lim_n ‖f_n − f‖ = 0 and C_f ≡ sup_n ‖A_n f_n‖ < ∞. If {X_n} is a sequence of cadlag processes in E such that for each n, X_n is a solution of the martingale problem for A_n, then {X_n} is relatively compact in D_E[0, ∞).

Proof. Let f ∈ D, and for δ > 0, let h_δ ∈ D be such that lim sup_{δ→0} δ C_{h_δ} < ∞ and lim_{δ→0} ‖h_δ − f²‖ = 0. Then for u ≤ δ,
    E[(f(X_n(t+u)) − f(X_n(t)))² | F_t^n]
     = E[f²(X_n(t+u)) − f²(X_n(t)) | F_t^n] − 2f(X_n(t)) E[f(X_n(t+u)) − f(X_n(t)) | F_t^n]
     ≤ 2‖f² − h_δ‖ + 4‖f‖ ‖f − f_n‖ + 2‖f‖ C_f δ + C_{h_δ} δ,
which implies relative compactness for {f(X_n)}. Since the collection of such f is dense in C(E), relative compactness of {X_n} follows.

The forward equation for a general Markov process
Let A ⊂ B(E) × B(E). If X is a solution of the martingale problem for A and ν_t is the distribution of X(t), then
    0 = E[f(X(t)) − f(X(0)) − ∫_0^t Af(X(s)) ds] = ν_t f − ν_0 f − ∫_0^t ν_s Af ds,
so
    ν_t f = ν_0 f + ∫_0^t ν_s Af ds,  f ∈ D(A).   (4.1)
(4.1) gives the weak form of the forward equation.
Definition 4.3 A measurable mapping t ∈ [0, ∞) → ν_t ∈ P(E) is a solution of the forward equation for A if (4.1) holds for all f ∈ D(A).

Fokker-Planck equation
Let Af = (1/2) a(x) f''(x) + b(x) f'(x), f ∈ D(A) = C_c^2(R). If ν_t has a C^2 density ν(t, x), then
    ν_t Af = ∫ Af(x) ν(t, x) dx = ∫ f(x) ((1/2) ∂²/∂x² (a(x)ν(t, x)) − ∂/∂x (b(x)ν(t, x))) dx,
and the forward equation is equivalent to
    ∂_t ν(t, x) = (1/2) ∂²/∂x² (a(x)ν(t, x)) − ∂/∂x (b(x)ν(t, x)),
known as the Fokker-Planck equation in the physics literature.
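A numerical sketch for a stationary example (my choice of coefficients): for the Ornstein-Uhlenbeck generator with a(x) = σ² and b(x) = −θx, the stationary Fokker-Planck equation reads (σ²/2)ν'' + θ(xν)' = 0, and the Gaussian density with variance σ²/(2θ) solves it. The check below uses finite differences with arbitrary parameters σ² = 1, θ = 0.5.

```python
import math

sig2, th = 1.0, 0.5
var = sig2 / (2 * th)  # stationary variance of the OU process

def nu(x):
    """Gaussian density with mean 0 and variance sig2/(2*th)."""
    return math.exp(-x * x / (2 * var)) / math.sqrt(2 * math.pi * var)

def fp_residual(x, h=1e-3):
    """Finite-difference value of (sig2/2) nu'' + th * (x*nu)' at x."""
    second = (nu(x + h) - 2 * nu(x) + nu(x - h)) / (h * h)
    drift = ((x + h) * nu(x + h) - (x - h) * nu(x - h)) / (2 * h)
    return 0.5 * sig2 * second + th * drift

res = max(abs(fp_residual(x)) for x in (-1.0, -0.3, 0.2, 1.5))
print(res)  # small, limited by finite-difference error
```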

Uniqueness for the forward equation

Lemma 4.4 If $\{\nu_t\}$ and $\{\mu_t\}$ are solutions of the forward equation for $A$ with $\nu_0 = \mu_0$ and $\mathcal{R}(\lambda - A)$ is separating for each $\lambda > 0$, then $\int_0^\infty e^{-\lambda t}\nu_t\,dt = \int_0^\infty e^{-\lambda t}\mu_t\,dt$ and $\mu_t = \nu_t$ for almost every $t$. Consequently, if $\nu$ and $\mu$ are weakly right continuous or if $\mathcal{D}(A)$ is separating, $\nu_t = \mu_t$ for all $t$.

Proof.

$\lambda\int_0^\infty e^{-\lambda t}\nu_t f\,dt = \nu_0 f + \lambda\int_0^\infty e^{-\lambda t}\int_0^t \nu_s Af\,ds\,dt = \nu_0 f + \lambda\int_0^\infty\int_s^\infty e^{-\lambda t}\,dt\,\nu_s Af\,ds = \nu_0 f + \int_0^\infty e^{-\lambda s}\nu_s Af\,ds$,

and hence

$\int_0^\infty e^{-\lambda t}\nu_t(\lambda f - Af)\,dt = \nu_0 f$.

Since $\mathcal{R}(\lambda - A)$ is separating, the result holds.

The semigroup and the forward equation

If $\nu_0 \in \mathcal{P}(E)$, then

$\nu_0 T(t)f = \nu_0 f + \int_0^t \nu_0 T(s)Af\,ds$,

and if $\{T(t)\}$ is given by a transition function, $\nu_t = \int_E P(t, x, \cdot)\,\nu_0(dx)$ satisfies

$\nu_t f = \nu_0 f + \int_0^t \nu_s Af\,ds, \quad f \in \mathcal{D}(A)$.

If $A$ is the strong generator and $\mathcal{D}(A)$ is separating, then uniqueness follows by Lemma 4.4.

Dissipativity and the positive maximum principle

For $\lambda > 0$,

$\|\lambda f - t^{-1}(T(t)f - f)\| \ge (\lambda + t^{-1})\|f\| - t^{-1}\|T(t)f\| \ge \lambda\|f\|$,

so $A$ is dissipative: $\|\lambda f - Af\| \ge \lambda\|f\|$, $\lambda > 0$.

Definition 4.5 $A$ satisfies the positive maximum principle if $f(x) = \|f\|$ implies $Af(x) \le 0$.

Lemma 4.6 The weak generator for a Markov process satisfies the positive maximum principle.

Lemma 4.7 Let $E$ be compact and $\mathcal{D}(A) \subset C(E)$. If $A$ satisfies the positive maximum principle, then $A$ is dissipative.

Digression on the proof of the Hille-Yosida theorem

The conditions of the Hille-Yosida theorem 2.4 imply that $(I - n^{-1}A)^{-1}$ exists and $\|(I - n^{-1}A)^{-1}f\| \le \|f\|$. In addition,

$\|(I - n^{-1}A)^{-1}f - f\| \le \frac{1}{n}\|Af\|$.

One proof of the Hille-Yosida theorem is to show that $T_n(t)f = (I - n^{-1}A)^{-[nt]}f$ is a Cauchy sequence and to observe that

$T_n(t)f = f + \frac{1}{n}\sum_{k=1}^{[nt]}(I - n^{-1}A)^{-k}Af = f + \int_0^{[nt]/n} T_n(s + n^{-1})Af\,ds$.
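The backward-Euler iterate $T_n(t) = (I - n^{-1}A)^{-[nt]}$ can be checked numerically when the state space is finite and $A$ is simply a rate matrix. A sketch (the 2-state generator below is a hypothetical example, not from the notes):

```python
import numpy as np

# Numerical illustration of T_n(t) = (I - A/n)^{-[nt]} converging to e^{tA}
# for a hypothetical 2-state generator (rate matrix) A.
A = np.array([[-1.0, 1.0],
              [2.0, -2.0]])
I2 = np.eye(2)
t = 1.5

def expm(M, terms=60):
    """Matrix exponential of a small matrix via its Taylor series."""
    out, term = np.eye(2), np.eye(2)
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

exact = expm(t * A)
for n in (10, 100, 1000):
    Tn = np.linalg.matrix_power(np.linalg.inv(I2 - A / n), int(n * t))
    print(n, np.max(np.abs(Tn - exact)))   # error decreases roughly like 1/n
```

Each resolvent application is a contraction, which is why the powers remain stable for every $n$.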

Probabilistic interpretation

$(n - A)^{-1} = \int_0^\infty e^{-nt}T(t)\,dt$ and $(I - n^{-1}A)^{-1} = \int_0^\infty n e^{-nt}T(t)\,dt$.

If $\{T(t)\}$ is given by a transition function, then $\eta_n(x, dy) = \int_0^\infty n e^{-nt}P(t, x, dy)\,dt$ is a discrete time transition function. If $\{Y_k^n\}$ is a Markov chain with transition function $\eta_n$, then

$E[f(Y_k^n)] = E[(I - n^{-1}A)^{-k}f(Y_0^n)] = E\left[f\left(X\left(\tfrac{\Delta_1 + \cdots + \Delta_k}{n}\right)\right)\right]$,

where the $\Delta_i$ are independent unit exponentials, and $X_n(t) = Y_{[nt]}^n$ can be written as

$X_n(t) = X\left(\frac{1}{n}\sum_{k=1}^{[nt]}\Delta_k\right)$.

Construction of a solution of a martingale problem

Theorem 4.8 Assume that $E$ is compact, $A \subset C(E) \times C(E)$, $(1, 0) \in A$, $A$ is linear, and $\mathcal{D}(A)$ is dense in $C(E)$. Assume that $A$ satisfies the positive maximum principle (and is consequently dissipative). Then there exists a transition function $\eta_n$ such that

$\int_E f(y)\,\eta_n(x, dy) = (I - n^{-1}A)^{-1}f(x)$ (4.2)

for all $f \in \mathcal{R}(I - n^{-1}A)$.

Proof. Note that $\mathcal{D}((I - n^{-1}A)^{-1}) = \mathcal{R}(I - n^{-1}A)$. For each $x \in E$, $\eta_x h = (I - n^{-1}A)^{-1}h(x)$ is a linear functional on $\mathcal{R}(I - n^{-1}A)$. Since $h = f - \frac{1}{n}Af$ for some $f \in \mathcal{D}(A)$ and $A$ satisfies the positive maximum principle, $|\eta_x h| = |f(x)| \le \|h\|$ and $\eta_x 1 = 1$. The Hahn-Banach theorem implies $\eta_x$ extends to a positive linear functional on $C(E)$ (hence a probability measure).

$\Gamma_x = \{\eta \in \mathcal{P}(E) : \eta f = (I - n^{-1}A)^{-1}f(x),\ f \in \mathcal{R}(I - n^{-1}A)\}$

is closed and $\limsup_{y \to x}\Gamma_y \subset \Gamma_x$. The measurable selection theorem implies the existence of $\eta_n$ satisfying (4.2).

Approximating Markov chain

For $\eta_n$ as in (4.2), define

$A_n f = n\left(\int_E f(y)\,\eta_n(x, dy) - f(x)\right)$.

Then $A_n$ is the generator of a pure-jump Markov process of the form $X_n(t) = Y_{N_n(t)}^n$, where $\{Y_k^n\}$ is a Markov chain with transition function $\eta_n$ and $N_n$ is a Poisson process with parameter $n$. Then

$f(X_n(t)) - f(X_n(0)) - \int_0^t A_n f(X_n(s))\,ds$

is a martingale, and in particular, if $f \in \mathcal{D}(A)$ and $f_n = f - n^{-1}Af$, then $A_n f_n = Af$ and

$M_f^n(t) = f_n(X_n(t)) - f_n(X_n(0)) - \int_0^t Af(X_n(s))\,ds$

is a martingale.
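The process $X_n(t) = Y^n_{N_n(t)}$ is easy to simulate once $\eta_n$ is given: run the chain for a Poisson$(nt)$ number of steps. A simulation sketch, where the $2 \times 2$ kernel standing in for $\eta_n$ is a hypothetical example, not from the notes:

```python
import numpy as np

# Simulation of X_n(t) = Y^n_{N_n(t)}: a Markov chain run for a Poisson(n*t)
# number of steps.  The 2x2 kernel eta is a hypothetical stand-in for eta_n.
rng = np.random.default_rng(0)

n = 50                                   # Poisson jump rate
eta = np.array([[0.9, 0.1],              # transition function eta_n
                [0.2, 0.8]])

def sample_X(t, x0=0):
    """Sample X_n(t) by running Y for N_n(t) ~ Poisson(n*t) steps."""
    x = x0
    for _ in range(rng.poisson(n * t)):
        x = 1 - x if rng.random() < eta[x, 1 - x] else x
    return x

samples = np.array([sample_X(2.0) for _ in range(2000)])
# the stationary distribution of eta is (2/3, 1/3), so the fraction of
# samples in state 1 should be near 1/3 once the chain has mixed
print(samples.mean())
```

By $t = 2$ the embedded chain has taken about 100 steps, so the empirical distribution is close to the chain's stationary distribution.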

Theorem 4.2 implies $\{X_n\}$ is relatively compact, and (see Lemma 3.1), if $X$ is a limit point of $\{X_n\}$, for $t_1 < \cdots < t_{m+1}$,

$0 = \lim_{n \to \infty} E\left[\left(f_n(X_n(t_{m+1})) - f_n(X_n(t_m)) - \int_{t_m}^{t_{m+1}} Af(X_n(s))\,ds\right)\prod_{i=1}^m h_i(X_n(t_i))\right]$
$= E\left[\left(f(X(t_{m+1})) - f(X(t_m)) - \int_{t_m}^{t_{m+1}} Af(X(s))\,ds\right)\prod_{i=1}^m h_i(X(t_i))\right]$,

at least if the $\{t_i\}$ are selected outside the at most countable set of times at which $X$ has a fixed point of discontinuity. Since $X$ is right continuous, the right side is in fact zero for all choices of $\{t_i\}$, so $X$ is a solution of the martingale problem for $A$.

Stationary distributions

Definition 4.9 A stochastic process $X$ is stationary if the distribution of $X_t \equiv X(t + \cdot)$ does not depend on $t$.

Definition 4.10 $\mu$ is a stationary distribution for the martingale problem for $A$ if there exists a stationary solution of the martingale problem for $A$ with marginal distribution $\mu$.

Theorem 4.11 Suppose that $\mathcal{D}(A)$ and $\mathcal{R}(\lambda - A)$ are separating and that for each $\nu \in \mathcal{P}(E)$, there exists a solution of the martingale problem for $(A, \nu)$. If $\mu \in \mathcal{P}(E)$ satisfies

$\int_E Af\,d\mu = 0, \quad f \in \mathcal{D}(A)$,

then $\mu$ is a stationary distribution for $A$. (See Lemma 4.4.)

Echeverria's theorem

Theorem 4.12 Let $E$ be compact, and let $A \subset C(E) \times C(E)$ be linear and satisfy the positive maximum principle. Suppose that $\mathcal{D}(A)$ is closed under multiplication and dense in $C(E)$. If $\mu \in \mathcal{P}(E)$ satisfies

$\int_E Af\,d\mu = 0, \quad f \in \mathcal{D}(A)$,

then $\mu$ is a stationary distribution of $A$.

Example 4.13 $E = [0, 1]$, $Af(x) = \frac{1}{2}f''(x)$,

$\mathcal{D}(A) = \{f \in C^2[0, 1] : f'(0) = f'(1) = 0,\ f'(\tfrac{1}{3}) = f'(\tfrac{2}{3})\}$.

Let $\mu(dx) = 3\,I_{[\frac{1}{3}, \frac{2}{3}]}(x)\,dx$.

Outline of proof

In the proof of Theorem 4.8, we constructed $\eta_n$ so that

$\int f_n(y)\,\eta_n(x, dy) = \int \left(f(y) - \tfrac{1}{n}Af(y)\right)\eta_n(x, dy) = f(x)$.

Consequently,

$\int_E\int_E f_n(y)\,\eta_n(x, dy)\,\mu(dx) = \int_E f(x)\,\mu(dx) = \int_E \left(f(x) - \tfrac{1}{n}Af(x)\right)\mu(dx) = \int_E f_n(x)\,\mu(dx)$.

For $F(x, y) = \sum_{i=1}^m h_i(x)\left(f_i(y) - \tfrac{1}{n}Af_i(y)\right) + h_0(y)$, $f_i \in \mathcal{D}(A)$, define

$\Lambda_n F = \int\left[\sum_{i=1}^m h_i(x)f_i(x) + h_0(x)\right]\mu(dx)$.

If $\Lambda_n$ is given by a measure $\nu_n$, then both marginals are $\mu$, and letting $\eta_n$ satisfy $\nu_n(dx, dy) = \eta_n(x, dy)\mu(dx)$, for $f \in \mathcal{D}(A)$,

$\int \left(f(y) - \tfrac{1}{n}Af(y)\right)\eta_n(x, dy) = f(x), \quad \mu\text{-a.s.}$

The work is to show that $\Lambda_n$ is a positive linear functional.

Extensions

Theorem 4.14 Let $E$ be locally compact (e.g., $E = \mathbb{R}^d$), and let $A \subset \hat{C}(E) \times \hat{C}(E)$ satisfy the positive maximum principle. Suppose that $\mathcal{D}(A)$ is an algebra and dense in $\hat{C}(E)$. If $\mu \in \mathcal{P}(E)$ satisfies

$\int_E Af\,d\mu = 0, \quad f \in \mathcal{D}(A)$,

then $\mu$ is a stationary distribution of $A$.

Proof. Let $\hat{E} = E \cup \{\infty\}$ and extend $A$ to include $(1, 0)$. There exists an $\hat{E}$-valued stationary solution $X$ of the martingale problem for the extended $A$, but $P\{X(t) \in E\} = \mu(E) = 1$.

Complete, separable $E$

$E$ complete, separable. $A \subset C(E) \times C(E)$. Assume that $\{g_k\}$ is closed under multiplication. Let $\mathcal{I}$ be the collection of finite subsets of positive integers, and for $I \in \mathcal{I}$, let $k(I)$ satisfy $g_{k(I)} = \prod_{i \in I} g_i$. For each $k$, there exists $a_k \ge \|g_k\|$. Let

$\hat{E} = \left\{z \in \prod_{i=1}^\infty [-a_i, a_i] : z_{k(I)} = \prod_{i \in I} z_i,\ I \in \mathcal{I}\right\}$.

Note that $\hat{E}$ is compact, and define $G : E \to \hat{E}$ by $G(x) = (g_1(x), g_2(x), \ldots)$. Then $G$ has a measurable inverse defined on the (measurable) set $G(E)$.

Lemma 4.15 Let $\mu \in \mathcal{P}(E)$. Then there exists a unique measure $\nu \in \mathcal{P}(\hat{E})$ satisfying $\int_E g_k\,d\mu = \int_{\hat{E}} z_k\,\nu(dz)$. In particular, if $Z$ has distribution $\nu$, then $G^{-1}(Z)$ has distribution $\mu$.

Equivalence of the forward equation and the MGP

Suppose

$\nu_t f = \nu_0 f + \int_0^t \nu_s Af\,ds$.

Define

$B^\lambda f(x, \theta) = Af(x, \theta) + \lambda\left(\int_E f(y, 1 - \theta)\,\nu_0(dy) - f(x, \theta)\right)$

and

$\mu_\lambda = \lambda\int_0^\infty e^{-\lambda t}\nu_t\,dt \times \left(\tfrac{1}{2}\delta_0 + \tfrac{1}{2}\delta_1\right)$.

Then

$\int B^\lambda f\,d\mu_\lambda = 0, \quad f(x, \theta) = f_1(x)f_2(\theta),\ f_1 \in \mathcal{D}(A)$.

Let $\tau_1 = \inf\{t > 0 : \Theta(t) \ne \Theta(0)\}$, $\tau_{k+1} = \inf\{t > \tau_k : \Theta(t) \ne \Theta(\tau_k)\}$.

Theorem 4.16 Let $(Y, \Theta)$ be a stationary solution of the martingale problem for $B^\lambda$ with marginal distribution $\mu_\lambda$. Let $\tau_1 = \inf\{t > 0 : \Theta(t) \ne \Theta(0)\}$, $\tau_{k+1} = \inf\{t > \tau_k : \Theta(t) \ne \Theta(\tau_k)\}$. Define $X(t) = Y(\tau_1 + t)$. Then conditioned on $\tau_2 - \tau_1 > t_0$, $X$ is a solution of the martingale problem for $A$ and the distribution of $X(t)$ is $\nu_t$ for $t \le t_0$.

5. Integration with respect to Poisson random measures

Poisson random measures

Stochastic integrals for space-time Poisson random measures

The predictable σ-algebra

Martingale properties

Representation of counting processes

Stochastic integrals for centered space-time Poisson random measures

Quadratic variation

Lévy processes

Gaussian white noise

Poisson distribution

Definition 5.1 A random variable $X$ has a Poisson distribution with parameter $\lambda > 0$ (write $X \sim \mathrm{Poisson}(\lambda)$) if for each $k \in \{0, 1, 2, \ldots\}$,

$P\{X = k\} = \frac{\lambda^k}{k!}e^{-\lambda}$.

Then $E[X] = \lambda$, $\mathrm{Var}(X) = \lambda$, and the characteristic function of $X$ is

$E[e^{i\theta X}] = e^{\lambda(e^{i\theta} - 1)}$.

Since the characteristic function of a random variable characterizes its distribution, a direct computation gives:

Proposition 5.2 If $X_1, X_2, \ldots$ are independent random variables with $X_i \sim \mathrm{Poisson}(\lambda_i)$ and $\sum_{i=1}^\infty \lambda_i < \infty$, then

$X = \sum_{i=1}^\infty X_i \sim \mathrm{Poisson}\left(\sum_{i=1}^\infty \lambda_i\right)$.
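Proposition 5.2 is easy to check empirically (an illustrative simulation, with arbitrary parameter choices): the mean and variance of a Poisson distribution both equal its parameter, so both statistics of the summed samples should be near $\sum_i \lambda_i$.

```python
import numpy as np

# Empirical check of Proposition 5.2: a sum of independent Poisson(lambda_i)
# random variables is Poisson(sum_i lambda_i).
rng = np.random.default_rng(1)
lams = [0.5, 1.2, 2.3]
X = sum(rng.poisson(lam, size=100_000) for lam in lams)

print(X.mean(), X.var())   # both should be close to sum(lams) = 4.0
```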

Poisson sums of Bernoulli random variables

Proposition 5.3 Let $N \sim \mathrm{Poisson}(\lambda)$, and suppose that $Y_1, Y_2, \ldots$ are i.i.d. Bernoulli random variables with parameter $p \in [0, 1]$. If $N$ is independent of the $Y_i$, then $\sum_{i=1}^N Y_i \sim \mathrm{Poisson}(\lambda p)$.

For $j = 1, \ldots, m$, let $e_j$ be the vector in $\mathbb{R}^m$ that has all its entries equal to zero, except for the $j$th, which is 1. For $\theta, y \in \mathbb{R}^m$, let $\langle\theta, y\rangle = \sum_{j=1}^m \theta_j y_j$.

Proposition 5.4 Let $N \sim \mathrm{Poisson}(\lambda)$. Suppose that $Y_1, Y_2, \ldots$ are independent $\mathbb{R}^m$-valued random variables such that for all $k$ and $j \in \{1, \ldots, m\}$, $P\{Y_k = e_j\} = p_j$, where $\sum_{j=1}^m p_j = 1$. Define $X = (X_1, \ldots, X_m) = \sum_{k=1}^N Y_k$. If $N$ is independent of the $Y_k$, then $X_1, \ldots, X_m$ are independent random variables and $X_j \sim \mathrm{Poisson}(\lambda p_j)$.
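A simulation sketch of Propositions 5.3-5.4 (parameter values are illustrative): scatter $N \sim \mathrm{Poisson}(\lambda)$ points and give each one label $j$ with probability $p_j$; the label counts should then behave like independent $\mathrm{Poisson}(\lambda p_j)$ variables.

```python
import numpy as np

# Empirical check of the Poisson thinning/marking propositions: label counts
# from a Poisson number of multinomial marks are independent Poissons.
rng = np.random.default_rng(2)
lam, p = 10.0, np.array([0.5, 0.3, 0.2])
reps = 20_000

N = rng.poisson(lam, size=reps)
X = np.array([rng.multinomial(n, p) for n in N])  # label counts per replicate

print(X.mean(axis=0))                  # should be close to lam * p = [5, 3, 2]
print(np.cov(X[:, 0], X[:, 1])[0, 1])  # close to 0, consistent with independence
```

A vanishing sample covariance does not prove independence, of course, but it is the feature of the proposition most easily seen in simulation.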

Poisson random measures

Let $(U, d_U)$ be a complete, separable metric space, and let $\nu$ be a σ-finite measure on $U$. Let $\mathcal{N}(U)$ denote the collection of counting measures on $U$.

Definition 5.5 A Poisson random measure on $U$ with mean measure $\nu$ is a random counting measure $\xi$ (that is, an $\mathcal{N}(U)$-valued random variable) such that

a) For $A \in \mathcal{B}(U)$, $\xi(A)$ has a Poisson distribution with expectation $\nu(A)$;

b) $\xi(A)$ and $\xi(B)$ are independent if $A \cap B = \emptyset$.

For $f \in M(U)$, $f \ge 0$, define

$\psi_\xi(f) = E\left[\exp\left\{-\int_U f(u)\,\xi(du)\right\}\right] = \exp\left\{-\int_U (1 - e^{-f})\,d\nu\right\}$.

(Verify the second equality by approximating $f$ by simple functions.)

Existence

Proposition 5.6 Suppose that $\nu$ is a measure on $U$ such that $\nu(U) < \infty$. Then there exists a Poisson random measure with mean measure $\nu$.

Proof. The case $\nu(U) = 0$ is trivial, so assume that $\nu(U) \in (0, \infty)$. Let $N$ be a Poisson random variable defined on a probability space $(\Omega, \mathcal{F}, P)$ with $E[N] = \nu(U)$. Let $X_1, X_2, \ldots$ be i.i.d. $U$-valued random variables such that for every $A \in \mathcal{B}(U)$, $P\{X_j \in A\} = \nu(A)/\nu(U)$, and assume that $N$ is independent of the $X_j$. Define $\xi$ by $\xi(A) = \sum_{k=1}^N 1_{\{X_k \in A\}}$; in other words, $\xi = \sum_{k=1}^N \delta_{X_k}$, where, for each $x \in U$, $\delta_x$ is the Dirac mass at $x$.

Extend the existence result to σ-finite measures by partitioning $U = \cup_i U_i$, where $\nu(U_i) < \infty$.
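The construction in the proof translates directly into code. A sketch for the illustrative choice $U = [0,1]^2$ with $\nu = c \cdot$ Lebesgue (the set $U$, the constant $c$, and the test sets are assumptions for the example, not from the notes):

```python
import numpy as np

# Construction from Proposition 5.6: draw N ~ Poisson(nu(U)), then scatter
# N i.i.d. points with law nu / nu(U).  Here U = [0,1]^2, nu = c * Lebesgue.
rng = np.random.default_rng(3)
c = 20.0                                  # total mass nu(U)

def sample_prm():
    N = rng.poisson(c)
    return rng.uniform(size=(N, 2))       # xi = sum of Dirac masses at these points

# xi(A) for the left half A = [0,1/2] x [0,1] should be Poisson(c/2), and
# counts in disjoint sets should be independent.
left, right = [], []
for _ in range(20_000):
    pts = sample_prm()
    left.append(int(np.sum(pts[:, 0] < 0.5)))
    right.append(int(np.sum(pts[:, 0] >= 0.5)))
left, right = np.array(left), np.array(right)

print(left.mean(), left.var())            # both should be near c/2 = 10
print(np.cov(left, right)[0, 1])          # near 0
```

That $\xi(A)$ and $\xi(A^c)$ come out independent, even though both are built from the same $N$ points, is exactly the content of Proposition 5.4.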

Identities

Let $\xi$ be a Poisson random measure with mean measure $\nu$.

Lemma 5.7 Suppose $f \in M(U)$, $f \ge 0$. Then

$E\left[\int_U f(y)\,\xi(dy)\right] = \int_U f(y)\,\nu(dy)$.

Lemma 5.8 Suppose $\nu$ is nonatomic, and let $f \in M(\mathcal{N}(U) \times U)$, $f \ge 0$. Then

$E\left[\int_U f(\xi, y)\,\xi(dy)\right] = E\left[\int_U f(\xi + \delta_y, y)\,\nu(dy)\right]$.

Proof. Suppose $f \le 1_{U_0}$, where $\nu(U_0) < \infty$. Let $U_0 = \cup_k U_k^n$, where the $U_k^n$ are disjoint and $\mathrm{diam}(U_k^n) \le n^{-1}$. If $\xi(U_k^n)$ is 0 or 1, then

$\int_{U_k^n} f(\xi, y)\,\xi(dy) = \int_{U_k^n} f(\xi(\cdot \cap U_k^{n,c}) + \delta_y, y)\,\xi(dy)$.

Consequently, if $\max_k \xi(U_k^n) \le 1$,

$\int_{U_0} f(\xi, y)\,\xi(dy) = \sum_k \int_{U_k^n} f(\xi(\cdot \cap U_k^{n,c}) + \delta_y, y)\,\xi(dy)$.

Since $\xi(U_0) < \infty$, for $n$ sufficiently large, $\max_k \xi(U_k^n) \le 1$, so

$E\left[\int_{U_0} f(\xi, y)\,\xi(dy)\right] = \lim_{n \to \infty} \sum_k E\left[\int_{U_k^n} f(\xi(\cdot \cap U_k^{n,c}) + \delta_y, y)\,\xi(dy)\right] = \lim_{n \to \infty} \sum_k E\left[\int_{U_k^n} f(\xi(\cdot \cap U_k^{n,c}) + \delta_y, y)\,\nu(dy)\right]$

$= E\left[\int_{U_0} f(\xi + \delta_y, y)\,\nu(dy)\right]$.

Note that the last equality follows from the fact that $f(\xi(\cdot \cap U_k^{n,c}) + \delta_y, y) \ne f(\xi + \delta_y, y)$ only if $\xi(U_k^n) > 0$, and hence, assuming $f \le 1_{U_0}$,

$\sum_k \left|\int_{U_k^n} f(\xi(\cdot \cap U_k^{n,c}) + \delta_y, y)\,\nu(dy) - \int_{U_k^n} f(\xi + \delta_y, y)\,\nu(dy)\right| \le \sum_k \xi(U_k^n)\,\nu(U_k^n)$,

where the expectation of the right side is

$\sum_k \nu(U_k^n)^2 = \int_{U_0} \nu(U^n(y))\,\nu(dy) \le \int_{U_0} \nu(U_0 \cap B_{1/n}(y))\,\nu(dy)$,

where $U^n(y) = U_k^n$ if $y \in U_k^n$. Finally, $\lim_{n \to \infty} \nu(U_0 \cap B_{1/n}(y)) = 0$, since $\nu$ is nonatomic.

Space-time Poisson random measures

Let $\xi$ be a Poisson random measure on $U \times [0, \infty)$ with mean measure $\nu \times \ell$ (where $\ell$ denotes Lebesgue measure). Then $\xi(A, t) \equiv \xi(A \times [0, t])$ is a Poisson process with parameter $\nu(A)$. If $\nu(A) < \infty$,

$\tilde{\xi}(A, t) \equiv \xi(A \times [0, t]) - \nu(A)t$

is a martingale.

Definition 5.9 $\xi$ is $\{\mathcal{F}_t\}$-compatible if, for each $A \in \mathcal{B}(U)$, $\xi(A, \cdot)$ is $\{\mathcal{F}_t\}$-adapted and for all $t, s \ge 0$, $\xi(A \times (t, t + s])$ is independent of $\mathcal{F}_t$.

Stochastic integrals for Poisson random measures

For $i = 1, \ldots, m$, let $t_i < r_i$ and $A_i \in \mathcal{B}(U)$, and let $\eta_i$ be $\mathcal{F}_{t_i}$-measurable. Let $X(u, t) = \sum_i \eta_i 1_{A_i}(u)1_{[t_i, r_i)}(t)$, and note that

$X(u, t-) = \sum_i \eta_i 1_{A_i}(u)1_{(t_i, r_i]}(t)$. (5.1)

Define

$I_\xi(X, t) = \int_{U \times [0,t]} X(u, s-)\,\xi(du \times ds) = \sum_i \eta_i\,\xi(A_i \times (t_i \wedge t, r_i \wedge t])$.

Then

$E[|I_\xi(X, t)|] \le E\left[\int_{U \times [0,t]} |X(u, s-)|\,\xi(du \times ds)\right] = \int_{U \times [0,t]} E[|X(u, s)|]\,\nu(du)ds$,

and if the right side is finite,

$E[I_\xi(X, t)] = \int_{U \times [0,t]} E[X(u, s)]\,\nu(du)ds$.

Estimates in $L^{1,0}$

If $\int_{U \times [0,t]} |X(u, s-)| \wedge 1\,\xi(du \times ds) < \infty$, then $\xi\{(u, s) : |X(u, s-)| > 1\} < \infty$ and

$E\left[\sup_{t \le T} |I_\xi(X \wedge 1, t)|\right] \le \int_{U \times [0,T]} E[|X(u, s)| \wedge 1]\,\nu(du)ds$.

Definition 5.10 Let $L^{1,0}(U, \nu)$ denote the space of $\mathcal{B}(U) \times \mathcal{B}[0, \infty) \times \mathcal{F}$-measurable mappings $(u, s, \omega) \to X(u, s, \omega)$ such that

$\int_0^\infty e^{-s}\int_U E[|X(u, s)| \wedge 1]\,\nu(du)ds < \infty$.

Let $\mathcal{S}$ denote the collection of $\mathcal{B}(U) \times \mathcal{B}[0, \infty) \times \mathcal{F}$-measurable mappings $(u, t, \omega) \to \sum_{i=1}^m \eta_i(\omega)1_{A_i}(u)1_{(t_i, r_i]}(t)$ defined as in (5.1).

Lemma 5.11

$d_{1,0}(X, Y) = \int_0^\infty e^{-s}\int_U E[|X(u, s) - Y(u, s)| \wedge 1]\,\nu(du)ds$

defines a metric on $L^{1,0}(U, \nu)$, and the definition of $I_\xi$ extends to the closure of $\mathcal{S}$ in $L^{1,0}(U, \nu)$.

The predictable σ-algebra

Warning: Let $N$ be a unit Poisson process. Then $\int_0^\infty e^{-s}E[|N(s) - N(s-)| \wedge 1]\,ds = 0$, but

$P\left\{\int_0^t N(s)\,dN(s) \ne \int_0^t N(s-)\,dN(s)\right\} = 1 - e^{-t}$.

Definition 5.12 Let $(\Omega, \mathcal{F}, P)$ be a probability space and let $\{\mathcal{F}_t\}$ be a filtration in $\mathcal{F}$. The σ-algebra $\mathcal{P}$ of predictable sets is the smallest σ-algebra in $\mathcal{B}(U) \times \mathcal{B}[0, \infty) \times \mathcal{F}$ containing sets of the form $A \times (t_0, t_0 + r_0] \times B$ for $A \in \mathcal{B}(U)$, $t_0, r_0 \ge 0$, and $B \in \mathcal{F}_{t_0}$.

Remark 5.13 Note that for $B \in \mathcal{F}_{t_0}$, $1_{A \times (t_0, t_0 + r_0] \times B}(u, t, \omega)$ is left continuous in $t$ and adapted, and that the mapping $(u, t, \omega) \to X(u, t, \omega)$, where $X(u, t, \omega)$ is defined in (5.1), is $\mathcal{P}$-measurable.

Definition 5.14 A stochastic process $X$ on $U \times [0, \infty)$ is predictable if the mapping $(u, t, \omega) \to X(u, t, \omega)$ is $\mathcal{P}$-measurable.

Lemma 5.15 If the mapping $(u, t, \omega) \to X(u, t, \omega)$ is $\mathcal{B}(U) \times \mathcal{B}[0, \infty) \times \mathcal{F}$-measurable and adapted and is left continuous in $t$, then $X$ is predictable.

Proof. Let $0 = t_0^n < t_1^n < \cdots$ with $t_{i+1}^n - t_i^n \le n^{-1}$. Define $X_n(u, t, \omega) = X(u, t_i^n, \omega)$ for $t_i^n < t \le t_{i+1}^n$. Then $X_n$ is predictable and

$\lim_{n \to \infty} X_n(u, t, \omega) = X(u, t, \omega)$

for all $(u, t, \omega)$.

Stochastic integrals for predictable processes

Lemma 5.16 Let $G \in \mathcal{P}$, $B \in \mathcal{B}(U)$ with $\nu(B) < \infty$, and $b > 0$. Then $1_{B \times [0,b]}(u, t)1_G(u, t, \omega)$ is a predictable process and

$I_\xi(1_{B \times [0,b]}1_G, t)(\omega) = \int_{U \times [0,t]} 1_{B \times [0,b]}(u, s)1_G(u, s, \omega)\,\xi(du \times ds, \omega)$ a.s. (5.2)

and

$E\left[\int_{U \times [0,t]} 1_{B \times [0,b]}(u, s)1_G(u, s, \cdot)\,\xi(du \times ds)\right] = E\left[\int_{U \times [0,t]} 1_{B \times [0,b]}(u, s)1_G(u, s, \cdot)\,\nu(du)ds\right]$. (5.3)

Proof. Let $\mathcal{A} = \{\cup_{i=1}^m A_i \times (t_i, t_i + r_i] \times G_i : t_i, r_i \ge 0,\ A_i \in \mathcal{B}(U),\ G_i \in \mathcal{F}_{t_i}\}$. Then $\mathcal{A}$ is an algebra. For $G \in \mathcal{A}$, (5.2) holds by definition, and (5.3) holds by direct calculation. The collection of $G$ that satisfy (5.2) and (5.3) is closed under increasing unions and decreasing intersections, and the monotone class theorem (see Theorem 4.1 of the Appendix of Ethier and Kurtz (1986)) gives the lemma.

Lemma 5.17 Let $X$ be a predictable process satisfying

$\int_0^\infty e^{-s}\int_U E[|X(u, s)| \wedge 1]\,\nu(du)ds < \infty$.

Then $\int_{U \times [0,t]} |X(u, s-)|\,\xi(du \times ds) < \infty$ a.s. and

$I_\xi(X, t)(\omega) = \int_{U \times [0,t]} X(u, s-, \omega)\,\xi(du \times ds, \omega)$ a.s.

Proof. Approximate by simple functions.

Consequences of predictability

Lemma 5.18 If $X$ is predictable and $\int_{U \times [0,t]} |X(u, s)| \wedge 1\,\nu(du)ds < \infty$ a.s. for all $t > 0$, then

$\int_{U \times [0,t]} |X(u, s)|\,\xi(du \times ds) < \infty$ a.s. (5.4)

and $\int_{U \times [0,t]} X(u, s)\,\xi(du \times ds)$ exists a.s.

Proof. Let $\tau_c = \inf\{t : \int_{U \times [0,t]} |X(u, s)| \wedge 1\,\nu(du)ds \ge c\}$, and consider $X_c(u, s) = 1_{[0,\tau_c]}(s)X(u, s)$. Then $X_c$ satisfies the conditions of Lemma 5.17, so

$\int_{U \times [0,t]} |X(u, s)| \wedge 1\,\xi(du \times ds) < \infty$ a.s.

But this implies $\xi\{(u, s) : s \le t,\ |X(u, s)| > 1\} < \infty$, so (5.4) holds.

Martingale properties

Theorem 5.19 Suppose $X$ is predictable and $\int_{U \times [0,t]} E[|X(u, s)|]\,\nu(du)ds < \infty$ for each $t > 0$. Then

$\int_{U \times [0,t]} X(u, s)\,\xi(du \times ds) - \int_0^t\int_U X(u, s)\,\nu(du)ds$

is an $\{\mathcal{F}_t\}$-martingale.
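In particular, the compensated integral has mean zero, which can be seen in simulation. A sketch for a deterministic integrand (the choices $X(u, s) = us$, $U = [0, 1]$, $\nu =$ Lebesgue are illustrative assumptions, not from the notes):

```python
import numpy as np

# Simulation sketch of Theorem 5.19 for the deterministic integrand
# X(u, s) = u * s on U = [0,1] with nu = Lebesgue: the compensated integral
#     int_{U x [0,t]} X dxi  -  int_0^t int_U X(u, s) nu(du) ds
# should have mean zero.
rng = np.random.default_rng(4)
T = 2.0                                   # time horizon; nu(U) * T = T points on average

def compensated_integral():
    N = rng.poisson(T)                    # number of points of xi in [0,1] x [0,T]
    u = rng.uniform(size=N)               # spatial coordinates
    s = rng.uniform(0, T, size=N)         # time coordinates
    stoch = float(np.sum(u * s))          # int X dxi
    comp = T**2 / 4                       # int_0^T int_0^1 u*s du ds
    return stoch - comp

vals = np.array([compensated_integral() for _ in range(50_000)])
print(vals.mean())                        # should be near 0
```

For a random, adapted $X$ the martingale property is the real content of the theorem; the deterministic case only exhibits the centering.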

Proof. Let $A \in \mathcal{F}_t$ and define $X_A(u, s) = 1_A X(u, s)1_{(t, t+r]}(s)$. Then $X_A$ is predictable and

$E\left[1_A \int_{U \times (t, t+r]} X(u, s)\,\xi(du \times ds)\right] = E\left[\int_{U \times [0, t+r]} X_A(u, s)\,\xi(du \times ds)\right] = E\left[\int_{U \times [0, t+r]} X_A(u, s)\,\nu(du)ds\right] = E\left[1_A \int_{U \times (t, t+r]} X(u, s)\,\nu(du)ds\right]$,

and hence

$E\left[\int_{U \times (t, t+r]} X(u, s)\,\xi(du \times ds) \,\Big|\, \mathcal{F}_t\right] = E\left[\int_{U \times (t, t+r]} X(u, s)\,\nu(du)ds \,\Big|\, \mathcal{F}_t\right]$.

Local martingales

Lemma 5.20 If

$\int_{U \times [0,t]} |X(u, s)|\,\nu(du)ds < \infty$ a.s., $t \ge 0$,

then

$\int_{U \times [0,t]} X(u, s)\,\xi(du \times ds) - \int_{U \times [0,t]} X(u, s)\,\nu(du)ds$

is a local martingale.

Proof. If $\tau$ is a stopping time and $X$ is predictable, then $1_{[0,\tau]}(t)X(u, t)$ is predictable. Let $\tau_c = \inf\{t > 0 : \int_{U \times [0,t]} |X(u, s)|\,\nu(du)ds \ge c\}$. Then

$\int_{U \times [0, t \wedge \tau_c]} X(u, s)\,\xi(du \times ds) - \int_{U \times [0, t \wedge \tau_c]} X(u, s)\,\nu(du)ds = \int_{U \times [0,t]} 1_{[0,\tau_c]}(s)X(u, s)\,\xi(du \times ds) - \int_{U \times [0,t]} 1_{[0,\tau_c]}(s)X(u, s)\,\nu(du)ds$

is a martingale.


Universal examples. Chapter The Bernoulli process Chapter 1 Universal examples 1.1 The Bernoulli process First description: Bernoulli random variables Y i for i = 1, 2, 3,... independent with P [Y i = 1] = p and P [Y i = ] = 1 p. Second description: Binomial

More information

Exercises in stochastic analysis

Exercises in stochastic analysis Exercises in stochastic analysis Franco Flandoli, Mario Maurelli, Dario Trevisan The exercises with a P are those which have been done totally or partially) in the previous lectures; the exercises with

More information

Functional Limit theorems for the quadratic variation of a continuous time random walk and for certain stochastic integrals

Functional Limit theorems for the quadratic variation of a continuous time random walk and for certain stochastic integrals Functional Limit theorems for the quadratic variation of a continuous time random walk and for certain stochastic integrals Noèlia Viles Cuadros BCAM- Basque Center of Applied Mathematics with Prof. Enrico

More information

Wiener Measure and Brownian Motion

Wiener Measure and Brownian Motion Chapter 16 Wiener Measure and Brownian Motion Diffusion of particles is a product of their apparently random motion. The density u(t, x) of diffusing particles satisfies the diffusion equation (16.1) u

More information

Probability and Measure

Probability and Measure Part II Year 2018 2017 2016 2015 2014 2013 2012 2011 2010 2009 2008 2007 2006 2005 2018 84 Paper 4, Section II 26J Let (X, A) be a measurable space. Let T : X X be a measurable map, and µ a probability

More information

Some Properties of NSFDEs

Some Properties of NSFDEs Chenggui Yuan (Swansea University) Some Properties of NSFDEs 1 / 41 Some Properties of NSFDEs Chenggui Yuan Swansea University Chenggui Yuan (Swansea University) Some Properties of NSFDEs 2 / 41 Outline

More information

Lecture 5. If we interpret the index n 0 as time, then a Markov chain simply requires that the future depends only on the present and not on the past.

Lecture 5. If we interpret the index n 0 as time, then a Markov chain simply requires that the future depends only on the present and not on the past. 1 Markov chain: definition Lecture 5 Definition 1.1 Markov chain] A sequence of random variables (X n ) n 0 taking values in a measurable state space (S, S) is called a (discrete time) Markov chain, if

More information

13 The martingale problem

13 The martingale problem 19-3-2012 Notations Ω complete metric space of all continuous functions from [0, + ) to R d endowed with the distance d(ω 1, ω 2 ) = k=1 ω 1 ω 2 C([0,k];H) 2 k (1 + ω 1 ω 2 C([0,k];H) ), ω 1, ω 2 Ω. F

More information

ON ADDITIVE TIME-CHANGES OF FELLER PROCESSES. 1. Introduction

ON ADDITIVE TIME-CHANGES OF FELLER PROCESSES. 1. Introduction ON ADDITIVE TIME-CHANGES OF FELLER PROCESSES ALEKSANDAR MIJATOVIĆ AND MARTIJN PISTORIUS Abstract. In this note we generalise the Phillips theorem [1] on the subordination of Feller processes by Lévy subordinators

More information

Classical and quantum Markov semigroups

Classical and quantum Markov semigroups Classical and quantum Markov semigroups Alexander Belton Department of Mathematics and Statistics Lancaster University United Kingdom http://www.maths.lancs.ac.uk/~belton/ a.belton@lancaster.ac.uk Young

More information

One-Dimensional Diffusion Operators

One-Dimensional Diffusion Operators One-Dimensional Diffusion Operators Stanley Sawyer Washington University Vs. July 7, 28 1. Basic Assumptions. Let sx be a continuous, strictly-increasing function sx on I, 1 and let mdx be a Borel measure

More information

Hardy-Stein identity and Square functions

Hardy-Stein identity and Square functions Hardy-Stein identity and Square functions Daesung Kim (joint work with Rodrigo Bañuelos) Department of Mathematics Purdue University March 28, 217 Daesung Kim (Purdue) Hardy-Stein identity UIUC 217 1 /

More information

LARGE DEVIATIONS FOR STOCHASTIC PROCESSES

LARGE DEVIATIONS FOR STOCHASTIC PROCESSES LARGE DEVIATIONS FOR STOCHASTIC PROCESSES By Stefan Adams Abstract: The notes are devoted to results on large deviations for sequences of Markov processes following closely the book by Feng and Kurtz ([FK06]).

More information

Exercises. T 2T. e ita φ(t)dt.

Exercises. T 2T. e ita φ(t)dt. Exercises. Set #. Construct an example of a sequence of probability measures P n on R which converge weakly to a probability measure P but so that the first moments m,n = xdp n do not converge to m = xdp.

More information

µ X (A) = P ( X 1 (A) )

µ X (A) = P ( X 1 (A) ) 1 STOCHASTIC PROCESSES This appendix provides a very basic introduction to the language of probability theory and stochastic processes. We assume the reader is familiar with the general measure and integration

More information

Mathematical Methods for Neurosciences. ENS - Master MVA Paris 6 - Master Maths-Bio ( )

Mathematical Methods for Neurosciences. ENS - Master MVA Paris 6 - Master Maths-Bio ( ) Mathematical Methods for Neurosciences. ENS - Master MVA Paris 6 - Master Maths-Bio (2014-2015) Etienne Tanré - Olivier Faugeras INRIA - Team Tosca November 26th, 2014 E. Tanré (INRIA - Team Tosca) Mathematical

More information

Real Analysis, 2nd Edition, G.B.Folland Signed Measures and Differentiation

Real Analysis, 2nd Edition, G.B.Folland Signed Measures and Differentiation Real Analysis, 2nd dition, G.B.Folland Chapter 3 Signed Measures and Differentiation Yung-Hsiang Huang 3. Signed Measures. Proof. The first part is proved by using addivitiy and consider F j = j j, 0 =.

More information

Poisson random measure: motivation

Poisson random measure: motivation : motivation The Lévy measure provides the expected number of jumps by time unit, i.e. in a time interval of the form: [t, t + 1], and of a certain size Example: ν([1, )) is the expected number of jumps

More information

Chapter 1. Poisson processes. 1.1 Definitions

Chapter 1. Poisson processes. 1.1 Definitions Chapter 1 Poisson processes 1.1 Definitions Let (, F, P) be a probability space. A filtration is a collection of -fields F t contained in F such that F s F t whenever s

More information

Lecture 3. This operator commutes with translations and it is not hard to evaluate. Ae iξx = ψ(ξ)e iξx. T t I. A = lim

Lecture 3. This operator commutes with translations and it is not hard to evaluate. Ae iξx = ψ(ξ)e iξx. T t I. A = lim Lecture 3. If we specify D t,ω as a progressively mesurable map of (Ω [, T], F t ) into the space of infinitely divisible distributions, as well as an initial distribution for the starting point x() =

More information

p 1 ( Y p dp) 1/p ( X p dp) 1 1 p

p 1 ( Y p dp) 1/p ( X p dp) 1 1 p Doob s inequality Let X(t) be a right continuous submartingale with respect to F(t), t 1 P(sup s t X(s) λ) 1 λ {sup s t X(s) λ} X + (t)dp 2 For 1 < p

More information

Let (Ω, F) be a measureable space. A filtration in discrete time is a sequence of. F s F t

Let (Ω, F) be a measureable space. A filtration in discrete time is a sequence of. F s F t 2.2 Filtrations Let (Ω, F) be a measureable space. A filtration in discrete time is a sequence of σ algebras {F t } such that F t F and F t F t+1 for all t = 0, 1,.... In continuous time, the second condition

More information

Random Process Lecture 1. Fundamentals of Probability

Random Process Lecture 1. Fundamentals of Probability Random Process Lecture 1. Fundamentals of Probability Husheng Li Min Kao Department of Electrical Engineering and Computer Science University of Tennessee, Knoxville Spring, 2016 1/43 Outline 2/43 1 Syllabus

More information

X n D X lim n F n (x) = F (x) for all x C F. lim n F n(u) = F (u) for all u C F. (2)

X n D X lim n F n (x) = F (x) for all x C F. lim n F n(u) = F (u) for all u C F. (2) 14:17 11/16/2 TOPIC. Convergence in distribution and related notions. This section studies the notion of the so-called convergence in distribution of real random variables. This is the kind of convergence

More information

Exercises Measure Theoretic Probability

Exercises Measure Theoretic Probability Exercises Measure Theoretic Probability 2002-2003 Week 1 1. Prove the folloing statements. (a) The intersection of an arbitrary family of d-systems is again a d- system. (b) The intersection of an arbitrary

More information

STAT 331. Martingale Central Limit Theorem and Related Results

STAT 331. Martingale Central Limit Theorem and Related Results STAT 331 Martingale Central Limit Theorem and Related Results In this unit we discuss a version of the martingale central limit theorem, which states that under certain conditions, a sum of orthogonal

More information

(B(t i+1 ) B(t i )) 2

(B(t i+1 ) B(t i )) 2 ltcc5.tex Week 5 29 October 213 Ch. V. ITÔ (STOCHASTIC) CALCULUS. WEAK CONVERGENCE. 1. Quadratic Variation. A partition π n of [, t] is a finite set of points t ni such that = t n < t n1

More information

CIMPA SCHOOL, 2007 Jump Processes and Applications to Finance Monique Jeanblanc

CIMPA SCHOOL, 2007 Jump Processes and Applications to Finance Monique Jeanblanc CIMPA SCHOOL, 27 Jump Processes and Applications to Finance Monique Jeanblanc 1 Jump Processes I. Poisson Processes II. Lévy Processes III. Jump-Diffusion Processes IV. Point Processes 2 I. Poisson Processes

More information

Real Analysis Math 131AH Rudin, Chapter #1. Dominique Abdi

Real Analysis Math 131AH Rudin, Chapter #1. Dominique Abdi Real Analysis Math 3AH Rudin, Chapter # Dominique Abdi.. If r is rational (r 0) and x is irrational, prove that r + x and rx are irrational. Solution. Assume the contrary, that r+x and rx are rational.

More information

Fundamental Inequalities, Convergence and the Optional Stopping Theorem for Continuous-Time Martingales

Fundamental Inequalities, Convergence and the Optional Stopping Theorem for Continuous-Time Martingales Fundamental Inequalities, Convergence and the Optional Stopping Theorem for Continuous-Time Martingales Prakash Balachandran Department of Mathematics Duke University April 2, 2008 1 Review of Discrete-Time

More information

Analysis Qualifying Exam

Analysis Qualifying Exam Analysis Qualifying Exam Spring 2017 Problem 1: Let f be differentiable on R. Suppose that there exists M > 0 such that f(k) M for each integer k, and f (x) M for all x R. Show that f is bounded, i.e.,

More information

Riesz Representation Theorems

Riesz Representation Theorems Chapter 6 Riesz Representation Theorems 6.1 Dual Spaces Definition 6.1.1. Let V and W be vector spaces over R. We let L(V, W ) = {T : V W T is linear}. The space L(V, R) is denoted by V and elements of

More information

Doléans measures. Appendix C. C.1 Introduction

Doléans measures. Appendix C. C.1 Introduction Appendix C Doléans measures C.1 Introduction Once again all random processes will live on a fixed probability space (Ω, F, P equipped with a filtration {F t : 0 t 1}. We should probably assume the filtration

More information

Martingales, standard filtrations, and stopping times

Martingales, standard filtrations, and stopping times Project 4 Martingales, standard filtrations, and stopping times Throughout this Project the index set T is taken to equal R +, unless explicitly noted otherwise. Some things you might want to explain in

More information

16.1. Signal Process Observation Process The Filtering Problem Change of Measure

16.1. Signal Process Observation Process The Filtering Problem Change of Measure 1. Introduction The following notes aim to provide a very informal introduction to Stochastic Calculus, and especially to the Itô integral. They owe a great deal to Dan Crisan s Stochastic Calculus and

More information

Stochastic Calculus February 11, / 33

Stochastic Calculus February 11, / 33 Martingale Transform M n martingale with respect to F n, n =, 1, 2,... σ n F n (σ M) n = n 1 i= σ i(m i+1 M i ) is a Martingale E[(σ M) n F n 1 ] n 1 = E[ σ i (M i+1 M i ) F n 1 ] i= n 2 = σ i (M i+1 M

More information

Weak convergence and Brownian Motion. (telegram style notes) P.J.C. Spreij

Weak convergence and Brownian Motion. (telegram style notes) P.J.C. Spreij Weak convergence and Brownian Motion (telegram style notes) P.J.C. Spreij this version: December 8, 2006 1 The space C[0, ) In this section we summarize some facts concerning the space C[0, ) of real

More information

Stochastic Analysis I S.Kotani April 2006

Stochastic Analysis I S.Kotani April 2006 Stochastic Analysis I S.Kotani April 6 To describe time evolution of randomly developing phenomena such as motion of particles in random media, variation of stock prices and so on, we have to treat stochastic

More information

MATH 202B - Problem Set 5

MATH 202B - Problem Set 5 MATH 202B - Problem Set 5 Walid Krichene (23265217) March 6, 2013 (5.1) Show that there exists a continuous function F : [0, 1] R which is monotonic on no interval of positive length. proof We know there

More information

Solution for Problem 7.1. We argue by contradiction. If the limit were not infinite, then since τ M (ω) is nondecreasing we would have

Solution for Problem 7.1. We argue by contradiction. If the limit were not infinite, then since τ M (ω) is nondecreasing we would have 362 Problem Hints and Solutions sup g n (ω, t) g(ω, t) sup g(ω, s) g(ω, t) µ n (ω). t T s,t: s t 1/n By the uniform continuity of t g(ω, t) on [, T], one has for each ω that µ n (ω) as n. Two applications

More information

STAT331 Lebesgue-Stieltjes Integrals, Martingales, Counting Processes

STAT331 Lebesgue-Stieltjes Integrals, Martingales, Counting Processes STAT331 Lebesgue-Stieltjes Integrals, Martingales, Counting Processes This section introduces Lebesgue-Stieltjes integrals, and defines two important stochastic processes: a martingale process and a counting

More information

Real Analysis Notes. Thomas Goller

Real Analysis Notes. Thomas Goller Real Analysis Notes Thomas Goller September 4, 2011 Contents 1 Abstract Measure Spaces 2 1.1 Basic Definitions........................... 2 1.2 Measurable Functions........................ 2 1.3 Integration..............................

More information

n [ F (b j ) F (a j ) ], n j=1(a j, b j ] E (4.1)

n [ F (b j ) F (a j ) ], n j=1(a j, b j ] E (4.1) 1.4. CONSTRUCTION OF LEBESGUE-STIELTJES MEASURES In this section we shall put to use the Carathéodory-Hahn theory, in order to construct measures with certain desirable properties first on the real line

More information

1 Independent increments

1 Independent increments Tel Aviv University, 2008 Brownian motion 1 1 Independent increments 1a Three convolution semigroups........... 1 1b Independent increments.............. 2 1c Continuous time................... 3 1d Bad

More information

Lecture 17 Brownian motion as a Markov process

Lecture 17 Brownian motion as a Markov process Lecture 17: Brownian motion as a Markov process 1 of 14 Course: Theory of Probability II Term: Spring 2015 Instructor: Gordan Zitkovic Lecture 17 Brownian motion as a Markov process Brownian motion is

More information

Measure Theory on Topological Spaces. Course: Prof. Tony Dorlas 2010 Typset: Cathal Ormond

Measure Theory on Topological Spaces. Course: Prof. Tony Dorlas 2010 Typset: Cathal Ormond Measure Theory on Topological Spaces Course: Prof. Tony Dorlas 2010 Typset: Cathal Ormond May 22, 2011 Contents 1 Introduction 2 1.1 The Riemann Integral........................................ 2 1.2 Measurable..............................................

More information

Interacting Particle Systems. J.M. Swart July 20, 2011

Interacting Particle Systems. J.M. Swart July 20, 2011 Interacting Particle Systems J.M. Swart July 20, 2011 2 Contents 1 Construction of particle systems 7 1.1 Probability on Polish spaces...................... 7 1.2 Markov chains..............................

More information

MATHS 730 FC Lecture Notes March 5, Introduction

MATHS 730 FC Lecture Notes March 5, Introduction 1 INTRODUCTION MATHS 730 FC Lecture Notes March 5, 2014 1 Introduction Definition. If A, B are sets and there exists a bijection A B, they have the same cardinality, which we write as A, #A. If there exists

More information

GAUSSIAN PROCESSES; KOLMOGOROV-CHENTSOV THEOREM

GAUSSIAN PROCESSES; KOLMOGOROV-CHENTSOV THEOREM GAUSSIAN PROCESSES; KOLMOGOROV-CHENTSOV THEOREM STEVEN P. LALLEY 1. GAUSSIAN PROCESSES: DEFINITIONS AND EXAMPLES Definition 1.1. A standard (one-dimensional) Wiener process (also called Brownian motion)

More information

On pathwise stochastic integration

On pathwise stochastic integration On pathwise stochastic integration Rafa l Marcin Lochowski Afican Institute for Mathematical Sciences, Warsaw School of Economics UWC seminar Rafa l Marcin Lochowski (AIMS, WSE) On pathwise stochastic

More information

3 (Due ). Let A X consist of points (x, y) such that either x or y is a rational number. Is A measurable? What is its Lebesgue measure?

3 (Due ). Let A X consist of points (x, y) such that either x or y is a rational number. Is A measurable? What is its Lebesgue measure? MA 645-4A (Real Analysis), Dr. Chernov Homework assignment 1 (Due ). Show that the open disk x 2 + y 2 < 1 is a countable union of planar elementary sets. Show that the closed disk x 2 + y 2 1 is a countable

More information

L p Spaces and Convexity

L p Spaces and Convexity L p Spaces and Convexity These notes largely follow the treatments in Royden, Real Analysis, and Rudin, Real & Complex Analysis. 1. Convex functions Let I R be an interval. For I open, we say a function

More information

1 Brownian Local Time

1 Brownian Local Time 1 Brownian Local Time We first begin by defining the space and variables for Brownian local time. Let W t be a standard 1-D Wiener process. We know that for the set, {t : W t = } P (µ{t : W t = } = ) =

More information