1. Stochastic equations for Markov processes


1. Stochastic equations for Markov processes

- Filtrations and the Markov property
- Itô equations for diffusion processes
- Poisson random measures
- Itô equations for Markov processes with jumps

Filtrations and the Markov property

(Ω, F, P) a probability space. Available information is modeled by a sub-σ-algebra of F.

F_t = information available at time t. {F_t} is a filtration: t < s implies F_t ⊂ F_s.

A stochastic process X is adapted to {F_t} if X(t) is F_t-measurable for each t ≥ 0.

An E-valued stochastic process X adapted to {F_t} is {F_t}-Markov if
E[f(X(t+r)) | F_t] = E[f(X(t+r)) | X(t)],  t, r ≥ 0, f ∈ B(E).

An R-valued stochastic process M adapted to {F_t} is an {F_t}-martingale if
E[M(t+r) | F_t] = M(t),  t, r ≥ 0.

τ is an {F_t}-stopping time if {τ ≤ t} ∈ F_t for each t ≥ 0. For a stopping time τ,
F_τ = {A ∈ F : {τ ≤ t} ∩ A ∈ F_t, t ≥ 0}.

Itô integrals

W standard Brownian motion (W has independent increments with W(t+s) − W(t) normally distributed with mean zero and variance s).

W is compatible with a filtration {F_t} if W is {F_t}-adapted and W(t+s) − W(t) is independent of F_t for all s, t ≥ 0.

If X is R-valued, cadlag and {F_t}-adapted, then
∫_0^t X(s−) dW(s) = lim_{max(t_{i+1}−t_i)→0} Σ_i X(t_i)(W(t_{i+1}∧t) − W(t_i∧t))
exists, and the integral satisfies
E[(∫_0^t X(s−) dW(s))²] = E[∫_0^t X(s)² ds]
if the right side is finite.

The integral extends to all measurable and adapted processes satisfying ∫_0^t X(s)² ds < ∞ a.s.

W is an R^m-valued standard Brownian motion if W = (W_1, ..., W_m)^T for W_1, ..., W_m independent one-dimensional standard Brownian motions.

Itô equations for diffusion processes

σ : R^d → M^{d×m} and b : R^d → R^d. Consider
X(t) = X(0) + ∫_0^t σ(X(s)) dW(s) + ∫_0^t b(X(s)) ds,
where X(0) is independent of the standard Brownian motion W.

Why is X Markov?

Suppose W is compatible with {F_t}, X is {F_t}-adapted, and X is unique (among {F_t}-adapted solutions). Then
X(t+r) = X(t) + ∫_t^{t+r} σ(X(s)) dW(s) + ∫_t^{t+r} b(X(s)) ds
and X(t+r) = H(t, r, X(t), W_t), where W_t(s) = W(t+s) − W(t). Since W_t is independent of F_t,
E[f(X(t+r)) | F_t] = ∫ f(H(t, r, X(t), w)) µ_W(dw).
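
The following is a minimal simulation sketch (not part of the original notes) of the diffusion SDE above via the Euler-Maruyama scheme; the function names and the particular coefficients σ and b chosen at the end are illustrative assumptions, not anything specified in the text.

```python
import numpy as np

def euler_maruyama(x0, sigma, b, T=1.0, n_steps=1000, rng=None):
    """Approximate X(t) = X(0) + int_0^t sigma(X) dW + int_0^t b(X) ds.

    sigma(x) returns a (d, m) matrix, b(x) returns a length-d vector.
    """
    rng = np.random.default_rng() if rng is None else rng
    dt = T / n_steps
    x = np.array(x0, dtype=float)
    m = sigma(x).shape[1]
    path = [x.copy()]
    for _ in range(n_steps):
        dW = rng.normal(scale=np.sqrt(dt), size=m)  # Brownian increment
        x = x + sigma(x) @ dW + b(x) * dt
        path.append(x.copy())
    return np.array(path)

# Illustrative one-dimensional example: mean-reverting drift, constant diffusion
path = euler_maruyama(x0=[1.0],
                      sigma=lambda x: np.array([[0.5]]),
                      b=lambda x: -x)
```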

Gaussian white noise

(U, d_U) a complete, separable metric space; B(U) the Borel sets; µ a (Borel) measure on U.
A(U) = {A ∈ B(U) : µ(A) < ∞}

W(A, t): mean zero, Gaussian process indexed by A(U) × [0, ∞) with
E[W(A, t)W(B, s)] = t∧s µ(A ∩ B).

W(ϕ, t) = ∫_U ϕ(u) W(du, t): for ϕ(u) = Σ_i a_i 1_{A_i}(u), W(ϕ, t) = Σ_i a_i W(A_i, t), and
E[W(ϕ_1, t)W(ϕ_2, s)] = t∧s ∫_U ϕ_1(u)ϕ_2(u) µ(du).

Define W(ϕ, t) for all ϕ ∈ L²(µ).

X(t) = Σ_i ξ_i(t) ϕ_i a process in L²(µ).

Definition of the integral:
I_W(X, t) = ∫_{U×[0,t]} X(s, u) W(du × ds) = Σ_i ∫_0^t ξ_i(s) dW(ϕ_i, s)
E[I_W(X, t)²] = E[Σ_{i,j} ∫_0^t ξ_i(s)ξ_j(s) ds ∫_U ϕ_i ϕ_j dµ] = E[∫_0^t ∫_U X(s, u)² µ(du) ds]

The integral extends to adapted processes satisfying
∫_0^t ∫_U X(s, u)² µ(du) ds < ∞  a.s.
so that
I_W(X, t)² − ∫_0^t ∫_U X(s, u)² µ(du) ds
is a local martingale.

Poisson random measures

ν a σ-finite measure on U, (U, d_U) a complete, separable metric space.

ξ a Poisson random measure on U × [0, ∞) with mean measure ν × m (m Lebesgue measure): for A ∈ B(U) × B([0, ∞)), ξ(A) has a Poisson distribution with expectation ν × m(A), and ξ(A) and ξ(B) are independent if A ∩ B = ∅.

ξ(A, t) ≡ ξ(A × [0, t]) is a Poisson process with parameter ν(A); ξ(A, t) − ν(A)t is a martingale.

ξ is {F_t}-compatible if, for each A ∈ B(U), ξ(A, ·) is {F_t}-adapted and, for all t, s ≥ 0, ξ(A × (t, t+s]) is independent of F_t.
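
As a hedged illustration (not from the notes), the atoms of a Poisson random measure on U × [0, T] with mean measure ν × m can be sampled directly when ν has finite total mass on the region of interest: draw a Poisson number of points and place them independently. The helper names and the choice U = [0, 1] with ν = Lebesgue measure are assumptions for the example.

```python
import numpy as np

def poisson_random_measure(nu_mass, sample_u, T, rng=None):
    """Sample the atoms of a Poisson random measure on U x [0, T].

    nu_mass: nu(U) < infinity for the region simulated.
    sample_u: draws one point of U with distribution nu(.)/nu(U).
    Counts of atoms in disjoint sets are independent Poisson variables,
    matching the definition above.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = rng.poisson(nu_mass * T)          # total number of atoms
    times = np.sort(rng.uniform(0.0, T, size=n))
    return [(sample_u(rng), s) for s in times]

# Example: U = [0, 1], nu = Lebesgue measure, so nu(U) = 1
atoms = poisson_random_measure(1.0, lambda r: r.uniform(0.0, 1.0), T=5.0)
```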

Stochastic integrals for Poisson random measures

X cadlag, L¹(ν)-valued, {F_t}-adapted. We define
I_ξ(X, t) = ∫_{U×[0,t]} X(u, s−) ξ(du × ds)
in such a way that
E[|I_ξ(X, t)|] ≤ E[∫_{U×[0,t]} |X(u, s−)| ξ(du × ds)] = ∫_{U×[0,t]} E[|X(u, s)|] ν(du) ds.
If the right side is finite, then
E[∫_{U×[0,t]} X(u, s−) ξ(du × ds)] = ∫_{U×[0,t]} E[X(u, s)] ν(du) ds.

Predictable integrands

The integral extends to predictable integrands satisfying
∫_{U×[0,t]} |X(u, s)| ∧ 1 ν(du) ds < ∞  a.s.
so that
E[I_ξ(X, t∧τ)] = E[∫_{U×[0,t∧τ]} X(u, s) ξ(du × ds)] = E[∫_{U×[0,t∧τ]} X(u, s) ν(du) ds]
for any stopping time τ satisfying
E[∫_{U×[0,t∧τ]} |X(u, s)| ν(du) ds] < ∞.   (1)
If (1) holds for all t, then
∫_{U×[0,t∧τ]} X(u, s) ξ(du × ds) − ∫_{U×[0,t∧τ]} X(u, s) ν(du) ds
is a martingale.

X is predictable if it is the pointwise limit of adapted, left-continuous processes.

Stochastic integrals for centered Poisson random measures

X cadlag, L²(ν)-valued, {F_t}-adapted. Write ξ̃(du × ds) = ξ(du × ds) − ν(du) ds. We define
I_ξ̃(X, t) = ∫_{U×[0,t]} X(u, s−) ξ̃(du × ds)
in such a way that
E[I_ξ̃(X, t)²] = ∫_{U×[0,t]} E[X(u, s)²] ν(du) ds
if the right side is finite. Then I_ξ̃(X, ·) is a square-integrable martingale.

The integral extends to predictable integrands satisfying
∫_{U×[0,t]} X(u, s)² ∧ |X(u, s)| ν(du) ds < ∞  a.s.
so that I_ξ̃(X, t∧τ) is a martingale for any stopping time τ satisfying
E[∫_{U×[0,t∧τ]} X(u, s)² ∧ |X(u, s)| ν(du) ds] < ∞.

Itô equations for Markov processes with jumps

W Gaussian white noise on U × [0, ∞) with E[W(A, t)W(B, t)] = µ(A ∩ B) t.
ξ_i Poisson random measure on U_i × [0, ∞) with mean measure ν_i × m. Let ξ̃_1(A, t) = ξ_1(A, t) − ν_1(A)t.

Assume
σ̄²(x) = ∫_U σ²(x, u) µ(du) < ∞,  ∫_{U_1} α_1²(x, u) ∧ |α_1(x, u)| ν_1(du) < ∞,  ∫_{U_2} |α_2(x, u)| ∧ 1 ν_2(du) < ∞,
and consider
X(t) = X(0) + ∫_{U×[0,t]} σ(X(s−), u) W(du × ds) + ∫_0^t β(X(s−)) ds
      + ∫_{U_1×[0,t]} α_1(X(s−), u) ξ̃_1(du × ds) + ∫_{U_2×[0,t]} α_2(X(s−), u) ξ_2(du × ds).

A martingale inequality

Discrete time version: Burkholder (1973). Continuous time version: Lenglart, Lépingle, and Pratelli (1980). Easy proof: Ichikawa (1986).

Lemma 1. For 0 < p ≤ 2 there exists a constant C_p such that for any locally square integrable martingale M with Meyer process ⟨M⟩ and any stopping time τ,
E[sup_{s≤τ} |M(s)|^p] ≤ C_p E[⟨M⟩_τ^{p/2}].

⟨M⟩ is the (essentially unique) predictable process such that M² − ⟨M⟩ is a local martingale. (A left-continuous, adapted process is predictable.)

Graham's uniqueness theorem

Lipschitz condition:
(∫_U |σ(x, u) − σ(y, u)|² µ(du))^{1/2} + |β(x) − β(y)|
+ (∫_{U_1} |α_1(x, u) − α_1(y, u)|² ν_1(du))^{1/2} + ∫_{U_2} |α_2(x, u) − α_2(y, u)| ν_2(du) ≤ M|x − y|

Estimate

X and X̃ solutions of the SDE. Then
E[sup_{s≤t} |X(s) − X̃(s)|]
≤ E[|X(0) − X̃(0)|]
+ C_1 E[(∫_0^t ∫_U |σ(X(s−), u) − σ(X̃(s−), u)|² µ(du) ds)^{1/2}]
+ C_1 E[(∫_0^t ∫_{U_1} |α_1(X(s−), u) − α_1(X̃(s−), u)|² ν_1(du) ds)^{1/2}]
+ E[∫_0^t ∫_{U_2} |α_2(X(s−), u) − α_2(X̃(s−), u)| ν_2(du) ds]
+ E[∫_0^t |β(X(s−)) − β(X̃(s−))| ds]
≤ E[|X(0) − X̃(0)|] + D(√t + t) E[sup_{s≤t} |X(s) − X̃(s)|]

Boundary conditions

X has values in D̄ and satisfies
X(t) = X(0) + ∫_0^t σ(X(s)) dW(s) + ∫_0^t b(X(s)) ds + ∫_0^t η(X(s)) dλ(s),
where λ is nondecreasing and increases only when X(t) ∈ ∂D.

X has values in D̄ and satisfies
X(t) = X(0) + ∫_0^t σ(X(s)) dW(s) + ∫_0^t b(X(s)) ds + ∫_0^t α(X(s−), ζ_{N(s−)+1}) dN(s),
where ζ_1, ζ_2, ... are iid and independent of X(0) and W, and N(t) is the number of times X has hit the boundary by time t.

2. Martingale problems for Markov processes

- Lévy and Watanabe characterizations
- Itô's formula and martingales associated with solutions of stochastic equations
- Generators and martingale problems for Markov processes
- Equivalence between stochastic equations and martingale problems

Martingale characterizations

Brownian motion (Lévy): W a continuous {F_t}-martingale with W(t)² − t an {F_t}-martingale. Then W is a standard Brownian motion compatible with {F_t}.

Poisson process (Watanabe): N a counting process adapted to {F_t} with N(t) − λt an {F_t}-martingale. Then N is a Poisson process with parameter λ compatible with {F_t}.

Diffusion processes

σ : R^d → M^{d×m} and b : R^d → R^d. Consider
X(t) = X(0) + ∫_0^t σ(X(s)) dW(s) + ∫_0^t b(X(s)) ds.
By Itô's formula,
f(X(t)) − f(X(0)) − ∫_0^t Af(X(s)) ds = ∫_0^t ∇f(X(s))^T σ(X(s)) dW(s),
where
Af(x) = ½ Σ_{i,j} a_{ij}(x) ∂_i∂_j f(x) + b(x)·∇f(x)
with ((a_{ij}(x))) = σ(x)σ(x)^T.

Martingale problem for A

Assume a_{ij} and b_i are locally bounded, and let D(A) = C_c²(R^d).

X is a solution of the martingale problem for A if there exists a filtration {F_t} such that X is {F_t}-adapted and
f(X(t)) − f(X(0)) − ∫_0^t Af(X(s)) ds
is an {F_t}-martingale for each f ∈ D(A).

Any solution of the SDE is a solution of the martingale problem.

General SDE

Let
X(t) = X(0) + ∫_{U×[0,t]} σ(X(s−), u) W(du × ds) + ∫_0^t β(X(s−)) ds
      + ∫_{U_1×[0,t]} α_1(X(s−), u) ξ̃_1(du × ds) + ∫_{U_2×[0,t]} α_2(X(s−), u) ξ_2(du × ds).
Then
f(X(t)) − f(X(0)) − ∫_0^t Af(X(s)) ds
= ∫_{U×[0,t]} ∇f(X(s))·σ(X(s), u) W(du × ds)
+ ∫_{U_1×[0,t]} (f(X(s−) + α_1(X(s−), u)) − f(X(s−))) ξ̃_1(du × ds)
+ ∫_{U_2×[0,t]} (f(X(s−) + α_2(X(s−), u)) − f(X(s−))) ξ̃_2(du × ds)

Form of the generator

Af(x) = ½ Σ_{i,j} a_{ij}(x) ∂_i∂_j f(x) + b(x)·∇f(x)
+ ∫_{U_1} (f(x + α_1(x, u)) − f(x) − α_1(x, u)·∇f(x)) ν_1(du)
+ ∫_{U_2} (f(x + α_2(x, u)) − f(x)) ν_2(du)

Let D(A) be a collection of functions for which Af is bounded. Then a solution of the SDE is a solution of the martingale problem for A.

The martingale problem for A

X is a solution of the martingale problem for (A, ν_0), ν_0 ∈ P(E), if P X(0)^{-1} = ν_0 and there exists a filtration {F_t} such that
f(X(t)) − f(X(0)) − ∫_0^t Af(X(s)) ds
is an {F_t}-martingale for all f ∈ D(A).

Theorem 2. If any two solutions X_1, X_2 of the martingale problem for A satisfying P X_1(0)^{-1} = P X_2(0)^{-1} also satisfy P X_1(t)^{-1} = P X_2(t)^{-1} for all t ≥ 0, then the finite-dimensional distributions of a solution X are uniquely determined by P X(0)^{-1}.

If X is a solution of the MGP for A and Y_a(t) = X(a + t), then Y_a is a solution of the MGP for A.

Theorem 3. If the conclusion of Theorem 2 holds, then any solution of the martingale problem for A is a Markov process.

A is called the generator for the Markov process.

Weak solutions of SDEs

X̃ is a weak solution of the SDE if there exists a probability space on which are defined X, W, ξ_1, and ξ_2 satisfying the SDE and such that X has the same distribution as X̃.

Theorem 4. Suppose that the a_{ij} and b_i are locally bounded and that for each f ∈ C_c²(R^d),
sup_x ∫_{U_1} |f(x + α_1(x, u)) − f(x) − α_1(x, u)·∇f(x)| ν_1(du) < ∞
and
sup_x ∫_{U_2} |f(x + α_2(x, u)) − f(x)| ν_2(du) < ∞.
Let D(A) = C_c²(R^d). Then any solution of the martingale problem for A is a weak solution of the SDE.

Nonsingular diffusions

Consider the SDE
X(t) = X(0) + ∫_0^t σ(X(s)) dW(s) + ∫_0^t b(X(s)) ds
and assume that d = m (that is, σ is square). Let
Af(x) = ½ Σ_{i,j} a_{ij}(x) ∂_i∂_j f(x) + b(x)·∇f(x).
Assume that σ(x) is invertible for each x and that σ(x)^{-1} is locally bounded. If X̃ is a solution of the martingale problem for A, then
M(t) = X̃(t) − ∫_0^t b(X̃(s)) ds
is a local martingale and
W̃(t) = ∫_0^t σ(X̃(s))^{-1} dM(s)
is a standard Brownian motion compatible with {F_t^X̃}. It follows that
X̃(t) = X̃(0) + ∫_0^t σ(X̃(s)) dW̃(s) + ∫_0^t b(X̃(s)) ds.

Natural form for the jump terms

Consider the generator of a simple pure jump process
Af(x) = λ(x) ∫_{R^d} (f(z) − f(x)) µ(x, dz),
where λ ≥ 0 and µ(x, ·) is a probability measure. There exists γ : R^d × [0, 1] → R^d such that
∫_0^1 f(x + γ(x, u)) du = ∫_{R^d} f(z) µ(x, dz).
Let ξ be a Poisson random measure on [0, ∞) × [0, 1] × [0, ∞) with Lebesgue mean measure. Then
X(t) = X(0) + ∫_{[0,∞)×[0,1]×[0,t]} 1_{[0, λ(X(s−))]}(v) γ(X(s−), u) ξ(dv × du × ds)
is a stochastic differential equation corresponding to A.

Note that this SDE is of the form above with U_2 = [0, ∞) × [0, 1] and α_2(x, u, v) = 1_{[0, λ(x)]}(v) γ(x, u), and that
∫_{[0,∞)×[0,1]} |α_2(x, u, v) − α_2(y, u, v)| dv du
≤ |λ(x) − λ(y)| ∫_{[0,1]} |γ(x, u)| du + λ(y) ∫_{[0,1]} |γ(x, u) − γ(y, u)| du.
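
A small sketch (my own illustration, not from the notes) of the pure jump process with generator Af(x) = λ(x) ∫ (f(z) − f(x)) µ(x, dz): wait an exponential time with rate λ(x), then jump to a point drawn from µ(x, ·). The function names and the unit-rate Gaussian-increment example are assumptions.

```python
import numpy as np

def simulate_pure_jump(x0, lam, sample_jump, T, rng=None):
    """Simulate the jump process: exponential holding time with rate lam(x),
    then a jump to z drawn from mu(x, dz) via sample_jump(x, rng)."""
    rng = np.random.default_rng() if rng is None else rng
    t, x = 0.0, x0
    times, states = [t], [x]
    while True:
        rate = lam(x)
        if rate <= 0:
            break
        t += rng.exponential(1.0 / rate)
        if t > T:
            break
        x = sample_jump(x, rng)     # draw the post-jump location
        times.append(t)
        states.append(x)
    return times, states

# Illustrative choice: unit jump rate, Gaussian jump distribution
times, states = simulate_pure_jump(
    0.0, lam=lambda x: 1.0, sample_jump=lambda x, r: x + r.normal(), T=10.0)
```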

More general jump terms

X(t) = X(0) + ∫_{[0,∞)×U×[0,t]} 1_{[0, λ(X(s−), u)]}(v) γ(X(s−), u) ξ(dv × du × ds),
where ξ is a Poisson random measure with mean measure m × ν × m. The generator is of the form
Af(x) = ∫_U λ(x, u)(f(x + γ(x, u)) − f(x)) ν(du).
If ξ is replaced by ξ̃, then
Af(x) = ∫_U λ(x, u)(f(x + γ(x, u)) − f(x) − γ(x, u)·∇f(x)) ν(du).
Note that many different choices of λ, γ, and ν will produce the same generator.

Reflecting diffusions

X has values in D̄ and satisfies
X(t) = X(0) + ∫_0^t σ(X(s)) dW(s) + ∫_0^t b(X(s)) ds + ∫_0^t η(X(s)) dλ(s),
where λ is nondecreasing and increases only when X(t) ∈ ∂D. By Itô's formula,
f(X(t)) − f(X(0)) − ∫_0^t Af(X(s)) ds − ∫_0^t Bf(X(s)) dλ(s) = ∫_0^t ∇f(X(s))^T σ(X(s)) dW(s),
where
Af(x) = ½ Σ_{i,j} a_{ij}(x) ∂_i∂_j f(x) + b(x)·∇f(x),  Bf(x) = η(x)·∇f(x).

Either take D(A) = {f ∈ C_c²(D̄) : Bf(x) = 0, x ∈ ∂D}, or formulate a constrained martingale problem with solution (X, λ) by requiring X to take values in D̄, λ to be nondecreasing and increase only when X ∈ ∂D, and
f(X(t)) − f(X(0)) − ∫_0^t Af(X(s)) ds − ∫_0^t Bf(X(s)) dλ(s)
to be an {F_t^{X,λ}}-martingale.

Instantaneous jump conditions

X has values in D̄ and satisfies
X(t) = X(0) + ∫_0^t σ(X(s)) dW(s) + ∫_0^t b(X(s)) ds + ∫_0^t α(X(s−), ζ_{N(s−)+1}) dN(s),
where ζ_1, ζ_2, ... are iid with distribution ν, independent of X(0) and W, and N(t) is the number of times X has hit the boundary by time t. Then
f(X(t)) − f(X(0)) − ∫_0^t Af(X(s)) ds − ∫_0^t Bf(X(s−)) dN(s)
= ∫_0^t ∇f(X(s))^T σ(X(s)) dW(s)
+ ∫_0^t (f(X(s−) + α(X(s−), ζ_{N(s−)+1})) − ∫_U f(X(s−) + α(X(s−), u)) ν(du)) dN(s),
where
Af(x) = ½ Σ_{i,j} a_{ij}(x) ∂_i∂_j f(x) + b(x)·∇f(x),  Bf(x) = ∫_U (f(x + α(x, u)) − f(x)) ν(du).

3. Weak convergence for stochastic processes

- General definition of weak convergence
- Prohorov's theorem
- Skorohod representation theorem
- Skorohod topology
- Conditions for tightness in the Skorohod topology

Topological proof of convergence

(S, d) metric space, F_n : S → R, F_n → F in some sense (e.g., x_n → x implies F_n(x_n) → F(x)), F_n(x_n) = 0.

1. Show that {x_n} is relatively compact.
2. Show that any limit point of {x_n} satisfies F(x) = 0.
3. Show that the equation F(x) = 0 has a unique solution x̄.
4. Conclude that x_n → x̄.

Convergence in distribution

(S, d) complete, separable metric space; X_n S-valued random variables.

{X_n} converges in distribution to X ({P X_n^{-1}} converges weakly to P X^{-1}) if for each f ∈ C̄(S),
lim_{n→∞} E[f(X_n)] = E[f(X)].
Denote convergence in distribution by X_n ⇒ X.

Equivalent statements: {X_n} converges in distribution to X if and only if
lim inf_n P{X_n ∈ A} ≥ P{X ∈ A} for each open A,
or equivalently
lim sup_n P{X_n ∈ B} ≤ P{X ∈ B} for each closed B.

Tightness and Prohorov's theorem

A sequence {X_n} is tight if for each ε > 0, there exists a compact set K_ε ⊂ S such that
sup_n P{X_n ∉ K_ε} ≤ ε.

Theorem 5. Suppose that {X_n} is tight. Then there exists a subsequence {n(k)} along which the sequence converges in distribution.

Skorohod topology on D_E[0, ∞)

(E, r) complete, separable metric space; D_E[0, ∞) the space of cadlag, E-valued functions.

x_n → x ∈ D_E[0, ∞) in the Skorohod (J_1) topology if and only if there exist strictly increasing λ_n mapping [0, ∞) onto [0, ∞) such that for each T > 0,
lim_n sup_{t≤T} (|λ_n(t) − t| + r(x_n(λ_n(t)), x(t))) = 0.

The Skorohod topology is metrizable so that D_E[0, ∞) is a complete, separable metric space.

Note that 1_{[1+1/n, ∞)} → 1_{[1, ∞)} in D_R[0, ∞), but (1_{[1+1/n, ∞)}, 1_{[1, ∞)}) does not converge in D_{R²}[0, ∞).

Conditions for tightness

S_n(T) the collection of discrete {F_t^n}-stopping times bounded by T; q(x, y) = 1 ∧ r(x, y).

Theorem 6. Suppose that for each t in a dense subset of [0, ∞), {X_n(t)} is tight. Then the following are equivalent.

a) {X_n} is tight in D_E[0, ∞).

b) (Kurtz) For each T > 0, there exist β > 0 and random variables γ_n(δ, T) such that for t ≤ T, 0 ≤ u ≤ δ, and 0 ≤ v ≤ t∧δ,
E[q^β(X_n(t+u), X_n(t)) q^β(X_n(t), X_n(t−v)) | F_t^n] ≤ E[γ_n(δ, T) | F_t^n],
with
lim_{δ→0} lim sup_n E[γ_n(δ, T)] = 0 and lim_{δ→0} lim sup_n E[q^β(X_n(δ), X_n(0))] = 0.   (2)

c) (Aldous) Condition (2) holds, and for each T > 0, there exists β > 0 such that
C_n(δ, T) ≡ sup_{τ∈S_n(T)} sup_{u≤δ} E[sup_{v≤δ∧τ} q^β(X_n(τ+u), X_n(τ)) q^β(X_n(τ), X_n(τ−v))]
satisfies lim_{δ→0} lim sup_n C_n(δ, T) = 0.

Example

η_1, η_2, ... iid, E[η_i] = 0, σ² = E[η_i²] < ∞. Define
X_n(t) = (1/√n) Σ_{i=1}^{[nt]} η_i.
Then, for u ≤ δ,
E[(X_n(t+u) − X_n(t))² | F_t^{X_n}] = (([n(t+u)] − [nt])/n) σ² ≤ (δ + 1/n) σ².
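
A quick sketch (not in the original) of the rescaled random walk in this example; the ±1 steps are an illustrative choice of mean-zero, unit-variance increments, and by Donsker's theorem X_n approximates Brownian motion for large n.

```python
import numpy as np

def scaled_random_walk(n, T=1.0, rng=None):
    """X_n(t) = n^{-1/2} sum_{i <= [nt]} eta_i for iid mean-zero eta_i."""
    rng = np.random.default_rng() if rng is None else rng
    eta = rng.choice([-1.0, 1.0], size=int(n * T))   # E[eta] = 0, Var = 1
    t = np.arange(1, eta.size + 1) / n
    return t, np.cumsum(eta) / np.sqrt(n)

t, x = scaled_random_walk(n=10_000)
```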

Uniqueness of limit

Theorem 7. If {X_n} is tight in D_E[0, ∞) and
(X_n(t_1), ..., X_n(t_k)) ⇒ (X(t_1), ..., X(t_k))
for all t_1, ..., t_k in a dense subset T_0 of [0, ∞), then X_n ⇒ X.

For the example, this condition follows from the central limit theorem.

Skorohod representation theorem

Theorem 8. Suppose that X_n ⇒ X. Then there exist a probability space (Ω, F, P) and random variables X̃_n and X̃ such that X̃_n has the same distribution as X_n, X̃ has the same distribution as X, and X̃_n → X̃ a.s.

Continuous mapping theorem

Corollary 9. Let G : S → E and define C_G = {x ∈ S : G is continuous at x}. Suppose X_n ⇒ X and that P{X ∈ C_G} = 1. Then G(X_n) ⇒ G(X).

Some mappings on D_E[0, ∞)

π_t : D_E[0, ∞) → E, π_t(x) = x(t); C_{π_t} = {x ∈ D_E[0, ∞) : x(t) = x(t−)}.

G_t : D_R[0, ∞) → R, G_t(x) = sup_{s≤t} x(s);
C_{G_t} = {x ∈ D_R[0, ∞) : lim_{s→t+} G_s(x) = G_t(x)} ⊃ {x ∈ D_R[0, ∞) : x(t) = x(t−)};
G : D_R[0, ∞) → D_R[0, ∞), G(x)(t) = G_t(x), is continuous.

H_t : D_E[0, ∞) → R, H_t(x) = sup_{s≤t} r(x(s), x(s−));
C_{H_t} = {x ∈ D_E[0, ∞) : lim_{s→t+} H_s(x) = H_t(x)} ⊃ {x ∈ D_E[0, ∞) : x(t) = x(t−)};
H : D_E[0, ∞) → D_R[0, ∞), H(x)(t) = H_t(x), is continuous.

Level crossing times

τ_c : D_R[0, ∞) → [0, ∞], τ_c(x) = inf{t : x(t) > c}
τ_c^− : D_R[0, ∞) → [0, ∞], τ_c^−(x) = inf{t : x(t) ≥ c or x(t−) ≥ c}
{x : τ_c(x) = τ_c^−(x)} ⊂ C_{τ_c} ∩ C_{τ_c^−}

Note that τ_c^−(x) ≤ τ_c(x) and that x_n → x implies
τ_c^−(x) ≤ lim inf_n τ_c^−(x_n) ≤ lim sup_n τ_c(x_n) ≤ τ_c(x).

Localization

Theorem 10. Suppose that for each α > 0, τ_n^α satisfies P{τ_n^α > α} ≥ 1 − α^{-1} and {X_n(· ∧ τ_n^α)} is relatively compact. Then {X_n} is relatively compact.

Compactification of the state space

Theorem 11. Let E ⊂ Ē where Ē is compact and the topology on E is the restriction of the topology on Ē. Suppose that for each n, X_n is a process with sample paths in D_E[0, ∞) and that X_n ⇒ X in D_Ē[0, ∞). If X has sample paths in D_E[0, ∞), then X_n ⇒ X in D_E[0, ∞).

4. Convergence for Markov processes characterized by stochastic equations

- Martingale central limit theorem
- Convergence for stochastic integrals
- Convergence for SDEs driven by semimartingales
- Diffusion approximations for Markov chains
- Limit theorems involving Poisson random measures and Gaussian white noise

Martingale central limit theorem

Let M_n be a martingale such that
lim_n E[sup_{s≤t} |M_n(s) − M_n(s−)|] = 0
and
[M_n]_t → ct
in probability. Then M_n ⇒ √c W.

Vector-valued version: If for each 1 ≤ i ≤ d
lim_n E[sup_{s≤t} |M_n^i(s) − M_n^i(s−)|] = 0
and for each 1 ≤ i, j ≤ d
[M_n^i, M_n^j]_t → c_{ij} t,
then M_n ⇒ σW, where W is d-dimensional standard Brownian motion and σ is a symmetric d×d matrix satisfying σ² = c = ((c_{ij})).

Example: Products of random matrices

Let A(1), A(2), ... be iid random matrices with E[A(k)] = 0 and E[|A(k)|²] < ∞. Set X(0) = I,
X(k+1) = (I + n^{-1/2} A(k+1)) X(k) = (I + n^{-1/2} A(k+1)) ··· (I + n^{-1/2} A(1)),
and
X_n(t) = X([nt]),  M_n(t) = n^{-1/2} Σ_{k=1}^{[nt]} A(k).
Then
X_n(t) = X_n(0) + ∫_0^t dM_n(s) X_n(s−).
M_n ⇒ M, where M is a matrix-valued Brownian motion with E[M_{ij}(t) M_{kl}(t)] = E[A_{ij} A_{kl}] t. Can we conclude X_n ⇒ X satisfying
X(t) = X(0) + ∫_0^t dM(s) X(s)?
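
A simulation sketch (my own, with an assumed Gaussian-entry distribution for the A(k)) of the matrix products above; it produces one sample path of X_n on [0, T].

```python
import numpy as np

def random_matrix_product(n, T=1.0, d=2, rng=None):
    """X_n(t) = prod_{k <= [nt]} (I + n^{-1/2} A(k)) for iid mean-zero A(k)."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.eye(d)
    path = [x.copy()]
    for _ in range(int(n * T)):
        A = rng.normal(size=(d, d))            # E[A] = 0, finite second moments
        x = (np.eye(d) + A / np.sqrt(n)) @ x   # new factor multiplies on the left
        path.append(x.copy())
    return np.array(path)

path = random_matrix_product(n=5000)
```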

Example: Markov chains

X_{k+1} = H(X_k, ξ_{k+1}), where ξ_1, ξ_2, ... are iid, so that
P{X_{k+1} ∈ B | X_0, ξ_1, ..., ξ_k} = P{X_{k+1} ∈ B | X_k}.

Example:
X_{k+1}^n = X_k^n + σ(X_k^n) n^{-1/2} ξ_{k+1} + b(X_k^n) n^{-1}.
Assume E[ξ_k] = 0 and Var(ξ_k) = 1. Define
X_n(t) = X_{[nt]}^n,  A_n(t) = [nt]/n,  W_n(t) = n^{-1/2} Σ_{k=1}^{[nt]} ξ_k.
Then
X_n(t) = X_n(0) + ∫_0^t σ(X_n(s−)) dW_n(s) + ∫_0^t b(X_n(s−)) dA_n(s).
Can we conclude X_n ⇒ X satisfying
X(t) = X(0) + ∫_0^t σ(X(s)) dW(s) + ∫_0^t b(X(s)) ds?
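
A minimal sketch (not from the notes) of the recursion in this example; the ±1 innovations and the particular σ and b are illustrative assumptions. For large n the rescaled chain X_n(t) = X^n_{[nt]} should be close to the diffusion with coefficients σ and b.

```python
import numpy as np

def markov_chain_diffusion_approx(x0, sigma, b, n, T=1.0, rng=None):
    """Simulate X^n_{k+1} = X^n_k + sigma(X^n_k) xi_{k+1}/sqrt(n) + b(X^n_k)/n
    with iid mean-zero, unit-variance xi_k."""
    rng = np.random.default_rng() if rng is None else rng
    x = float(x0)
    path = [x]
    for _ in range(int(n * T)):
        xi = rng.choice([-1.0, 1.0])           # E[xi] = 0, Var(xi) = 1
        x = x + sigma(x) * xi / np.sqrt(n) + b(x) / n
        path.append(x)
    return np.array(path)

path = markov_chain_diffusion_approx(1.0, sigma=lambda x: 0.5,
                                     b=lambda x: -x, n=10_000)
```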

Stochastic integration

Definition: For cadlag processes X and Y,
∫_0^t X(s−) dY(s) = lim_{max(t_{i+1}−t_i)→0} Σ_i X(t_i)(Y(t_{i+1}∧t) − Y(t_i∧t))
whenever the limit exists in probability.

Sample paths of bounded variation: If Y is a finite variation process, the stochastic integral exists (apply the dominated convergence theorem) and
∫_0^t X(s−) dY(s) = ∫_{(0,t]} X(s−) α_Y(ds),
where α_Y is the signed measure with α_Y(0, t] = Y(t) − Y(0).

Existence for square integrable martingales

If M is a square integrable martingale, then
E[(M(t+s) − M(t))² | F_t] = E[[M]_{t+s} − [M]_t | F_t].
Let X be bounded, cadlag and adapted. Then for partitions {t_i} and {r_i},
E[(Σ X(t_i)(M(t_{i+1}∧t) − M(t_i∧t)) − Σ X(r_i)(M(r_{i+1}∧t) − M(r_i∧t)))²]
= E[∫_0^t (X(t(s−)) − X(r(s−)))² d[M]_s]
= E[∫_{(0,t]} (X(t(s−)) − X(r(s−)))² α_{[M]}(ds)],
where t(s) = t_i for s ∈ [t_i, t_{i+1}) and r(s) = r_i for s ∈ [r_i, r_{i+1}).

Let σ_c = inf{t : |X(t)| ≥ c} and
X^c(t) = X(t), t < σ_c;  X^c(t) = X(σ_c−), t ≥ σ_c.
Then
∫_0^{t∧σ_c} X(s−) dY(s) = ∫_0^{t∧σ_c} X^c(s−) dY(s).

Semimartingales

Definition: A cadlag, {F_t}-adapted process Y is an {F_t}-semimartingale if Y = M + V, where M is a local, square integrable {F_t}-martingale and V is an {F_t}-adapted, finite variation process.

Total variation: For a finite variation process Z,
T_t(Z) ≡ sup_{{t_i}} Σ |Z(t_{i+1}∧t) − Z(t_i∧t)| < ∞.

Quadratic variation: For a cadlag semimartingale,
[Y]_t = lim_{max(t_{i+1}−t_i)→0} Σ (Y(t_{i+1}∧t) − Y(t_i∧t))².

Covariation: For cadlag semimartingales,
[Y, Z]_t = lim Σ (Y(t_{i+1}∧t) − Y(t_i∧t))(Z(t_{i+1}∧t) − Z(t_i∧t)).

Probability estimates for stochastic integrals

Y = M + V, M a local square-integrable martingale, V a finite variation process. Assume |X| ≤ 1 and let σ be a stopping time. Then
P{sup_{s≤t} |∫_0^s X(r−) dY(r)| > K}
≤ P{σ ≤ t} + P{sup_{s≤t∧σ} |∫_0^s X(r−) dM(r)| > K/2} + P{sup_{s≤t} |∫_0^s X(r−) dV(r)| > K/2}
≤ P{σ ≤ t} + (16/K²) E[∫_0^{t∧σ} X²(s−) d[M]_s] + P{∫_0^t |X(s−)| dT_s(V) ≥ K/2}
≤ P{σ ≤ t} + (16/K²) E[[M]_{t∧σ}] + P{T_t(V) ≥ K/2}.

Good integrator condition

Let S be the collection of processes of the form X = Σ_i ξ_i 1_{[τ_i, τ_{i+1})}, where τ_1 < ··· < τ_m are stopping times and ξ_i is F_{τ_i}-measurable. Then
∫_0^t X(s−) dY(s) = Σ_i ξ_i (Y(t∧τ_{i+1}) − Y(t∧τ_i)).
If Y is a semimartingale, then
{∫_0^t X(s−) dY(s) : X ∈ S, |X| ≤ 1}
is stochastically bounded.

A Y satisfying this stochastic boundedness condition is a good integrator.

Bichteler-Dellacherie: Y is a good integrator if and only if Y is a semimartingale. (See, for example, Protter (1990), Theorem III.22.)

Uniformity conditions

Y_n = M_n + A_n a semimartingale adapted to {F_t^n}; T_t(A_n) the total variation of A_n on [0, t]; [M_n]_t the quadratic variation of M_n on [0, t].

Uniform tightness (UT) [JMP]:
H_t = ∪_{n≥1} {∫_0^t Z(s−) dY_n(s) : Z ∈ S_n, sup_{s≤t} |Z(s)| ≤ 1}
is stochastically bounded.

Uniformly controlled variations (UCV) [KP]: {T_t(A_n), n = 1, 2, ...} is stochastically bounded, and there exist stopping times {τ_n^α} such that P{τ_n^α ≤ α} ≤ α^{-1} and sup_n E[[M_n]_{t∧τ_n^α}] < ∞.

A sequence of semimartingales {Y_n} that converges in distribution and satisfies either UT or UCV will be called good.

Basic convergence theorem

Theorem (Jakubowski, Mémin and Pagès; Kurtz and Protter). (X_n, Y_n) {F_t^n}-adapted in D_{M^{km}×R^m}[0, ∞), Y_n = M_n + A_n an {F_t^n}-semimartingale. Assume that {Y_n} satisfies either UT or UCV.

If (X_n, Y_n) ⇒ (X, Y) in D_{M^{km}×R^m}[0, ∞) with the Skorohod topology, then
(X_n, Y_n, ∫ X_n(s−) dY_n(s)) ⇒ (X, Y, ∫ X(s−) dY(s))
in D_{M^{km}×R^m×R^k}[0, ∞).

If (X_n, Y_n) → (X, Y) in probability in the Skorohod topology on D_{M^{km}×R^m}[0, ∞), then
(X_n, Y_n, ∫ X_n(s−) dY_n(s)) → (X, Y, ∫ X(s−) dY(s))
in probability in D_{M^{km}×R^m×R^k}[0, ∞).

"In probability" cannot be replaced by "a.s."

Stochastic differential equations

Suppose F : D_{R^d}[0, ∞) → D_{M^{d×m}}[0, ∞) is nonanticipating in the sense that F(x, t) = F(x^t, t), where x^t(s) = x(s∧t).

U cadlag, adapted, with values in R^d; Y an R^m-valued semimartingale. Consider
X(t) = U(t) + ∫_0^t F(X, s−) dY(s).   (3)

(X, τ) is a local solution of (3) if X is adapted to a filtration {F_t} such that Y is an {F_t}-semimartingale, τ is an {F_t}-stopping time, and
X(t∧τ) = U(t∧τ) + ∫_0^{t∧τ} F(X, s−) dY(s),  t ≥ 0.

Strong local uniqueness holds if for any two local solutions (X_1, τ_1), (X_2, τ_2) with respect to a common filtration, X_1(· ∧ τ_1 ∧ τ_2) = X_2(· ∧ τ_1 ∧ τ_2).

Sequences of SDEs

X_n(t) = U_n(t) + ∫_0^t F_n(X_n, s−) dY_n(s).

Structure conditions. T_1[0, ∞) = {γ : γ nondecreasing, mapping [0, ∞) onto [0, ∞), with γ(t+h) − γ(t) ≤ h}.

C.a  F_n behaves well under time changes: If {x_n} ⊂ D_{R^d}[0, ∞), {γ_n} ⊂ T_1[0, ∞), and {x_n ∘ γ_n} is relatively compact in D_{R^d}[0, ∞), then {F_n(x_n) ∘ γ_n} is relatively compact in D_{M^{d×m}}[0, ∞).

C.b  F_n converges to F: If (x_n, y_n) → (x, y) in D_{R^d×R^m}[0, ∞), then (x_n, y_n, F_n(x_n)) → (x, y, F(x)) in D_{R^d×R^m×M^{d×m}}[0, ∞).

Note that C.b implies continuity of F, that is, (x_n, y_n) → (x, y) implies (x_n, y_n, F(x_n)) → (x, y, F(x)).

Examples

F(x, t) = f(x(t)), f continuous
F(x, t) = f(∫_0^t h(x(s)) ds), f, h continuous
F(x, t) = ∫_0^t h(t−s, s, x(s)) ds, h continuous

Convergence theorem

Theorem 12. Suppose (U_n, X_n, Y_n) satisfies
X_n(t) = U_n(t) + ∫_0^t F_n(X_n, s−) dY_n(s),
(U_n, Y_n) ⇒ (U, Y), {Y_n} is good, {F_n} and F satisfy the structure conditions, and sup_n sup_x |F_n(x, 0)| < ∞. Then {(U_n, X_n, Y_n)} is relatively compact and any limit point satisfies
X(t) = U(t) + ∫_0^t F(X, s−) dY(s).

If, in addition, (U_n, Y_n) → (U, Y) in probability and strong uniqueness holds, then (U_n, X_n, Y_n) → (U, X, Y) in probability.

Euler scheme

Let 0 = t_0 < t_1 < ···. The Euler approximation for
X(t) = U(t) + ∫_0^t F(X, s−) dY(s)
is given by
X̂(t_{k+1}) = X̂(t_k) + U(t_{k+1}) − U(t_k) + F(X̂, t_k)(Y(t_{k+1}) − Y(t_k)).

Consistency. Let {t_k^n} satisfy max_k |t_{k+1}^n − t_k^n| → 0, and define
(Y_n(t), U_n(t)) = (Y(t_k^n), U(t_k^n)),  t_k^n ≤ t < t_{k+1}^n.
Then (U_n, Y_n) → (U, Y) and {Y_n} is good. The Euler scheme corresponding to {t_k^n} satisfies
X̂_n(t) = U_n(t) + ∫_0^t F(X̂_n, s−) dY_n(s).
If F satisfies the structure conditions and strong uniqueness holds, then X̂_n → X in probability. (In the diffusion case, Maruyama (1955).)
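
A sketch of the Euler recursion above for a scalar equation (my own illustration; the driving semimartingale here is an assumed Brownian-plus-compound-Poisson path, and the coefficient F is a simple linear choice).

```python
import numpy as np

def euler_semimartingale(u, y_increments, F, x0):
    """X(t_{k+1}) = X(t_k) + U(t_{k+1}) - U(t_k) + F(X, t_k)(Y(t_{k+1}) - Y(t_k)).

    u: values U(t_k); y_increments: Y(t_{k+1}) - Y(t_k);
    F(path, k): coefficient evaluated from the path up to t_k.
    """
    x = [x0]
    for k, dy in enumerate(y_increments):
        x.append(x[-1] + (u[k + 1] - u[k]) + F(x, k) * dy)
    return np.array(x)

# Illustrative driver: Brownian increments plus compound Poisson jumps
rng = np.random.default_rng(0)
n, T = 1000, 1.0
dt = T / n
dW = rng.normal(scale=np.sqrt(dt), size=n)
dJ = rng.poisson(0.5 * dt, size=n) * rng.normal(size=n)
path = euler_semimartingale(u=np.ones(n + 1), y_increments=dW + dJ,
                            F=lambda x, k: -0.5 * x[-1], x0=1.0)
```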

Sequences of Poisson random measures

ξ_n Poisson random measures with mean measures nν × m; h measurable. For A ∈ B(U) satisfying ∫_A h²(u) ν(du) < ∞, define
M_n(A, t) = n^{-1/2} ∫_{A×[0,t]} h(u)(ξ_n(du × ds) − n ν(du) ds).
M_n is an orthogonal martingale random measure with
[M_n(A), M_n(B)]_t = (1/n) ∫_{(A∩B)×[0,t]} h(u)² ξ_n(du × ds),
⟨M_n(A), M_n(B)⟩_t = t ∫_{A∩B} h(u)² ν(du).
M_n converges to Gaussian white noise W with
E[W(A, t)W(B, s)] = t∧s ∫_{A∩B} h(u)² ν(du),
E[W(ϕ_1, t)W(ϕ_2, s)] = t∧s ∫ ϕ_1(u) ϕ_2(u) h(u)² ν(du).

Continuous-time Markov chains

X_n(t) = X_n(0) + n^{-1/2} ∫_{U_1×[0,t]} α_1(X_n(s−), u) ξ_n(du × ds) + n^{-1} ∫_{U_2×[0,t]} α_2(X_n(s−), u) ξ_n(du × ds)

Assume ∫_{U_1} α_1(x, u) ν(du) = 0. Then
X_n(t) = X_n(0) + n^{-1/2} ∫_{U_1×[0,t]} α_1(X_n(s−), u) ξ̃_n(du × ds) + n^{-1} ∫_{U_2×[0,t]} α_2(X_n(s−), u) ξ_n(du × ds).

Can we conclude X_n ⇒ X satisfying
X(t) = X(0) + ∫_{U_1×[0,t]} α_1(X(s), u) W(du × ds) + ∫_0^t ∫_{U_2} α_2(X(s−), u) ν(du) ds?

Discrete-time Markov chains

Consider
X_{k+1}^n = X_k^n + σ(X_k^n, ξ_{k+1}) n^{-1/2} + b(X_k^n, ζ_{k+1}) n^{-1},
where {(ξ_k, ζ_k)} is iid in U_1 × U_2; µ is the distribution of ξ_k and ν the distribution of ζ_k. Assume ∫_{U_1} σ(x, u_1) µ(du_1) = 0. Define
M_n(A, t) = n^{-1/2} Σ_{k=1}^{[nt]} (1_A(ξ_k) − µ(A)),  V_n(B, t) = n^{-1} Σ_{k=1}^{[nt]} 1_B(ζ_k).

Stochastic equation driven by random measures

Then X_n(t) ≡ X_{[nt]}^n satisfies
X_n(t) = X_n(0) + ∫_{U_1×[0,t]} σ_n(X_n(s−), u) M_n(du × ds) + ∫_{U_2×[0,t]} b_n(X_n(s−), u) V_n(du × ds).
V_n(A, t) → t ν(A) and M_n(A, t) ⇒ M(A, t), where M is Gaussian with covariance
E[M(A, t)M(B, s)] = t∧s (µ(A ∩ B) − µ(A)µ(B)).
Can we conclude that X_n ⇒ X satisfying
X(t) = X(0) + ∫_{U_1×[0,t]} σ(X(s), u) M(du × ds) + ∫_0^t ∫_{U_2} b(X(s), u) ν(du) ds?

Good integrator condition

H a separable Banach space (H = L²(ν), L¹(ν), L²(µ), etc.); Y(ϕ, t) a semimartingale for each ϕ ∈ H with Y(Σ_k a_k ϕ_k, t) = Σ_k a_k Y(ϕ_k, t).

Let S be the collection of cadlag, adapted processes of the form Z(t) = Σ_{k=1}^m ξ_k(t) ϕ_k, ϕ_k ∈ H. Define
I_Y(Z, t) = ∫_{U×[0,t]} Z(u, s−) Y(du × ds) = Σ_k ∫_0^t ξ_k(s−) dY(ϕ_k, s).

Basic assumption:
H_t = {sup_{s≤t} |I_Y(Z, s)| : Z ∈ S, sup_{s≤t} ‖Z(s)‖_H ≤ 1}
is stochastically bounded. (Call Y a good H^#-semimartingale.)

The integral extends to all cadlag, adapted H-valued processes.

Convergence for H^#-semimartingales

H a separable Banach space of functions on U; Y_n an {F_t^n}-H^#-semimartingale (for each ϕ ∈ H, Y_n(ϕ, ·) is an {F_t^n}-semimartingale); {X_n} cadlag, H-valued processes.

(X_n, Y_n) ⇒ (X, Y) if
(X_n, Y_n(ϕ_1, ·), ..., Y_n(ϕ_m, ·)) ⇒ (X, Y(ϕ_1, ·), ..., Y(ϕ_m, ·))
in D_{H×R^m}[0, ∞) for each choice of ϕ_1, ..., ϕ_m ∈ H.

Convergence for stochastic integrals

Let
H_{n,t} = {sup_{s≤t} |I_{Y_n}(Z, s)| : Z ∈ S_n, sup_{s≤t} ‖Z(s)‖_H ≤ 1}.

Definition: {Y_n} is uniformly tight if ∪_n H_{n,t} is stochastically bounded for each t.

Theorem 13 (Protter and Kurtz (1996)). Assume that {Y_n} is uniformly tight. If (X_n, Y_n) ⇒ (X, Y), then there is a filtration {F_t} such that Y is an {F_t}-adapted, good, H^#-semimartingale, X is {F_t}-adapted, and (X_n, Y_n, I_{Y_n}(X_n)) ⇒ (X, Y, I_Y(X)). If (X_n, Y_n) → (X, Y) in probability, then (X_n, Y_n, I_{Y_n}(X_n)) → (X, Y, I_Y(X)) in probability.

Cho (1994) for distribution-valued martingales; Jakubowski (1995) for Hilbert-space-valued semimartingales.

Sequences of SDEs

X_n(t) = U_n(t) + ∫_{U×[0,t]} F_n(X_n, s−, u) Y_n(du × ds).

Structure conditions. T_1[0, ∞) = {γ : γ nondecreasing and mapping [0, ∞) onto [0, ∞), with γ(t+h) − γ(t) ≤ h}.

C.a  F_n behaves well under time changes: If {x_n} ⊂ D_{R^d}[0, ∞), {γ_n} ⊂ T_1[0, ∞), and {x_n ∘ γ_n} is relatively compact in D_{R^d}[0, ∞), then {F_n(x_n) ∘ γ_n} is relatively compact in D_{H^d}[0, ∞).

C.b  F_n converges to F: If (x_n, y_n) → (x, y) in D_{R^d×R^m}[0, ∞), then (x_n, y_n, F_n(x_n)) → (x, y, F(x)) in D_{R^d×R^m×H^d}[0, ∞).

Note that C.b implies continuity of F, that is, (x_n, y_n) → (x, y) implies (x_n, y_n, F(x_n)) → (x, y, F(x)).

SDE convergence theorem

Theorem 14. Suppose that (U_n, X_n, Y_n) satisfies
X_n(t) = U_n(t) + ∫_{U×[0,t]} F_n(X_n, s−, u) Y_n(du × ds),
that (U_n, Y_n) ⇒ (U, Y), and that {Y_n} is uniformly tight. Assume that {F_n} and F satisfy the structure conditions and that sup_n sup_x ‖F_n(x, 0)‖_{H^d} < ∞. Then {(U_n, X_n, Y_n)} is relatively compact and any limit point satisfies
X(t) = U(t) + ∫_{U×[0,t]} F(X, s−, u) Y(du × ds).

5. Convergence for Markov processes characterized by martingale problems

- Tightness estimates based on generators
- Convergence of processes based on convergence of generators
- Averaging
- A measure-valued limit

Compactness conditions based on compactness of real functions

Compact containment condition: For each T > 0 and ε > 0, there exists a compact K ⊂ E such that
lim inf_n P{X_n(s) ∈ K, s ≤ T} ≥ 1 − ε.

Theorem 15. Let D be dense in C̄(E) in the compact uniform topology. {X_n} is relatively compact (in distribution in D_E[0, ∞)) iff {X_n} satisfies the compact containment condition and {f ∘ X_n} is relatively compact for each f ∈ D.

Note that
E[(f(X_n(t+u)) − f(X_n(t)))² | F_t^n]
= E[f²(X_n(t+u)) − f²(X_n(t)) | F_t^n] − 2 f(X_n(t)) E[f(X_n(t+u)) − f(X_n(t)) | F_t^n].

Limits of generators

X_n(t) = n^{-1/2}(N_b(nt/2) − N_d(nt/2)), where N_b, N_d are independent, unit Poisson processes. For f ∈ C_c³(R),
A_n f(x) = (n/2)(f(x + n^{-1/2}) + f(x − n^{-1/2}) − 2f(x)) = ½ f''(x) + O(n^{-1/2}).
Set Af = ½ f''.

E[(f(X_n(t+u)) − f(X_n(t)))² | F_t^n]
= E[f²(X_n(t+u)) − f²(X_n(t)) | F_t^n] − 2 f(X_n(t)) E[f(X_n(t+u)) − f(X_n(t)) | F_t^n]
≤ u(‖A_n f²‖ + 2‖f‖ ‖A_n f‖).

It follows that {f ∘ X_n} is relatively compact in D_R[0, ∞), and using the fact that
P{sup_{s≤t} |X_n(s)| ≥ k} ≤ 4 E[X_n(t)²]/k² = 4t/k²,
the compact containment condition holds and {X_n} is relatively compact in D_R[0, ∞).

Limits of martingales

Lemma 16. For each n = 1, 2, ..., let M_n and Z_n be cadlag stochastic processes and let M_n be an {F_t^{(M_n,Z_n)}}-martingale. Suppose that (M_n, Z_n) ⇒ (M, Z). If for each t ≥ 0 the sequence {M_n(t)} is uniformly integrable, then M is an {F_t^{(M,Z)}}-martingale.

Recall that if a sequence of real-valued random variables {ψ_n} is uniformly integrable and ψ_n ⇒ ψ, then E[ψ_n] → E[ψ].

It follows that if X is the limit of a subsequence of {X_n}, then
f(X_n(t)) − ∫_0^t A_n f(X_n(s)) ds ⇒ f(X(t)) − ∫_0^t Af(X(s)) ds
(along the subsequence) and X is a solution of the martingale problem for A.

Elementary convergence theorem

E compact; E_n, n = 1, 2, ..., complete, separable metric spaces; η_n : E_n → E; Y_n Markov in E_n with generator A_n; X_n = η_n(Y_n) cadlag.

A ⊂ C̄(E) × C̄(E) (for simplicity, we write Af = g if (f, g) ∈ A); D(A) = {f : (f, g) ∈ A}.

Suppose that for each (f, g) ∈ A there exist f_n ∈ D(A_n) such that
sup_{x∈E_n} (|f_n(x) − f ∘ η_n(x)| + |A_n f_n(x) − g ∘ η_n(x)|) → 0.

Then {X_n} is relatively compact and any limit point is a solution of the martingale problem for A.

Reflecting random walk

E_n = {0, 1/n, 2/n, ...},
A_n f(x) = n²λ(f(x + 1/n) − f(x)) + n²λ 1_{{x>0}}(f(x − 1/n) − f(x)).

Let f ∈ C_c³[0, ∞). Then
A_n f(x) = λ f''(x) + O(1/n),  x > 0,
A_n f(0) = nλ f'(0) + (λ/2) f''(0) + O(1/n).

Assume f'(0) = 0; there is still a discontinuity at 0. Let f_n = f + (1/n)h with f ∈ {f ∈ C_c³[0, ∞) : f'(0) = 0} and h ∈ C_c³[0, ∞). Then
A_n f_n(x) = λ f''(x) + (1/n) λ h''(x) + O(1/n),  x > 0,
A_n f_n(0) = (λ/2) f''(0) + λ h'(0) + O(1/n).

Assume that h'(0) = ½ f''(0). Then, noting that η_n(x) = x for x ∈ E_n,
sup_{x∈E_n} (|f_n(x) − f(x)| + |A_n f_n(x) − Af(x)|) → 0,
where Af = λf'' with D(A) = {f ∈ C_c³[0, ∞) : f'(0) = 0}.
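
A simulation sketch (not part of the notes) of the reflecting random walk with the jump rates used in A_n above; λ and n at the bottom are illustrative values.

```python
import numpy as np

def reflecting_random_walk(lam, n, T=1.0, x0=0.0, rng=None):
    """Continuous-time walk on {0, 1/n, 2/n, ...}: up jumps of size 1/n at
    rate n^2*lam, down jumps at rate n^2*lam suppressed at 0."""
    rng = np.random.default_rng() if rng is None else rng
    t, x = 0.0, x0
    times, states = [t], [x]
    while True:
        up_rate = n * n * lam
        down_rate = n * n * lam if x > 0 else 0.0
        total = up_rate + down_rate
        t += rng.exponential(1.0 / total)
        if t > T:
            break
        x += 1.0 / n if rng.uniform() < up_rate / total else -1.0 / n
        times.append(t)
        states.append(x)
    return times, states

times, states = reflecting_random_walk(lam=1.0, n=50)
```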

Averaging

Y stationary with marginal distribution ν, ergodic, and independent of W; Y_n(t) = Y(β_n t), β_n → ∞.
X_n(t) = X(0) + ∫_0^t σ(X_n(s), Y_n(s)) dW(s) + ∫_0^t b(X_n(s), Y_n(s)) ds

Lemma 17. Let {µ_n} be measures on U × [0, ∞) satisfying µ_n(U × [0, t]) = t. Suppose
∫_U ϕ(u) µ_n(du × [0, t]) → ∫_U ϕ(u) µ(du × [0, t])
for each ϕ ∈ C̄(U) and t ≥ 0, and that x_n → x in D_E[0, ∞). Then
∫_{U×[0,t]} h(u, x_n(s)) µ_n(du × ds) → ∫_{U×[0,t]} h(u, x(s)) µ(du × ds)
for each h ∈ C̄(U × E) and t ≥ 0.

Convergence of averaged generator

Assume that σ and b are bounded and continuous. Then for f ∈ C_c²(R),
f(X_n(t)) − f(X_n(0)) − ∫_0^t Af(X_n(s), Y_n(s)) ds = ∫_0^t σ(X_n(s), Y_n(s)) f'(X_n(s)) dW(s),
Af(x, y) = ½ σ²(x, y) f''(x) + b(x, y) f'(x),
is a martingale, {X_n} is relatively compact, and
∫_0^t ϕ(Y_n(s)) ds = (1/β_n) ∫_0^{β_n t} ϕ(Y(s)) ds → t ∫_U ϕ(u) ν(du),
so any limit point of {X_n} is a solution of the martingale problem for
Āf(x) = ∫_U Af(x, u) ν(du).

Khas'minskii (1966), Kurtz (1992), Pinsky (1991)

Coupled system

X_n(t) = X(0) + ∫_0^t σ(X_n(s), Y_n(s)) dW_1(s) + ∫_0^t b(X_n(s), Y_n(s)) ds
Y_n(t) = Y(0) + ∫_0^t √n α(X_n(s), Y_n(s)) dW_2(s) + ∫_0^t n β(X_n(s), Y_n(s)) ds

Consequently,
f(X_n(t)) − f(X_n(0)) − ∫_0^t Af(X_n(s), Y_n(s)) ds
and
g(Y_n(t)) − g(Y(0)) − ∫_0^t n Bg(X_n(s), Y_n(s)) ds,
where
Bg(x, y) = ½ α²(x, y) g''(y) + β(x, y) g'(y),
are martingales.
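
A rough Euler discretization (my own sketch, with illustrative coefficients) of the fast-slow system above; for it to be meaningful the time step must be small compared to 1/n, since Y_n moves on the fast scale.

```python
import numpy as np

def coupled_fast_slow(n, sigma, b, alpha, beta, x0=0.0, y0=0.0, T=1.0,
                      n_steps=10_000, rng=None):
    """Euler steps for X_n (slow, driven by W_1) and Y_n (fast, driven by W_2
    with coefficients sqrt(n)*alpha and n*beta)."""
    rng = np.random.default_rng() if rng is None else rng
    dt = T / n_steps
    x, y = x0, y0
    xs = [x]
    for _ in range(n_steps):
        dW1, dW2 = rng.normal(scale=np.sqrt(dt), size=2)
        x_new = x + sigma(x, y) * dW1 + b(x, y) * dt
        y_new = y + np.sqrt(n) * alpha(x, y) * dW2 + n * beta(x, y) * dt
        x, y = x_new, y_new
        xs.append(x)
    return np.array(xs)

# Illustrative choice: Y_n a fast Ornstein-Uhlenbeck process averaged over by X_n
xs = coupled_fast_slow(n=100, sigma=lambda x, y: 1.0 + 0.1 * y,
                       b=lambda x, y: -x, alpha=lambda x, y: 1.0,
                       beta=lambda x, y: -y)
```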

Estimates

Suppose that for g ∈ C_c²(R), Bg(x, y) is bounded, and that, taking g(y) = y²,
Bg(x, y) ≤ K_1 − K_2 y².
Then, assuming E[Y(0)²] < ∞,
E[Y_n(t)²] ≤ E[Y(0)²] + n ∫_0^t (K_1 − K_2 E[Y_n(s)²]) ds,
which implies
E[Y_n(t)²] ≤ E[Y(0)²] e^{−nK_2 t} + (K_1/K_2)(1 − e^{−nK_2 t}).

The sequence of measures {Γ_n} defined by
∫_{R×[0,t]} ϕ(y) Γ_n(dy × ds) = ∫_0^t ϕ(Y_n(s)) ds
is relatively compact.

Convergence of the averaged process

If σ and b are bounded, then {X_n} is relatively compact and any limit point (X, Γ) of {(X_n, Γ_n)} must satisfy:
f(X(t)) − f(X(0)) − ∫_{R×[0,t]} Af(X(s), y) Γ(dy × ds)
is a martingale for each f ∈ C_c²(R), and
∫_{R×[0,t]} Bg(X(s), y) Γ(dy × ds)   (4)
is a martingale for each g ∈ C_c²(R). But (4) is continuous and of finite variation. Therefore
∫_{R×[0,t]} Bg(X(s), y) Γ(dy × ds) = ∫_0^t ∫_R Bg(X(s), y) γ_s(dy) ds = 0.

Characterizing γ_s

For almost every s,
∫_R Bg(X(s), y) γ_s(dy) = 0,  g ∈ C_c²(R).
But, fixing x and setting B_x g(y) = Bg(x, y),
∫_R B_x g(y) π(dy) = 0,  g ∈ C_c²(R),
implies π is a stationary distribution for the diffusion with generator
B_x g(y) = ½ α_x²(y) g''(y) + β_x(y) g'(y),  α_x(y) = α(x, y), β_x(y) = β(x, y).

If α_x(y) > 0 for all y, then the stationary distribution π_x is uniquely determined. If uniqueness holds for all x, then Γ(dy × ds) = π_{X(s)}(dy) ds and X is a solution of the martingale problem for
Āf(x) = ∫_R Af(x, y) π_x(dy).

Moran models in population genetics

E: type space; B: generator of the mutation process; σ: selection coefficient.

Generator with state space E^n:
A_n f(x) = Σ_{i=1}^n B_i f(x) + (1/(2(n−2))) Σ_{k=1}^n Σ_{i≠k} Σ_{j≠i,k} (1 + (2/n) σ(x_i, x_j)) (f(η_k(x | x_i)) − f(x)),
η_k(x | z) = (x_1, ..., x_{k−1}, z, x_{k+1}, ..., x_n).

Note that if (X_1, ..., X_n) is a solution of the martingale problem for A_n, then for any permutation σ, (X_{σ_1}, ..., X_{σ_n}) is a solution of the martingale problem for A_n.

Conditioned martingale lemma

Lemma 18. Suppose U and V are {F_t}-adapted, U(t) − ∫_0^t V(s) ds is an {F_t}-martingale, and G_t ⊂ F_t. Then
E[U(t) | G_t] − ∫_0^t E[V(s) | G_s] ds
is a {G_t}-martingale.

Generator for the measure-valued process

P_n(E) = {µ ∈ P(E) : µ = (1/n) Σ_{i=1}^n δ_{x_i}}. For f ∈ B(E^m) and µ = (1/n) Σ_{i=1}^n δ_{x_i} ∈ P_n(E),
⟨f, µ^{(m)}⟩ = (1/(n(n−1)···(n−m+1))) Σ_{i_1≠···≠i_m} f(x_{i_1}, ..., x_{i_m}).

Z(t) = (1/n) Σ_{i=1}^n δ_{X_i(t)}.

Symmetry and the conditioned martingale lemma imply that
⟨f, Z^{(n)}(t)⟩ − ∫_0^t ⟨A_n f, Z^{(n)}(s)⟩ ds
is an {F_t^Z}-martingale. Define
F(µ) = ⟨f, µ^{(n)}⟩,  A_n F(µ) = ⟨A_n f, µ^{(n)}⟩.

Convergence of the generator

If f depends on m variables (m < n),
A_n f(x) = Σ_{i=1}^m B_i f(x)
+ (1/(2(n−2))) Σ_{k=1}^m Σ_{1≤i≤m, i≠k} Σ_{1≤j≤n, j≠k,i} (1 + (2/n) σ(x_i, x_j)) (f(η_k(x | x_i)) − f(x))
+ (1/(2(n−2))) Σ_{k=1}^m Σ_{i=m+1}^n Σ_{1≤j≤n, j≠k,i} (1 + (2/n) σ(x_i, x_j)) (f(η_k(x | x_i)) − f(x))

and
⟨A_n f, µ^{(n)}⟩ = Σ_{i=1}^m ⟨B_i f, µ^{(m)}⟩ + ½ Σ_{1≤i≠k≤m} (⟨Φ_{ik} f, µ^{(m−1)}⟩ − ⟨f, µ^{(m)}⟩)
+ Σ_{k=1}^m (⟨σ_k f, µ^{(m+1)}⟩ − ⟨σ̄ f, µ^{(m+2)}⟩) + O(1/n).

Conclusions

- If E is compact, the compact containment condition is immediate (P(E) is compact).
- A_n F is bounded as long as the B_i f are bounded.
- The limit of uniformly integrable martingales is a martingale.
- Z_n ⇒ Z if uniqueness holds for the limiting martingale problem.

6. The lecturer's whims

- Wong-Zakai corrections
- Processes with boundary conditions
- Averaging for stochastic equations

Not good (evil?) sequences

Markov chains: Let {ξ_k} be an irreducible finite Markov chain with stationary distribution π(x), Σ_x g(x)π(x) = 0, and let h satisfy Ph − h = −g (where Ph(x) = Σ_y p(x, y)h(y)). Define
V_n(t) = n^{-1/2} Σ_{k=1}^{[nt]} g(ξ_k).
Then V_n ⇒ σW, where
σ² = Σ_{x,y} π(x) p(x, y)(Ph(x) − h(y))².
{V_n} is not good, but
V_n(t) = n^{-1/2} Σ_{k=1}^{[nt]} (h(ξ_k) − Ph(ξ_{k−1})) + n^{-1/2}(Ph(ξ_0) − Ph(ξ_{[nt]})) = M_n(t) + Z_n(t),
where {M_n} is a good sequence of martingales and Z_n → 0.
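
A numerical sketch (not from the notes) of this Markov chain decomposition: solve the Poisson equation (I − P)h = g, then split V_n into the martingale part M_n and the vanishing part Z_n exactly as above. The two-state chain and the function g at the bottom are assumed illustrative data.

```python
import numpy as np

def chain_clt_decomposition(P, g, n, T=1.0, rng=None):
    """Return V_n, M_n, Z_n on the grid k/n, k <= [nT], where
    V_n(t) = n^{-1/2} sum_{k<=[nt]} g(xi_k) and V_n = M_n + Z_n."""
    rng = np.random.default_rng() if rng is None else rng
    d = P.shape[0]
    # Poisson equation (I - P) h = g; solvable since g is centered under pi
    h = np.linalg.lstsq(np.eye(d) - P, g, rcond=None)[0]
    Ph = P @ h
    xi = [0]
    for _ in range(int(n * T)):
        xi.append(rng.choice(d, p=P[xi[-1]]))
    xi = np.array(xi)
    V = np.cumsum(g[xi[1:]]) / np.sqrt(n)
    M = np.cumsum(h[xi[1:]] - Ph[xi[:-1]]) / np.sqrt(n)   # martingale part
    Z = (Ph[xi[0]] - Ph[xi[1:]]) / np.sqrt(n)             # vanishing part
    return V, M, Z

# Illustrative two-state chain; pi = (2/3, 1/3), so g is centered
P = np.array([[0.9, 0.1], [0.2, 0.8]])
g = np.array([1.0, -2.0])
V, M, Z = chain_clt_decomposition(P, g, n=10_000)
```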

Piecewise linear interpolation of W (the classical Wong-Zakai example):
W_n(t) = W([nt]/n) + (t − [nt]/n) n (W(([nt]+1)/n) − W([nt]/n)).
Then
W_n(t) = W(([nt]+1)/n) − (([nt]+1)/n − t) n (W(([nt]+1)/n) − W([nt]/n)) = M_n(t) + Z_n(t),
where {M_n} is good (take the filtration to be F_t^n = F_{([nt]+1)/n}) and Z_n → 0.

Renewal processes: N(t) = max{k : Σ_{i=1}^k ξ_i ≤ t}, {ξ_i} iid, positive, E[ξ_k] = µ, Var(ξ_k) = σ². Define
V_n(t) = (N(nt) − nt/µ)/√n.
Then V_n ⇒ αW, α = σ/µ^{3/2}, and
V_n(t) = ((N(nt)+1)µ − S_{N(nt)+1})/(µ√n) + (S_{N(nt)+1} − nt − µ)/(µ√n) = M_n(t) + Z_n(t).

Not so evil after all

Assume V_n(t) = Y_n(t) + Z_n(t), where {Y_n} is good and Z_n → 0. In addition, assume {∫ Z_n dZ_n}, {[Y_n, Z_n]}, and {[Z_n]} are good.

X_n(t) = X_n(0) + ∫_0^t F(X_n(s−)) dV_n(s)
       = X_n(0) + ∫_0^t F(X_n(s−)) dY_n(s) + ∫_0^t F(X_n(s−)) dZ_n(s)

Integrate by parts using
F(X_n(t)) = F(X_n(0)) + ∫_0^t F'(X_n(s−)) F(X_n(s−)) dV_n(s) + R_n(t),
where R_n can be estimated in terms of [V_n] = [Y_n] + 2[Y_n, Z_n] + [Z_n].

Integration by parts

∫_0^t F(X_n(s−)) dZ_n(s)
= F(X_n(t)) Z_n(t) − F(X_n(0)) Z_n(0) − ∫_0^t Z_n(s−) dF(X_n(s)) − [F ∘ X_n, Z_n]_t
= F(X_n(t)) Z_n(t) − F(X_n(0)) Z_n(0)
  − ∫_0^t Z_n(s−) F'(X_n(s−)) F(X_n(s−)) dY_n(s) − ∫_0^t F'(X_n(s−)) F(X_n(s−)) Z_n(s−) dZ_n(s)
  − ∫_0^t Z_n(s−) dR_n(s) − ∫_0^t F'(X_n(s−)) F(X_n(s−)) d([Y_n, Z_n]_s + [Z_n]_s) − [R_n, Z_n]_t

Theorem 19. Assume that V_n = Y_n + Z_n, where {Y_n} is good, Z_n → 0, and {∫ Z_n dZ_n} is good. If
(X_n(0), Y_n, Z_n, ∫ Z_n dZ_n, [Y_n, Z_n]) ⇒ (X(0), Y, 0, H, K),
then {X_n} is relatively compact and any limit point satisfies
X(t) = X(0) + ∫_0^t F(X(s−)) dY(s) + ∫_0^t F'(X(s−)) F(X(s−)) d(H(s) − K(s)).

Note: For all the examples, H(t) − K(t) = ct for some c.

Reflecting diffusions

X has values in D̄ and satisfies
X(t) = X(0) + ∫_0^t σ(X(s)) dW(s) + ∫_0^t b(X(s)) ds + ∫_0^t η(X(s)) dλ(s),
where λ is nondecreasing and increases only when X(t) ∈ ∂D. By Itô's formula,
f(X(t)) − f(X(0)) − ∫_0^t Af(X(s)) ds − ∫_0^t Bf(X(s)) dλ(s) = ∫_0^t ∇f(X(s))^T σ(X(s)) dW(s),
where
Af(x) = ½ Σ_{i,j} a_{ij}(x) ∂_i∂_j f(x) + b(x)·∇f(x),  Bf(x) = η(x)·∇f(x).

Either take D(A) = {f ∈ C_c²(D̄) : Bf(x) = 0, x ∈ ∂D}, or formulate a constrained martingale problem with solution (X, λ) by requiring X to take values in D̄, λ to be nondecreasing and increase only when X ∈ ∂D, and
f(X(t)) − f(X(0)) − ∫_0^t Af(X(s)) ds − ∫_0^t Bf(X(s)) dλ(s)
to be an {F_t^{X,λ}}-martingale.

Instantaneous jump conditions

X has values in D̄ and satisfies
X(t) = X(0) + ∫_0^t σ(X(s)) dW(s) + ∫_0^t b(X(s)) ds + ∫_0^t α(X(s−), ζ_{N(s−)+1}) dN(s),
where ζ_1, ζ_2, ... are iid with distribution ν, independent of X(0) and W, and N(t) is the number of times X has hit the boundary by time t. Then
f(X(t)) − f(X(0)) − ∫_0^t Af(X(s)) ds − ∫_0^t Bf(X(s−)) dN(s)
= ∫_0^t ∇f(X(s))^T σ(X(s)) dW(s)
+ ∫_0^t (f(X(s−) + α(X(s−), ζ_{N(s−)+1})) − ∫_U f(X(s−) + α(X(s−), u)) ν(du)) dN(s),
where
Af(x) = ½ Σ_{i,j} a_{ij}(x) ∂_i∂_j f(x) + b(x)·∇f(x),  Bf(x) = ∫_U (f(x + α(x, u)) − f(x)) ν(du).

Approximation of reflecting diffusion

X_n has values in D̄ and satisfies
X_n(t) = X(0) + ∫_0^t σ(X_n(s)) dW(s) + ∫_0^t b(X_n(s)) ds + (1/n) ∫_0^t η(X_n(s−)) dN_n(s),
and set λ_n(t) ≡ N_n(t)/n.

Assume there exists ϕ ∈ C² such that ϕ, Aϕ and ∇ϕ^T σ are bounded on D̄ and inf_{x∈∂D} η(x)·∇ϕ(x) ≥ ε > 0. Then
inf_{x∈∂D} n(ϕ(x + (1/n)η(x)) − ϕ(x)) λ_n(t) ≤ 2‖ϕ‖ + 2‖Aϕ‖ t + |∫_0^t ∇ϕ(X_n(s))^T σ(X_n(s)) dW(s)|
and {λ_n(t)} is stochastically bounded.

Convergence of time changed sequence

γ_n(t) = inf{s : s + λ_n(s) > t},  t ≤ γ_n(t) + λ_n ∘ γ_n(t) ≤ t + 1/n,  X̃_n(t) ≡ X_n ∘ γ_n(t)

X̃_n(t) = X(0) + ∫_0^t σ(X̃_n(s)) dW ∘ γ_n(s) + ∫_0^t b(X̃_n(s)) dγ_n(s) + ∫_0^t η(X̃_n(s−)) dλ_n ∘ γ_n(s)

Check relative compactness and goodness of {(W ∘ γ_n, γ_n, λ_n ∘ γ_n)}: γ_n is Lipschitz with Lipschitz constant 1; λ_n ∘ γ_n is finite variation, nondecreasing and bounded by t + 1/n; M_n = W ∘ γ_n is a martingale with [M_n]_t = γ_n(t).

If σ and b are bounded and continuous, then (by the general theorem) {(X̃_n, W ∘ γ_n, γ_n, λ_n ∘ γ_n)} is relatively compact and any limit point will satisfy
X̃(t) = X̃(0) + ∫_0^t σ(X̃(s)) dW ∘ γ(s) + ∫_0^t b(X̃(s)) dγ(s) + ∫_0^t η(X̃(s)) dλ̃(s),
where γ(t) + λ̃(t) = t. (Note that γ and λ̃ are continuous.)

Convergence of original sequence

X̃(t) = X̃(0) + ∫_0^t σ(X̃(s)) dW ∘ γ(s) + ∫_0^t b(X̃(s)) dγ(s) + ∫_0^t η(X̃(s)) dλ̃(s)

If γ is constant on [a, b], then for a ≤ t ≤ b, X̃(t) ∈ ∂D and
X̃(t) = X̃(a) + ∫_a^t η(X̃(s)) ds.
Under appropriate conditions on η, no such interval with a < b can exist, and γ must be strictly increasing. Then (at least along a subsequence) X_n ⇒ X ≡ X̃ ∘ γ^{-1} and, setting λ = λ̃ ∘ γ^{-1},
X(t) = X(0) + ∫_0^t σ(X(s)) dW(s) + ∫_0^t b(X(s)) ds + ∫_0^t η(X(s)) dλ(s).

Rapidly varying components

W a standard Brownian motion in R; X_n(0) independent of W; ζ a stochastic process with state space U, independent of W and X_n(0). Define ζ_n(t) = ζ(nt) and
X_n(t) = X_n(0) + ∫_0^t σ(X_n(s), ζ_n(s)) dW(s) + ∫_0^t b(X_n(s), ζ_n(s)) ds.
Define
M_n(A, t) = ∫_0^t 1_A(ζ_n(s)) dW(s),  V_n(A, t) = ∫_0^t 1_A(ζ_n(s)) ds,
so that
X_n(t) = X_n(0) + ∫_{U×[0,t]} σ(X_n(s), u) M_n(du × ds) + ∫_{U×[0,t]} b(X_n(s), u) V_n(du × ds).

Convergence of driving processes

Define
M_n(ϕ, t) = ∫_0^t ϕ(ζ_n(s)) dW(s),  V_n(ϕ, t) = ∫_0^t ϕ(ζ_n(s)) ds.
Assume that
(1/t) ∫_0^t ϕ(ζ(s)) ds → ∫_U ϕ(u) ν(du)
in probability for each ϕ ∈ C̄(U).

Observe that
[M_n(ϕ_1, ·), M_n(ϕ_2, ·)]_t = ∫_0^t ϕ_1(ζ_n(s)) ϕ_2(ζ_n(s)) ds → t ∫_U ϕ_1(u) ϕ_2(u) ν(du).

Limiting equation

The functional central limit theorem for martingales implies M_n ⇒ M, where M is Gaussian with
E[M(ϕ_1, t)M(ϕ_2, s)] = t∧s ∫_U ϕ_1(u) ϕ_2(u) ν(du),
and
V_n(ϕ, t) → t ∫_U ϕ(u) ν(du).
If {(M_n, V_n)} is good, then X_n ⇒ X satisfying
X(t) = X(0) + ∫_{U×[0,t]} σ(X(s), u) M(du × ds) + ∫_0^t ∫_U b(X(s), u) ν(du) ds.

Estimates for averaging problem

Recall
M_n(ϕ, t) = ∫_0^t ϕ(ζ_n(s)) dW(s).
If
Z(t, u) = Σ_{k=1}^m ξ_k(t) ϕ_k(u),
then
∫_{U×[0,t]} Z(s, u) M_n(du × ds) = Σ_{k=1}^m ∫_0^t ξ_k(s−) dM_n(ϕ_k, s)
and
E[(∫_{U×[0,t]} Z(s, u) M_n(du × ds))²] = E[Σ_{k,l} ∫_0^t ξ_k(s) ξ_l(s) ϕ_k(ζ_n(s)) ϕ_l(ζ_n(s)) ds] = E[∫_0^t Z(s, ζ_n(s))² ds].

Assume U is locally compact, ψ is strictly positive and vanishes at infinity, ‖ϕ‖_H = sup_u ψ(u)|ϕ(u)|, and
sup_T E[(1/T) ∫_0^T ψ(ζ(s))^{-2} ds] < ∞.


More information

Lecture 21 Representations of Martingales

Lecture 21 Representations of Martingales Lecture 21: Representations of Martingales 1 of 11 Course: Theory of Probability II Term: Spring 215 Instructor: Gordan Zitkovic Lecture 21 Representations of Martingales Right-continuous inverses Let

More information

Universal examples. Chapter The Bernoulli process

Universal examples. Chapter The Bernoulli process Chapter 1 Universal examples 1.1 The Bernoulli process First description: Bernoulli random variables Y i for i = 1, 2, 3,... independent with P [Y i = 1] = p and P [Y i = ] = 1 p. Second description: Binomial

More information

{σ x >t}p x. (σ x >t)=e at.

{σ x >t}p x. (σ x >t)=e at. 3.11. EXERCISES 121 3.11 Exercises Exercise 3.1 Consider the Ornstein Uhlenbeck process in example 3.1.7(B). Show that the defined process is a Markov process which converges in distribution to an N(0,σ

More information

Exercises. T 2T. e ita φ(t)dt.

Exercises. T 2T. e ita φ(t)dt. Exercises. Set #. Construct an example of a sequence of probability measures P n on R which converge weakly to a probability measure P but so that the first moments m,n = xdp n do not converge to m = xdp.

More information

16.1. Signal Process Observation Process The Filtering Problem Change of Measure

16.1. Signal Process Observation Process The Filtering Problem Change of Measure 1. Introduction The following notes aim to provide a very informal introduction to Stochastic Calculus, and especially to the Itô integral. They owe a great deal to Dan Crisan s Stochastic Calculus and

More information

Uniformly Uniformly-ergodic Markov chains and BSDEs

Uniformly Uniformly-ergodic Markov chains and BSDEs Uniformly Uniformly-ergodic Markov chains and BSDEs Samuel N. Cohen Mathematical Institute, University of Oxford (Based on joint work with Ying Hu, Robert Elliott, Lukas Szpruch) Centre Henri Lebesgue,

More information

3 (Due ). Let A X consist of points (x, y) such that either x or y is a rational number. Is A measurable? What is its Lebesgue measure?

3 (Due ). Let A X consist of points (x, y) such that either x or y is a rational number. Is A measurable? What is its Lebesgue measure? MA 645-4A (Real Analysis), Dr. Chernov Homework assignment 1 (Due ). Show that the open disk x 2 + y 2 < 1 is a countable union of planar elementary sets. Show that the closed disk x 2 + y 2 1 is a countable

More information

Topics in fractional Brownian motion

Topics in fractional Brownian motion Topics in fractional Brownian motion Esko Valkeila Spring School, Jena 25.3. 2011 We plan to discuss the following items during these lectures: Fractional Brownian motion and its properties. Topics in

More information

Random Process Lecture 1. Fundamentals of Probability

Random Process Lecture 1. Fundamentals of Probability Random Process Lecture 1. Fundamentals of Probability Husheng Li Min Kao Department of Electrical Engineering and Computer Science University of Tennessee, Knoxville Spring, 2016 1/43 Outline 2/43 1 Syllabus

More information

Stochastic integration. P.J.C. Spreij

Stochastic integration. P.J.C. Spreij Stochastic integration P.J.C. Spreij this version: April 22, 29 Contents 1 Stochastic processes 1 1.1 General theory............................... 1 1.2 Stopping times...............................

More information

Hardy-Stein identity and Square functions

Hardy-Stein identity and Square functions Hardy-Stein identity and Square functions Daesung Kim (joint work with Rodrigo Bañuelos) Department of Mathematics Purdue University March 28, 217 Daesung Kim (Purdue) Hardy-Stein identity UIUC 217 1 /

More information

Mathematical Methods for Neurosciences. ENS - Master MVA Paris 6 - Master Maths-Bio ( )

Mathematical Methods for Neurosciences. ENS - Master MVA Paris 6 - Master Maths-Bio ( ) Mathematical Methods for Neurosciences. ENS - Master MVA Paris 6 - Master Maths-Bio (2014-2015) Etienne Tanré - Olivier Faugeras INRIA - Team Tosca November 26th, 2014 E. Tanré (INRIA - Team Tosca) Mathematical

More information

Lecture 19 L 2 -Stochastic integration

Lecture 19 L 2 -Stochastic integration Lecture 19: L 2 -Stochastic integration 1 of 12 Course: Theory of Probability II Term: Spring 215 Instructor: Gordan Zitkovic Lecture 19 L 2 -Stochastic integration The stochastic integral for processes

More information

STAT 331. Martingale Central Limit Theorem and Related Results

STAT 331. Martingale Central Limit Theorem and Related Results STAT 331 Martingale Central Limit Theorem and Related Results In this unit we discuss a version of the martingale central limit theorem, which states that under certain conditions, a sum of orthogonal

More information

Exercises Measure Theoretic Probability

Exercises Measure Theoretic Probability Exercises Measure Theoretic Probability Chapter 1 1. Prove the folloing statements. (a) The intersection of an arbitrary family of d-systems is again a d- system. (b) The intersection of an arbitrary family

More information

STAT Sample Problem: General Asymptotic Results

STAT Sample Problem: General Asymptotic Results STAT331 1-Sample Problem: General Asymptotic Results In this unit we will consider the 1-sample problem and prove the consistency and asymptotic normality of the Nelson-Aalen estimator of the cumulative

More information

The concentration of a drug in blood. Exponential decay. Different realizations. Exponential decay with noise. dc(t) dt.

The concentration of a drug in blood. Exponential decay. Different realizations. Exponential decay with noise. dc(t) dt. The concentration of a drug in blood Exponential decay C12 concentration 2 4 6 8 1 C12 concentration 2 4 6 8 1 dc(t) dt = µc(t) C(t) = C()e µt 2 4 6 8 1 12 time in minutes 2 4 6 8 1 12 time in minutes

More information

ELEMENTS OF STOCHASTIC CALCULUS VIA REGULARISATION. A la mémoire de Paul-André Meyer

ELEMENTS OF STOCHASTIC CALCULUS VIA REGULARISATION. A la mémoire de Paul-André Meyer ELEMENTS OF STOCHASTIC CALCULUS VIA REGULARISATION A la mémoire de Paul-André Meyer Francesco Russo (1 and Pierre Vallois (2 (1 Université Paris 13 Institut Galilée, Mathématiques 99 avenue J.B. Clément

More information

Proofs of the martingale FCLT

Proofs of the martingale FCLT Probability Surveys Vol. 4 (2007) 268 302 ISSN: 1549-5787 DOI: 10.1214/07-PS122 Proofs of the martingale FCLT Ward Whitt Department of Industrial Engineering and Operations Research Columbia University,

More information

Fast-slow systems with chaotic noise

Fast-slow systems with chaotic noise Fast-slow systems with chaotic noise David Kelly Ian Melbourne Courant Institute New York University New York NY www.dtbkelly.com May 12, 215 Averaging and homogenization workshop, Luminy. Fast-slow systems

More information

Jump Processes. Richard F. Bass

Jump Processes. Richard F. Bass Jump Processes Richard F. Bass ii c Copyright 214 Richard F. Bass Contents 1 Poisson processes 1 1.1 Definitions............................. 1 1.2 Stopping times.......................... 3 1.3 Markov

More information

Brownian Motion and Stochastic Calculus

Brownian Motion and Stochastic Calculus ETHZ, Spring 17 D-MATH Prof Dr Martin Larsson Coordinator A Sepúlveda Brownian Motion and Stochastic Calculus Exercise sheet 6 Please hand in your solutions during exercise class or in your assistant s

More information

The Pedestrian s Guide to Local Time

The Pedestrian s Guide to Local Time The Pedestrian s Guide to Local Time Tomas Björk, Department of Finance, Stockholm School of Economics, Box 651, SE-113 83 Stockholm, SWEDEN tomas.bjork@hhs.se November 19, 213 Preliminary version Comments

More information

Verona Course April Lecture 1. Review of probability

Verona Course April Lecture 1. Review of probability Verona Course April 215. Lecture 1. Review of probability Viorel Barbu Al.I. Cuza University of Iaşi and the Romanian Academy A probability space is a triple (Ω, F, P) where Ω is an abstract set, F is

More information

1 Brownian Local Time

1 Brownian Local Time 1 Brownian Local Time We first begin by defining the space and variables for Brownian local time. Let W t be a standard 1-D Wiener process. We know that for the set, {t : W t = } P (µ{t : W t = } = ) =

More information

OPTIMAL SOLUTIONS TO STOCHASTIC DIFFERENTIAL INCLUSIONS

OPTIMAL SOLUTIONS TO STOCHASTIC DIFFERENTIAL INCLUSIONS APPLICATIONES MATHEMATICAE 29,4 (22), pp. 387 398 Mariusz Michta (Zielona Góra) OPTIMAL SOLUTIONS TO STOCHASTIC DIFFERENTIAL INCLUSIONS Abstract. A martingale problem approach is used first to analyze

More information

Nonlinear representation, backward SDEs, and application to the Principal-Agent problem

Nonlinear representation, backward SDEs, and application to the Principal-Agent problem Nonlinear representation, backward SDEs, and application to the Principal-Agent problem Ecole Polytechnique, France April 4, 218 Outline The Principal-Agent problem Formulation 1 The Principal-Agent problem

More information

Stochastic Differential Equations

Stochastic Differential Equations Chapter 5 Stochastic Differential Equations We would like to introduce stochastic ODE s without going first through the machinery of stochastic integrals. 5.1 Itô Integrals and Itô Differential Equations

More information

Particle models for Wasserstein type diffusion

Particle models for Wasserstein type diffusion Particle models for Wasserstein type diffusion Vitalii Konarovskyi University of Leipzig Bonn, 216 Joint work with Max von Renesse konarovskyi@gmail.com Wasserstein diffusion and Varadhan s formula (von

More information

A Central Limit Theorem for Fleming-Viot Particle Systems Application to the Adaptive Multilevel Splitting Algorithm

A Central Limit Theorem for Fleming-Viot Particle Systems Application to the Adaptive Multilevel Splitting Algorithm A Central Limit Theorem for Fleming-Viot Particle Systems Application to the Algorithm F. Cérou 1,2 B. Delyon 2 A. Guyader 3 M. Rousset 1,2 1 Inria Rennes Bretagne Atlantique 2 IRMAR, Université de Rennes

More information

Convergence at first and second order of some approximations of stochastic integrals

Convergence at first and second order of some approximations of stochastic integrals Convergence at first and second order of some approximations of stochastic integrals Bérard Bergery Blandine, Vallois Pierre IECN, Nancy-Université, CNRS, INRIA, Boulevard des Aiguillettes B.P. 239 F-5456

More information

On a class of stochastic differential equations in a financial network model

On a class of stochastic differential equations in a financial network model 1 On a class of stochastic differential equations in a financial network model Tomoyuki Ichiba Department of Statistics & Applied Probability, Center for Financial Mathematics and Actuarial Research, University

More information

X n D X lim n F n (x) = F (x) for all x C F. lim n F n(u) = F (u) for all u C F. (2)

X n D X lim n F n (x) = F (x) for all x C F. lim n F n(u) = F (u) for all u C F. (2) 14:17 11/16/2 TOPIC. Convergence in distribution and related notions. This section studies the notion of the so-called convergence in distribution of real random variables. This is the kind of convergence

More information

Stochastic calculus without probability: Pathwise integration and functional calculus for functionals of paths with arbitrary Hölder regularity

Stochastic calculus without probability: Pathwise integration and functional calculus for functionals of paths with arbitrary Hölder regularity Stochastic calculus without probability: Pathwise integration and functional calculus for functionals of paths with arbitrary Hölder regularity Rama Cont Joint work with: Anna ANANOVA (Imperial) Nicolas

More information

1 Weak Convergence in R k

1 Weak Convergence in R k 1 Weak Convergence in R k Byeong U. Park 1 Let X and X n, n 1, be random vectors taking values in R k. These random vectors are allowed to be defined on different probability spaces. Below, for the simplicity

More information

Simulation methods for stochastic models in chemistry

Simulation methods for stochastic models in chemistry Simulation methods for stochastic models in chemistry David F. Anderson anderson@math.wisc.edu Department of Mathematics University of Wisconsin - Madison SIAM: Barcelona June 4th, 21 Overview 1. Notation

More information

Preliminary Exam: Probability 9:00am 2:00pm, Friday, January 6, 2012

Preliminary Exam: Probability 9:00am 2:00pm, Friday, January 6, 2012 Preliminary Exam: Probability 9:00am 2:00pm, Friday, January 6, 202 The exam lasts from 9:00am until 2:00pm, with a walking break every hour. Your goal on this exam should be to demonstrate mastery of

More information

STOCHASTIC CALCULUS JASON MILLER AND VITTORIA SILVESTRI

STOCHASTIC CALCULUS JASON MILLER AND VITTORIA SILVESTRI STOCHASTIC CALCULUS JASON MILLER AND VITTORIA SILVESTRI Contents Preface 1 1. Introduction 1 2. Preliminaries 4 3. Local martingales 1 4. The stochastic integral 16 5. Stochastic calculus 36 6. Applications

More information

Exercises in stochastic analysis

Exercises in stochastic analysis Exercises in stochastic analysis Franco Flandoli, Mario Maurelli, Dario Trevisan The exercises with a P are those which have been done totally or partially) in the previous lectures; the exercises with

More information

Fast-slow systems with chaotic noise

Fast-slow systems with chaotic noise Fast-slow systems with chaotic noise David Kelly Ian Melbourne Courant Institute New York University New York NY www.dtbkelly.com May 1, 216 Statistical properties of dynamical systems, ESI Vienna. David

More information

Lecture 12: Diffusion Processes and Stochastic Differential Equations

Lecture 12: Diffusion Processes and Stochastic Differential Equations Lecture 12: Diffusion Processes and Stochastic Differential Equations 1. Diffusion Processes 1.1 Definition of a diffusion process 1.2 Examples 2. Stochastic Differential Equations SDE) 2.1 Stochastic

More information

Stochastic Differential Equations

Stochastic Differential Equations CHAPTER 1 Stochastic Differential Equations Consider a stochastic process X t satisfying dx t = bt, X t,w t dt + σt, X t,w t dw t. 1.1 Question. 1 Can we obtain the existence and uniqueness theorem for

More information

Jump-type Levy Processes

Jump-type Levy Processes Jump-type Levy Processes Ernst Eberlein Handbook of Financial Time Series Outline Table of contents Probabilistic Structure of Levy Processes Levy process Levy-Ito decomposition Jump part Probabilistic

More information

Stochastic Processes III/ Wahrscheinlichkeitstheorie IV. Lecture Notes

Stochastic Processes III/ Wahrscheinlichkeitstheorie IV. Lecture Notes BMS Advanced Course Stochastic Processes III/ Wahrscheinlichkeitstheorie IV Michael Scheutzow Lecture Notes Technische Universität Berlin Wintersemester 218/19 preliminary version November 28th 218 Contents

More information

CIMPA SCHOOL, 2007 Jump Processes and Applications to Finance Monique Jeanblanc

CIMPA SCHOOL, 2007 Jump Processes and Applications to Finance Monique Jeanblanc CIMPA SCHOOL, 27 Jump Processes and Applications to Finance Monique Jeanblanc 1 Jump Processes I. Poisson Processes II. Lévy Processes III. Jump-Diffusion Processes IV. Point Processes 2 I. Poisson Processes

More information

Survival Analysis: Counting Process and Martingale. Lu Tian and Richard Olshen Stanford University

Survival Analysis: Counting Process and Martingale. Lu Tian and Richard Olshen Stanford University Survival Analysis: Counting Process and Martingale Lu Tian and Richard Olshen Stanford University 1 Lebesgue-Stieltjes Integrals G( ) is a right-continuous step function having jumps at x 1, x 2,.. b f(x)dg(x)

More information

Poisson random measure: motivation

Poisson random measure: motivation : motivation The Lévy measure provides the expected number of jumps by time unit, i.e. in a time interval of the form: [t, t + 1], and of a certain size Example: ν([1, )) is the expected number of jumps

More information

Numerical Approximations to Optimal Nonlinear Filters

Numerical Approximations to Optimal Nonlinear Filters Numerical Approximations to Optimal Nonlinear Filters Harold J.Kushner Brown University, Applied Mathematics January, 28 Abstract Two types of numerical algorithms for nonlinear filters are considered.

More information

µ X (A) = P ( X 1 (A) )

µ X (A) = P ( X 1 (A) ) 1 STOCHASTIC PROCESSES This appendix provides a very basic introduction to the language of probability theory and stochastic processes. We assume the reader is familiar with the general measure and integration

More information

MATH 6605: SUMMARY LECTURE NOTES

MATH 6605: SUMMARY LECTURE NOTES MATH 6605: SUMMARY LECTURE NOTES These notes summarize the lectures on weak convergence of stochastic processes. If you see any typos, please let me know. 1. Construction of Stochastic rocesses A stochastic

More information

Stochastic Calculus and Black-Scholes Theory MTH772P Exercises Sheet 1

Stochastic Calculus and Black-Scholes Theory MTH772P Exercises Sheet 1 Stochastic Calculus and Black-Scholes Theory MTH772P Exercises Sheet. For ξ, ξ 2, i.i.d. with P(ξ i = ± = /2 define the discrete-time random walk W =, W n = ξ +... + ξ n. (i Formulate and prove the property

More information

Stochastic Analysis I S.Kotani April 2006

Stochastic Analysis I S.Kotani April 2006 Stochastic Analysis I S.Kotani April 6 To describe time evolution of randomly developing phenomena such as motion of particles in random media, variation of stock prices and so on, we have to treat stochastic

More information

Exact Simulation of Diffusions and Jump Diffusions

Exact Simulation of Diffusions and Jump Diffusions Exact Simulation of Diffusions and Jump Diffusions A work by: Prof. Gareth O. Roberts Dr. Alexandros Beskos Dr. Omiros Papaspiliopoulos Dr. Bruno Casella 28 th May, 2008 Content 1 Exact Algorithm Construction

More information

Lecture 4: Ito s Stochastic Calculus and SDE. Seung Yeal Ha Dept of Mathematical Sciences Seoul National University

Lecture 4: Ito s Stochastic Calculus and SDE. Seung Yeal Ha Dept of Mathematical Sciences Seoul National University Lecture 4: Ito s Stochastic Calculus and SDE Seung Yeal Ha Dept of Mathematical Sciences Seoul National University 1 Preliminaries What is Calculus? Integral, Differentiation. Differentiation 2 Integral

More information

Probability and Measure

Probability and Measure Part II Year 2018 2017 2016 2015 2014 2013 2012 2011 2010 2009 2008 2007 2006 2005 2018 84 Paper 4, Section II 26J Let (X, A) be a measurable space. Let T : X X be a measurable map, and µ a probability

More information

A D VA N C E D P R O B A B I L - I T Y

A D VA N C E D P R O B A B I L - I T Y A N D R E W T U L L O C H A D VA N C E D P R O B A B I L - I T Y T R I N I T Y C O L L E G E T H E U N I V E R S I T Y O F C A M B R I D G E Contents 1 Conditional Expectation 5 1.1 Discrete Case 6 1.2

More information

THEOREMS, ETC., FOR MATH 515

THEOREMS, ETC., FOR MATH 515 THEOREMS, ETC., FOR MATH 515 Proposition 1 (=comment on page 17). If A is an algebra, then any finite union or finite intersection of sets in A is also in A. Proposition 2 (=Proposition 1.1). For every

More information

Stochastic Calculus February 11, / 33

Stochastic Calculus February 11, / 33 Martingale Transform M n martingale with respect to F n, n =, 1, 2,... σ n F n (σ M) n = n 1 i= σ i(m i+1 M i ) is a Martingale E[(σ M) n F n 1 ] n 1 = E[ σ i (M i+1 M i ) F n 1 ] i= n 2 = σ i (M i+1 M

More information

On the submartingale / supermartingale property of diffusions in natural scale

On the submartingale / supermartingale property of diffusions in natural scale On the submartingale / supermartingale property of diffusions in natural scale Alexander Gushchin Mikhail Urusov Mihail Zervos November 13, 214 Abstract Kotani 5 has characterised the martingale property

More information