Stochastic Integration.


Chapter 2. Stochastic Integration.

2.1 Brownian Motion as a Martingale.

$P$ is the Wiener measure on $(\Omega,\mathcal B)$ where $\Omega = C[0,T]$ and $\mathcal B$ is the Borel $\sigma$-field on $\Omega$. In addition we denote by $\mathcal B_t$ the $\sigma$-field generated by $x(s)$ for $0\le s\le t$. It is easy to see that $x(t)$ is a martingale with respect to $(\Omega,\mathcal B_t,P)$, i.e. for each $t>s$ in $[0,T]$
$$E^P\{x(t)\,|\,\mathcal B_s\} = x(s) \quad \text{a.e. } P \tag{2.1}$$
and so is $x^2(t)-t$. In other words
$$E^P\{x^2(t)-t\,|\,\mathcal B_s\} = x^2(s)-s \quad \text{a.e. } P \tag{2.2}$$
The proof is rather straightforward. We write $x(t)=x(s)+Z$ where $Z=x(t)-x(s)$ is a random variable independent of the past history $\mathcal B_s$ and is distributed as a Gaussian random variable with mean $0$ and variance $t-s$. Therefore $E^P\{Z\,|\,\mathcal B_s\}=0$ and $E^P\{Z^2\,|\,\mathcal B_s\}=t-s$ a.e. $P$. Conversely,

Theorem 2.1. (Lévy's theorem.) If $P$ is a measure on $(C[0,T],\mathcal B)$ such that $P[x(0)=0]=1$ and the functions $x(t)$ and $x^2(t)-t$ are martingales with respect to $(C[0,T],\mathcal B_t,P)$, then $P$ is the Wiener measure.

Proof. The proof is based on the observation that a Gaussian distribution is determined by two moments. But that the distribution is Gaussian is a consequence of the fact that the paths are almost surely continuous and not part of our assumptions. The actual proof is carried out by establishing that for each real number $\lambda$
$$X_\lambda(t) = \exp\Big[\lambda x(t) - \frac{\lambda^2}{2}\,t\Big] \tag{2.3}$$
is a martingale with respect to $(C[0,T],\mathcal B_t,P)$. Once this is established it is elementary to compute
$$E^P\Big[\exp\big[\lambda(x(t)-x(s))\big]\,\Big|\,\mathcal B_s\Big] = \exp\Big[\frac{\lambda^2}{2}(t-s)\Big]$$
which shows that we have a Gaussian process with independent increments with two matching moments.
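The identity (2.4) below (equivalently, the martingale property of $X_\lambda$) is easy to test by simulation. A minimal Monte Carlo sketch, assuming the usual random-walk discretization of Brownian motion; the sample size, step count, $t$ and $\lambda$ below are arbitrary choices.

```python
import numpy as np

# Monte Carlo check that E[exp(lam * x(t) - lam^2 t / 2)] = 1 for Brownian motion.
rng = np.random.default_rng(0)
n_paths, n_steps, t, lam = 200_000, 500, 1.0, 0.7
dt = t / n_steps

x = np.zeros(n_paths)                      # x(s), built from independent N(0, dt) increments
for _ in range(n_steps):
    x += rng.normal(0.0, np.sqrt(dt), size=n_paths)

martingale_values = np.exp(lam * x - 0.5 * lam**2 * t)
print(martingale_values.mean())            # should be close to 1.0
```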

The proof of (2.3) is more or less the same as proving the central limit theorem. In order to prove (2.3) we can assume without loss of generality that $s=0$ and will show that
$$E^P\Big[\exp\big[\lambda x(t) - \frac{\lambda^2}{2}t\big]\Big] = 1 \tag{2.4}$$
To this end let us define successively $\tau_{0,\epsilon}=0$ and
$$\tau_{k+1,\epsilon} = \min\Big[\ \inf\{s : s\ge \tau_{k,\epsilon},\ |x(s)-x(\tau_{k,\epsilon})|\ge \epsilon\},\ t,\ \tau_{k,\epsilon}+\epsilon\ \Big]$$
Then each $\tau_{k,\epsilon}$ is a stopping time and eventually $\tau_{k,\epsilon}=t$ by the continuity of paths. The continuity of paths also guarantees that $|x(\tau_{k+1,\epsilon})-x(\tau_{k,\epsilon})|\le \epsilon$. We write
$$x(t) = \sum_k \big[x(\tau_{k+1,\epsilon})-x(\tau_{k,\epsilon})\big], \qquad t = \sum_k \big[\tau_{k+1,\epsilon}-\tau_{k,\epsilon}\big]$$
To establish (2.4) we calculate the quantity on the left hand side as
$$\lim_{n\to\infty} E^P\Big[\exp\Big[\lambda\sum_{k\le n}\big(x(\tau_{k+1,\epsilon})-x(\tau_{k,\epsilon})\big) - \frac{\lambda^2}{2}\sum_{k\le n}\big(\tau_{k+1,\epsilon}-\tau_{k,\epsilon}\big)\Big]\Big]$$
and show that it is equal to 1. Let us consider the $\sigma$-field $\mathcal F_k = \mathcal B_{\tau_{k,\epsilon}}$ and the quantity
$$q_k(\omega) = E^P\Big[\exp\Big[\lambda\big(x(\tau_{k+1,\epsilon})-x(\tau_{k,\epsilon})\big) - \frac{\lambda^2}{2}\big(\tau_{k+1,\epsilon}-\tau_{k,\epsilon}\big)\Big]\,\Big|\,\mathcal F_k\Big]$$
Clearly, if we use the Taylor expansion and the fact that $x(t)$ as well as $x^2(t)-t$ are martingales,
$$|q_k(\omega)-1| \le C\,E^P\Big[|\lambda|^3\,|x(\tau_{k+1,\epsilon})-x(\tau_{k,\epsilon})|^3 + \lambda^4\,|\tau_{k+1,\epsilon}-\tau_{k,\epsilon}|^2\,\Big|\,\mathcal F_k\Big]$$
$$\le C_\lambda\,\epsilon\,E^P\Big[|x(\tau_{k+1,\epsilon})-x(\tau_{k,\epsilon})|^2 + |\tau_{k+1,\epsilon}-\tau_{k,\epsilon}|\,\Big|\,\mathcal F_k\Big] \le C_\lambda\,\epsilon\,E^P\big[\tau_{k+1,\epsilon}-\tau_{k,\epsilon}\,\big|\,\mathcal F_k\big]$$
In particular for some constant $C$ depending on $\lambda$
$$q_k(\omega) \le E^P\Big[\exp\big[C\epsilon\,(\tau_{k+1,\epsilon}-\tau_{k,\epsilon})\big]\,\Big|\,\mathcal F_k\Big]$$
and by induction
$$\limsup_{n\to\infty} E^P\Big[\exp\Big[\lambda\sum_{k\le n}\big(x(\tau_{k+1,\epsilon})-x(\tau_{k,\epsilon})\big) - \frac{\lambda^2}{2}\sum_{k\le n}\big(\tau_{k+1,\epsilon}-\tau_{k,\epsilon}\big)\Big]\Big] \le \exp[C\epsilon\,t]$$

Since $\epsilon>0$ is arbitrary this proves one half of (2.4). Notice that in any case $\sup_\omega |q_k(\omega)-1| \le C\epsilon$. Hence we have the lower bound
$$q_k(\omega) \ge E^P\Big[\exp\big[-C\epsilon\,(\tau_{k+1,\epsilon}-\tau_{k,\epsilon})\big]\,\Big|\,\mathcal F_k\Big]$$
which can be used to prove the other half. This completes the proof of the theorem.

Exercise 2.1. Why does Theorem 2.1 fail for the process $x(t)=N(t)-t$ where $N(t)$ is the standard Poisson process with rate 1?

Remark 2.1. One can use the martingale inequality in order to estimate the probability $P\{\sup_{0\le s\le t} x(s)\ge \ell\}$. For $\lambda>0$, by Doob's inequality
$$P\Big[\sup_{0\le s\le t} \exp\Big[\lambda x(s)-\frac{\lambda^2}{2}s\Big] \ge A\Big] \le \frac{1}{A}$$
so that
$$P\Big[\sup_{0\le s\le t} x(s)\ge \ell\Big] \le P\Big[\sup_{0\le s\le t}\Big(\lambda x(s)-\frac{\lambda^2}{2}s\Big)\ge \lambda\ell - \frac{\lambda^2}{2}t\Big] \le \exp\Big[-\lambda\ell + \frac{\lambda^2}{2}t\Big]$$
Optimizing over $\lambda>0$, we obtain
$$P\Big[\sup_{0\le s\le t} x(s)\ge \ell\Big] \le \exp\Big[-\frac{\ell^2}{2t}\Big]$$
and by symmetry
$$P\Big[\sup_{0\le s\le t} |x(s)|\ge \ell\Big] \le 2\exp\Big[-\frac{\ell^2}{2t}\Big]$$
The estimate is not too bad because by the reflection principle
$$P\Big[\sup_{0\le s\le t} x(s)\ge \ell\Big] = 2\,P\big[x(t)\ge \ell\big] = 2\int_\ell^\infty \frac{1}{\sqrt{2\pi t}}\exp\Big[-\frac{x^2}{2t}\Big]\,dx$$
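The bound in Remark 2.1 and the exact reflection-principle value are easy to compare by simulation. A small sketch (the level, horizon and discretization are arbitrary choices; the discrete-time maximum slightly undershoots the true supremum):

```python
import numpy as np
from math import erfc, exp, sqrt

rng = np.random.default_rng(1)
t, level, n_paths, n_steps = 1.0, 2.0, 200_000, 2_000
dt = t / n_steps

x = np.zeros(n_paths)
running_max = np.zeros(n_paths)
for _ in range(n_steps):
    x += rng.normal(0.0, np.sqrt(dt), size=n_paths)
    np.maximum(running_max, x, out=running_max)

mc_prob = (running_max >= level).mean()     # simulated P[sup_{s<=t} x(s) >= l]
reflection = erfc(level / sqrt(2 * t))      # exact value 2 P[x(t) >= l]
doob_bound = exp(-level**2 / (2 * t))       # bound exp(-l^2 / 2t) from Remark 2.1
print(mc_prob, reflection, doob_bound)
```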

Exercise 2.2. One can use the estimate above to prove the result of Paul Lévy:
$$P\Big[\limsup_{\delta\to 0}\ \sup_{\substack{0\le s,t\le 1\\ |s-t|\le\delta}} \frac{|x(s)-x(t)|}{\sqrt{2\delta\log\frac1\delta}} = 1\Big] = 1$$
We had an exercise in the previous section that established the lower bound. Let us concentrate on the upper bound. If we define
$$\Delta_\delta(\omega) = \sup_{\substack{0\le s,t\le 1\\ |s-t|\le\delta}} |x(s)-x(t)|$$
first check that it is sufficient to prove that for any $\rho<1$ and $a>\sqrt 2$,
$$\sum_n P\Big[\Delta_{\rho^n}(\omega) \ge a\sqrt{n\rho^n\log\tfrac1\rho}\Big] < \infty \tag{2.5}$$
To estimate $\Delta_{\rho^n}(\omega)$ it is sufficient to estimate $\sup_{t\in I_j}|x(t)-x(t_j)|$ for $k_\epsilon\,\rho^{-n}$ overlapping intervals $I_j$ of the form $[t_j, t_j+(1+\epsilon)\rho^n]$ with length $(1+\epsilon)\rho^n$. For each $\epsilon>0$ there is a constant $k_\epsilon$ (of order $\epsilon^{-1}$) such that any interval $[s,t]$ of length no larger than $\rho^n$ is completely contained in some $I_j$ with $t_j\le s\le t_j+\epsilon\rho^n$. Then
$$\Delta_{\rho^n}(\omega) \le \sup_j\sup_{t\in I_j}|x(t)-x(t_j)| + \sup_j\sup_{t_j\le s\le t_j+\epsilon\rho^n}|x(s)-x(t_j)|$$
Therefore, for any $a = a_1+a_2$,
$$P\Big[\Delta_{\rho^n}(\omega)\ge a\sqrt{n\rho^n\log\tfrac1\rho}\Big] \le P\Big[\sup_j\sup_{t\in I_j}|x(t)-x(t_j)|\ge a_1\sqrt{n\rho^n\log\tfrac1\rho}\Big] + P\Big[\sup_j\sup_{t_j\le s\le t_j+\epsilon\rho^n}|x(s)-x(t_j)|\ge a_2\sqrt{n\rho^n\log\tfrac1\rho}\Big]$$
$$\le k_\epsilon\,\rho^{-n}\Big[2\exp\Big[-\frac{a_1^2\,n\rho^n\log\frac1\rho}{2(1+\epsilon)\rho^n}\Big] + 2\exp\Big[-\frac{a_2^2\,n\rho^n\log\frac1\rho}{2\epsilon\rho^n}\Big]\Big]$$
Since $a>\sqrt 2$, we can pick $a_1>\sqrt 2$ and $a_2>0$. For $\epsilon>0$ sufficiently small (2.5) is easily verified.

2.2 Brownian Motion as a Markov Process.

Brownian motion is a process with independent increments, and the increment over any interval of length $t$ has the Gaussian distribution with density
$$q(t,y) = \frac{1}{(2\pi t)^{d/2}}\,e^{-\frac{|y|^2}{2t}}$$
It is therefore a Markov process with transition probability
$$p(t,x,y) = q(t,y-x) = \frac{1}{(2\pi t)^{d/2}}\,e^{-\frac{|y-x|^2}{2t}}$$
The operators
$$(T_t f)(x) = \int f(y)\,p(t,x,y)\,dy$$
satisfy $T_tT_s = T_sT_t = T_{t+s}$, i.e. the semigroup property. This is seen to be an easy consequence of the Chapman-Kolmogorov equations
$$\int p(t,x,y)\,p(s,y,z)\,dy = p(t+s,x,z)$$
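The semigroup property is easy to check numerically in one dimension: integrating the product of the kernels for times $t$ and $s$ over the intermediate point reproduces the kernel for $t+s$. A small sketch (the grid, times and points are arbitrary choices):

```python
import numpy as np

def heat_kernel(t, y):
    """Transition density q(t, y) of one-dimensional Brownian motion."""
    return np.exp(-y**2 / (2 * t)) / np.sqrt(2 * np.pi * t)

# Check the Chapman-Kolmogorov equation  int p(t,x,y) p(s,y,z) dy = p(t+s,x,z).
t, s, x, z = 0.5, 0.8, 0.3, -0.2
y, dy = np.linspace(-15.0, 15.0, 20_001, retstep=True)

lhs = np.sum(heat_kernel(t, y - x) * heat_kernel(s, z - y)) * dy
rhs = heat_kernel(t + s, z - x)
print(lhs, rhs)   # the two numbers agree to many digits
```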

The infinitesimal generator of the semigroup
$$(Af)(x) = \lim_{t\to 0}\frac{T_tf - f}{t}$$
is easily calculated as
$$(Af)(x) = \lim_{t\to 0}\frac1t\int\big[f(x+y)-f(x)\big]\,q(t,y)\,dy = \lim_{t\to 0}\frac1t\int\big[f(x+\sqrt t\,y)-f(x)\big]\,q(1,y)\,dy = \frac12(\Delta f)(x)$$
by expanding $f$ in a Taylor series in $\sqrt t$. The term that is linear in $\sqrt t\,y$ integrates to $0$ and the quadratic term leads to the Laplace operator. The differential equation
$$\frac{dT_t}{dt} = T_tA = AT_t$$
implies that $u(t,x) = (T_tf)(x)$ satisfies the heat equation
$$\frac{\partial u}{\partial t} = \frac12\Delta u, \qquad\text{i.e.}\qquad \frac{d}{dt}\int f(y)\,p(t,x,y)\,dy = \frac12\int(\Delta f)(y)\,p(t,x,y)\,dy$$
In particular if $E_x$ is expectation with respect to Brownian motion starting from $x$,
$$E_x\big[f(x(t))\big] - f(x) = E_x\Big[\int_0^t \frac12(\Delta f)(x(s))\,ds\Big]$$
By the Markov property
$$E_x\Big[f(x(t)) - f(x(s)) - \int_s^t \frac12(\Delta f)(x(\tau))\,d\tau\,\Big|\,\mathcal F_s\Big] = 0$$
or
$$f(x(t)) - f(x(0)) - \int_0^t \frac12(\Delta f)(x(\tau))\,d\tau$$
is a martingale with respect to Brownian motion. It is just one step from here to show that for functions $u(t,x)$ that are smooth
$$u(t,x(t)) - u(0,x(0)) - \int_0^t \Big[\frac{\partial u}{\partial s} + \frac12\Delta u\Big](s,x(s))\,ds \tag{2.6}$$
is a martingale.
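The martingale property (2.6) can be checked in expectation by simulation. A minimal sketch for $d=1$; the particular $u(t,x)=e^{-t}\cos x$, the horizon and the discretization are my choices, not from the text (for this $u$ both sides should be close to $e^{-3t/2}-1$):

```python
import numpy as np

rng = np.random.default_rng(2)
n_paths, n_steps, t = 200_000, 1_000, 1.0
dt = t / n_steps

def u(s, x):
    return np.exp(-s) * np.cos(x)

def u_s_plus_half_u_xx(s, x):          # u_s + (1/2) u_xx for u(s, x) = exp(-s) cos(x)
    return -1.5 * np.exp(-s) * np.cos(x)

x = np.zeros(n_paths)                   # x(0) = 0
riemann = np.zeros(n_paths)             # int_0^t (u_s + u_xx / 2)(s, x(s)) ds
for k in range(n_steps):
    riemann += u_s_plus_half_u_xx(k * dt, x) * dt   # left-endpoint rule
    x += rng.normal(0.0, np.sqrt(dt), size=n_paths)

lhs = u(t, x).mean() - u(0.0, 0.0)      # E[u(t, x(t))] - u(0, x(0))
rhs = riemann.mean()
print(lhs, rhs, np.exp(-1.5) - 1.0)
```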

There are in addition some natural exponential martingales associated with Brownian motion. For instance for any $\lambda\in R^d$,
$$\exp\Big[\langle\lambda, x(t)-x(0)\rangle - \frac12|\lambda|^2 t\Big]$$
is a martingale. More generally for any smooth function $u(t,x)$ that is bounded away from $0$,
$$u(t,x(t))\,\exp\Big[-\int_0^t \frac{u_t + \frac12\Delta u}{u}(s,x(s))\,ds\Big]$$
is a martingale. In particular if
$$u_t + \frac12\Delta u + v(t,x)\,u(t,x) = 0$$
then
$$u(t,x(t))\,\exp\Big[\int_0^t v(s,x(s))\,ds\Big] \tag{2.7}$$
is a martingale, which is the Feynman-Kac formula. To prove (2.7) from (2.6), we make use of the following elementary lemma.

Lemma 2.2. Suppose $M(t)$ is an almost surely continuous martingale with respect to $(\Omega,\mathcal F_t,P)$ and $A(t)$ is a progressively measurable function, which is almost surely continuous and of bounded variation in $t$. Then, under the assumption that $\sup_{s\le t}|M(s)|\cdot \mathrm{Var}_{[0,t]}A(\cdot,\omega)$ is integrable,
$$M(t)A(t) - M(0)A(0) - \int_0^t M(s)\,dA(s)$$
is again a martingale.

Proof. The main step is to see why
$$E\Big[M(t)A(t) - M(0)A(0) - \int_0^t M(s)\,dA(s)\Big] = 0$$
Then the same argument, repeated conditionally, will prove the martingale property. We write
$$E\big[M(t)A(t) - M(0)A(0)\big] = \lim \sum_j E\big[M(t_j)A(t_j) - M(t_{j-1})A(t_{j-1})\big]$$
$$= \lim \sum_j E\big[M(t_j)A(t_{j-1}) - M(t_{j-1})A(t_{j-1})\big] + \lim \sum_j E\big[M(t_j)\big(A(t_j)-A(t_{j-1})\big)\big]$$
$$= \lim \sum_j E\big[M(t_j)\big(A(t_j)-A(t_{j-1})\big)\big] = E\Big[\int_0^t M(s)\,dA(s)\Big]$$
The limit is over the partitions $\{t_j\}$ becoming dense in $[0,t]$; the first sum vanishes because $M$ is a martingale, and one uses the integrability of $\sup_{s\le t}|M(s)|\cdot \mathrm{Var}_{[0,t]}A(\cdot)$ and the dominated convergence theorem to complete the proof.
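Lemma 2.2 is also easy to test numerically. In the sketch below (the choices are mine) $M$ is a Brownian motion and $A(t)=\int_0^t M(s)\,ds$, a continuous process of bounded variation; the two terms $M(t)A(t)$ and $\int_0^t M\,dA$ each have mean about $t^2/2$, while their difference, which the lemma says is a martingale starting from $0$, has mean about $0$.

```python
import numpy as np

rng = np.random.default_rng(3)
n_paths, n_steps, t = 200_000, 1_000, 1.0
dt = t / n_steps

m = np.zeros(n_paths)          # M(s): a Brownian motion
a = np.zeros(n_paths)          # A(s) = int_0^s M(u) du
integral = np.zeros(n_paths)   # int_0^s M(u) dA(u) = int_0^s M(u)^2 du
for _ in range(n_steps):
    integral += m * m * dt     # dA = M du, so M dA = M^2 du (left-endpoint rule)
    a += m * dt
    m += rng.normal(0.0, np.sqrt(dt), size=n_paths)

print((m * a).mean(), integral.mean())   # both close to t^2 / 2 = 0.5
print((m * a - integral).mean())         # close to 0, as Lemma 2.2 predicts
```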

Now, to go from (2.6) to (2.7), we choose
$$M(t) = u(t,x(t)) - u(0,x(0)) - \int_0^t \Big[\frac{\partial u}{\partial s} + \frac12\Delta u\Big](s,x(s))\,ds$$
and
$$A(t) = \exp\Big[-\int_0^t \frac{u_s + \frac12\Delta u}{u}(s,x(s))\,ds\Big]$$

2.3 Stochastic Integrals

If $\{y_n\}$ is a martingale relative to the $\sigma$-fields $\{\mathcal F_n\}$, and if $e_n(\omega)$ are random functions that are $\mathcal F_n$ measurable, then the sequence
$$z_n = \sum_{k=0}^{n-1} e_k(\omega)\big[y_{k+1}-y_k\big]$$
is again a martingale with respect to the $\sigma$-fields $\{\mathcal F_n\}$, provided the expectations are finite. A computation shows that if
$$a_n(\omega) = E^P\big[(y_{n+1}-y_n)^2\,\big|\,\mathcal F_n\big]$$
then
$$E^P\big[z_n^2\big] = E^P\Big[\sum_{k=0}^{n-1} a_k(\omega)\,e_k^2(\omega)\Big]$$
or more precisely
$$E^P\big[(z_{n+1}-z_n)^2\,\big|\,\mathcal F_n\big] = a_n(\omega)\,e_n^2(\omega)\quad\text{a.e. }P$$
Formally one can write
$$\delta z_n = z_{n+1}-z_n = e_n(\omega)\,\delta y_n = e_n(\omega)(y_{n+1}-y_n);$$
$z$ is called a martingale transform of $y$ and the size of $z_n$ measured by its mean square is exactly equal to $E^P\big[\sum_{k=0}^{n-1} e_k^2(\omega)\,a_k(\omega)\big]$. The stochastic integral is just the continuous analog of this.
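A minimal discrete example of a martingale transform (the choice of $y_n$ as a simple random walk and of $e_k$ as a function of the path up to time $k$ is mine): the sample mean of $z_n$ is near $0$, and its mean square matches $E^P[\sum_k e_k^2 a_k]$, here with $a_k\equiv 1$.

```python
import numpy as np

rng = np.random.default_rng(4)
n_paths, n = 200_000, 50

steps = rng.choice([-1.0, 1.0], size=(n_paths, n))    # y_{k+1} - y_k, so a_k = 1
y = np.hstack([np.zeros((n_paths, 1)), np.cumsum(steps, axis=1)])

e = np.sign(y[:, :n])                # e_k = sign(y_k): measurable with respect to the past
z = (e * steps).sum(axis=1)          # z_n = sum_k e_k (y_{k+1} - y_k)

print(z.mean())                                        # close to 0
print((z**2).mean(), (e**2).sum(axis=1).mean())        # the two agree: E[z_n^2] = E[sum e_k^2 a_k]
```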

Theorem 2.3. Let $y(t)$ be an almost surely continuous martingale relative to $(\Omega,\mathcal F_t,P)$ such that $y(0)=0$ a.e. $P$, and
$$y^2(t) - \int_0^t a(s,\omega)\,ds$$
is again a martingale relative to $(\Omega,\mathcal F_t,P)$, where $a(s,\omega)\ge 0$ is a bounded progressively measurable function. Then for progressively measurable functions $e(\cdot,\cdot)$ satisfying, for every $t>0$,
$$E^P\Big[\int_0^t e^2(s)\,a(s)\,ds\Big] < \infty$$
the stochastic integral
$$z(t) = \int_0^t e(s)\,dy(s)$$
makes sense as an almost surely continuous martingale with respect to $(\Omega,\mathcal F_t,P)$ and
$$z^2(t) - \int_0^t e^2(s)\,a(s)\,ds$$
is again a martingale with respect to $(\Omega,\mathcal F_t,P)$. In particular
$$E^P\big[z^2(t)\big] = E^P\Big[\int_0^t e^2(s)\,a(s)\,ds\Big] \tag{2.8}$$

Proof. Step 1. The statements are obvious if $e(s)$ is a constant.

Step 2. Assume that $e(s)$ is a simple function given by $e(s,\omega)=e_j(\omega)$ for $t_j\le s<t_{j+1}$, where $e_j(\omega)$ is $\mathcal F_{t_j}$ measurable and bounded, for $0=t_0<t_1<\dots<t_N<t_{N+1}=\infty$. Then we can define inductively
$$z(t) = z(t_j) + e(t_j,\omega)\big[y(t)-y(t_j)\big]$$
for $t_j\le t\le t_{j+1}$. Clearly $z(t)$ and
$$z^2(t) - \int_0^t e^2(s,\omega)\,a(s,\omega)\,ds$$
are martingales in the interval $[t_j,t_{j+1}]$. Since the definitions match at the end points the martingale property holds for all $t$.

Step 3. If $e_k(s,\omega)$ is a sequence of uniformly bounded progressively measurable functions converging to $e(s,\omega)$ as $k\to\infty$ in such a way that
$$\lim_{k\to\infty}\int_0^t |e_k(s)-e(s)|^2\,a(s)\,ds = 0$$
for every $t>0$, then because of the relation (2.8)
$$\lim_{k,k'\to\infty} E^P\big[|z_k(t)-z_{k'}(t)|^2\big] = \lim_{k,k'\to\infty} E^P\Big[\int_0^t |e_k(s)-e_{k'}(s)|^2\,a(s)\,ds\Big] = 0.$$
Combined with Doob's inequality, we conclude the existence of an almost surely continuous martingale $z(t)$ such that
$$\lim_{k\to\infty} E^P\Big[\sup_{0\le s\le t}|z_k(s)-z(s)|^2\Big] = 0$$

and clearly
$$z^2(t) - \int_0^t e^2(s)\,a(s)\,ds$$
is an $(\Omega,\mathcal F_t,P)$ martingale.

Step 4. All we need to worry about now is approximating $e(\cdot,\cdot)$. Any bounded progressively measurable almost surely continuous $e(s,\omega)$ can be approximated by
$$e_k(s,\omega) = e\Big(\frac{[ks]}{k}\wedge k,\ \omega\Big)$$
which is piecewise constant and levels off at time $k$. It is trivial to see that for every $t>0$,
$$\lim_{k\to\infty}\int_0^t |e_k(s)-e(s)|^2\,a(s)\,ds = 0$$

Step 5. Any bounded progressively measurable $e(s,\omega)$ can be approximated by continuous ones by defining
$$e_k(s,\omega) = k\int_{(s-\frac1k)\vee 0}^{s} e(u,\omega)\,du$$
and again it is trivial to see that it works.

Step 6. Finally if $e(s,\omega)$ is unbounded we can approximate it by truncation, $e_k(s,\omega) = f_k(e(s,\omega))$ where $f_k(x)=x$ for $|x|\le k$ and $0$ otherwise. This completes the proof of the theorem.

Suppose we have an almost surely continuous process $x(t,\omega)$ defined on some $(\Omega,\mathcal F_t,P)$, and progressively measurable functions $b(s,\omega)$, $a(s,\omega)$ with $a\ge 0$, such that
$$y(t,\omega) = x(t,\omega) - x(0,\omega) - \int_0^t b(s,\omega)\,ds \qquad\text{and}\qquad y^2(t,\omega) - \int_0^t a(s,\omega)\,ds$$
are martingales with respect to $(\Omega,\mathcal F_t,P)$. The stochastic integral $z(t)=\int_0^t e(s)\,dx(s)$ is defined by
$$z(t) = \int_0^t e(s)\,dx(s) = \int_0^t e(s)\,b(s)\,ds + \int_0^t e(s)\,dy(s)$$
For this to make sense we need, for every $t$,
$$E^P\Big[\int_0^t |e(s)\,b(s)|\,ds\Big] < \infty \qquad\text{and}\qquad E^P\Big[\int_0^t e^2(s)\,a(s)\,ds\Big] < \infty$$

If we assume for simplicity that $eb$ and $e^2a$ are uniformly bounded functions in $t$ and $\omega$, it then follows, for any $\mathcal F_0$ measurable $z(0)$, that
$$z(t) = z(0) + \int_0^t e(s)\,dx(s)$$
is again an almost surely continuous process such that
$$z(t) = z(0) + \int_0^t \tilde b(s,\omega)\,ds + \tilde y(t,\omega)$$
where $\tilde y(t)$ and $\tilde y^2(t) - \int_0^t \tilde a(s,\omega)\,ds$ are martingales, with $\tilde b = eb$ and $\tilde a = e^2a$.

Exercise 2.3. If $e$ is such that $eb$ and $e^2a$ are bounded, then prove directly that the exponentials
$$\exp\Big[\lambda\big(z(t)-z(0)\big) - \lambda\int_0^t e(s)\,b(s)\,ds - \frac{\lambda^2}{2}\int_0^t a(s)\,e^2(s)\,ds\Big]$$
are $(\Omega,\mathcal F_t,P)$ martingales.

We can easily do the multidimensional generalization. Let $y(t)$ be a vector valued martingale with $n$ components $y_1(t),\dots,y_n(t)$ such that
$$y_i(t)\,y_j(t) - \int_0^t a_{i,j}(s,\omega)\,ds$$
are again martingales with respect to $(\Omega,\mathcal F_t,P)$. Assume that the progressively measurable functions $\{a_{i,j}(t,\omega)\}$ are symmetric and positive semidefinite for every $t$ and $\omega$ and are uniformly bounded in $t$ and $\omega$. Then the stochastic integral
$$z(t) = z(0) + \int_0^t \langle e(s),dy(s)\rangle = z(0) + \sum_i \int_0^t e_i(s)\,dy_i(s)$$
is well defined for vector valued progressively measurable functions $e(s,\omega)$ such that
$$E^P\Big[\int_0^t \langle e(s), a(s)\,e(s)\rangle\,ds\Big] < \infty$$
In a similar fashion to the scalar case, for any diffusion process $x(t)$ corresponding to $b(s,\omega)=\{b_i(s,\omega)\}$ and $a(s,\omega)=\{a_{i,j}(s,\omega)\}$, and any $e(s,\omega)=\{e_i(s,\omega)\}$ which is progressively measurable and uniformly bounded,
$$z(t) = z(0) + \int_0^t \langle e(s),dx(s)\rangle$$

is well defined and is a diffusion corresponding to the coefficients
$$\tilde b(s,\omega) = \langle e(s,\omega), b(s,\omega)\rangle, \qquad \tilde a(s,\omega) = \langle e(s,\omega), a(s,\omega)\,e(s,\omega)\rangle$$
It is now a simple exercise to define stochastic integrals of the form
$$z(t) = z(0) + \int_0^t \sigma(s,\omega)\,dx(s)$$
where $\sigma(s,\omega)$ is a matrix of dimension $m\times n$ that has the suitable properties of boundedness and progressive measurability. $z(t)$ is easily seen to correspond to the coefficients
$$\tilde b(s) = \sigma(s)\,b(s), \qquad \tilde a(s) = \sigma(s)\,a(s)\,\sigma^*(s)$$
The analogy here is to linear transformations of Gaussian variables. If $\xi$ is a Gaussian vector in $R^n$ with mean $\mu$ and covariance $A$, and if $\eta = T\xi$ is a linear transformation from $R^n$ to $R^m$, then $\eta$ is again Gaussian in $R^m$ and has mean $T\mu$ and covariance matrix $TAT^*$.

Exercise 2.4. If $x(t)$ is Brownian motion in $R^n$ and $\sigma(s,\omega)$ is a progressively measurable bounded function, then
$$z(t) = \int_0^t \sigma(s,\omega)\,dx(s)$$
is again a Brownian motion in $R^n$ if and only if $\sigma$ is an orthogonal matrix for almost all $s$ (with respect to Lebesgue measure) and $\omega$ (with respect to $P$). (A numerical illustration follows Exercise 2.6 below.)

Exercise 2.5. We can mix stochastic and ordinary integrals. If we define
$$z(t) = z(0) + \int_0^t \sigma(s)\,dx(s) + \int_0^t f(s)\,ds$$
where $x(s)$ is a process corresponding to $[b(s),a(s)]$, then $z(t)$ corresponds to
$$\tilde b(s) = \sigma(s)\,b(s) + f(s), \qquad \tilde a(s) = \sigma(s)\,a(s)\,\sigma^*(s)$$
The analogy is again to affine linear transformations of Gaussians $\eta = T\xi + \gamma$.

Exercise 2.6. (Chain rule.) If we transform from $x$ to $z$ and again from $z$ to $w$, it is the same as making a single transformation from $x$ to $w$:
$$dz(s) = \sigma(s)\,dx(s) + f(s)\,ds, \qquad dw(s) = \tau(s)\,dz(s) + g(s)\,ds$$
can be rewritten as
$$dw(s) = \tau(s)\sigma(s)\,dx(s) + \big[\tau(s)f(s)+g(s)\big]\,ds$$
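The numerical illustration of Exercise 2.4 promised above: integrating a time-dependent rotation matrix $\sigma(s)$ (orthogonal for every $s$) against a two-dimensional Brownian motion produces a process whose distribution at time $t$ again looks Brownian. The rotation angle, sample size and discretization in this sketch are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(5)
n_paths, n_steps, t = 100_000, 500, 1.0
dt = t / n_steps

z = np.zeros((n_paths, 2))
for k in range(n_steps):
    s = k * dt
    c, si = np.cos(s), np.sin(s)
    rotation = np.array([[c, -si], [si, c]])               # sigma(s): orthogonal for every s
    dx = rng.normal(0.0, np.sqrt(dt), size=(n_paths, 2))   # Brownian increments in R^2
    z += dx @ rotation.T                                   # dz = sigma(s) dx

print(z.mean(axis=0))    # close to (0, 0)
print(np.cov(z.T))       # close to t * identity, as for Brownian motion at time t
```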

2.4 Ito's Formula.

The chain rule in ordinary calculus allows us to compute
$$df(t,x(t)) = f_t(t,x(t))\,dt + \nabla f(t,x(t))\cdot dx(t)$$
We replace $x(t)$ by a Brownian path, say in one dimension to keep things simple, and for $f$ take the simplest nonlinear function $f(x)=x^2$ that is independent of $t$. We are looking for a formula of the type
$$\beta^2(t) - \beta^2(0) = 2\int_0^t \beta(s)\,d\beta(s) \tag{2.9}$$
We have already defined integrals of the form
$$\int_0^t \beta(s)\,d\beta(s) \tag{2.10}$$
as Ito stochastic integrals. But still a formula of the type (2.9) cannot possibly hold. The left hand side has expectation $t$ while the right hand side, as a stochastic integral with respect to $\beta(\cdot)$, has mean zero. For Ito's theory it was important to evaluate $\beta(s)$ at the back end of the interval $[t_{j-1},t_j]$ before multiplying by the increment $\beta(t_j)-\beta(t_{j-1})$, to keep things progressively measurable. That meant the stochastic integral (2.10) was approximated by the sums
$$\sum_j \beta(t_{j-1})\big(\beta(t_j)-\beta(t_{j-1})\big)$$
over successive partitions of $[0,t]$. We could have approximated instead by sums of the form $\sum_j \beta(t_j)\big(\beta(t_j)-\beta(t_{j-1})\big)$. In ordinary calculus, because $\beta(\cdot)$ would be a continuous function of bounded variation in $t$, the difference would be negligible as the partitions became finer, leading to the same answer. But in Ito calculus the difference does not go to $0$. The difference $D_\pi$ is given by
$$D_\pi = \sum_j \beta(t_j)\big(\beta(t_j)-\beta(t_{j-1})\big) - \sum_j \beta(t_{j-1})\big(\beta(t_j)-\beta(t_{j-1})\big) = \sum_j \big(\beta(t_j)-\beta(t_{j-1})\big)^2$$
An easy computation gives
$$E\big[D_\pi\big] = t \qquad\text{and}\qquad E\big[(D_\pi - t)^2\big] = 2\sum_j (t_j - t_{j-1})^2$$
which tends to $0$ as the partition is refined.
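This concentration of $D_\pi$ around $t$ is already visible for a single simulated path. A small sketch (the path construction and partition sizes are my choices):

```python
import numpy as np

rng = np.random.default_rng(6)
t = 1.0

for n_steps in (10, 100, 1_000, 10_000, 100_000):
    dt = t / n_steps
    increments = rng.normal(0.0, np.sqrt(dt), size=n_steps)   # beta(t_j) - beta(t_{j-1})
    d_pi = np.sum(increments**2)                               # sum of squared increments
    print(n_steps, d_pi)   # approaches t = 1 as the partition is refined
```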

On the other hand, if we are neutral and approximate the integral (2.10) by
$$\sum_j \frac12\big(\beta(t_{j-1})+\beta(t_j)\big)\big(\beta(t_j)-\beta(t_{j-1})\big)$$
then we can simplify and calculate the limit as
$$\lim \sum_j \frac12\big(\beta^2(t_j)-\beta^2(t_{j-1})\big) = \frac12\big(\beta^2(t)-\beta^2(0)\big)$$
This means that (2.10), as we defined it, can be calculated as
$$\int_0^t \beta(s)\,d\beta(s) = \frac12\big(\beta^2(t)-\beta^2(0)\big) - \frac t2$$
or the correct version of (2.9) is
$$\beta^2(t)-\beta^2(0) = 2\int_0^t \beta(s)\,d\beta(s) + t$$
Now we can attempt to calculate $f(\beta(t))-f(\beta(0))$ for a smooth function $f$ of one variable. Roughly speaking, by a two term Taylor expansion,
$$f(\beta(t))-f(\beta(0)) = \sum_j \big[f(\beta(t_j))-f(\beta(t_{j-1}))\big]$$
$$= \sum_j f'(\beta(t_{j-1}))\big(\beta(t_j)-\beta(t_{j-1})\big) + \frac12\sum_j f''(\beta(t_{j-1}))\big(\beta(t_j)-\beta(t_{j-1})\big)^2 + \sum_j O\big(|\beta(t_j)-\beta(t_{j-1})|^3\big)$$
The expected value of the error term is approximately
$$\sum_j E\Big[O\big(|\beta(t_j)-\beta(t_{j-1})|^3\big)\Big] = \sum_j O\big(|t_j-t_{j-1}|^{3/2}\big) = o(1)$$
leading to Ito's formula
$$f(\beta(t))-f(\beta(0)) = \int_0^t f'(\beta(s))\,d\beta(s) + \frac12\int_0^t f''(\beta(s))\,ds \tag{2.11}$$
It takes some effort to see that
$$\sum_j f''(\beta(t_{j-1}))\big(\beta(t_j)-\beta(t_{j-1})\big)^2 \to \int_0^t f''(\beta(s))\,ds$$
But the idea is that, because $f''(\beta(s))$ is continuous in $s$, we can pretend that it is locally constant and use the calculation we did for $x^2$, where $f''$ is a constant. While we could make a proof after a careful estimation of all the errors, in fact we do not have to. After all we have already defined the stochastic integral (2.10). We should be able to verify (2.11) by computing the mean square of the difference and showing that it is $0$. In fact we will do this very generally without much effort. We have the tools already.
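Formula (2.11) can also be checked directly by simulation: approximate the stochastic integral by left-endpoint sums along a fine partition and compare the two sides. A sketch for $f(x)=\cos x$ (my choice of $f$, horizon and discretization):

```python
import numpy as np

rng = np.random.default_rng(7)
n_paths, n_steps, t = 100_000, 2_000, 1.0
dt = t / n_steps

beta = np.zeros(n_paths)         # beta(0) = 0
stoch_int = np.zeros(n_paths)    # int f'(beta) dbeta, left-endpoint sums
time_int = np.zeros(n_paths)     # int f''(beta) ds
for _ in range(n_steps):
    d_beta = rng.normal(0.0, np.sqrt(dt), size=n_paths)
    stoch_int += -np.sin(beta) * d_beta    # f'(x) = -sin x, at the left endpoint
    time_int += -np.cos(beta) * dt         # f''(x) = -cos x
    beta += d_beta

lhs = np.cos(beta) - 1.0                   # f(beta(t)) - f(beta(0))
rhs = stoch_int + 0.5 * time_int
print(np.mean(np.abs(lhs - rhs)))          # small, and shrinks as the partition is refined
```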

Theorem 2.4. Let $x(t)$ be an almost surely continuous process with values in $R^d$ such that
$$y_i(t) = x_i(t) - x_i(0) - \int_0^t b_i(s,\omega)\,ds \tag{2.12}$$
and
$$y_i(t)\,y_j(t) - \int_0^t a_{i,j}(s,\omega)\,ds \tag{2.13}$$
are martingales for $1\le i,j\le d$. For any smooth function $u(t,x)$ on $[0,\infty)\times R^d$,
$$u(t,x(t)) - u(0,x(0)) = \int_0^t u_s(s,x(s))\,ds + \int_0^t \langle(\nabla u)(s,x(s)),dx(s)\rangle + \frac12\int_0^t \sum_{i,j} a_{i,j}(s,\omega)\,\frac{\partial^2u}{\partial x_i\partial x_j}(s,x(s))\,ds$$

Proof. Let us define the stochastic process
$$\xi(t) = u(t,x(t)) - u(0,x(0)) - \int_0^t u_s(s,x(s))\,ds - \int_0^t \langle(\nabla u)(s,x(s)),dx(s)\rangle - \frac12\int_0^t \sum_{i,j} a_{i,j}(s,\omega)\,\frac{\partial^2u}{\partial x_i\partial x_j}(s,x(s))\,ds \tag{2.14}$$
We define a $(d+1)$-dimensional process $\tilde x(t) = \{u(t,x(t)), x(t)\}$ which is again a process with almost surely continuous paths satisfying relations analogous to (2.12) and (2.13) with coefficients $\tilde b$ and $\tilde a$. If we number the extra coordinate by $0$ and write $L_{s,\omega} = \frac12\sum_{i,j}a_{i,j}(s,\omega)\frac{\partial^2}{\partial x_i\partial x_j} + \sum_i b_i(s,\omega)\frac{\partial}{\partial x_i}$, then
$$\tilde b_i = \begin{cases} \big(u_s + L_{s,\omega}u\big)(s,x(s)) & \text{if } i=0\\[2pt] b_i(s,\omega) & \text{if } i\ge 1\end{cases}
\qquad
\tilde a_{i,j} = \begin{cases} \langle a(s,\omega)\nabla u,\nabla u\rangle & \text{if } i=j=0\\[2pt] \big(a(s,\omega)\nabla u\big)_i & \text{if } j=0,\ i\ge 1\\[2pt] a_{i,j}(s,\omega) & \text{if } i,j\ge 1\end{cases}$$

The actual computation is interesting and reveals the connection between ordinary calculus, second order operators and Ito calculus. If we want to know the parameters of a process of the form $v(t,\tilde x(t))$, then we need to know what to subtract from $v(t,\tilde x(t)) - v(0,\tilde x(0))$ to obtain a martingale. But $v(t,\tilde x(t)) = w(t,x(t))$, where $w(t,x) = v(t, u(t,x), x)$, and if we compute we find
$$\Big(\frac{\partial w}{\partial t} + L_{t,\omega}w\Big)(t,x) = v_t + v_u\Big(u_t + \sum_i b_i\,u_{x_i} + \frac12\sum_{i,j}a_{i,j}\,u_{x_i,x_j}\Big) + \sum_i b_i\,v_{x_i} + \frac12 v_{u,u}\sum_{i,j}a_{i,j}\,u_{x_i}u_{x_j} + \sum_{i,j} v_{u,x_i}\,a_{i,j}\,u_{x_j} + \frac12\sum_{i,j}a_{i,j}\,v_{x_i,x_j} = v_t + \tilde L_{t,\omega}v$$
where, writing the coordinates of $\tilde x$ as $y = (y_0, y_1,\dots,y_d)$,
$$\tilde L_{t,\omega}v = \sum_i \tilde b_i(s,\omega)\,v_{y_i} + \frac12\sum_{i,j}\tilde a_{i,j}(s,\omega)\,v_{y_i,y_j}$$
We can construct stochastic integrals with respect to the $(d+1)$-dimensional process $\tilde x(\cdot)$, and $\xi(t)$ defined by (2.14) is again an almost surely continuous process whose parameters can be calculated. After all
$$\xi(t) = \int_0^t \langle f(s,\omega), d\tilde x(s)\rangle + \int_0^t g(s,\omega)\,ds$$
with
$$f_i(s,\omega) = \begin{cases} 1 & \text{if } i=0\\[2pt] -(\nabla u)_i(s,x(s)) & \text{if } i\ge 1\end{cases}
\qquad
g(s,\omega) = -u_s(s,x(s)) - \frac12\sum_{i,j}a_{i,j}(s,\omega)\,\frac{\partial^2u}{\partial x_i\partial x_j}(s,x(s))$$
Denoting the parameters of $\xi(\cdot)$ by $B(s,\omega)$ and $A(s,\omega)$, we find
$$A(s,\omega) = \langle f(s,\omega),\tilde a(s,\omega)f(s,\omega)\rangle = \langle a\nabla u,\nabla u\rangle - 2\langle a\nabla u,\nabla u\rangle + \langle a\nabla u,\nabla u\rangle = 0$$
$$B(s,\omega) = \langle \tilde b, f\rangle + g = \tilde b_0(s,\omega) - \langle b(s,\omega),\nabla u(s,x(s))\rangle - u_s(s,x(s)) - \frac12\sum_{i,j}a_{i,j}(s,\omega)\,\frac{\partial^2u}{\partial x_i\partial x_j}(s,x(s)) = 0$$
Now all we are left with is the following

Lemma 2.5. If $\xi(t)$ is a scalar process corresponding to the coefficients $b\equiv 0$ and $a\equiv 0$, then $\xi(t)\equiv \xi(0)$ a.e.

Proof. Just compute
$$E\big[(\xi(t)-\xi(0))^2\big] = E\Big[\int_0^t 0\,ds\Big] = 0$$
This concludes the proof of the theorem.

Exercise 2.7. Ito's formula is a local formula that is valid for almost all paths. If $u$ is a smooth function, i.e. with one continuous $t$ derivative and two continuous $x$ derivatives, (2.11) must still be valid a.e. We cannot do it with moments, because for moments to exist we need control on growth at infinity. But it should not matter. Should it?

Application: Local time in one dimension. Tanaka formula. If $\beta(t)$ is the one dimensional Brownian motion, for any path $\beta(\cdot)$ and any $t$, the occupation measure $L_t(A,\omega)$ is defined by
$$L_t(A,\omega) = m\{s : 0\le s\le t,\ \beta(s)\in A\}$$

Theorem 2.6. There exists a function $\ell(t,y,\omega)$ such that, for almost all $\omega$,
$$L_t(A,\omega) = \int_A \ell(t,y,\omega)\,dy$$
identically in $t$.

Proof. Formally
$$\ell(t,y,\omega) = \int_0^t \delta(\beta(s)-y)\,ds$$
but we have to make sense out of it. From Ito's formula
$$f(\beta(t)) - f(\beta(0)) = \int_0^t f'(\beta(s))\,d\beta(s) + \frac12\int_0^t f''(\beta(s))\,ds$$
If we take $f(x)=|x-y|$ then $f'(x) = \mathrm{sign}\,(x-y)$ and $\frac12 f''(x) = \delta(x-y)$. We get the identity
$$|\beta(t)-y| - |\beta(0)-y| - \int_0^t \mathrm{sign}\,(\beta(s)-y)\,d\beta(s) = \int_0^t \delta(\beta(s)-y)\,ds = \ell(t,y,\omega)$$
While we have not proved the identity, we can use it to define $\ell(\cdot,\cdot,\cdot)$. It is now well defined as a continuous function of $t$, for almost all $\omega$, for each $y$, and by Fubini's theorem for almost all $y$ and $\omega$.

Now all we need to do is to check that it works. It is enough to check that for any smooth test function $\varphi$ with compact support
$$\int_R \varphi(y)\,\ell(t,y,\omega)\,dy = \int_0^t \varphi(\beta(s))\,ds \tag{2.15}$$
The function
$$\psi(x) = \int_R |x-y|\,\varphi(y)\,dy$$
is smooth and a straightforward calculation shows
$$\psi'(x) = \int_R \mathrm{sign}\,(x-y)\,\varphi(y)\,dy \qquad\text{and}\qquad \psi''(x) = 2\varphi(x)$$
It is easy to see that (2.15) is nothing but Ito's formula for $\psi$.

Remark 2.2. One can estimate
$$E\Big[\Big|\int_0^t \big(\mathrm{sign}\,(\beta(s)-y) - \mathrm{sign}\,(\beta(s)-z)\big)\,d\beta(s)\Big|^4\Big] \le C\,|y-z|^2$$
and by Garsia-Rodemich-Rumsey or Kolmogorov one can conclude that for each $t$, $\ell(t,y,\omega)$ is almost surely a continuous function of $y$.

Remark 2.3. With a little more work one can get it to be jointly continuous in $t$ and $y$ for almost all $\omega$.
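For a single simulated path the two descriptions of local time can be compared directly: an occupation-measure estimate of $\ell(t,y,\omega)$ (time spent near $y$, normalized by the window width) against the Tanaka expression $|\beta(t)-y| - |\beta(0)-y| - \int_0^t \mathrm{sign}\,(\beta(s)-y)\,d\beta(s)$. The level $y$, window width and discretization below are arbitrary choices, and the agreement is only approximate at this resolution.

```python
import numpy as np

rng = np.random.default_rng(8)
n_steps, t, y, half_width = 2_000_000, 1.0, 0.2, 0.01
dt = t / n_steps

increments = rng.normal(0.0, np.sqrt(dt), size=n_steps)
beta = np.concatenate([[0.0], np.cumsum(increments)])

# Occupation-measure estimate: time spent in (y - h, y + h), divided by the window length 2h.
occupation = np.sum(np.abs(beta[:-1] - y) < half_width) * dt / (2 * half_width)

# Tanaka expression, with the stochastic integral approximated by left-endpoint sums.
tanaka = (np.abs(beta[-1] - y) - np.abs(beta[0] - y)
          - np.sum(np.sign(beta[:-1] - y) * increments))

print(occupation, tanaka)   # two estimates of l(t, y, omega); they should be close
```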
