
Lecture 3. If we specify D_{t,ω} as a progressively measurable map of (Ω × [0, T], F_t) into the space of infinitely divisible distributions, as well as an initial distribution for the starting point x(0) = x, then we would like to associate a measure P on the space Ω of paths on [0, T].

Definition of progressively measurable: In general we definitely want our maps from Ω × [0, T] into any space (Y, Σ) to be non-anticipating, i.e. y(t, ω) should be measurable with respect to the σ-field F_t of events observable up to time t. But technically we need a little bit more. To do natural things like define η(t) = ∫_0^t f(y(s, ω)) ds and have them again be non-anticipating, one needs the joint measurability of y as a function of t and ω, which does not follow from measurability in ω alone. If the map is continuous in t for each ω, then for every t > 0 the map y : Ω × [0, t] → Y is jointly measurable with respect to the product σ-field F_t × B_t, where F_t is the σ-field of events observable up to time t and B_t is the Borel σ-field on the interval [0, t]. In general we have to assume this property, known as progressive measurability. With it, natural operations like integration can be performed and result again in functions with the same property.

How should P be related to D_{t,ω}? The intuitive infinitesimal picture that we gave is hard to deal with in a mathematically rigorous fashion. We turn instead to an essentially equivalent integral formulation with the help of martingales. Let f(x) be a smooth function on R. For any infinitely divisible distribution, or equivalently for any triplet [b, σ², M] in the Lévy-Khintchine representation, we can define the operator A acting on f by

    (Af)(x) = b f'(x) + (σ²/2) f''(x) + ∫ [ f(x + y) − f(x) − y f'(x)/(1 + y²) ] M(dy).

This operator commutes with translations and it is not hard to evaluate

    A e^{iξx} = ψ(ξ) e^{iξx}.

A is the infinitesimal generator of the semigroup T_t = e^{tA}, which is convolution by µ_t with characteristic function e^{tψ(ξ)}. In other words A is the infinitesimal generator

    A = lim_{t→0} (T_t − I)/t

and

    (T_t f)(x) = ∫ f(x + y) µ_t(dy) = E[ f(x(t)) | x(0) = x ],

where the expectation is with respect to the process with independent increments x(t), whose increments over time t have distribution µ_t and characteristic function e^{tψ(ξ)}.
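The exponent ψ can be identified by applying A directly to e^{iξx}; the short computation below is spelled out here for convenience and is consistent with the definition of A above. It identifies ψ with the Lévy-Khintchine exponent of the triplet [b, σ², M].

\[
A\, e^{i\xi x}
= \Bigl[\, i b \xi - \tfrac{1}{2}\sigma^{2}\xi^{2}
+ \int \Bigl( e^{i\xi y} - 1 - \tfrac{i\xi y}{1+y^{2}} \Bigr) M(dy) \Bigr]\, e^{i\xi x}
= \psi(\xi)\, e^{i\xi x}.
\]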

If D_{t,ω} is given, then we have a triplet [b(t, ω), σ²(t, ω), M_{t,ω}(dy)]. Assuming they depend reasonably on t, ω we have the operator

    (A_{t,ω} f)(x) = b(t, ω) f'(x) + (σ²(t, ω)/2) f''(x) + ∫ [ f(x + y) − f(x) − y f'(x)/(1 + y²) ] M_{t,ω}(dy)

and for each f, u(t, ω, x) = (A_{t,ω} f)(x) is a progressively measurable map into the space of smooth functions on R. The natural boundedness assumption is

    sup_{0 ≤ t ≤ T, ω ∈ Ω} [ |b(t, ω)| + σ²(t, ω) + ∫ y²/(1 + y²) M_{t,ω}(dy) ] < ∞.

One could in addition assume that u(t, ω, x) is a continuous functional of its variables. Then the relationship between P and D_{t,ω} can be expressed as follows. We say that P is a solution to the martingale problem corresponding to D_{t,ω}, or equivalently to [b(t, ω), σ²(t, ω), M_{t,ω}(dy)], starting from x at time 0, if for every smooth f

    Z_f(t) = f(x(t)) − f(x(0)) − ∫_0^t (A_{s,ω} f)(x(s)) ds = f(x(t)) − f(x(0)) − ∫_0^t u(s, ω, x(s)) ds

is a martingale with respect to (Ω, F_t, P) and P[x(0) = x] = 1.

Even if this is assumed only for functions f that depend on x alone, it extends automatically to functions f that depend smoothly on x and t: if we freeze t and apply A_{t,ω} to f(t, ·), with u(t, ω, x) = (A_{t,ω} f(t, ·))(x), then

    Z_f(t) = f(t, x(t)) − f(0, x(0)) − ∫_0^t [ f_s(s, x(s)) + u(s, ω, x(s)) ] ds

is a martingale with respect to (Ω, F_t, P). This is not hard to prove. We need to show E[Z_f(t) − Z_f(s) | F_s] = 0. Let us show that E^P[Z_f(t)] = 0; the conditional argument is identical. Let V(s, τ, ω, x) = (A_{τ,ω} f(s, ·))(x). Then

    E^P[Z_f(t)] = E^P[ f(t, x(t)) − f(0, x(0)) − ∫_0^t [ f_s(s, x(s)) + V(s, s, ω, x(s)) ] ds ]
                = E^P[ f(t, x(t)) − f(t, x(0)) + f(t, x(0)) − f(0, x(0)) − ∫_0^t [ f_s(s, x(s)) + V(s, s, ω, x(s)) ] ds ]
                = E^P[ ∫_0^t V(t, s, ω, x(s)) ds + ∫_0^t f_s(s, x(0)) ds − ∫_0^t [ f_s(s, x(s)) + V(s, s, ω, x(s)) ] ds ]
                = E^P[ ∫_0^t ds ∫_s^t V_τ(τ, s, ω, x(s)) dτ − ∫_0^t ds ∫_0^s V_s(s, τ, ω, x(τ)) dτ ]
                = 0.

Here the martingale property for functions of x alone was used twice: once to replace f(t, x(t)) − f(t, x(0)) by ∫_0^t V(t, s, ω, x(s)) ds, and once to replace f_s(s, x(s)) − f_s(s, x(0)) by ∫_0^s V_s(s, τ, ω, x(τ)) dτ. The last equality follows by interchanging the order of integration in the first double integral and relabeling the variables, which turns it into the second.
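As a concrete illustration (not part of the notes), the mean-zero property of Z_f can be checked numerically in the simplest diffusion case: constant coefficients b, σ, no jumps, and f(x) = x², so that (Af)(x) = 2bx + σ². The sketch below assumes only NumPy; the parameter values are arbitrary.

```python
import numpy as np

# Monte Carlo check (illustration only) that E[Z_f(T)] is approximately 0 for the
# diffusion dx = b dt + sigma dbeta with constant b, sigma and f(x) = x^2,
# so that (A f)(x) = 2 b x + sigma^2.
rng = np.random.default_rng(0)
b, sigma, x0, T = 0.3, 1.2, 1.0, 1.0
n_paths, n_steps = 200_000, 200
dt = T / n_steps

x = np.full(n_paths, x0)
integral = np.zeros(n_paths)                     # accumulates int_0^T (A f)(x(s)) ds
for _ in range(n_steps):
    integral += (2.0 * b * x + sigma**2) * dt    # left-endpoint rule
    x += b * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)

Z = x**2 - x0**2 - integral                      # Z_f(T) for f(x) = x^2
print("E[Z_f(T)] approx", Z.mean(), "+/-", Z.std() / np.sqrt(n_paths))
```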

If we use the fact that for any martingale M(t) and any function A(t) of bounded variation the quantity M(t)A(t) − ∫_0^t M(s) dA(s) is again a martingale, we can go back and forth between the additive martingales

    Z_f(t) = e^{f(t, x(t))} − e^{f(0, x(0))} − ∫_0^t [ (∂_s + A_{s,ω}) e^f ](s, x(s)) ds

and the multiplicative martingales

    Y_f(t) = exp[ f(t, x(t)) − f(0, x(0)) − ∫_0^t [ e^{−f} ((∂_s + A_{s,ω}) e^f) ](s, x(s)) ds ].

Going one way requires the choice M(t) = Z_f(t) and

    A(t) = exp[ −∫_0^t [ e^{−f} ((∂_s + A_{s,ω}) e^f) ](s, x(s)) ds ],

and a similar choice of M(t) = Y_f(t) and

    A(t) = exp[ ∫_0^t [ e^{−f} ((∂_s + A_{s,ω}) e^f) ](s, x(s)) ds ]

to go back. Let us look at the case M ≡ 0. Then

    e^{−f} ((∂_s + A_{s,ω}) e^f)(s, x) = f_s(s, x) + b(s, ω) f_x(s, x) + (σ²(s, ω)/2) f_xx(s, x) + (σ²(s, ω)/2) f_x(s, x)².

Let us take f(x) = θx. We get

    Y_θ(t) = exp[ θ [ x(t) − x(0) − ∫_0^t b(s, ω) ds ] − (θ²/2) ∫_0^t σ²(s, ω) ds ].

We need to justify this because θx is unbounded. We can truncate and pass to the limit. By Fatou's lemma we will only get that Y_θ(t) is a supermartingale. This is enough to get the bound

    E^P[Y_θ(t)] ≤ 1.

Since σ² is bounded by C,

    E^P[ exp{ θ [ x(t) − x(0) − ∫_0^t b(s, ω) ds ] } ] ≤ e^{C t θ²/2}.
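For orientation (not part of the notes): when b and σ are constants, the exponential martingale can be sampled directly and its expectation checked by Monte Carlo. The sketch below assumes only NumPy; the parameters are arbitrary.

```python
import numpy as np

# Monte Carlo check (illustration only) that
#   Y_theta(T) = exp( theta [x(T) - x(0) - int_0^T b ds] - (theta^2/2) int_0^T sigma^2 ds )
# has expectation 1 when b and sigma are constants, i.e. x(t) = x(0) + b t + sigma beta(t).
rng = np.random.default_rng(1)
b, sigma, theta, T = 0.5, 1.0, 0.8, 1.0
n_paths = 500_000

beta_T = np.sqrt(T) * rng.standard_normal(n_paths)   # beta(T)
centered = sigma * beta_T                            # x(T) - x(0) - b*T
Y = np.exp(theta * centered - 0.5 * theta**2 * sigma**2 * T)
print("E[Y_theta(T)] approx", Y.mean())              # should be close to 1
```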

By Chebyshev's inequality the exponential bound above gives a Gaussian-type estimate

    P[ |x(t) − x(0) − ∫_0^t b(s, ω) ds| ≥ l ] ≤ A e^{−l²/(Ct)},

which can then be used to get uniform integrability and actually show that Y_θ(t) is a martingale.

Remark. If we just knew that the Y_θ(t) are martingales, it is easy to see that one can continue analytically and replace θ by iθ. We can then go from Y_{iθ}(t) to Z_{iθ}(t). By Fourier synthesis it is easy to pass from {e^{iθx} : θ ∈ R} to smooth functions f.

In terms of Y_θ(t) it is easy to see that y(t) = x(t) − ∫_0^t b(s, ω) ds corresponds to [0, σ²(t, ω)]. Since we have uniform Gaussian estimates, of the form

    E[ e^{θ [y(t) − y(0)]} ] ≤ e^{C t θ²/2},

on any y(t) corresponding to [0, σ²(t, ω)] with a bounded σ², one can get estimates of the form

    E[ (y(t) − y(0))^{2n} ] ≤ c_n t^n   or   E[ (y(t) − y(s))^{2n} ] ≤ c_n |t − s|^n,

proving, by Kolmogorov's theorem, tightness for any family {P_α} that corresponds to [0, σ²_α(t, ω)], provided there is a uniform bound on {σ²_α(t, ω)} as well as on the starting points {x_α}. Adding b causes no problem, because the functions {∫_0^t b_α(s, ω) ds} are uniformly Lipschitz if we have a uniform bound on {b_α(t, ω)}.

We will use this to prove existence of P for given b, σ². Let us do this for the Markovian case, where b(t, ω), σ(t, ω) are given by b(t, x(t)) and σ(t, x(t)). We assume that b, σ are bounded continuous functions of t, x. If b and σ are constants, then bt + σβ(t), where β(t) is Brownian motion, will do it. We split the interval [0, T] into subintervals of length h and run x(t) = x + b(0, x) t + σ(0, x) β(t) up to time h. Then conditionally, on the interval [h, 2h], we run x(t) = x(h) + b(h, x(h))(t − h) + σ(h, x(h))[β(t) − β(h)]. The role of β(t) − β(h) is to give us a Brownian increment independent of the past. We continue like this until time T. It is not hard to check, by induction if you want to be meticulous, that we now have a process P_h that is a solution to the martingale problem corresponding to [b_h(t, ω), σ²_h(t, ω)], where

    b_h(t, ω) = b(π_h(t), x(π_h(t))),   σ²_h(t, ω) = σ²(π_h(t), x(π_h(t)))

and π_h(t) = sup{s : s ≤ t, s = kh} is the last time of updating. Since the starting points are the same and {b_h, σ²_h} are uniformly bounded, the family {P_h} is tight. It is not hard to see that any limit point P of {P_h} as h → 0 will work.
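The piecewise construction of P_h above is exactly an Euler scheme with the coefficients frozen at the last update time. A minimal simulation sketch follows; the particular b and σ used are illustrative bounded continuous choices, not taken from the notes.

```python
import numpy as np

# Minimal sketch of the approximating process P_h: on each interval [kh, (k+1)h] the
# coefficients are frozen at (kh, x(kh)) and the path is advanced by an independent
# Brownian increment.  The coefficients b, sigma below are illustrative choices.
def b(t, x):
    return np.sin(x)                     # bounded continuous drift

def sigma(t, x):
    return 1.0 + 0.1 * np.cos(x)         # bounded continuous diffusion coefficient

def sample_path(x0, T, h, rng):
    n = int(round(T / h))
    path = np.empty(n + 1)
    path[0] = x0
    for k in range(n):
        t_k, x_k = k * h, path[k]
        dbeta = np.sqrt(h) * rng.standard_normal()   # beta((k+1)h) - beta(kh)
        path[k + 1] = x_k + b(t_k, x_k) * h + sigma(t_k, x_k) * dbeta
    return path

rng = np.random.default_rng(2)
print(sample_path(x0=0.0, T=1.0, h=0.01, rng=rng)[-1])   # x(T) along one sample path
```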

We have

    E^{P_h}[ Z^h_f(t) − Z^h_f(s) | F_s ] = 0,

P_h ⇒ P and Z^h_f → Z_f. One could easily pass to the limit if it were not for the conditioning. But all the conditioning amounts to is the relation

    E^{P_h}[ (Z^h_f(t) − Z^h_f(s)) G(ω) ] = 0

for every bounded F_s-measurable function G. It is enough to check this for all bounded continuous G, and in that case we can pass to the limit under the weak convergence of P_h to P.

Uniqueness, Markov property, etc. Suppose we can solve the PDE

    u_s + b(s, x) u_x(s, x) + (σ²(s, x)/2) u_xx(s, x) = 0;   u(t, x) = f(x)

for 0 ≤ s ≤ t, and for any t ≤ T, for a large class of functions f, and get a smooth solution. Then for any solution P of the martingale problem corresponding to b(t, x), σ²(t, x) that starts from x at time s,

    u(s, x) = E^P[ f(x(t)) ].

Another way of saying this is that if we denote by C_{s,x} the set of all solutions to the martingale problem corresponding to b(·,·) and σ²(·,·) starting from x at time s, then for any P ∈ C_{s,x}, E^P[f(x(t))] is given by u(s, x), where u is a solution of the PDE above. If the solution to the PDE exists for sufficiently many f, then the distribution of x(t) under any P ∈ C_{s,x} is determined and is given by a transition probability p(s, x, t, dy) that satisfies

    u(s, x) = ∫ f(y) p(s, x, t, dy)

for all s < t ≤ T. One can alternatively solve

    u_s(s, x) + b(s, x) u_x(s, x) + (σ²(s, x)/2) u_xx(s, x) + f(s, x) = 0 in [0, T] × R;   u(T, x) = 0.

Then for any P ∈ C_{s,x} we would get

    u(s, x) = E^P[ ∫_s^T f(t, x(t)) dt ],

and that would determine P[x(t) ∈ A] as well.
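As a quick consistency check (not in the notes): with constant coefficients b ≡ 0, σ ≡ 1 and f(x) = x², the PDE has an explicit solution and reproduces the expectation one computes directly for Brownian motion started at x at time s.

\[
u_s + \tfrac{1}{2} u_{xx} = 0,\quad u(t,x) = x^{2}
\;\Longrightarrow\;
u(s,x) = x^{2} + (t-s) = E\bigl[x(t)^{2} \mid x(s) = x\bigr].
\]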

To determine P completely, as well as to prove the Markov property, it suffices to show that if P ∈ C_{s,x} and s < t < T, then the regular conditional probability distribution Q_{t,ω} of P given F_t is in C_{t, x(t)} for almost all ω with respect to P. This amounts to proving, for t ≤ t′ < t″ ≤ T,

    E^{Q_{t,ω}}[ Z_f(t″) − Z_f(t′) | F_{t′} ] = 0   a.e. Q_{t,ω}, a.e. P.

In other words, for A ∈ F_t and B ∈ F_{t′},

    ∫_A [ ∫_B ( Z_f(t″) − Z_f(t′) ) dQ_{t,ω} ] dP = 0.

But this equals

    ∫_{A∩B} ( Z_f(t″) − Z_f(t′) ) dP = 0

because A ∩ B ∈ F_{t′}. There are problems with sets of measure zero that now depend on f, A, B, t′, t″, but countably many choices can be made to produce a single null set, and then the result is extended by continuity. By this argument, if the PDE has enough solutions, then P ∈ C_{s,x} is Markov with transition probability p(s, x, t, dy) and is unique. The same argument applies to stopping times, by Doob's theorem on stopping times for martingales, and we deduce the strong Markov property as well: once we know τ(ω) and x(τ(ω), ω), the conditional process from then on is the unique probability measure P_{τ, x(τ)} ∈ C_{τ, x(τ)}.

Lecture 4. One could in a similar fashion consider Markov processes with jumps, determined by b(t, x), σ²(t, x), M(t, x, dy) and the corresponding operator

    (A_s f)(x) = b(s, x) f_x(x) + (σ²(s, x)/2) f_xx(x) + ∫ [ f(x + y) − f(x) − y f_x(x)/(1 + y²) ] M(s, x, dy).

Solutions to the martingale problem will be determined in exactly the same manner, by solving, instead of the PDE, the integro-differential equation

    u_s(s, x) + (A_s u(s, ·))(x) = 0;   u(t, x) = f(x)

or

    u_s(s, x) + (A_s u(s, ·))(x) + f(s, x) = 0;   u(T, x) = 0

for sufficiently many functions f. The martingales involved, as well as the process itself, are only right continuous with left limits and may have jumps. Otherwise the theory is very similar. For proving existence one works in the Skorohod space D[0, T] instead of C[0, T], which is the natural space for diffusions, i.e. Markov processes without jumps.

In the homogeneous or time-independent case we have the operator

    (Af)(x) = b(x) f_x(x) + (σ²(x)/2) f_xx(x) + ∫ [ f(x + y) − f(x) − y f_x(x)/(1 + y²) ] M(x, dy).

In order to construct the semigroup T_t with generator A one can, in semigroup theory, solve for the resolvent

    R_λ = (λI − A)^{−1} = ∫_0^∞ e^{−λt} T_t dt

and reconstruct T_t from R_λ as

    T_t = lim_{λ→∞} e^{tλ(λR_λ − I)}.
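For orientation (a standard example, not worked out in the notes): when M ≡ 0, b ≡ 0 and σ ≡ 1, so that A = (1/2) d²/dx² and T_t is the Brownian semigroup, the resolvent has an explicit kernel,

\[
(R_\lambda f)(x) = \int_0^\infty e^{-\lambda t}\,(T_t f)(x)\,dt
= \int \frac{1}{\sqrt{2\lambda}}\, e^{-\sqrt{2\lambda}\,|x-y|}\, f(y)\, dy ,
\]

and one checks directly that u = R_λ f satisfies λu − (1/2) u″ = f.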

But one can use martingales instead of semigroup theory. If the equation

    λ u_λ(x) − (A u_λ)(x) = f(x)

has a solution, then w(s, x) = e^{−λs} u_λ(x) solves

    w_s + (A w(s, ·))(x) + e^{−λs} f(x) = 0.

Therefore, with respect to any P ∈ C_{0,x},

    w(t, x(t)) + ∫_0^t e^{−λs} f(x(s)) ds

is a martingale. Equating expectations at times 0 and t, and letting t → ∞ (since w(t, x) → 0 as t → ∞), we get

    u_λ(x) = w(0, x) = E^P[ ∫_0^∞ e^{−λs} f(x(s)) ds ].

From the uniqueness theorem for Laplace transforms we determine E^P[f(x(t))] = ∫ f(y) q(t, x, dy). The rest proceeds like the inhomogeneous case. The transition probabilities p(s, x, t, dy) are given by q(t − s, x, dy).

Stochastic Integrals, Itô's formula. Although we have seen examples of this, it is useful to introduce the following general concept. We have a probability space (Ω, F, P) and an increasing family F_t of sub-σ-fields. One can assume that F is generated by ∪_t F_t. Limiting ourselves to the continuous case, given x(t, ω), b(t, ω) and σ²(t, ω) that are progressively measurable, we say that x(·) is an Itô process corresponding to b(t, ω), σ²(t, ω) on (Ω, F_t, P) if x(t, ω) is almost surely continuous and the Z_f(t) are martingales with respect to (Ω, F_t, P). We denote this in symbols by x(·) ∈ I[b(·,·), σ²(·,·)]. We know, for instance, that if x(·) ∈ I[b(·,·), σ²(·,·)] and y(t) = x(t) − ∫_0^t b(s, ω) ds, then y(·) ∈ I[0, σ²(·,·)]. If we want to define stochastic integrals with respect to x(·), since x(t) − y(t) is of bounded variation we can always assume that b = 0.

Simple functions. A simple function relative to a partition 0 = t_0 < t_1 < ... < t_n = T is a bounded function c(s, ω) which equals c(t_{j−1}, ω) on the interval [t_{j−1}, t_j), with c(t_{j−1}, ω) being F_{t_{j−1}}-measurable. One can define the stochastic integral ξ(t, ω) by

    ξ(t) = ∫_0^t c(s) dx(s) = Σ_{i=1}^{j−1} c(t_{i−1}, ω) [ x(t_i) − x(t_{i−1}) ] + c(t_{j−1}, ω) [ x(t) − x(t_{j−1}) ]   for t_{j−1} ≤ t ≤ t_j.
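The definition above can be exercised numerically: for Brownian motion x(t) = β(t) (so b = 0, σ ≡ 1) and a simple integrand that looks only at the path up to the left endpoint of each interval, a short Monte Carlo experiment also anticipates the L² identity E^P[ξ(T)²] = E^P[∫_0^T c²(s, ω) σ²(s, ω) ds] established in the next paragraph. This is an illustration only, assuming nothing beyond NumPy.

```python
import numpy as np

# Illustration: stochastic integral of a simple integrand against Brownian motion,
# checking E[xi(T)] ~ 0 and E[xi(T)^2] ~ E[int_0^T c(s)^2 ds] (sigma = 1 here).
rng = np.random.default_rng(3)
T, n_grid, n_paths = 1.0, 100, 50_000
dt = T / n_grid

dbeta = np.sqrt(dt) * rng.standard_normal((n_paths, n_grid))
beta = np.cumsum(dbeta, axis=1)
beta_left = np.concatenate([np.zeros((n_paths, 1)), beta[:, :-1]], axis=1)

# c is constant on each grid interval and depends only on the path up to the left
# endpoint (so it is F_{t_{j-1}}-measurable): here c(t_{j-1}) = +/-1, the sign of beta.
c = np.where(beta_left >= 0.0, 1.0, -1.0)

xi_T = np.sum(c * dbeta, axis=1)        # sum_j c(t_{j-1}) [x(t_j) - x(t_{j-1})]
print("E[xi(T)]   approx", xi_T.mean())
print("E[xi(T)^2] approx", np.mean(xi_T**2),
      "vs E[int c^2 ds] =", np.mean(np.sum(c**2 * dt, axis=1)))
```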

Clearly ξ(t) is continuous, and it is not hard to check that it is an Itô process corresponding to [0, c²(s, ω) σ²(s, ω)]. To see this we need to note that since

    E^P[ e^{θ (x(t_j) − x(t_{j−1})) − (θ²/2) ∫_{t_{j−1}}^{t_j} σ²(s, ω) ds} | F_{t_{j−1}} ] = 1

for all θ, we can take for θ a function θ c(t_{j−1}, ω) which is F_{t_{j−1}}-measurable. It is now easy to check that ξ(t) is a martingale and

    E^P[ξ(t)] = 0;   E^P[ξ²(t)] = E^P[ ∫_0^t c²(s, ω) σ²(s, ω) ds ].

Bounded progressively measurable integrands. If c is bounded and progressively measurable, it can be approximated by a sequence c_n of simple functions such that

    lim_{n→∞} E^P[ ∫_0^T |c_n(s, ω) − c(s, ω)|² ds ] = 0.

For h > 0 one defines

    c_h(s, ω) = (1/h) ∫_{s−h}^s c(u, ω) du

if s > h, and 0 otherwise. Then the c_h are uniformly bounded and ∫_0^T |c_h(s, ω) − c(s, ω)|² ds → 0 a.e. P. Therefore

    E^P[ ∫_0^T |c_h(s, ω) − c(s, ω)|² ds ] → 0 as h → 0.

Now c_h is continuous in s and can be easily approximated by defining c_{h,n}(s, ω) to be c_h(t_{j−1}, ω) on [t_{j−1}, t_j]. We approximate c by c_h and c_h by c_{h,n}. Now if E^P[∫_0^T |c_n(s, ω) − c(s, ω)|² ds] → 0, then by Doob's inequality

    lim_{n,m→∞} E^P[ sup_{0 ≤ t ≤ T} |ξ_n(t) − ξ_m(t)|² ] = 0.

By taking a subsequence if needed, there is an almost surely uniform limit of ξ_n(t) = ∫_0^t c_n(s, ω) dx(s), and we define it to be ξ(t) = ∫_0^t c(s, ω) dx(s). It is not hard to check that ξ(t) ∈ I[0, c²(s, ω) σ²(s, ω)].

Further extensions. One can define ξ(t) = ∫_0^t c(s, ω) dx(s) for unbounded c(s, ω) provided E^P[∫_0^T c²(s, ω) ds] < ∞. Since σ²(s, ω) is bounded, a simple truncation of c exhibits ξ as the square-integrable limit of the integrals of the bounded approximations obtained by truncation. All the stochastic integrals are almost surely continuous, square-integrable martingales with

    E^P[ ξ²(t) ] = E^P[ ∫_0^t c²(s, ω) σ²(s, ω) ds ].

One gets the exponential martingales only if c(s, ω) σ(s, ω) is bounded. The stochastic integrals are still Itô processes, but correspond to unbounded parameters and can be defined through the martingales Z_f(t).

Multi-dimensional versions. The stochastic process is R^n-valued. The defining parameters are b with components b_i, 1 ≤ i ≤ n, a positive semi-definite symmetric matrix {a_{i,j}(s, ω)} and possibly a Lévy measure M(s, ω, dy) on R^n \ {0}. The operator A_{s,ω} is given by

    (A_{s,ω} f)(x) = Σ_i b_i(s, ω) f_{x_i}(x) + (1/2) Σ_{i,j} a_{i,j}(s, ω) f_{x_i x_j}(x) + ∫ [ f(x + y) − f(x) − ⟨y, (∇f)(x)⟩/(1 + |y|²) ] M(s, ω, dy).

The exponential martingales are

    exp[ ⟨θ, x(t) − x(0)⟩ − ∫_0^t ⟨θ, b(s, ω)⟩ ds − (1/2) ∫_0^t ⟨θ, a(s, ω) θ⟩ ds ].

The PDE, in the Markov case, is

    u_s + ⟨b(s, x), ∇u(s, x)⟩ + (1/2) Σ_{i,j} a_{i,j}(s, x) u_{x_i x_j}(s, x) = 0,   u(t, x) = f(x).

The stochastic integrals take the form y(t) = ∫_0^t c(s, ω) dx(s), where c is an m × n matrix. Then y takes values in R^m. The definitions are simple enough; we do it componentwise:

    y_i(t) = Σ_j ∫_0^t c_{i,j}(s, ω) dx_j(s).

If x(t) ∈ I[b, a], then y(t) ∈ I[c b, c a c*].
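For illustration (not from the notes): with a constant matrix integrand c and x a Brownian motion with constant covariance matrix a, so that x ∈ I[0, a] and y(t) = ∫_0^t c dx(s) = c x(t), one can verify numerically that the covariance of y(T) matches T · c a cᵀ, in line with y ∈ I[cb, cac*]. The particular matrices below are arbitrary.

```python
import numpy as np

# Illustration: y(t) = int_0^t c dx(s) = c x(t) for constant c and x a Brownian motion
# with covariance matrix a; the empirical covariance of y(T) should be T * (c a c^T).
rng = np.random.default_rng(4)
T, n_paths = 1.0, 200_000

a = np.array([[1.0, 0.3],
              [0.3, 0.5]])               # diffusion matrix of x (n = 2)
c = np.array([[1.0, 2.0],
              [0.0, 1.0],
              [1.0, -1.0]])              # m x n = 3 x 2 integrand

L = np.linalg.cholesky(a)
x_T = np.sqrt(T) * (L @ rng.standard_normal((2, n_paths)))   # x(T), covariance T*a
y_T = c @ x_T                                                # y(T) = c x(T)

print("empirical cov(y(T)) / T:\n", np.cov(y_T) / T)
print("c a c^T:\n", c @ a @ c.T)
```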

Itô's formula. Let f(t, x) be a smooth function and let x(t) ∈ I[b(s, ω), σ²(s, ω)] with bounded b and σ. We define y(t) = f(t, x(t)) and wish to consider the pair (x(t), y(t)) as a two-dimensional Itô process. Its characteristic parameters will be [b̃, C], a two-vector b̃ and a positive semidefinite 2 × 2 matrix C. We can compute them by applying a smooth function F of two variables to (x, f(t, x)) and calculating

    ∂F(x, f(t, x))/∂t = F_2 f_t

and

    F_t + A_{s,ω} F = F_2 f_t + b(s, ω) [ F_1 + F_2 f_x ] + (σ²(s, ω)/2) [ F_{11} + 2 F_{12} f_x + F_{22} f_x² + F_2 f_xx ].

One can read off from this

    b̃ = ( b(s, ω), f_s(s, x(s)) + b(s, ω) f_x(s, x(s)) + (σ²(s, ω)/2) f_xx(s, x(s)) )

and

    C = ( σ²(s, ω)                  σ²(s, ω) f_x(s, x(s))
          σ²(s, ω) f_x(s, x(s))     σ²(s, ω) f_x²(s, x(s)) ).

If this is not clear, one can do it with the exponential martingale, which makes it more transparent:

    exp[ λ_1 x(t) + λ_2 f(t, x(t)) − ∫_0^t G(λ_1, λ_2, s, ω) ds ]

is a martingale, provided

    G(λ_1, λ_2, s, ω) = exp[ −λ_1 x(s) − λ_2 f(s, x(s)) ] ((∂_s + A_{s,ω}) exp[ λ_1 x + λ_2 f(s, x) ])(s, x(s)).

G will be a second-degree polynomial in λ_1, λ_2, and the coefficients of the linear and quadratic terms determine b̃ and C. Now we can define the stochastic integral

    η(t) = ∫_0^t ⟨ (c_1(s, ω), c_2(s, ω)), (dx(s), dy(s)) ⟩ = ∫_0^t c_1(s, ω) dx(s) + ∫_0^t c_2(s, ω) dy(s)

and its parameters are

    b̂ = c_1(s, ω) b(s, ω) + c_2(s, ω) [ f_s(s, x(s)) + b(s, ω) f_x(s, x(s)) + (σ²(s, ω)/2) f_xx(s, x(s)) ]

and

    â = c_1²(s, ω) σ²(s, ω) + 2 c_1(s, ω) c_2(s, ω) σ²(s, ω) f_x(s, x(s)) + c_2²(s, ω) σ²(s, ω) f_x²(s, x(s)).

If we choose c_1(s, ω) = −f_x(s, x(s)) and c_2 = 1, then â = 0 and b̂ = f_s(s, x(s)) + (σ²(s, ω)/2) f_xx(s, x(s)). In particular, if

    z(t) = y(t) − y(0) − ∫_0^t [ f_s(s, x(s)) ds + f_x(s, x(s)) dx(s) + (σ²(s, ω)/2) f_xx(s, x(s)) ds ],

then z(t) ∈ I[0, 0] and is in fact zero.
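Itô's formula can also be checked pathwise in the simplest case (illustration only, not from the notes): x(t) = β(t), so b = 0 and σ ≡ 1, with f(x) = x². Discretizing the stochastic integral at left endpoints, the process z(t) above is numerically negligible.

```python
import numpy as np

# Pathwise check of Ito's formula for x(t) = beta(t) (b = 0, sigma = 1) and f(x) = x^2:
#   z(T) = f(beta(T)) - f(beta(0)) - [ int_0^T f_x(beta(s)) dbeta(s) + int_0^T (1/2) f_xx ds ]
# should be close to 0 (it vanishes in the limit of mesh size dt -> 0).
rng = np.random.default_rng(5)
T, n_steps = 1.0, 1_000_000
dt = T / n_steps

dbeta = np.sqrt(dt) * rng.standard_normal(n_steps)
beta = np.concatenate([[0.0], np.cumsum(dbeta)])

ito_integral = np.sum(2.0 * beta[:-1] * dbeta)   # int f_x(x(s)) dx(s), left endpoints
correction = T                                   # int (1/2) f_xx ds = T, since f_xx = 2
z_T = beta[-1]**2 - beta[0]**2 - (ito_integral + correction)
print("z(T) =", z_T)                             # close to 0
```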
