HETEROGENEITY IN QUANTITATIVE MACROECONOMICS, TSE                                OCTOBER 17, 2016

STOCHASTIC CALCULUS BASICS

SANG YOON (TIM) LEE

Very simple notes (need to add references). They are NOT meant to be a substitute for a real course in stochastic calculus; they just list heuristic derivations of the tools most often used in economics. Ito calculus is a lot more than just dealing with Poisson jumps and Wiener processes. Some abuses of notation are included without clarification.

1. Stochastic Process

A stochastic process is a collection of random variables (measurable functions) {X_t : t ∈ T}, X_t : Ω → S, ordered by t (time), together with a measurable space (S, Σ). The probability space (Ω, F, P) denotes, respectively, the "state space" (the set of all possible histories), the σ-algebra that contains all possible sets (Borel sets) of histories induced by Ω, and the probability measure over F. The space (S, Σ) contains the range of the function X_t : Ω → S and its corresponding σ-algebra. For most of our applications, X_t ∈ R or R_+.

If X_t is measurable, any process induces a measure P_t that we can construct using the original probability space. This calls for the notion of a filtration: a weakly increasing collection of sub-σ-algebras {F_t, t ∈ T}, s.t. F_s ⊂ F_t ⊂ F for all s < t ∈ T. The process X is adapted to the filtration {F_t}_{t∈T} if X_t is F_t-measurable. This just means that for any X_t, I can compute the probability using only F_t and not all of F. Hence, a well-defined stochastic process is always adapted to its natural filtration

    F_t = σ({ X_s^{-1}(A) : s ≤ t, A ∈ Σ }).

This just means that for any history of X up to time t, all possibly realizable trajectories can be mapped back into a subset of F_t, so that I can compute its probability for all points up to time t. This generates an induced probability measure over X.

EXAMPLE 1  Let Ω = [0, 1]^∞. Then any ω ∈ Ω is just a coordinate on the infinite-dimensional unit cube. If we let X_t : Ω → S denote the t-th coordinate, S is just the unit interval [0, 1]. If we construct, say, the probability measure so that P = P_1 × P_2 × ⋯, where each P_t is the uniform distribution, then X_t is i.i.d. uniform.
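
To make Example 1 concrete, here is a minimal Python/numpy simulation sketch (an illustration only; the number of dates and histories, and the truncation of the cube to a few coordinates, are arbitrary choices, not part of the notes):

    # Example 1 as a simulation: omega is a point in the "infinite-dimensional unit cube"
    # (truncated to a few coordinates here), X_t(omega) is the t-th coordinate, and with a
    # product of uniform measures the X_t come out i.i.d. Uniform(0,1).
    import numpy as np

    rng = np.random.default_rng(0)
    n_histories, n_dates = 100_000, 5
    omega = rng.uniform(size=(n_histories, n_dates))     # each row is one history omega

    X = {t: omega[:, t] for t in range(n_dates)}         # X_t(omega) = t-th coordinate of omega

    print("mean of X_2 (should be ~0.5):", X[2].mean())
    print("corr(X_1, X_3) (should be ~0):", np.corrcoef(X[1], X[3])[0, 1])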

2. Poisson (Jump) Process

Let N_t be the random variable equal to the number of "hits" up to time t. The (adapted) state space is R^[0,t] and the range is the set of all right-continuous paths that increase by 1. Now define the probability measure over ω as Poisson:

    P{N_t - N_s = n} = [(λ(t-s))^n / n!] exp(-λ(t-s)),

where λ is the rate of arrival. This is what is usually called the Poisson process. (Not to be confused with what we use more often in economics: X_t is a Compound Poisson Process (CPP) if it changes to some value at rate λ_t, studied below. In fact this is a new random variable in which X_t changes to some value if N_t > N_s for all s < t, and you could redefine the probability space to the histories of N_t rather than R^[0,t]. This is the set of all right-continuous paths that increase by 1.)

More typically, the Poisson process is defined as a counting process:

DEFINITION 1  A continuous-time stochastic process N_t is Poisson if
1. N_t is a counting process:
   (a) N_t lives in (Z_+, 2^{Z_+}) for all t ≥ 0,
   (b) N_s ≤ N_t for all s ≤ t,
   (c) lim_{s↓t} N_s - lim_{s↑t} N_s ≤ 1 for all t ≥ 0; that is, no two hits can happen simultaneously;
2. N_0 = 0 a.s.;
3. N is a stochastic process with stationary, independent increments.

The two definitions are equivalent; there are many other definitions as well, but I refer you to the internet. It is easier to show that the earlier definition implies the counting process: by definition, increments are independent. The probability of getting 0, 1, or 2 or more hits in a time interval dt > 0 is

    P(N_{t+dt} - N_t = 0) = exp(-λ dt)                    = 1 - λ dt + o(dt),
    P(N_{t+dt} - N_t = 1) = λ dt exp(-λ dt)               = λ dt - λ² dt² + o(dt) ≈ λ dt,
    P(N_{t+dt} - N_t ≥ 2) = (λ dt)² e^{-λ dt}/2 + o(dt)   = o(dt).

Clearly, as dt → 0 the probability that anything at all happens in an interval of length dt (equivalently in (t, t + dt], since the increments are independent and stationary) goes to 0.
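
A quick Monte Carlo check of these small-interval probabilities (a sketch; λ, dt and the number of draws are arbitrary choices):

    # Check P(N_{t+dt}-N_t = 0) ~ 1 - lam*dt, P(= 1) ~ lam*dt, P(>= 2) = o(dt)
    # by drawing many Poisson increments over an interval of length dt.
    import numpy as np

    rng = np.random.default_rng(1)
    lam, dt, n_draws = 2.0, 0.01, 1_000_000

    inc = rng.poisson(lam * dt, size=n_draws)        # stationary, independent increments
    print("P(=0) :", np.mean(inc == 0), " vs exp(-lam*dt)        =", np.exp(-lam * dt))
    print("P(=1) :", np.mean(inc == 1), " vs lam*dt*exp(-lam*dt) =", lam * dt * np.exp(-lam * dt))
    print("P(>=2):", np.mean(inc >= 2), " vs (lam*dt)^2/2        =", (lam * dt) ** 2 / 2)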

Conversely, one way to make sense of the counting process is to realize that stationarity implies E[N(T)]/T = lim_{T→∞} N(T)/T = λ, and instead of sending T to infinity, send the number of intervals dt in (0, T] to infinity to get that the expected number of hits in any interval of length dt is λ dt:

    E[dN_t] = E[N(dt)] = λ dt = 0·P(N(dt) = 0) + 1·P(N(dt) = 1) + ∑_{n=2}^∞ n·P(N(dt) = n)
                              = P(N(dt) = 1),

since two hits cannot occur at the same time. This is important later when we derive the stochastic HJB equation.

2.1 Compound Poisson Process

Now define a jump process over the underlying Poisson process: let X_t be a r.v. that is γ_a if N_t is even and γ_b if N_t is odd. Heuristically, starting from X_t = γ_a,

    E[dX_t] = 0·exp(-λ dt) + (γ_b - γ_a)·λ dt exp(-λ dt) + 0·(λ dt)² exp(-λ dt)/2 + ⋯
    E[Ẋ_t] = lim_{dt→0} (γ_b - γ_a) λ exp(-λ dt) = λ(γ_b - γ_a).

More generally, let {Z_k}_{k≥1} be an i.i.d. ordered sequence of random variables with measure G_z(z), independent of the Poisson process N_t. Let X_t be a continuous-time stochastic process that is a function of (N_t, Z_k), and define

    X_t = ∑_{k=1}^{N_t} Z_k.

Then

    E[X_t] = ∑_{n=1}^∞ E[ ∑_{k=1}^n Z_k | N_t = n ] P(N_t = n)
           = ∑_{n=1}^∞ E[ ∑_{k=1}^n Z_k ] e^{-λt} (λt)^n / n!
           = μ_Z ∑_{n=1}^∞ n e^{-λt} (λt)^n / n!
           = μ_Z λt ∑_{n=1}^∞ e^{-λt} (λt)^{n-1} / (n-1)!
           = λ t μ_Z,

and

    dX_t = Z_{N_t} dN_t   ⟺   X_t = ∫_0^t Z_{N_s} dN_s,

assuming X_0 = 0.
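
A small simulation check that E[X_t] = λ t μ_Z for a compound Poisson process (a sketch; the exponential jump-size distribution and all parameters are arbitrary examples):

    # X_t = sum_{k=1}^{N_t} Z_k with N_t ~ Poisson(lam*t) and Z_k i.i.d. with mean mu_Z,
    # so the sample mean of X_t should be close to lam * t * mu_Z.
    import numpy as np

    rng = np.random.default_rng(2)
    lam, t, mu_Z, n_paths = 1.5, 2.0, 0.3, 50_000

    N = rng.poisson(lam * t, size=n_paths)                              # number of hits by time t
    X = np.array([rng.exponential(mu_Z, size=k).sum() for k in N])      # one draw of X_t per path

    print("sample mean of X_t:", X.mean())
    print("theory lam*t*mu_Z :", lam * t * mu_Z)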

2.2 Stochastic Integral with Poisson

First consider a function f(N_t). The integral is easy to write as

    f(N_t) - f(0) = ∑_{k=1}^{N_t} [ f(k) - f(k-1) ]
                  = ∫_0^t [ f(N_s) - f(N_s - 1) ] dN_s
                  = ∫_0^t [ f(1 + N_{s-}) - f(N_{s-}) ] dN_s
                  = ∫_0^t [ f(N_s) - f(N_{s-}) ] dN_s,

where N_{s-} is the left limit of the Poisson process, and only one jump occurs in any dt by definition (or construction) of the Poisson process.

For the compound process, recall that the waiting time for the k-th hit of the Poisson process, T_k, is also a random variable s.t. the event {T_k > t} coincides with {N_t ≤ k-1}; in particular this means that T_k - T_{k-1} is an i.i.d. process by definition. For k = 1, the waiting time follows an exponential distribution. For k > 1,

    P(T_k > t) = λ ∫_t^∞ e^{-λs} (λs)^{k-1} / (k-1)! ds,        (1)

since

    P(T_k > t) = P(T_k > t ≥ T_{k-1}) + P(T_{k-1} > t)
               = P(N_t = k-1) + λ ∫_t^∞ e^{-λs} (λs)^{k-2} / (k-2)! ds
               = e^{-λt} (λt)^{k-1} / (k-1)! + λ ∫_t^∞ e^{-λs} (λs)^{k-2} / (k-2)! ds,

and integration by parts leads to (1). Using waiting times, the stochastic integral of a function of a compound Poisson process Y_t can be written

    f(Y_t) - f(Y_0) = ∑_{k=1}^{N_t} [ f(Y_{T_k-} + Z_k) - f(Y_{T_k-}) ]
                    = ∫_0^t [ f(Y_{s-} + Z_{N_s}) - f(Y_{s-}) ] dN_s
                    = ∫_0^t [ f(Y_s) - f(Y_{s-}) ] dN_s
                    = ∫_0^t [ f(Y_s) - f(Y_{s-}) ] (dN_s - λ ds) + λ ∫_0^t [ f(Y_s) - f(Y_{s-}) ] ds.
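
The waiting-time characterization is easy to check numerically: build T_k as a sum of k i.i.d. exponential waiting times and compare P(T_k > t) with the tail in (1), which equals P(N_t ≤ k-1). (A sketch; λ, k, t and the number of paths are arbitrary.)

    # T_k = sum of k i.i.d. Exponential(lam) waiting times; {T_k > t} = {N_t <= k-1}, so
    # P(T_k > t) = sum_{n=0}^{k-1} e^{-lam*t} (lam*t)^n / n!  (the integral in (1), evaluated).
    import numpy as np
    from math import exp, factorial

    rng = np.random.default_rng(3)
    lam, k, t, n_paths = 2.0, 3, 1.2, 500_000

    T_k = rng.exponential(1.0 / lam, size=(n_paths, k)).sum(axis=1)
    mc_tail = np.mean(T_k > t)
    poisson_tail = sum(exp(-lam * t) * (lam * t) ** n / factorial(n) for n in range(k))

    print("Monte Carlo P(T_k > t):", mc_tail)
    print("Poisson/Erlang tail   :", poisson_tail)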

3. Wiener Process (Brownian Motion)

DEFINITION 2  A Wiener process is defined by four properties:
1. W_0 = 0 a.s.;
2. Independent increments: W_t - W_s is independent of F_s for all s ≤ t;
3. Normality: W_t - W_s ∼ N(0, t-s);
4. W_t is continuous in t a.s.

We could spend the whole semester just talking about this, which we won't. Basically, think of Brownian motion as a random walk in continuous time: the best predictor of dX_t is 0, with Gaussian errors. So clearly, W_t is a particular type of martingale (E[W_t | F_s] = W_s a.s. for all s < t < ∞).

Most commonly you will encounter a Brownian motion with drift, a geometric Brownian motion, or a generic (Ito) diffusion process:

    dX_t = μ dt + σ dW_t,
    dX_t = μ X_t dt + σ X_t dW_t,
    dX_t = μ(X_t) dt + σ(X_t) dW_t.

The geometric Brownian motion simply gives dX_t/X_t = μ dt + σ dW_t, so it is just a Brownian motion with drift in percentage points (in log-points, to be exact, up to the Ito correction to d log X_t implied by the lemma below). In the Ito process, the instantaneous drift and variance depend on the current value of X_t; this is related to the version of Ito's Lemma that we will look at below.

Before we move along, note that both the Poisson process and Brownian motion are Markov processes, but while the Brownian motion has a continuous time path a.s., the Poisson process has a discontinuous time path a.s. Also, the Poisson process was not a martingale, but dN_t - λ dt was.

It will be useful to know the quadratic variation of the Brownian motion; we will use a particular formulation that exploits the CLT in discrete time:

    ⟨W⟩_t ≡ E[W_t²] = lim_{n→∞} ∑_{i=1}^{2^n - 1} [Δ_i W_t]²,    Δ_i W_t ≡ W_{t^n_{i+1}} - W_{t^n_i},

where t^n_i ≡ i t/2^n. This makes the variance of adjacent increments of W equal to t/2^n, so Z_i ≡ 2^{n/2} Δ_i W_t ∼ N(0, t). That is, all the Z_i are normal with variance t. Since

    ∑_{i=1}^{2^n - 1} [Δ_i W_t]² = ∑_{i=1}^{2^n - 1} Z_i² / 2^n,

the term converges to t a.s. by the SLLN: ⟨W⟩_t = E[W_t²] = t. Hence the quadratic variation of a Brownian motion is equal to t. This is an important notion that will help us understand Ito.
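
A sketch of the dyadic-partition computation of the quadratic variation (the horizon t and the refinement levels are arbitrary choices):

    # Sum of squared Brownian increments over dyadic partitions of [0, t]: as n grows,
    # sum_i (Delta_i W_t)^2 should approach t.
    import numpy as np

    rng = np.random.default_rng(4)
    t = 1.7
    for n in (6, 10, 14):
        m = 2 ** n
        dW = rng.normal(0.0, np.sqrt(t / m), size=m)    # increments on 2^n subintervals
        print(f"n = {n:2d}:  sum of squared increments = {np.sum(dW ** 2):.4f}   (target t = {t})")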

Conversely, suppose ΔZ is a random-walk increment that equals ±Δh with probabilities (p, 1-p). That is, ΔZ is Bernoulli, so E[ΔZ] = Δh(2p-1) and V[ΔZ] = 4p(1-p)(Δh)². Now repeat this process n times; this is a Bernoulli process and we can write

    E[ ∑_{i=1}^n ΔZ_i ] = n Δh (2p-1) = T Δh (2p-1)/Δt,
    V[ ∑_{i=1}^n ΔZ_i ] = n (Δh)² 4p(1-p) = T (Δh)² 4p(1-p)/Δt,

where all we have done is to consider that the n trials happened in a time interval T with time increments Δt = T/n. If we want this process to converge to a Wiener process as Δt → 0, n → ∞, we just choose Δh and p so that

    Δh (2p-1)/Δt = μ,    (Δh)² 4p(1-p)/Δt = σ²,

which gives

    Δh = σ √( Δt [1 + (μ/σ)² Δt] ) → σ √Δt   as Δt → 0,
    p, 1-p = (1/2) [ 1 ± (μ/σ) √Δt / √(1 + (μ/σ)² Δt) ].

For the standard BM W_t, μ = 0 and σ = 1, so Δh = √Δt and p = 1/2. So BM can be viewed as the limit of the sum of i.i.d. Bernoulli r.v.'s equal to ±Δh with probabilities 1/2 each:

    W_T = lim_{Δt→0} ∑_{i=1}^{T/Δt} ΔZ_i = lim_{n→∞} ∑_{i=1}^n ΔZ_i,

which converges to N(0, T) since Δt = T/n.
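
A sketch of the binomial approximation with the Δh and p chosen above (μ, σ, T, Δt and the number of paths are arbitrary): the simulated sums should have mean ≈ μT and variance ≈ σ²T.

    # Random walk with steps +/- dh taken with probabilities (p, 1-p), with dh and p chosen
    # so that the sum over [0, T] matches a Brownian motion with drift mu and volatility sigma.
    import numpy as np

    rng = np.random.default_rng(5)
    mu, sigma, T, dt, n_paths = 0.5, 1.2, 2.0, 0.01, 20_000
    n = int(T / dt)

    dh = sigma * np.sqrt(dt * (1 + (mu / sigma) ** 2 * dt))
    p = 0.5 * (1 + (mu / sigma) * np.sqrt(dt) / np.sqrt(1 + (mu / sigma) ** 2 * dt))

    steps = np.where(rng.uniform(size=(n_paths, n)) < p, dh, -dh)
    X_T = steps.sum(axis=1)

    print("mean of X_T:", X_T.mean(), "  target mu*T      =", mu * T)
    print("var  of X_T:", X_T.var(),  "  target sigma^2*T =", sigma ** 2 * T)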

4. Stochastic Integral with BM and CPP

The Ito stochastic integral is defined on a semimartingale (basically, a random walk plus a process with finite variation), where the underlying martingale component has finite quadratic variation. That is,

    X_t = X_0 + B_t + M_t   ⟺   dX_t = dB_t + dM_t,

where E[dM_t | F_t] = 0 (M is a martingale), B_t is adapted to F_t and has finite variation, and ⟨M⟩_t < ∞. For example, if M_t is a BM, ⟨M⟩_t = t. Similarly, if M_t is the compensated Poisson process N_t - λt, ⟨M⟩_t = N_t.

THEOREM 1 (ITO'S LEMMA FOR CONTINUOUS MARTINGALES)  If X_t is continuous and f is a three-times continuously differentiable function, the stochastic integral of f(X_t) is

    f(X_t) = f(X_0) + ∫_0^t f'(X_s) dB_s + ∫_0^t f'(X_s) dM_s + (1/2) ∫_0^t f''(X_s) d⟨M⟩_s,

or

    df(X_t) = f'(X_t) dB_t + f'(X_t) dM_t + (1/2) f''(X_t) d⟨M⟩_t.

Note that this version of Ito does not apply to Poisson. No proof is given, but the intuition is that Ito extends Riemann integrals to stochastic increments:

    ∫_0^t f'(X_s) dX_s = lim_{n→∞} ∑_{i=1}^n f'(X_{t_{i-1}}) [ X_{t_i} - X_{t_{i-1}} ],    {t_i} = Π_n,

where Π_n is an n-partition of [0, t]. It is important that the point of approximation for each interval is taken from the left. Also importantly, the stochastic integral itself is not a deterministic concept: it is the martingale such that its quadratic variation equals the expectation of the square of all realized paths integrated over X_t.

Formally, note that any function of M_t is simply a process Y_t that is adapted to F_t, the filtration of the martingale. Let M_t be square-integrable in the sense that E[M_t²] < ∞. Let Y^(k) be a simple process; that is, for an infinitely fine partition {t_i}_{i=0}^∞ of [0, t] and a countably infinite sequence of random variables {ζ_i^(k)}_{i=0}^∞,

    Y_t^(k) = ζ_0^(k) 1(t = 0) + ∑_{i=1}^∞ ζ_{i-1}^(k) 1(t_{i-1} < t ≤ t_i).

One definition of the stochastic integral of Y_t over M_t is the (unique) square-integrable martingale I_t(Y) such that:

DEFINITION 3 (HEURISTIC DEFINITION OF STOCHASTIC INTEGRAL)  For all sequences of simple processes with lim_{k→∞} [Y^(k) - Y] = 0 in mean square,

    lim_{k→∞} ‖Y^(k) - Y‖² = lim_{k→∞} E[Y^(k) - Y]² = 0,

I(Y) is the martingale s.t.

    lim_{k→∞} ‖I(Y^(k)) - I(Y)‖² = lim_{k→∞} E[ I(Y^(k)) - I(Y) ]² = 0,

where for each Y^(k),

    I_t(Y^(k)) = ∑_{i=1}^∞ ζ_{i-1}^(k) [ M_{t_i} - M_{t_{i-1}} ].

This is just a complicated way of saying the stochastic integral is a Riemann-Stieltjes integral where the measure of integration is stochastic (so we need Lebesgue integration). When it works and when it doesn't, and why it's unique, we won't worry about. Perhaps the most important property of the stochastic integral defined this way is that

    E[I_t(Y) | F_s] = I_s(Y)   and   E[I_t(Y)²] = E[ ∫_0^t Y_s² d⟨M⟩_s ],

where the first part just means it is a martingale, and the second that the square can be taken inside the integral. The problem is that ∑_i |M_{t_i} - M_{t_{i-1}}| blows up as the partition gets finer (the paths have unbounded variation), but Ito tells us that as long as ⟨M⟩_t is bounded we can still define the integral.

[Graphical Representation of Riemann-Stieltjes and Ito]
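
A numerical illustration of the left-point rule (a sketch; t and the grid size are arbitrary). By Theorem 1 with f(x) = x²/2 applied to W, the Ito integral ∫_0^t W_s dW_s equals (W_t² - t)/2; evaluating the integrand at the right endpoint instead shifts the answer by roughly the quadratic variation t.

    # Left-point (Ito) sums sum_i W_{t_{i-1}} (W_{t_i} - W_{t_{i-1}}) vs. right-point sums,
    # compared with the closed form (W_t^2 - t)/2 from Ito's lemma.
    import numpy as np

    rng = np.random.default_rng(6)
    t, m = 1.0, 2 ** 16
    dW = rng.normal(0.0, np.sqrt(t / m), size=m)
    W = np.concatenate(([0.0], np.cumsum(dW)))          # W on the grid, W_0 = 0

    ito_sum = np.sum(W[:-1] * dW)                       # integrand evaluated at the left point
    right_sum = np.sum(W[1:] * dW)                      # not the Ito convention

    print("Ito (left-point) sum:", ito_sum)
    print("(W_t^2 - t)/2       :", (W[-1] ** 2 - t) / 2)
    print("right-point sum     :", right_sum, "  (differs by roughly t =", t, ")")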

Although the above doesn't apply to Poisson, we already know how to write the integral for a CPP. Consider the general jump diffusion process (which is all we're going to deal with, really):

    X_t - X_0 = ∫_0^t μ(s, X_s) ds + ∫_0^t σ(s, X_s) dW_s + Y_t,
    dX_t = μ(t, X_t) dt + σ(t, X_t) dW_t + dY_t,                    (2)

where the three terms play the roles of dB_t, dM_t, and the jumps, respectively, and W_t and Y_t are independent Wiener and compound Poisson processes. Then, since ⟨W⟩_t = t, we have

    f(X_t) - f(X_0) = ∫_0^t μ(s, X_s) f'(X_s) ds + ∫_0^t σ(s, X_s) f'(X_s) dW_s
                      + (1/2) ∫_0^t σ²(s, X_s) f''(X_s) ds + ∫_0^t [ f(X_s) - f(X_{s-}) ] dN_s,

or

    df(X_t) = μ(t, X_t) f'(X_t) dt + σ(t, X_t) f'(X_t) dW_t + (1/2) σ²(t, X_t) f''(X_t) dt
              + [ f(X_t) - f(X_{t-}) ] dN_t.

A note of caution: you cannot just write λ [f(X_t) - f(X_{t-})] dt instead of [f(X_t) - f(X_{t-})] dN_t there, since dN_t = λ dt only in expectation. Without proving it (again!), we can also let λ vary with time and state, in which case the non-homogeneous Poisson process satisfies

    E[ N_t - ∫_0^t λ(s, X_s) ds ] = 0 = E_t[ dN_t - λ(t, X_t) dt ].

Intuitively, we can always reset the underlying Poisson process following any hit until the next hit arrives, during which the process remains "homogeneous."
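
Before moving to the HJB, here is a minimal Euler-type sketch of how one might simulate a process like (2). The drift, volatility, jump distribution and all parameters are arbitrary examples (and the rate is kept constant); at most one jump is allowed per time step, which has probability λdt + o(dt).

    # dX = mu(t,X) dt + sigma(t,X) dW + dY, with Y a compound Poisson process:
    # jumps arrive at rate lam and have sizes Z ~ G_z (here a normal, purely for illustration).
    import numpy as np

    rng = np.random.default_rng(7)
    T, dt, lam = 1.0, 1e-3, 0.8
    n_steps = int(T / dt)

    mu = lambda t, x: 0.2 - 0.5 * x          # example drift
    sigma = lambda t, x: 0.3                 # example volatility
    draw_Z = lambda: rng.normal(0.0, 0.5)    # example jump-size distribution G_z

    x, t = 0.0, 0.0
    path = [x]
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt))
        jump = draw_Z() if rng.random() < lam * dt else 0.0   # dN_t is 1 w.p. ~ lam*dt
        x += mu(t, x) * dt + sigma(t, x) * dW + jump
        t += dt
        path.append(x)

    print("X_T =", path[-1], " after", n_steps, "steps")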

5. Stochastic HJB

Let x_t denote realizations from the stochastic process X_t that follows the jump diffusion process (2). Now consider the stochastic control problem

    v(t, x_t, a_t) = max_{(c_s)} E_t [ ∫_t^T U(s, X_s, a_s, c_s) ds ]
    s.t.  da_s = f(s, x_s, a_s, c_s) ds,

where T can be finite or infinite, and I have suppressed the scrap value Z(T, a_T, X_T), which may or may not be there. Except for x_t, all that has changed from the deterministic control problem is that we have added an expectation operator over the objective. But let us generalize the jumps a bit. Let Y_t be associated with a non-homogeneous Poisson process with time- and state-dependent arrival rate λ(t, x_t), and also have jumps Z_k drawn from a time- and state-dependent measure G_z(z; t, x_t). For example, X_t can be a random wage or dividend process, or an interest rate process (in which case things can be simplified, since it would multiply the state a_s; likewise if we wanted a stochastic discount rate, in which case it would show up multiplicatively in U).

Without going into the details, we can use similar methods as in deterministic control (stochastic versions of the Taylor expansion and the verification theorem) to show that the following heuristic method works:

    0 = max_{c_t} { U(t, a_t, c_t) dt + E_t[ dV(t, x_t, a_t) ] },

where by Ito we have

    dV = [ V_t + V_a f(t, x_t, a_t, c_t) + V_x μ(t, x_t, a_t) + (V_xx/2) σ²(t, x_t, a_t) ] dt + crap
         + λ(t, x_t) [ E_t V(t, x_t + Z_k, a_t) - V(t, x_t, a_t) ] dt,

where we have set X_t = x_t a.s., since it is already realized, and have allowed μ and σ to also depend on a. The crap term has zero expectation,

    E_t[crap] = E_t[ σ(t, x_t, a_t) V_x dW_t ]
                + E_t{ [ V(t, x_t + Z_k, a) - V(t, x_t, a) ] [ dN_t - λ(t, x_t) dt ] } = 0,

the latter since Z_k is independent of N_t. So we get the HJB equation

    -V_t(t, x, a) = H(t, x, a, V_a(t, x, a)) + V_x(t, x, a) μ(t, x, a) + (V_xx(t, x, a)/2) σ²(t, x, a)
                    + λ(t, x) [ ∫ V(t, x + z, a) dG_z(z; t, x) - V(t, x, a) ],

so when U = e^{-ρt} u, we can multiply the whole system by e^{ρt} and define v(t, x, a) ≡ e^{ρt} V(t, x, a) to obtain

    ρ v(t, x, a) = v_t(t, x, a) + Ĥ(t, x, a, v_a(t, x, a)) + v_x(t, x, a) μ(t, x, a) + (v_xx(t, x, a)/2) σ²(t, x, a)
                   + λ(t, x) [ ∫ v(t, x + z, a) dG_z(z; t, x) - v(t, x, a) ].

Note that the only time the expectation operator comes in is for the (possibly) stochastic r.v. Z_k; everything else is adapted to F_t. That is, for the HJB there are no longer any expectations taken over X_t; all of that is washed out in continuous time with martingales. Since the Hamiltonians are the same as in the deterministic case, it follows that the f.o.c. holds deterministically in continuous time, that is,

    u_c(t, x, a, c) + v_a(t, x, a) f_c(t, x, a, c) = 0

(no expectations over v or v_a!).
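
To see how the deterministic f.o.c. is used in practice, here is a tiny sketch for one common special case that is my own illustration, not part of the notes: CRRA utility u(c) = c^{1-γ}/(1-γ) and a savings constraint f(t, x, a, c) = ra + x - c, so that f_c = -1 and the f.o.c. collapses to c = v_a^{-1/γ}.

    # Invert the f.o.c. u_c + v_a * f_c = 0 state by state.  With CRRA utility and f_c = -1
    # (both assumptions for this example), u'(c) = c^(-gamma) = v_a, so c = v_a^(-1/gamma).
    import numpy as np

    gamma = 2.0                                   # example risk-aversion parameter

    def consumption_from_foc(v_a):
        """Optimal consumption given the marginal value of wealth v_a > 0."""
        return np.asarray(v_a, dtype=float) ** (-1.0 / gamma)

    v_a_grid = np.linspace(0.5, 5.0, 5)           # hypothetical values of v_a on a grid
    print(consumption_from_foc(v_a_grid))         # decreasing in v_a, no expectations anywhere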

6. Fokker-Planck (Kolmogorov Forward) Equation

The last tool that will be relevant for our purposes is the KFE. Given a solution for v(t, x, a), we want to understand the evolution of p(t, x, a), the population p.d.f. over (x, a) at time t. The KFE gives us a (partial) differential equation that does exactly this. Formally, the KFE tells you: suppose that at time t you know P{(x, a) ∈ B} for all B ∈ F_t. How does P(B) evolve going forward in the filtration (for the same set B)? For example, in the savings problem a solution for v admits optimal policy functions c*(t, x, a) and associated changes in assets ȧ*(t, x, a). Now suppose that at time t the p.d.f. is represented by p(t, x, a). The KFE tells us how the distribution evolves going forward. (Conversely, Feynman-Kac, or the Kolmogorov Backward Equation, tells us how you would have gotten to p(t, x, a) going backward; but we are not so interested in this.) To compare with discrete time, it is as if we are simulating a distribution of individuals starting from some given initial distribution. The following is a version of Fokker-Planck:

THEOREM 2 (FOKKER-PLANCK-KOLMOGOROV)  Let X_t be a stochastic process as in (2), where Y_t is a CPP with jumps Z_k ∼ G_z(z; t, X_t) associated with a non-homogeneous Poisson process with rate λ(t, X_t). Let p(t, x) denote the p.d.f. of x at time t. Then for all x ∈ (x_min, x_max) (the interior of possibly realizable states),

    ∂p(t, x)/∂t = -∂/∂x [ μ(t, x) p(t, x) ] + (1/2) ∂²/∂x² [ σ²(t, x) p(t, x) ]
                  - λ(t, x) p(t, x) + ∫ g_z(x - x'; t, x') λ(t, x') p(t, x') dx'.
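
As a numerical sanity check of the theorem (restricted to the diffusion part, i.e. λ = 0, purely for brevity), the sketch below iterates the KFE for an Ornstein-Uhlenbeck process dX = -θX dt + σ dW with an explicit finite-difference scheme and compares the long-run density with the known stationary N(0, σ²/(2θ)). The grid, parameters and scheme are all illustrative choices.

    # Explicit finite-difference iteration of  dp/dt = -d/dx[mu(x) p] + (1/2) d^2/dx^2[sigma^2 p]
    # for the OU process mu(x) = -theta*x with constant sigma; compare with the stationary Gaussian.
    import numpy as np

    theta, sigma = 1.0, 0.5
    x = np.linspace(-3.0, 3.0, 121)
    dx = x[1] - x[0]
    dt, T = 0.002, 5.0
    mu = -theta * x
    s2 = np.full_like(x, sigma ** 2)

    p = np.exp(-x ** 2 / 2) / np.sqrt(2 * np.pi)       # arbitrary initial density
    p /= p.sum() * dx

    for _ in range(int(T / dt)):
        adv = np.zeros_like(p)                         # d/dx [mu p], central differences
        dif = np.zeros_like(p)                         # d^2/dx^2 [sigma^2 p]
        adv[1:-1] = (mu[2:] * p[2:] - mu[:-2] * p[:-2]) / (2 * dx)
        dif[1:-1] = (s2[2:] * p[2:] - 2 * s2[1:-1] * p[1:-1] + s2[:-2] * p[:-2]) / dx ** 2
        p = p + dt * (-adv + 0.5 * dif)
        p = np.clip(p, 0.0, None)
        p /= p.sum() * dx                              # keep p a proper density

    var_stat = sigma ** 2 / (2 * theta)
    p_stat = np.exp(-x ** 2 / (2 * var_stat)) / np.sqrt(2 * np.pi * var_stat)
    print("variance under p:", np.sum(x ** 2 * p) * dx, "  (stationary:", var_stat, ")")
    print("max |p - p_stat|:", np.abs(p - p_stat)[1:-1].max(), " (discretization error)")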

Proof (heuristic): The trick is to use a function that is differentiable, so we can apply Ito. For any x ∈ S (the range of X_t), approximate the probability of the event by the expectation of a smooth function:

    P(t, x) ≡ P(X_t ≤ x) = ∫_{-∞}^x p(t, x') dx' = E[ χ(X_t ≤ x) ] = ∫ χ(x' ≤ x) p(t, x') dx'.

With some abuse of notation, we will assume that the indicator is already smoothed (it does not matter how it is smoothed; convolving it with any mollifier will do). Using Ito, we obtain (the derivative is w.r.t. time):

    dP(t, x) = E[ χ'(X_t ≤ x) μ(t, X_t) + (1/2) χ''(X_t ≤ x) σ²(t, X_t) ] dt + crap        (3)
               + E{ λ(t, X_t) [ χ(X_t + Z_k ≤ x) - χ(X_t ≤ x) ] } dt,

where crap again has zero expectation. Note that the derivatives of χ are w.r.t. X_t, not x.

First look at the Poisson part. To compute the expectation over X_t, we denote the variable of integration by X_t = x', and remember that X_t = X_{t-} = x' a.s.:

    ∫ λ(t, x') [ ∫_z χ(x' + z ≤ x) dG_z(z; t, x') - χ(x' ≤ x) ] p(t, x') dx'
    = ∫ λ(t, x') G_z(x - x'; t, x') p(t, x') dx' - ∫_{-∞}^x λ(t, x') p(t, x') dx'.

Note that this is "as if" we were looking "backward," not forward like we did in the HJB. This is because we are looking at all the X_t while they are still random variables, not a realized point like in the HJB. (The density functions are deterministic in x', not in X_t = x'.)

For the diffusion part, we can integrate by parts:

    ∫ χ'(x' ≤ x) μ(t, x') p(t, x') dx' = B_1(x) - ∫ χ(x' ≤ x) ∂/∂x' [ μ(t, x') p(t, x') ] dx',

where B_1 is a term determined by boundary conditions:

    B_1(x) ≡ χ(x_max ≤ x) μ(t, x_max) p(t, x_max) - χ(x_min ≤ x) μ(t, x_min) p(t, x_min).

And likewise

    (1/2) ∫ χ''(x' ≤ x) σ²(t, x') p(t, x') dx'
      = B_2(x) - (1/2) ∫ χ'(x' ≤ x) ∂/∂x' [ σ²(t, x') p(t, x') ] dx'
      = B_2(x) - B_3(x) + (1/2) ∫ χ(x' ≤ x) ∂²/∂x'² [ σ²(t, x') p(t, x') ] dx',

where the B_j are determined by boundary conditions:

    B_2(x) ≡ (1/2) [ χ'(x_max ≤ x) σ²(t, x_max) p(t, x_max) - χ'(x_min ≤ x) σ²(t, x_min) p(t, x_min) ],
    B_3(x) ≡ (1/2) [ χ(x_max ≤ x) ∂/∂x'[ σ²(t, x') p(t, x') ]|_{x'=x_max} - χ(x_min ≤ x) ∂/∂x'[ σ²(t, x') p(t, x') ]|_{x'=x_min} ].

So we have obtained that (3) becomes

    dP(t, x) = ∫ χ(x' ≤ x) { -∂/∂x' [ μ(t, x') p(t, x') ] + (1/2) ∂²/∂x'² [ σ²(t, x') p(t, x') ] - λ(t, x') p(t, x') } dx'      (4)
               + ∫ λ(t, x') G_z(x - x'; t, x') p(t, x') dx' + B_1(x) + B_2(x) - B_3(x).

Note that the B_j(x) do not vary with x except at the boundaries. So, taking the derivative w.r.t. x on both sides of (4), we obtain the formula in the theorem.

Of course, in our economics problems we typically want p(t, x, a), not p(t, x). But most problems will assume a law of motion s.t. x is subsumed in a, as we will see later.
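
A quick numerical check of the integration-by-parts step, with concrete stand-ins for χ, μ and p (all of them my own illustrative choices): since the example density vanishes at the ends of the grid, the boundary term B_1 drops out and the two sides agree.

    # Check  int chi'(x' <= x) mu(x') p(x') dx'  =  - int chi(x' <= x) d/dx'[mu(x') p(x')] dx'
    # numerically, with a smoothed indicator chi and a density p that vanishes at the boundary.
    import numpy as np

    x = 0.3                                          # point at which the CDF is evaluated
    xp = np.linspace(-8.0, 8.0, 4001)                # grid for the integration variable x'
    dx = xp[1] - xp[0]

    chi = 1.0 / (1.0 + np.exp(-(x - xp) * 20))       # smoothed indicator of {x' <= x}
    mu = -xp                                         # example drift
    p = np.exp(-xp ** 2 / 2) / np.sqrt(2 * np.pi)    # example density, ~0 at the grid ends

    ddx = lambda f: np.gradient(f, dx)               # derivative with respect to x'
    lhs = np.sum(ddx(chi) * mu * p) * dx
    rhs = -np.sum(chi * ddx(mu * p)) * dx            # B_1(x) = 0 here because p ~ 0 at the ends

    print("lhs:", lhs, " rhs:", rhs)                 # should agree up to small numerical error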
