Kolmogorov Equations and Markov Processes

May 3, 2013

1 Transition measures and functions

Consider a stochastic process {X(t)}_{t≥0} whose state space is a product of intervals contained in ℝⁿ. We define the transition probability measure at time t, from state x at time s < t, by the formula

    P(A, t; x, s) = P[X(t) ∈ A | X(s) = x],

where A is a Borel subset of ℝⁿ. Since the probability that X(s) = x may be zero, the right hand side may require a generalized interpretation of conditional probability,¹ and so

    P(A, t; x, s) = E[1_A(X(t)) | X(s) = x] = E_{s,x}[1_A(X(t))].    (1)

If this probability measure has a density, it will be denoted by p(y, t; x, s) and will be called the transition PDF from state x at time s to state y at time t. In other words,

    p(y, t; x, s) = f_{X(t)|X(s)}(y | x),

where the notation on the right hand side indicates a conditional probability density function.² We will refer to the first two and the last two variables of p, respectively, as the forward variables and the backward variables. It is obvious that

    P(A, t; x, s) = ∫_A p(y, t; x, s) dy.

¹ If E[Z | Y] = g(Y), then E[Z | Y = y] := g(y). The latter conditional expectation can be defined via a conditional density and thus used to define the former, or vice versa.
² If U, V are random variables, the conditional density f_{U|V}(u | v) of U given that V = v is the joint density of (U, V) divided by the density of the known random variable V evaluated at v, provided that the divisor is different from zero. If U is ℝᵏ-valued, then the marginal PDF f_V of f_{(U,V)} is simply the integral of the joint density over ℝᵏ with respect to u.
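For intuition, both P(A, t; x, s) and p(y, t; x, s) can be approximated by simulation. Below is a minimal Monte Carlo sketch (not part of the notes), assuming an Euler-Maruyama discretization of a one-dimensional Itô diffusion dX = µ(X) dt + σ(X) dW; the coefficients, starting point and set A are illustrative assumptions.

```python
# Minimal Monte Carlo sketch: estimate the transition probability P(A, t; x, s)
# and a histogram approximation of the transition density p(., t; x, s) for a
# one-dimensional Ito diffusion dX = mu(X) dt + sigma(X) dW, simulated with an
# Euler-Maruyama scheme. The coefficients below are illustrative assumptions.
import numpy as np

def simulate_endpoints(mu, sigma, x, s, t, n_paths=100_000, n_steps=500, seed=0):
    """Simulate X(t) given X(s) = x for n_paths independent paths."""
    rng = np.random.default_rng(seed)
    dt = (t - s) / n_steps
    X = np.full(n_paths, float(x))
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), size=n_paths)
        X = X + mu(X) * dt + sigma(X) * dW
    return X

# Illustrative coefficients (an assumption, not from the notes).
mu = lambda x: -x
sigma = lambda x: 0.5 * np.ones_like(x)

X_t = simulate_endpoints(mu, sigma, x=1.0, s=0.0, t=1.0)

# P(A, t; x, s) for A = (0, 2): the fraction of endpoints landing in A.
P_A = np.mean((X_t > 0.0) & (X_t < 2.0))
print(f"P((0,2), 1; 1, 0) ~ {P_A:.4f}")

# Histogram approximation of the transition density p(y, 1; 1, 0).
density, edges = np.histogram(X_t, bins=60, density=True)
i = np.searchsorted(edges, 0.0) - 1
print(f"p(0, 1; 1, 0) ~ {density[i]:.4f}")
```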

2 Kolmogorov's Backward Equation

In this section we will be assuming that X is an Itô diffusion with the infinitesimal operator L. The name "Kolmogorov's backward equation" is used in connection with two closely related PDEs.

The first form of the Kolmogorov backward equation is satisfied by the transition probability measure regarded as a function of the backward variables x and s. Let a Borel set A ⊂ ℝⁿ be fixed. Then

    ∂P/∂s(A, t; x, s) + L P(A, t; x, s) = 0,   for (s, x) ∈ (0, t) × ℝⁿ,
    P(A, t; x, t) = 1_A(x).

We obtain the equation by a direct application of the Feynman-Kac Theorem with r = 0, Ψ = 0 and Φ = 1_A.

The second form of the Kolmogorov backward equation can be derived similarly, but this time it is satisfied by the density p(y, t; x, s) with respect to the backward variables:

    ∂p/∂s(y, t; x, s) + L p(y, t; x, s) = 0,   for (s, x) ∈ (0, t) × ℝⁿ,
    p(y, t; x, s) → δ_y(x)   as s ↑ t.

The last condition means that for any bounded continuous function f we have

    lim_{s↑t} ∫_{ℝⁿ} p(y, t; x, s) f(x) dx = f(y).
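As a concrete check of the first form (a sketch, not from the notes): for standard Brownian motion L = (1/2) ∂²/∂x², and for A = (a, b) the transition probability has the closed form P(A, t; x, s) = N[(b − x)/√(t − s)] − N[(a − x)/√(t − s)]; this formula is a standard fact, not taken from the notes. The backward-equation residual is evaluated below at arbitrary sample values.

```python
# Sanity check of the backward equation for standard Brownian motion
# (mu = 0, sigma = 1), where L = (1/2) d^2/dx^2 and, for A = (a, b),
#     P(A, t; x, s) = N[(b - x)/sqrt(t - s)] - N[(a - x)/sqrt(t - s)].
# Illustrative sketch; not part of the original notes.
import sympy as sp

x, s, t, a, b = sp.symbols('x s t a b', real=True)
N = lambda z: (1 + sp.erf(z / sp.sqrt(2))) / 2        # standard normal CDF

P = N((b - x) / sp.sqrt(t - s)) - N((a - x) / sp.sqrt(t - s))

# Backward-equation residual  dP/ds + (1/2) d^2P/dx^2.
residual = sp.diff(P, s) + sp.Rational(1, 2) * sp.diff(P, x, 2)

# Evaluate at arbitrary sample values with s < t; the result should be ~ 0.
print(float(residual.subs({x: 0.3, s: 0.2, t: 1.5, a: -1.0, b: 2.0})))
```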

3 Kolmogorov's Forward Equation

Recall that the infinitesimal operator L associated with the SDE

    dX = µ dt + σ dW

is given by the formula

    Lg = Σ_i µ_i ∂g/∂x_i + (1/2) Σ_{j,k} c_{jk} ∂²g/(∂x_j ∂x_k),   where [c_{jk}] = σσᵀ.

Its adjoint is the operator

    L*g = − Σ_i ∂/∂x_i (µ_i g) + (1/2) Σ_{j,k} ∂²/(∂x_j ∂x_k) (c_{jk} g).

Indeed, for smooth functions g, h vanishing at infinity together with their partial derivatives, integration by parts gives

    ⟨Lg, h⟩_{L²} = ∫ (Lg) h dx = ∫ g (L*h) dx = ⟨g, L*h⟩_{L²}.

Let T > 0. The Kolmogorov forward equation is the following PDE with respect to the forward variables of the density p(y, t; x, s):

    ∂p/∂t(y, t; x, s) − L* p(y, t; x, s) = 0,   for (t, y) ∈ (s, T) × ℝⁿ,
    p(y, t; x, s) → δ_x(y)   as t ↓ s,

where L* acts in the y variables. The last condition means that for any bounded continuous function f we have

    lim_{t↓s} ∫_{ℝⁿ} p(y, t; x, s) f(y) dy = f(x).
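A quick numerical illustration of the adjoint relationship (a sketch, not part of the notes): in one dimension Lg = µ g′ + (1/2) c g″ and L*h = −(µh)′ + (1/2)(c h)″, and for smooth, rapidly decaying functions the inner products ⟨Lg, h⟩ and ⟨g, L*h⟩ should agree. The coefficients and test functions below are illustrative assumptions.

```python
# Numerical check of <Lg, h> = <g, L*h> in one dimension. The coefficients
# mu and c = sigma^2, and the test functions g, h, are illustrative choices,
# smooth and rapidly decaying at infinity.
import numpy as np

xg = np.linspace(-10.0, 10.0, 4001)
dx = xg[1] - xg[0]

mu = 1.0 - xg                        # drift coefficient mu(x)
c = 1.0 + 0.5 * np.sin(xg) ** 2      # c(x) = sigma(x)^2 > 0

g = np.exp(-xg ** 2)                 # test functions vanishing at infinity
h = xg * np.exp(-xg ** 2 / 2)

d1 = lambda f: np.gradient(f, dx)                 # first derivative on the grid
d2 = lambda f: np.gradient(np.gradient(f, dx), dx)

Lg = mu * d1(g) + 0.5 * c * d2(g)                 # Lg  =  mu g' + (1/2) c g''
Lstar_h = -d1(mu * h) + 0.5 * d2(c * h)           # L*h = -(mu h)' + (1/2)(c h)''

lhs = np.sum(Lg * h) * dx                         # <Lg, h>
rhs = np.sum(g * Lstar_h) * dx                    # <g, L*h>
print(lhs, rhs)                                   # agree up to discretization error
```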

Example 1: Let us consider a Wiener process with a drift, that is, a solution of the SDE

    dX(t) = µ dt + σ dW(t),

where µ and σ > 0 are constants. Then, if X(s) = x and t > s, we have

    X(t) = x + µ(t − s) + σ (W(t) − W(s)).

Since

    p(y, t; x, s) = ∂/∂y P[X(t) ≤ y | X(s) = x],

we have to calculate P[X(t) ≤ y | X(s) = x] explicitly. Now,

    P[X(t) ≤ y | X(s) = x] = P[ (W(t) − W(s))/√(t − s) ≤ (y − x − µ(t − s))/(σ√(t − s)) ] = N[c(y)],

where c(y) = (y − x − µ(t − s))/(σ√(t − s)) and, as usual, N[·] denotes the cumulative distribution function of the standard normal distribution. Since

    N′(z) = (1/√(2π)) exp(−z²/2),

we have

    p(y, t; x, s) = 1/(σ√(2π(t − s))) exp( −(y − x − µ(t − s))²/(2σ²(t − s)) ).

Kolmogorov's forward equation for this process is

    ∂p/∂t(y, t; x, s) + µ ∂p/∂y(y, t; x, s) − (σ²/2) ∂²p/∂y²(y, t; x, s) = 0.
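The Gaussian density just derived can be checked symbolically against this forward equation. The sketch below is an illustration (not part of the notes); the sample values at the end are arbitrary.

```python
# Check that the Gaussian transition density of Example 1 satisfies
#     dp/dt + mu dp/dy - (sigma^2/2) d^2p/dy^2 = 0.
# Illustrative sketch; the sample values used at the end are arbitrary.
import sympy as sp

y, t, x, s = sp.symbols('y t x s', real=True)
mu = sp.Symbol('mu', real=True)
sigma = sp.Symbol('sigma', positive=True)

p = sp.exp(-(y - x - mu * (t - s)) ** 2 / (2 * sigma ** 2 * (t - s))) \
    / (sigma * sp.sqrt(2 * sp.pi * (t - s)))

residual = sp.diff(p, t) + mu * sp.diff(p, y) \
    - sp.Rational(1, 2) * sigma ** 2 * sp.diff(p, y, 2)

# Evaluate at arbitrary values with t > s; the result should be ~ 0.
vals = {y: 0.7, t: 2.0, x: 0.1, s: 0.5, mu: 0.3, sigma: 0.8}
print(float(residual.subs(vals)))
```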

4 Markov processes

If X is a stochastic process with values in ℝⁿ, let F_t denote the information generated by the process X during the time interval [0, t]. We say that X has the Markov property if for any s ∈ [0, t] and any bounded Borel function f,

    E[f(X(t)) | F_s] = E[f(X(t)) | X(s)].³

Equivalently, for any Borel set A ⊂ ℝⁿ,

    P[X(t) ∈ A | F_s] = P[X(t) ∈ A | X(s)].

If n = 1, an even simpler formulation is the following:

    P[X(t) ≤ y | X(s) = x, X(s_n) = x_n, ..., X(s_0) = x_0] = P[X(t) ≤ y | X(s) = x],    (2)

where 0 ≤ s_0 < ... < s_n < s and y ∈ ℝ. If a process X has the Markov property, we say that it is a Markov process.

Markov processes satisfy the following Chapman-Kolmogorov equation:

    p(z, u; x, s) = ∫ p(z, u; y, t) p(y, t; x, s) dy,   s < t < u.

Proof: We have

    p(z, u; x, s) = f_{X(u)|X(s)}(z | x)
                  = f_{X(u),X(s)}(z, x) / f_{X(s)}(x)
                  = ∫ f_{X(u),X(t),X(s)}(z, y, x) dy / f_{X(s)}(x),   since f_{X(u),X(s)} is a marginal of the joint density of (X(u), X(t), X(s)),
                  = ∫ f_{(X(u),X(t))|X(s)}((z, y) | x) dy
                  = ∫ f_{X(u)|(X(t),X(s))}(z | (y, x)) f_{X(t)|X(s)}(y | x) dy,   by the definition of conditional densities,
                  = ∫ f_{X(u)|X(t)}(z | y) f_{X(t)|X(s)}(y | x) dy,   by (2),
                  = ∫ p(z, u; y, t) p(y, t; x, s) dy.

It can be shown that all Itô diffusions are Markov processes (provided that the usual existence and uniqueness conditions for the underlying SDE are satisfied). For some basic types of processes, this can be checked directly.

³ More generally, the definition is used with an arbitrary filtration to which X is adapted.
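For the drifted Wiener process of Example 1, whose transition density is Gaussian, the Chapman-Kolmogorov equation can be verified numerically by quadrature. The sketch below is an illustration (not part of the notes); the drift, volatility, time points and grid are arbitrary choices.

```python
# Numerical check of the Chapman-Kolmogorov equation for the drifted Wiener
# process of Example 1, whose transition density is Gaussian:
#     p(y, t; x, s) = exp(-(y - x - mu(t-s))^2 / (2 sigma^2 (t-s)))
#                     / (sigma sqrt(2 pi (t-s))).
# Illustrative sketch with arbitrary parameter values.
import numpy as np

mu, sigma = 0.3, 0.8

def p(y, t, x, s):
    var = sigma ** 2 * (t - s)
    return np.exp(-(y - x - mu * (t - s)) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

x, s, t, u, z = 0.0, 0.0, 1.0, 2.5, 1.2     # s < t < u

lhs = p(z, u, x, s)                          # direct transition density

y_grid = np.linspace(-15.0, 15.0, 20001)     # integrate out the intermediate state
dy = y_grid[1] - y_grid[0]
rhs = np.sum(p(z, u, y_grid, t) * p(y_grid, t, x, s)) * dy

print(lhs, rhs)                              # should agree to several decimals
```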

Example 2: Consider a Geometric Brownian Motion (or GBM):

    dX(t) = µ X(t) dt + σ X(t) dW(t),

where µ and σ > 0 are constants. If X(s) = x, then due to uniqueness of solutions of SDEs,

    X(t) = x exp[ (µ − σ²/2)(t − s) + σ (W(t) − W(s)) ],   t ≥ s.    (3)

In particular, the condition X(s) = x determines the CDF of X(t), hence implying (2).

Example 3: Using (3) we can easily calculate the transition density function for GBM. Similarly to what was done in Example 1, we have to calculate P[X(t) ≤ y | X(s) = x] explicitly. We have

    P[X(t) ≤ y | X(s) = x] = P[ x exp( (µ − σ²/2)(t − s) + σ (W(t) − W(s)) ) ≤ y ]
                           = P[ (W(t) − W(s))/√(t − s) ≤ (ln(y/x) − (µ − σ²/2)(t − s))/(σ√(t − s)) ]
                           = N[d(y)],

where d(y) = (ln(y/x) − (µ − σ²/2)(t − s))/(σ√(t − s)) and N denotes the cumulative distribution function of the standard normal distribution. Since d′(y) = 1/(yσ√(t − s)), we conclude that

    p(y, t; x, s) = ∂/∂y P[X(t) ≤ y | X(s) = x] = 1/(yσ√(2π(t − s))) exp(−d(y)²/2).

© Maciej Klimek 2013
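As a final illustration appended here (not part of the original notes), the explicit solution (3) makes it easy to check Example 3 by Monte Carlo: sampled GBM endpoints should reproduce N[d(y)] as their conditional CDF. The parameter values below are arbitrary, and scipy is assumed to be available for the normal CDF.

```python
# Monte Carlo sketch: sample GBM endpoints via the explicit solution (3) and
# compare the empirical conditional CDF at a few points y with N[d(y)] from
# Example 3. Parameter values are arbitrary.
import numpy as np
from scipy.stats import norm

mu, sigma = 0.1, 0.4
x, s, t = 1.0, 0.0, 2.0

rng = np.random.default_rng(1)
Z = rng.standard_normal(1_000_000)
X_t = x * np.exp((mu - 0.5 * sigma ** 2) * (t - s) + sigma * np.sqrt(t - s) * Z)

def d(y):
    return (np.log(y / x) - (mu - 0.5 * sigma ** 2) * (t - s)) / (sigma * np.sqrt(t - s))

for y in (0.8, 1.0, 1.5, 2.5):
    empirical = np.mean(X_t <= y)      # P[X(t) <= y | X(s) = x] from samples
    analytic = norm.cdf(d(y))          # N[d(y)] from Example 3
    print(f"y = {y}: empirical {empirical:.4f} vs analytic {analytic:.4f}")
```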