Kolmogorov Equations and Markov Processes

May 3, 2013

1 Transition measures and functions

Consider a stochastic process {X(t)}_{t ≥ 0} whose state space is a product of intervals contained in R^n. We define the transition probability measure at time t, from state x at time s < t, by the formula

    P(A, t; x, s) = P[X(t) ∈ A | X(s) = x],

where A is a Borel subset of R^n. Since the probability that X(s) = x may be zero, the right-hand side may require a generalized interpretation of conditional probability¹, and so

    P(A, t; x, s) = E[1_A(X(t)) | X(s) = x] = E^{s,x}[1_A(X(t))].    (1)

¹ If E[Z | Y] = g(Y), then E[Z | Y = y] = g(y). The latter conditional expectation can be defined via conditional densities and thus used to define the former, or vice versa. If U, V are random variables, the conditional density f_{U|V}(u | v) of U given that V = v is the joint density of (U, V) divided by the density of the conditioning random variable V evaluated at v, provided that the divisor is different from zero. If U is R^k-valued, then the marginal PDF f_V of f_{(U,V)} is simply the integral of the joint density over R^k with respect to u.

If this probability measure has a density, it will be denoted by p(y, t; x, s) and will be called the transition PDF from state x at time s to state y at time t. In other words,

    p(y, t; x, s) = f_{X(t)|X(s)}(y | x),

where the notation on the right-hand side indicates a conditional probability density function. We will refer to the first two and the last two variables of
p, respectively, as the forward variables and the backward variables. It is obvious that

    P(A, t; x, s) = ∫_A p(y, t; x, s) dy.

2 Kolmogorov's Backward Equation

In this section we will be assuming that X is an Itô diffusion with the infinitesimal operator L. The name "Kolmogorov's backward equation" is used in connection with two closely related PDEs. The first form of the Kolmogorov backward equation is satisfied by the transition probability measure regarded as a function of the backward variables x and s. Let a Borel set A ⊂ R^n be fixed. Then

    ∂P/∂s (A, t; x, s) + L P(A, t; x, s) = 0,    for (s, x) ∈ (0, t) × R^n,
    P(A, t; x, t) = 1_A(x).

We obtain the equation by a direct application of the Feynman-Kac Theorem with r = 0, Ψ = 0 and Φ = 1_A. The second form of the Kolmogorov backward equation can be derived similarly, but this time it is satisfied by the density p(y, t; x, s) with respect to the backward variables:

    ∂p/∂s (y, t; x, s) + L p(y, t; x, s) = 0,    for (s, x) ∈ (0, t) × R^n,
    p(y, t; x, s) → δ_y(x)  as s → t.

The last condition means that for any bounded continuous function f we have

    lim_{s→t} ∫_{R^n} p(y, t; x, s) f(x) dx = f(y).
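The backward equation can be sanity-checked numerically in a concrete case. For Brownian motion with drift (the process treated in Example 1 below), the transition density is the Gaussian p(y, t; x, s) = exp(−(y − x − µ(t − s))²/(2σ²(t − s))) / (σ√(2π(t − s))), and the generator is Lg = µ g' + (σ²/2) g''. The sketch below verifies by finite differences that this density satisfies ∂p/∂s + L p = 0 in the backward variables (s, x); it is an illustrative addition, not part of the original notes, and the parameter values and function names are arbitrary choices.

```python
# Finite-difference check that the Gaussian transition density of
# Brownian motion with drift satisfies the backward equation
#     dp/ds + mu dp/dx + (sigma^2/2) d^2p/dx^2 = 0.
# Illustrative sketch; MU, SIGMA and the evaluation point are arbitrary.
import math

MU, SIGMA = 0.3, 0.8

def p(y, t, x, s):
    """Gaussian transition density of dX = MU dt + SIGMA dW."""
    var = SIGMA**2 * (t - s)
    return math.exp(-(y - x - MU*(t - s))**2 / (2*var)) / math.sqrt(2*math.pi*var)

def backward_residual(y, t, x, s, h=1e-4):
    """Central-difference approximation of dp/ds + L p in (s, x)."""
    dp_ds = (p(y, t, x, s + h) - p(y, t, x, s - h)) / (2*h)
    dp_dx = (p(y, t, x + h, s) - p(y, t, x - h, s)) / (2*h)
    d2p_dx2 = (p(y, t, x + h, s) - 2*p(y, t, x, s) + p(y, t, x - h, s)) / h**2
    return dp_ds + MU*dp_dx + 0.5*SIGMA**2*d2p_dx2

residual = backward_residual(1.0, 2.0, 0.5, 1.0)
print(abs(residual))  # close to 0 (only finite-difference error remains)
```

The residual is zero up to discretization error, which is what the backward equation asserts for this density.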
3 Kolmogorov's Forward Equation

Recall that the infinitesimal operator L associated with the SDE

    dX = µ dt + σ dW

is given by the formula

    Lg = Σ_i µ_i ∂g/∂x_i + (1/2) Σ_{j,k} c_{jk} ∂²g/∂x_j∂x_k,    where [c_{jk}] = σσ^T.

Its adjoint is the operator

    L*g = − Σ_i ∂(µ_i g)/∂x_i + (1/2) Σ_{j,k} ∂²(c_{jk} g)/∂x_j∂x_k.

Indeed, for smooth functions g, h vanishing at infinity together with their partial derivatives,

    ⟨Lg, h⟩ = ∫ (Lg) h dx = ∫ g (L*h) dx = ⟨g, L*h⟩,

where the middle equality follows by integration by parts.

Let T > 0. The Kolmogorov forward equation is the following PDE with respect to the forward variables of the density p(y, t; x, s):

    ∂p/∂t (y, t; x, s) − L* p(y, t; x, s) = 0,    for (t, y) ∈ (0, T) × R^n,
    p(y, t; x, s) → δ_x(y)  as t → s,

where L* acts on the forward space variable y. The last condition means that for any bounded continuous function f we have

    lim_{t→s} ∫_{R^n} p(y, t; x, s) f(y) dy = f(x).

Example 1: Let us consider a Wiener process with a drift, that is, a solution of the SDE

    dX(t) = µ dt + σ dW(t),
where µ and σ > 0 are constants. Then if X(s) = x and t > s, we have

    X(t) = x + µ(t − s) + σ (W(t) − W(s)).

Since

    p(y, t; x, s) = ∂/∂y P[X(t) ≤ y | X(s) = x],

we have to calculate P[X(t) ≤ y | X(s) = x] explicitly. Now,

    P[X(t) ≤ y | X(s) = x] = P[ (W(t) − W(s))/√(t − s) ≤ (y − x − µ(t − s))/(σ√(t − s)) ] = N[c(y)],

where c(y) = (y − x − µ(t − s))/(σ√(t − s)) and, as usual, N[·] denotes the cumulative distribution function of the standard normal distribution. Since

    N′(z) = (1/√(2π)) exp(−z²/2),

we have

    p(y, t; x, s) = 1/(σ√(2π(t − s))) · exp(−c(y)²/2)
                  = 1/(σ√(2π(t − s))) · exp(−(y − x − µ(t − s))²/(2σ²(t − s))).

Kolmogorov's forward equation for this process is

    ∂p/∂t (y, t; x, s) + µ ∂p/∂y (y, t; x, s) − (σ²/2) ∂²p/∂y² (y, t; x, s) = 0.

4 Markov processes

If X is a stochastic process with values in R^n, let F_t denote the information generated by the process X during the time interval [0, t]. We say that X has the Markov property if for any s ∈ [0, t] and any bounded Borel function f,

    E[f(X(t)) | F_s] = E[f(X(t)) | X(s)].³

³ More generally, the definition is used with an arbitrary filtration to which X is adapted.
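The closed-form transition CDF N[c(y)] derived in Example 1 above can be sanity-checked by simulating the process directly. The sketch below is an illustrative addition, not part of the original notes: all parameter values, the random seed, and the function names (`transition_cdf`, `empirical_cdf`) are arbitrary choices, and the standard normal CDF is obtained from the error function.

```python
# Monte Carlo check of Example 1: simulate X(t) = x + mu(t-s) + sigma(W(t)-W(s))
# and compare the empirical CDF with the closed form N[c(y)].
import math
import random

def std_normal_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def transition_cdf(y, t, x, s, mu, sigma):
    """Closed-form P[X(t) <= y | X(s) = x] = N[c(y)] from Example 1."""
    c = (y - x - mu*(t - s)) / (sigma * math.sqrt(t - s))
    return std_normal_cdf(c)

def empirical_cdf(y, t, x, s, mu, sigma, n=200_000, seed=42):
    """Monte Carlo estimate of the same probability from simulated X(t)."""
    rng = random.Random(seed)
    hits = sum(
        1 for _ in range(n)
        if x + mu*(t - s) + sigma*rng.gauss(0.0, math.sqrt(t - s)) <= y
    )
    return hits / n

# Illustrative parameter values (not from the text).
closed_val = transition_cdf(1.2, 2.0, 0.5, 1.0, mu=0.3, sigma=0.8)
mc_val = empirical_cdf(1.2, 2.0, 0.5, 1.0, mu=0.3, sigma=0.8)
print(closed_val, mc_val)  # the two should agree to about two decimals
```

Only the increment W(t) − W(s) needs to be simulated, since X(t) depends on the past solely through X(s) = x, which is exactly the Markov property discussed next.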
Equivalently, for any Borel set A ⊂ R^n,

    P[X(t) ∈ A | F_s] = P[X(t) ∈ A | X(s)].

If n = 1, an even simpler formulation is the following:

    P[X(t) ≤ y | X(s) = x, X(s_n) = x_n, ..., X(s_0) = x_0] = P[X(t) ≤ y | X(s) = x],    (2)

where 0 ≤ s_0 < ... < s_n < s and y ∈ R. If a process X has the Markov property, we say that it is a Markov process.

Markov processes satisfy the following Chapman-Kolmogorov equation:

    p(z, u; x, s) = ∫ p(z, u; y, t) p(y, t; x, s) dy,    s ≤ t ≤ u.

Proof: We have

    p(z, u; x, s) = f_{X(u)|X(s)}(z | x)
                  = f_{X(u),X(s)}(z, x) / f_{X(s)}(x)
                  = ∫ f_{X(u),X(t),X(s)}(z, y, x) dy / f_{X(s)}(x),    since integrating out y recovers the marginal density,
                  = ∫ f_{(X(u),X(t))|X(s)}((z, y) | x) dy
                  = ∫ f_{X(u)|(X(t),X(s))}(z | (y, x)) f_{X(t)|X(s)}(y | x) dy,    by the definition of conditional densities,
                  = ∫ f_{X(u)|X(t)}(z | y) f_{X(t)|X(s)}(y | x) dy,    by (2),
                  = ∫ p(z, u; y, t) p(y, t; x, s) dy.

It can be shown that all Itô diffusions are Markov processes (provided that the usual existence and uniqueness conditions for the underlying SDE are satisfied). For some basic types of processes, this can be checked directly.
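The Chapman-Kolmogorov equation can be checked numerically for Brownian motion with drift, whose Gaussian transition density was computed in Example 1: the integral ∫ p(z, u; y, t) p(y, t; x, s) dy is evaluated by the trapezoidal rule on a wide grid and compared with p(z, u; x, s) directly. This is an illustrative sketch, not part of the original notes; the parameter values, grid bounds, and function names are arbitrary choices.

```python
# Trapezoidal-rule check of the Chapman-Kolmogorov equation for
# Brownian motion with drift (Gaussian transition densities).
import math

MU, SIGMA = 0.3, 0.8

def p(y, t, x, s):
    """Gaussian transition density of dX = MU dt + SIGMA dW."""
    var = SIGMA**2 * (t - s)
    return math.exp(-(y - x - MU*(t - s))**2 / (2*var)) / math.sqrt(2*math.pi*var)

def chapman_kolmogorov_lhs(z, u, x, s, t, lo=-15.0, hi=15.0, n=4000):
    """Trapezoidal approximation of the integral over y of
    p(z, u; y, t) * p(y, t; x, s)."""
    h = (hi - lo) / n
    total = 0.0
    for i in range(n + 1):
        y = lo + i*h
        w = 0.5 if i in (0, n) else 1.0   # endpoint weights
        total += w * p(z, u, y, t) * p(y, t, x, s)
    return total * h

# Illustrative times and states: s < t < u.
lhs = chapman_kolmogorov_lhs(z=1.0, u=2.5, x=0.2, s=0.0, t=1.0)
rhs = p(1.0, 2.5, 0.2, 0.0)
print(lhs, rhs)  # the two values should agree closely
```

For Gaussians this just restates that the convolution of two normal densities is again normal, with the drifts and variances adding.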
Example 2: Consider a Geometric Brownian Motion (or GBM):

    dX(t) = µX(t) dt + σX(t) dW(t),

where µ and σ > 0 are constants. If X(s) = x, then, due to uniqueness of solutions of SDEs,

    X(t) = x exp[ (µ − σ²/2)(t − s) + σ (W(t) − W(s)) ],    t ≥ s.    (3)

In particular, the condition X(s) = x determines the CDF of X(t), hence implying (2).

Example 3: Using (3) we can easily calculate the transition density function for GBM. Similarly to what was done in Example 1, we have to calculate P[X(t) ≤ y | X(s) = x] explicitly. We have

    P[X(t) ≤ y | X(s) = x] = P[ x exp((µ − σ²/2)(t − s) + σ(W(t) − W(s))) ≤ y ]
                            = P[ (W(t) − W(s))/√(t − s) ≤ d(y) ]
                            = N[d(y)],

where

    d(y) = ( ln(y/x) − (µ − σ²/2)(t − s) ) / (σ√(t − s))

and N denotes the cumulative distribution function of the standard normal distribution. Since d′(y) = 1/(yσ√(t − s)), we conclude that

    p(y, t; x, s) = ∂/∂y P[X(t) ≤ y | X(s) = x] = 1/(yσ√(2π(t − s))) · exp(−d(y)²/2).

© Maciej Klimek 2013
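As a closing sanity check of Example 3 (an illustrative addition, not part of the original notes), the GBM transition density can be compared against a numerical derivative of the CDF N[d(y)]: differentiating the closed-form CDF by central differences should reproduce exp(−d(y)²/2)/(yσ√(2π(t − s))). All parameter values and function names below are arbitrary choices.

```python
# Check that d/dy N[d(y)] matches the closed-form GBM transition density.
import math

def std_normal_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def gbm_cdf(y, t, x, s, mu, sigma):
    """P[X(t) <= y | X(s) = x] = N[d(y)] from Example 3."""
    d = (math.log(y/x) - (mu - sigma**2/2)*(t - s)) / (sigma*math.sqrt(t - s))
    return std_normal_cdf(d)

def gbm_density(y, t, x, s, mu, sigma):
    """Closed-form transition density of GBM from Example 3."""
    d = (math.log(y/x) - (mu - sigma**2/2)*(t - s)) / (sigma*math.sqrt(t - s))
    return math.exp(-d*d/2) / (y*sigma*math.sqrt(2*math.pi*(t - s)))

# Illustrative parameter values (not from the text).
mu, sigma, s, t, x, y = 0.1, 0.4, 0.0, 1.0, 1.0, 1.3
h = 1e-6
numeric_density = (gbm_cdf(y + h, t, x, s, mu, sigma)
                   - gbm_cdf(y - h, t, x, s, mu, sigma)) / (2*h)
closed_density = gbm_density(y, t, x, s, mu, sigma)
print(numeric_density, closed_density)  # should agree to several decimals
```

The extra factor 1/(yσ√(t − s)) relative to Example 1 is exactly the chain-rule term d′(y) from differentiating through the logarithm.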