Lecture 4: Introduction to stochastic processes and stochastic calculus


Lecture 4: Introduction to stochastic processes and stochastic calculus. Cédric Archambeau, Centre for Computational Statistics and Machine Learning, Department of Computer Science, University College London (c.archambeau@cs.ucl.ac.uk). Advanced Topics in Machine Learning (MSc in Intelligent Systems), January 2008.

Discrete-time vs continuous-time? Real systems are continuous: can we gain something by modelling them in continuous time? Often a physical model is available: can we exploit this information? (Figure: an example continuous-time signal.)

Outline: some definitions; stochastic processes; Lévy processes; Markov processes; diffusion processes; Itô's formula; variational inference for diffusion processes.

Elements of probability theory. A collection $\mathcal{A}$ of subsets of the sample space $\Omega$ is a σ-algebra if: $\mathcal{A}$ contains $\Omega$, i.e. $\Omega \in \mathcal{A}$; $\mathcal{A}$ is closed under complementation, i.e. $\Omega \setminus A \in \mathcal{A}$ if $A \in \mathcal{A}$; $\mathcal{A}$ is closed under countable unions, i.e. $\bigcup_n A_n \in \mathcal{A}$ if $A_1, A_2, \ldots, A_n, \ldots \in \mathcal{A}$. This implies that $\mathcal{A}$ is closed under countable intersections. We say that $(\Omega, \mathcal{A})$ is a measurable space if $\Omega$ is a non-empty set and $\mathcal{A}$ is a σ-algebra of $\Omega$.

Elements of probability theory (continued). A measure $H(\cdot)$ on $(\Omega, \mathcal{A})$ is a nonnegative valued set function on $\mathcal{A}$ satisfying $H(\emptyset) = 0$ and $H(\bigcup_n A_n) = \sum_n H(A_n)$ if $A_i \cap A_j = \emptyset$, for any sequence $A_1, A_2, \ldots, A_n, \ldots \in \mathcal{A}$. If $A \subseteq B$, it follows that $H(A) \le H(B)$. If $H(\Omega)$ is finite, i.e. $0 \le H(\Omega) < \infty$, then $H(\cdot)$ can be normalized to obtain a probability measure $P(\cdot)$: $P(A) = H(A)/H(\Omega)$, with $P(A) \in [0, 1]$, for all $A \in \mathcal{A}$. We say that $(\Omega, \mathcal{A}, P)$ is a probability space if $P$ is a probability measure on the measurable space $(\Omega, \mathcal{A})$.

Elements of probability theory (continued). Let $(\Omega_1, \mathcal{A}_1)$ and $(\Omega_2, \mathcal{A}_2)$ be two measurable spaces. The function $f : \Omega_1 \to \Omega_2$ is measurable if the pre-image of any $A_2 \in \mathcal{A}_2$ is in $\mathcal{A}_1$: $f^{-1}(A_2) = \{\omega_1 \in \Omega_1 : f(\omega_1) \in A_2\} \in \mathcal{A}_1$ for all $A_2 \in \mathcal{A}_2$. Let $(\Omega, \mathcal{A}, P)$ be a probability space. We call the measurable function $X : \Omega \to \mathbb{R}^D$ a continuous random variable.

Stochastic process. Let $T$ be the time index set and $(\Omega, \mathcal{A}, P)$ the underlying probability space. The function $X : T \times \Omega \to \mathbb{R}^D$ is a stochastic process, such that $X_t = X(t, \cdot) : \Omega \to \mathbb{R}^D$ is a random variable for each $t \in T$, and $X_\omega = X(\cdot, \omega) : T \to \mathbb{R}^D$ is a realization or sample path for each $\omega \in \Omega$. When considering continuous time systems, $T$ will often be equal to $\mathbb{R}^+$. In practice, we call a collection of random variables $X = \{X_t, t \ge 0\}$ defined on a common probability space a stochastic process. We can think of $X_t$ as the position of a particle at time $t$, changing as $t$ varies. The particle moves continuously or has jumps for some $t \ge 0$: $\Delta X_t = X_{t^+} - X_{t^-} = \lim_{\epsilon \downarrow 0} X_{t+\epsilon} - \lim_{\epsilon \downarrow 0} X_{t-\epsilon}$. In general, we will assume that the process is right-continuous, i.e. $X_{t^+} = X_t$.

Independence. Let $\{Y_1, \ldots, Y_n\}$ be a collection of random variables, with $Y_i \in \mathbb{R}^{D_i}$. They are independent if $P(Y_1 \in A_1, \ldots, Y_n \in A_n) = \prod_{i=1}^n P(Y_i \in A_i)$ for all $A_i \subseteq \mathbb{R}^{D_i}$. An infinite collection is said to be independent if every finite subcollection is independent. A stochastic process $X = \{X_t, t \ge 0\}$ has independent increments if the random variables $X_{t_0}, X_{t_1} - X_{t_0}, \ldots, X_{t_n} - X_{t_{n-1}}$ are independent for all $n \ge 1$ and $t_0 < t_1 < \ldots < t_n$.

Stationarity. A stochastic process is (strictly) stationary if all the joint marginals are invariant under time displacement $h \ge 0$, that is $p(X_{t_1+h}, X_{t_2+h}, \ldots, X_{t_n+h}) = p(X_{t_1}, X_{t_2}, \ldots, X_{t_n})$ for all $t_1, \ldots, t_n$. The stochastic process $X = \{X_t, t \ge 0\}$ is wide-sense stationary if there exist a constant $m \in \mathbb{R}^D$ and a function $C : \mathbb{R}^+ \to \mathbb{R}^{D \times D}$, such that $\mu_t \equiv \langle X_t \rangle = m$, $\Sigma_t \equiv \langle (X_t - \mu_t)(X_t - \mu_t)^\top \rangle = C(0)$ and $V_{s,t} \equiv \langle (X_t - \mu_t)(X_s - \mu_s)^\top \rangle = C(t - s)$, for all $s, t \in \mathbb{R}^+$. We call $V_{s,t}$ the two-time covariance. The stochastic process $X = \{X_t, t \ge 0\}$ has stationary increments if $X_{t+s} - X_t$ has the same distribution as $X_s$ for all $s, t \ge 0$.

Example: Poisson process. The Poisson process with intensity parameter $\lambda > 0$ is a continuous time stochastic process $X = \{X_t, t \in \mathbb{R}^+\}$ with independent, stationary increments: $X_t - X_s \sim \mathcal{P}(\lambda(t - s))$ and $X_0 = 0$, for all $0 \le s \le t$. The Poisson process is not wide-sense stationary: $\mu_t = \lambda t$, $\sigma_t^2 = \lambda t$, $v_{s,t} = \lambda \min\{s, t\}$. (Figure: a sample path of $X_t$ against $t$.) The Poisson process is right-continuous and, in fact, it is a Lévy process (see later) consisting only of jumps.
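
As a quick check of these moments, here is a minimal simulation sketch (assuming NumPy; $\lambda$, the horizon and the grid are arbitrary illustrative choices) that builds Poisson-process paths from independent Poisson increments and compares the empirical mean and variance at time $T$ with $\lambda T$:

```python
import numpy as np

rng = np.random.default_rng(0)
lam, T, n_steps, n_paths = 2.0, 50.0, 500, 2000
dt = T / n_steps

# Independent, stationary increments: X_{t+dt} - X_t ~ Poisson(lam * dt), X_0 = 0.
increments = rng.poisson(lam * dt, size=(n_paths, n_steps))
X = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(increments, axis=1)], axis=1)

print("empirical mean at T:", X[:, -1].mean(), "  lambda*T:", lam * T)
print("empirical var  at T:", X[:, -1].var(),  "  lambda*T:", lam * T)
```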

The Poisson distribution (or law of rare events) $\mathcal{P}(\lambda)$ is defined as $p(n) = \frac{\lambda^n}{n!} e^{-\lambda}$, $n \in \mathbb{N}$, where $\lambda > 0$. The mean and the variance are given by $\langle n \rangle = \lambda$ and $\langle (n - \langle n \rangle)^2 \rangle = \lambda$.

Lévy process. A stochastic process $X = \{X_t, t \ge 0\}$ is a Lévy process if: the increments on disjoint time intervals are independent; the increments are stationary, i.e. increments over equally long time intervals are identically distributed; the sample paths are right-continuous with left limits, i.e. $\lim_{\epsilon \downarrow 0} X_{t+\epsilon} = X_t$ and $\lim_{\epsilon \downarrow 0} X_{t-\epsilon} = X_{t^-}$. Lévy processes are usually described in terms of the Lévy-Khintchine representation. A Lévy process can have three types of components: a deterministic drift, a random diffusion component and a random jump component. It is implicitly assumed that a Lévy process starts at $X_0 = 0$ with probability 1. Applications: financial stock prices (Black-Scholes), population models (birth-and-death processes), ...

Interpretation of Lévy processes. Lévy processes are the continuous time equivalent of random walks. A random walk over $n$ time units is a sum of $n$ independent and identically distributed random variables: $S_n = \sum_{i=1}^n x_i$, where the $x_i$ are iid random variables. Random walks have independent and stationary increments. (Figure: example of a Gaussian random walk with $S_0 = 1$.)
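
A minimal sketch of such a Gaussian random walk (assuming NumPy; the step variance and $S_0 = 1$ are illustrative choices mirroring the figure), whose increments are independent and identically distributed by construction:

```python
import numpy as np

rng = np.random.default_rng(1)
n_steps, step_std, S0 = 100, 0.1, 1.0

# S_n = S_0 + sum of n iid Gaussian steps: independent, stationary increments.
steps = rng.normal(0.0, step_std, size=n_steps)
S = S0 + np.concatenate([[0.0], np.cumsum(steps)])
print(S[:5])
```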

Interpretation of Lévy processes (continued). A random variable $X$ has an infinitely divisible distribution if for every $m \ge 1$ we can write $X \stackrel{d}{=} \sum_{j=1}^m X_j^{(m)}$, where the $\{X_j^{(m)}\}_j$ are iid. For example the Normal, Poisson or Gamma distributions are infinitely divisible; the Bernoulli is not. Lévy processes are infinitely divisible since the increments for non-overlapping time intervals are independent and stationary: $X_s = \sum_{j=1}^m (X_{js/m} - X_{(j-1)s/m})$ for all $m \ge 1$. In fact, it can be shown that there is a Lévy process for each infinitely divisible probability distribution.

Markov process. The stochastic process $X = \{X_t, t \ge 0\}$ is a (continuous time, continuous state) Markov process if $p(X_t \mid X_{r_1}, \ldots, X_{r_n}, X_s) = p(X_t \mid X_s)$ for all $0 \le r_1 \le \ldots \le r_n \le s \le t$. We call $p(X_t \mid X_s)$ the transition density; it can be time dependent. The Chapman-Kolmogorov equation follows from the Markov property: $p(X_t \mid X_s) = \int p(X_t \mid X_\tau)\, p(X_\tau \mid X_s)\, dX_\tau$ for all $s \le \tau \le t$. The Chapman-Kolmogorov equation already played an important role in (discrete time) dynamical systems. Lévy processes satisfy the Markov property.
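
To make the Chapman-Kolmogorov equation concrete, here is a small numerical check (a sketch assuming NumPy and SciPy; the times, states and integration grid are arbitrary) using the Gaussian transition density of the standard Wiener process introduced later, $p(W_t \mid W_s) = \mathcal{N}(W_s, t - s)$:

```python
import numpy as np
from scipy.stats import norm

s, tau, t = 0.0, 0.4, 1.0
x_s, x_t = 0.0, 0.7                    # fixed start and end states
x_tau = np.linspace(-10, 10, 20001)    # integration grid for the intermediate state

# Direct transition density p(x_t | x_s) for the Wiener process: N(x_s, t - s).
direct = norm.pdf(x_t, loc=x_s, scale=np.sqrt(t - s))

# Chapman-Kolmogorov: integrate p(x_t | x_tau) p(x_tau | x_s) over x_tau.
integrand = norm.pdf(x_t, loc=x_tau, scale=np.sqrt(t - tau)) * \
            norm.pdf(x_tau, loc=x_s, scale=np.sqrt(tau - s))
ck = np.trapz(integrand, x_tau)

print(direct, ck)   # the two values should agree to numerical precision
```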

Markov process (continued). A Markov process is homogeneous if its transition density depends only on the time difference: $p(X_{t+h} \mid X_t) = p(X_h \mid X_0)$ for all $h \ge 0$. The Poisson process is a homogeneous, discrete-state Markov process: $P(n_{t+h} \mid n_t) = \mathcal{P}(\lambda h) = P(n_h \mid n_0)$. Let $f(\cdot)$ be a bounded function. A Markov process is ergodic if the time average coincides with the spatial average in the limit, i.e. $\lim_{T \to \infty} \frac{1}{T} \int_0^T f(X_t)\, dt = \langle f \rangle$, where the expectation is taken with respect to the stationary probability density.

Martingale (fair game). A martingale is a stochastic process such that the expectation of some future event given the past and the present is the same as if given only the present: $\langle X_t \mid \{X_\tau, 0 \le \tau \le s\} \rangle = X_s$ for all $t \ge s$. More formally, let $(\Omega, \mathcal{A}, P)$ be a probability space and $\{\mathcal{A}_t, t \ge 0\}$ a filtration of $\mathcal{A}$. (A filtration $\{\mathcal{A}_t, t \ge 0\}$ of $\mathcal{A}$ is an increasing family of σ-algebras on the measurable space $(\Omega, \mathcal{A})$, that is $\mathcal{A}_s \subseteq \mathcal{A}_t \subseteq \mathcal{A}$ for any $0 \le s \le t$. This means that more information becomes available with increasing time.) The stochastic process $X = \{X_t, t \ge 0\}$ is a martingale if $\langle X_t \mid \mathcal{A}_s \rangle = X_s$ with probability 1, for all $0 \le s < t$. When the process $X_t$ satisfies the Markov property, we have $\langle X_t \mid \mathcal{A}_s \rangle = \langle X_t \mid X_s \rangle$.

Diffusion process. A Markov process $X = \{X_t, t \ge 0\}$ is a diffusion process if the following limits exist for all $\epsilon > 0$:
$\lim_{t \downarrow s} \frac{1}{t - s} \int_{|X_t - X_s| > \epsilon} p(X_t \mid X_s)\, dX_t = 0$,
$\lim_{t \downarrow s} \frac{1}{t - s} \int_{|X_t - X_s| < \epsilon} (X_t - X_s)\, p(X_t \mid X_s)\, dX_t = \alpha(s, X_s)$,
$\lim_{t \downarrow s} \frac{1}{t - s} \int_{|X_t - X_s| < \epsilon} (X_t - X_s)(X_t - X_s)^\top p(X_t \mid X_s)\, dX_t = \beta(s, X_s)\beta^\top(s, X_s)$,
where $0 \le s \le t$. We call the vector $\alpha(s, x)$ the drift and the matrix $\beta(s, x)$ the diffusion coefficient at time $s$ and state $X_s = x$. The first condition prevents the diffusion process from having instantaneous jumps. The drift $\alpha$ is the instantaneous rate of change of the mean, given that $X_s = x$ at time $s$. The diffusion matrix $D = \beta\beta^\top$ is the instantaneous rate of change of the squared fluctuations of the process, given that $X_s = x$ at time $s$.
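
These defining limits can be estimated directly from small-time increments. A sketch (assuming NumPy; the process $X_t = \alpha t + \beta W_t$ with $\alpha = 0.5$, $\beta = 1$ is an arbitrary example, not from the lecture) recovering the drift and the squared diffusion coefficient from $\langle X_{s+\Delta} - X_s \rangle / \Delta$ and $\langle (X_{s+\Delta} - X_s)^2 \rangle / \Delta$:

```python
import numpy as np

rng = np.random.default_rng(10)
alpha_true, beta_true = 0.5, 1.0     # drift and diffusion of X_t = alpha t + beta W_t
delta, n_paths = 1e-2, 200_000

# One small-time increment X_{s+delta} - X_s for many independent paths.
dX = alpha_true * delta + beta_true * rng.normal(0.0, np.sqrt(delta), size=n_paths)

alpha_hat = dX.mean() / delta        # instantaneous rate of change of the mean
D_hat = (dX ** 2).mean() / delta     # instantaneous rate of change of squared fluctuations

print("estimated drift:          ", alpha_hat, " (true 0.5)")
print("estimated diffusion D:    ", D_hat,     " (true 1.0)")
```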

Diffusion process (continued). Diffusion processes are almost surely continuous functions of time, but they need not be differentiable. Diffusion processes are Lévy processes (without the jump component). The time evolution of the transition density $p(y, t \mid x, s)$ with $s \le t$, given some initial condition or target constraint, was described by Kolmogorov. The forward evolution of the transition density is given by the Kolmogorov forward equation (also known as the Fokker-Planck equation):
$\frac{\partial p}{\partial t} = -\sum_i \frac{\partial}{\partial y_i}\{\alpha_i(t, y)\, p\} + \frac{1}{2} \sum_{i,j} \frac{\partial^2}{\partial y_i \partial y_j}\{D_{ij}(t, y)\, p\}$,
for a fixed initial state $(s, x)$. The backward evolution of the transition density is given by the Kolmogorov backward equation (or adjoint equation):
$-\frac{\partial p}{\partial s} = \sum_i \alpha_i(s, x) \frac{\partial p}{\partial x_i} + \frac{1}{2} \sum_{i,j} D_{ij}(s, x) \frac{\partial^2 p}{\partial x_i \partial x_j}$,
for a fixed final state $(t, y)$.

Wiener process. The Wiener process was proposed by Wiener as a mathematical description of Brownian motion. It characterizes the erratic motion (i.e. diffusion) of a pollen grain on a water surface due to it being continually bombarded by water molecules. It can be viewed as a scaling limit of a random walk on any finite time interval (Donsker's theorem). It is also commonly used to model stock market fluctuations. (Figure: a sample path of $W_t$ against $t$.)

Wiener process (continued). A standard Wiener process is a continuous time Gaussian Markov process $W = \{W_t, t \ge 0\}$ with (non-overlapping) independent increments for which $W_0 = 0$, the sample path $W_\omega$ is almost surely continuous for all $\omega \in \Omega$, and $W_t - W_s \sim \mathcal{N}(0, t - s)$ for all $0 \le s \le t$. The sample paths of $W$ are almost surely nowhere differentiable. The expectation $\langle W_t \rangle$ is equal to 0 for all $t$. $W$ is not wide-sense stationary, as $v_{s,t} = \min\{s, t\}$, but it has stationary increments. $W$ is homogeneous since $p(W_{t+h} \mid W_t) = p(W_h \mid W_0)$. $W$ is a diffusion process with drift $\alpha = 0$ and diffusion coefficient $\beta = 1$, such that Kolmogorov's forward and backward equations are given by
$\frac{\partial p}{\partial t} - \frac{1}{2}\frac{\partial^2 p}{\partial y^2} = 0$, $\quad \frac{\partial p}{\partial s} + \frac{1}{2}\frac{\partial^2 p}{\partial x^2} = 0$.
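
A minimal simulation sketch (assuming NumPy; the horizon, grid and the chosen $s, t$ are arbitrary) that builds standard Wiener paths from independent $\mathcal{N}(0, dt)$ increments and checks the two-time covariance $v_{s,t} = \min\{s, t\}$:

```python
import numpy as np

rng = np.random.default_rng(2)
T, n_steps, n_paths = 1.0, 1000, 5000
dt = T / n_steps

# Standard Wiener paths from independent N(0, dt) increments, W_0 = 0.
dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
W = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(dW, axis=1)], axis=1)

s_idx, t_idx = 300, 800                  # grid indices of s = 0.3, t = 0.8
s, t = s_idx * dt, t_idx * dt
print("empirical cov(W_s, W_t):", np.mean(W[:, s_idx] * W[:, t_idx]))
print("min(s, t):              ", min(s, t))
```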

Informal proof that a Wiener process is not differentiable. Consider the partition of a bounded time interval $[s, t]$ into subintervals $[\tau_k^{(n)}, \tau_{k+1}^{(n)}]$ of equal length, such that $\tau_k^{(n)} = s + k \frac{t - s}{2^n}$, $k = 0, 1, \ldots, 2^n - 1$. Consider a sample path $W_\omega(\tau)$ of the standard Wiener process $W = \{W_\tau, \tau \in [s, t]\}$. It can be shown (Kloeden and Platen, p. 72) that
$\lim_{n \to \infty} \sum_{k=0}^{2^n - 1} \left| W(\tau_{k+1}^{(n)}, \omega) - W(\tau_k^{(n)}, \omega) \right|^2 = t - s$.
Hence, taking the limit superior, i.e. the supremum of all the limit points (for $S \subseteq T$, the supremum of $S$ is the least element of $T$ that is greater than or equal to all elements of $S$), we get
$t - s \le \limsup_{n \to \infty} \max_k \left| W(\tau_{k+1}^{(n)}, \omega) - W(\tau_k^{(n)}, \omega) \right| \sum_{k=0}^{2^n - 1} \left| W(\tau_{k+1}^{(n)}, \omega) - W(\tau_k^{(n)}, \omega) \right|$.
From the sample path continuity, we have $\max_k |W(\tau_{k+1}^{(n)}, \omega) - W(\tau_k^{(n)}, \omega)| \to 0$ with probability 1 when $n \to \infty$, and therefore $\sum_{k=0}^{2^n - 1} |W(\tau_{k+1}^{(n)}, \omega) - W(\tau_k^{(n)}, \omega)| \to \infty$. As a consequence, the sample paths almost surely do not have bounded variation on $[s, t]$ and cannot be differentiated.
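
The two limits above can be illustrated numerically. A sketch (assuming NumPy; for simplicity an independent path is drawn at each resolution, which is enough to illustrate the behaviour) showing that the sum of squared increments stays near $t - s$ while the sum of absolute increments grows as the partition is refined:

```python
import numpy as np

rng = np.random.default_rng(3)
s, t = 0.0, 1.0

# Wiener increments on increasingly fine dyadic partitions of [s, t].
for n in (6, 10, 14):
    k = 2 ** n
    dW = rng.normal(0.0, np.sqrt((t - s) / k), size=k)
    quad_var = np.sum(dW ** 2)       # converges to t - s
    total_var = np.sum(np.abs(dW))   # grows without bound as the partition is refined
    print(f"n = {n:2d}: quadratic variation = {quad_var:.3f}, total variation = {total_var:.1f}")
```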

Let $s \le t$. The two-time covariance is then given by
$v_{s,t} = \langle W_t W_s \rangle = \langle (W_t - W_s + W_s) W_s \rangle = \langle W_t - W_s \rangle \langle W_s \rangle + \langle W_s^2 \rangle = 0 \cdot 0 + s$.
The transition density of $W$ is given by $p(W_t \mid W_s) = \mathcal{N}(W_s, t - s)$. Hence, the drift and the diffusion coefficient for a standard Wiener process are
$\alpha(s, W_s) = \lim_{t \downarrow s} \frac{\langle W_t - W_s \rangle}{t - s} = 0$,
$\beta^2(s, W_s) = \lim_{t \downarrow s} \frac{\langle W_t^2 - 2 W_t W_s + W_s^2 \rangle}{t - s} = \lim_{t \downarrow s} \frac{\langle W_t^2 \rangle - 2\langle W_t W_s \rangle + \langle W_s^2 \rangle}{t - s} = \lim_{t \downarrow s} \frac{t - s}{t - s} = 1$.
The same results are found by directly differentiating the transition density as required in Kolmogorov's equations.

Brownian bridge. A Brownian bridge is a Wiener process pinned at both ends, i.e. the sample paths all go through an initial state at time $t = 0$ and a given state at a later time $t = T$. Let $W = \{W_t, t \ge 0\}$ be a standard Wiener process. The Brownian bridge $B(x_0, y_T) = \{B_t(x_0, y_T), 0 \le t \le T\}$ is a stochastic process such that
$B_t(x_0, y_T) = x_0 + W_t - \frac{t}{T}(x_0 + W_T - y_T)$.
A Brownian bridge $B_t(x_0, y_T)$ is a Gaussian process with mean function and two-time covariance given by
$\langle B_t \rangle = x_0 - \frac{t}{T}(x_0 - y_T)$, $\quad v_{s,t} = \min\{s, t\} - \frac{st}{T}$,
for $0 \le s, t \le T$.
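
A sketch (assuming NumPy; $T$, $x_0$, $y_T$ and the grid are arbitrary) that constructs Brownian bridge paths from Wiener paths via the formula above and checks the pinning and the two-time covariance:

```python
import numpy as np

rng = np.random.default_rng(4)
T, n_steps, n_paths = 5.0, 500, 4000
x0, yT = 0.0, 1.0
dt = T / n_steps
t = np.linspace(0.0, T, n_steps + 1)

# Standard Wiener paths on [0, T].
dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
W = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(dW, axis=1)], axis=1)

# Pin each path: B_t = x0 + W_t - (t/T) * (x0 + W_T - yT).
B = x0 + W - (t / T) * (x0 + W[:, -1:] - yT)

print("B_0:", B[:, 0].mean(), " B_T:", B[:, -1].mean())   # exactly x0 and yT
s_idx, t_idx = 100, 400
print("empirical cov:    ", np.cov(B[:, s_idx], B[:, t_idx])[0, 1])
print("min(s,t) - s*t/T: ", min(t[s_idx], t[t_idx]) - t[s_idx] * t[t_idx] / T)
```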

Brownian bridge (continued). (Figure: sample path examples of a Brownian bridge for different initial and final states.)

Diffusion processes revisited. Let $W = \{W_t, t \ge 0\}$ be a standard Wiener process. The time evolution of a diffusion process can be described by a stochastic differential equation (SDE):
$dX_t = \alpha(t, X_t)\, dt + \beta(t, X_t)\, dW_t$, $\quad dW_t \sim \mathcal{N}(0, dt\, I_D)$,
where $X = \{X_t, t \ge 0\}$ is a stochastic process with drift $\alpha \in \mathbb{R}^D$ and diffusion coefficient $\beta \in \mathbb{R}^{D \times D}$. This representation corresponds to the state-space representation of discrete-time dynamical systems. An SDE is interpreted as a (stochastic) integral equation along a sample path $\omega$, that is
$X(t, \omega) - X(s, \omega) = \int_s^t \alpha(\tau, X(\tau, \omega))\, d\tau + \int_s^t \beta(\tau, X(\tau, \omega)) \frac{dW(\tau, \omega)}{d\tau}\, d\tau$.
This representation is symbolic, as a Wiener process is almost surely not differentiable; the limiting difference quotient $\frac{W(\tau + h, \omega) - W(\tau, \omega)}{h} \sim \mathcal{N}(0, 1/h)$, $h \to 0$, corresponds to Gaussian white noise. This means that Gaussian white noise cannot be realized physically!
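
The SDE form suggests the simplest simulation scheme, the Euler-Maruyama discretization: replace $dt$ by a small step and $dW_t$ by an independent $\mathcal{N}(0, dt)$ draw. A minimal sketch (assuming NumPy; the drift and diffusion functions are arbitrary scalar examples, not from the lecture):

```python
import numpy as np

def euler_maruyama(alpha, beta, x0, T, n_steps, rng):
    """Simulate dX_t = alpha(t, X_t) dt + beta(t, X_t) dW_t on [0, T]."""
    dt = T / n_steps
    X = np.empty(n_steps + 1)
    X[0] = x0
    for k in range(n_steps):
        t = k * dt
        dW = rng.normal(0.0, np.sqrt(dt))
        X[k + 1] = X[k] + alpha(t, X[k]) * dt + beta(t, X[k]) * dW
    return X

rng = np.random.default_rng(5)
# Example coefficients (illustrative only): linear drift, constant diffusion.
path = euler_maruyama(lambda t, x: -x, lambda t, x: 0.5, x0=1.0, T=2.0,
                      n_steps=2000, rng=rng)
print(path[-1])
```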

Construction of Itô's stochastic integral. The central question is how to compute a stochastic integral of the form
$\int_s^t \beta(\tau, X(\tau, \omega))\, dW(\tau, \omega) = ?$
K. Itô's starting point is the following. Consider the standard Wiener process $W = \{W_t, t \ge 0\}$ and a (scalar) constant diffusion coefficient $\beta(t, X_t) = \beta$ for all $t$. The integral along the sample path $\omega$ is equal to
$\int_s^t \beta\, dW(\tau, \omega) = \beta \{W(t, \omega) - W(s, \omega)\}$
with probability 1. The expected integral and the expected squared integral are thus given by
$\left\langle \int_s^t \beta\, dW(\tau, \omega) \right\rangle = 0$, $\quad \left\langle \left( \int_s^t \beta\, dW(\tau, \omega) \right)^2 \right\rangle = \beta^2 (t - s)$.

Construction of Itô's stochastic integral (continued). Consider the integral of the random function $f : T \times \Omega \to \mathbb{R}$: $I[f](\omega) = \int_s^t f(\tau, \omega)\, dW(\tau, \omega)$. It is assumed that $f$ is mean square integrable. (1) If $f$ is a random step function, that is $f(t, \omega) = f_j(\omega)$ on $[t_j, t_{j+1})$, then
$I[f](\omega) = \sum_{j=1}^{n-1} f_j(\omega)\{W(t_{j+1}, \omega) - W(t_j, \omega)\}$
with probability 1 for all $\omega$. Since $f_j(\omega)$ is constant on $[t_j, t_{j+1})$, we get $\langle I[f] \rangle = 0$ and $\langle I^2[f] \rangle = \sum_j \langle f_j^2 \rangle (t_{j+1} - t_j)$. (2) If $f^{(n)}$ is a sequence of random $n$-step functions converging to the general random function $f$, such that $f^{(n)}(t, \omega) = f(t_j^{(n)}, \omega)$ on $[t_j^{(n)}, t_{j+1}^{(n)})$, then
$I[f^{(n)}](\omega) = \sum_{j=1}^{n-1} f(t_j^{(n)}, \omega)\{W(t_{j+1}^{(n)}, \omega) - W(t_j^{(n)}, \omega)\}$
with probability 1 for all $\omega$. The same results follow.

The Itô stochastic integral. Theorem: the Itô stochastic integral $I[f]$ of a random function $f : T \times \Omega \to \mathbb{R}$ is the (unique) mean square limit of the sequence of stochastic integrals $I[f^{(n)}]$ for any sequence of random $n$-step functions $f^{(n)}$ converging to $f$:
$I[f](\omega) = \text{m.s.} \lim_{n \to \infty} \sum_{j=1}^{n-1} f(t_j^{(n)}, \omega)\{W(t_{j+1}^{(n)}, \omega) - W(t_j^{(n)}, \omega)\}$
with probability 1 and $s = t_1^{(n)} < \ldots < t_{n-1}^{(n)} < t$. The Itô integral of $f$ with respect to $W$ is a zero mean random variable. Since the Itô integral is constructed from the sequence $f^{(n)}$ evaluated at the left endpoints $t_j$, it defines a stochastic process which is a martingale. The chain rule from classical calculus does not apply (see later)! The Stratonovich construction preserves the classical chain rule, but not the martingale property. We call $\langle I^2[f] \rangle = \int_s^t \langle f^2 \rangle\, dt$ the Itô isometry.
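
The left-endpoint convention can be seen numerically. A sketch (assuming NumPy) that approximates $\int_0^T W_\tau\, dW_\tau$ with an Itô (left-point) sum and a Stratonovich (midpoint) sum; the former approaches $(W_T^2 - T)/2$, the latter $W_T^2/2$:

```python
import numpy as np

rng = np.random.default_rng(6)
T, n = 1.0, 100_000
dt = T / n

dW = rng.normal(0.0, np.sqrt(dt), size=n)
W = np.concatenate([[0.0], np.cumsum(dW)])

ito = np.sum(W[:-1] * dW)                    # left-endpoint evaluation (Ito)
strat = np.sum(0.5 * (W[:-1] + W[1:]) * dW)  # midpoint evaluation (Stratonovich)

print("Ito sum:          ", ito,   " expected:", 0.5 * (W[-1] ** 2 - T))
print("Stratonovich sum: ", strat, " expected:", 0.5 * W[-1] ** 2)
```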

Itô's stochastic integral follows from the fact that
$\langle I^2[f^{(n)}] \rangle = \sum_{j=1}^{n-1} \langle f^2(t_j^{(n)}, \omega) \rangle (t_{j+1}^{(n)} - t_j^{(n)})$
is a proper Riemann integral for $t_{j+1}^{(n)} - t_j^{(n)} \to 0$.

Itô formula. Let $Y_t = U(t, X_t)$ and consider the process $X = \{X_t, t \ge 0\}$ described by the following SDE: $dX_t = \alpha(t, X_t)\, dt + \beta(t, X_t)\, dW_t$. The stochastic chain rule is given by
$dY_t = \left\{ \frac{\partial U}{\partial t} + \alpha \frac{\partial U}{\partial x} + \frac{1}{2} \beta^2 \frac{\partial^2 U}{\partial x^2} \right\} dt + \beta \frac{\partial U}{\partial x}\, dW_t$
with probability 1. The additional term comes from the fact that $dW_t^2$ is of $O(dt)$. The symbolic SDE is to be interpreted as an Itô stochastic integral, i.e. with equality in the mean square sense.

Chain rule for classical calculus: consider $y = u(t, x)$. Discarding the second and higher order terms in the Taylor expansion of $u$ leads to
$dy = u(t + dt, x + dx) - u(t, x) = \frac{\partial u}{\partial t} dt + \frac{\partial u}{\partial x} dx$.
Chain rule for stochastic calculus: for $Y_t = U(t, X_t)$, the Taylor expansion of $U$ leads to
$dY_t = U(t + dt, X_t + dX_t) - U(t, X_t) = \frac{\partial U}{\partial t} dt + \frac{\partial U}{\partial x} dX_t + \frac{1}{2} \left\{ \frac{\partial^2 U}{\partial t^2} (dt)^2 + 2 \frac{\partial^2 U}{\partial t \partial x}\, dt\, dX_t + \frac{\partial^2 U}{\partial x^2} (dX_t)^2 \right\} + \ldots$,
where $(dX_t)^2 = \alpha^2 (dt)^2 + 2\alpha\beta\, dt\, dW_t + \beta^2 (dW_t)^2$. Hence, we need to keep the additional term of $O(dt)$, such that
$dY_t = \left\{ \frac{\partial U}{\partial t} + \frac{1}{2} \beta^2 \frac{\partial^2 U}{\partial x^2} \right\} dt + \frac{\partial U}{\partial x} dX_t$.
Substituting $dX_t$ leads to the desired result.
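
A numerical illustration of the extra $O(dt)$ term (a sketch assuming NumPy; the choice $U(t, x) = x^2$ with $\alpha = 0$, $\beta = 1$, i.e. $d(W_t^2) = dt + 2 W_t\, dW_t$, is an illustrative example):

```python
import numpy as np

rng = np.random.default_rng(7)
T, n = 1.0, 100_000
dt = T / n

dW = rng.normal(0.0, np.sqrt(dt), size=n)
W = np.concatenate([[0.0], np.cumsum(dW)])

# Ito's formula with U(t, x) = x^2, alpha = 0, beta = 1:
# dY_t = dt + 2 W_t dW_t, with the dt coming from the (1/2) beta^2 U_xx term.
rhs = np.sum(np.full(n, dt) + 2.0 * W[:-1] * dW)
lhs = W[-1] ** 2 - W[0] ** 2

print("W_T^2 - W_0^2:     ", lhs)
print("integrated Ito RHS:", rhs)
```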

Application: Black-Scholes option pricing model. Assume the evolution of a stock price $X_t$ is described by a geometric Wiener process:
$dX_t = \rho X_t\, dt + \sigma X_t\, dW_t$,
where $\rho$ is called the risk-free rate (or drift) and $\sigma$ the volatility. Consider the change of variable $Y_t = \log X_t$. Applying the stochastic chain rule leads to the Black-Scholes formula:
$dY_t = \left( \rho - \frac{\sigma^2}{2} \right) dt + \sigma\, dW_t$.
This leads to the following solution for the stock price at time $t$:
$X_t = X_0 \exp\left\{ \left( \rho - \frac{\sigma^2}{2} \right) t + \sigma W_t \right\}$.
Assumptions: no dividends or charges; European exercise terms; markets are efficient; interest rates are known; returns are log-normal.
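
A sketch (assuming NumPy; $\rho$, $\sigma$ and $X_0$ are arbitrary illustrative values) comparing an Euler-Maruyama discretization of the geometric Wiener process with the closed-form solution driven by the same Wiener path:

```python
import numpy as np

rng = np.random.default_rng(8)
rho, sigma, X0 = 0.05, 0.2, 1.0
T, n = 1.0, 10_000
dt = T / n

dW = rng.normal(0.0, np.sqrt(dt), size=n)
W = np.concatenate([[0.0], np.cumsum(dW)])

# Euler-Maruyama discretization of dX = rho X dt + sigma X dW.
X = np.empty(n + 1)
X[0] = X0
for k in range(n):
    X[k + 1] = X[k] + rho * X[k] * dt + sigma * X[k] * dW[k]

# Closed-form solution driven by the same Wiener path.
X_exact = X0 * np.exp((rho - 0.5 * sigma ** 2) * np.linspace(0, T, n + 1) + sigma * W)

print("Euler-Maruyama X_T:", X[-1], "  exact X_T:", X_exact[-1])
```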

Ornstein-Uhlenbeck (OU) process. The Ornstein-Uhlenbeck process with drift parameter $\gamma > 0$ and mean $\mu$ is defined as follows:
$dX_t = -\gamma(X_t - \mu)\, dt + \sigma\, dW_t$.
The OU process is known as the mean reverting process. It is a Gaussian process with covariance function $v_{s,t} = \frac{\sigma^2}{2\gamma} e^{-\gamma |t - s|}$, $s \le t$. It is wide-sense stationary. It is a homogeneous Markov process. It is a diffusion process. It is the continuous time equivalent of the discrete-time AR(1) process.
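
The AR(1) connection can be made explicit: over a step $\Delta$, the OU transition is exactly $X_{t+\Delta} = \mu + e^{-\gamma\Delta}(X_t - \mu) + \epsilon_t$ with $\epsilon_t \sim \mathcal{N}\!\left(0, \frac{\sigma^2}{2\gamma}(1 - e^{-2\gamma\Delta})\right)$. A sketch (assuming NumPy; parameter values are arbitrary) simulating the process with this exact recursion and checking the stationary variance $\sigma^2/(2\gamma)$:

```python
import numpy as np

rng = np.random.default_rng(9)
gamma, mu, sigma = 2.0, 0.0, 1.0
T, n = 5.0, 5000
dt = T / n

# Exact one-step transition of the OU process: an AR(1) recursion
# X_{t+dt} = mu + a (X_t - mu) + e_t,  a = exp(-gamma dt),
# e_t ~ N(0, sigma^2 (1 - a^2) / (2 gamma)).
a = np.exp(-gamma * dt)
noise_std = sigma * np.sqrt((1.0 - a ** 2) / (2.0 * gamma))

X = np.empty(n + 1)
X[0] = mu
for k in range(n):
    X[k + 1] = mu + a * (X[k] - mu) + noise_std * rng.normal()

print("stationary variance (empirical):", X[n // 2:].var())
print("sigma^2 / (2 gamma):            ", sigma ** 2 / (2 * gamma))
```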

OU process (continued). (Figure: sample path examples of an OU process with different drift and diffusion coefficients; the same mean $\mu$ and initial condition were used.)

References
Crispin W. Gardiner: Handbook of Stochastic Methods. Springer, 2004 (3rd edition).
Peter E. Kloeden and Eckhard Platen: Numerical Solution of Stochastic Differential Equations. Springer, 1999.
Bernt Øksendal: Stochastic Differential Equations: An Introduction with Applications. Springer, 2000 (5th edition).
Christopher K. I. Williams: A Tutorial Introduction to Stochastic Differential Equations: Continuous Time Gaussian Markov Processes. NIPS workshop on Dynamical Systems, Stochastic Processes and Bayesian Inference, 2006.
Matthias Winkel: Lévy Processes and Finance.