
Lectures on Lévy Processes and Stochastic Calculus (Braunschweig), Lecture 2: Lévy Processes
David Applebaum, Probability and Statistics Department, University of Sheffield, UK
July 22nd - 24th 2010

Let $X = (X(t), t \ge 0)$ be a stochastic process defined on a probability space $(\Omega, \mathcal{F}, P)$. We say that it has independent increments if for each $n \in \mathbb{N}$ and each $0 \le t_1 < t_2 < \cdots < t_{n+1} < \infty$, the random variables $(X(t_{j+1}) - X(t_j), 1 \le j \le n)$ are independent, and that it has stationary increments if each $X(t_{j+1}) - X(t_j) \stackrel{d}{=} X(t_{j+1} - t_j) - X(0)$.

We say that $X$ is a Lévy process if

(L1) $X(0) = 0$ (a.s.),
(L2) $X$ has independent and stationary increments,
(L3) $X$ is stochastically continuous, i.e. for all $a > 0$ and for all $s \ge 0$,
$$\lim_{t \to s} P(|X(t) - X(s)| > a) = 0.$$

Note that in the presence of (L1) and (L2), (L3) is equivalent to the condition
$$\lim_{t \downarrow 0} P(|X(t)| > a) = 0.$$

The sample paths of a process are the maps $t \mapsto X(t)(\omega)$ from $\mathbb{R}^+$ to $\mathbb{R}^d$, for each $\omega \in \Omega$.

We are now going to explore the relationship between Lévy processes and infinite divisibility.

Theorem. If $X$ is a Lévy process, then $X(t)$ is infinitely divisible for each $t \ge 0$.
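Conditions (L1)-(L2) already suggest a simulation recipe on a time grid: increments over equal steps are i.i.d., so a path is a cumulative sum of increment samples. The sketch below is not from the lecture; the function names and the intensity 0.5 are illustrative choices, and the only assumption is that we can sample a single increment $X(dt)$.

```python
# A minimal sketch, assuming only an increment sampler for X(dt):
# (L1) gives X(0) = 0 and (L2) makes the grid values a cumulative sum of i.i.d. increments.
import numpy as np

def simulate_levy_on_grid(increment_sampler, T=1.0, n_steps=1000, rng=None):
    """increment_sampler(dt, size, rng) returns i.i.d. samples distributed as X(dt)."""
    rng = np.random.default_rng(rng)
    dt = T / n_steps
    increments = increment_sampler(dt, n_steps, rng)
    path = np.concatenate(([0.0], np.cumsum(increments)))   # X(0) = 0 by (L1)
    return np.linspace(0.0, T, n_steps + 1), path

# Two examples of increment samplers (illustrative parameters):
brownian_inc = lambda dt, size, rng: rng.normal(0.0, np.sqrt(dt), size)        # X(dt) ~ N(0, dt)
poisson_inc = lambda dt, size, rng: rng.poisson(0.5 * dt, size).astype(float)  # X(dt) ~ Poisson(0.5*dt)

t, brownian_path = simulate_levy_on_grid(brownian_inc, rng=1)
t, poisson_path = simulate_levy_on_grid(poisson_inc, T=20.0, rng=2)
```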

Proof. For each $n \in \mathbb{N}$, we can write
$$X(t) = Y_1^{(n)}(t) + \cdots + Y_n^{(n)}(t),$$
where each $Y_k^{(n)}(t) = X\!\left(\frac{kt}{n}\right) - X\!\left(\frac{(k-1)t}{n}\right)$. The $Y_k^{(n)}(t)$'s are i.i.d. by (L2). $\square$

From Lecture 1 we can write $\phi_{X(t)}(u) = e^{\eta(t,u)}$ for each $t \ge 0$, $u \in \mathbb{R}^d$, where each $\eta(t, \cdot)$ is a Lévy symbol.

Theorem. If $X$ is a Lévy process, then $\phi_{X(t)}(u) = e^{t\eta(u)}$ for each $u \in \mathbb{R}^d$, $t \ge 0$, where $\eta$ is the Lévy symbol of $X(1)$.

Proof. Suppose that $X$ is a Lévy process and for each $u \in \mathbb{R}^d$, $t \ge 0$ define $\phi_u(t) = \phi_{X(t)}(u)$; then by (L2) we have for all $s \ge 0$,
$$\phi_u(t+s) = E(e^{i(u,X(t+s))}) = E(e^{i(u,X(t+s)-X(s))}e^{i(u,X(s))}) = E(e^{i(u,X(t+s)-X(s))})\,E(e^{i(u,X(s))}) = \phi_u(t)\,\phi_u(s). \quad \text{(i)}$$

Now $\phi_u(0) = 1$ (ii) by (L1), and the map $t \mapsto \phi_u(t)$ is continuous. However, the unique continuous solution to (i) and (ii) is given by $\phi_u(t) = e^{t\alpha(u)}$, where $\alpha: \mathbb{R}^d \to \mathbb{C}$. Now by Theorem 1, $X(1)$ is infinitely divisible, hence $\alpha$ is a Lévy symbol and the result follows. $\square$

We now have the Lévy-Khintchine formula for a Lévy process $X = (X(t), t \ge 0)$:
$$E(e^{i(u,X(t))}) = \exp\left( t\left[ i(b,u) - \frac{1}{2}(u,Au) + \int_{\mathbb{R}^d - \{0\}} \left( e^{i(u,y)} - 1 - i(u,y)\mathbf{1}_{\hat{B}}(y) \right)\nu(dy) \right] \right), \tag{2.1}$$
for each $t \ge 0$, $u \in \mathbb{R}^d$, where $(b, A, \nu)$ are the characteristics of $X(1)$.

We will define the Lévy symbol and the characteristics of a Lévy process $X$ to be those of the random variable $X(1)$. We will sometimes write the former as $\eta_X$ when we want to emphasise that it belongs to the process $X$.
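For a triplet whose Lévy measure is finite, the integral in (2.1) can be evaluated by straightforward quadrature. The sketch below is illustrative only: the triplet values and the grid are my own choices, and the cutoff plays the role of the indicator $\mathbf{1}_{\hat{B}}$ in one dimension.

```python
# A hedged sketch of (2.1) in d = 1 with a finite Lévy measure
# (here nu = 2 * N(0,1), discretised on a grid so the integral becomes a sum).
import numpy as np

def levy_symbol_1d(u, b, A, jump_grid, nu_weights, cutoff=1.0):
    """eta(u) = i*b*u - 0.5*A*u^2 + sum_y (e^{iuy} - 1 - i*u*y*1_{|y|<=cutoff}) nu({y})."""
    y = jump_grid
    compensated = np.exp(1j * u * y) - 1.0 - 1j * u * y * (np.abs(y) <= cutoff)
    return 1j * b * u - 0.5 * A * u**2 + np.sum(compensated * nu_weights)

# Illustrative characteristics (b, A, nu):
lam = 2.0
y = np.linspace(-6.0, 6.0, 2001)
dy = y[1] - y[0]
nu_w = lam * np.exp(-y**2 / 2.0) / np.sqrt(2.0 * np.pi) * dy   # finite measure, total mass ~ lam

u, t = 1.3, 0.7
print(np.exp(t * levy_symbol_1d(u, b=0.1, A=1.0, jump_grid=y, nu_weights=nu_w)))
```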

Let $p_t$ be the law of $X(t)$, for each $t \ge 0$. By (L2), we have for all $s, t \ge 0$:
$$p_{t+s} = p_t * p_s.$$
By (L3), we have $p_t \xrightarrow{w} \delta_0$ as $t \to 0$, i.e. $\lim_{t \to 0}\int f(x)\,p_t(dx) = f(0)$.

$(p_t, t \ge 0)$ is a weakly continuous convolution semigroup of probability measures on $\mathbb{R}^d$. Conversely, given any such semigroup, we can always construct a Lévy process on path space via Kolmogorov's construction.

Informally, we have the following asymptotic relationship between the law of a Lévy process and its Lévy measure:
$$\nu = \lim_{t \downarrow 0}\frac{p_t}{t}.$$
More precisely,
$$\lim_{t \downarrow 0}\frac{1}{t}\int_{\mathbb{R}^d} f(x)\,p_t(dx) = \int_{\mathbb{R}^d} f(x)\,\nu(dx), \tag{2.2}$$
for bounded, continuous functions $f$ which vanish in some neighborhood of the origin. (A numerical illustration of (2.2) is sketched at the end of Example 1 below.)

Examples of Lévy Processes

Example 1 - Brownian Motion and Gaussian Processes

A (standard) Brownian motion in $\mathbb{R}^d$ is a Lévy process $B = (B(t), t \ge 0)$ for which

(B1) $B(t) \sim N(0, tI)$ for each $t \ge 0$,
(B2) $B$ has continuous sample paths.

It follows immediately from (B1) that if $B$ is a standard Brownian motion, then its characteristic function is given by
$$\phi_{B(t)}(u) = \exp\left\{-\frac{1}{2}t|u|^2\right\},$$
for each $u \in \mathbb{R}^d$, $t \ge 0$.

We introduce the marginal processes $B_i = (B_i(t), t \ge 0)$, where each $B_i(t)$ is the $i$th component of $B(t)$; then it is not difficult to verify that the $B_i$'s are mutually independent Brownian motions in $\mathbb{R}$. We will call these one-dimensional Brownian motions in the sequel.

Brownian motion has been the most intensively studied Lévy process. In the early years of the twentieth century, it was introduced as a model for the physical phenomenon of Brownian motion by Einstein and Smoluchowski, and as a description of the dynamical evolution of stock prices by Bachelier.
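Relation (2.2) above can be probed numerically for a process whose Lévy measure is known explicitly. The sketch below is my own illustration, with arbitrary parameter choices: a compound Poisson process with intensity $\lambda$ and $N(0,1)$ jumps, for which $\nu = \lambda\,N(0,1)$, and the test function $f = \mathbf{1}_{|x|>1}$.

```python
# A sketch of (2.2): for small t, (1/t) * E[f(X(t))] should be close to int f dnu.
# Here X is compound Poisson with intensity lam and N(0,1) jumps, f = 1_{|x| > 1}.
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(4)
lam, t, n = 3.0, 0.01, 200_000
counts = rng.poisson(lam * t, size=n)
X_t = np.array([rng.normal(size=k).sum() for k in counts])    # samples of X(t)
lhs = np.mean(np.abs(X_t) > 1.0) / t
rhs = lam * (1.0 - erf(1.0 / sqrt(2.0)))                      # lam * P(|N(0,1)| > 1)
print(lhs, rhs)   # should be close for small t (Monte Carlo error aside)
```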

The theory was placed on a rigorous mathematical basis by Norbert Wiener in the 1920s.

We could try to use the Kolmogorov existence theorem to construct one-dimensional Brownian motion from the following prescription on cylinder sets of the form
$$I_{t_1,\ldots,t_n}^H = \{\omega \in \Omega;\ \omega(t_1) \in [a_1,b_1],\ldots,\omega(t_n) \in [a_n,b_n]\},$$
where $H = [a_1,b_1] \times \cdots \times [a_n,b_n]$ and we have taken $\Omega$ to be the set of all mappings from $\mathbb{R}^+$ to $\mathbb{R}$:
$$P(I_{t_1,\ldots,t_n}^H) = \frac{1}{(2\pi)^{\frac{n}{2}}\sqrt{t_1(t_2-t_1)\cdots(t_n-t_{n-1})}}\int_H \exp\left(-\frac{1}{2}\left(\frac{x_1^2}{t_1} + \frac{(x_2-x_1)^2}{t_2-t_1} + \cdots + \frac{(x_n-x_{n-1})^2}{t_n-t_{n-1}}\right)\right)dx_1\cdots dx_n.$$
However, there is then no guarantee that the paths are continuous.

The literature contains a number of ingenious methods for constructing Brownian motion. One of the most delightful of these (originally due to Paley and Wiener) obtains it, in the case $d = 1$, as a random Fourier series for $0 \le t \le 1$:
$$B(t) = \frac{\sqrt{2}}{\pi}\sum_{n=0}^{\infty}\frac{\sin\left(\pi t\left(n + \frac{1}{2}\right)\right)}{n + \frac{1}{2}}\,\xi(n),$$
where $(\xi(n), n \in \mathbb{N} \cup \{0\})$ is a sequence of i.i.d. $N(0,1)$ random variables. (This series is implemented in the sketch after the figure below.)

We list a number of useful properties of Brownian motion in the case $d = 1$.

- Brownian motion is locally Hölder continuous with exponent $\alpha$ for every $0 < \alpha < \frac{1}{2}$, i.e. for every $T > 0$, $\omega \in \Omega$, there exists $K = K(T,\omega)$ such that
$$|B(t)(\omega) - B(s)(\omega)| \le K|t-s|^\alpha \quad \text{for all } 0 \le s < t \le T.$$
- The sample paths $t \mapsto B(t)(\omega)$ are almost surely nowhere differentiable.
- For any sequence $(t_n, n \in \mathbb{N})$ in $\mathbb{R}^+$ with $t_n \uparrow \infty$,
$$\liminf_{n \to \infty} B(t_n) = -\infty \ \text{a.s.}, \qquad \limsup_{n \to \infty} B(t_n) = +\infty \ \text{a.s.}$$
- The law of the iterated logarithm:
$$P\left(\limsup_{t \downarrow 0}\frac{B(t)}{\left(2t\log\left(\log\left(\frac{1}{t}\right)\right)\right)^{\frac{1}{2}}} = 1\right) = 1.$$

[Figure: simulation of standard Brownian motion.]
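The random Fourier series above is easy to implement by truncation. The sketch below is mine (the truncation level and the grid are arbitrary choices); it approximates a single Brownian path on $[0, 1]$.

```python
# A sketch of the Paley-Wiener series, truncated at n_terms terms.
import numpy as np

def paley_wiener_bm(n_terms=2000, n_points=513, rng=None):
    """Approximate B(t), 0 <= t <= 1, by the truncated random Fourier series."""
    rng = np.random.default_rng(rng)
    t = np.linspace(0.0, 1.0, n_points)
    n = np.arange(n_terms)
    xi = rng.normal(size=n_terms)                              # i.i.d. N(0,1) coefficients
    # B(t) ~ (sqrt(2)/pi) * sum_n sin(pi*t*(n + 1/2)) / (n + 1/2) * xi_n
    basis = np.sin(np.pi * np.outer(t, n + 0.5)) / (n + 0.5)
    return t, (np.sqrt(2.0) / np.pi) * (basis @ xi)

t, B = paley_wiener_bm(rng=42)
print(B[-1])   # a draw of B(1); over many runs its variance should be close to 1
```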

Let $A$ be a non-negative symmetric $d \times d$ matrix and let $\sigma$ be a square root of $A$, so that $\sigma$ is a $d \times m$ matrix for which $\sigma\sigma^T = A$. Now let $b \in \mathbb{R}^d$ and let $B$ be a Brownian motion in $\mathbb{R}^m$. We construct a process $C = (C(t), t \ge 0)$ in $\mathbb{R}^d$ by
$$C(t) = bt + \sigma B(t); \tag{2.3}$$
then $C$ is a Lévy process with each $C(t) \sim N(tb, tA)$. It is not difficult to verify that $C$ is also a Gaussian process, i.e. all its finite-dimensional distributions are Gaussian. It is sometimes called Brownian motion with drift. The Lévy symbol of $C$ is
$$\eta_C(u) = i(b,u) - \frac{1}{2}(u,Au).$$

In fact a Lévy process has continuous sample paths if and only if it is of the form (2.3).

Example 2 - The Poisson Process

The Poisson process of intensity $\lambda > 0$ is a Lévy process $N$ taking values in $\mathbb{N} \cup \{0\}$ wherein each $N(t) \sim \pi(\lambda t)$, so that
$$P(N(t) = n) = \frac{(\lambda t)^n}{n!}e^{-\lambda t},$$
for each $n = 0, 1, 2, \ldots$.

The Poisson process is widely used in applications and there is a wealth of literature concerning it and its generalisations.

We define non-negative random variables $(T_n, n \in \mathbb{N} \cup \{0\})$ (usually called waiting times) by $T_0 = 0$ and, for $n \in \mathbb{N}$,
$$T_n = \inf\{t \ge 0;\ N(t) = n\};$$
then it is well known that the $T_n$'s are gamma distributed. Moreover, the inter-arrival times $T_n - T_{n-1}$ for $n \in \mathbb{N}$ are i.i.d., each having an exponential distribution with mean $\frac{1}{\lambda}$.

The sample paths of $N$ are clearly piecewise constant, with jump discontinuities of size 1 at each of the random times $(T_n, n \in \mathbb{N})$.

[Figure: simulation of a Poisson process ($\lambda = 0.5$).]
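The waiting-time description gives an immediate simulation recipe: the inter-arrival times are i.i.d. Exponential($\lambda$). A minimal sketch follows; the function names are my own, and $\lambda = 0.5$ matches the simulated figure.

```python
# A sketch: simulate arrival times T_1 < T_2 < ... via i.i.d. Exp(lam) inter-arrivals,
# and recover N(t) by counting arrivals up to time t.
import numpy as np

def poisson_arrival_times(lam, horizon, rng=None):
    rng = np.random.default_rng(rng)
    arrivals, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / lam)      # inter-arrival ~ Exp(lam), mean 1/lam
        if t > horizon:
            return np.array(arrivals)
        arrivals.append(t)

def N(t, arrivals):
    return int(np.searchsorted(arrivals, t, side="right"))

arrivals = poisson_arrival_times(lam=0.5, horizon=20.0, rng=3)
print(len(arrivals), N(10.0, arrivals))      # E[N(20)] = 10, E[N(10)] = 5
```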

For later work it is useful to introduce the compensated Poisson process $\tilde{N} = (\tilde{N}(t), t \ge 0)$, where each $\tilde{N}(t) = N(t) - \lambda t$. Note that $E(\tilde{N}(t)) = 0$ and $E(\tilde{N}(t)^2) = \lambda t$ for each $t \ge 0$.

Example 3 - The Compound Poisson Process

Let $(Z(n), n \in \mathbb{N})$ be a sequence of i.i.d. random variables taking values in $\mathbb{R}^d$ with common law $\mu_Z$, and let $N$ be a Poisson process of intensity $\lambda$ which is independent of all the $Z(n)$'s. The compound Poisson process $Y$ is defined as follows:
$$Y(t) := \begin{cases} 0 & \text{if } N(t) = 0, \\ Z(1) + \cdots + Z(N(t)) & \text{if } N(t) > 0, \end{cases}$$
for each $t \ge 0$, so each $Y(t) \sim \pi(\lambda t, \mu_Z)$.

From the work of Lecture 1, $Y$ has Lévy symbol
$$\eta_Y(u) = \int\left(e^{i(u,y)} - 1\right)\lambda\mu_Z(dy).$$

Again the sample paths of $Y$ are piecewise constant with jump discontinuities at the random times $(T(n), n \in \mathbb{N})$; however, this time the size of the jumps is itself random, and the jump at $T(n)$ can be any value in the range of the random variable $Z(n)$.

[Figure: simulation of a compound Poisson process with $N(0,1)$ summands ($\lambda = 1$).]
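A compound Poisson path is determined by its jump times and jump sizes, so simulation is direct. The sketch below is my own; it uses the standard fact that, conditional on the number of jumps over a fixed horizon, the jump times are uniform order statistics, and the parameters match the figure.

```python
# A sketch: draw N(T) ~ Poisson(lam*T), place the jump times uniformly on [0, T]
# (a standard property of the Poisson process), and cumulate i.i.d. jump sizes.
import numpy as np

def compound_poisson(lam, horizon, jump_sampler, rng=None):
    """Return jump times T(1) < T(2) < ... and the values Y(T(1)), Y(T(2)), ... ."""
    rng = np.random.default_rng(rng)
    n_jumps = rng.poisson(lam * horizon)
    jump_times = np.sort(rng.uniform(0.0, horizon, n_jumps))
    return jump_times, np.cumsum(jump_sampler(n_jumps, rng))

times, values = compound_poisson(lam=1.0, horizon=10.0,
                                 jump_sampler=lambda n, rng: rng.normal(size=n), rng=7)
# Y is constant between jump times and jumps by Z(n) at T(n).
```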

Example 4 - Interlacing Processes

Let $C$ be a Gaussian Lévy process as in Example 1 and $Y$ be a compound Poisson process as in Example 3, which is independent of $C$. Define a new process $X$ by
$$X(t) = C(t) + Y(t),$$
for all $t \ge 0$; then it is not difficult to verify that $X$ is a Lévy process with Lévy symbol
$$\eta_X(u) = i(b,u) - \frac{1}{2}(u,Au) + \int\left(e^{i(u,y)} - 1\right)\lambda\mu_Z(dy).$$

Using the notation of Examples 2 and 3, we see that the paths of $X$ have jumps of random size occurring at random times:
$$\begin{aligned} X(t) &= C(t) && \text{for } 0 \le t < T_1,\\ &= C(T_1) + Z_1 && \text{when } t = T_1,\\ &= X(T_1) + C(t) - C(T_1) && \text{for } T_1 < t < T_2,\\ &= X(T_2-) + Z_2 && \text{when } t = T_2, \end{aligned}$$
and so on recursively. We call this procedure an interlacing, as a continuous path process is interlaced with random jumps. It seems reasonable that the most general Lévy process might arise as the limit of a sequence of such interlacings, and this can be established rigorously. (A simulation sketch follows Example 5 below.)

Example 5 - Stable Lévy Processes

A stable Lévy process is a Lévy process $X$ in which the Lévy symbol is that of a given stable law. So, in particular, each $X(t)$ is a stable random variable. For example, we have the rotationally invariant case, whose Lévy symbol is given by
$$\eta(u) = -\sigma^\alpha|u|^\alpha,$$
where $\alpha$ is the index of stability ($0 < \alpha \le 2$).

One of the reasons why these are important in applications is that they display self-similarity. In general, a stochastic process $Y = (Y(t), t \ge 0)$ is self-similar with Hurst index $H > 0$ if the two processes $(Y(at), t \ge 0)$ and $(a^H Y(t), t \ge 0)$ have the same finite-dimensional distributions for all $a \ge 0$. By examining characteristic functions, it is easily verified that a rotationally invariant stable Lévy process is self-similar with Hurst index $H = \frac{1}{\alpha}$, so that e.g. Brownian motion is self-similar with $H = \frac{1}{2}$. A Lévy process $X$ is self-similar if and only if each $X(t)$ is strictly stable.
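Referring back to Example 4, here is a hedged sketch of the interlacing on a grid (parameter values are illustrative only): between jump times the path follows the Gaussian Lévy process $C$, and at each $T_n$ it jumps by $Z_n$.

```python
# A sketch of the interlacing X(t) = C(t) + Y(t): Brownian motion with drift plus
# an independent compound Poisson part with N(0,1) jumps (illustrative parameters).
import numpy as np

rng = np.random.default_rng(11)
b, sigma, lam, horizon, dt = 0.2, 1.0, 0.8, 10.0, 0.001

t_grid = np.arange(0.0, horizon, dt)
C = b * t_grid + sigma * np.cumsum(rng.normal(0.0, np.sqrt(dt), t_grid.size))

n_jumps = rng.poisson(lam * horizon)
jump_times = np.sort(rng.uniform(0.0, horizon, n_jumps))
jump_sizes = rng.normal(size=n_jumps)                      # Z_n ~ N(0,1), say

# Y(t) = sum of the jumps that have occurred by time t; X = C + Y is the interlacing.
Y = np.array([jump_sizes[jump_times <= t].sum() for t in t_grid])
X = C + Y
```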

[Figure: simulation of the Cauchy process.]

Densities of Lévy Processes

Question: when does a Lévy process have a density $f_t$ for all $t > 0$, so that for all Borel sets $B$:
$$P(X_t \in B) = p_t(B) = \int_B f_t(x)\,dx\,?$$

In general, a random variable has a continuous density if its characteristic function is integrable, and in this case the density is the Fourier transform of the characteristic function. So for Lévy processes, if for all $t > 0$
$$\int_{\mathbb{R}^d}|e^{t\eta(u)}|\,du = \int_{\mathbb{R}^d}e^{t\Re(\eta(u))}\,du < \infty,$$
we then have
$$f_t(x) = (2\pi)^{-d}\int_{\mathbb{R}^d}e^{t\eta(u) - i(u,x)}\,du.$$

Every Lévy process with a non-degenerate Gaussian component has a density. In this case
$$\Re(\eta(u)) = -\frac{1}{2}(u,Au) + \int_{\mathbb{R}^d - \{0\}}(\cos(u,y) - 1)\,\nu(dy),$$
and so
$$\int_{\mathbb{R}^d}e^{t\Re(\eta(u))}\,du \le \int_{\mathbb{R}^d}e^{-\frac{t}{2}(u,Au)}\,du < \infty,$$
using $(u,Au) \ge \lambda|u|^2$, where $\lambda > 0$ is the smallest eigenvalue of $A$.

For examples where densities exist with $A = 0$, take $d = 1$: if $X$ is $\alpha$-stable, it has a density, since for all $1 \le \alpha \le 2$:
$$\int_{|u| \ge 1}e^{-t|u|^\alpha}\,du \le \int_{|u| \ge 1}e^{-t|u|}\,du < \infty,$$
and for $0 < \alpha < 1$:
$$\int_{|u| \ge 1}e^{-t|u|^\alpha}\,du = \frac{2}{\alpha}\int_1^\infty e^{-ty}y^{\frac{1}{\alpha}-1}\,dy < \infty.$$
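The inversion formula can be checked directly in a case where the density is known in closed form. The sketch below is my own, with an arbitrary truncation of the integral: it inverts the Cauchy symbol $\eta(u) = -|u|$ in $d = 1$ and compares with the exact density $t/(\pi(t^2 + x^2))$.

```python
# A sketch of the Fourier inversion f_t(x) = (2*pi)^{-1} int e^{t*eta(u) - i*u*x} du
# for the Cauchy process (eta(u) = -|u|), checked against the exact Cauchy density.
import numpy as np

def cauchy_density_by_inversion(x, t, u_max=200.0, n=200_001):
    u = np.linspace(-u_max, u_max, n)
    du = u[1] - u[0]
    integrand = np.exp(-t * np.abs(u) - 1j * u * x)
    return (integrand.sum() * du).real / (2.0 * np.pi)

x, t = 0.7, 1.0
print(cauchy_density_by_inversion(x, t), t / (np.pi * (t**2 + x**2)))
```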

In general, a sufficient condition for a density is that $\nu(\mathbb{R}^d) = \infty$ and $\tilde{\nu}^{*m}$ is absolutely continuous with respect to Lebesgue measure for some $m \in \mathbb{N}$, where
$$\tilde{\nu}(A) = \int_A (|x|^2 \wedge 1)\,\nu(dx).$$

A Lévy process has a Lévy density $g_\nu$ if its Lévy measure $\nu$ is absolutely continuous with respect to Lebesgue measure; $g_\nu$ is then defined to be the Radon-Nikodym derivative $\frac{d\nu}{dx}$.

A process may have a Lévy density but not have a density.

Example. Let $X$ be a compound Poisson process with each $X(t) = Y_1 + Y_2 + \cdots + Y_{N(t)}$, wherein each $Y_j$ has a density $f_Y$; then $g_\nu = \lambda f_Y$ is the Lévy density. But
$$P(X(t) = 0) \ge P(N(t) = 0) = e^{-\lambda t} > 0,$$
so $p_t$ has an atom at $\{0\}$.

We have
$$p_t(A) = e^{-\lambda t}\delta_0(A) + \int_A f_t^{ac}(x)\,dx,$$
where for $x \ne 0$
$$f_t^{ac}(x) = e^{-\lambda t}\sum_{n=1}^{\infty}\frac{(\lambda t)^n}{n!}f_Y^{*n}(x).$$
Here $f_t^{ac}(x)$ is the conditional density of $X(t)$ given that it jumps at least once between $0$ and $t$. In this case, (2.2) takes the precise form (for $x \ne 0$)
$$g_\nu(x) = \lim_{t \downarrow 0}\frac{f_t^{ac}(x)}{t}.$$
(A numerical sketch of this decomposition follows the remarks on subordinators below.)

Subordinators

A subordinator is a one-dimensional Lévy process which is increasing a.s. Such processes can be thought of as a random model of time evolution, since if $T = (T(t), t \ge 0)$ is a subordinator we have
$$T(t) \ge 0 \ \text{for each } t > 0 \ \text{a.s.}, \qquad T(t_1) \le T(t_2) \ \text{whenever } t_1 \le t_2 \ \text{a.s.}$$

Now since for $X(t) \sim N(0, At)$ we have $P(X(t) > 0) = P(X(t) < 0) = \frac{1}{2}$, it is clear that such a process cannot be a subordinator.
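Returning to the compound Poisson decomposition above: with $N(0,1)$ jumps the convolution powers are explicit ($f_Y^{*n}$ is the $N(0,n)$ density), so the absolutely continuous part can be summed and checked against Monte Carlo. A sketch with illustrative parameters and a truncated series:

```python
# A sketch of p_t = e^{-lam*t} delta_0 + f_t^ac dx for compound Poisson with N(0,1)
# jumps: f_Y^{*n} is the N(0, n) density, and the series is truncated at n_max terms.
import numpy as np
from math import factorial

def f_ac(x, t, lam, n_max=60):
    total = np.zeros_like(np.asarray(x, dtype=float))
    for n in range(1, n_max + 1):
        weight = np.exp(-lam * t) * (lam * t) ** n / factorial(n)
        total += weight * np.exp(-x**2 / (2.0 * n)) / np.sqrt(2.0 * np.pi * n)
    return total

lam, t = 1.0, 2.0
x = np.linspace(0.5, 3.0, 2501)
prob = np.sum(f_ac(x, t, lam)) * (x[1] - x[0])          # P(X(t) in [0.5, 3]); no atom there

rng = np.random.default_rng(5)
counts = rng.poisson(lam * t, size=200_000)
samples = np.array([rng.normal(size=k).sum() for k in counts])
print(prob, np.mean((samples >= 0.5) & (samples <= 3.0)))
```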

Theorem. If $T$ is a subordinator, then its Lévy symbol takes the form
$$\eta(u) = ibu + \int_{(0,\infty)}\left(e^{iuy} - 1\right)\lambda(dy), \tag{2.4}$$
where $b \ge 0$ and the Lévy measure $\lambda$ satisfies the additional requirements
$$\lambda(-\infty, 0) = 0 \quad \text{and} \quad \int_{(0,\infty)}(y \wedge 1)\,\lambda(dy) < \infty.$$
Conversely, any mapping from $\mathbb{R}$ to $\mathbb{C}$ of the form (2.4) is the Lévy symbol of a subordinator.

We call the pair $(b, \lambda)$ the characteristics of the subordinator $T$.

For each $t \ge 0$, the map $u \mapsto E(e^{iuT(t)})$ can be analytically continued to the region $\{iu, u > 0\}$, and we then obtain the following expression for the Laplace transform of the distribution:
$$E(e^{-uT(t)}) = e^{-t\psi(u)}, \quad \text{where} \quad \psi(u) = -\eta(iu) = bu + \int_{(0,\infty)}\left(1 - e^{-uy}\right)\lambda(dy) \tag{2.5}$$
for each $t, u \ge 0$. This is much more useful for both theoretical and practical application than the characteristic function. The function $\psi$ is usually called the Laplace exponent of the subordinator.

Examples of Subordinators

(1) The Poisson Case. Poisson processes are clearly subordinators. More generally, a compound Poisson process will be a subordinator if and only if the $Z(n)$'s are all $\mathbb{R}^+$-valued.

(2) $\alpha$-Stable Subordinators. Using straightforward calculus, we find that for $0 < \alpha < 1$ and $u \ge 0$,
$$u^\alpha = \frac{\alpha}{\Gamma(1-\alpha)}\int_0^\infty\left(1 - e^{-ux}\right)\frac{dx}{x^{1+\alpha}}.$$
Hence for each $0 < \alpha < 1$ there exists an $\alpha$-stable subordinator $T$ with Laplace exponent $\psi(u) = u^\alpha$, and the characteristics of $T$ are $(0, \lambda)$, where
$$\lambda(dx) = \frac{\alpha}{\Gamma(1-\alpha)}\frac{dx}{x^{1+\alpha}}.$$
Note that when we analytically continue this to obtain the Lévy symbol, we obtain the form given in Lecture 1 for stable laws with $\mu = 0$, $\beta = 1$ and $\sigma^\alpha = \cos\left(\frac{\alpha\pi}{2}\right)$.
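The integral identity behind the $\alpha$-stable subordinator is easy to verify numerically. A sketch follows; the quadrature grid and parameter values are my own choices.

```python
# A numerical check of  u^alpha = alpha / Gamma(1 - alpha) * int_0^inf (1 - e^{-ux}) x^{-1-alpha} dx,
# using a log-spaced trapezoidal rule to handle the integrable singularity at 0.
import numpy as np
from math import gamma

def stable_laplace_exponent(u, alpha, x_min=1e-12, x_max=1e6, n=400_000):
    x = np.geomspace(x_min, x_max, n)
    integrand = (1.0 - np.exp(-u * x)) * x ** (-1.0 - alpha)
    dx = np.diff(x)
    integral = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * dx)
    return alpha / gamma(1.0 - alpha) * integral

u, alpha = 2.5, 0.5
print(stable_laplace_exponent(u, alpha), u**alpha)
```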

(3) The Lévy Subordinator. The $\frac{1}{2}$-stable subordinator has a density given by the Lévy distribution (with $\mu = 0$ and $\sigma = \frac{t^2}{2}$),
$$f_{T(t)}(s) = \frac{t}{2\sqrt{\pi}}\,s^{-\frac{3}{2}}e^{-\frac{t^2}{4s}},$$
for $s \ge 0$. The Lévy subordinator has a nice probabilistic interpretation as a first hitting time for one-dimensional standard Brownian motion $(B(t), t \ge 0)$:
$$T(t) = \inf\left\{s > 0;\ B(s) = \frac{t}{\sqrt{2}}\right\}. \tag{2.6}$$

To show directly that for each $t \ge 0$,
$$E(e^{-uT(t)}) = \int_0^\infty e^{-us}f_{T(t)}(s)\,ds = e^{-tu^{\frac{1}{2}}},$$
write $g_t(u) = E(e^{-uT(t)})$. Differentiate with respect to $u$ and make the substitution $x = \frac{t^2}{4us}$ to obtain the differential equation
$$g_t'(u) = -\frac{t}{2\sqrt{u}}\,g_t(u).$$
Via the substitution $y = \frac{t}{2\sqrt{s}}$ we see that $g_t(0) = 1$, and the result follows. (A Monte Carlo check of this Laplace transform is sketched after Example (5) below.)

(4) Inverse Gaussian Subordinators. We generalise the Lévy subordinator by replacing Brownian motion by the Gaussian process $C = (C(t), t \ge 0)$, where each $C(t) = B(t) + \mu t$ with $\mu \in \mathbb{R}$. The inverse Gaussian subordinator is defined by
$$T(t) = \inf\{s > 0;\ C(s) = \delta t\},$$
where $\delta > 0$; it is so called since $t \mapsto T(t)$ is the generalised inverse of a Gaussian process.

Using martingale methods, we can show that for each $t, u > 0$,
$$E(e^{-uT(t)}) = e^{-t\delta\left(\sqrt{2u + \mu^2} - \mu\right)}. \tag{2.7}$$
In fact each $T(t)$ has a density:
$$f_{T(t)}(s) = \frac{\delta t}{\sqrt{2\pi}}\,e^{\delta t\mu}\,s^{-\frac{3}{2}}\exp\left\{-\frac{1}{2}\left(t^2\delta^2 s^{-1} + \mu^2 s\right)\right\}, \tag{2.8}$$
for each $s, t \ge 0$. In general, any random variable with density $f_{T(1)}$ is called an inverse Gaussian and denoted IG$(\delta, \mu)$.

(5) Gamma Subordinators. Let $(T(t), t \ge 0)$ be a gamma process with parameters $a, b > 0$, so that each $T(t)$ has density
$$f_{T(t)}(x) = \frac{b^{at}}{\Gamma(at)}\,x^{at-1}e^{-bx},$$
for $x \ge 0$; then it is easy to verify that for each $u \ge 0$,
$$\int_0^\infty e^{-ux}f_{T(t)}(x)\,dx = \left(1 + \frac{u}{b}\right)^{-at} = \exp\left(-ta\log\left(1 + \frac{u}{b}\right)\right).$$
From here it is a straightforward exercise in calculus to show that
$$\int_0^\infty e^{-ux}f_{T(t)}(x)\,dx = \exp\left[-t\int_0^\infty(1 - e^{-ux})\,ax^{-1}e^{-bx}\,dx\right].$$
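The hitting-time interpretation (2.6) makes the Lévy subordinator easy to sample: the first time a standard Brownian motion reaches level $a$ has the same law as $a^2/Z^2$ with $Z \sim N(0,1)$ (a standard consequence of the reflection principle), so $T(t)$ can be drawn as $t^2/(2Z^2)$. A Monte Carlo sketch checking the Laplace transform:

```python
# A sketch checking E[exp(-u T(t))] = exp(-t sqrt(u)) for the Lévy subordinator,
# sampling T(t) = t^2 / (2 Z^2) with Z ~ N(0,1) via the hitting-time representation (2.6).
import numpy as np

rng = np.random.default_rng(9)
t, u, n = 1.5, 2.0, 2_000_000
Z = rng.normal(size=n)
T = t**2 / (2.0 * Z**2)
print(np.mean(np.exp(-u * T)), np.exp(-t * np.sqrt(u)))
```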

[Figure: simulation of a gamma subordinator.]

From this we see that $(T(t), t \ge 0)$ is a subordinator with $b = 0$ and $\lambda(dx) = ax^{-1}e^{-bx}\,dx$. Moreover, $\psi(u) = a\log\left(1 + \frac{u}{b}\right)$ is the associated Bernstein function (see below).

Before we go further into the probabilistic properties of subordinators, we'll make a quick diversion into analysis.

Let $f \in C^\infty((0,\infty))$. We say it is completely monotone if $(-1)^n f^{(n)} \ge 0$ for all $n \in \mathbb{N}$, and a Bernstein function if $f \ge 0$ and $(-1)^n f^{(n)} \le 0$ for all $n \in \mathbb{N}$.

Theorem.
1. $f$ is a Bernstein function if and only if the mapping $x \mapsto e^{-tf(x)}$ is completely monotone for all $t \ge 0$.
2. $f$ is a Bernstein function if and only if it has the representation
$$f(x) = a + bx + \int_0^\infty(1 - e^{-yx})\,\lambda(dy)$$
for all $x > 0$, where $a, b \ge 0$ and $\int_0^\infty(y \wedge 1)\,\lambda(dy) < \infty$.
3. $g$ is completely monotone if and only if there exists a measure $\mu$ on $[0,\infty)$ for which
$$g(x) = \int_0^\infty e^{-xy}\,\mu(dy).$$
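The Frullani-type integral quoted for the gamma subordinator is another identity that can be checked by quadrature. A sketch, with an arbitrary grid and illustrative parameter choices:

```python
# A numerical check of  a * log(1 + u/b) = int_0^inf (1 - e^{-u x}) * a * x^{-1} * e^{-b x} dx,
# the Laplace exponent / Bernstein function of the gamma subordinator.
import numpy as np

def gamma_laplace_exponent(u, a, b, x_min=1e-12, x_max=200.0, n=400_000):
    x = np.geomspace(x_min, x_max, n)
    integrand = (1.0 - np.exp(-u * x)) * a * np.exp(-b * x) / x
    dx = np.diff(x)
    return np.sum(0.5 * (integrand[1:] + integrand[:-1]) * dx)

a, b, u = 1.2, 0.8, 3.0
print(gamma_laplace_exponent(u, a, b), a * np.log(1.0 + u / b))
```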

To interpret this theorem, first consider the case $a = 0$. In this case, if we compare the statement of Theorem 4 with equation (2.5), we see that there is a one-to-one correspondence between Bernstein functions for which $\lim_{x \to 0}f(x) = 0$ and Laplace exponents of subordinators.

The Laplace transforms of the laws of subordinators are always completely monotone functions, and a subclass of all possible measures $\mu$ appearing in Theorem 4(3) is given by all possible laws $p_{T(t)}$ associated to subordinators. A general Bernstein function with $a > 0$ can be given a probabilistic interpretation by means of killing.

One of the most important probabilistic applications of subordinators is to time change. Let $X$ be an arbitrary Lévy process and let $T$ be a subordinator defined on the same probability space as $X$, such that $X$ and $T$ are independent. We define a new stochastic process $Z = (Z(t), t \ge 0)$ by the prescription
$$Z(t) = X(T(t)),$$
for each $t \ge 0$, so that for each $\omega \in \Omega$, $Z(t)(\omega) = X(T(t)(\omega))(\omega)$. The key result is then the following.

Theorem. $Z$ is a Lévy process.

We compute the Lévy symbol of the subordinated process $Z$.

Theorem. $\eta_Z = -\psi_T(-\eta_X)$.

Proof. For each $u \in \mathbb{R}^d$, $t \ge 0$,
$$e^{t\eta_Z(u)} = E(e^{i(u,Z(t))}) = E(e^{i(u,X(T(t)))}) = \int_0^\infty E(e^{i(u,X(s))})\,p_{T(t)}(ds) = \int_0^\infty e^{s\eta_X(u)}\,p_{T(t)}(ds) = E(e^{-(-\eta_X(u))T(t)}) = e^{-t\psi_T(-\eta_X(u))}. \ \square$$

Example: From Brownian Motion to $2\alpha$-Stable Processes

Let $T$ be an $\alpha$-stable subordinator (with $0 < \alpha < 1$) and $X$ be a $d$-dimensional Brownian motion with covariance $A = 2I$, which is independent of $T$. Then for each $s \ge 0$, $u \in \mathbb{R}^d$, $\psi_T(s) = s^\alpha$ and $\eta_X(u) = -|u|^2$, and hence $\eta_Z(u) = -|u|^{2\alpha}$, i.e. $Z$ is a rotationally invariant $2\alpha$-stable process.

In particular, if $d = 1$ and $T$ is the Lévy subordinator, then $Z$ is the Cauchy process, so each $Z(t)$ has a symmetric Cauchy distribution with parameters $\mu = 0$ and $\sigma = 1$. It is interesting to observe from (2.6) that $Z$ is constructed from two independent standard Brownian motions.
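A sketch of this last example at a fixed time (my own illustration): sampling $T(1) = 1/(2Z_0^2)$ as in (2.6), and then $X(T(1))$ given $T(1)$ as $N(0, 2T(1))$, should produce standard Cauchy draws, which can be checked against the Cauchy quantile function $\tan(\pi(p - \frac{1}{2}))$.

```python
# A sketch of Z(1) = X(T(1)) with T the Lévy subordinator and X Brownian motion with
# covariance A = 2 (d = 1): the samples should be standard Cauchy.
import numpy as np

rng = np.random.default_rng(21)
n = 1_000_000
Z0 = rng.normal(size=n)
T1 = 1.0 / (2.0 * Z0**2)                          # T(1) via the hitting-time representation
Z1 = rng.normal(size=n) * np.sqrt(2.0 * T1)       # X(T(1)) given T(1) is N(0, 2*T(1))

for p in (0.75, 0.9, 0.95):
    print(p, np.quantile(Z1, p), np.tan(np.pi * (p - 0.5)))   # empirical vs Cauchy quantile
```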

Examples of subordinated processes have recently found useful applications in mathematical finance. We briefly mention two interesting cases.

(i) The Variance Gamma Process

In this case $Z(t) = B(T(t))$ for each $t \ge 0$, where $B$ is a standard Brownian motion and $T$ is an independent gamma subordinator. The name derives from the fact that, in a formal sense, each $Z(t)$ arises by replacing the variance of a normal random variable by a gamma random variable. Using Theorem 6, a simple calculation yields
$$\Phi_{Z(t)}(u) = \left(1 + \frac{u^2}{2b}\right)^{-at},$$
for each $t \ge 0$, $u \in \mathbb{R}$, where $a$ and $b$ are the usual parameters which determine the gamma process.

It is an easy exercise in manipulating characteristic functions to compute the alternative representation
$$Z(t) = G(t) - L(t),$$
where $G$ and $L$ are independent gamma subordinators, each with parameters $\sqrt{2b}$ and $a$. This yields a nice financial representation of $Z$ as a difference of independent gains and losses. From this representation, we can compute that $Z$ has a Lévy density
$$g_\nu(x) = \frac{a}{|x|}\left(e^{\sqrt{2b}\,x}\mathbf{1}_{(-\infty,0)}(x) + e^{-\sqrt{2b}\,x}\mathbf{1}_{(0,\infty)}(x)\right).$$

The CGMY processes are a generalisation of the variance gamma processes due to Carr, Geman, Madan and Yor. They are characterised by their Lévy density
$$g_\nu(x) = \frac{a}{|x|^{1+\alpha}}\left(e^{b_1 x}\mathbf{1}_{(-\infty,0)}(x) + e^{-b_2 x}\mathbf{1}_{(0,\infty)}(x)\right),$$
where $a > 0$, $0 \le \alpha < 2$ and $b_1, b_2 \ge 0$. We obtain stable Lévy processes when $b_1 = b_2 = 0$. They can also be obtained by subordinating Brownian motion with drift. The CGMY processes are a subclass of the tempered stable processes. Note how the exponential dampens the effect of large jumps.

(ii) The Normal Inverse Gaussian Process

In this case $Z(t) = C(T(t)) + \mu t$ for each $t \ge 0$, where each $C(t) = B(t) + \beta t$, with $\beta \in \mathbb{R}$. Here $T$ is an inverse Gaussian subordinator, which is independent of $B$, and in which we write the parameter $\gamma = \sqrt{\alpha^2 - \beta^2}$, where $\alpha \in \mathbb{R}$ with $\alpha^2 \ge \beta^2$.

$Z$ depends on four parameters and has characteristic function
$$\Phi_{Z(t)}(\alpha, \beta, \delta, \mu)(u) = \exp\left\{\delta t\left(\sqrt{\alpha^2 - \beta^2} - \sqrt{\alpha^2 - (\beta + iu)^2}\right) + i\mu t u\right\}$$
for each $u \in \mathbb{R}$, $t \ge 0$. Here $\delta > 0$ is as in (2.7).

Each $Z(t)$ has a density given by
$$f_{Z(t)}(x) = C(\alpha, \beta, \delta, \mu; t)\,q\left(\frac{x - \mu t}{\delta t}\right)^{-1}K_1\left(\delta t\alpha\,q\left(\frac{x - \mu t}{\delta t}\right)\right)e^{\beta x},$$
for each $x \in \mathbb{R}$, where $q(x) = \sqrt{1 + x^2}$, $C(\alpha, \beta, \delta, \mu; t) = \pi^{-1}\alpha e^{\delta t\sqrt{\alpha^2 - \beta^2} - \beta\mu t}$, and $K_1$ is a Bessel function of the third kind.
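A sketch comparing the two descriptions of the variance gamma process at a fixed time against the characteristic function above (parameters are illustrative; note that numpy's gamma sampler is parameterised by shape and scale, so rate $b$ corresponds to scale $1/b$):

```python
# A sketch: Z(t) = B(T(t)) with T a gamma(a, b) subordinator, versus G(t) - L(t) with
# independent gamma(a, sqrt(2b)) subordinators, checked against (1 + u^2/(2b))^{-a t}.
import numpy as np

rng = np.random.default_rng(33)
a, b, t, u, n = 2.0, 1.5, 1.0, 0.9, 1_000_000

T = rng.gamma(shape=a * t, scale=1.0 / b, size=n)        # gamma subordinator at time t
Z_sub = rng.normal(size=n) * np.sqrt(T)                  # B(T(t)) given T(t) is N(0, T(t))

beta = np.sqrt(2.0 * b)
Z_diff = rng.gamma(a * t, 1.0 / beta, size=n) - rng.gamma(a * t, 1.0 / beta, size=n)

target = (1.0 + u**2 / (2.0 * b)) ** (-a * t)
print(np.mean(np.exp(1j * u * Z_sub)).real,
      np.mean(np.exp(1j * u * Z_diff)).real,
      target)     # all three should agree up to Monte Carlo error
```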