Two viewpoints on measure-valued processes

Olivier Hénard, Université Paris-Est, Cermics
Contents
1. The classical framework: from no particle to one particle
2. The lookdown framework: many particles
The basic ingredients

Superprocesses form a specific class of measure-valued processes, whose definition requires two basic ingredients:
- a Markov process $X$ valued in some space $E$, with law $P_x$ when started at $x \in E$;
- a branching mechanism $\psi$:
$$\psi(x, \lambda) = \alpha(x)\lambda + \beta(x)\lambda^2 + \int_{(0,\infty)} \big(e^{-\lambda l} - 1 + \lambda l\big)\,\pi(x, dl), \qquad \lambda \geq 0,$$
with $\alpha \in \mathcal{B}$, $\beta \in \mathcal{B}_+$, and $\pi$ a kernel from $E$ to $(0,\infty)$ such that $\sup_x \int_{(0,\infty)} (l \wedge l^2)\,\pi(x, dl) < \infty$.
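As a quick numerical sanity check on this formula (our own illustration, not part of the talk), the following Python sketch evaluates $\psi$ by quadrature for the hypothetical choice $\alpha = \beta = 0$ and $\pi(dl) = e^{-l}\,dl$, for which the integral has the closed form $\lambda^2/(1+\lambda)$:

```python
import math

def psi(lam, alpha=0.0, beta=0.0, n=200_000, lmax=50.0):
    """psi(lam) = alpha*lam + beta*lam**2
    + integral over (0, inf) of (exp(-lam*l) - 1 + lam*l) pi(dl),
    with pi(dl) = exp(-l) dl, approximated by the midpoint rule on (0, lmax)."""
    dl = lmax / n
    integral = 0.0
    for i in range(n):
        l = (i + 0.5) * dl
        integral += (math.exp(-lam * l) - 1.0 + lam * l) * math.exp(-l) * dl
    return alpha * lam + beta * lam**2 + integral

# closed form of the integral for this choice of pi: lam**2 / (1 + lam)
for lam in (0.5, 1.0, 3.0):
    assert abs(psi(lam) - lam**2 / (1.0 + lam)) < 1e-4
```

The quadratic term $\beta\lambda^2$ corresponds to the continuous (Feller) part of the branching, the integral term to the jumps.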
The inhomogeneous superprocesses

Definition ((P, ψ)-superprocesses, Dynkin). There exists a unique $M_f(E)$-valued Markov process $Z = (Z_t, t \geq 0)$ such that, for all $f \in C_b^+(E)$,
$$E_\mu\big(e^{-Z_t(f)}\big) = e^{-\mu(u_t)}, \qquad \mu \in M_f(E),\ t \geq 0,$$
where $u_t$ is the unique positive solution to the mild equation
$$u_t(x) + E_x\Big(\int_0^t ds\, \psi\big(X_s, u_{t-s}(X_s)\big)\Big) = E_x\big(f(X_t)\big).$$
The superprocess started from one individual

There exists a sigma-finite measure $N_x$, called the canonical measure, such that if $\sum_{i \in I} \delta_{Z^i}$ is a Poisson point measure with intensity $N_x$, then $Z$ under $P_{\delta_x}$ equals $\sum_{i \in I} Z^i$ in distribution. The exponential formula for Poisson point measures writes down:
$$E_{\delta_x}\big(e^{-Z_t(f)}\big) = e^{-N_x(1 - e^{-Z_t(f)})}.$$
Recalling the definition of the superprocess: $u_t^{(f)}(x) = N_x\big(1 - e^{-Z_t(f)}\big)$.
Let $H_{\max} = \inf\{t \geq 0,\ Z_t = 0\} \in [0, \infty]$ denote the height of the process. Taking $f = l\mathbf{1}$ and letting $l \to \infty$ allows us to see that
$$v_t(x) := N_x(H_{\max} > t) = \lim_{l \to \infty} u_t^{(l\mathbf{1})}(x).$$
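The exponential formula for Poisson point measures, $E\big(e^{-\sum_i f(x_i)}\big) = \exp\big(-\int (1 - e^{-f})\,d\nu\big)$, can be checked by a small Monte Carlo sketch (all numerical choices here are ours): intensity $\nu = 3 \times$ Lebesgue on $[0,1]$ and $f(x) = x$, so the exact answer is $\exp(-3/e)$:

```python
import math
import random

random.seed(0)

def poisson(rate):
    """Knuth's method for a Poisson(rate) variate."""
    L, k, p = math.exp(-rate), 0, 1.0
    while p > L:
        k += 1
        p *= random.random()
    return k - 1

rate = 3.0       # intensity: rate * Lebesgue on [0, 1], test function f(x) = x
trials = 100_000
mc = sum(
    math.exp(-sum(random.random() for _ in range(poisson(rate))))
    for _ in range(trials)
) / trials
# exact value: exp(-rate * int_0^1 (1 - e^{-x}) dx) = exp(-rate / e)
exact = math.exp(-rate * math.exp(-1.0))
assert abs(mc - exact) < 0.01
```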
We will assume almost sure extinction, meaning: $P_\mu(H_{\max} < \infty) = 1$ for every $\mu \in M_f(E)$, where $H_{\max} = \inf\{t \geq 0,\ Z_t = 0\}$ is the height of the process.

Questions
- Disintegration of the law of the superprocess with respect to $H_{\max}$: that means, find a family of probability measures $(N_x^{(h)}, 0 < h < \infty)$ such that
$$N_x(A) = \int_{(0,\infty)} N_x(H_{\max} \in dh)\, N_x^{(h)}(A).$$
- Weak convergence of this family $(N_x^{(h)}, 0 < h < \infty)$ as $h \to \infty$.
Spinal decomposition under $N_x^{(h)}$

(i) Sample $(X_s, 0 \leq s < h)$ with law $P_x^{(h)}$:
$$\frac{dP_x^{(h)}}{dP_x}\bigg|_{D_t} = \frac{\partial_h v_{h-t}(X_t)}{\partial_h v_h(x)}\, e^{-\int_0^t ds\, \partial_\lambda \psi(X_s,\, v_{h-s}(X_s))}, \qquad 0 \leq t < h.$$
(ii) Conditionally on $(X_s, 0 \leq s < h)$, define
$$\mu_s^{(h)}(dZ) = 2\beta(X_s)\, N_{X_s}(dZ) + \int_{(0,\infty)} e^{-l\, v_{h-s}(X_s)}\, l\, \pi(X_s, dl)\, P_{l\delta_{X_s}}(dZ).$$
(iii) Conditionally on $(\mu_s^{(h)}, 0 \leq s < h)$, let $\sum_{i \in I} \delta_{(s_i, Z^i)}(ds, dZ)$ denote a Poisson point measure with intensity
$$\mathbf{1}_{\{0 \leq s < h,\ H_{\max}(Z) + s \leq h\}}\, ds\, \mu_s^{(h)}(dZ).$$

Theorem (Williams decomposition). $\big(\sum_{i \in I} Z^i_{(t - s_i)_+},\ 0 \leq t < h\big)$ has law $N_x^{(h)}$.
Weak convergence of this spinal decomposition

(i) Sample $(X_s, s \geq 0)$ with law $P_x^{(\infty)}$:
$$\frac{dP_x^{(\infty)}}{dP_x}\bigg|_{D_t} = \frac{\phi_0(X_t)}{\phi_0(x)}\, e^{-\int_0^t ds\, (\alpha(X_s) - \lambda_0)},$$
with $\lambda_0 = \sup\{\lambda \in \mathbb{R},\ \exists f,\ (L - \alpha)f = \lambda f\}$ the generalized eigenvalue and $\phi_0$ the associated generalized eigenvector.
(ii) Conditionally on $(X_s, s \geq 0)$, define
$$\mu_s^{(\infty)}(dZ) = 2\beta(X_s)\, N_{X_s}(dZ) + \int_{(0,\infty)} l\, \pi(X_s, dl)\, P_{l\delta_{X_s}}(dZ).$$
(iii) Conditionally on $(\mu_s^{(\infty)}, s \geq 0)$, let $\sum_{i \in I} \delta_{(s_i, Z^i)}(ds, dZ)$ denote a Poisson point measure with intensity $ds\, \mu_s^{(\infty)}(dZ)$.

Theorem. $(N_x^{(h)}, 0 < h < \infty)$ converges weakly as $h \to \infty$, towards a limit $N_x^{(\infty)}$ say, and $\big(\sum_{i \in I} Z^i_{(t - s_i)_+},\ t \geq 0\big)$ has law $N_x^{(\infty)}$.
Hint for the proof, $h$ fixed

First prove an absolute continuity property: for every $G_t = \sigma(Z_s, s \leq t)$-measurable functional $F$ and $0 \leq t < h$,
$$N_x^{(h)}\big(F(Z)\big) = N_x\bigg( \frac{Z_t(\partial_h v_{h-t})}{\partial_h v_h(x)}\, e^{-(Z_t(v_{h-t}) - v_h(x))}\, F(Z) \bigg).$$
Using the definition of the superprocess and the Feynman–Kac formula, we also have, for every positive measurable function $f$ and $g$ on $E$,
$$N_x\big(Z_t(f)\, e^{-Z_t(g)}\big) = E_x\Big( e^{-\int_0^t ds\, \partial_\lambda \psi(X_s,\, u^{(g)}_{t-s}(X_s))}\, f(X_t) \Big),$$
since the LHS and the RHS satisfy the same mild equation.
Set $f \leftarrow \frac{\partial_h v_{h-t}}{\partial_h v_h(x)}\, e^{v_h(x)}$ and $g \leftarrow g + v_{h-t}$:
1. the LHS is the first formula for $F(Z) = e^{-Z_t(g)}$;
2. the integrand on the RHS is the product
$$\frac{dP_x^{(h)}}{dP_x}\bigg|_{D_t} \cdot e^{-\int_0^t ds\, \mu_s^{(h)}\big((1 - e^{-Z_{t-s}(g)})\, \mathbf{1}_{\{H_{\max} \leq h - s\}}\big)}.$$
This gives the Williams decomposition.
Hint for the proof, $h \to \infty$

To pass to the limit as $h \to \infty$, we work directly on the Williams decomposition:
- no problem for the laws of the subtrees;
- weak convergence of the law of the spine amounts to checking that
$$\frac{\partial_h v_{h-t}(\cdot)}{\partial_h v_h(x)}\, \frac{\phi_0(x)}{\phi_0(\cdot)}\, e^{\lambda_0 t} \to 1 \quad \text{in } L^1(P_x) \text{ as } h \to \infty.$$

Interpretation: we get a recursive decomposition of the superprocess as a trunk on which copies of the same superprocess are grafted.
From one particle to many

So far, we have unearthed one particle, namely the most persistent. What about the second most persistent, and the third, and the fourth...?

Idea: conditionally on the total mass process, in the homogeneous case, these particles form an exchangeable particle system which, weighted by the total mass process, allows us to recover the whole superprocess.
Contents
1. The classical framework: from no particle to one particle
2. The lookdown framework: many particles
Conversely, let us assume that we are given:
- some process $Y$ (in the rôle of the total mass $Z(\mathbf{1})$);
- some increasing process $U$ (in the rôle of the quadratic variation $[Z(\mathbf{1})]$ of $Z(\mathbf{1})$).

Question: how to define the genealogy of a population process with size $Y$ and resampling events directed by $U$?
$U$ being increasing, it decomposes as
$$U_t = U_t^k + \sum_{s \leq t} \Delta U_s,$$
where $U^k$ is continuous and $\Delta U_s = U_s - U_{s-}$. We define
$$N^\rho = \sum_{t \geq 0,\ \Delta U_t \neq 0} \delta_{(t, \Pi_t)},$$
where, for each $t > 0$ with $\Delta U_t \neq 0$, the exchangeable partition $\Pi_t$ of $\mathbb{N}$ has a unique non-trivial block, with asymptotic frequency $\Delta U_t / Y_t$. Also, $N^k$ is a Poisson point measure with intensity $(dU^k(t)/Y^2(t)) \otimes \mu_k$, where the Kingman measure $\mu_k$ assigns mass one to each partition with a unique non-trivial block consisting of two different integers. We let
$$N = N^\rho + N^k.$$
Let $R_0$ be some random probability measure on $E$, independent of $N$. Conditionally on $(R_0, N)$, we can define a particle system $X = (X_t(n), t \geq 0, n \in \mathbb{N})$:
- the initial state $(X_0(n), n \in \mathbb{N})$ is an exchangeable sequence valued in $E$ with de Finetti measure $R_0$;
- between the reproduction events, $X_t(n)$ mutates according to a Markov process in $E$ with law $P_x$ when started at $x$, independently for each $n$;
- each atom $(t, \Pi)$ of $N^k + N^\rho$ is a reproduction event: let $j_1 < j_2 < \dots$ be the elements of the unique block of the partition $\Pi$ which is not a singleton. Then put
$$X_t(j_l) = X_{t-}(j_1) \text{ for each integer } l, \qquad X_t(n) = X_{t-}\big(n - \#\{l > 1,\ j_l \leq n\}\big) \text{ if } n \notin \{j_l,\ l \in \mathbb{N}\}.$$
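The relabeling rule above can be sketched on a finite truncation of the levels (illustrative Python, with hypothetical states and a hypothetical participating block):

```python
def lookdown_step(x, block):
    """Apply one reproduction event to the current states x = [x(1), x(2), ...]
    (level n stored at index n-1).  block = sorted levels j_1 < j_2 < ... that
    participate: each of them gets the state of level j_1, and every other
    level n takes the former state of level n - #{l > 1 : j_l <= n}."""
    parent = x[block[0] - 1]
    out = []
    for n in range(1, len(x) + 1):
        if n in block:
            out.append(parent)
        else:
            shift = sum(1 for j in block[1:] if j <= n)
            out.append(x[n - shift - 1])
    return out

# Example on 5 levels: levels 1 and 3 participate.
assert lookdown_step(["a", "b", "c", "d", "e"], [1, 3]) == ["a", "b", "a", "c", "d"]
```

Here levels 1 and 3 look down to level 1, while the former states of levels 3 and 4 are pushed up one level (the state of level 5 falls beyond the truncation and is dropped).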
Theorem (Donnelly and Kurtz, '99). The sequence $(X_t(n), n \in \mathbb{N})$ is then exchangeable. We denote by $R_t$ its de Finetti measure:
$$R_t := \lim_{n \to \infty} \frac{1}{n} \sum_{i=1}^n \delta_{X_t(i)},$$
and define
$$(Z_t, t \geq 0) := (Y_t R_t, t \geq 0).$$
Notice the following consequence of the de Finetti theorem: conditionally on $R_t$, the sequence $(X_t(n), n \in \mathbb{N})$ is i.i.d. according to $R_t$.
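A minimal simulation of this consequence of de Finetti's theorem, in a toy two-type setting of our own choosing: draw a directing frequency $p$, sample the particles i.i.d. given $p$, and recover $p$ from the empirical measure:

```python
import random

random.seed(1)

# de Finetti in the two-type case: draw a directing frequency p for type 1,
# then sample the particles i.i.d. given p; the empirical measure of the
# first n particles recovers p in the limit n -> infinity.
p = random.random()
n = 200_000
freq = sum(1 for _ in range(n) if random.random() < p) / n
assert abs(freq - p) < 0.01
```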
Two useful examples:
- taking $Y$ to be a CB process and $U_t = [Y]_t$ yields the (homogeneous) superprocesses;
- taking $Y = 1$ and $U$ a subordinator (with jumps not bigger than 1) yields a generalized Fleming–Viot process.

We introduce the filtrations:
- $F_t = \sigma\big((Y_s, s \leq t), (X_s, s \leq t)\big)$ knows everything about the genealogy;
- $G_t = \sigma(Z_s, s \leq t)$ just knows the resulting type distribution of the population;
- $D_t$ will denote the filtration induced by the spatial motion with law $P$.
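For the first example, here is a hedged simulation sketch (our own illustration, assuming the quadratic branching mechanism $\psi(\lambda) = \beta\lambda^2$): the marginal $Y_t$ of this CB process is a Poisson$\big(Y_0/(\beta t)\big)$ sum of Exponential(mean $\beta t$) variables, which lets us check the martingale property $E(Y_t) = Y_0$ and the extinction probability $e^{-Y_0/(\beta t)}$:

```python
import math
import random

random.seed(2)

def poisson(rate):
    """Knuth's method for a Poisson(rate) variate."""
    L, k, p = math.exp(-rate), 0, 1.0
    while p > L:
        k += 1
        p *= random.random()
    return k - 1

def cb_quadratic(y0=1.0, beta=1.0, t=1.0):
    """Exact draw of Y_t for the CB with psi(lambda) = beta*lambda**2:
    a Poisson(y0/(beta*t)) sum of Exponential(mean beta*t) variables."""
    n = poisson(y0 / (beta * t))
    return sum(random.expovariate(1.0 / (beta * t)) for _ in range(n))

samples = [cb_quadratic() for _ in range(20_000)]
mean = sum(samples) / len(samples)
extinct = sum(1 for y in samples if y == 0.0) / len(samples)
assert abs(mean - 1.0) < 0.05                  # E(Y_t) = Y_0: a martingale
assert abs(extinct - math.exp(-1.0)) < 0.02    # P(Y_t = 0) = exp(-Y_0/(beta*t))
```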
Definition of the additive h-transform

Assume there is some deterministic $m = m(t)$ such that $(Y_t / m(t), t \geq 0)$ and $(m(t)\, h_t(X_t(1)), t \geq 0)$ are two martingales in their own filtrations. Then we have that $(Z_t(h_t), t \geq 0)$ is a $G_t$-martingale. We may now define the additive h-transform $Z^H$ of $Z$:
$$A \in G_t, \qquad P\big(Z^H \in A\big) = E\Big( \frac{Z_t(h_t)}{Z_0(h_0)}\, \mathbf{1}_A(Z) \Big).$$
A particle system for the h-transform

Assume $m = 1$. We define the law of a new process $(Y^h, U^h, X^h)$ by the following requirements:
(i) The initial condition (if deterministic) is unchanged.
(ii) $(Y^h, U^h)$ is distributed according to:
$$A \in G_t, \qquad P\big((Y^h, U^h) \in A\big) = E\Big( \frac{Y_t}{Y_0}\, \mathbf{1}_A(Y, U) \Big).$$
(iii) $(X_t^h(n), n \in \mathbb{N}, t \geq 0)$ is defined as $(X_t(n), n \in \mathbb{N}, t \geq 0)$, except that $(X_t^h(1), t \geq 0)$ now follows:
$$A \in D_t, \qquad P\big(X^h(1) \in A\big) = E\Big( \frac{h_t(X_t(1))}{h_0(X_0(1))}\, \mathbf{1}_A(X(1)) \Big).$$

Interpretation: the effect of the h-transform factorizes: the first particle is responsible for the change of the spatial motion, the total mass process for the change of the mass.
Noting that $E\big(\delta_{X_t(1)} \,\big|\, G_t\big) = R_t$, we get by projection on the smaller filtration $G_t$:

Theorem. We have that:
(a) the limit of the empirical measure
$$R_t^h := \lim_{n \to \infty} \frac{1}{n} \sum_{1 \leq i \leq n} \delta_{X_t^h(i)}$$
exists a.s.;
(b) the process $(Y_t^h R_t^h, t \geq 0)$ is distributed as $(Z_t^H, t \geq 0)$.
The first particle is precursory

Corollary. Conditionally on $\{R_t^h = \mu\}$, $X_t^h(1)$ is distributed according to
$$P\big(X_t^h(1) \in dx\big) = \frac{h_t(x)}{\mu(h_t)}\, \mu(dx),$$
and $(X_t^h(n), n \geq 2)$ is an independent exchangeable random sequence with de Finetti measure $\mu$.
Application: the branching case

Question: is there an individual responsible for the additional mass?

Assume $Y$ is a pure-jump CB($\psi$) with $\psi(\lambda) = \alpha\lambda + \int_{(0,\infty)} \big(e^{-\lambda l} - 1 + \lambda l\big)\,\pi(dl)$ and $U = [Y]$.
- Jumps of $Y$ form a Poisson point measure with intensity $Y_{t-}\,\pi(dl)$ at time $t$.
- $Y^h$ is a CBI($\psi,\ \psi' - \psi'(0)$). Jumps of $Y^h$ form a Poisson point measure with intensity $Y_{t-}^h\,\pi(dl) + l\,\pi(dl)$ at time $t$.
- A jump $l$ of $Y^h$ at time $t$ is associated with a reproduction of the first-level particle with probability $l / Y^h(t) = l / (Y^h(t-) + l)$.
- Reproduction rate of the first-level particle: $\frac{l}{Y_{t-}^h + l}\big(Y_{t-}^h\,\pi(dl) + l\,\pi(dl)\big) = l\,\pi(dl)$.
- Reproduction rate of the other particles: $\big(1 - \frac{l}{Y_{t-}^h + l}\big)\big(Y_{t-}^h\,\pi(dl) + l\,\pi(dl)\big) = Y_{t-}^h\,\pi(dl)$.

Corollary. The additive h-transform of a homogeneous superprocess is just the usual process, plus some immigration along a spine moving as an h-transform.
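The two rate computations above are elementary identities; a quick numerical check (the values of $y = Y^h_{t-}$ and $l$ below are arbitrary, and rates are expressed per unit mass of $\pi(dl)$):

```python
# Per unit mass of pi(dl): a jump of size l while the h-transformed mass is
# y = Y^h_{t-} occurs with total intensity factor (y + l); it is attributed
# to the first-level particle with probability l / (y + l).
def split_rates(y, l):
    total = y + l                       # factor of Y^h pi(dl) + l pi(dl)
    first = total * (l / (y + l))       # attributed to the first-level particle
    others = total * (1.0 - l / (y + l))
    return first, others

for y in (0.5, 1.0, 10.0):
    for l in (0.1, 2.0):
        first, others = split_rates(y, l)
        assert abs(first - l) < 1e-12    # rate l * pi(dl)
        assert abs(others - y) < 1e-12   # rate Y^h * pi(dl)
```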
Application: the GFV process

The generalized Fleming–Viot process is the process $Z$ ($= R$) obtained when setting: $Y = 1$, and $U$ a subordinator with jumps no greater than 1. In that case, we denote by $\phi$ the Laplace exponent of the subordinator $U$:
$$\phi(\lambda) = c\lambda + \int_0^1 (1 - e^{-\lambda x})\,\pi(dx),$$
where $c \geq 0$ and $\int_0^1 x\,\pi(dx) < \infty$.
The genealogy of the lookdown particle system is by construction described by the $\Lambda$-coalescent of Pitman. The finite measure $\Lambda$ may be recovered from $\phi$:
$$\int_0^1 g(x)\,\Lambda(dx) = c\, g(0) + \int_0^1 x^2 g(x)\,\pi(dx).$$

Corollary. The additive h-transform of a GFV process is just the usual GFV process with the first-level particle moving as an h-transform.
The GFV process in finite state space without mutation

Let us assume furthermore that $E = \{1, 2, \dots, K\}$ and $P_x = \delta_x$. Set $\nu(dx) = x^{-2}\,\Lambda(dx)$. Define the pushing rates:
$$r_i = c\,\frac{i(i-1)}{2} + \int_0^1 \nu(dx)\,\big(1 - (1-x)^i - i x (1-x)^{i-1}\big) = c\,\frac{i(i-1)}{2} + \int_0^1 \nu(dx)\, P\big(\mathrm{Bin}(i, x) \geq 2\big).$$
We have that, for $1 \leq K' \leq K$,
$$\bigg( \frac{\prod_{i=1}^{K'} R_t(\{i\})}{E\big(\prod_{i=1}^{K'} R_0(\{i\})\big)}\, e^{r_{K'} t},\ t \geq 0 \bigg)$$
is a non-negative $G$-martingale. Thus we may define, for $1 \leq K' \leq K$:
$$A \in G_t, \qquad P\big(R^H \in A\big) := E\bigg( \mathbf{1}_A(R)\, \frac{\prod_{i=1}^{K'} R_t(\{i\})}{E\big(\prod_{i=1}^{K'} R_0(\{i\})\big)}\, e^{r_{K'} t} \bigg).$$
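The two expressions for the pushing rates agree because $1 - (1-x)^i - ix(1-x)^{i-1} = P\big(\mathrm{Bin}(i, x) \geq 2\big)$; a quick numerical check of this identity:

```python
from math import comb

def tail_ge_2(i, x):
    """P(Bin(i, x) >= 2) computed from the binomial pmf."""
    return sum(comb(i, k) * x**k * (1 - x) ** (i - k) for k in range(2, i + 1))

def closed_form(i, x):
    """The integrand of the pushing rates: 1 - (1-x)^i - i*x*(1-x)^(i-1)."""
    return 1 - (1 - x) ** i - i * x * (1 - x) ** (i - 1)

for i in (2, 3, 7):
    for x in (0.05, 0.3, 0.9):
        assert abs(tail_ge_2(i, x) - closed_form(i, x)) < 1e-12
```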
We define from $X$ a new particle system $X^h$ as follows:
(i) The random sequence $X_0^h$ is such that the finite sequence $(X_0^h(j), 1 \leq j \leq K')$ is a uniform permutation of $\{1, \dots, K'\}$ and, independently, the sequence $(X_0^h(j), j \geq K' + 1)$ is exchangeable with asymptotic frequencies $R_0^h$ given by:
$$P\big(R_0^h \in dx\big) := \frac{\prod_{i=1}^{K'} R_0(\{i\})}{E\big(\prod_{i=1}^{K'} R_0(\{i\})\big)}\, P(R_0 \in dx).$$
(ii) The reproduction events are driven by the restriction of the Poisson point measure $N = N^k + N^\rho$ to
$$V := \big\{(s, \Pi),\ \Pi_{|[K']} = \{\{1\}, \{2\}, \dots, \{K'\}\}\big\},$$
that is, to the atoms for which the reproduction events do not involve more than one of the first $K'$ levels.

Interpretation: forget some reproduction events / introduce immigration.
We define the lowest level $L(t)$ at which the first $K'$ types have all appeared:
$$L(t) = \inf\big\{i \geq 1,\ \{1, \dots, K'\} \subset \{X_t(1), \dots, X_t(i)\}\big\}.$$
The random variable $L(t)$ is $F_t$-measurable, but not $G_t$-measurable.

Lemma. The law of the process $X^h$ is absolutely continuous with respect to the law of the process $X$ on each $F_t$, and:
$$A \in F_t, \qquad P\big(X^h \in A\big) = E\bigg( \mathbf{1}_A(X)\, \frac{\mathbf{1}_{\{L(t) = K'\}}}{P(L(0) = K')}\, e^{r_{K'} t} \bigg).$$
Noting that $P\big(L(t) = K' \,\big|\, G_t\big) = K'!\, \prod_{1 \leq i \leq K'} R_t(\{i\})$, we get by projection on the smaller filtration $G_t$:

Theorem. We have that:
(a) the limit of the empirical measure
$$R_t^h := \lim_{n \to \infty} \frac{1}{n} \sum_{1 \leq i \leq n} \delta_{X_t^h(i)}$$
exists a.s.;
(b) the process $(R_t^h, t \geq 0)$ is distributed as $(R_t^H, t \geq 0)$.
Define the family of processes $R^{(\to t)}$ by:
$$A \in G_t, \qquad P\big(R^{(\to t)} \in A\big) := P\bigg( R \in A \ \bigg|\ \prod_{i=1}^{K'} R_t(\{i\}) \neq 0 \bigg).$$
The process $R^{(\to t)}$ thus corresponds to the process $R$ conditioned on non-extinction of each of the first $K'$ types before time $t$. We have the following extension of Kimura's theorem to GFV processes:

Theorem. Let $t_0 \geq 0$ be fixed. Assume that $\lim_{l \to \infty} \frac{P_{K'+1}(L(l) < \infty)}{P_{K'}(L(l) < \infty)} = 0$. Then the family of processes $(R^{(\to t)}, t \geq t_0)$ converges weakly on $G_{t_0}$, as $t \to \infty$, towards the law of $R^h$.
Let us assume that $K = K' = 2$: a two-type population conditioned on non-absorption. We define, for $f \in C^2([0,1])$ and $x \in [0,1]$:
$$L^0 f(x) := c(1 - 2x) f'(x) + \int_0^1 y(1-y)\,\nu(dy)\,\big[f(x(1-y) + y) - f(x)\big] + \int_0^1 y(1-y)\,\nu(dy)\,\big[f(x(1-y)) - f(x)\big],$$
and
$$L^1 f(x) := \frac{1}{2}\, c\, x(1-x) f''(x) + x \int_0^1 (1-y)^2\,\nu(dy)\,\big[f(x(1-y) + y) - f(x)\big] + (1-x) \int_0^1 (1-y)^2\,\nu(dy)\,\big[f(x(1-y)) - f(x)\big].$$

Proposition. Let $L^h$ be the generator of $R^h(\{1\})$. We have that $L^h = L^0 + L^1$.
Assume $K' = 1$ and $K = 2$: a two-type population conditioned on absorption in type 1. We define, for $f \in C^2([0,1])$ and $x \in [0,1]$:
$$L^0 f(x) := c(1 - x) f'(x) + \int_0^1 y\,\nu(dy)\,\big[f(x(1-y) + y) - f(x)\big],$$
and
$$L^1 f(x) := \frac{1}{2}\, c\, x(1-x) f''(x) + x \int_0^1 (1-y)\,\nu(dy)\,\big[f(x(1-y) + y) - f(x)\big] + (1-x) \int_0^1 (1-y)\,\nu(dy)\,\big[f(x(1-y)) - f(x)\big].$$

Proposition. Let $L^h$ be the generator of $R^h(\{1\})$. We have that $L^h = L^0 + L^1$.