An introduction to growth-fragmentations An unfinished draft


Quan Shi

December 13, 2017

Abstract

Growth-fragmentation processes describe branching systems of particles, in which the mass of each particle may vary and split into smaller masses randomly as time passes. Such examples arise in several important models in statistical physics and large random structures. The purpose of this lecture is to provide an elementary introduction to (binary) self-similar growth-fragmentation processes [5, 6]. We shall study a few fundamental martingales, characterize extinction and explosion, and present the long-time asymptotic behavior.

1 Cell systems and growth-fragmentations

Bertoin [6] developed a general construction of growth-fragmentations, which can be conveniently described as a cell system. Each cell may grow continuously and divide into two cells occasionally. These dynamics, both the growth and the splitting, are encoded by a càdlàg Markov process $X = (X(t), t \ge 0)$ on $[0, \infty)$ with no positive jumps, which shall be referred to as a cell process. Specifically, at the initial time there exists a single cell, called the Eve. As time proceeds, the size of the Eve evolves according to the cell process $X$. At each jump time $t$ of $X$ with $\Delta X(t) = X(t) - X(t-) < 0$, a daughter cell with initial size $-\Delta X(t)$ is born. We stress that the Eve survives after this cell division. Each daughter follows the same dynamics as the Eve and evolves independently of the others.

This description can be made rigorous. Specifically, we shall index the cell system by the Ulam--Harris tree $\mathbb{U} := \bigcup_{n=0}^{\infty} \mathbb{N}^n$, with $\mathbb{N} := \{1, 2, 3, \ldots\}$ and $\mathbb{N}^0 := \{\varnothing\}$ by convention. An element $u \in \mathbb{U}$ is a finite sequence of natural numbers $u = (n_1, \ldots, n_{|u|})$, where $|u|$ stands for the generation of $u$. Write $u- := (n_1, \ldots, n_{|u|-1})$ for her mother and $uk := (n_1, \ldots, n_{|u|}, k)$ for her $k$-th daughter with $k \in \mathbb{N}$. For each $u \in \mathbb{U}$, we shall build a process $X_u$ that depicts the evolution of the size of the cell indexed by $u$ as time passes, in the following way.
For $y > 0$, write $P_y$ for the law of $X$ starting from $X(0) = y$. We fix an enumeration method for the jumps of càdlàg functions $[0, \infty) \to \mathbb{R}$. That is, whenever we have such a function $f$, we have a canonical way (by using this method) to list all jump times of $f$ in a sequence $(t_i)_{i \ge 1}$ and refer to $t_i$ as the $i$-th jump of $f$.

Definition 1.1 ([6]). For $x > 0$, a cell system $\mathcal{X} := (X_u, u \in \mathbb{U})$ driven by $X$, in which the Eve cell has the initial size $x$, is built by the following description.

1. We set the birth time of $\varnothing$ by $b_\varnothing := 0$ and let the Eve process $X_\varnothing = (X_\varnothing(t), t \ge 0)$ be of law $P_x$. Under $P_x$, the process $X_\varnothing$ is possibly killed at a certain time $\zeta_\varnothing \in (0, \infty]$, and we write $X_\varnothing(t) = \partial$ for any $t \ge \zeta_\varnothing$.

(Supported by SNSF grant P2ZHP. quanshi.math@gmail.com)

2. For an individual $u \in \mathbb{U}$, suppose we have built $X_u$. Say the $i$-th jump of $X_u$ (in the enumeration fixed above) occurs at time $t_i$ and has size $x_i := -\Delta X_u(t_i)$. Then its $i$-th daughter $ui$ is born at time $b_{ui} := b_u + t_i$, and $ui$'s size process $X_{ui} = (X_{ui}(r), r \ge 0)$ has distribution $P_{x_i}$ conditionally on $X_u$, independent of the other individuals in the same generation. Let $\zeta_{ui}$ be the lifetime of $ui$.

Write $\mathcal{P}_x$ for the law of this cell system $\mathcal{X}$ (recall that $x > 0$ indicates the initial size of the Eve, i.e. $X_\varnothing(0) = x$). The cell system can be viewed as a Crump--Mode--Jagers branching process [13], which implies that the probability distribution $\mathcal{P}_x$ indeed exists and is uniquely determined by the above description.

Definition 1.2 ([6]). Let $\mathcal{X}$ be a cell system driven by $X$. For every $t \ge 0$, the multiset (that allows multiple instances of its elements) of the sizes of the cells alive at time $t$ is
\[ \mathbf{X}(t) := \{ X_u(t - b_u) : u \in \mathbb{U},\ b_u \le t < b_u + \zeta_u \}, \]
where $b_u$ is the birth time of $u$. Then we call $\mathbf{X} := (\mathbf{X}(t), t \ge 0)$ a (Markovian) growth-fragmentation process associated with the cell process $X$, and we write $\mathbb{P}_x$ for the law of $\mathbf{X}$ under $\mathcal{P}_x$.

Remark 1.3. One can view a multiset $I$ as a point measure $\sum_{i \in I} \delta_i$, where $\delta$ stands for the Dirac mass.

Figure 1: A cell system (simulated by B. Dadoun)

Theorem 1.4 (Theorem 1 in [6]). Let $f : (0, \infty) \cup \{\partial\} \to [0, \infty)$ be a function such that $f(\partial) = 0$ and that $\inf_{y \in (a, \infty)} f(y) > 0$ for every $a > 0$. Suppose that $X$ satisfies
\[ E_x\Big[ f(X(t)) + \sum_{0 < r \le t} f(-\Delta X(r)) \Big] \le f(x), \qquad x > 0,\ t \ge 0. \tag{1.1} \]
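To make Definitions 1.1 and 1.2 concrete, here is a minimal Python sketch of a toy cell system. The cell dynamics used here (deterministic exponential growth plus splits at the arrival times of a Poisson process, each split sending a fixed fraction of the mass to the daughter while the mother survives) and all parameter names are illustrative assumptions of this sketch, not the general SNLP-driven construction.

```python
import math
import random

def simulate_cell_system(x0=1.0, t_max=3.0, growth=0.2, jump_rate=1.0,
                         frac=0.4, seed=0):
    """Toy finite-activity version of Definition 1.1: every cell grows
    exponentially at rate `growth` and, at the arrival times of a Poisson
    process with rate `jump_rate`, jumps downward by losing the fraction
    `frac` of its mass; the lost mass starts an independent daughter cell.
    Returns the multiset X(t_max) of Definition 1.2 as a list of masses."""
    rng = random.Random(seed)
    stack = [(0.0, x0)]              # (birth time, birth size) of cells to run
    masses = []
    while stack:
        t, x = stack.pop()
        while True:
            gap = rng.expovariate(jump_rate)
            if t + gap >= t_max:     # no further split before t_max
                masses.append(x * math.exp(growth * (t_max - t)))
                break
            t += gap
            x *= math.exp(growth * gap)   # continuous exponential growth
            child = frac * x              # the daughter gets the jump size
            x -= child                    # the mother survives the division
            stack.append((t, child))
    return masses
```

With `growth=0` every split conserves mass exactly, so the returned multiset sums to the Eve's initial size, which gives a quick sanity check of the bookkeeping.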

Then $f$ is called an excessive function for $X$, in the sense that for every $x > 0$,
\[ E_x\Big[ \sum_{y \in \mathbf{X}(t)} f(y) \Big] \le f(x), \quad \text{for all } t \ge 0. \]

Proof. We may assume that $\mathbf{X}$ is associated with a cell system $\mathcal{X}$ of law $\mathcal{P}_x$ and write $E_x$ for the mathematical expectation under $\mathcal{P}_x$. We will prove that the sequence
\[ \Sigma(i) := \sum_{|u| \le i,\ b_u \le t} f(X_u(t - b_u)) + \sum_{|v| = i,\ b_v \le t} \sum_{b_v \le r \le t} f(-\Delta X_v(r - b_v)), \qquad i \in \mathbb{N}, \]
is a non-negative supermartingale; then $\Sigma(\infty) = \lim_{i \to \infty} \Sigma(i)$ exists almost surely and $\Sigma(\infty) \ge \langle \mathbf{X}(t), f \rangle$. We thus deduce from Fatou's lemma that
\[ E_x\big[ \langle \mathbf{X}(t), f \rangle \big] \le E_x[\Sigma(0)] = E_x\Big[ f(X_\varnothing(t)) + \sum_{0 < r \le t} f(-\Delta X_\varnothing(r)) \Big] \le f(x), \]
where the last inequality derives from (1.1). So it remains to prove that $\Sigma(i)$ is a supermartingale. For every $v$ with $|v| = i$, given $\mathcal{F}_{i-1} := \sigma(X_u, |u| \le i-1)$ we have by (1.1) that, on the event $\{b_v \le t\}$,
\[ E_x\Big[ f(X_v(t - b_v)) + \sum_{b_v \le r \le t} f(-\Delta X_v(r - b_v)) \,\Big|\, \mathcal{F}_{i-1} \Big] \le f(X_v(0)). \]
Summing over $v$ of the $i$-th generation on the event $\{b_v \le t\}$, we get that
\[ E_x\Big[ \sum_{|v|=i,\ b_v \le t} f(X_v(t - b_v)) + \sum_{|v|=i,\ b_v \le t} \sum_{b_v \le r \le t} f(-\Delta X_v(r - b_v)) \,\Big|\, \mathcal{F}_{i-1} \Big] \le \sum_{|v|=i,\ b_v \le t} f(X_v(0)) = \sum_{|u|=i-1,\ b_u \le t} \sum_{b_u \le r \le t} f(-\Delta X_u(r - b_u)). \]
Adding $\sum_{|u| \le i-1,\ b_u \le t} f(X_u(t - b_u))$ to both sides of the inequality, we conclude that
\[ E_x\big[ \Sigma(i) \mid \mathcal{F}_{i-1} \big] \le \Sigma(i-1), \]
which means that $\Sigma(i)$ is a supermartingale.

2 Homogeneous growth-fragmentation processes

In this section we focus on homogeneous growth-fragmentations. This case is closely related to Lévy processes (càdlàg processes with independent and stationary increments).

2.1 Lévy processes

We refer to [2, 14] for the general theory of Lévy processes. Let $\xi$ be a Lévy process without positive jumps, possibly killed at an independent time $\zeta$ with exponential distribution of parameter $k \ge 0$ (with $\zeta = \infty$ when $k = 0$). Such a process is often referred to as a spectrally negative Lévy process (SNLP). The distribution of the SNLP $\xi$ is characterized by

its Laplace exponent $\Phi : [0, \infty) \to \mathbb{R}$:
\[ E\big[ e^{q\xi(t)} \big] = e^{\Phi(q)t}, \quad \text{for all } q, t \ge 0, \]
with the convention $e^{q\xi(t)} := 0$ for $t \ge \zeta$. It is well known that the convex function $\Phi$ can be expressed by the Lévy--Khintchine formula
\[ \Phi(q) = -k + \frac{\sigma^2}{2} q^2 + cq + \int_{(-\infty,0)} \big( e^{qz} - 1 + q(1 - e^z) \big) \Lambda(dz), \quad q \ge 0, \tag{2.1} \]
where $k \ge 0$ is the killing rate, $\sigma \ge 0$, $c \in \mathbb{R}$, and the Lévy measure $\Lambda$ on $(-\infty, 0)$ satisfies
\[ \int_{(-\infty,0)} (z^2 \wedge 1)\, \Lambda(dz) < \infty. \tag{2.2} \]
Then we say $\xi$ is a SNLP with characteristics $(\sigma, c, \Lambda, k)$. For every $t \ge 0$, let $\Delta\xi(t) := \xi(t) - \xi(t-)$. The jump process $(t, \Delta\xi(t))_{t \ge 0}$ is a Poisson random measure with intensity $dt \otimes \Lambda$, where $dt$ denotes the Lebesgue measure. The following statement is a consequence of the strong law of large numbers.

Lemma 2.1 ([14, Theorems 7.1 & 7.2]). Let $\xi$ be a (non-killed) SNLP. Then we have
\[ \lim_{t \to \infty} \frac{\xi(t)}{t} = E[\xi(1)] = \Phi'(0+) \in [-\infty, \infty). \]
If $\Phi'(0+) = 0$, then $\xi$ is oscillating, that is,
\[ \limsup_{t \to \infty} \xi(t) = \infty \quad \text{and} \quad \liminf_{t \to \infty} \xi(t) = -\infty. \]

2.2 Homogeneous growth-fragmentation processes

For $x > 0$, denote by $P_x$ the law of the homogeneous cell process $X(t) := x e^{\xi(t)}$, $t \ge 0$.

Definition 2.2. Let $\mathbf{X}$ be a Markovian growth-fragmentation (Definition 1.2) associated with $X$. Then we call $\mathbf{X}$ a homogeneous growth-fragmentation.

Let us introduce an important function $\kappa : [0, \infty) \to (-\infty, \infty]$:
\[ \kappa(q) := \Phi(q) + \int_{(-\infty,0)} (1 - e^z)^q \, \Lambda(dz), \quad q \ge 0. \tag{2.3} \]
We call $\kappa$ the cumulant of $\xi$ (or of $X$, or of $\mathbf{X}$). Let
\[ \underline{q} := \inf\{ q \ge 0 : \kappa(q) < \infty \}; \]
then we have $\underline{q} \le 2$ because of (2.2). Note that $\kappa(q) < \infty$ for all $q > \underline{q}$, and that $\kappa$ is infinitely differentiable and strictly convex on $(\underline{q}, \infty)$. We stress that $\kappa$ does not characterize the law of $\xi$; see [17, Lemma 2.1].

Theorem 2.3 ([5]). Let $\mathbf{X}$ be a homogeneous growth-fragmentation with cumulant $\kappa$. For $q > \underline{q}$, there is
\[ E_x\Big[ \sum_{y \in \mathbf{X}(t)} y^q \Big] = x^q \exp\big( \kappa(q)t \big), \quad \text{for all } t \ge 0. \tag{2.4} \]
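To see the cumulant (2.3) in action, the following sketch evaluates $\kappa(q)$ by quadrature for the toy characteristics $\sigma = 0$, $k = 0$, arbitrary drift $c$, and $\Lambda(dz) = e^z\,dz$ on $(-\infty,0)$ (illustrative choices of this sketch; for this $\Lambda$ the integrals in (2.1) and (2.3) have the closed form $\kappa(q) = cq + 2/(q+1) - 1 + q/2$).

```python
import math

def kappa_numeric(q, c=0.1, lo=-35.0, n=200000):
    """Trapezoidal quadrature of kappa(q) = Phi(q) + ∫ (1-e^z)^q Λ(dz)
    for the toy Lévy measure Λ(dz) = e^z dz on (-inf, 0), sigma = k = 0
    (illustrative assumptions).  `lo` truncates the integral, `n` is the
    number of quadrature steps."""
    h = -lo / n
    total = 0.0
    for i in range(n + 1):
        z = lo + i * h
        w = 0.5 if i in (0, n) else 1.0
        ez = math.exp(z)
        integrand = (math.exp(q * z) - 1 + q * (1 - ez)   # integrand of (2.1)
                     + (1 - ez) ** q) * ez                # extra term of (2.3)
        total += w * integrand
    return c * q + h * total

def kappa_exact(q, c=0.1):
    # For Λ(dz) = e^z dz: ∫ e^{(q+1)z} dz = 1/(q+1), ∫ (1-e^z)^q e^z dz =
    # 1/(q+1), ∫ (1-e^z) e^z dz = 1/2, giving kappa(q) = cq + 2/(q+1) - 1 + q/2.
    return c * q + 2.0 / (q + 1) - 1 + q / 2
```

Note that $\kappa(0) = \Lambda((-\infty,0)) - k = 1$ here, consistent with (2.3).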

Proof. For simplicity, let us assume that the support of the measure $\Lambda$ is included in $[-\log 2, 0)$. For the general case the arguments are quite similar but the notations become heavier; see [17, Proposition ...].

For $\epsilon > 0$, define two independent Lévy processes $\xi^{[\epsilon]}$ and $\eta^{[\epsilon]}$, where $\eta^{[\epsilon]}$ is a compound Poisson process with (finite) Lévy measure $\Lambda$ restricted to $(-\infty, \log(1-\epsilon))$ and Laplace exponent
\[ \Phi^{[\epsilon]}_\eta(q) := \int_{(-\infty,\, \log(1-\epsilon))} \big( e^{qz} - 1 \big) \Lambda(dz), \]
and $\xi^{[\epsilon]}$ a Lévy process with Laplace exponent $\Phi^{[\epsilon]}_\xi(q) := \Phi(q) - \Phi^{[\epsilon]}_\eta(q)$. Then the process $\xi^{[\epsilon]} + \eta^{[\epsilon]}$ has the same law as $\xi$.

Let us truncate the cell system at level $\epsilon > 0$ in the following way. For every $u \in \mathbb{U}$, at each jump $t_i$ of the process $X_u$, we kill the child $ui$, as well as its descendants, if and only if $X_u(t_i-) - X_u(t_i) \le \epsilon X_u(t_i-)$, that is $\Delta\xi(t_i) \ge \log(1-\epsilon)$. Then the dynamics of the truncated system have the following description. It starts with an initial fragment whose size evolves according to the process $\exp(\xi^{[\epsilon]})$. We run an independent Lévy process $\eta^{[\epsilon]}$. At the first jump time $\tau$ of $\eta^{[\epsilon]}$, this fragment is replaced by two particles with respective initial sizes
\[ y_1 := e^{\xi^{[\epsilon]}(\tau)} e^{\eta^{[\epsilon]}(\tau)} \quad \text{and} \quad y_2 := e^{\xi^{[\epsilon]}(\tau)} \big( 1 - e^{\eta^{[\epsilon]}(\tau)} \big). \]
The two children continue to evolve in a similar way, independently of each other. Let $\mathbf{X}^{[\epsilon]}$ be the growth-fragmentation associated with this truncated system; then one can check that
\[ \mathbf{X}^{[\epsilon]}(t + \tau) = y_1 \mathbf{X}^{[\epsilon],1}(t) \uplus y_2 \mathbf{X}^{[\epsilon],2}(t), \]
where $\uplus$ denotes the multiset sum and $\mathbf{X}^{[\epsilon],1}, \mathbf{X}^{[\epsilon],2}$ are independent copies of $\mathbf{X}^{[\epsilon]}$ under $\mathbb{P}_1$. From properties of the compound Poisson process $\eta^{[\epsilon]}$, we know that $\tau$ has exponential distribution with parameter $\beta := \Lambda\big( (-\infty, \log(1-\epsilon)) \big) < \infty$, and that $\eta^{[\epsilon]}(\tau)$ has distribution $\beta^{-1} \Lambda|_{(-\infty, \log(1-\epsilon))}$. We have
\[ m(q, t) := E_1\Big[ \sum_{y \in \mathbf{X}^{[\epsilon]}(t)} y^q \Big] = e^{-\beta t} e^{\Phi^{[\epsilon]}_\xi(q)t} + \int_0^t \beta e^{-\beta s} e^{\Phi^{[\epsilon]}_\xi(q)s} \, \beta^{-1} \int_{(-\infty,\, \log(1-\epsilon))} \big( e^{qz} + (1 - e^z)^q \big) \Lambda(dz) \; m(q, t-s)\, ds. \]
Solving this equation yields $m(q, t) = e^{\kappa^{[\epsilon]}(q)t}$, where
\[ \kappa^{[\epsilon]}(q) := \Phi^{[\epsilon]}_\xi(q) - \beta + \int_{(-\infty,\, \log(1-\epsilon))} \big( e^{qz} + (1 - e^z)^q \big) \Lambda(dz) = \Phi(q) + \int_{(-\infty,\, \log(1-\epsilon))} (1 - e^z)^q \, \Lambda(dz). \]
Letting $\epsilon \to 0+$, the monotone convergence theorem completes the proof.

Rearranging the elements of $\mathbf{X}(t)$ in decreasing order, we denote the obtained decreasing sequence by $X_1(t) \ge X_2(t) \ge \cdots$. This notation shall be adopted throughout the rest of this note.

Proposition 2.4. The following properties hold for a homogeneous growth-fragmentation $\mathbf{X}$:

(P1) (Temporal branching property) For $s \ge 0$, write $\mathbf{X}(s) = \{X_1(s), X_2(s), \ldots\}$. Then conditionally on $\sigma(\mathbf{X}(r), r \le s)$, the distribution of the process $(\mathbf{X}(t+s))_{t \ge 0}$ is the same as the (multiset) sum of a sequence of independent growth-fragmentations $(\mathbf{X}^{[i]})_{i \ge 1}$, where each $\mathbf{X}^{[i]}$ has distribution $\mathbb{P}_{X_i(s)}$.

(P2) (Homogeneity) For $x > 0$, the process $(\{ xy : y \in \mathbf{X}(t) \})_{t \ge 0}$ under $\mathbb{P}_1$ has the law $\mathbb{P}_x$.

Proof. It is easy to see that the properties hold for the truncated growth-fragmentation $\mathbf{X}^{[\epsilon]}$ as in the proof of Theorem 2.3. Further, for every $t \ge 0$, we have
\[ \lim_{\epsilon \to 0+} E\Big[ \sum_{y \in \mathbf{X}(t) \setminus \mathbf{X}^{[\epsilon]}(t)} y^q \Big] = 0. \]
Then the temporal branching property can be transferred to $\mathbf{X}$ by letting $\epsilon \to 0$.

2.3 The additive martingale

For $\omega > \underline{q}$ (such that $\kappa(\omega) < \infty$), it follows from Theorem 2.3 that the following process under $\mathbb{P}_x$ has constant mean value 1:
\[ M(\omega, t) := x^{-\omega} e^{-t\kappa(\omega)} \sum_{i \ge 1} X_i(t)^\omega, \quad t \ge 0. \]
It then follows from the temporal branching property that $M(\omega, \cdot)$ is a martingale with respect to the filtration $\mathcal{F}_t := \sigma(\mathbf{X}(s), s \le t)$. Let us define a change of measure: for every $A \in \mathcal{F}_t$,
\[ \mathbb{P}^{\omega,t}_x(A) := E_x\big[ M(\omega, t) \mathbf{1}_A \big]. \]
The martingale property of $M(\omega, \cdot)$ ensures that $\mathbb{P}^{\omega,t}_x$ is indeed a probability on $\mathcal{F}_t$, with consistency: for every $s \le t$ and $A \in \mathcal{F}_s$, there is the identity $\mathbb{P}^{\omega,t}_x(A) = \mathbb{P}^{\omega,s}_x(A)$. By Kolmogorov's extension theorem, there exists a probability measure $\mathbb{P}^\omega_x$ on $\mathcal{F}_\infty$ such that $\mathbb{P}^\omega_x|_{\mathcal{F}_t} = \mathbb{P}^{\omega,t}_x$.

To describe the law of $\mathbf{X}$ under the new measure $\mathbb{P}^\omega_x$, we introduce a new cell system $\mathcal{Y} := (Y_u, u \in \mathbb{U})$ constructed in the following manner. The Eve cell is born at time $b_\varnothing = 0$, and the Eve process is given by $Y_\varnothing(t) := x e^{\eta(t)}$, with $x > 0$ and $\eta$ a (non-killed) SNLP with Laplace exponent $\Phi_\omega(\cdot) := \kappa(\cdot + \omega) - \kappa(\omega)$. More precisely, the Lévy process $\eta$ has characteristics $(\sigma, c_\omega, \Lambda_\omega, 0)$, where
\[ c_\omega := c + \sigma^2 \omega + \int_{(-\infty,0)} \Big( (1 - e^z) - \big( e^{\omega z}(1 - e^z) + (1 - e^z)^\omega e^z \big) \Big) \Lambda(dz), \]
and the Lévy measure $\Lambda_\omega$ on $(-\infty, 0)$ is defined such that for every bounded measurable function $g$ on $(-\infty, 0)$ there is
\[ \int_{(-\infty,0)} g(z) \Lambda_\omega(dz) = \int_{(-\infty,0)} \Big( e^{\omega z} g(z) + (1 - e^z)^\omega g\big( \log(1 - e^z) \big) \Big) \Lambda(dz). \]
Recursively, given $Y_u$ and $b_u$, say $t_i$ is the $i$-th jump of $Y_u$ with $y_i := -\Delta Y_u(t_i) > 0$; then $ui \in \mathbb{U}$ is born at time $b_{ui} := b_u + t_i$, and $Y_{ui}$ has the law of $(y_i e^{\xi(t)}, t \ge 0)$, independent of the other $Y_{uj}$ with $j \ne i$.

Write $\mathcal{Q}_x$ for the law of $\mathcal{Y}$ and let
\[ \mathbf{Y}(t) := \{ Y_u(t - b_u) : u \in \mathbb{U},\ b_u \le t < b_u + \zeta_u \}. \]

Theorem 2.5 (Spinal decomposition [8, 18]). For every $x > 0$, the process $\mathbf{Y}$ under $\mathcal{Q}_x$ has the same distribution as $\mathbf{X}$ under $\mathbb{P}^\omega_x$.

This theorem has been proven in [8] for the case with finite dislocation rate; a complete proof is given in [18, Theorem 5.2].

For every $x > 0$, the (non-negative) additive martingale $M(\omega, t)$ converges $\mathbb{P}_x$-almost surely to a limit $M(\omega, \infty)$ as $t \to \infty$. The following result shows how the uniform integrability of $M(\omega, \cdot)$ depends on the value of $\omega$. Let $\mathbb{P}^*_x(\cdot) := \mathbb{P}_x(\cdot \mid \text{non-extinction})$, where the non-extinction event refers to $\{ \mathbf{X}(t) \ne \emptyset \text{ for all } t \ge 0 \}$. Note that $\mathbb{P}_x(\text{non-extinction}) = 1$ whenever $k = 0$ or $\Lambda((-\infty, 0)) = \infty$.

Theorem 2.6 ([12, Theorem 2.3]). Let $x > 0$ and $\bar{q} := \sup\{ q \ge \underline{q} : q\kappa'(q) - \kappa(q) < 0 \} \in [\underline{q}, \infty]$.

1. If $\omega \in [\bar{q}, \infty)$, then $M(\omega, \infty)$ is $\mathbb{P}_x$-almost surely equal to zero.

2. If $\omega \in (\underline{q}, \bar{q})$, then $E_x[M(\omega, \infty)] = 1$ and $M(\omega, \infty)$ is $\mathbb{P}^*_x$-almost surely strictly positive.¹

Proof. This statement was proved by Dadoun [12] via a reduction to branching random walks; here we offer a direct proof by using the spinal decomposition. Let us consider the system $\mathcal{Y}$ with distribution $\mathcal{Q}_x$ and let
\[ M(\omega, t) := x^{-\omega} e^{-t\kappa(\omega)} \sum_{u \in \mathbb{U}} Y_u(t - b_u)^\omega \mathbf{1}_{\{b_u \le t < b_u + \zeta_u\}}; \]
then $M(\omega, \cdot)$ under $\mathcal{Q}_x$ has the law of the additive martingale under $\mathbb{P}^\omega_x$. We omit the subscript $x$ as it is clear.

1. By a fundamental result in measure theory (see e.g. [1, Corollary 1]), it suffices to prove that
\[ \limsup_{t \to \infty} M(\omega, t) = \infty, \quad \mathbb{P}^\omega\text{-a.s.} \]
As $\omega \ge \bar{q}$, there is $\kappa'(\omega)\omega \ge \kappa(\omega)$. Then for every $t \ge 0$, we have
\[ M(\omega, t) \ge \exp\big( -\kappa(\omega)t + \omega\eta(t) \big) \ge \exp\Big( \omega\big( \eta(t) - \kappa'(\omega)t \big) \Big). \]
Since by Lemma 2.1 there is $\limsup_{t \to \infty} \big( \eta(t) - \kappa'(\omega)t \big) = +\infty$, the claim follows.

2. By [19, Lemma 4.2], it suffices to show that
\[ \liminf_{t \to \infty} E_{\mathcal{Q}}\big[ M(\omega, t) \mid \mathcal{G} \big] < \infty, \quad \mathcal{Q}\text{-a.s.}, \]
where $\mathcal{G} := \sigma(\eta(t), t \ge 0)$. For every $i \ge 1$, by the independence between the sub-population $(Y_{iv}, v \in \mathbb{U})$ and $\mathcal{G}$, we have the identity
\[ E_{\mathcal{Q}}\Big[ \sum_{v \in \mathbb{U}} Y_{iv}(t - b_{iv})^\omega \mathbf{1}_{\{b_{iv} \le t < b_{iv} + \zeta_{iv}\}} \,\Big|\, \mathcal{G} \Big] = Y_i(0)^\omega e^{\kappa(\omega)(t - b_i)}, \quad \text{with } Y_i(0) = x e^{\eta(b_i-)} \big( 1 - e^{\Delta\eta(b_i)} \big). \]
Further, since $\omega\kappa'(\omega) < \kappa(\omega)$ and by convexity $\kappa(\omega) - \kappa(0) \le \omega\kappa'(\omega)$, we have $\kappa(0) \in (0, \infty]$.

¹ We are in the conservative case, so condition (2.7) in [12] is satisfied.

We hence deduce that (taking $x = 1$ for simplicity)
\[ E_{\mathcal{Q}}\big[ M(\omega, t) \mid \mathcal{G} \big] = e^{-\kappa(\omega)t + \omega\eta(t)} + \sum_{0 < s \le t} e^{-\kappa(\omega)s} e^{\omega\eta(s-)} \big( 1 - e^{\Delta\eta(s)} \big)^\omega. \]
Let
\[ Y_n := \sum_{n-1 < s \le n} e^{-\kappa(\omega)(s - (n-1))} e^{\omega[\eta(s-) - \eta(n-1)]} \big( 1 - e^{\Delta\eta(s)} \big)^\omega. \]
By the definition of Lévy processes, $(Y_n)_{n \ge 1}$ are i.i.d. random variables, with finite mean value
\[ E[Y_1] = \int_0^1 e^{(\kappa(2\omega) - 2\kappa(\omega))s}\, ds \int_{(-\infty,0)} (1 - e^z)^\omega \, \Lambda_\omega(dz) < \infty. \]
For any integer $n > t$, observe that
\[ E_{\mathcal{Q}}\big[ M(\omega, t) \mid \mathcal{G} \big] \le e^{-\kappa(\omega)t + \omega\eta(t)} + Y_1 + \sum_{i=1}^{n-1} \exp\bigg( i \Big( -\kappa(\omega) + \omega\frac{\eta(i)}{i} + \frac{\log Y_{i+1}}{i} \Big) \bigg). \]
By the law of large numbers we have $\frac{\log Y_{i+1}}{i} \to 0$ a.s. We also have by Lemma 2.1 that $\omega\frac{\eta(i)}{i} \to \omega\kappa'(\omega) < \kappa(\omega)$ a.s. Then the claim follows.

3 Self-similar growth-fragmentation processes

3.1 Lamperti transform

To construct self-similar growth-fragmentations, we first provide some background [14, Sec. 13] on positive self-similar Markov processes and Lamperti's representation of the latter. Let $\alpha \in \mathbb{R}$ and $\xi$ be a SNLP. We define a time-change by
\[ \tau^{(\alpha)}(t) := \inf\Big\{ r \ge 0 : \int_0^r \exp\big( -\alpha\xi(s) \big) ds \ge t \Big\}, \quad t \ge 0. \]
For every $x > 0$, denote by $P_x$ the law of the process
\[ X^{(\alpha)}(t) := x \exp\Big( \xi\big( \tau^{(\alpha)}(x^\alpha t) \big) \Big), \quad t \ge 0, \tag{3.1} \]
with the convention $X^{(\alpha)}(t) := \partial$ whenever
\[ t \ge \zeta^{(\alpha)} := x^{-\alpha} \int_0^\infty \exp\big( -\alpha\xi(s) \big) ds. \]
For every $\gamma > 0$, one can deduce from (3.1) that the law of $(\gamma X^{(\alpha)}(\gamma^\alpha t), t \ge 0)$ under $P_x$ is $P_{\gamma x}$. So we call $X^{(\alpha)}$ a positive self-similar Markov process (pssmp) with index $\alpha$. If the SNLP $\xi$ has characteristics $(\sigma, c, \Lambda, k)$ and Laplace exponent $\Phi$ as in (2.1), then we say that $X^{(\alpha)}$ has characteristics $(\sigma, c, \Lambda, k, \alpha)$, or simply
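The time change (3.1) is straightforward to discretize. The sketch below assumes a callable path `xi` and grid parameters `dr`, `r_max` (both illustrative); for a pure drift path $\xi(s) = bs$ one can check it against the closed form $X(t) = x(1 - \alpha b x^\alpha t)^{-1/\alpha}$ obtained by inverting the additive functional explicitly.

```python
import math

def lamperti_X(xi, alpha, x, ts, dr=1e-4, r_max=50.0):
    """Discretization of the Lamperti representation (3.1):
        X(t) = x * exp(xi(tau(x**alpha * t))),
        tau(t) = inf{r >= 0 : ∫_0^r exp(-alpha * xi(s)) ds >= t}.
    `xi` is any callable path, `ts` an increasing list of times; `dr` and
    `r_max` are approximation parameters of this sketch."""
    targets = [(x ** alpha) * t for t in ts]
    vals = [None] * len(ts)
    A, r, i = 0.0, 0.0, 0
    while i < len(targets) and r < r_max:
        if A >= targets[i]:
            vals[i] = x * math.exp(xi(r))        # hit tau(x^alpha t) = r
            i += 1
        else:
            A += math.exp(-alpha * xi(r)) * dr   # additive functional of xi
            r += dr
    return vals
```

For example, with $\xi(s) = -0.5s$, $\alpha = 1$, $x = 1$ the closed form gives $X(2) = (1 + 0.5 \cdot 2)^{-1} = 0.5$.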

$(\Phi, \alpha)$. Using (3.1) we also have the following identity:
\[ X^{(\alpha)}(t) = X^{(0)}\big( T^{(\alpha)}(t) \big) = X^{(0)}\Big( \int_0^t X^{(\alpha)}(s)^{-\alpha} ds \Big), \quad t \ge 0, \]
where
\[ T^{(\alpha)}(t) := \inf\Big\{ r \ge 0 : \int_0^r X^{(0)}(s)^{-\alpha} ds \ge t \Big\}. \]
Note that for every fixed $t$, $T^{(\alpha)}(t)$ is a stopping time with respect to the filtration $\mathcal{F}_t := \sigma(\xi(s), s \le t) = \sigma(X^{(0)}(s), s \le t)$. Let us gather a few useful results on pssmps.

Lemma 3.1. Let $X^{(\alpha)}$ be a pssmp with characteristics $(\Phi, \alpha)$. Then the process $\widetilde{X} := (X^{(\alpha)})^{-1}$ is a pssmp with index $-\alpha$, driven by the Lévy process $-\xi$.

Proof.
\[ \widetilde{X}(t) = \big( X^{(\alpha)}(t) \big)^{-1} = x^{-1} \exp\Big( -\xi\big( \tau^{(\alpha)}(x^\alpha t) \big) \Big) = x^{-1} \exp\Big( -\xi\big( \widetilde{\tau}^{(-\alpha)}\big( (x^{-1})^{-\alpha} t \big) \big) \Big), \]
where $\widetilde{\tau}^{(-\alpha)}(t) := \inf\big\{ r \ge 0 : \int_0^r \exp\big( -(-\alpha)(-\xi(s)) \big) ds \ge t \big\}$.

Lemma 3.2. Suppose that $\Phi(q) < 0$. Then
\[ E_x\Big[ \int_0^{\zeta^{(\alpha)}} \big( X^{(\alpha)}(t) \big)^{q+\alpha} dt \Big] = -\frac{x^q}{\Phi(q)}; \]
otherwise, the above expected value is $\infty$.

Proof. Using the Lamperti transform, with the changes of variables $s = x^\alpha t$ and then $s = \int_0^r \exp(-\alpha\xi(v))\, dv$, we have
\[ E\Big[ \int_0^{\zeta^{(\alpha)}} \big( X^{(\alpha)}(t) \big)^{q+\alpha} dt \Big] = E\Big[ \int_0^\infty x^{q+\alpha} \exp\big( (q+\alpha)\xi(\tau(x^\alpha t)) \big) dt \Big] = E\Big[ \int_0^\infty x^q \exp\big( (q+\alpha)\xi(\tau(s)) \big) ds \Big] = E\Big[ \int_0^\infty x^q \exp\big( q\xi(r) \big) dr \Big] = \int_0^\infty x^q \exp\big( \Phi(q)r \big) dr. \]

Lemma 3.3 ([9, Theorem 1]). Let $X^{(\alpha)}$ be a pssmp with characteristics $(\Phi, \alpha)$. Suppose that $\xi$ is not arithmetic (i.e. there is no $r > 0$ such that $P(\xi(t) \in r\mathbb{Z}) = 1$ for all $t \ge 0$). If $\alpha < 0$, $\Phi(0) = 0$, and $\Phi'(0+) \in (0, \infty)$, then as $x \to 0+$, the probability measures $P_x$ converge in the sense of finite-dimensional distributions to a probability measure denoted by $P^{(\alpha)}_{0+}$. The process $Y$ under $P^{(\alpha)}_{0+}$ is a self-similar process with càdlàg paths and no positive jumps, and $\lim_{t \to \infty} Y(t) = +\infty$, $P^{(\alpha)}_{0+}$-a.s. Further, under $P^{(\alpha)}_{0+}$ there is
\[ E\big[ f\big( Y(t)^{-\alpha} \big) \big] = \frac{1}{|\alpha|\, \Phi'(0+)} E\big[ I^{-1} f(t/I) \big], \quad \text{where } I := \int_0^\infty e^{\alpha\xi(s)} ds. \]

The following statement is a useful corollary of Lemma 3.3.

Lemma 3.4. Let $X^{(\alpha)}$ be a pssmp with characteristics $(\Phi, \alpha)$, associated with $\xi$. Suppose that $\alpha > 0$, $\Phi(0) = 0$, and $\Phi'(0+) \in (-\infty, 0)$. As $t \to \infty$, the random variable $t^{1/\alpha} X^{(\alpha)}(t)$ converges in distribution to a law $\rho$ given by
\[ \int_{(0,\infty)} f(y) \rho(dy) := \frac{1}{\alpha\, |\Phi'(0+)|} E\big[ I^{-1} f\big( I^{1/\alpha} \big) \big], \quad \text{where } I := \int_0^\infty e^{\alpha\xi(s)} ds. \]

Proof. Applying Lemma 3.3 to $(X^{(\alpha)})^{-1}$ gives the result.

3.2 Self-similar growth-fragmentation processes and excessive functions

Definition 3.5. Let $X^{(\alpha)}$ be a pssmp with characteristics $(\Phi, \alpha)$, let $\mathcal{X}^{(\alpha)}$ be a cell system driven by $X^{(\alpha)}$, and let $\mathbf{X}^{(\alpha)}$ be its associated growth-fragmentation. Then we call $\mathbf{X}^{(\alpha)}$ a self-similar growth-fragmentation process driven by $X^{(\alpha)}$.

Recall that the cumulant $\kappa$ of $\xi$ is defined by (2.3); then $\kappa$ is also called the cumulant of the pssmp $X^{(\alpha)}$.

Theorem 3.6 ([6, Theorem 2]). Suppose that there exists $q > 0$ such that
\[ \kappa(q) \le 0. \tag{3.2} \]
Then
\[ E_x\Big[ \sum_{y \in \mathbf{X}^{(\alpha)}(t)} y^q \Big] = E_x\Big[ \sum_{u \in \mathbb{U}} X^{(\alpha)}_u(t - b_u)^q \mathbf{1}_{\{b_u \le t < b_u + \zeta_u\}} \Big] \le x^q, \quad \text{for all } t \ge 0. \]
We say the function $y \mapsto y^q$ is excessive for $X^{(\alpha)}$.

Proof of Theorem 3.6. We first prove that for every $t \ge 0$,
\[ E_x\Big[ X^{(\alpha)}(t)^q + \sum_{0 < s \le t} \big( -\Delta X^{(\alpha)}(s) \big)^q \Big] \le x^q. \]
Let us start with the case $\alpha = 0$, where we have $X^{(0)}(t) = x e^{\xi(t)}$. Since $(t, \Delta\xi(t))_{t \ge 0}$ is a Poisson random measure with intensity $dt \otimes \Lambda(dz)$, the compensation formula leads to
\[ E_x\Big[ \sum_{0 < s \le t} \big( -\Delta X^{(0)}(s) \big)^q \Big] = x^q \int_0^t E\big[ e^{q\xi(s)} \big] ds \int_{(-\infty,0)} (1 - e^z)^q \, \Lambda(dz). \]
An easy calculation leads to
\[ E_x\Big[ X^{(0)}(t)^q + \sum_{0 < s \le t} \big( -\Delta X^{(0)}(s) \big)^q \Big] = \Big( 1 - \frac{\kappa(q)}{\Phi(q)} \big( 1 - e^{\Phi(q)t} \big) \Big) x^q \le x^q. \tag{3.3} \]
By the Markov property of the process $X^{(0)}$, we hence know that $X^{(0)}(t)^q + \sum_{0 < s \le t} (-\Delta X^{(0)}(s))^q$ is a supermartingale with respect to the filtration $\mathcal{F}_t := \sigma(\xi(s), s \le t)$. Finally, for $\alpha \ne 0$, recall that the Lamperti time-change $T^{(\alpha)}(t)$ is an $\mathcal{F}$-stopping time; using the optional stopping theorem (for supermartingales) yields
\[ E_x\Big[ X^{(\alpha)}(t)^q + \sum_{0 < s \le t} \big( -\Delta X^{(\alpha)}(s) \big)^q \Big] = E_x\Big[ X^{(0)}\big( T^{(\alpha)}(t) \big)^q + \sum_{0 < s \le T^{(\alpha)}(t)} \big( -\Delta X^{(0)}(s) \big)^q \Big] \le x^q. \]
Then the claim follows from Theorem 1.4.

Proposition 3.7 ([6, Theorem 2]). Suppose that (3.2) holds. Then $\mathbf{X}^{(\alpha)}$ satisfies the branching property (P1) and the following self-similarity: for $x > 0$, the re-scaled process $(x \mathbf{X}^{(\alpha)}(x^\alpha t))_{t \ge 0}$ under $\mathbb{P}_1$ has distribution $\mathbb{P}_x$.

Proof. The proof of the temporal branching property is similar to the homogeneous case. For the self-similarity, consider the system
\[ X'_u(t) := x X_u(x^\alpha t), \quad b'_u := x^{-\alpha} b_u, \quad u \in \mathbb{U},\ t \ge 0. \]
Then by the self-similarity of $X$, the process $X'_\varnothing$ has law $P_x$. The jump sequence of $X'_\varnothing$ is given by $(b'_j)_{j \ge 1}$, and there is the identity $-\Delta X'_\varnothing(b'_j) = X'_j(0)$. So the law of $(X'_u, b'_u)_{|u|=1}$ under $\mathbb{P}_1$ is the law of $(X_u, b_u)_{|u|=1}$ under $\mathbb{P}_x$. By iterating this argument we complete the proof.

3.3 Extinction

Let $\mathcal{X}^{(0)}$ be a homogeneous cell system associated with the cell process $X^{(0)}(t) = x e^{\xi(t)}$ and let $\mathcal{P}^{(0)}_x$ be its distribution. We can construct all cell systems $(\mathcal{X}^{(\alpha)}, \alpha \in \mathbb{R})$ on the same probability space as $\mathcal{X}^{(0)}$ by using the Lamperti transforms. Specifically, for every $u \in \mathbb{U}$, let
\[ X^{(\alpha)}_u(t) := X^{(0)}_u\big( \tau^{(\alpha)}_u(t) \big), \quad \text{with } \tau^{(\alpha)}_u(t) := \inf\Big\{ r \ge 0 : \int_0^r X^{(0)}_u(s)^{-\alpha} ds \ge t \Big\}. \]

Theorem 3.8 ([6, Corollary 3]). Suppose that $\alpha < 0$ and that $\kappa(q) < 0$ for some $q > 0$. Then the extinction time $\inf\{ t \ge 0 : \mathbf{X}^{(\alpha)}(t) = \emptyset \}$ is $\mathbb{P}_x$-a.s. finite for every $x > 0$.

Proof. For every $u \in \mathbb{U}$ and $i \le |u|$, let $u_i \in \mathbb{N}^i$ be the ancestor of $u$ at the $i$-th generation. Consider the ancestral lineage of $u$:
\[ Y^{(\alpha)}_u(t) := X^{(\alpha)}_{u_i}(t - b_{u_i}), \quad t \in [b_{u_i}, b_{u_i} + \zeta_{u_i}) \text{ for } i \le |u|; \]
by convention, $Y^{(\alpha)}_u(t) := \partial$ for $t \ge b_u + \zeta_u$. We see that $Y^{(\alpha)}_u$ is also related to $Y^{(0)}_u$ by the Lamperti transformation:
\[ Y^{(\alpha)}_u(t) = Y^{(0)}_u\big( T^{(\alpha)}_u(t) \big), \quad \text{with } T^{(\alpha)}_u(t) := \inf\Big\{ r \ge 0 : \int_0^r Y^{(0)}_u(s)^{-\alpha} ds \ge t \Big\}. \]
In particular, we have the identity
\[ b^{(\alpha)}_u + \zeta^{(\alpha)}_u = \int_0^\infty Y^{(0)}_u(s)^{-\alpha} ds. \]
Let $X^{(0)}_1(t) = \max\{ X_u(t - b_u) : u \in \mathbb{U} \} = \max\{ Y^{(0)}_u(t) : u \in \mathbb{U} \}$, and
\[ C := \sup\big\{ e^{-\kappa(q)t} X^{(0)}_1(t)^q : t \ge 0 \big\}. \]
Then $C < \infty$ by the convergence of the martingale $M(q, t)$ (Theorem 2.6). We hence have
\[ \int_0^\infty Y^{(0)}_u(s)^{-\alpha} ds \le C^{-\alpha/q} \int_0^\infty e^{-\alpha\kappa(q)s/q}\, ds = C^{-\alpha/q} \frac{q}{\alpha\kappa(q)} < \infty, \]
which entails that
\[ \inf\big\{ t : \mathbf{X}^{(\alpha)}(t) = \emptyset \big\} = \sup\big\{ b^{(\alpha)}_u + \zeta^{(\alpha)}_u : u \in \mathbb{U} \big\} \le C^{-\alpha/q} \frac{q}{\alpha\kappa(q)} < \infty. \]

We complete the proof.

3.4 Local explosion

When (3.2) is not satisfied, the self-similar growth-fragmentation $\mathbf{X}^{(\alpha)}$ with index $\alpha \ne 0$ explodes in finite time.

Theorem 3.9 ([8, Theorem 2.3]). Suppose that $\alpha \ne 0$ and that
\[ \kappa(q) > 0 \quad \text{for every } q \ge 0. \tag{3.4} \]
Then for every $0 < a < a'$, there exists almost surely a random time $T > 0$ such that $\mathbf{X}^{(\alpha)}(T)$ has infinitely many elements in the interval $(a, a')$.

The proof of Theorem 3.9 is based on the following two lemmas.

Lemma 3.10. Under condition (3.4), $\kappa$ reaches its minimum over $[0, \infty)$ at some $q_m \in (0, \infty)$.

Lemma 3.11 ([8, Lemma 3.5]).

1. Suppose that $\alpha < 0$. Fix $0 < a < a'$; then there exist $0 < t < t'$ such that
\[ \liminf_{x \to 0+} \mathbb{P}_x\big( \mathbf{X}^{(\alpha)}(r) \cap (a, a') \ne \emptyset \text{ for all } t \le r \le t' \big) > 0. \]

2. Suppose that $\alpha > 0$. Fix $0 < a < a'$; then there exist $0 < t < t'$ such that
\[ \liminf_{x \to +\infty} \mathbb{P}_x\big( \mathbf{X}^{(\alpha)}(r) \cap (a, a') \ne \emptyset \text{ for all } t \le r \le t' \big) > 0. \]

Proof of Theorem 3.9. We first prove the case $\alpha < 0$. Let us consider the minimum point $q_m$ as in Lemma 3.10. There is $\kappa'(q_m) = 0 < \kappa(q_m)/q_m$, which implies that $q_m \in (\underline{q}, \bar{q})$ ($\bar{q}$ is defined in Theorem 2.6). Then we can choose $q_- < q_m < q_+$ such that $q_-, q_+ \in (\underline{q}, \bar{q})$ and $\kappa'(q_-) < 0 < \kappa'(q_+)$. Let $\mathbb{P}^-_x$ and $\mathbb{P}^+_x$ be the measures defined as in Theorem 2.5 by using $M(q_-, \cdot)$ and $M(q_+, \cdot)$ respectively. By Theorem 2.6, both of these martingales are uniformly integrable, so we have
\[ \mathbb{P}^-_x(A) = E_x\big[ M(q_-, \infty) \mathbf{1}_A \big], \quad A \in \sigma(\mathbf{X}(t), t \ge 0), \]
and a similar identity for $\mathbb{P}^+_x$. We also know from Theorem 2.6 that $M(q_-, \infty)$ is strictly positive conditionally on no sudden death, hence the law of $\mathbf{X}^{(\alpha)}$ under $\mathbb{P}^-_x$ conditionally on no sudden death is equivalent to that under $\mathbb{P}_x$.

Let $\mathcal{Y}$ be a cell system as in Theorem 2.5 (with $\omega = q_-$), let $\mathcal{Y}^{(\alpha)}$ be the Lamperti transform of $\mathcal{Y}$, and let $\mathbf{Y}^{(\alpha)}$ be its associated growth-fragmentation. Then under $\mathbb{P}^-_x$, $\mathbf{X}^{(\alpha)}$ has the same distribution as $\mathbf{Y}^{(\alpha)}$.

Let us consider $\mathbf{Y}^{(\alpha)}$. The Eve process $Y^{(\alpha)}_\varnothing$ is a pssmp with characteristics $(\Phi_-, \alpha)$, where $\Phi_-(\cdot) := \kappa(\cdot + q_-) - \kappa(q_-)$. Since $\alpha < 0$ and $\Phi_-'(0+) = \kappa'(q_-) < 0$, we have $\zeta^{(\alpha)}_\varnothing < \infty$ and $Y^{(\alpha)}_\varnothing(\zeta^{(\alpha)}_\varnothing-) = 0$. So there exists a subsequence of jump times of $Y^{(\alpha)}_\varnothing$, denoted by $(t_{k_i})_{i \ge 1}$, such that
\[ \lim_{i \to \infty} t_{k_i} = \zeta^{(\alpha)}_\varnothing \quad \text{and} \quad \lim_{i \to \infty} Y^{(\alpha)}_\varnothing(t_{k_i}) = 0. \]
Next, consider the sub-populations $\mathbf{X}^{(\alpha)}_{k_i}$ generated at the times $(t_{k_i})$. For every $0 < a < a'$, using Lemma 3.11 and the Borel--Cantelli lemma, conditionally on $Y^{(\alpha)}_\varnothing$, almost surely there exist infinitely many $i$ such that $\mathbf{X}^{(\alpha)}_{k_i}(r) \cap (a, a') \ne \emptyset$ for all $r \in [t, t']$. Then for any $r \in (t, t')$, the multiset $\mathbf{X}^{(\alpha)}(\zeta^{(\alpha)}_\varnothing + r)$ has infinitely many elements in $(a, a')$, $\mathbb{P}^-_x$-almost surely. The equivalence between $\mathbb{P}^-_x$ and $\mathbb{P}_x$ conditionally on no sudden death completes the proof (for the case $\alpha < 0$).

To prove the case $\alpha > 0$, we just consider $\mathbb{P}^+$ and use very similar arguments. We finally prove the two lemmas. [TODO]

4 Martingales in self-similar growth-fragmentations

The object of this section is to establish two temporal martingales for self-similar growth-fragmentations. We fix $\alpha \in \mathbb{R}$ in this section and omit the superscript $(\alpha)$ for simplicity.

4.1 The genealogical martingales

We observe that $Z := (Z_u := \log X_u(0), u \in \mathbb{U})$ is a branching random walk, in the sense that for any $u \in \mathbb{U}$, there is
\[ (Z_{ui} - Z_u)_{i \ge 1} \overset{d}{=} (Z_i - Z_\varnothing)_{i \ge 1}. \]
The Laplace transform of the point process of the first generation is
\[ m(q) := x^{-q} E_x\Big[ \sum_{|u|=1} e^{q Z_u} \Big] = E_1\Big[ \sum_{0 < s < \zeta} \big( -\Delta X(s) \big)^q \Big] = 1 - \frac{\kappa(q)}{\Phi(q)}, \quad \text{if } \Phi(q) < 0 \text{ and } \kappa(q) < \infty, \tag{4.1} \]
where the last equality is obtained by a calculation similar to (3.3). By the branching random walk property,
\[ M(q, n) := x^{-q} m(q)^{-n} \sum_{|u|=n} e^{q Z_u} = x^{-q} m(q)^{-n} \sum_{|u|=n} X_u(0)^q, \quad n \ge 1, \]
is a martingale, known as the additive martingale of the branching random walk $Z$. We recall two useful results for general branching random walks.

Lemma 4.1 ([10, Theorem A], [11, Theorem 1]). Let $M(q, \cdot)$ be the additive martingale of a certain branching random walk.

1. We have $E[M(q, \infty)] = 1$ if and only if
\[ E\big[ M(q,1) \log^+ M(q,1) \big] < \infty \quad \text{and} \quad m(q) \exp\big( -q\, m'(q)/m(q) \big) > 1. \tag{4.2} \]

2. Suppose that there exists $\gamma \in (1, 2]$ such that $E[M(q,1)^\gamma] < \infty$, and that for some $\theta \in (1, \gamma]$, there is $m(\theta q)\, m(q)^{-\theta} < 1$. Note that this condition entails (4.2). Then $M(q, \cdot)$ converges a.s. and in $L^\theta$.

Lemma 4.2. Suppose that

(H1) there exists $\omega_+ > 0$ such that $\kappa(\omega_+) = 0$ and $\kappa'(\omega_+) > 0$.

Then $m(\omega_+) = 1$, and the martingale
\[ \mathcal{M}^+(n) := M(\omega_+, n) = x^{-\omega_+} \sum_{|u|=n} e^{\omega_+ Z_u} \]
converges $\mathcal{P}_x$-a.s. to 0.

Proof. Since $\Phi(\omega_+) = \kappa(\omega_+) - \int_{(-\infty,0)}(1-e^z)^{\omega_+}\Lambda(dz) < 0$ and $\kappa'(\omega_+) > 0$, we have
\[ m'(\omega_+) = -\frac{\kappa'(\omega_+)}{\Phi(\omega_+)} > 0, \quad \text{and thus} \quad m(\omega_+) \exp\big( -\omega_+ m'(\omega_+)/m(\omega_+) \big) < 1. \]
Then $\mathcal{M}^+$ converges $\mathcal{P}_x$-a.s. to 0.
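The value of $m(q)$ in (4.1) can be obtained from the compensation formula; the following is a short derivation sketch for the homogeneous case $X(t) = e^{\xi(t)}$ under $\mathbb{P}_1$, assuming $\Phi(q) < 0$ and $\kappa(q) < \infty$, consistent with the computation (3.3):

```latex
\begin{aligned}
m(q) &= \mathbb{E}_1\Big[\sum_{0<s<\zeta} \big(-\Delta X(s)\big)^q\Big]
      = \mathbb{E}\Big[\sum_{0<s<\zeta} e^{q\xi(s-)}\,\big(1-e^{\Delta\xi(s)}\big)^q\Big] \\
     &= \int_0^\infty \mathbb{E}\big[e^{q\xi(s)}\big]\,ds
        \int_{(-\infty,0)} (1-e^z)^q\,\Lambda(dz)
        \qquad \text{(compensation formula)} \\
     &= \int_0^\infty e^{\Phi(q)s}\,ds \cdot \big(\kappa(q)-\Phi(q)\big)
      = \frac{\kappa(q)-\Phi(q)}{-\Phi(q)} = 1 - \frac{\kappa(q)}{\Phi(q)}.
\end{aligned}
```

In particular $m(q) \in [0, 1)$ exactly when $\kappa(q) < 0$, and $m(q) = 1$ at any root of $\kappa$, which is how (H1) enters Lemma 4.2.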

Lemma 4.3. Suppose that (H1) holds and that

(H2) (Cramér's condition) there exists $\omega_- \in (0, \omega_+)$ such that $\kappa(\omega_-) = 0$ and $\kappa'(\omega_-) > -\infty$.

Then for any $p \in [1, \omega_+/\omega_-)$, the martingale
\[ \mathcal{M}^-(n) := M(\omega_-, n) = x^{-\omega_-} \sum_{|u|=n} e^{\omega_- Z_u} \]
converges $\mathcal{P}_x$-a.s. and in $L^p(\mathcal{P}_x)$. Its terminal value $\mathcal{M}^-(\infty)$ is $\mathbb{P}^*$-a.s. strictly positive.

Note that if (3.2) holds and $\xi$ is not the negative of a subordinator, then (H1) is satisfied.

Proof. We first check that $\mathcal{M}^-(1)$ is in $L^{\omega_+/\omega_-}$. By the Lamperti transform, it is clear that we only need to prove the case $\alpha = 0$, i.e. $X(t) = x e^{\xi(t)}$. Using the compensation formula, we find that the following process is an $\mathcal{F}_t$-martingale:
\[ \sum_{0 < s \le t} \big( -\Delta X(s) \big)^{\omega_-} + \Phi(\omega_-) \int_0^t X(s)^{\omega_-} ds. \]
This martingale is purely discontinuous, with quadratic variation $\sum_{0 < s \le t} (-\Delta X(s))^{2\omega_-}$. We further have by [16, Lemma 3.1] that $\int_0^\infty e^{q\xi(t)} dt$ is in $L^{\omega_+/q}$ for every $q \in (0, \omega_+]$, since $E[e^{\omega_+\xi(1)}] = e^{\Phi(\omega_+)} < 1$. By this observation and the Burkholder--Davis--Gundy inequality, in order to prove that $\mathcal{M}^-(1)$ is in $L^{\omega_+/\omega_-}$, we only need to check that $\sum_{0 < s < \infty} (-\Delta X(s))^{2\omega_-}$ is in $L^{\omega_+/(2\omega_-)}$. Applying the same arguments to the martingale
\[ \sum_{0 < s \le t} \big( -\Delta X(s) \big)^{2\omega_-} + \Phi(2\omega_-) \int_0^t X(s)^{2\omega_-} ds, \]
it then suffices to prove that $\sum_{0 < s < \infty} (-\Delta X(s))^{4\omega_-}$ is in $L^{\omega_+/(4\omega_-)}$. By iteration, it suffices to prove that there exists $k \ge 1$ such that $\sum_{0 < s < \infty} (-\Delta X(s))^{2^k\omega_-}$ is in $L^{\omega_+/(2^k\omega_-)}$. Choose $k$ large enough such that $\omega_+/(2^k\omega_-) \le 1$; then Jensen's inequality infers that
\[ \Big( \sum_{0 < s < \infty} \big( -\Delta X(s) \big)^{2^k\omega_-} \Big)^{\omega_+/(2^k\omega_-)} \le \sum_{0 < s < \infty} \big( -\Delta X(s) \big)^{\omega_+}. \]
By (4.1), the right-hand side has finite mean value. The claim is proven.

4.2 Many-to-one formula

For the self-similar case with index $\alpha \ne 0$, there does not exist a cumulant in the sense of (2.4). Nevertheless, to describe the mean value of the particles, one can use a one-particle picture developed in [7], which extends Corollary 2 in [3] for self-similar (pure) fragmentations. Suppose that (H1) holds and let
\[ \Phi_+(q) := \kappa(q + \omega_+), \quad q \ge 0. \]
It is known that there exists a self-similar Markov process $Y^+$ with characteristics $(\Phi_+, \alpha)$.

Theorem 4.4 (Many-to-one formula [7, Theorem 3.5]). Let $g$ be a non-negative measurable function on $(0, \infty) \cup \{\partial\}$ with $g(\partial) = 0$. Then for every $x > 0$ and $t \ge 0$, there is the identity
\[ E_x\Big[ \sum_{i \ge 1} g(X_i(t))\, X_i(t)^{\omega_+} \Big] = x^{\omega_+} E^+_x\big[ g(Y^+(t)) \big], \]
where $E^+_x$ denotes the mathematical expectation under the law of $Y^+$ started from $Y^+(0) = x$. Further, a similar result holds for $\omega_-$.

To prove the theorem we need the following formula.

Lemma 4.5. Suppose that $\kappa(q) < 0$. Then
\[ E_x\Big[ \int_0^\infty \Big( \sum_{i \ge 1} X_i(t)^{q+\alpha} \Big) dt \Big] = -\frac{x^q}{\kappa(q)}. \]

Proof. By first using Lemma 3.2 and then (4.1), we have
\[ E_x\Big[ \int_0^\infty \Big( \sum_{i \ge 1} X_i(t)^{q+\alpha} \Big) dt \Big] = E_x\Big[ \sum_{u \in \mathbb{U}} \int_0^{\zeta_u} X_u(t)^{q+\alpha} dt \Big] = -\frac{1}{\Phi(q)} E_x\Big[ \sum_{u \in \mathbb{U}} X_u(0)^q \Big] = -\frac{x^q}{\Phi(q)} \sum_{n=0}^\infty \Big( 1 - \frac{\kappa(q)}{\Phi(q)} \Big)^n. \]
The series is convergent whenever $\kappa(q) < 0$ (then $\Phi(q) \le \kappa(q) < 0$), and the sum is $\Phi(q)/\kappa(q)$, which leads to the desired result.

Proof of Theorem 4.4. For any $\theta > 0$ such that $\kappa(\theta) < 0$, let $\widetilde{\Phi}(q) := \kappa(q + \theta)$ for all $q \ge 0$. Then $\widetilde{\Phi}$ is the Laplace exponent of a killed Lévy process, with killing rate $-\kappa(\theta) > 0$. Define
\[ \langle \rho_t(x, \cdot), f \rangle := x^{-\theta} E_x\Big[ \sum_{i \ge 1} f(X_i(t))\, X_i(t)^\theta \Big]. \]
We shall prove that $\rho_t$ is the transition kernel of a pssmp with characteristics $(\widetilde{\Phi}, \alpha)$. We first notice that the Chapman--Kolmogorov equation holds:
\[ \langle \rho_{t+s}(x, \cdot), f \rangle = \int_{(0,\infty)} \langle \rho_t(y, \cdot), f \rangle \, \rho_s(x, dy). \]
So $\rho_t$ is a sub-probability Markovian kernel. Next, we have the self-similarity:
\[ \langle \rho_t(x, \cdot), f \rangle = \langle \rho_{x^\alpha t}(1, \cdot), f(x\,\cdot) \rangle. \]
Further, for every $q \ge 0$ such that $\widetilde{\Phi}(q) = \kappa(q + \theta) < 0$, by Lemma 4.5 there is
\[ \int_0^\infty dt \int_{(0,\infty)} y^{q+\alpha}\, \rho_t(1, dy) = E_1\Big[ \int_0^\infty \Big( \sum_{i \ge 1} X_i(t)^{\theta + q + \alpha} \Big) dt \Big] = -\frac{1}{\kappa(\theta + q)} = -\frac{1}{\widetilde{\Phi}(q)}. \]
Then it follows essentially from Lamperti's characterization [15] that $\rho_t$ is the transition kernel of a pssmp with characteristics $(\widetilde{\Phi}, \alpha)$; see [7, Lemma 3.6] for details.
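Lemma 4.5 can be checked numerically in the homogeneous binary toy case $\alpha = 0$, $\sigma = k = 0$, $\Lambda = \delta_{\log(1/2)}$ (an illustrative choice of this sketch: each jump halves the mother and creates a daughter of equal size). Then $\kappa(q) = cq + 2^{1-q} - 1 + q/2$, and with $c = -1$, $q = 2$ the lemma predicts $E[\int_0^\infty \sum_i X_i(t)^2\, dt] = -1/\kappa(2) = 2/3$. All names and the truncation scheme below are assumptions of this sketch; the cutoff `eps` introduces a small downward bias.

```python
import math
import random

def total_q_integral(q=2.0, c=-1.0, eps=0.01, seed=7):
    """One Monte-Carlo sample of ∫_0^∞ Σ_i X_i(t)^q dt for the binary toy
    growth-fragmentation: Phi(q) = c q + 2^{-q} - 1 + q/2, so the linear
    drift of xi is b = c + 1/2; jumps occur at rate 1 and halve the cell,
    the daughter receiving the other half.  Cells below mass `eps` are
    discarded (truncation bias)."""
    rng = random.Random(seed)
    b = c + 0.5                       # drift of xi: E[e^{q xi(t)}] = e^{Phi(q)t}
    stack, total = [1.0], 0.0
    while stack:
        x = stack.pop()
        while x >= eps:
            gap = rng.expovariate(1.0)
            # contribution ∫_0^gap (x e^{b s})^q ds between two jumps (b < 0)
            total += x ** q * (math.exp(q * b * gap) - 1.0) / (q * b)
            x *= math.exp(b * gap) / 2.0   # the mother keeps half...
            if x >= eps:
                stack.append(x)            # ...and the daughter gets the rest
    return total
```

Averaging a few hundred samples should land near $2/3$, up to Monte-Carlo error and the truncation bias.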

We end the proof by the change of measure: since $\widetilde{\Phi}(q) = \Phi_+(q + \theta - \omega_+)$, there is the identity
\[ \langle \rho_t(x, \cdot), f \rangle = x^{\omega_+ - \theta} E^+_x\big[ f(Y^+(t))\, Y^+(t)^{\theta - \omega_+} \big], \]
which induces the desired many-to-one formula.

4.3 Two temporal martingales

Theorem 4.6 ([7, Corollary 3.7]). Suppose that (H1) and (H2) hold.

1. If $\alpha \le 0$, then under $\mathbb{P}_x$,
\[ M^+(t) := x^{-\omega_+} \sum_{i \ge 1} X_i(t)^{\omega_+}, \quad t \ge 0, \]
is a martingale with respect to $\mathcal{F}_t := \sigma(\mathbf{X}(s), s \le t)$.

2. If $\alpha > 0$, then $M^+$ is a supermartingale with
\[ E_x[M^+(t)] \sim c\, t^{-(\omega_+ - \omega_-)/\alpha}\, x^{\omega_- - \omega_+}, \quad \text{as } t \to \infty. \]

Proof. Applying the many-to-one formula entails that
\[ E_x[M^+(t)] = P^+_x\big( Y^+(t) \in (0, \infty) \big) = P^+_x(\zeta^+ > t) = P\Big( x^{-\alpha} \int_0^\infty \exp\big( -\alpha\xi^+(s) \big) ds > t \Big), \]
where $\xi^+$ is a SNLP with Laplace exponent $\kappa(\cdot + \omega_+)$. As $\xi^+(t)/t \to \kappa'(\omega_+) > 0$, if $\alpha \le 0$ then the integral is almost surely infinite, which yields $E_x[M^+(t)] = 1$. This is enough to deduce that $M^+$ is a martingale, by the temporal branching property. On the other hand, when $\alpha > 0$, since
\[ E\big[ e^{(\omega_- - \omega_+)\xi^+(1)} \big] = e^{\kappa(\omega_-)} = 1 \quad \text{and} \quad E\big[ |\xi^+(1)|\, e^{(\omega_- - \omega_+)\xi^+(1)} \big] = E\big[ |\xi^-(1)| \big] < \infty \]
(where $\xi^-$ has Laplace exponent $\kappa(\cdot + \omega_-)$), the results in [16] entail that
\[ P\Big( \int_0^\infty \exp\big( -\alpha\xi^+(s) \big) ds > t \Big) \sim c\, t^{-(\omega_+ - \omega_-)/\alpha}. \]
We complete the proof.

Theorem 4.7 ([7, Corollary 3.9, Theorem 3.10]). Suppose that (H1) and (H2) hold.

1. If $\alpha \ge 0$, then
\[ M^-(t) := x^{-\omega_-} \sum_{i \ge 1} X_i(t)^{\omega_-}, \quad t \ge 0, \]
is a uniformly integrable $\mathcal{F}_t$-martingale under $\mathbb{P}_x$, and $M^-$ is bounded in $L^p(\mathbb{P}_x)$ for any $p \in [1, \omega_+/\omega_-)$. Further, there is $M^-(\infty) = \mathcal{M}^-(\infty)$ under $\mathbb{P}_x$.

2. If $\alpha < 0$, then $M^-$ is a supermartingale with
\[ E_x[M^-(t)] \sim c'\, t^{(\omega_+ - \omega_-)/\alpha}\, x^{\omega_+ - \omega_-}, \quad \text{as } t \to \infty. \]

17 Proof. We shall only prove that when α, there is M ( ) = M ( ) under P x. Then M is bounded in L p (P x ) for any p [1, ω + ω ) by Theorem 2.6. The other parts can be deduced by similar arguments as in M + case. Let us introduce X(t) := {(X u (t b u ), u ) : u U, b u t < b u + ζ u }, and define a new filtration F t := σ( X(s), s t). Then a variation of the branching property still holds [TODO ; see [7, Lemma 3.2 for details. By this branching property and (4.1), we have that [ P x 1 {bu t} X u (s b u ) ω F t = 1 {bv t<bv+ζ v}x u (t b v ) ω. t s<b u+ζ u u =n Then we have by the cell system construction that [ P x [M (n) F t = P x X u () ω F t = u =n+1 u =n+1 v n 1 {bu t}x u () ω + 1 {bv t<bv+ζ v}x v (t b v ) ω. Letting n, recall that M (n) converges to M ( ) in L p for any p [1, ω + ω ), then v n P x [M ( ) F t 1 {bu t<bu+ζ u}x u (t b u ) ω = M (t). u U On the other hand, by the many-to-one formula we have ( [M (t) = P x (Y (t) (, )) = P x (ζ > t) = P x x α ) exp( αξ (s))ds > t, where ξ is a SNLP with Laplace exponent κ( + ω ). When α, we have [M (t) = 1. So [M (t) = [M ( ) and we hence conclude that P x [M ( ) F t = M (t). This completes the proof. 4.4 Asymptotics of self-similar growth-fragmentations with α > We finally use Theorem 4.7 to establish the following statement. Theorem 4.8 ([12, Theorem 3.4). Let η be a SNLP with Laplace exponent Φ ( ) := κ( + ω ). Suppose that η is not arithmetic, that is there is no r > such that P(η (t) rz) = 1 for all t. Let I := exp(αη (s))ds and define a measure ρ on (, ): for every f with compact support on (, ), 1 f(y)ρ(dy) := ακ (ω ) E[I 1 f(i 1 α ). Then ρ is a probability measure, and we have for every p (1, ω= ω ), lim t X i (t) ω f(t 1 α Xi (t)) = M ( ) f(y)ρ(dy) in L p (P 1 ). Proof. Since (Φ ) (+) = κ (ω ) < and α >, we have by Lemma 3.4 that ρ is indeed a well-defined probability measure. Let H(t + t 2 ) := X i(t + t 2 ) ω f((t + t 2 ) 1 α X i (t + t 2 )). 
By the branching property, we can write
\[
\mathbb E_1\big[H(t+t^2)\,\big|\,\mathcal F_t\big] = \sum_i X_i(t)^{\omega_-}\, A_i(t)
\]

with
\[
A_i(t) := \sum_{j=1}^{\infty} X_{i,j}\big(X_i(t)^{\alpha}\, t^2\big)^{\omega_-}\, f\Big( (t+t^2)^{1/\alpha}\, X_i(t)\, X_{i,j}\big(X_i(t)^{\alpha}\, t^2\big) \Big),
\]
where $\mathbf X^{[i]} := (X_{i,1}(t), X_{i,2}(t), \ldots)_{t\ge 0}$ are i.i.d. copies of $\mathbf X$, further independent of $\mathcal F_t$. By the many-to-one formula, we deduce that
\[
\mathbb E_1[A_i(t)] = \mathbb E_1\Big[ f\Big( (1+t^{-1})^{1/\alpha}\, x_i\, t^{2/\alpha}\, Y_-\big(x_i^{\alpha} t^2\big) \Big) \Big]\Big|_{x_i = X_i(t)}.
\]
For every $i$, by Lemma 3.4 we have that $x_i\, t^{2/\alpha}\, Y_-(x_i^{\alpha} t^2)$ converges to $\rho$ weakly as $t\to\infty$. So we have, uniformly for all $x_i^{\alpha} t^2 > t^{1/2}$, i.e. $x_i > t^{-3/(2\alpha)}$,
\[
\lim_{t\to\infty} \mathbb E_1\Big[ f\Big( (1+t^{-1})^{1/\alpha}\, x_i\, t^{2/\alpha}\, Y_-\big(x_i^{\alpha} t^2\big) \Big) \Big] = \int_0^\infty f(y)\,\rho(\mathrm{d}y).
\]
On the other hand, the many-to-one formula leads to
\[
\mathbb E_1\Big[ \sum_i X_i(t)^{\omega_-}\, \mathbf 1_{\{X_i(t) \le t^{-3/(2\alpha)}\}} \Big] = P_1\big( t^{1/\alpha}\, Y_-(t) \le t^{-1/(2\alpha)} \big),
\]
which tends to $0$. Since $M_-$ is bounded in $L^p(P_1)$, the convergence also holds in $L^p(P_1)$. Summarizing, we conclude that
\[
\lim_{t\to\infty} \mathbb E\big[H(t+t^2)\,\big|\,\mathcal F_t\big] = M_-(\infty) \int_0^\infty f(y)\,\rho(\mathrm{d}y) \qquad \text{in } L^p(P_1).
\]
It remains to prove that
\[
\lim_{t\to\infty} \Big( H(t+t^2) - \mathbb E\big[H(t+t^2)\,\big|\,\mathcal F_t\big] \Big) = 0 \qquad \text{in } L^p(P_1).
\]
Since $M_-$ is bounded in $L^p(P_1)$, using Doob's maximal inequality, we have that $\|f\|_\infty\, \sup_{t\ge 0} \sum_{j=1}^{\infty} X_{i,j}(t)^{\omega_-}$ is in $L^p(P_1)$. We can also obtain from the many-to-one formula that
\[
\lim_{t\to\infty} \mathbb E_1\Big[ \sum_i X_i(t)^{p\,\omega_-} \Big] = 0.
\]
Therefore, by a variation of the law of large numbers [4, Lemma 1.5], we prove the claim.

Acknowledgement

These lecture notes are based on mini-courses given in Oxford and CIMAT. I am very grateful to the organizers, Christina Goldschmidt, Andreas Kyprianou and Juan Carlos Pardo, for giving me the opportunity and for the organization. I also thank the participants for their questions and fruitful comments.

References

[1] K. B. Athreya. Change of measures for Markov chains and the L log L theorem for branching processes. Bernoulli, 6(2):323-338, 2000.

[2] J. Bertoin. Lévy processes, volume 121 of Cambridge Tracts in Mathematics. Cambridge University Press, Cambridge, 1996.

[3] J. Bertoin. Self-similar fragmentations. Ann. Inst. H. Poincaré Probab. Statist., 38(3):319-340, 2002.

[4] J. Bertoin. Random fragmentation and coagulation processes, volume 102 of Cambridge Studies in Advanced Mathematics. Cambridge University Press, Cambridge, 2006.

[5] J. Bertoin. Compensated fragmentation processes and limits of dilated fragmentations. Ann. Probab., 44(2):1254-1284, 2016.

[6] J. Bertoin. Markovian growth-fragmentation processes. Bernoulli, 23(2):1082-1101, 2017.

[7] J. Bertoin, T. Budd, N. Curien, and I. Kortchemski. Martingales in self-similar growth-fragmentations and their connections with random planar maps. Preprint, arXiv: v1 [math.PR], 2016.

[8] J. Bertoin and R. Stephenson. Local explosion in self-similar growth-fragmentation processes. Electron. Commun. Probab., 21: paper no. 66, 12 pp., 2016.

[9] J. Bertoin and M. Yor. The entrance laws of self-similar Markov processes and exponential functionals of Lévy processes. Potential Anal., 17(4):389-400, 2002.

[10] J. D. Biggins. Martingale convergence in the branching random walk. J. Appl. Probability, 14(1):25-37, 1977.

[11] J. D. Biggins. Uniform convergence of martingales in the branching random walk. Ann. Probab., 20(1):137-151, 1992.

[12] B. Dadoun. Asymptotics of self-similar growth-fragmentation processes. Electron. J. Probab., 22: 30 pp., 2017.

[13] P. Jagers. General branching processes as Markov fields. Stochastic Process. Appl., 32(2):183-212, 1989.

[14] A. E. Kyprianou. Fluctuations of Lévy processes with applications. Springer, second edition, 2014.

[15] J. Lamperti. Semi-stable Markov processes. I. Z. Wahrscheinlichkeitstheorie und Verw. Gebiete, 22:205-225, 1972.

[16] V. Rivero. Tail asymptotics for exponential functionals of Lévy processes: the convolution equivalent case. Ann. Inst. Henri Poincaré Probab. Stat., 48(4):1081-1102, 2012.

[17] Q. Shi. Growth-fragmentation processes and bifurcators. Electron. J. Probab., 22: 25 pp., 2017.

[18] Q. Shi and A. R. Watson. Probability tilting of compensated fragmentations. Preprint, arXiv: v1 [math.PR], 2017.

[19] Z. Shi. Branching random walks, volume 2151 of Lecture Notes in Mathematics. Springer, Cham, 2015.


More information

Lecture 22 Girsanov s Theorem

Lecture 22 Girsanov s Theorem Lecture 22: Girsanov s Theorem of 8 Course: Theory of Probability II Term: Spring 25 Instructor: Gordan Zitkovic Lecture 22 Girsanov s Theorem An example Consider a finite Gaussian random walk X n = n

More information

An Introduction to Probability Theory and Its Applications

An Introduction to Probability Theory and Its Applications An Introduction to Probability Theory and Its Applications WILLIAM FELLER (1906-1970) Eugene Higgins Professor of Mathematics Princeton University VOLUME II SECOND EDITION JOHN WILEY & SONS Contents I

More information

Product measure and Fubini s theorem

Product measure and Fubini s theorem Chapter 7 Product measure and Fubini s theorem This is based on [Billingsley, Section 18]. 1. Product spaces Suppose (Ω 1, F 1 ) and (Ω 2, F 2 ) are two probability spaces. In a product space Ω = Ω 1 Ω

More information

ITÔ S ONE POINT EXTENSIONS OF MARKOV PROCESSES. Masatoshi Fukushima

ITÔ S ONE POINT EXTENSIONS OF MARKOV PROCESSES. Masatoshi Fukushima ON ITÔ S ONE POINT EXTENSIONS OF MARKOV PROCESSES Masatoshi Fukushima Symposium in Honor of Kiyosi Itô: Stocastic Analysis and Its Impact in Mathematics and Science, IMS, NUS July 10, 2008 1 1. Itô s point

More information

Markov processes and queueing networks

Markov processes and queueing networks Inria September 22, 2015 Outline Poisson processes Markov jump processes Some queueing networks The Poisson distribution (Siméon-Denis Poisson, 1781-1840) { } e λ λ n n! As prevalent as Gaussian distribution

More information

Jump Processes. Richard F. Bass

Jump Processes. Richard F. Bass Jump Processes Richard F. Bass ii c Copyright 214 Richard F. Bass Contents 1 Poisson processes 1 1.1 Definitions............................. 1 1.2 Stopping times.......................... 3 1.3 Markov

More information

Intertwining of Markov processes

Intertwining of Markov processes January 4, 2011 Outline Outline First passage times of birth and death processes Outline First passage times of birth and death processes The contact process on the hierarchical group 1 0.5 0-0.5-1 0 0.2

More information

Mi-Hwa Ko. t=1 Z t is true. j=0

Mi-Hwa Ko. t=1 Z t is true. j=0 Commun. Korean Math. Soc. 21 (2006), No. 4, pp. 779 786 FUNCTIONAL CENTRAL LIMIT THEOREMS FOR MULTIVARIATE LINEAR PROCESSES GENERATED BY DEPENDENT RANDOM VECTORS Mi-Hwa Ko Abstract. Let X t be an m-dimensional

More information

The Azéma-Yor Embedding in Non-Singular Diffusions

The Azéma-Yor Embedding in Non-Singular Diffusions Stochastic Process. Appl. Vol. 96, No. 2, 2001, 305-312 Research Report No. 406, 1999, Dept. Theoret. Statist. Aarhus The Azéma-Yor Embedding in Non-Singular Diffusions J. L. Pedersen and G. Peskir Let

More information

Poisson random measure: motivation

Poisson random measure: motivation : motivation The Lévy measure provides the expected number of jumps by time unit, i.e. in a time interval of the form: [t, t + 1], and of a certain size Example: ν([1, )) is the expected number of jumps

More information

Ergodic Theorems. Samy Tindel. Purdue University. Probability Theory 2 - MA 539. Taken from Probability: Theory and examples by R.

Ergodic Theorems. Samy Tindel. Purdue University. Probability Theory 2 - MA 539. Taken from Probability: Theory and examples by R. Ergodic Theorems Samy Tindel Purdue University Probability Theory 2 - MA 539 Taken from Probability: Theory and examples by R. Durrett Samy T. Ergodic theorems Probability Theory 1 / 92 Outline 1 Definitions

More information

Weighted Sums of Orthogonal Polynomials Related to Birth-Death Processes with Killing

Weighted Sums of Orthogonal Polynomials Related to Birth-Death Processes with Killing Advances in Dynamical Systems and Applications ISSN 0973-5321, Volume 8, Number 2, pp. 401 412 (2013) http://campus.mst.edu/adsa Weighted Sums of Orthogonal Polynomials Related to Birth-Death Processes

More information

Lecture 4: Introduction to stochastic processes and stochastic calculus

Lecture 4: Introduction to stochastic processes and stochastic calculus Lecture 4: Introduction to stochastic processes and stochastic calculus Cédric Archambeau Centre for Computational Statistics and Machine Learning Department of Computer Science University College London

More information