Quasi-stationary distributions and Fleming-Viot processes
Pablo A. Ferrari, Nevena Marić
Universidade de São Paulo
Example. Continuous-time Markov chain in $\Lambda = \{0,1,2\}$ with transition rates $q(1,0) = q(1,2) = q(2,1) = 1$ and $q(0,1) = q(0,2) = 0$ ($0$ is the absorbing state). If one starts with (say) many chains in state 1, which proportion of the surviving chains will be in state 1 at time 1? And as $t \to \infty$?
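The limiting proportion asked about here can be estimated by Monte Carlo. The following sketch (not part of the talk; function names and parameters are ours) simulates many copies of the chain started at 1 and computes the fraction of survivors found in state 1.

```python
import random

def run_chain(t_end, rng):
    """One trajectory of the chain on {0,1,2} started at 1; returns its state at t_end."""
    state, t = 1, 0.0
    while state != 0:
        # total jump rate: 2 from state 1 (q(1,0)+q(1,2)), 1 from state 2
        t += rng.expovariate(2.0 if state == 1 else 1.0)
        if t >= t_end:
            return state
        if state == 1:
            state = 0 if rng.random() < 0.5 else 2   # q(1,0) = q(1,2) = 1
        else:
            state = 1                                # q(2,1) = 1
    return 0

def survivors_in_state_1(t_end, n=100_000, seed=1):
    """Fraction of non-absorbed chains that sit in state 1 at time t_end."""
    rng = random.Random(seed)
    finals = [run_chain(t_end, rng) for _ in range(n)]
    alive = [s for s in finals if s != 0]
    return sum(s == 1 for s in alive) / len(alive)

p1 = survivors_in_state_1(t_end=5.0)
print(p1)  # should settle near 0.38 as t grows (the qsd weight of state 1)
```

For larger `t_end` the estimate stabilizes, which is exactly the Yaglom-limit phenomenon the next slides formalize.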
Example 2.1 of Burdzy, Holyst and March. $\Lambda = \{0,1,2\}$ and $q(1,0) = q(1,2) = q(2,1) = 1$. The qsd is
$$\nu(1) = \frac{1}{1+\varphi}, \qquad \nu(2) = \frac{\varphi}{1+\varphi},$$
where $\varphi = \frac{1+\sqrt{5}}{2}$ is the golden number.
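This qsd can be checked numerically as the normalized left Perron eigenvector of $Q$ restricted to $\{1,2\}$ (the eigenvector characterization appears on the next slide). A numpy sketch, not from the slides:

```python
import numpy as np

# Generator Q restricted to {1,2} for the example above:
# state 1 has rates q(1,0)=q(1,2)=1 (so diagonal -2), state 2 has q(2,1)=1.
Q = np.array([[-2.0,  1.0],
              [ 1.0, -1.0]])

vals, vecs = np.linalg.eig(Q.T)      # left eigenvectors of Q via transpose
vals, vecs = vals.real, vecs.real    # eigenvalues are real for this matrix
k = np.argmax(vals)                  # Perron eigenvalue (closest to zero)
nu = np.abs(vecs[:, k])
nu /= nu.sum()                       # normalize to a probability

phi = (1 + 5 ** 0.5) / 2
print(nu, nu[1] / nu[0])             # the ratio nu(2)/nu(1) equals phi
```

The ratio $\nu(2)/\nu(1)$ comes out as the golden number, matching the closed form above.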
Quasi-stationary distributions (qsd). Let $Z_t$ be an irreducible jump Markov process with rates $Q = (q(x,y))$ on $\Lambda \cup \{0\}$, with $\Lambda$ countable, $0$ absorbing, and transition matrix $P_t(x,y)$. $Z_t$ is ergodic with unique invariant measure $\delta_0$. The law starting from $\mu$, conditioned on non-absorption up to time $t$, is
$$\varphi_t^\mu(x) = \frac{\sum_{y\in\Lambda} \mu(y)\,P_t(y,x)}{1 - \sum_{y\in\Lambda} \mu(y)\,P_t(y,0)}, \qquad x \in \Lambda.$$
A quasi-stationary distribution (qsd) is a probability measure $\nu$ on $\Lambda$ satisfying $\varphi_t^\nu = \nu$.
$\nu$ is a left eigenvector of the restriction of the matrix $Q$ to $\Lambda$, with eigenvalue $-\lambda_\nu$ where $\lambda_\nu = \sum_{y\in\Lambda} \nu(y)\,q(y,0)$: $\nu$ must satisfy the system
$$\sum_{y\in\Lambda} \nu(y)\,q(y,x) = -\Big(\sum_{y\in\Lambda} \nu(y)\,q(y,0)\Big)\,\nu(x), \qquad x \in \Lambda.$$
Recalling that $q(x,x) = -\sum_{y\in\Lambda\cup\{0\},\,y\neq x} q(x,y)$, this is equivalent to the balance equations
$$\sum_{y\in\Lambda} \nu(y)\,\big[\,q(y,x) + q(y,0)\,\nu(x)\,\big] = 0, \qquad x \in \Lambda.$$
Yaglom limit for $\mu$: $\lim_{t\to\infty} \varphi_t^\mu(y)$, $y\in\Lambda$, if it exists and is a probability on $\Lambda$.

$\Lambda$ finite: Darroch and Seneta (1967) showed that there exists a unique qsd $\nu$ for $Q$ and that the Yaglom limit converges to $\nu$ independently of the initial distribution.

$\Lambda$ infinite: neither existence nor uniqueness of the qsd is guaranteed. Example (Seneta): the asymmetric random walk with $p = q(i,i+1) = 1 - q(i,i-1)$ for $i \ge 1$. In this case there are infinitely many qsd when $p < 1/2$ and none when $p \ge 1/2$. The minimal qsd (for $p < 1/2$) satisfies
$$\nu(x) \propto x \Big(\frac{p}{1-p}\Big)^{x/2}.$$
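The minimal qsd of the asymmetric walk can be checked against the balance equations of the previous slide on a truncated state space. A sketch (truncation level and tolerances are ours, not from the talk):

```python
import numpy as np

# Check that nu(x) ∝ x * (p/(1-p))^(x/2) satisfies the qsd balance equations
# for the walk with rates q(x,x+1)=p, q(x,x-1)=1-p, absorption from state 1.
p, N = 0.3, 400                      # p < 1/2; truncate at N states
r = (p / (1 - p)) ** 0.5
x = np.arange(1, N + 1)
nu = x * r ** x
nu /= nu.sum()                       # normalized minimal qsd (truncated)

Q = np.zeros((N, N))                 # restricted generator on {1,...,N}
for i in range(N):
    if i + 1 < N:
        Q[i, i + 1] = p              # q(x, x+1) = p
    if i > 0:
        Q[i, i - 1] = 1 - p          # q(x, x-1) = 1 - p
    Q[i, i] = -1.0                   # total jump rate is 1 from every state

q_abs = np.zeros(N)
q_abs[0] = 1 - p                     # absorption happens only from state 1
lam = nu @ q_abs                     # absorption rate of the qsd
balance = nu @ Q + lam * nu          # should vanish, up to truncation error
print(lam, np.max(np.abs(balance)))
```

The computed absorption rate agrees with the closed form $\lambda_\nu = 1 - 2\sqrt{p(1-p)}$, and the balance residuals are at truncation-error level.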
Existence. For $\Lambda = \mathbb{N}$, under the condition
$$\lim_{x\to\infty} P(R > t \mid Z_0 = x) = 1 \quad \text{for each } t > 0,$$
where $R$ is the absorption time, a qsd exists if and only if $E e^{\theta R} < \infty$ for some $\theta > 0$ (Ferrari, Kesten, Martínez and Picco [6]).
Existence. Ergodicity coefficient of $Q$:
$$\alpha = \alpha(Q) := \sum_{z\in\Lambda}\ \inf_{x\in\Lambda\setminus\{z\}} q(x,z).$$
Maximal absorption rate of $Q$:
$$C = C(Q) := \sup_{x\in\Lambda} q(x,0).$$
Theorem 1. If $\alpha > C$ then there exists a unique qsd $\nu$ for $Q$, and the Yaglom limit converges to $\nu$ for any initial measure $\mu$.
Jacka and Roberts [10]: under $\alpha > C$, uniqueness and convergence of the Yaglom limit.
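For the two-state example above, $\alpha$ and $C$ are easy to compute directly, confirming that Theorem 1's hypothesis $\alpha > C$ holds there. A small sketch (the dictionary encoding of the rates is ours):

```python
# Rates of the example on Λ = {1,2} with absorbing state 0.
q = {(1, 0): 1.0, (1, 2): 1.0, (2, 1): 1.0, (2, 0): 0.0}
states = [1, 2]

# alpha = sum over z of the infimum of q(x,z) over x ≠ z
alpha = sum(min(q.get((x, z), 0.0) for x in states if x != z) for z in states)
# C = sup over x of the absorption rate q(x,0)
C = max(q.get((x, 0), 0.0) for x in states)

print(alpha, C, alpha > C)  # alpha = 2, C = 1, so alpha > C holds
```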
The Fleming-Viot process (fv). A system of $N$ particles evolving on $\Lambda$. Particles move independently with rates $Q$ between absorptions. When a particle is absorbed, it chooses one of the other particles uniformly at random and jumps instantaneously to its position. Generator:
$$Lf(\xi) = \sum_{i=1}^N \sum_{y\in\Lambda\setminus\{\xi(i)\}} \Big[\, q(\xi(i),y) + q(\xi(i),0)\,\frac{\eta(\xi,y)}{N-1} \,\Big]\,\big(f(\xi^{i,y}) - f(\xi)\big),$$
where $\xi^{i,y}(j) = y$ for $j = i$, $\xi^{i,y}(j) = \xi(j)$ otherwise, and $\eta(\xi,y) := \sum_{i=1}^N \mathbf{1}\{\xi(i) = y\}$.
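The fv dynamics can be simulated directly for the two-state example: each particle jumps with its own rates, and on absorption it copies the state of a uniformly chosen other particle. A Gillespie-style sketch (function names, seed, and sizes are ours):

```python
import random

def fv_run(xi, t_end, rng):
    """Evolve the fv particle configuration xi (list of states in {1,2}) up to t_end."""
    t = 0.0
    rates = {1: 2.0, 2: 1.0}                   # total jump rate per state
    while True:
        total = sum(rates[s] for s in xi)
        t += rng.expovariate(total)            # time of the next event
        if t >= t_end:
            return xi
        # pick the jumping particle proportionally to its rate
        u, acc, i = rng.random() * total, 0.0, len(xi) - 1
        for j, s in enumerate(xi):
            acc += rates[s]
            if u < acc:
                i = j
                break
        if xi[i] == 2:
            xi[i] = 1                          # q(2,1) = 1
        elif rng.random() < 0.5:
            xi[i] = 2                          # q(1,2) = 1
        else:                                  # absorption attempt: q(1,0) = 1
            j = rng.randrange(len(xi) - 1)     # uniform among the other particles
            xi[i] = xi[j if j < i else j + 1]

rng = random.Random(7)
N = 500
xi = fv_run([1] * N, t_end=20.0, rng=rng)
frac1 = xi.count(1) / N
print(frac1)  # for large N and t, close to the qsd weight nu(1) ≈ 0.382
```

Note that absorption is replaced by the uniform jump, so the particle system never dies; this is the mechanism that lets it track the conditioned law.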
Empirical profile and conditioned process. $\xi_t$ is a process in $\Lambda^{\{1,\dots,N\}}$; $\eta_t \in \{\eta \in \mathbb{N}^\Lambda : \sum_x \eta(x) = N\}$ is the unlabeled process, with $\eta_t(x) =$ number of $\xi$-particles in state $x$ at time $t$.

Theorem 2. Let $\mu$ be a probability on $\Lambda$ and assume $(\xi_0^{N,\mu}(i),\ i = 1,\dots,N)$ are iid with law $\mu$. Then, for $t > 0$ and $x \in \Lambda$,
$$\lim_{N\to\infty} \frac{E\eta_t^{N,\mu}(x)}{N} = \varphi_t^\mu(x), \qquad \lim_{N\to\infty} \frac{\eta_t^{N,\mu}(x)}{N} = \varphi_t^\mu(x) \ \text{ in probability}.$$
Related results: Fleming and Viot [8]; Burdzy, Holyst and March [1], Grigorescu and Kang [9] and Löbus [12] in a Brownian motion setting.
Ergodicity of fv. For $\Lambda$ finite, fv is a Markov process on a finite state space, hence ergodic (there exists a unique stationary measure and the process converges to it). For $\Lambda$ infinite:

Theorem 3. If $\alpha > 0$, then for each $N$ the fv process with $N$ particles is ergodic.
Stationary empirical profile and qsd. Assume ergodicity and let $\eta^N$ be distributed according to the unique invariant measure.

Theorem 4. Assume $\alpha > C$. For each $x \in \Lambda$ the limit
$$\lim_{N\to\infty} \frac{\eta^N(x)}{N} = \nu(x) \ \text{ in probability}$$
exists, and $\nu$ is the unique qsd for $Q$.
Sketch of proofs. The existence part of Theorem 1 is a corollary of Theorem 4; uniqueness is due to Jacka and Roberts. Theorem 3: construct a stationary version of the process from the past, as in perfect simulation.
Theorems 2 and 4 are based on asymptotic independence. $\varphi_t^\mu$ is the unique solution of
$$\frac{d}{dt}\varphi_t^\mu(x) = \sum_{y\in\Lambda} \varphi_t^\mu(y)\,\big[\,q(y,x) + q(y,0)\,\varphi_t^\mu(x)\,\big], \qquad x \in \Lambda,$$
while $\eta_t$ satisfies
$$\frac{d}{dt}\, E\Big(\frac{\eta_t^{N,\mu}(x)}{N}\Big) = \sum_{y\in\Lambda} E\Big(\frac{\eta_t^{N,\mu}(y)}{N}\Big(q(y,x) + q(y,0)\,\frac{\eta_t^{N,\mu}(x)}{N-1}\Big)\Big).$$
We prove:
$$E\big[\eta_t^{N,\mu}(y)\,\eta_t^{N,\mu}(x)\big] - E\eta_t^{N,\mu}(y)\,E\eta_t^{N,\mu}(x) = O(N).$$
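The nonlinear ODE for $\varphi_t^\mu$ can be integrated numerically for the two-state example. A forward-Euler sketch (step size and horizon are our choices); the solution converges to the qsd computed earlier:

```python
import numpy as np

# Integrate d/dt phi(x) = sum_y phi(y) [ q(y,x) + q(y,0) phi(x) ]
# on Λ = {1,2}, starting from delta_1.
Q = np.array([[-2.0, 1.0], [1.0, -1.0]])  # restricted generator
q0 = np.array([1.0, 0.0])                 # absorption rates q(.,0)
phi = np.array([1.0, 0.0])                # initial law: delta_1
dt = 1e-3
for _ in range(20_000):                   # integrate up to t = 20
    # drift = phi Q + (absorption rate under phi) * phi; preserves sum(phi)=1
    phi = phi + dt * (phi @ Q + (phi @ q0) * phi)

phi1, phi2 = phi
print(phi1, phi2)  # should approach (2 - golden ratio, golden ratio - 1)
```

The fixed points of this ODE are exactly the solutions of the balance equations, i.e. the qsd, which is why the trajectory settles at $\nu$.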
The qsd satisfies
$$\sum_{y\in\Lambda} \nu(y)\,\big[\,q(y,x) + q(y,0)\,\nu(x)\,\big] = 0, \qquad x \in \Lambda,$$
while $\eta^N$ invariant for fv satisfies
$$\sum_{y\in\Lambda} E\Big(\frac{\eta^N(y)}{N}\Big(q(y,x) + q(y,0)\,\frac{\eta^N(x)}{N-1}\Big)\Big) = 0, \qquad x \in \Lambda.$$
Under $\alpha > C$:
$$E\big[\eta^N(y)\,\eta^N(x)\big] - E\eta^N(y)\,E\eta^N(x) = O(N),$$
so, setting $x = y$, the variance of $\eta^N(x)/N$ is of order $1/N$. Finally we show that $(\varphi_t^{N,\mu},\ N\in\mathbb{N})$ and $(\rho^N,\ N\in\mathbb{N})$ are tight.
Comments. Fleming-Viot permits showing existence of a qsd in the case $\alpha > C$ (new). Compared with Brownian motion in a bounded region with absorbing boundary (Burdzy, Holyst and March [1], Grigorescu and Kang [9], Löbus [12] and other related works): here existence of the fv process is immediate, while they prove the convergence without factorization. We prove that vanishing correlations suffice for convergence of expectations and convergence in probability. To prove tightness we classify the $\xi$-particles into types; the tightness proof needs $\alpha > C$, as does the vanishing-correlations proof.
Graphical construction of the fv process. To each particle $i = 1,\dots,N$ associate three marked Poisson processes:
- Regeneration times: PP($\alpha$): $(a_n^i)_{n\in\mathbb{Z}}$, marks $(A_n^i)_{n\in\mathbb{Z}}$.
- Internal times: PP($\bar q - \alpha$): $(b_n^i)_{n\in\mathbb{Z}}$, marks $((B_n^i(x),\ x\in\Lambda),\ n\in\mathbb{Z})$.
- Voter times: PP($C$): $(c_n^i)_{n\in\mathbb{Z}}$, marks $((C_n^i, (F_n^i(x),\ x\in\Lambda)),\ n\in\mathbb{Z})$.
Law of marks:
$$P(A_n^i = y) = \alpha(y)/\alpha, \qquad y\in\Lambda;$$
$$P(B_n^i(x) = y) = \frac{q(x,y) - \alpha(y)}{\bar q - \alpha}, \quad x\in\Lambda,\ y\in\Lambda\setminus\{x\}; \qquad P(B_n^i(x) = x) = 1 - \sum_{y\in\Lambda\setminus\{x\}} P(B_n^i(x) = y);$$
$$P(F_n^i(x) = 1) = \frac{q(x,0)}{C} = 1 - P(F_n^i(x) = 0), \qquad x\in\Lambda;$$
$$P(C_n^i = j) = \frac{1}{N-1}, \qquad j \neq i.$$
Call $\omega$ a realization of the marked Poisson processes.
Construction of $\xi_{[s,t]}^{N,\xi} = \xi_{[s,t],\omega}^{N,\xi}$. Order the Poisson times. Start from configuration $\xi$ at time $s$; the configuration does not change between Poisson events.
- At each regeneration time $a_n^i$, particle $i$ adopts state $A_n^i$, regardless of the current configuration.
- If at the internal time $b_n^i$ the state of particle $i$ is $x$, then at time $b_n^i$ particle $i$ adopts state $B_n^i(x)$, regardless of the states of the other particles.
- If at the voter time $c_n^i$ the state of particle $i$ is $x$ and $F_n^i(x) = 1$, then at time $c_n^i$ particle $i$ adopts the state of particle $C_n^i$; if $F_n^i(x) = 0$, particle $i$ does not change state.
The final configuration is $\xi_{[s,t]}^{N,\xi}$.
Lemma 1. The process $(\xi_{[s,t]}^{N,\xi},\ t \ge s)$ is fv with initial condition $\xi_{[s,s]}^{N,\xi} = \xi$.
Generalized duality. Define
$$\omega^i[s,t] = \{m \in \omega : m \text{ is involved in the definition of } \xi_{[s,t],\omega}^{N,\xi}(i)\}.$$
Generalized duality equation:
$$\xi_{[s,t],\omega}^{N,\xi}(i) = H(\omega^i[s,t], \xi). \tag{1}$$
There is no explicit formula for $H$. For any time $s$, $\xi_{[s,t]}^{N,\xi}(i)$ depends only on the finite number of Poisson events contained in $\omega^i[s,t]$.
Theorem 3. If $\alpha > 0$ the fv process is ergodic.
Proof. If the number of marks in $\omega^i[-\infty,t]$ is finite, then
$$\xi_{t,\omega}^N(i) := \lim_{s\to-\infty} H(\omega^i[s,t],\xi), \qquad i\in\{1,\dots,N\},\ t\in\mathbb{R},$$
is well defined and does not depend on $\xi$. By construction, $(\xi_t^N,\ t\in\mathbb{R})$ is a stationary fv process, and the law of $\xi_t^N$ is the unique invariant measure. The number of points in $\omega^i[-\infty,t]$ is finite if there is an interval $[s(\omega), s(\omega)+1]$ in the past of $t$ containing one regeneration mark for each particle $k$ and no voter marks.
Particle correlations in the fv process.
Proposition 1. Let $x, y \in \Lambda$. For all $t > 0$,
$$\Big|\, E\Big(\frac{\eta_t^N(x)\,\eta_t^N(y)}{N^2}\Big) - E\Big(\frac{\eta_t^N(x)}{N}\Big)\,E\Big(\frac{\eta_t^N(y)}{N}\Big) \Big| < \frac{1}{N}\, e^{2Ct}. \tag{2}$$
Assume $\alpha > C$ and let $\eta^N$ be distributed according to the unique invariant measure of the fv process with $N$ particles. Then
$$\Big|\, E\Big(\frac{\eta^N(x)\,\eta^N(y)}{N^2}\Big) - E\Big(\frac{\eta^N(x)}{N}\Big)\,E\Big(\frac{\eta^N(y)}{N}\Big) \Big| < \frac{1}{N}\, \frac{\alpha}{\alpha - C}. \tag{3}$$
Coupling. A 4-fold coupling $(\omega^i[s,t],\ \omega^j[s,t],\ \hat\omega^i[s,t],\ \hat\omega^j[s,t])$ such that:
- $\omega^i[s,t] = \hat\omega^i[s,t]$;
- $\hat\omega^j[s,t] \cap \omega^i[s,t] = \emptyset$ implies $\omega^j[s,t] = \hat\omega^j[s,t]$;
- the marginal process $(\hat\omega^i[s,t], \hat\omega^j[s,t])$ has the same law as two independent processes with the same marginals as $(\omega^i[s,t], \omega^j[s,t])$.
$$P\big(\xi_t^{N,\xi}(j) = x,\ \xi_t^{N,\xi}(i) = y\big) - P\big(\xi_t^{N,\xi}(j) = x\big)\,P\big(\xi_t^{N,\xi}(i) = y\big)$$
$$= E\Big( \mathbf{1}\{H(\omega^j,\xi) = x,\ H(\omega^i,\xi) = y\} - \mathbf{1}\{H(\hat\omega^j,\xi) = x,\ H(\hat\omega^i,\xi) = y\} \Big).$$
If $\omega^i \cap \omega^j = \emptyset$, then $\omega^j(s,t) = \hat\omega^j(s,t)$ and $\omega^i(s,t) = \hat\omega^i(s,t)$. Hence
$$\big|\, P\big(\xi_t^{N,\xi}(j) = x,\ \xi_t^{N,\xi}(i) = y\big) - P\big(\xi_t^{N,\xi}(j) = x\big)\,P\big(\xi_t^{N,\xi}(i) = y\big) \,\big| \le P\big(\omega^i \cap \omega^j \neq \emptyset\big).$$
Lemma 2. (Take $s = 0$.)
$$P\big(\omega^i \cap \omega^j \neq \emptyset\big) \le \frac{1}{N-1}\, \frac{C}{\alpha - C}\, \big(1 - e^{2(C-\alpha)t}\big). \tag{4}$$
Proof:
$$P\big(\omega^i \cap \omega^j \neq \emptyset\big) \le \frac{2C}{N-1} \int_0^t E\hat\Psi^i[s,t]\; E\hat\Psi^j[s,t]\, ds,$$
where $\hat\Psi^i[s,t]$ is a random walk that grows at rate $Cx$ and decreases at rate $\alpha x$; its expectation is bounded by $e^{(t-s)(C-\alpha)}$. Hence
$$P\big(\omega^i \cap \omega^j \neq \emptyset\big) \le \frac{2C}{N-1} \int_0^t e^{2(C-\alpha)s}\, ds,$$
which gives the result.
Proof of Proposition 1. Take $\eta$ and $\xi$ such that $\eta(x) = \sum_j \mathbf{1}\{\xi(j) = x\}$. Then
$$E\Big(\frac{\eta_t^{N,\eta}(x)\,\eta_t^{N,\eta}(y)}{N^2}\Big) - \frac{E\eta_t^{N,\eta}(x)\; E\eta_t^{N,\eta}(y)}{N^2}$$
$$= \frac{1}{N^2} \sum_{i=1}^N \sum_{j=1}^N \Big( P\big(\xi_t^{N,\xi}(i) = x,\ \xi_t^{N,\xi}(j) = y\big) - P\big(\xi_t^{N,\xi}(i) = x\big)\,P\big(\xi_t^{N,\xi}(j) = y\big) \Big).$$
Using this, and (4) with $\alpha = 0$, we get (2). Assume $\alpha > C$; taking $t = \infty$ in (4) we get (3).
Tightness.
Proposition 2. For all $t > 0$, $x \in \Lambda$, $i = 1,\dots,N$ and $\mu$,
$$\frac{E\eta_t^{N,\mu}(x)}{N} \le e^{Ct} \sum_{z\in\Lambda} \mu(z)\,P_t(z,x).$$
As a consequence, $(E\eta_t^{N,\mu}/N,\ N\in\mathbb{N})$ is tight.
Assume $\alpha > 0$ and define $\mu_\alpha$ on $\Lambda$ by $\mu_\alpha(x) = \alpha_x/\alpha$, $x\in\Lambda$, where $\alpha_x = \inf_z q(z,x)$. For $z, x \in \Lambda$ define
$$R_\lambda(z,x) = \int_0^\infty \lambda e^{-\lambda t}\, P_t(z,x)\, dt.$$
Proposition 3. Assume $\alpha > C$ and let $\eta^N$ be distributed according to the invariant measure of fv. Then for $x \in \Lambda$ (with $\rho^N(x) := E\eta^N(x)/N$),
$$\rho^N(x) \le \frac{C}{\alpha - C}\, \mu_\alpha R_{(\alpha - C)}(x).$$
As a consequence, the family of measures $(\eta^N/N,\ N\in\mathbb{N})$ is tight.
Types. Particle $i$ has type $0$ at time $t$ if it has not been absorbed in the time interval $[0,t]$. If at an absorption time $s$ particle $i$ jumps onto particle $j$, which has type $k$, then at time $s$ particle $i$ changes its type to $k+1$.
$$P\big(\xi_t^{N,\mu}(i) = x,\ \mathrm{type}(i,t) = 0\big) = \sum_{z\in\Lambda} \mu(z)\,P_t(z,x).$$
Define $A_t(x,k) := P\big(\xi_t^{N,\mu}(i) = x,\ \mathrm{type}(i,t) = k\big)$.
Proof of Proposition 2. We show by recursion on $k$ that
$$A_t(x,k) \le \frac{(Ct)^k}{k!} \sum_{z\in\Lambda} \mu(z)\,P_t(z,x). \tag{5}$$
By the previous display, (5) holds for $k = 0$. Moreover,
$$A_t(x,k+1) \le \int_0^t C \sum_{y\in\Lambda} A_s(y,k)\, P_{t-s}(y,x)\, ds.$$
Using the recursive hypothesis and Chapman-Kolmogorov,
$$\int_0^t C\, \frac{(Cs)^k}{k!} \sum_{z\in\Lambda} \mu(z) \sum_{y\in\Lambda} P_s(z,y)\, P_{t-s}(y,x)\, ds = \frac{(Ct)^{k+1}}{(k+1)!} \sum_{z\in\Lambda} \mu(z)\,P_t(z,x).$$
This proves (5); summing over $k$ gives Proposition 2.
Proof of Proposition 3. Under the hypothesis $\alpha > C$, the process $\big(((\xi_t^N(i), \mathrm{type}(i,t)),\ i = 1,\dots,N),\ t\in\mathbb{R}\big)$ is Markovian and can be constructed in a stationary way, so
$$A(x,k) := P\big(\xi_s^N(i) = x,\ \mathrm{type}(i,s) = k\big)$$
does not depend on $s$. The last regeneration mark of particle $i$ before time $s$ happened at time $s - T_\alpha^i$, where $T_\alpha^i$ is exponential of rate $\alpha$. Then
$$A(x,0) = \int_0^\infty \alpha e^{-\alpha t} \sum_{z\in\Lambda} \mu_\alpha(z)\,P_t(z,x)\, dt = \mu_\alpha R_\alpha(x).$$
Similar reasoning implies
$$A(x,k) \le \int_0^\infty e^{-\alpha t}\, C \sum_{z\in\Lambda} A(z,k-1)\,P_t(z,x)\, dt = \frac{C}{\alpha}\, A_{k-1} R_\alpha(x) \le \Big(\frac{C}{\alpha}\Big)^k \mu_\alpha R_\alpha^{k+1}(x),$$
where $R_\lambda^k(z,x)$ is the expectation of $P_{\tau_k}(z,x)$ and $\tau_k$ is the sum of $k$ independent exponentials of rate $\lambda$. Multiplying and dividing by $(\alpha - C)$,
$$P\big(\xi_s^N(i) = x\big) \le \frac{C}{\alpha - C} \sum_{k=0}^\infty \Big(\frac{C}{\alpha}\Big)^k \Big(1 - \frac{C}{\alpha}\Big)\, \mu_\alpha R_\alpha^{k+1}(x),$$
which is $\frac{C}{\alpha - C}$ times the expectation of $\mu_\alpha R_\alpha^{K+1}$ with $K$ geometric of parameter $p = 1 - C/\alpha$. Hence
$$P\big(\xi_s^N(i) = x\big) \le \frac{C}{\alpha - C}\, \mu_\alpha R_{\alpha - C}(x).$$
References

[1] Burdzy, K., Holyst, R., March, P. (2000) A Fleming-Viot particle representation of the Dirichlet Laplacian, Comm. Math. Phys. 214.
[2] Cavender, J. A. (1978) Quasi-stationary distributions of birth and death processes, Adv. Appl. Prob. 10.
[3] Darroch, J. N., Seneta, E. (1967) On quasi-stationary distributions in absorbing continuous-time finite Markov chains, J. Appl. Prob. 4.
[4] Ferrari, P. A. (1990) Ergodicity for spin systems with stirrings, Ann. Probab. 18, no. 4.
[5] Fernández, R., Ferrari, P. A., Garcia, N. L. (2001) Loss network representation of Peierls contours, Ann. Probab. 29, no. 2.
[6] Ferrari, P. A., Kesten, H., Martínez, S., Picco, P. (1995) Existence of quasi stationary distributions. A renewal dynamical approach, Ann. Probab. 23, no. 2.
[7] Ferrari, P. A., Martínez, S., Picco, P. (1992) Existence of non-trivial quasi-stationary distributions in the birth-death chain, Adv. Appl. Prob. 24.
[8] Fleming, W. H., Viot, M. (1979) Some measure-valued Markov processes in population genetics theory, Indiana Univ. Math. J. 28.
[9] Grigorescu, I., Kang, M. (2004) Hydrodynamic limit for a Fleming-Viot type system, Stoch. Proc. Appl. 110, no. 1.
[10] Jacka, S. D., Roberts, G. O. (1995) Weak convergence of conditioned processes on a countable state space, J. Appl. Prob. 32.
[11] Kipnis, C., Landim, C. (1999) Scaling Limits of Interacting Particle Systems, Springer-Verlag, Berlin.
[12] Löbus, J.-U. (2006) A stationary Fleming-Viot type Brownian particle system. Preprint.
[13] Nair, M. G., Pollett, P. K. (1993) On the relationship between µ-invariant measures and quasi-stationary distributions for continuous-time Markov chains, Adv. Appl. Prob. 25.
[14] Seneta, E. (1981) Non-Negative Matrices and Markov Chains, Springer-Verlag, Berlin.
[15] Seneta, E., Vere-Jones, D. (1966) On quasi-stationary distributions in discrete-time Markov chains with a denumerable infinity of states, J. Appl. Prob. 3.
[16] Vere-Jones, D. (1969) Some limit theorems for evanescent processes, Austral. J. Statist. 11.
[17] Yaglom, A. M. (1947) Certain limit theorems of the theory of branching stochastic processes (in Russian), Dokl. Akad. Nauk SSSR (n.s.) 56.
Practical conditions on Markov chains for weak convergence of tail empirical processes Olivier Wintenberger University of Copenhagen and Paris VI Joint work with Rafa l Kulik and Philippe Soulier Toronto,
More informationMarkov Chains CK eqns Classes Hitting times Rec./trans. Strong Markov Stat. distr. Reversibility * Markov Chains
Markov Chains A random process X is a family {X t : t T } of random variables indexed by some set T. When T = {0, 1, 2,... } one speaks about a discrete-time process, for T = R or T = [0, ) one has a continuous-time
More informationMarkov Chains for Everybody
Markov Chains for Everybody An Introduction to the theory of discrete time Markov chains on countable state spaces. Wilhelm Huisinga, & Eike Meerbach Fachbereich Mathematik und Informatik Freien Universität
More informationOn Existence of Limiting Distribution for Time-Nonhomogeneous Countable Markov Process
Queueing Systems 46, 353 361, 24 24 Kluwer Academic Publishers Manufactured in The Netherlands On Existence of Limiting Distribution for Time-Nonhomogeneous Countable Markov Process V ABRAMOV vyachesl@zahavnetil
More informationA NOTE ON QUASI-STATIONARY DISTRIBUTIONS OF BIRTH-DEATH PROCESSES AND THE SIS LOGISTIC EPIDEMIC
(January 8, 2003) A NOTE ON QUASI-STATIONARY DISTRIBUTIONS OF BIRTH-DEATH PROCESSES AND THE SIS LOGISTIC EPIDEMIC DAMIAN CLANCY, University of Liverpoo PHILIP K. POLLETT, University of Queensand Abstract
More informationhttp://www.math.uah.edu/stat/markov/.xhtml 1 of 9 7/16/2009 7:20 AM Virtual Laboratories > 16. Markov Chains > 1 2 3 4 5 6 7 8 9 10 11 12 1. A Markov process is a random process in which the future is
More informationRare event simulation for the ruin problem with investments via importance sampling and duality
Rare event simulation for the ruin problem with investments via importance sampling and duality Jerey Collamore University of Copenhagen Joint work with Anand Vidyashankar (GMU) and Guoqing Diao (GMU).
More informationarxiv: v1 [math.pr] 1 Jan 2013
The role of dispersal in interacting patches subject to an Allee effect arxiv:1301.0125v1 [math.pr] 1 Jan 2013 1. Introduction N. Lanchier Abstract This article is concerned with a stochastic multi-patch
More informationSPECTRAL GAP FOR ZERO-RANGE DYNAMICS. By C. Landim, S. Sethuraman and S. Varadhan 1 IMPA and CNRS, Courant Institute and Courant Institute
The Annals of Probability 996, Vol. 24, No. 4, 87 902 SPECTRAL GAP FOR ZERO-RANGE DYNAMICS By C. Landim, S. Sethuraman and S. Varadhan IMPA and CNRS, Courant Institute and Courant Institute We give a lower
More informationQuasi-stationary distributions
Quasi-stationary distributions Erik A. van Doorn a and Philip K. Pollett b a Department of Applied Mathematics, University of Twente P.O. Box 217, 7500 AE Enschede, The Netherlands E-mail: e.a.vandoorn@utwente.nl
More informationExact Simulation of the Stationary Distribution of M/G/c Queues
1/36 Exact Simulation of the Stationary Distribution of M/G/c Queues Professor Karl Sigman Columbia University New York City USA Conference in Honor of Søren Asmussen Monday, August 1, 2011 Sandbjerg Estate
More informationSTEADY STATE AND SCALING LIMIT FOR A TRAFFIC CONGESTION MODEL
STEADY STATE AND SCALING LIMIT FOR A TRAFFIC CONGESTION MODEL ILIE GRIGORESCU AND MIN KANG Abstract. In a general model (AIMD) of transmission control protocol (TCP) used in internet traffic congestion
More informationComputable bounds for the decay parameter of a birth-death process
Loughborough University Institutional Repository Computable bounds for the decay parameter of a birth-death process This item was submitted to Loughborough University's Institutional Repository by the/an
More information88 CONTINUOUS MARKOV CHAINS
88 CONTINUOUS MARKOV CHAINS 3.4. birth-death. Continuous birth-death Markov chains are very similar to countable Markov chains. One new concept is explosion which means that an infinite number of state
More informationReview. DS GA 1002 Statistical and Mathematical Models. Carlos Fernandez-Granda
Review DS GA 1002 Statistical and Mathematical Models http://www.cims.nyu.edu/~cfgranda/pages/dsga1002_fall16 Carlos Fernandez-Granda Probability and statistics Probability: Framework for dealing with
More informationA review of Continuous Time MC STA 624, Spring 2015
A review of Continuous Time MC STA 624, Spring 2015 Ruriko Yoshida Dept. of Statistics University of Kentucky polytopes.net STA 624 1 Continuous Time Markov chains Definition A continuous time stochastic
More informationASYMPTOTIC BEHAVIOR OF A TAGGED PARTICLE IN SIMPLE EXCLUSION PROCESSES
ASYMPTOTIC BEHAVIOR OF A TAGGED PARTICLE IN SIMPLE EXCLUSION PROCESSES C. LANDIM, S. OLLA, S. R. S. VARADHAN Abstract. We review in this article central limit theorems for a tagged particle in the simple
More informationProperties of an infinite dimensional EDS system : the Muller s ratchet
Properties of an infinite dimensional EDS system : the Muller s ratchet LATP June 5, 2011 A ratchet source : wikipedia Plan 1 Introduction : The model of Haigh 2 3 Hypothesis (Biological) : The population
More informationStochastic modelling of epidemic spread
Stochastic modelling of epidemic spread Julien Arino Centre for Research on Inner City Health St Michael s Hospital Toronto On leave from Department of Mathematics University of Manitoba Julien Arino@umanitoba.ca
More informationMATH 56A: STOCHASTIC PROCESSES CHAPTER 2
MATH 56A: STOCHASTIC PROCESSES CHAPTER 2 2. Countable Markov Chains I started Chapter 2 which talks about Markov chains with a countably infinite number of states. I did my favorite example which is on
More informationRecurrence of Simple Random Walk on Z 2 is Dynamically Sensitive
arxiv:math/5365v [math.pr] 3 Mar 25 Recurrence of Simple Random Walk on Z 2 is Dynamically Sensitive Christopher Hoffman August 27, 28 Abstract Benjamini, Häggström, Peres and Steif [2] introduced the
More informationµ n 1 (v )z n P (v, )
Plan More Examples (Countable-state case). Questions 1. Extended Examples 2. Ideas and Results Next Time: General-state Markov Chains Homework 4 typo Unless otherwise noted, let X be an irreducible, aperiodic
More informationQuasi-stationary distributions for discrete-state models
Quasi-stationary distributions for discrete-state models Erik A. van Doorn a and Philip K. Pollett b a Department of Applied Mathematics, University of Twente P.O. Box 217, 7500 AE Enschede, The Netherlands
More informationStochastic Processes
Stochastic Processes 8.445 MIT, fall 20 Mid Term Exam Solutions October 27, 20 Your Name: Alberto De Sole Exercise Max Grade Grade 5 5 2 5 5 3 5 5 4 5 5 5 5 5 6 5 5 Total 30 30 Problem :. True / False
More informationIntroduction to Machine Learning CMU-10701
Introduction to Machine Learning CMU-10701 Markov Chain Monte Carlo Methods Barnabás Póczos & Aarti Singh Contents Markov Chain Monte Carlo Methods Goal & Motivation Sampling Rejection Importance Markov
More informationUniversity of Toronto Department of Statistics
Norm Comparisons for Data Augmentation by James P. Hobert Department of Statistics University of Florida and Jeffrey S. Rosenthal Department of Statistics University of Toronto Technical Report No. 0704
More informationTHE VARIANCE CONSTANT FOR THE ACTUAL WAITING TIME OF THE PH/PH/1 QUEUE. By Mogens Bladt National University of Mexico
The Annals of Applied Probability 1996, Vol. 6, No. 3, 766 777 THE VARIANCE CONSTANT FOR THE ACTUAL WAITING TIME OF THE PH/PH/1 QUEUE By Mogens Bladt National University of Mexico In this paper we consider
More information