SIMILAR MARKOV CHAINS

Similar documents
Irregular Birth-Death process: stationarity and quasi-stationarity

Quasi-stationary distributions

Macroscopic quasi-stationary distribution and microscopic particle systems

A note on transitive topological Markov chains of given entropy and period

David George Kendall Bibliography

Quasi-stationary distributions

Survival in a quasi-death process

Discrete time Markov chains. Discrete Time Markov Chains, Limiting. Limiting Distribution and Classification. Regular Transition Probability Matrices

Quasi-stationary distributions for discrete-state models

STOCHASTIC PROCESSES Basic notions

EXTINCTION TIMES FOR A GENERAL BIRTH, DEATH AND CATASTROPHE PROCESS

Statistics 150: Spring 2007

Lecture 11: Introduction to Markov Chains. Copyright G. Caire (Sample Lectures) 321

Markov Chains, Random Walks on Graphs, and the Laplacian

8. Statistical Equilibrium and Classification of States: Discrete Time Markov Chains

Computable bounds for the decay parameter of a birth-death process

Quasi-Stationary Distributions in Linear Birth and Death Processes

SMSTC (2007/08) Probability.

Quasi stationary distributions and Fleming Viot Processes

Markov Chains. X(t) is a Markov Process if, for arbitrary times t 1 < t 2 <... < t k < t k+1. If X(t) is discrete-valued. If X(t) is continuous-valued

Multi Stage Queuing Model in Level Dependent Quasi Birth Death Process

A NOTE ON THE ASYMPTOTIC BEHAVIOUR OF A PERIODIC MULTITYPE GALTON-WATSON BRANCHING PROCESS. M. González, R. Martínez, M. Mota

THE PROBLEM OF B. V. GNEDENKO FOR PARTIAL SUMMATION SCHEMES ON BANACH SPACE

Linear-fractional branching processes with countably many types

Markov Chains and Stochastic Sampling

The SIS and SIR stochastic epidemic models revisited

On asymptotic behavior of a finite Markov chain

Stochastic process. X, a series of random variables indexed by t

Recap. Probability, stochastic processes, Markov chains. ELEC-C7210 Modeling and analysis of communication networks

Convergence exponentielle uniforme vers la distribution quasi-stationnaire de processus de Markov absorbés

Markov Chains CK eqns Classes Hitting times Rec./trans. Strong Markov Stat. distr. Reversibility * Markov Chains

EXTINCTION PROBABILITY IN A BIRTH-DEATH PROCESS WITH KILLING

Faithful couplings of Markov chains: now equals forever

Markov Chains, Stochastic Processes, and Matrix Decompositions

An Introduction to Stochastic Modeling

2 Discrete-Time Markov Chains

EXTINCTION TIMES FOR A GENERAL BIRTH, DEATH AND CATASTROPHE PROCESS

Central limit theorems for ergodic continuous-time Markov chains with applications to single birth processes

12 Markov chains The Markov property

Probability & Computing

Examples of Countable State Markov Chains Thursday, October 16, :12 PM

Necessary and sufficient conditions for strong R-positivity

Irreducibility. Irreducible. every state can be reached from every other state For any i,j, exist an m 0, such that. Absorbing state: p jj =1

Convergence exponentielle uniforme vers la distribution quasi-stationnaire en dynamique des populations

Markov processes and queueing networks

Logarithmic scaling of planar random walk s local times

P i [B k ] = lim. n=1 p(n) ii <. n=1. V i :=

88 CONTINUOUS MARKOV CHAINS

Lecture 7. We can regard (p(i, j)) as defining a (maybe infinite) matrix P. Then a basic fact is

Markov chains. Randomness and Computation. Markov chains. Markov processes

Elementary Applications of Probability Theory

Weighted Sums of Orthogonal Polynomials Related to Birth-Death Processes with Killing

Convergence Rates for Renewal Sequences

Lecture 7. µ(x)f(x). When µ is a probability measure, we say µ is a stationary distribution.

MATH 56A: STOCHASTIC PROCESSES CHAPTER 1

STAT STOCHASTIC PROCESSES. Contents

Classification of Countable State Markov Chains

Department of Applied Mathematics Faculty of EEMCS. University of Twente. Memorandum No Birth-death processes with killing

MATH 564/STAT 555 Applied Stochastic Processes Homework 2, September 18, 2015 Due September 30, 2015

Note that in the example in Lecture 1, the state Home is recurrent (and even absorbing), but all other states are transient. f ii (n) f ii = n=1 < +

Matrix analytic methods. Lecture 1: Structured Markov chains and their stationary distribution

Let (Ω, F) be a measureable space. A filtration in discrete time is a sequence of. F s F t

2. Transience and Recurrence

Fuzzy Statistical Limits

MARKOV PROCESSES. Valerio Di Valerio

Chapter 2: Markov Chains and Queues in Discrete Time

Math Homework 5 Solutions


Stochastic modelling of epidemic spread

A Method for Evaluating the Distribution of the Total Cost of a Random Process over its Lifetime

NIL, NILPOTENT AND PI-ALGEBRAS

RECENT RESULTS FOR SUPERCRITICAL CONTROLLED BRANCHING PROCESSES WITH CONTROL RANDOM FUNCTIONS

Example: physical systems. If the state space. Example: speech recognition. Context can be. Example: epidemics. Suppose each infected

Model reversibility of a two dimensional reflecting random walk and its application to queueing network

LIST OF MATHEMATICAL PAPERS

Countable state discrete time Markov Chains

3. The Voter Model. David Aldous. June 20, 2012

ERGODIC PROPERTIES OF NONNEGATIVE MATRICES-Π

Lecture 20: Reversible Processes and Queues

Non-homogeneous random walks on a semi-infinite strip

arxiv: v1 [math.pr] 13 Apr 2010

LIMITS FOR QUEUES AS THE WAITING ROOM GROWS. Bell Communications Research AT&T Bell Laboratories Red Bank, NJ Murray Hill, NJ 07974

Outlines. Discrete Time Markov Chain (DTMC) Continuous Time Markov Chain (CTMC)

Stationary Probabilities of Markov Chains with Upper Hessenberg Transition Matrices

CS145: Probability & Computing Lecture 18: Discrete Markov Chains, Equilibrium Distributions

Treball final de grau GRAU DE MATEMÀTIQUES Facultat de Matemàtiques Universitat de Barcelona MARKOV CHAINS

Statistics 992 Continuous-time Markov Chains Spring 2004

EXTINCTION AND QUASI-STATIONARITY IN THE VERHULST LOGISTIC MODEL: WITH DERIVATIONS OF MATHEMATICAL RESULTS

N.G.Bean, D.A.Green and P.G.Taylor. University of Adelaide. Adelaide. Abstract. process of an MMPP/M/1 queue is not a MAP unless the queue is a

Lectures on Probability and Statistical Models

Lecture 21. David Aldous. 16 October David Aldous Lecture 21

Stochastic modelling of epidemic spread

Homework 3 posted, due Tuesday, November 29.

The range of tree-indexed random walk

Markov Chains. Arnoldo Frigessi Bernd Heidergott November 4, 2015

Birth and Death Processes. Birth and Death Processes. Linear Growth with Immigration. Limiting Behaviour for Birth and Death Processes

Positive and null recurrent-branching Process

CDA6530: Performance Models of Computers and Networks. Chapter 3: Review of Practical Stochastic Processes

Lecture 20 : Markov Chains

Stochastic Processes. Theory for Applications. Robert G. Gallager CAMBRIDGE UNIVERSITY PRESS


SIMILAR MARKOV CHAINS

by Phil Pollett, The University of Queensland

MAIN REFERENCES

Convergence of Markov transition probabilities and their spectral properties

1. Vere-Jones, D. Geometric ergodicity in denumerable Markov chains. Quart. J. Math. Oxford Ser. (2) 13 (1962) 7-28.
2. Vere-Jones, D. On the spectra of some linear operators associated with queueing systems. Z. Wahrscheinlichkeitstheorie und Verw. Gebiete 2 (1963) 12-21.
3. Vere-Jones, D. Ergodic properties of nonnegative matrices. I. Pacific J. Math. 22 (1967) 361-386.
4. Vere-Jones, D. Ergodic properties of nonnegative matrices. II. Pacific J. Math. 26 (1968) 601-620.

Classification of transient Markov chains and quasi-stationary distributions

5. Seneta, E.; Vere-Jones, D. On quasi-stationary distributions in discrete-time Markov chains with a denumerable infinity of states. J. Appl. Probability 3 (1966) 403-434.
6. Vere-Jones, D. Some limit theorems for evanescent processes. Austral. J. Statist. 11 (1969) 67-78.

Related work

7. Vere-Jones, D.; Kendall, David G. A commutativity problem in the theory of Markov chains. Teor. Veroyatnost. i Primenen. 4 (1959) 97-100.
8. Vere-Jones, D. A rate of convergence problem in the theory of queues. Teor. Verojatnost. i Primenen. 9 (1964) 104-112.
9. Vere-Jones, D. Note on a theorem of Kingman and a theorem of Chung. Ann. Math. Statist. 37 (1966) 1844-1846.
10. Heathcote, C. R.; Seneta, E.; Vere-Jones, D. A refinement of two theorems in the theory of branching processes. Teor. Verojatnost. i Primenen. 12 (1967) 341-346.
11. Rubin, H.; Vere-Jones, D. Domains of attraction for the subcritical Galton-Watson branching process. J. Appl. Probability 5 (1968) 216-219.
12. Seneta, E.; Vere-Jones, D. On the asymptotic behaviour of subcritical branching processes with continuous state space. Z. Wahrscheinlichkeitstheorie und Verw. Gebiete 10 (1968) 212-225.
13. Fahady, K. S.; Quine, M. P.; Vere-Jones, D. Heavy traffic approximations for the Galton-Watson process. Advances in Appl. Probability 3 (1971) 282-300.
14. Pollett, P. K.; Vere-Jones, D. A note on evanescent processes. Austral. J. Statist. 34 (1992), no. 3, 531-536.

Important early work on quasi-stationary distributions

Yaglom, A.M. Certain limit theorems of the theory of branching processes (Russian). Doklady Akad. Nauk SSSR (N.S.) 56 (1947) 795-798.

Bartlett, M.S. Deterministic and stochastic models for recurrent epidemics. Proceedings of the Third Berkeley Symposium on Mathematical Statistics and Probability, 1954-1955, Vol. IV, pp. 81-109. University of California Press, Berkeley and Los Angeles, 1956.

Bartlett, M.S. Stochastic population models in ecology and epidemiology. Methuen's Monographs on Applied Probability and Statistics. Methuen & Co., Ltd., London; John Wiley & Sons, Inc., New York, 1960.

Darroch, J. N.; Seneta, E. On quasi-stationary distributions in absorbing discrete-time finite Markov chains. J. Appl. Probability 2 (1965) 88-100.

Darroch, J. N.; Seneta, E. On quasi-stationary distributions in absorbing continuous-time finite Markov chains. J. Appl. Probability 4 (1967) 192-196.

Important early work on quasi-stationary distributions

Mandl, Petr. Sur le comportement asymptotique des probabilités dans les ensembles des états d'une chaîne de Markov homogène (Russian). Časopis Pěst. Mat. 84 (1959) 140-149.

Mandl, Petr. On the asymptotic behaviour of probabilities within groups of states of a homogeneous Markov process (Czech). Časopis Pěst. Mat. 85 (1960) 448-456.

Ewens, W.J. The diffusion equation and a pseudo-distribution in genetics. J. Roy. Statist. Soc. Ser. B 25 (1963) 405-412.

Kingman, J.F.C. The exponential decay of Markov transition probabilities. Proc. London Math. Soc. 13 (1963) 337-358.

Ewens, W.J. The pseudo-transient distribution and its uses in genetics. J. Appl. Probab. 1 (1964) 141-156.

Seneta, E. Quasi-stationary distributions and time-reversion in genetics. (With discussion.) J. Roy. Statist. Soc. Ser. B 28 (1966) 253-277.

Seneta, E. Quasi-stationary behaviour in the random walk with continuous time. Austral. J. Statist. 8 (1966) 92-98.

DISCRETE-TIME CHAINS

Setting: $\{X_n, n = 0, 1, \dots\}$, a time-homogeneous Markov chain taking values in a countable set $S$ with transition probabilities $p_{ij}^{(n)} = \Pr(X_{m+n} = j \mid X_m = i)$, $i, j \in S$. Let $C$ be any irreducible and (for simplicity) aperiodic class.

DVJ1: For $i \in C$, $\{p_{ii}^{(n)}\}^{1/n} \to \rho$ as $n \to \infty$. The limit $\rho$ does not depend on $i$ and it satisfies $0 < \rho \le 1$. Moreover, $p_{ii}^{(n)} \le \rho^n$ and indeed, for $i, j \in C$, $p_{ij}^{(n)} \le M_{ij}\, \rho^n$, where $M_{ij} < \infty$. (If $C$ is recurrent, $\sum_n p_{ii}^{(n)} = \infty$ implies $\rho = 1$. When $C$ is transient, we can have $\rho = 1$ or $\rho < 1$; the latter is called geometric ergodicity.)

DVJ2: For any real $r > 0$, the series $\sum_n p_{ij}^{(n)} r^n$, $i, j \in C$, converge or diverge together; in particular, they have the same radius of convergence $R$, and $R = 1/\rho$. Moreover, all or none of the sequences $\{p_{ij}^{(n)} r^n\}$ tend to zero.
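DVJ1 is easy to watch numerically. The sketch below is my own illustration, not an example from the talk: a lazy random walk on $\{0, 1, \dots\}$ that is killed whenever it steps below 0, so every state is transient and $\rho < 1$. The ratio $p_{00}^{(n)}/p_{00}^{(n-1)}$ converges to $\rho$ much faster than the $n$-th root does; for this walk the decay parameter is $r + 2\sqrt{pq}$ (a standard fact for killed lazy walks, assumed here).

```python
# Numerical illustration of DVJ1 (a sketch; the chain is my own toy
# example): a lazy random walk on {0, 1, ..., N}, killed when it steps
# below 0, so the class is transient.  DVJ1 gives (p_00^(n))^(1/n) -> rho;
# the ratio p_00^(n) / p_00^(n-1) converges to rho faster.
import math

p, q, r = 0.2, 0.5, 0.3   # up, down, stay; the down-step from 0 kills
N = 200                    # truncation level (mass above N is negligible)

dist = [0.0] * (N + 1)
dist[0] = 1.0              # dist[j] = p_0j^(n); substochastic vector
prev00 = 1.0
for n in range(1, 2001):
    new = [0.0] * (N + 1)
    for i, mass in enumerate(dist):
        if mass == 0.0:
            continue
        new[i] += r * mass
        if i + 1 <= N:
            new[i + 1] += p * mass
        if i - 1 >= 0:
            new[i - 1] += q * mass   # from state 0 this mass is lost
    dist = new
    ratio = dist[0] / prev00         # -> rho as n grows
    prev00 = dist[0]

rho_exact = r + 2 * math.sqrt(p * q)  # assumed closed form for this walk
print(ratio, rho_exact)               # both near 0.93
```

Note that $p_{00}^{(n)} \approx 10^{-61}$ by $n = 2000$, yet the conditioned behaviour captured by the ratio is perfectly stable; that tension is exactly what quasi-stationarity is about.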

TRANSIENT CHAINS

The key to unlocking quasi-stationarity is to examine the behaviour of the transition probabilities at the radius of convergence $R$. Suppose that $C$ is a transient class which is geometrically ergodic ($\rho < 1$, $R > 1$). Although $p_{ij}^{(n)} \to 0$, it might be true that $p_{ij}^{(n)} R^n \to m_{ij}$, where $m_{ij} > 0$. How does this help? For $i, j \in C$,

$$\Pr(X_n = j \mid X_n \in C, X_0 = i) = \frac{\Pr(X_n = j \mid X_0 = i)}{\Pr(X_n \in C \mid X_0 = i)} = \frac{p_{ij}^{(n)} R^n}{\sum_{k \in C} p_{ik}^{(n)} R^n} \to \frac{m_{ij}}{\sum_{k \in C} m_{ik}},$$

provided that we can justify taking the limit under the summation.
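The displayed conditional probability can be computed directly on a small example. The sketch below uses a toy two-state transient class of my own choosing (not from the talk): the conditioned distribution settles down to a limit even though the survival probability $\Pr(X_n \in C)$ tends to zero.

```python
# Conditioning on survival in a transient class (toy example of my own).
# P is the transition matrix restricted to C = {0, 1}; each row's missing
# mass (1 - row sum) is the one-step probability of leaving C.
P = [[0.4, 0.3],
     [0.2, 0.3]]

v = [1.0, 0.0]                      # row 0 of P^n: v[j] = p_0j^(n)
for n in range(100):
    v = [v[0] * P[0][0] + v[1] * P[1][0],
         v[0] * P[0][1] + v[1] * P[1][1]]

surv = v[0] + v[1]                  # Pr(X_n in C | X_0 = 0) -> 0
cond = [v[0] / surv, v[1] / surv]   # Pr(X_n = j | X_n in C, X_0 = 0)
print(surv, cond)                   # surv is tiny; cond ~ (0.5, 0.5)
```

For this matrix the left Perron eigenvector is $m = (1, 1)$ (both column sums are $0.6$), so the limiting conditional distribution is $(0.5, 0.5)$, matching the printed output.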

DVJ3: $C$ is said to be $R$-transient or $R$-recurrent according as $\sum_n p_{ij}^{(n)} R^n$ converges or diverges. If $C$ is $R$-recurrent, then it is said to be $R$-positive or $R$-null according to whether the limit of $p_{ij}^{(n)} R^n$ is positive or zero.

DVJ4: If $C$ is $R$-recurrent, then, for $j \in C$, the inequalities

$$\sum_{i \in C} m_i\, p_{ij}^{(n)} \le m_j\, \rho^n, \qquad \sum_{i \in C} p_{ji}^{(n)} x_i \le x_j\, \rho^n,$$

have unique positive solutions $\{m_j\}$ and $\{x_j\}$, and indeed they are eigenvectors:

$$\sum_{i \in C} m_i\, p_{ij}^{(n)} = m_j\, \rho^n, \qquad \sum_{i \in C} p_{ji}^{(n)} x_i = x_j\, \rho^n.$$

$C$ is then $R$-positive recurrent if and only if $\sum_{k \in C} m_k x_k < \infty$, in which case

$$\lim_{n \to \infty} p_{ij}^{(n)} R^n = \frac{x_i m_j}{\sum_{k \in C} x_k m_k},$$

and, if $\sum_k m_k < \infty$, then

$$\lim_{n \to \infty} \sum_{k \in C} p_{ik}^{(n)} R^n = \sum_{k \in C} \lim_{n \to \infty} p_{ik}^{(n)} R^n = \frac{x_i \sum_{k \in C} m_k}{\sum_{k \in C} x_k m_k}.$$
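For a finite class all the objects in DVJ3 and DVJ4 can be computed explicitly. In the sketch below (again a toy substochastic matrix of my own, not one from the talk) $\rho$ is the Perron eigenvalue, $m$ and $x$ are the left and right eigenvectors found by power iteration, and the DVJ4 limit $p_{ij}^{(n)} R^n \to x_i m_j / \sum_k x_k m_k$ is checked against an explicit matrix power.

```python
# Checking the DVJ4 limit on a finite class (toy example of my own).
# Exact values here: rho = 0.6, m proportional to (1, 1), x to (3, 2).
P = [[0.4, 0.3],
     [0.2, 0.3]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Power iteration for rho and the right (x) / left (m) Perron vectors.
x, m = [1.0, 1.0], [1.0, 1.0]
for _ in range(300):
    nx = [P[0][0]*x[0] + P[0][1]*x[1], P[1][0]*x[0] + P[1][1]*x[1]]
    nm = [m[0]*P[0][0] + m[1]*P[1][0], m[0]*P[0][1] + m[1]*P[1][1]]
    rho = nx[0] / x[0]
    x = [c / nx[0] for c in nx]      # normalise so x[0] = 1
    m = [c / nm[0] for c in nm]      # normalise so m[0] = 1

R = 1.0 / rho

# p_ij^(n) R^n for large n should approach x_i m_j / sum_k x_k m_k.
n = 200
Pn = [[1.0, 0.0], [0.0, 1.0]]
for _ in range(n):
    Pn = matmul(Pn, P)
Z = x[0]*m[0] + x[1]*m[1]
limit = [[x[i]*m[j]/Z for j in range(2)] for i in range(2)]
approx = [[Pn[i][j] * R**n for j in range(2)] for i in range(2)]
print(rho, limit, approx)
```

Since $\sum_k m_k x_k < \infty$ trivially here, the class is $R$-positive recurrent and the two arrays printed at the end agree to machine precision.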

AND FINALLY

S-DVJ: If $C$ is $R$-positive recurrent and the left eigenvector satisfies $\sum_k m_k < \infty$, then the limiting conditional (or quasi-stationary) distribution exists: as $n \to \infty$,

$$\Pr(X_n = j \mid X_n \in C, X_0 = i) \to \frac{m_j}{\sum_{k \in C} m_k}.$$

... AND MUCH MORE

Other kinds of QSD, more general and more precise statements, continuous-time chains, general state spaces, numerical methods (in particular truncation methods), MCMC, and countless applications of QSDs: chemical kinetics, population biology, ecology, epidemiology, reliability, telecommunications. A full bibliography is maintained at my web site: http://www.maths.uq.edu.au/~pkp/research.html

SIMILAR MARKOV CHAINS

New setting: $(X_t, t \ge 0)$, a time-homogeneous Markov chain in continuous time taking values in a countable set $S$, with transition function $P(t) = (p_{ij}(t))$, where $p_{ij}(t) = \Pr(X_{s+t} = j \mid X_s = i)$, $i, j \in S$. Assuming that $p_{ij}(0+) = \delta_{ij}$ (standard), the transition rates are defined by $q_{ij} = p_{ij}'(0+)$. Set $q_i = -q_{ii}$ and assume $q_i < \infty$ (stable).

Definition: Two such chains $X$ and $\tilde X$ are said to be similar if their transition functions, $P$ and $\tilde P$, satisfy

$$\tilde p_{ij}(t) = c_{ij}\, p_{ij}(t), \qquad i, j \in S,\ t > 0,$$

for some collection of positive constants $c_{ij}$, $i, j \in S$.

Immediate consequences of the definition:

Since both chains are standard, $c_{ii} = 1$, and the transition rates must satisfy $\tilde q_{ij} = c_{ij}\, q_{ij}$; in particular, $\tilde q_i = q_i$. The chains share the same irreducible classes and the same classification of states.

Birth-death chains: Lenin et al. proved that for birth-death chains the similarity constants must factorize as $c_{ij} = \alpha_i \beta_j$. (Note that then $c_{ij} = \beta_j/\beta_i$, since $c_{ii} = 1$.) Is this true more generally?

Definition: Let $C$ be a subset of $S$. Two chains are said to be strongly similar over $C$ if

$$\tilde p_{ij}(t) = p_{ij}(t)\, \beta_j/\beta_i, \qquad i, j \in C,\ t > 0,$$

for some collection of positive constants $\beta_j$, $j \in C$.

Proposition: If $C$ is recurrent, then $c_{ij} = 1$. (Proof: $\tilde f_{ij} = c_{ij} f_{ij}$ for the first-passage probabilities, and recurrence forces $f_{ij} = \tilde f_{ij} = 1$.)

Lenin, R., Parthasarathy, P., Scheinhardt, W. and van Doorn, E. (2000). Families of birth-death processes with similar time-dependent behaviour. J. Appl. Probab. 37, 835-849.
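One easy way to manufacture a strongly similar pair, implicit in the definition (the rates below are my own choice, not an example from the talk): scale the rates by $\tilde q_{ij} = q_{ij}\,\beta_j/\beta_i$. That is the diagonal conjugation $\tilde Q = D^{-1} Q D$ with $D = \mathrm{diag}(\beta)$, which leaves the diagonal (and hence $q_i$) unchanged, and it gives $\tilde P(t) = e^{\tilde Q t} = D^{-1} e^{Qt} D$, i.e. $\tilde p_{ij}(t) = p_{ij}(t)\,\beta_j/\beta_i$.

```python
# Manufacturing a strongly similar pair (rates of my own choosing).
# Q is a sub-generator: off-diagonal rates nonnegative, row sums <= 0,
# the deficit being a killing rate.  qt[i][j] = q[i][j]*beta[j]/beta[i]
# is the conjugation Qt = D^(-1) Q D with D = diag(beta); it fixes the
# diagonal, so exp(Qt*t) = D^(-1) exp(Q*t) D, and the two transition
# functions differ by the factor beta[j]/beta[i].  (beta must be chosen
# so that Qt still has row sums <= 0.)
n = 3
Q = [[-1.0, 0.4, 0.3],
     [0.5, -1.2, 0.6],
     [0.2, 0.3, -0.9]]
beta = [1.0, 1.1, 0.9]
Qt = [[Q[i][j] * beta[j] / beta[i] for j in range(n)] for i in range(n)]

def expm(A, t, s=20, terms=25):
    # exp(A*t): Taylor series on A*t/2**s, followed by s squarings.
    h = t / 2.0 ** s
    B = [[A[i][j] * h for j in range(n)] for i in range(n)]
    E = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    T = [row[:] for row in E]
    for k in range(1, terms):
        T = [[sum(T[i][l] * B[l][j] for l in range(n)) / k
              for j in range(n)] for i in range(n)]
        E = [[E[i][j] + T[i][j] for j in range(n)] for i in range(n)]
    for _ in range(s):
        E = [[sum(E[i][l] * E[l][j] for l in range(n)) for j in range(n)]
             for i in range(n)]
    return E

t = 0.8
Pt = expm(Q, t)
Ptt = expm(Qt, t)
err = max(abs(Ptt[i][j] - Pt[i][j] * beta[j] / beta[i])
          for i in range(n) for j in range(n))
print(err)   # agreement to machine precision
```

The open question on the slide is the converse: whether every similar pair must arise from such a $\beta_j/\beta_i$ factorization.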

EXTENSION OF DVJ THEORY

Kingman: If $C$ is irreducible, then, for each $i, j \in C$, $t^{-1} \log p_{ij}(t) \to -\lambda$ ($\lambda \ge 0$), and $p_{ij}(t) \le M_{ij}\, e^{-\lambda t}$ for some $M_{ij} < \infty$, et cetera.

Definition: $C$ is said to be $\lambda$-transient or $\lambda$-recurrent according as $\int_0^\infty p_{ij}(t)\, e^{\lambda t}\, dt$ converges or diverges. If $C$ is $\lambda$-recurrent, then it is said to be $\lambda$-positive or $\lambda$-null according to whether the limit of $p_{ij}(t)\, e^{\lambda t}$ is positive or zero.

Theorem: If $C$ is $\lambda$-recurrent, then, for $j \in C$, the inequalities

$$\sum_{i \in C} m_i\, p_{ij}(t) \le e^{-\lambda t} m_j, \qquad \sum_{i \in C} p_{ji}(t)\, x_i \le e^{-\lambda t} x_j,$$

have unique positive solutions $\{m_j\}$ and $\{x_j\}$, and indeed they are eigenvectors:

$$\sum_{i \in C} m_i\, p_{ij}(t) = e^{-\lambda t} m_j, \qquad \sum_{i \in C} p_{ji}(t)\, x_i = e^{-\lambda t} x_j.$$

$C$ is then $\lambda$-positive recurrent if and only if $\sum_{k \in C} m_k x_k < \infty$, in which case

$$p_{ij}(t)\, e^{\lambda t} \to \frac{x_i m_j}{\sum_{k \in C} x_k m_k}.$$
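Kingman's $\lambda$ can be read off numerically. The sketch below uses a $2 \times 2$ sub-generator of my own choosing (not from the talk); its eigenvalues are $-0.5$ and $-1.3$, so the decay parameter is $\lambda = 0.5$, and a difference quotient of $\log p_{00}(t)$ over a long interval recovers it.

```python
# Estimating Kingman's decay parameter lambda (toy sub-generator of my
# own choosing; eigenvalues -0.5 and -1.3, so lambda = 0.5 here).
# Since t^(-1) log p_ij(t) -> -lambda, a long-interval difference
# quotient of log p_00 recovers lambda.
import math

n = 2
Q = [[-1.0, 0.5],
     [0.3, -0.8]]   # row sums are -0.5: killing rate 0.5 in each state

def expm(A, t, s=20, terms=25):
    # exp(A*t): Taylor series on A*t/2**s, followed by s squarings.
    h = t / 2.0 ** s
    B = [[A[i][j] * h for j in range(n)] for i in range(n)]
    E = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    T = [row[:] for row in E]
    for k in range(1, terms):
        T = [[sum(T[i][l] * B[l][j] for l in range(n)) / k
              for j in range(n)] for i in range(n)]
        E = [[E[i][j] + T[i][j] for j in range(n)] for i in range(n)]
    for _ in range(s):
        E = [[sum(E[i][l] * E[l][j] for l in range(n)) for j in range(n)]
             for i in range(n)]
    return E

p20 = expm(Q, 20.0)[0][0]   # p_00(20)
p40 = expm(Q, 40.0)[0][0]   # p_00(40)
lam = -(math.log(p40) - math.log(p20)) / 20.0
print(lam)   # close to 0.5
```

Using the difference of two log-probabilities cancels the constant $\log M_{ij}$-type prefactor, which is why the estimate converges quickly; a finite class like this one is automatically $\lambda$-positive.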

Suppose that $P$ and $\tilde P$ are similar. They share the same $\lambda$ and the same $\lambda$-classification.

Theorem: If $C$ is a $\lambda$-positive recurrent class, then $P$ and $\tilde P$ are strongly similar over $C$. We may take $\beta_j = \tilde m_j / m_j$, where $\{m_j\}$ and $\{\tilde m_j\}$ are the essentially unique $\lambda$-invariant measures (left eigenvectors) on $C$ for $P$ and for $\tilde P$, respectively.

Proof: Let $t \to \infty$ in $\tilde p_{ij}(t)\, e^{\lambda t} = c_{ij}\, p_{ij}(t)\, e^{\lambda t}$. We get, in an obvious notation,

$$c_{ij} = E\, \frac{\tilde x_i}{x_i} \frac{\tilde m_j}{m_j}, \qquad E = \sum_i m_i x_i \Big/ \sum_i \tilde m_i \tilde x_i,$$

and, since $c_{ii} = 1$, we have $E\, \tilde x_i \tilde m_i = x_i m_i$; substituting back gives $c_{ij} = (\tilde m_j/m_j)\big/(\tilde m_i/m_i) = \beta_j/\beta_i$.

Again: Are similar chains always strongly similar?

In the $\lambda$-null recurrent case, it may still be possible to deduce the desired factorization, for, although $e^{\lambda t} p_{ij}(t) \to 0$, it may be possible to find a $\kappa > 0$ such that $t^\kappa e^{\lambda t} p_{ij}(t)$ tends to a strictly positive limit. (Similar chains will have the same $\kappa$.)

Lemma: Assume that $C$ is $\lambda$-null recurrent and suppose that there is a $\kappa > 0$, which does not depend on $i$ and $j$, such that $t^\kappa e^{\lambda t} p_{ij}(t)$ tends to a strictly positive limit $\pi_{ij}$ for all $i, j \in C$. Then there is a positive constant $d$ such that

$$\pi_{ij} = d\, x_i m_j, \qquad i, j \in C,$$

where $\{m_j\}$ and $\{x_j\}$ are, respectively, the essentially unique $\lambda$-invariant measure and vector (left and right eigenvectors) on $C$ for $P$.

Remark: Even in the $\lambda$-transient case it might still be possible to find a $\kappa > 0$ such that $t^\kappa e^{\lambda t} p_{ij}(t)$ tends to a positive limit, and for the conclusions of the lemma to hold good. (Note that, by the usual irreducibility arguments, $\kappa$ will be the same for all $i$ and $j$ in any given class.)