Quasi-Stationary Distributions in Linear Birth and Death Processes
Wenbo Liu & Hanjun Zhang
School of Mathematics and Computational Science, Xiangtan University, Hunan, China

Abstract

The quasi-stationary distributions {a_j} for a linear birth and death process are determined by two methods. The first method obtains the desired result by direct computation, while the second is based on the relationship between {a_j} and the limit of the probability generating function. In addition, we also obtain the stationary distribution of a linear birth, death and immigration process with the second method.

Keywords: Linear birth and death process, Quasi-stationary distributions, Probability generating function, Stationary distribution

1. Introduction

We are interested in the long-term behavior of absorbing Markov processes. Such processes will be absorbed eventually; however, they often persist for extended periods of time and in fact appear to reach an equilibrium before reaching the absorbing state. The key to analyzing the behavior of these processes before they die out is to condition on their not having been absorbed, which leads us to consider quasi-stationary distributions and limiting conditional distributions rather than the classical stationary and limiting distributions for irreducible processes. Quasi-stationary distributions and limiting conditional distributions for Markov processes have been studied by Vere-Jones (1969), Flaspohler (1974), Pollett (1988), Darlington and Pollett (2000) and Moler et al. (2000) in the general setting of absorbing continuous-time denumerable Markov chains. As is well known, birth and death processes are the most important class of Markov processes, and their relatively simple structure permits a rather extensive analysis. Quasi-stationary distributions for birth and death processes have been studied by Cavender (1978), Pollett (1988), Van Doorn (1991), Schoutens (2000) and Clancy and Pollett (2003).
The linear birth and death processes are in turn the most important class of birth and death processes. In this paper we study quasi-stationary distributions in the setting of a linear birth and death process on a semi-infinite lattice of integers, the finite boundary point being an absorbing state which is reached with certainty. We calculate the quasi-stationary distributions of a linear birth and death process by two different methods. Meanwhile, we also obtain the stationary distribution of a linear birth, death and immigration process with our methods.

2. Preliminaries

Let E be the set {0, 1, 2, ...} of non-negative integers, and let {λ_n, n ≥ 0} and {μ_n, n ≥ 0} be sequences of non-negative numbers. A continuous-time Markov chain {X(t), t ≥ 0} having state space E and q-matrix given by

q_ij = λ_i if j = i + 1, i ≥ 0;  q_ij = μ_i if j = i − 1, i ≥ 1;  q_ij = −(λ_i + μ_i) if j = i, i ≥ 0;  q_ij = 0 otherwise,  (1)

is called a birth and death process on E, with birth coefficients λ_n, n ≥ 0, and death coefficients μ_n, n ≥ 0. Suppose λ_0 = μ_0 = 0 and λ_n > 0, μ_n > 0 for n ≥ 1; then Q is conservative, 0 is an absorbing state, and C = {1, 2, ...} is irreducible for the minimal Q-function, F, and hence for any Q-function. As usual we define the potential coefficients π = {π_i, i ∈ C} by

π_1 = 1,  π_n = (λ_1 λ_2 ··· λ_{n−1}) / (μ_2 μ_3 ··· μ_n),  n ≥ 2.  (2)

The transition probabilities {P_ij(t), i, j ∈ E, t ≥ 0} of a birth and death process satisfy the following conditions:

Σ_j P_ij(t) ≤ 1,  (3)
P_ij(t) ≥ 0,  (4)

Published by Canadian Center of Science and Education
P_ij(0) = δ_ij,  (5)
P_ij(s + t) = Σ_k P_ik(s) P_kj(t),  (6)
P′_ij(t) = Σ_k q_ik P_kj(t),  (7)
P′_ij(t) = Σ_k P_ik(t) q_kj,  (8)

for i, j ∈ E and t, s ≥ 0, where δ_ij is the Kronecker delta. Equations (6), (7) and (8) are called the Chapman-Kolmogorov equations, the backward equations (BE) and the forward equations (FE), respectively. We know that each Q-function P_ij(t) satisfies (7), since Q is conservative, but might not satisfy (8). We define

T = inf{t ≥ 0 : X(t) = 0},  (9)

the absorption (hitting) time at 0. We shall only be interested in processes for which E_i T < ∞ for all i ≥ 1. We will assume that eventual absorption at 0 is certain, which is equivalent to assuming

Σ_{n=1}^∞ 1/(λ_n π_n) = ∞.  (10)

Imposing (10) implies that the condition

Σ_{n=1}^∞ (1/(λ_n π_n)) Σ_{i=1}^n π_i = ∞  (11)

is satisfied, and as a consequence (see for example Reuter (1957), Kemperman (1962)) the transition probabilities P_ij(t) constitute, under some obvious side conditions, the unique solution of equation (7) with initial conditions (5). Also, imposing (10) (and hence (11)) implies that the process is non-explosive and therefore honest (see for example Reuter (1957), Kemperman (1962)), that is,

Σ_j P_ij(t) = 1,  i ∈ E, t ≥ 0.  (12)

Given a transition function P_ij(t), a set {u_i, i ∈ E} of non-negative numbers such that

Σ_{i∈E} u_i P_ij(t) = u_j for all j ∈ E and t ≥ 0

is called an invariant measure for P_ij(t). If, furthermore, Σ_{i∈E} u_i = 1, then u = {u_i, i ∈ E} is called a stationary distribution. Suppose that P_ij(t) is an irreducible transition function; then the limits u_j = lim_{t→∞} P_ij(t) exist and are independent of i for all j ∈ E. The set {u_i, i ∈ E} is an invariant measure and either

(a) u_j = 0 for all j ∈ E, or
(b) u_j > 0 for all j ∈ E and Σ_{j∈E} u_j = 1.  (13)

We call A = {a_j} the limiting conditional distribution (LCD) if for each j ≥ 1

a_j = lim_{t→∞} P_i(X(t) = j | T > t) and Σ_j a_j = 1,  (14)

provided the limits exist for some (and hence for all) i.
We say a measure A = {a_j} on C is a quasi-stationary distribution (QSD) if Σ_i a_i = 1 and, for each j ≥ 1 and t > 0,

a_j = P_A(X(t) = j | T > t).  (15)

For the above concepts of LCD and QSD we refer to Pakes (1995) or Ferrari et al. (1995). From Van Doorn (1991) and Vere-Jones (1969) we know that any QSD is a (proper) LCD and any (proper) LCD is a QSD. In this paper, we focus on the process which starts from a single state i and derive the corresponding results.
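The setting above can be made concrete in a few lines of code. The sketch below (illustrative, not part of the paper) builds the truncated q-matrix (1) and the potential coefficients (2), specialised to the linear rates λ_i = iλ, μ_i = iμ treated in the next section; the truncation level N and the rate values are arbitrary choices.

```python
# Illustrative sketch (not from the paper): the q-matrix (1) and the potential
# coefficients (2), for the linear rates lam_i = i*lam, mu_i = i*mu.
lam, mu, N = 1.0, 2.0, 50

def q_matrix(lam, mu, N):
    """Conservative generator on {0, ..., N}; state 0 is absorbing."""
    Q = [[0.0] * (N + 1) for _ in range(N + 1)]
    for i in range(1, N + 1):
        if i < N:
            Q[i][i + 1] = i * lam            # birth rate lam_i
        Q[i][i - 1] = i * mu                 # death rate mu_i
        Q[i][i] = -sum(Q[i])                 # diagonal: -(lam_i + mu_i)
    return Q

def potential_coefficients(lam, mu, N):
    """pi_1 = 1 and pi_n = (lam_1 ... lam_{n-1}) / (mu_2 ... mu_n), n >= 2."""
    pi = [0.0, 1.0]                          # pi[0] unused, pi[1] = 1
    for n in range(2, N + 1):
        pi.append(pi[-1] * ((n - 1) * lam) / (n * mu))
    return pi

Q = q_matrix(lam, mu, N)
pi = potential_coefficients(lam, mu, N)
```

By construction the rows of Q sum to zero (conservativeness) and the coefficients satisfy π_n λ_n = π_{n+1} μ_{n+1}, which is how (2) is usually characterised.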
Our study will use three particular families of distributions: geometric, Pascal and Poisson. Their probability generating functions are, respectively,

g(s) = (1 − p)s / (1 − ps), for the geometric distribution with parameter p (0 < p < 1),  (16)
g(s) = e^{ρ(s−1)}, for the Poisson distribution with parameter ρ,  (17)
g(s) = ((1 − p) / (1 − ps))^r, for the Pascal distribution with parameters r and p (r > 0, 0 < p < 1).  (18)

3. Quasi-stationary distributions of birth and death processes

In this section we mainly study the quasi-stationary distributions of a linear birth and death process, whose generator is determined by

λ_i = iλ,  μ_i = iμ,  i ≥ 0,  (19)

where 0 < λ < μ. The constants λ and μ are called the birth and death rates. It is clear that (10) and (11) are both satisfied. The properties of this process have been established in detail in Karlin and McGregor (1958). We know that Van Doorn (1991) obtained the quasi-stationary distributions by means of Karlin and McGregor's spectral representation of the transition probabilities of the process. However, the smallest point in the support of ψ is slightly difficult to calculate. Here, we will calculate the quasi-stationary distributions in two other ways. Both methods rest on the fact that the transition function of the process is known explicitly. The explicit form of P_ij(t) follows from Anderson (1991) or Karlin and McGregor (1958): with i ∧ j = min(i, j),

P_ij(t) = γ^j (1 − σ/γ)^{i+j} / (1 − σ)^{i+j} Σ_{k=0}^{i∧j} C(i, k) (−1)^k [ (1 − σ)(γ² − σ) / (γ − σ)² ]^k (i)_{j−k} / (j − k)!,  (20)

where

σ = γ e^{−(μ−λ)t},  γ = λ/μ,  (a)_k = a(a + 1) ··· (a + k − 1),  (a)_0 = 1,

and its probability generating function is

Σ_{j=0}^∞ P_ij(t) z^j = ((rμ − 1) / (rλ − 1))^i,  i ≥ 1, |z| ≤ 1,  (21)

where r = ((1 − z) / (μ − λz)) e^{−(μ−λ)t}.

3.1 The first method

In this section we obtain the QSD (15) by computing the LCD (14). Obviously, when the process starts from a single state i, (15) and (14) are equivalent. Also, (14) can be written as

a_j = lim_{t→∞} P_ij(t) / P_i(T > t).  (22)
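Since the transition function is known explicitly, (21) can be cross-checked numerically. The sketch below (not from the paper) computes P_ij(t) on a truncated state space by the standard uniformization method and compares the resulting generating function with the closed form (21); the rates, the truncation level N and the time t are arbitrary choices.

```python
import numpy as np

# Numerical cross-check (a sketch, not part of the paper) of the closed
# form (21) against transition probabilities computed by uniformization.
lam, mu, N, t = 1.0, 2.0, 60, 1.5

# Truncated generator: states 0..N, state 0 absorbing, no birth out of N.
Q = np.zeros((N + 1, N + 1))
for i in range(1, N + 1):
    if i < N:
        Q[i, i + 1] = i * lam
    Q[i, i - 1] = i * mu
    Q[i, i] = -Q[i].sum()

def transition_matrix(Q, t, terms=800):
    """Uniformization: P(t) = sum_m e^{-qt} (qt)^m / m! * Phat^m."""
    q = float(max(-Q.diagonal()))
    Phat = np.eye(len(Q)) + Q / q
    P = np.zeros_like(Q)
    weight = np.exp(-q * t)              # Poisson(qt) mass at m = 0
    power = np.eye(len(Q))               # Phat^0
    for m in range(terms):
        P += weight * power
        power = power @ Phat
        weight *= q * t / (m + 1)        # Poisson mass at m + 1
    return P

P = transition_matrix(Q, t)

def pgf_closed_form(z, i, t):
    """Equation (21): sum_j P_ij(t) z^j = ((r*mu - 1)/(r*lam - 1))^i."""
    r = (1 - z) / (mu - lam * z) * np.exp(-(mu - lam) * t)
    return ((r * mu - 1) / (r * lam - 1)) ** i

i0, z = 3, 0.7
numeric = sum(P[i0, j] * z ** j for j in range(N + 1))
exact = pgf_closed_form(z, i0, t)
```

With λ < μ the population started at i0 = 3 essentially never reaches the truncation level within time t, so the truncated chain agrees with the infinite one to machine precision.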
First, we introduce the first method: the quasi-stationary distribution of the process is obtained directly by taking the limit in (22). Let β = e^{−(μ−λ)t}; then equation (20) becomes

P_ij(t) = γ^j (1 − β)^{i+j} / (1 − γβ)^{i+j} Σ_{k=0}^{i∧j} C(i, k) (−1)^k [ (γ − γ²β − β + γβ²) / (γ(1 − β)²) ]^k (i)_{j−k} / (j − k)!.  (23)

Since T is the absorption time and the process is honest, P_i(T > t) = 1 − P_i(T ≤ t) = 1 − P_i0(t), and taking j = 0 in (23) gives P_i0(t) = (1 − β)^i / (1 − γβ)^i; hence

P_i(T > t) = 1 − (1 − β)^i / (1 − γβ)^i.  (24)

Then

P_ij(t) / P_i(T > t) = [ γ^j (1 − β)^{i+j} / (1 − γβ)^{i+j} Σ_{k=0}^{i∧j} C(i, k) (−1)^k ( (γ − γ²β − β + γβ²) / (γ(1 − β)²) )^k (i)_{j−k} / (j − k)! ] / [ 1 − (1 − β)^i / (1 − γβ)^i ].  (25)
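The survival probability (24) can also be checked by simulation. The following sketch (illustrative, not from the paper) runs a simple Gillespie-type simulation of the linear birth and death process and compares the empirical value of P_i(T > t) with (24); the rates, the horizon t and the initial state are arbitrary choices.

```python
import math, random

# Simulation check (illustrative, not from the paper) of the survival
# probability (24): P_i(T > t) = 1 - ((1 - beta)/(1 - gamma*beta))^i.
lam, mu, t, i0 = 1.0, 2.0, 1.0, 2
random.seed(1)

def survives(lam, mu, t, i0):
    """One path of the process; True if X(t) > 0, i.e. T > t."""
    x, clock = i0, 0.0
    while x > 0:
        clock += random.expovariate(x * (lam + mu))  # next jump time
        if clock > t:
            return True
        # the jump is a birth with probability lam/(lam + mu), else a death
        x += 1 if random.random() < lam / (lam + mu) else -1
    return False

n = 100_000
empirical = sum(survives(lam, mu, t, i0) for _ in range(n)) / n

beta, gamma = math.exp(-(mu - lam) * t), lam / mu
exact = 1 - ((1 - beta) / (1 - gamma * beta)) ** i0
```

With these parameter choices the exact value is very close to 0.4, and the Monte Carlo estimate agrees to within its sampling error.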
Obviously, the expression (25) is of type 0/0 when we take the limit (note that β → 0 as t → ∞), so we use L'Hôpital's rule to tackle it. Differentiating numerator and denominator with respect to β, a series of operations gives

(25) → −(γ^{j−1}(1 − γ)/i) Σ_{k=0}^{i∧j} C(i, k) (−1)^k k (i)_{j−k} / (j − k)! as t → ∞.

Let i ≥ j and set

A_j = −(γ^{j−1}(1 − γ)/i) Σ_{k=0}^{j} C(i, k) (−1)^k k (i)_{j−k} / (j − k)!.  (26)

By (26) it is not hard to find that A_1 = 1 − γ, A_2 = (1 − γ)γ, A_3 = (1 − γ)γ²; recursively, we conjecture that A_j = (1 − γ)γ^{j−1}, j ≥ 1. Below, we verify this by mathematical induction. Suppose A_j = (1 − γ)γ^{j−1}; then we must show that A_{j+1} = (1 − γ)γ^j. Actually, from (26) and the assumption, we see easily that

Σ_{k=0}^{j} C(i, k) (−1)^k k (i)_{j−k} / (j − k)! = −i.  (27)

Obviously, the right-hand side of (27) is independent of j, and so

A_{j+1} = −(γ^j(1 − γ)/i) Σ_{k=0}^{j+1} C(i, k) (−1)^k k (i)_{j+1−k} / (j + 1 − k)! = −(γ^j(1 − γ)/i)(−i) = (1 − γ)γ^j.  (28)

Therefore, we obtain

a_j = (1 − γ)γ^{j−1} = (1 − λ/μ)(λ/μ)^{j−1},  j ≥ 1.  (29)

The result is in keeping with Van Doorn (1991).

Remark: The above proof only considers the situation i ≥ j; in fact, because of the behaviour of the factor (i)_{j−k}/(j − k)!, the result is unchanged when i ≤ j.

3.2 The second method

In this section we introduce the second method; the idea partly follows Karlin and Tavaré (1982). From (21), we see that

G_i(z, t) := Σ_{j=0}^∞ P_ij(t) z^j = [ (1 − β + (β − γ)z) / (1 − βγ − zγ(1 − β)) ]^i,  i ≥ 1, |z| ≤ 1,  (30)

where β = e^{−(μ−λ)t} and γ = λ/μ. To obtain our result more conveniently, we first establish the following lemma.

Lemma 3.1. For i ≥ 1,

lim_{t→∞} β^{−1}(G_i(z, t) − 1) = iA(z), where A(z) = (1 − γ)(z − 1) / (1 − zγ),  0 ≤ z ≤ 1.

Proof: From (30), we have

G_i(z, t) = [ 1 + (1 − γ)(z − 1)β / (1 − βγ − zγ + βzγ) ]^i = Σ_{k=0}^i C(i, k) [ (1 − γ)(z − 1)β / (1 − βγ − zγ + βzγ) ]^k,

and so

G_i(z, t) − 1 = Σ_{k=1}^i C(i, k) [ (1 − γ)(z − 1)β / (1 − βγ − zγ + βzγ) ]^k,

and thus

lim_{t→∞} β^{−1}(G_i(z, t) − 1) = lim_{t→∞} i(1 − γ)(z − 1) / (1 − βγ − zγ + βzγ) = i(1 − γ)(z − 1) / (1 − zγ) = iA(z).
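Lemma 3.1 is easy to test numerically from the closed form (30). The sketch below (not from the paper) evaluates β^{−1}(G_i(z, t) − 1) at a large t and compares it with iA(z); the rates and the evaluation point are arbitrary choices.

```python
import math

# Numerical check of Lemma 3.1 (illustrative, not from the paper): using the
# closed form (30), beta^{-1}(G_i(z,t) - 1) should approach i*A(z), where
# A(z) = (1 - gamma)(z - 1)/(1 - z*gamma), as t grows.
lam, mu = 1.0, 2.0
gamma = lam / mu

def G(z, i, t):
    """Equation (30), the generating function of the linear process."""
    beta = math.exp(-(mu - lam) * t)
    num = 1 - beta + (beta - gamma) * z
    den = 1 - beta * gamma - z * gamma * (1 - beta)
    return (num / den) ** i

def A(z):
    return (1 - gamma) * (z - 1) / (1 - z * gamma)

i0, z, t = 4, 0.6, 15.0
beta = math.exp(-(mu - lam) * t)
scaled = (G(z, i0, t) - 1) / beta
limit = i0 * A(z)
```

Note that A(1) = 0, reflecting the honesty of the process (G_i(1, t) = 1 for all t).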
Moreover, since the process is honest (12), we have

P_i(T > t) = Σ_{j=1}^∞ P_i(X(t) = j) = G_i(1, t) − G_i(0, t).

To establish (14), we use Lemma 3.1 to see that for 0 ≤ z ≤ 1,

Σ_{j=1}^∞ P_i(X(t) = j | T > t) z^j = (G_i(z, t) − G_i(0, t)) / (G_i(1, t) − G_i(0, t))
= ((G_i(z, t) − 1)β^{−1} − (G_i(0, t) − 1)β^{−1}) / ((G_i(1, t) − 1)β^{−1} − (G_i(0, t) − 1)β^{−1})
→ (A(z) − A(0)) / (A(1) − A(0)) = (z − zγ) / (1 − zγ) as t → ∞,

and hence, from (16), we see that a_j = lim_{t→∞} P_i(X(t) = j | T > t) is the geometric distribution with parameter γ = λ/μ, so we obtain

a_j = (1 − λ/μ)(λ/μ)^{j−1},  j ≥ 1.  (31)

Obviously, (31) is in accord with (29) and Van Doorn (1991). Certainly, some authors have reached the same conclusion by other techniques; we may refer to Cavender (1978), Pollett (1988) and Van Doorn (1991).

4. Stationary distribution of linear birth, death and immigration processes

In this section, we adopt the method of Section 3.2 to compute the stationary distribution of a linear birth, death and immigration process, defined by

λ_i = a + iλ,  μ_i = iμ,  i ≥ 0,  (32)

where a > 0, λ ≥ 0, μ ≥ 0. The constants a, λ, μ are called the immigration, birth and death rates. Obviously, conditions (10) and (11) are satisfied, and therefore

P_i(T > t) = Σ_{j=0}^∞ P_i(X(t) = j) = G_i(1, t) = 1.

We discuss the following several situations to calculate the stationary distribution of the process.

Case 1: 0 < λ < μ. The probability generating function is

G_i(z, t) = [ (1 − β + (β − γ)z) / (1 − βγ − γz(1 − β)) ]^i [ (1 − γ) / (1 − βγ − γz(1 − β)) ]^δ,  i ≥ 0,  (33)

where β = e^{−(μ−λ)t}, γ = λ/μ and δ = a/λ. From (33), we have

G_i(z, t) → ((1 − γ) / (1 − γz))^δ as t → ∞,

and from (18) we know that u_j = lim_{t→∞} P_ij(t) is the Pascal distribution with parameters δ and γ; hence

u_j = C(j + δ − 1, j) γ^j (1 − γ)^δ = (1 − γ)^δ γ^j (δ)_j / j!,  j ≥ 0.  (34)

Case 2: 0 = λ < μ. The probability generating function is

G_i(z, t) = (1 − β + zβ)^i e^{δ(1−β)(z−1)},  (35)

where β = e^{−μt} and δ = a/μ. So

G_i(z, t) → e^{δ(z−1)} as t → ∞.  (36)
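The limits in Cases 1 and 2 can be checked numerically from the closed forms (33) and (35). The sketch below (illustrative, not from the paper) evaluates each generating function at a large t and compares it with the Pascal and Poisson limits; all parameter values are arbitrary choices.

```python
import math

# Numerical check (illustrative, not from the paper): the closed forms (33)
# and (35), evaluated at a large t, should be close to the Pascal (18) and
# Poisson (17) generating functions, respectively.
a, t, z, i0 = 0.8, 20.0, 0.6, 3

# Case 1: 0 < lam < mu, limit ((1 - gamma)/(1 - gamma*z))^delta, delta = a/lam.
lam, mu = 1.0, 2.0
gamma, delta = lam / mu, a / lam
beta = math.exp(-(mu - lam) * t)
den = 1 - beta * gamma - gamma * z * (1 - beta)
G1 = ((1 - beta + (beta - gamma) * z) / den) ** i0 * ((1 - gamma) / den) ** delta
pascal_limit = ((1 - gamma) / (1 - gamma * z)) ** delta

# Case 2: lam = 0, limit e^{delta2*(z - 1)}, delta2 = a/mu.
beta2, delta2 = math.exp(-mu * t), a / mu
G2 = (1 - beta2 + z * beta2) ** i0 * math.exp(delta2 * (1 - beta2) * (z - 1))
poisson_limit = math.exp(delta2 * (z - 1))
```

In both cases the deviation from the limit is of order β = e^{−(μ−λ)t}, which is negligible at t = 20.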
From (36), we know that u_j is the Poisson distribution with parameter δ, and so

u_j = δ^j e^{−δ} / j!,  j ≥ 0.

Case 3: 0 < μ < λ, 0 = μ < λ, or 0 < λ = μ. In these three situations we have

G_i(z, t) → 0,  (37)

and so

u_j = lim_{t→∞} P_ij(t) = 0.  (38)

From (38) and (13)(a), we know that {u_j} is an invariant measure.

References

Anderson, W. J. (1991). Continuous-Time Markov Chains: An Applications-Oriented Approach. Springer, New York.
Cavender, J. A. (1978). Quasi-stationary distributions of birth-and-death processes. Adv. Appl. Prob., 10.
Clancy, D. and Pollett, P. K. (2003). A note on quasi-stationary distributions of birth-death processes and the SIS logistic epidemic. J. Appl. Prob., 40.
Darlington, S. J. and Pollett, P. K. (2000). Quasistationarity in continuous-time Markov chains where absorption is not certain. J. Appl. Prob., 37.
Ferrari, P. A., Kesten, H., Martinez, S. and Picco, P. (1995). Existence of quasi-stationary distributions. A renewal dynamical approach. Ann. Prob., 23.
Flaspohler, D. C. (1974). Quasi-stationary distributions in absorbing continuous-time denumerable Markov chains. Ann. Inst. Statist. Math., 26.
Karlin, S. and McGregor, J. L. (1958). Linear growth, birth and death processes. J. Math. Mech. (now Indiana Univ. Math. J.), 7.
Karlin, S. and Tavaré, S. (1982). Linear birth and death processes with killing. J. Appl. Prob., 19.
Kemperman, J. H. B. (1962). An analytical approach to the differential equations of the birth-and-death process. Michigan Math. J., 9.
Moler, J. A., Plo, F. and San Miguel, M. (2000). Minimal quasi-stationary distributions under null R-recurrence. Test, 9.
Pakes, A. G. (1995). Quasi-stationary laws for Markov processes: examples of an always proximate absorbing state. Adv. Appl. Prob., 27.
Pollett, P. K. (1988). Reversibility, invariance and μ-invariance. Adv. Appl. Prob., 20.
Reuter, G. E. H. (1957). Denumerable Markov processes and the associated contraction semi-groups on l. Acta Math., 97.
Schoutens, W. (2000).
Birth and death processes, orthogonal polynomials and limiting conditional distributions. Math. Sci., 25.
Van Doorn, E. A. (1991). Quasi-stationary distributions and convergence to quasi-stationarity of birth-death processes. Adv. Appl. Prob., 23.
Vere-Jones, D. (1969). Some limit theorems for evanescent processes. Austral. J. Statist., 11.
Markov chains c A. J. Ganesh, University of Bristol, 2015 1 Discrete time Markov chains Example: A drunkard is walking home from the pub. There are n lampposts between the pub and his home, at each of
More informationPERTURBATION ANALYSIS FOR DENUMERABLE MARKOV CHAINS WITH APPLICATION TO QUEUEING MODELS
Adv Appl Prob 36, 839 853 (2004) Printed in Northern Ireland Applied Probability Trust 2004 PERTURBATION ANALYSIS FOR DENUMERABLE MARKOV CHAINS WITH APPLICATION TO QUEUEING MODELS EITAN ALTMAN and KONSTANTIN
More informationContinuous time Markov chains
Chapter 2 Continuous time Markov chains As before we assume that we have a finite or countable statespace I, but now the Markov chains X {X(t) : t } have a continuous time parameter t [, ). In some cases,
More informationStochastic Processes
Stochastic Processes 8.445 MIT, fall 20 Mid Term Exam Solutions October 27, 20 Your Name: Alberto De Sole Exercise Max Grade Grade 5 5 2 5 5 3 5 5 4 5 5 5 5 5 6 5 5 Total 30 30 Problem :. True / False
More informationMinimal quasi-stationary distribution approximation for a birth and death process
Minimal quasi-stationary distribution approximation for a birth and death process Denis Villemonais To cite this version: Denis Villemonais. Minimal quasi-stationary distribution approximation for a birth
More informationOn hitting times and fastest strong stationary times for skip-free and more general chains
On hitting times and fastest strong stationary times for skip-free and more general chains James Allen Fill 1 Department of Applied Mathematics and Statistics The Johns Hopkins University jimfill@jhu.edu
More informationLecture 11: Introduction to Markov Chains. Copyright G. Caire (Sample Lectures) 321
Lecture 11: Introduction to Markov Chains Copyright G. Caire (Sample Lectures) 321 Discrete-time random processes A sequence of RVs indexed by a variable n 2 {0, 1, 2,...} forms a discretetime random process
More informationCensoring Technique in Studying Block-Structured Markov Chains
Censoring Technique in Studying Block-Structured Markov Chains Yiqiang Q. Zhao 1 Abstract: Markov chains with block-structured transition matrices find many applications in various areas. Such Markov chains
More informationAn Algebraic Interpretation of the Multiplicity Sequence of an Algebraic Branch
An Algebraic Interpretation of the Multiplicity Sequence of an Algebraic Branch Une Interprétation Algébrique de la Suite des Ordres de Multiplicité d une Branche Algébrique Proc. London Math. Soc. (2),
More informationImprovements on Geometric Means Related to the Tracy-Singh Products of Positive Matrices
MATEMATIKA, 2005, Volume 21, Number 2, pp. 49 65 c Department of Mathematics, UTM. Improvements on Geometric Means Related to the Tracy-Singh Products of Positive Matrices 1 Adem Kilicman & 2 Zeyad Al
More informationGENERALIZED CANTOR SETS AND SETS OF SUMS OF CONVERGENT ALTERNATING SERIES
Journal of Applied Analysis Vol. 7, No. 1 (2001), pp. 131 150 GENERALIZED CANTOR SETS AND SETS OF SUMS OF CONVERGENT ALTERNATING SERIES M. DINDOŠ Received September 7, 2000 and, in revised form, February
More informationON THE SIMILARITY OF CENTERED OPERATORS TO CONTRACTIONS. Srdjan Petrović
ON THE SIMILARITY OF CENTERED OPERATORS TO CONTRACTIONS Srdjan Petrović Abstract. In this paper we show that every power bounded operator weighted shift with commuting normal weights is similar to a contraction.
More informationCompression on the digital unit sphere
16th Conference on Applied Mathematics, Univ. of Central Oklahoma, Electronic Journal of Differential Equations, Conf. 07, 001, pp. 1 4. ISSN: 107-6691. URL: http://ejde.math.swt.edu or http://ejde.math.unt.edu
More informationChapter 5. Continuous-Time Markov Chains. Prof. Shun-Ren Yang Department of Computer Science, National Tsing Hua University, Taiwan
Chapter 5. Continuous-Time Markov Chains Prof. Shun-Ren Yang Department of Computer Science, National Tsing Hua University, Taiwan Continuous-Time Markov Chains Consider a continuous-time stochastic process
More informationFIXED POINTS OF MEROMORPHIC SOLUTIONS OF HIGHER ORDER LINEAR DIFFERENTIAL EQUATIONS
Annales Academiæ Scientiarum Fennicæ Mathematica Volumen 3, 2006, 9 2 FIXED POINTS OF MEROMORPHIC SOLUTIONS OF HIGHER ORDER LINEAR DIFFERENTIAL EQUATIONS Liu Ming-Sheng and Zhang Xiao-Mei South China Normal
More informationDiscrete time Markov chains. Discrete Time Markov Chains, Limiting. Limiting Distribution and Classification. Regular Transition Probability Matrices
Discrete time Markov chains Discrete Time Markov Chains, Limiting Distribution and Classification DTU Informatics 02407 Stochastic Processes 3, September 9 207 Today: Discrete time Markov chains - invariant
More informationMatrices with convolutions of binomial functions and Krawtchouk matrices
Available online at wwwsciencedirectcom Linear Algebra and its Applications 429 (2008 50 56 wwwelseviercom/locate/laa Matrices with convolutions of binomial functions and Krawtchouk matrices orman C Severo
More information2. Transience and Recurrence
Virtual Laboratories > 15. Markov Chains > 1 2 3 4 5 6 7 8 9 10 11 12 2. Transience and Recurrence The study of Markov chains, particularly the limiting behavior, depends critically on the random times
More informationLecture Notes 7 Random Processes. Markov Processes Markov Chains. Random Processes
Lecture Notes 7 Random Processes Definition IID Processes Bernoulli Process Binomial Counting Process Interarrival Time Process Markov Processes Markov Chains Classification of States Steady State Probabilities
More informationThe Distribution of Mixing Times in Markov Chains
The Distribution of Mixing Times in Markov Chains Jeffrey J. Hunter School of Computing & Mathematical Sciences, Auckland University of Technology, Auckland, New Zealand December 2010 Abstract The distribution
More informationIrreducibility. Irreducible. every state can be reached from every other state For any i,j, exist an m 0, such that. Absorbing state: p jj =1
Irreducibility Irreducible every state can be reached from every other state For any i,j, exist an m 0, such that i,j are communicate, if the above condition is valid Irreducible: all states are communicate
More information2 Discrete-Time Markov Chains
2 Discrete-Time Markov Chains Angela Peace Biomathematics II MATH 5355 Spring 2017 Lecture notes follow: Allen, Linda JS. An introduction to stochastic processes with applications to biology. CRC Press,
More informationEXAM IN COURSE TMA4265 STOCHASTIC PROCESSES Wednesday 7. August, 2013 Time: 9:00 13:00
Norges teknisk naturvitenskapelige universitet Institutt for matematiske fag Page 1 of 7 English Contact: Håkon Tjelmeland 48 22 18 96 EXAM IN COURSE TMA4265 STOCHASTIC PROCESSES Wednesday 7. August, 2013
More informationMATH 564/STAT 555 Applied Stochastic Processes Homework 2, September 18, 2015 Due September 30, 2015
ID NAME SCORE MATH 56/STAT 555 Applied Stochastic Processes Homework 2, September 8, 205 Due September 30, 205 The generating function of a sequence a n n 0 is defined as As : a ns n for all s 0 for which
More informationMulti Stage Queuing Model in Level Dependent Quasi Birth Death Process
International Journal of Statistics and Systems ISSN 973-2675 Volume 12, Number 2 (217, pp. 293-31 Research India Publications http://www.ripublication.com Multi Stage Queuing Model in Level Dependent
More informationSurvival in a quasi-death process
Survival in a quasi-death process Erik A. van Doorn a and Philip K. Pollett b a Department of Applied Mathematics, University of Twente P.O. Box 217, 7500 AE Enschede, The Netherlands E-mail: e.a.vandoorn@utwente.nl
More informationBudapest University of Tecnology and Economics. AndrásVetier Q U E U I N G. January 25, Supported by. Pro Renovanda Cultura Hunariae Alapítvány
Budapest University of Tecnology and Economics AndrásVetier Q U E U I N G January 25, 2000 Supported by Pro Renovanda Cultura Hunariae Alapítvány Klebelsberg Kunó Emlékére Szakalapitvány 2000 Table of
More informationRENEWAL THEORY STEVEN P. LALLEY UNIVERSITY OF CHICAGO. X i
RENEWAL THEORY STEVEN P. LALLEY UNIVERSITY OF CHICAGO 1. RENEWAL PROCESSES A renewal process is the increasing sequence of random nonnegative numbers S 0,S 1,S 2,... gotten by adding i.i.d. positive random
More informationSTABILIZATION OF AN OVERLOADED QUEUEING NETWORK USING MEASUREMENT-BASED ADMISSION CONTROL
First published in Journal of Applied Probability 43(1) c 2006 Applied Probability Trust STABILIZATION OF AN OVERLOADED QUEUEING NETWORK USING MEASUREMENT-BASED ADMISSION CONTROL LASSE LESKELÄ, Helsinki
More informationStochastic modelling of epidemic spread
Stochastic modelling of epidemic spread Julien Arino Department of Mathematics University of Manitoba Winnipeg Julien Arino@umanitoba.ca 19 May 2012 1 Introduction 2 Stochastic processes 3 The SIS model
More informationExact Simulation of the Stationary Distribution of M/G/c Queues
1/36 Exact Simulation of the Stationary Distribution of M/G/c Queues Professor Karl Sigman Columbia University New York City USA Conference in Honor of Søren Asmussen Monday, August 1, 2011 Sandbjerg Estate
More information