The SIS and SIR stochastic epidemic models revisited


1 The SIS and SIR stochastic epidemic models revisited
Jesús Artalejo
Faculty of Mathematics, Complutense University of Madrid, Madrid, Spain
jesus_artalejo@mat.ucm.es
BCAM Talk, June 16, 2011

2 Talk Schedule
Part I. Quasi-stationary and ratio of expectations distributions
1. Introduction
2. Quasi-stationarity
3. RE-distribution
4. Application to the SIS stochastic model
5. Application to the SIR stochastic model

3 Part II. Extinction time and time to infection
1. Introduction
2. The SIS model: Time to infection
3. The SIR model: Extinction time

4 Part I. Quasi-stationary and ratio of expectations distributions
This first part of the talk is based on the paper:
Artalejo, J.R. and Lopez-Herrero, M.J. (2010) Quasi-stationary and ratio of expectations distributions: A comparative study. Journal of Theoretical Biology 266 (2010).

5 1. Introduction
We deal with continuous-time Markov chains {X(t); t ≥ 0} (CTMCs) with state space S = S_A ∪ S_T, where S_T is a set of transient states and S_A is a set of absorbing states (e.g. S_A = {0}).
The stationary distribution is degenerate, so the fundamental problem is that probabilistic measures of the system behavior before absorption are needed. Analogues of the stationary distribution of the irreducible case should be considered and compared.

6 In the spirit of the seminal work by Darroch and Seneta (1967), we will consider two possibilities:
Quasi-stationary distribution.
Ratio of expectations (RE) distribution.
A comparison between the two distributions is justified only if the convergence to the quasi-stationary regime is relatively fast.

7 2. Quasi-stationarity
The starting point is the conditional probability
u_i(t) = P{X(t) = i | T > t}, i ∈ S_T,
where T = sup{t ≥ 0 : X(t) ∈ S_T} denotes the absorption time.
Definition 1. Suppose that the chain starts with the initial distribution a_i = P{X(0) = i}, i ∈ S_T. If there exists a starting distribution a_i = u_i such that
P{X(t) = i | T > t} = u_i, i ∈ S_T, for all t ≥ 0,
then u = {u_i; i ∈ S_T} is called a quasi-stationary distribution.

8 There also exists a limiting interpretation, which states that
lim_{t→∞} P{X(t) = i | T > t} = u_i, i ∈ S_T,
independently of the initial distribution {a_i; i ∈ S_T}.
The above limiting result shows that the quasi-stationary distribution is a good measure of the system dynamics before absorption, but restricted to those realizations in which the time to absorption is sufficiently large.

9 The existence and computation of the quasi-stationary distribution becomes difficult when S_T is infinite.
A. S_T is finite and irreducible
There exists a unique quasi-stationary distribution, but an analytical (explicit-form) solution exists only in a few special cases.
Example 1. If {X(t); t ≥ 0} is a birth and death process with S_A = {0}, then
(1 − δ_{iN}) μ_{i+1} u_{i+1} − (λ_i + μ_i) u_i + λ_{i−1} u_{i−1} = −μ_1 u_1 u_i, 1 ≤ i ≤ N,
where S = {0,...,N} and λ_0 = 0. The above non-linear equation also holds when S = ℕ.

10 Particularizations for the case S = ℕ
1. The case of homogeneous rates: λ_0 = 0, λ_i = λ and μ_i = μ, i ≥ 1. We have S_A = {0}, S_T = ℕ \ {0} and
u_i = (1 − √R)² i (√R)^{i−1}, i ≥ 1, for R = λ/μ < 1.
2. The linear growth model: λ_i = iλ, i ≥ 0, and μ_i = iμ, i ≥ 1. Once more, we have S_A = {0}. The quasi-stationary distribution is a geometric law
u_i = (1 − R) R^{i−1}, i ≥ 1, for R = λ/μ < 1.
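
The linear growth result can be checked numerically. The sketch below (the rate values are ours, chosen for illustration) verifies that the geometric law satisfies the non-linear balance equation of Example 1, with absorption rate α = μ_1 u_1:

```python
# Check (illustrative rates): the geometric law u_i = (1 - R) R**(i - 1)
# solves the quasi-stationary equation of the linear growth model,
#   lam_{i-1} u_{i-1} - (lam_i + mu_i) u_i + mu_{i+1} u_{i+1} = -mu_1 u_1 u_i.
lam, mu = 0.6, 1.0
R = lam / mu

def u(i):
    return (1 - R) * R ** (i - 1)

for i in range(1, 30):
    lhs = ((i - 1) * lam * u(i - 1) if i > 1 else 0.0)   # lambda_0 = 0
    lhs += -i * (lam + mu) * u(i) + (i + 1) * mu * u(i + 1)
    rhs = -mu * u(1) * u(i)                               # alpha = mu_1 u_1
    assert abs(lhs - rhs) < 1e-12
print("geometric quasi-stationary law verified")
```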

11 Computation of the quasi-stationary distribution
Q_ST: sub-generator associated to the transient states S_T.
−α: eigenvalue of Q_ST with maximal real part (α is real and positive).
−α_0: eigenvalue with the next largest real part (α_0 denotes minus its real part).
The vector u = {u_i; i ∈ S_T} is a left eigenvector corresponding to −α.
α is a rate of convergence to absorption, while α_0 − α is a rate of convergence to the quasi-stationary regime.
The quasi-stationary distribution should be used only if R_u = 2α/α_0 < 1, i.e. α < α_0 − α.
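
As an illustration (not from the slides; rates and truncation level are ours), the quasi-stationary distribution of a finite birth and death chain can be obtained directly as the normalized left eigenvector of Q_ST. For the linear growth model truncated at a moderately large N it reproduces the geometric law of slide 10:

```python
# Quasi-stationary distribution via the left eigenvector of the
# sub-generator Q_ST (a sketch; parameter values are illustrative).
import numpy as np

def quasi_stationary(lam, mu):
    """lam[i-1], mu[i-1]: birth/death rates of transient state i, 1 <= i <= N."""
    N = len(lam)
    Q = np.diag(-(lam + mu)) + np.diag(lam[:-1], 1) + np.diag(mu[1:], -1)
    w, V = np.linalg.eig(Q.T)          # columns of V: left eigenvectors of Q
    k = np.argmax(w.real)              # eigenvalue -alpha of maximal real part
    u = np.abs(V[:, k].real)
    return u / u.sum(), -w[k].real     # (u, alpha)

# Linear growth model truncated at N = 60: compare with u_i = (1 - R) R^(i-1).
N, lam0, mu0 = 60, 0.5, 1.0
i = np.arange(1, N + 1)
u, alpha = quasi_stationary(lam0 * i, mu0 * i)
R = lam0 / mu0
print(np.abs(u - (1 - R) * R ** (i - 1)).max())   # small truncation error
print(alpha, mu0 * u[0])                          # alpha equals mu_1 u_1
```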

12 Approximations of the quasi-stationary distribution
{X(t); t ≥ 0} is a birth and death process with S = {0,...,N} and S_A = {0}.
A first approximation is based on a birth and death process X^(0)(t) with the same rates, except that μ^(0)_1 = 0. The resulting limiting probabilities are given by
p^(0)_i = p^(0)_1 π_i, 1 ≤ i ≤ N, with p^(0)_1 = (Σ_{i=1}^{N} π_i)^{−1},
where π_1 = 1 and π_i = Π_{k=1}^{i−1} λ_k/μ_{k+1}, 2 ≤ i ≤ N.

13 A second approximation uses a birth and death process X^(1)(t) with the same birth rates but μ^(1)_i = μ_{i−1}, 1 ≤ i ≤ N. The new chain is positive recurrent and the limiting probabilities are as follows:
p^(1)_i = p^(1)_1 τ_i, 1 ≤ i ≤ N, with p^(1)_1 = (Σ_{i=1}^{N} τ_i)^{−1},
where τ_1 = 1 and τ_i = Π_{k=1}^{i−1} λ_k/μ_k, 2 ≤ i ≤ N.
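
Both approximations are straightforward to evaluate from the products π_i and τ_i. The sketch below (the function name and parameter values are ours) computes p^(0) and p^(1) for SIS-type rates:

```python
# The two approximating distributions p^(0) and p^(1) (a sketch).
import numpy as np

def approximations(lam, mu):
    """lam[i-1], mu[i-1]: rates of state i; returns (p0, p1)."""
    N = len(lam)
    pi = np.ones(N)
    tau = np.ones(N)
    for i in range(1, N):
        pi[i] = pi[i - 1] * lam[i - 1] / mu[i]        # pi_i = prod_k lam_k / mu_{k+1}
        tau[i] = tau[i - 1] * lam[i - 1] / mu[i - 1]  # tau_i = prod_k lam_k / mu_k
    return pi / pi.sum(), tau / tau.sum()

# SIS rates: lam_i = (beta/N) i (N - i), mu_i = gamma i (illustrative values)
N, beta, gamma = 100, 1.5, 1.0
i = np.arange(1, N + 1)
p0, p1 = approximations(beta / N * i * (N - i), gamma * i)
```

For the linear growth model, τ_i = R^{i−1}, so p^(1) reduces to the (truncated) geometric quasi-stationary law of slide 10.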

14 Recursive computation of the quasi-stationary distribution
{X(t); t ≥ 0} is a birth and death process with S = {0,...,N} and S_A = {0}. Then, we have
u_i = u_1 π_i Σ_{k=1}^{i} [1 − (1 − δ_{k1}) Σ_{j=1}^{k−1} u_j] / τ_k, 2 ≤ i ≤ N,
Σ_{i=1}^{N} u_i = 1.

15 Algorithm 1.
Step 1. Start with u^(0)_i = p^(0)_i (or p^(1)_i), 1 ≤ i ≤ N.
Step 2. For i = 2,...,N, compute
u^(1)_i = u^(0)_1 π_i Σ_{k=1}^{i} [1 − (1 − δ_{k1})(u^(0)_1 + (1 − δ_{k2}) Σ_{j=2}^{k−1} u^(1)_j)] / τ_k.
Step 3. Compute the normalization
u^(1)_1 = u^(0)_1 / (u^(0)_1 + Σ_{i=2}^{N} u^(1)_i), u^(1)_i = u^(1)_i / (u^(0)_1 + Σ_{i=2}^{N} u^(1)_i), 2 ≤ i ≤ N.
Step 4. Increase n and repeat the same steps until max_{1≤i≤N} |u^(n)_i − u^(n−1)_i| < ε, for any given ε > 0.
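
A compact way to implement this iteration is sketched below; it uses the forward recursion μ_{i+1} u_{i+1} = λ_i u_i + μ_1 u_1 (1 − Σ_{j≤i} u_j), which is equivalent to the π/τ form of slide 14 (the function name, starting distribution and rates are ours, chosen for illustration):

```python
# Iterative computation of the quasi-stationary distribution (a sketch).
import numpy as np

def qsd_iterative(lam, mu, eps=1e-13, max_iter=100_000):
    """lam[i-1], mu[i-1]: rates of transient state i, 1 <= i <= N."""
    N = len(lam)
    u = np.full(N, 1.0 / N)                  # Step 1: any starting distribution
    for _ in range(max_iter):
        new = np.empty(N)
        new[0] = u[0]
        c = np.cumsum(u)                     # partial sums of the current iterate
        for i in range(1, N):                # Step 2: forward recursion
            new[i] = (lam[i - 1] * new[i - 1]
                      + mu[0] * new[0] * (1.0 - c[i - 1])) / mu[i]
        new /= new.sum()                     # Step 3: normalization
        if np.abs(new - u).max() < eps:      # Step 4: stopping criterion
            break
        u = new
    return new

# SIS example (illustrative parameters)
N, beta, gamma = 50, 1.5, 1.0
i = np.arange(1, N + 1)
u = qsd_iterative(beta / N * i * (N - i), gamma * i)
```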

16 B. S_T is finite but reducible (van Doorn and Pollett, 2008)
Suppose that S_T consists of L communicating classes S_k, for 1 ≤ k ≤ L. A partial order on {S_k; 1 ≤ k ≤ L} is defined by writing S_i ≼ S_j when class S_i is accessible from S_j.
Let −α_k be the eigenvalue with maximal real part of the sub-generator Q_k corresponding to the states in S_k. Then, the eigenvalue of Q_ST with maximal real part is −α, where α = min_{1≤k≤L} α_k. We also define I(α) = {k : α_k = α} and a(α) = min I(α).
Theorem 1. If −α has geometric multiplicity one, then the Markov chain has a unique quasi-stationary distribution {u_i; i ∈ S_T}. The j-th component of {u_i; i ∈ S_T} is positive if and only if state j is accessible from S_{a(α)}.
A simple necessary and sufficient condition for −α to have geometric multiplicity one is that {S_k; k ∈ I(α)} is linearly ordered, that is, S_i ≼ S_j or S_j ≼ S_i, for all i, j ∈ I(α).

17 Example 2. The quasi-stationary distribution of the pure death process on S = {0, 1, 2}, with absorbing state 0 and μ_i > 0 for i ∈ S_T = {1, 2}, is given by
(u_1, u_2) = (1, 0), if μ_1 ≤ μ_2,
(u_1, u_2) = (μ_2/μ_1, 1 − μ_2/μ_1), if μ_2 < μ_1.
In particular, if μ_1 = μ_2 = μ then all the quasi-stationary mass is concentrated at the state i = 1.

18 3. RE-distribution
Let T_j be the time that the CTMC spends in state j ∈ S_T before absorption.
Definition 2. Given that X(0) = i ∈ S_T, we define the RE-distribution, P_i = (P_i(j)), as follows:
P_i(j) = E_i[T_j] / E_i[T], i, j ∈ S_T.
The ratio of means distribution (Darroch and Seneta, 1967) corresponds to the unconditional version:
P(j) = Σ_{i∈S_T} a_i E_i[T_j] / Σ_{i∈S_T} a_i E_i[T], j ∈ S_T.

19 Remarks and computation
The RE-distribution always exists provided that the expected time to absorption satisfies E_i[T] < ∞.
The RE-distribution assigns positive probability to every state j accessible from the initial state i.
Let us construct the ideal replicated model obtained by assuming that at each extinction the biological model restarts in the same initial state i ∈ S_T. The stationary distribution of this replicated (regenerative) model amounts to the RE-distribution P_i.
For a fixed j ∈ S_T, a first-step argument yields
E_i[T_j] = δ_{ij}/q_i + Σ_{k∈S_T, k≠i} (q_{ik}/q_i) E_k[T_j], i ∈ S_T,
where q_{ik} denotes the transition rate from i to k and q_i the total rate out of state i.

20 4. Application to the SIS stochastic model
Closed population model of N individuals.
Each individual is classified either as susceptible or infective.
Susceptibles can be infected; they then recover and return to the susceptible pool.
Evolution of the epidemic: birth and death process {I(t); t ≥ 0}.
I(t): number of infective individuals at time t.
S = {0, 1,...,N} (0 is an absorbing state).

21 Classical SIS rates
Infection rate: λ_i = (β/N) i (N − i).
Recovery rate: μ_i = γ i.
R_0 = β/γ denotes the transmission factor.
Figure 1. States and transitions of the birth and death model.

22 In the case of a birth and death process, we have
E_0[T_j] = 0,
(λ_i + μ_i) E_i[T_j] = μ_i E_{i−1}[T_j] + λ_i E_{i+1}[T_j], i ≠ j, 1 ≤ i ≤ N,
(λ_j + μ_j) E_j[T_j] = μ_j E_{j−1}[T_j] + λ_j E_{j+1}[T_j] + 1.
Theorem 2. For the SIS epidemic model, the RE-distribution reduces to
P_i(j) = [(1/j) Σ_{k=1}^{min(i,j)} (R_0/N)^{j−k} (N−k)!/(N−j)!] / [Σ_{l=1}^{N} (1/l) Σ_{k=1}^{min(i,l)} (R_0/N)^{l−k} (N−k)!/(N−l)!], 1 ≤ j ≤ N.
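
The closed form of Theorem 2 can be evaluated directly; the sketch below (the function name and parameter values are ours) computes P_i for the SIS model. For moderate N it can be cross-checked against the solution of the linear system above for E_i[T_j]:

```python
# RE-distribution of Theorem 2 for the SIS model (a sketch).
import numpy as np
from math import factorial

def re_distribution(i, N, R0):
    """P_i(j) proportional to (1/j) sum_{k=1}^{min(i,j)} (R0/N)^(j-k) (N-k)!/(N-j)!."""
    w = np.empty(N)
    for j in range(1, N + 1):
        s = sum((R0 / N) ** (j - k) * factorial(N - k) / factorial(N - j)
                for k in range(1, min(i, j) + 1))
        w[j - 1] = s / j
    return w / w.sum()

P1 = re_distribution(i=1, N=20, R0=1.5)   # starting from a single infective
```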

23 Stochastic ordering relationships
For a birth and death process with S = {0,...,N} and S_A = {0}:
P_1 = p^(0),
p^(0) ≤_st u,
P_1 ≤_st u ≤_st P_N,
P_i ≤_st P_{i′}, 1 ≤ i ≤ i′ ≤ N,
where X ≤_st Y ⟺ F_X(x) ≥ F_Y(x), for all x ∈ ℝ.

24 For the SIS model:
u ≤_st p^(1),
P_N ≤_st p^(1), for N fixed and R_0 sufficiently large,
p^(1) ≤_st P_N, for N fixed and R_0 sufficiently small.

25 Numerical example
N = 100, γ = 1 and several choices of R_0 = β.
R_u < 1, so the use of u is meaningful.
p̂ and p̃ are mixtures of the RE-distributions.
‖p^(1) − u‖ = max_{1≤j≤N} |p^(1)_j − u_j| (maximum pointwise distance).
Table 1. Distances ‖p^(1) − u‖, ‖P_1 − u‖, ‖P_N − u‖, ‖p̂ − u‖ and ‖p̃ − u‖ with respect to u, for each choice of R_0.

26 5. Application to the SIR stochastic model
Closed population model of N individuals.
Individuals are classified as susceptible, infective or removed.
Susceptibles can be infected; they then recover and become immune.
Evolution of the epidemic: bidimensional CTMC {(I(t), S(t)); t ≥ 0}.
I(t): number of infective individuals at time t (I(0) = m ≥ 1).
S(t): number of susceptibles at time t (S(0) = n).
S = {(i, j); 0 ≤ j ≤ n, 0 ≤ i ≤ m + n − j} (S_A = {(0, j); 0 ≤ j ≤ n}).

27 Classical SIR rates
Infection rate: λ_ij = (β/N) i j.
Recovery rate: μ_i = γ i.
Figure 2. States and transitions of the SIR epidemic model.

28 The quasi-stationary probabilities are
u_(i,j) = δ_(1,0),(i,j), 0 ≤ j ≤ n, 1 ≤ i ≤ m + n − j,
because min_{(i,j)∈S_T} (λ_ij + μ_i) = λ_10 + μ_1 = γ, for μ_i = γ i and 1 ≤ i ≤ m + n.
The RE-distribution of the SIR epidemic model is given by
P_(m,n)(i, j) = [A_ij/(λ_ij + μ_i)] / [Σ_{l=0}^{n} Σ_{k=1}^{m+n−l} A_kl/(λ_kl + μ_k)], (i, j) ∈ S_T,
where A_ij is the probability of reaching the state (i, j) ∈ S_T, starting from (m, n), before extinction occurs.

29 Algorithm 2.
Step 1. Set A_ij = δ_(i,j),(m,n).
Step 2. For i = m − 1, m − 2,..., 0, calculate
A_in = A_{i+1,n} μ_{i+1} / (λ_{i+1,n} + μ_{i+1}).
Step 3. For k = 1, calculate
3.a. A_{m+k,n−k} = A_{m+k−1,n−k+1} λ_{m+k−1,n−k+1} / (λ_{m+k−1,n−k+1} + μ_{m+k−1}).

30 3.b. For i = m + k − 1, m + k − 2,..., 2, compute
A_{i,n−k} = A_{i+1,n−k} μ_{i+1} / (λ_{i+1,n−k} + μ_{i+1}) + A_{i−1,n−k+1} λ_{i−1,n−k+1} / (λ_{i−1,n−k+1} + μ_{i−1}).
3.c. For i = 1 and i = 0 (in this order), calculate
A_{i,n−k} = A_{i+1,n−k} μ_{i+1} / (λ_{i+1,n−k} + μ_{i+1}).
Step 4. Set k = k + 1. If k ≤ n, go to Step 3.a.
This scheme also gives a method for computing the final size of the epidemic:
A_0j: the probability that j susceptible individuals survive.
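
Algorithm 2 translates directly into code. The sketch below (the function name and parameter values are ours; the rates follow the classical SIR slide) computes the reach probabilities A_ij and, in particular, the final-size distribution A_0j:

```python
# Reach probabilities A[i, j] of the SIR chain, following Algorithm 2 (a sketch).
import numpy as np

def reach_probabilities(m, n, beta, gamma, N):
    """A[i, j]: probability of reaching state (i, j) from (m, n) before extinction."""
    lam = lambda i, j: beta / N * i * j
    mu = lambda i: gamma * i
    A = np.zeros((m + n + 1, n + 1))
    A[m, n] = 1.0                                   # Step 1
    for i in range(m - 1, -1, -1):                  # Step 2: column j = n
        A[i, n] = A[i + 1, n] * mu(i + 1) / (lam(i + 1, n) + mu(i + 1))
    for k in range(1, n + 1):                       # Steps 3-4: columns j = n - k
        j = n - k
        # 3.a: topmost reachable state of the column
        A[m + k, j] = A[m + k - 1, j + 1] * lam(m + k - 1, j + 1) / \
                      (lam(m + k - 1, j + 1) + mu(m + k - 1))
        for i in range(m + k - 1, 1, -1):           # 3.b
            A[i, j] = A[i + 1, j] * mu(i + 1) / (lam(i + 1, j) + mu(i + 1)) \
                    + A[i - 1, j + 1] * lam(i - 1, j + 1) / (lam(i - 1, j + 1) + mu(i - 1))
        for i in (1, 0):                            # 3.c
            A[i, j] = A[i + 1, j] * mu(i + 1) / (lam(i + 1, j) + mu(i + 1))
    return A

A = reach_probabilities(m=1, n=10, beta=1.5, gamma=1.0, N=11)
print(A[0, :])        # final-size distribution: j susceptibles survive
```

Since the chain is eventually absorbed at exactly one state (0, j), the row A[0, :] sums to one.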

31 Remarks
The RE-distribution gives positive probability to all the transient states of S_T. Once a state has been visited, the system leaves it forever.
It supports the idea that the extinction time is reached within a relatively rapid period.
We observe that R_u ≥ 1. More concretely, we have
R_u = 1, if γ ≤ β/N,
R_u = 2γ/(β/N + γ), if γ > β/N.

32 Part II. Extinction time and time to infection
This second part of the talk is based on the paper:
Artalejo, J.R., Economou, A. and Lopez-Herrero, M.J. (2011) Stochastic epidemic models revisited: Analysis of some continuous performance measures. Journal of Biological Dynamics (forthcoming).

33 1. Introduction
Let {X(t); t ≥ 0} be a CTMC with finite state space S = S_A ∪ S_T.
L: the unconditional first passage time to the absorbing set S_A, given that the initial distribution is α.
The general theory says that L follows a PH distribution, so we have
P{L ≤ x} = 1 − α exp{Q_ST x} e, x ≥ 0.
If M = Q_ST is invertible, then P{L < ∞} = 1,
ϕ(s) = E[e^{−sL}] = −α(sI − M)^{−1} M e,
E[L^k] = k! α(−M^{−1})^k e, for k ≥ 1.
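
These PH formulas can be evaluated with plain linear algebra. The sketch below (function names and the SIS parameter values are ours) computes the moments E[L^k] from powers of −M^{−1}, and P{L ≤ x} through the eigendecomposition of M rather than a matrix-exponential routine:

```python
# Phase-type quantities for the absorption time L (a numpy-only sketch).
import numpy as np

def ph_moments(M, a, kmax=3):
    """E[L^k] = k! a (-M^{-1})^k e, for k = 1..kmax."""
    Minv = np.linalg.inv(M)
    v = np.ones(M.shape[0])
    out, fact = [], 1.0
    for k in range(1, kmax + 1):
        v = -Minv @ v              # builds (-M^{-1})^k e one factor at a time
        fact *= k
        out.append(fact * (a @ v))
    return out

def ph_cdf(M, a, x):
    """P{L <= x} = 1 - a exp(Mx) e, via the eigendecomposition of M."""
    w, V = np.linalg.eig(M)
    coef = np.linalg.solve(V, np.ones(M.shape[0]))   # V^{-1} e
    return float((1.0 - ((a @ V) * np.exp(w * x)) @ coef).real)

# SIS example: N = 20, beta = 1.5, gamma = 1, starting from one infective
N, beta, gamma = 20, 1.5, 1.0
i = np.arange(1, N + 1)
lam, mu = beta / N * i * (N - i), gamma * i
M = -np.diag(lam + mu) + np.diag(lam[:-1], 1) + np.diag(mu[1:], -1)
a = np.eye(N)[0]
print(ph_moments(M, a), ph_cdf(M, a, 5.0))
```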

34 Computation of P{L ≤ x} (or f_L(x)) and E[L^k]
The extinction time is the time to reach the absorbing set S_A.
The time to infection can be viewed as the first passage time to an auxiliary state a.
Method 1. Compute a Padé approximation to the matrix exponential exp{Mx} (MATLAB).
Method 2. Employ numerical inversion of Laplace transforms (Abate and Whitt, 1995).
Method 3. Exploit the specific matrix structure (birth and death, triangular) to avoid subtractions, matrices with positive and negative entries, etc.

35 2. The SIS model: Time to infection
At time t, the population consists of i infected individuals and N − i non-infected individuals.
Select at random one of those N − i non-infected individuals.
Define S_i as the time until the selected individual gets infected.
Trivial cases: S_0 = ∞; the case i = N makes no sense (there is no non-infected individual to select).

36 For a given i, 0 ≤ i ≤ N − 1, we define
v_i = P{S_i < ∞},
ψ_i(s) = E[e^{−s S_i} 1_{{S_i < ∞}}], Re(s) ≥ 0,
M^k_i = E[S^k_i 1_{{S_i < ∞}}], k ≥ 0,
where 1_A is the indicator random variable of the set A.
The probabilities v_i are strictly between 0 and 1 because
P{S_i < ∞} ≥ (λ_i/(λ_i + μ_i)) ··· (λ_{N−1}/(λ_{N−1} + μ_{N−1})) > 0,
P{S_i = ∞} ≥ (μ_i/(λ_i + μ_i)) ··· (μ_1/(λ_1 + μ_1)) > 0.

37 Three possibilities for the next event:
recovery;
infection of a non-tagged individual;
infection of the selected individual.
A first-step argument leads to the tri-diagonal system
ψ_0(s) = 0,
ψ_i(s) = [μ_i/(s + λ_i + μ_i)] ψ_{i−1}(s) + [(N − i − 1)/(N − i)] [λ_i/(s + λ_i + μ_i)] ψ_{i+1}(s) + [1/(N − i)] [λ_i/(s + λ_i + μ_i)], 1 ≤ i ≤ N − 1.

38 Theorem 3. The Laplace transforms ψ_i(s) are computed by
ψ_{N−1}(s) = 2[λ_{N−1}(s + g_{N−2} + λ_{N−2}) + μ_{N−1} D_{N−2}] / [2(s + λ_{N−1})(s + g_{N−2} + λ_{N−2}) + μ_{N−1}(2(s + g_{N−2}) + λ_{N−2})],
ψ_i(s) = Σ_{k=i}^{N−2} [D_k (N − k)/(λ_k (N − k − 1))] Π_{n=i}^{k} [λ_n (N − n − 1)]/[(N − n)(s + g_n + λ_n)] + ψ_{N−1}(s) Π_{k=i}^{N−2} [λ_k (N − k − 1)]/[(N − k)(s + g_k + λ_k)], 1 ≤ i ≤ N − 2,
where the coefficients are given by the recursive scheme
g_1 = μ_1, D_1 = λ_1/(N − 1),
g_i = μ_i [1 − λ_{i−1}(N − i)/((N − i + 1)(s + g_{i−1} + λ_{i−1}))],
D_i = λ_i/(N − i) + μ_i D_{i−1}/(s + g_{i−1} + λ_{i−1}), 2 ≤ i ≤ N − 2.

39 The Tauberian result f_{S_i}(0) = lim_{s→∞} s ψ_i(s) gives
f_{S_i}(0) = λ_i/(N − i), for 1 ≤ i ≤ N − 1.
Observe that v_i = M^0_i = P{S_i < ∞} = ψ_i(0), 0 ≤ i ≤ N − 1.
By differentiating the equations for ψ_i(s) k ≥ 1 times, and setting s = 0, we get
M^k_0 = 0,
(λ_i + μ_i) M^k_i = μ_i M^k_{i−1} + [(N − i − 1)/(N − i)] λ_i M^k_{i+1} + k M^{k−1}_i, 1 ≤ i ≤ N − 1.
Recursive modifications: (i) replace ψ_i(s) by M^k_i, (ii) set s = 0, (iii) use D_1 = k M^{k−1}_1 and D_i = k M^{k−1}_i + μ_i D_{i−1}(g_{i−1} + λ_{i−1})^{−1}.

40 Numerical example
Head lice infection (Stone et al. (2008)).
N = 100, ξ = 0.01 (external rate of infection), i = 1, 40, 90.
Table 2. Probability P{S_i < ∞} of becoming infected, for several choices of β and γ.

41 Figure 3. Density function of S_i when N = 100, for (i, β, γ) = (1, 1.5, 1.0), (1, 1.5, 2.0) and (2, 1.5, 1.0).

42 3. The SIR model: Extinction time
At time t, the population consists of i infected individuals and j susceptibles.
Let L_ij be the extinction time (i.e., the time to absorption by S_A).
We notice that P{L_ij < ∞} = 1, (i, j) ∈ S_T.
We define
ϕ_ij(s) = E[e^{−s L_ij}], (i, j) ∈ S, Re(s) ≥ 0,
M^k_ij = E[L^k_ij], (i, j) ∈ S, k ≥ 0.

43 Using a first-step argument we find that
ϕ_0j(s) = 1, 0 ≤ j ≤ n,
ϕ_ij(s) = [μ_i/(s + λ_ij + μ_i)] ϕ_{i−1,j}(s) + [λ_ij/(s + λ_ij + μ_i)] ϕ_{i+1,j−1}(s), 0 ≤ j ≤ n, 1 ≤ i ≤ m + n − j.
Solving the second equation for j = 0, we have
ϕ_i0(s) = [μ_i/(s + μ_i)] ϕ_{i−1,0}(s) = ··· = Π_{k=1}^{i} μ_k/(s + μ_k), 1 ≤ i ≤ m + n.
Computation proceeds in the natural order 1 ≤ j ≤ n and 1 ≤ i ≤ m + n − j.
Numerical inversion leads to f_{L_ij}(x), with f_{L_ij}(0) = δ_{i1} μ_1.

44 For determining the moments, we first notice that
M^0_{0j} = 1, M^k_{0j} = 0, 0 ≤ j ≤ n, k ≥ 1,
and for the transient states M^0_ij = 1, 0 ≤ j ≤ n, 1 ≤ i ≤ m + n − j.
Moments of order k ≥ 1 can be computed in the natural order from
M^k_{i0} = k Σ_{l=1}^{i} M^{k−1}_{l0}/μ_l, 1 ≤ i ≤ m + n, k ≥ 1,
M^k_ij = [μ_i M^k_{i−1,j} + λ_ij M^k_{i+1,j−1} + k M^{k−1}_ij]/(λ_ij + μ_i), 1 ≤ j ≤ n, 1 ≤ i ≤ m + n − j, k ≥ 1.
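
The moment recursions above can be coded directly. The sketch below (the function name and parameter values are ours; the rates follow the classical SIR slide) fills a table M[k, i, j] = E[L_ij^k] in the natural order:

```python
# Moment recursions for the SIR extinction time (a sketch).
import numpy as np

def sir_extinction_moments(m, n, beta, gamma, N, kmax=2):
    """Returns M with M[k, i, j] = E[L_ij^k] for k = 0..kmax."""
    lam = lambda i, j: beta / N * i * j
    mu = lambda i: gamma * i
    I = m + n
    M = np.zeros((kmax + 1, I + 1, n + 1))
    M[0] = 1.0                                    # zeroth moments are 1
    for k in range(1, kmax + 1):                  # M[k, 0, j] stays 0
        for i in range(1, I + 1):                 # column j = 0: sum of exponentials
            M[k, i, 0] = M[k, i - 1, 0] + k * M[k - 1, i, 0] / mu(i)
        for j in range(1, n + 1):                 # natural order in j, then i
            for i in range(1, I - j + 1):
                d = lam(i, j) + mu(i)
                M[k, i, j] = (mu(i) * M[k, i - 1, j]
                              + lam(i, j) * M[k, i + 1, j - 1]
                              + k * M[k - 1, i, j]) / d
    return M

M = sir_extinction_moments(m=1, n=10, beta=1.5, gamma=1.0, N=11)
print(M[1, 1, 10])     # mean extinction time E[L_{1,n}]
```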

45 Figure 4. Density function of L_{1,29} when γ = 5.0, for (β, γ) = (0.05, 5.0) and (10.0, 5.0).


More information

EXTINCTION AND QUASI-STATIONARITY IN THE VERHULST LOGISTIC MODEL: WITH DERIVATIONS OF MATHEMATICAL RESULTS

EXTINCTION AND QUASI-STATIONARITY IN THE VERHULST LOGISTIC MODEL: WITH DERIVATIONS OF MATHEMATICAL RESULTS EXTINCTION AND QUASI-STATIONARITY IN THE VERHULST LOGISTIC MODEL: WITH DERIVATIONS OF MATHEMATICAL RESULTS INGEMAR NÅSELL Abstract. We formulate and analyze a stochastic version of the Verhulst deterministic

More information

LIMITING PROBABILITY TRANSITION MATRIX OF A CONDENSED FIBONACCI TREE

LIMITING PROBABILITY TRANSITION MATRIX OF A CONDENSED FIBONACCI TREE International Journal of Applied Mathematics Volume 31 No. 18, 41-49 ISSN: 1311-178 (printed version); ISSN: 1314-86 (on-line version) doi: http://dx.doi.org/1.173/ijam.v31i.6 LIMITING PROBABILITY TRANSITION

More information

Endemic persistence or disease extinction: the effect of separation into subcommunities

Endemic persistence or disease extinction: the effect of separation into subcommunities Mathematical Statistics Stockholm University Endemic persistence or disease extinction: the effect of separation into subcommunities Mathias Lindholm Tom Britton Research Report 2006:6 ISSN 1650-0377 Postal

More information

CS 246 Review of Linear Algebra 01/17/19

CS 246 Review of Linear Algebra 01/17/19 1 Linear algebra In this section we will discuss vectors and matrices. We denote the (i, j)th entry of a matrix A as A ij, and the ith entry of a vector as v i. 1.1 Vectors and vector operations A vector

More information

Department of Applied Mathematics Faculty of EEMCS. University of Twente. Memorandum No Birth-death processes with killing

Department of Applied Mathematics Faculty of EEMCS. University of Twente. Memorandum No Birth-death processes with killing Department of Applied Mathematics Faculty of EEMCS t University of Twente The Netherlands P.O. Box 27 75 AE Enschede The Netherlands Phone: +3-53-48934 Fax: +3-53-48934 Email: memo@math.utwente.nl www.math.utwente.nl/publications

More information

The Transition Probability Function P ij (t)

The Transition Probability Function P ij (t) The Transition Probability Function P ij (t) Consider a continuous time Markov chain {X(t), t 0}. We are interested in the probability that in t time units the process will be in state j, given that it

More information

Birth-death chain. X n. X k:n k,n 2 N k apple n. X k: L 2 N. x 1:n := x 1,...,x n. E n+1 ( x 1:n )=E n+1 ( x n ), 8x 1:n 2 X n.

Birth-death chain. X n. X k:n k,n 2 N k apple n. X k: L 2 N. x 1:n := x 1,...,x n. E n+1 ( x 1:n )=E n+1 ( x n ), 8x 1:n 2 X n. Birth-death chains Birth-death chain special type of Markov chain Finite state space X := {0,...,L}, with L 2 N X n X k:n k,n 2 N k apple n Random variable and a sequence of variables, with and A sequence

More information

Stochastic process. X, a series of random variables indexed by t

Stochastic process. X, a series of random variables indexed by t Stochastic process X, a series of random variables indexed by t X={X(t), t 0} is a continuous time stochastic process X={X(t), t=0,1, } is a discrete time stochastic process X(t) is the state at time t,

More information

Quasi-Stationary Distributions in Linear Birth and Death Processes

Quasi-Stationary Distributions in Linear Birth and Death Processes Quasi-Stationary Distributions in Linear Birth and Death Processes Wenbo Liu & Hanjun Zhang School of Mathematics and Computational Science, Xiangtan University Hunan 411105, China E-mail: liuwenbo10111011@yahoo.com.cn

More information

Modelling Complex Queuing Situations with Markov Processes

Modelling Complex Queuing Situations with Markov Processes Modelling Complex Queuing Situations with Markov Processes Jason Randal Thorne, School of IT, Charles Sturt Uni, NSW 2795, Australia Abstract This article comments upon some new developments in the field

More information

Statistics 992 Continuous-time Markov Chains Spring 2004

Statistics 992 Continuous-time Markov Chains Spring 2004 Summary Continuous-time finite-state-space Markov chains are stochastic processes that are widely used to model the process of nucleotide substitution. This chapter aims to present much of the mathematics

More information

Probability Distributions

Probability Distributions Lecture 1: Background in Probability Theory Probability Distributions The probability mass function (pmf) or probability density functions (pdf), mean, µ, variance, σ 2, and moment generating function

More information

Multi Stage Queuing Model in Level Dependent Quasi Birth Death Process

Multi Stage Queuing Model in Level Dependent Quasi Birth Death Process International Journal of Statistics and Systems ISSN 973-2675 Volume 12, Number 2 (217, pp. 293-31 Research India Publications http://www.ripublication.com Multi Stage Queuing Model in Level Dependent

More information

Quasi-stationary distributions for discrete-state models

Quasi-stationary distributions for discrete-state models Quasi-stationary distributions for discrete-state models Erik A. van Doorn a and Philip K. Pollett b a Department of Applied Mathematics, University of Twente P.O. Box 217, 7500 AE Enschede, The Netherlands

More information

8. Statistical Equilibrium and Classification of States: Discrete Time Markov Chains

8. Statistical Equilibrium and Classification of States: Discrete Time Markov Chains 8. Statistical Equilibrium and Classification of States: Discrete Time Markov Chains 8.1 Review 8.2 Statistical Equilibrium 8.3 Two-State Markov Chain 8.4 Existence of P ( ) 8.5 Classification of States

More information

P(X 0 = j 0,... X nk = j k )

P(X 0 = j 0,... X nk = j k ) Introduction to Probability Example Sheet 3 - Michaelmas 2006 Michael Tehranchi Problem. Let (X n ) n 0 be a homogeneous Markov chain on S with transition matrix P. Given a k N, let Z n = X kn. Prove that

More information

Markov processes and queueing networks

Markov processes and queueing networks Inria September 22, 2015 Outline Poisson processes Markov jump processes Some queueing networks The Poisson distribution (Siméon-Denis Poisson, 1781-1840) { } e λ λ n n! As prevalent as Gaussian distribution

More information

MATH 56A SPRING 2008 STOCHASTIC PROCESSES 65

MATH 56A SPRING 2008 STOCHASTIC PROCESSES 65 MATH 56A SPRING 2008 STOCHASTIC PROCESSES 65 2.2.5. proof of extinction lemma. The proof of Lemma 2.3 is just like the proof of the lemma I did on Wednesday. It goes like this. Suppose that â is the smallest

More information

Stochastic Processes

Stochastic Processes Stochastic Processes 8.445 MIT, fall 20 Mid Term Exam Solutions October 27, 20 Your Name: Alberto De Sole Exercise Max Grade Grade 5 5 2 5 5 3 5 5 4 5 5 5 5 5 6 5 5 Total 30 30 Problem :. True / False

More information

INTERTWINING OF BIRTH-AND-DEATH PROCESSES

INTERTWINING OF BIRTH-AND-DEATH PROCESSES K Y BERNETIKA VOLUM E 47 2011, NUMBER 1, P AGES 1 14 INTERTWINING OF BIRTH-AND-DEATH PROCESSES Jan M. Swart It has been known for a long time that for birth-and-death processes started in zero the first

More information

IEOR 6711: Stochastic Models I, Fall 2003, Professor Whitt. Solutions to Final Exam: Thursday, December 18.

IEOR 6711: Stochastic Models I, Fall 2003, Professor Whitt. Solutions to Final Exam: Thursday, December 18. IEOR 6711: Stochastic Models I, Fall 23, Professor Whitt Solutions to Final Exam: Thursday, December 18. Below are six questions with several parts. Do as much as you can. Show your work. 1. Two-Pump Gas

More information

Birth and Death Processes. Birth and Death Processes. Linear Growth with Immigration. Limiting Behaviour for Birth and Death Processes

Birth and Death Processes. Birth and Death Processes. Linear Growth with Immigration. Limiting Behaviour for Birth and Death Processes DTU Informatics 247 Stochastic Processes 6, October 27 Today: Limiting behaviour of birth and death processes Birth and death processes with absorbing states Finite state continuous time Markov chains

More information

Remark By definition, an eigenvector must be a nonzero vector, but eigenvalue could be zero.

Remark By definition, an eigenvector must be a nonzero vector, but eigenvalue could be zero. Sec 6 Eigenvalues and Eigenvectors Definition An eigenvector of an n n matrix A is a nonzero vector x such that A x λ x for some scalar λ A scalar λ is called an eigenvalue of A if there is a nontrivial

More information

2 Discrete-Time Markov Chains

2 Discrete-Time Markov Chains 2 Discrete-Time Markov Chains Angela Peace Biomathematics II MATH 5355 Spring 2017 Lecture notes follow: Allen, Linda JS. An introduction to stochastic processes with applications to biology. CRC Press,

More information

CS145: Probability & Computing Lecture 18: Discrete Markov Chains, Equilibrium Distributions

CS145: Probability & Computing Lecture 18: Discrete Markov Chains, Equilibrium Distributions CS145: Probability & Computing Lecture 18: Discrete Markov Chains, Equilibrium Distributions Instructor: Erik Sudderth Brown University Computer Science April 14, 215 Review: Discrete Markov Chains Some

More information

Calculus for the Life Sciences II Assignment 6 solutions. f(x, y) = 3π 3 cos 2x + 2 sin 3y

Calculus for the Life Sciences II Assignment 6 solutions. f(x, y) = 3π 3 cos 2x + 2 sin 3y Calculus for the Life Sciences II Assignment 6 solutions Find the tangent plane to the graph of the function at the point (0, π f(x, y = 3π 3 cos 2x + 2 sin 3y Solution: The tangent plane of f at a point

More information

Lecture 20 : Markov Chains

Lecture 20 : Markov Chains CSCI 3560 Probability and Computing Instructor: Bogdan Chlebus Lecture 0 : Markov Chains We consider stochastic processes. A process represents a system that evolves through incremental changes called

More information

Math 1553, Introduction to Linear Algebra

Math 1553, Introduction to Linear Algebra Learning goals articulate what students are expected to be able to do in a course that can be measured. This course has course-level learning goals that pertain to the entire course, and section-level

More information

Recap. Probability, stochastic processes, Markov chains. ELEC-C7210 Modeling and analysis of communication networks

Recap. Probability, stochastic processes, Markov chains. ELEC-C7210 Modeling and analysis of communication networks Recap Probability, stochastic processes, Markov chains ELEC-C7210 Modeling and analysis of communication networks 1 Recap: Probability theory important distributions Discrete distributions Geometric distribution

More information

An M/M/1 Queue in Random Environment with Disasters

An M/M/1 Queue in Random Environment with Disasters An M/M/1 Queue in Random Environment with Disasters Noam Paz 1 and Uri Yechiali 1,2 1 Department of Statistics and Operations Research School of Mathematical Sciences Tel Aviv University, Tel Aviv 69978,

More information

Central limit theorems for ergodic continuous-time Markov chains with applications to single birth processes

Central limit theorems for ergodic continuous-time Markov chains with applications to single birth processes Front. Math. China 215, 1(4): 933 947 DOI 1.17/s11464-15-488-5 Central limit theorems for ergodic continuous-time Markov chains with applications to single birth processes Yuanyuan LIU 1, Yuhui ZHANG 2

More information

Lecture 11: Introduction to Markov Chains. Copyright G. Caire (Sample Lectures) 321

Lecture 11: Introduction to Markov Chains. Copyright G. Caire (Sample Lectures) 321 Lecture 11: Introduction to Markov Chains Copyright G. Caire (Sample Lectures) 321 Discrete-time random processes A sequence of RVs indexed by a variable n 2 {0, 1, 2,...} forms a discretetime random process

More information

The cost/reward formula has two specific widely used applications:

The cost/reward formula has two specific widely used applications: Applications of Absorption Probability and Accumulated Cost/Reward Formulas for FDMC Friday, October 21, 2011 2:28 PM No class next week. No office hours either. Next class will be 11/01. The cost/reward

More information

Markov Chains and Pandemics

Markov Chains and Pandemics Markov Chains and Pandemics Caleb Dedmore and Brad Smith December 8, 2016 Page 1 of 16 Abstract Markov Chain Theory is a powerful tool used in statistical analysis to make predictions about future events

More information

EXTINCTION PROBABILITY IN A BIRTH-DEATH PROCESS WITH KILLING

EXTINCTION PROBABILITY IN A BIRTH-DEATH PROCESS WITH KILLING EXTINCTION PROBABILITY IN A BIRTH-DEATH PROCESS WITH KILLING Erik A. van Doorn and Alexander I. Zeifman Department of Applied Mathematics University of Twente P.O. Box 217, 7500 AE Enschede, The Netherlands

More information

Section 1.7: Properties of the Leslie Matrix

Section 1.7: Properties of the Leslie Matrix Section 1.7: Properties of the Leslie Matrix Definition: A matrix A whose entries are nonnegative (positive) is called a nonnegative (positive) matrix, denoted as A 0 (A > 0). Definition: A square m m

More information

Statistics 253/317 Introduction to Probability Models. Winter Midterm Exam Monday, Feb 10, 2014

Statistics 253/317 Introduction to Probability Models. Winter Midterm Exam Monday, Feb 10, 2014 Statistics 253/317 Introduction to Probability Models Winter 2014 - Midterm Exam Monday, Feb 10, 2014 Student Name (print): (a) Do not sit directly next to another student. (b) This is a closed-book, closed-note

More information

SMSTC (2007/08) Probability.

SMSTC (2007/08) Probability. SMSTC (27/8) Probability www.smstc.ac.uk Contents 12 Markov chains in continuous time 12 1 12.1 Markov property and the Kolmogorov equations.................... 12 2 12.1.1 Finite state space.................................

More information

A NOTE ON QUASI-STATIONARY DISTRIBUTIONS OF BIRTH-DEATH PROCESSES AND THE SIS LOGISTIC EPIDEMIC

A NOTE ON QUASI-STATIONARY DISTRIBUTIONS OF BIRTH-DEATH PROCESSES AND THE SIS LOGISTIC EPIDEMIC (January 8, 2003) A NOTE ON QUASI-STATIONARY DISTRIBUTIONS OF BIRTH-DEATH PROCESSES AND THE SIS LOGISTIC EPIDEMIC DAMIAN CLANCY, University of Liverpoo PHILIP K. POLLETT, University of Queensand Abstract

More information

EXTINCTION TIMES FOR A GENERAL BIRTH, DEATH AND CATASTROPHE PROCESS

EXTINCTION TIMES FOR A GENERAL BIRTH, DEATH AND CATASTROPHE PROCESS (February 25, 2004) EXTINCTION TIMES FOR A GENERAL BIRTH, DEATH AND CATASTROPHE PROCESS BEN CAIRNS, University of Queensland PHIL POLLETT, University of Queensland Abstract The birth, death and catastrophe

More information

Stochastic Processes (Week 6)

Stochastic Processes (Week 6) Stochastic Processes (Week 6) October 30th, 2014 1 Discrete-time Finite Markov Chains 2 Countable Markov Chains 3 Continuous-Time Markov Chains 3.1 Poisson Process 3.2 Finite State Space 3.2.1 Kolmogrov

More information

Integrals for Continuous-time Markov chains

Integrals for Continuous-time Markov chains Integrals for Continuous-time Markov chains P.K. Pollett Abstract This paper presents a method of evaluating the expected value of a path integral for a general Markov chain on a countable state space.

More information

(b) What is the variance of the time until the second customer arrives, starting empty, assuming that we measure time in minutes?

(b) What is the variance of the time until the second customer arrives, starting empty, assuming that we measure time in minutes? IEOR 3106: Introduction to Operations Research: Stochastic Models Fall 2006, Professor Whitt SOLUTIONS to Final Exam Chapters 4-7 and 10 in Ross, Tuesday, December 19, 4:10pm-7:00pm Open Book: but only

More information

Reproduction numbers for epidemic models with households and other social structures I: Definition and calculation of R 0

Reproduction numbers for epidemic models with households and other social structures I: Definition and calculation of R 0 Mathematical Statistics Stockholm University Reproduction numbers for epidemic models with households and other social structures I: Definition and calculation of R 0 Lorenzo Pellis Frank Ball Pieter Trapman

More information

Chapter 16 focused on decision making in the face of uncertainty about one future

Chapter 16 focused on decision making in the face of uncertainty about one future 9 C H A P T E R Markov Chains Chapter 6 focused on decision making in the face of uncertainty about one future event (learning the true state of nature). However, some decisions need to take into account

More information

Data analysis and stochastic modeling

Data analysis and stochastic modeling Data analysis and stochastic modeling Lecture 7 An introduction to queueing theory Guillaume Gravier guillaume.gravier@irisa.fr with a lot of help from Paul Jensen s course http://www.me.utexas.edu/ jensen/ormm/instruction/powerpoint/or_models_09/14_queuing.ppt

More information

IEOR 6711: Professor Whitt. Introduction to Markov Chains

IEOR 6711: Professor Whitt. Introduction to Markov Chains IEOR 6711: Professor Whitt Introduction to Markov Chains 1. Markov Mouse: The Closed Maze We start by considering how to model a mouse moving around in a maze. The maze is a closed space containing nine

More information

P i [B k ] = lim. n=1 p(n) ii <. n=1. V i :=

P i [B k ] = lim. n=1 p(n) ii <. n=1. V i := 2.7. Recurrence and transience Consider a Markov chain {X n : n N 0 } on state space E with transition matrix P. Definition 2.7.1. A state i E is called recurrent if P i [X n = i for infinitely many n]

More information