DTU Informatics
02407 Stochastic Processes, Lecture 6, October 27

Today:
- Limiting behaviour of birth and death processes
- Birth and death processes with absorbing states
- Finite state continuous time Markov chains

Next week: Renewal phenomena
Two to three weeks from now: Phase type distributions

Limiting Behaviour

For an irreducible birth and death process we have: if $\pi_j > 0$, then
$$\lim_{t\to\infty} P_{ij}(t) = \pi_j,$$
where $\pi$ solves $\pi P(t) = \pi$, or equivalently $\pi A = 0$. We can always solve recursively:
$$\pi_1 = \frac{\lambda_0}{\mu_1}\pi_0, \qquad \pi_{n+1} = \frac{\lambda_n}{\mu_{n+1}}\pi_n = \pi_0 \prod_{i=0}^{n} \frac{\lambda_i}{\mu_{i+1}}, \qquad \pi_0 = \left[1 + \sum_{n=0}^{\infty} \prod_{i=0}^{n} \frac{\lambda_i}{\mu_{i+1}}\right]^{-1}.$$

Linear Growth with Immigration

$$\lambda_n = n\lambda + a, \qquad \mu_n = n\mu.$$
With $\theta_0 = 1$ and
$$\theta_k = \prod_{i=0}^{k-1} \frac{\lambda_i}{\mu_{i+1}} = \frac{a(a+\lambda)(a+2\lambda)\cdots(a+(k-1)\lambda)}{k!\,\mu^k},$$
the stationary distribution, which exists when $\lambda < \mu$, is
$$\pi_k = \frac{\theta_k}{\sum_{j=0}^{\infty}\theta_j} = \binom{a/\lambda + k - 1}{k}\left(\frac{\lambda}{\mu}\right)^k \left(1 - \frac{\lambda}{\mu}\right)^{a/\lambda},$$
a negative binomial distribution.
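The recursive solution for $\pi$ is easy to check numerically. The sketch below (not from the lecture; the rate values are arbitrary illustrative choices) truncates the state space, builds $\theta_k$ by the recursion above, and normalizes. For linear growth with immigration with $a/\lambda = 1/2$ and $\lambda/\mu = 1/2$, the negative binomial mean is $(a/\lambda)\,(\lambda/\mu)/(1-\lambda/\mu) = 0.5$.

```python
# Sketch: stationary distribution of a birth-death process via the theta
# recursion theta_{k+1} = theta_k * lambda_k / mu_{k+1}, truncated at n_max.
# Rates (lam, mu, a) are illustrative assumptions, not lecture values.

def bd_stationary(lam_n, mu_n, n_max):
    """Return pi_0..pi_{n_max}, normalized over the truncated state space."""
    theta = [1.0]
    for n in range(n_max):
        theta.append(theta[-1] * lam_n(n) / mu_n(n + 1))
    total = sum(theta)
    return [t / total for t in theta]

lam, mu, a = 1.0, 2.0, 0.5          # need lam < mu for a stationary distribution
pi = bd_stationary(lambda n: n * lam + a, lambda n: n * mu, n_max=200)

mean = sum(k * p for k, p in enumerate(pi))
# negative binomial mean: (a/lam) * (lam/mu) / (1 - lam/mu) = 0.5
print(round(sum(pi), 6), round(mean, 6))
```

The truncation at 200 states is harmless here because $\theta_k$ decays geometrically at rate $\lambda/\mu = 1/2$.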
Logistic Model

Birth and death rates per individual are $\alpha(M - X(t))$ and $\beta(X(t) - N)$, with $N \le X(t) \le M$:
$$\lambda_n = \alpha n(M - n), \qquad \mu_n = \beta n(n - N), \qquad N \le n \le M.$$
With $\theta_{N+m} = \prod_{i=0}^{m-1} \lambda_{N+i}/\mu_{N+i+1}$, the stationary distribution becomes
$$\pi_{N+m} = c \left(\frac{\alpha}{\beta}\right)^m \frac{N}{N+m} \binom{M-N}{m}, \qquad 0 \le m \le M - N,$$
where $c$ is a normalizing constant.

Probability of Absorption

Theorem (Theorem 6., page 39). Consider a birth and death process with birth and death parameters $\lambda_n, \mu_n$, $n \ge 1$, with $\lambda_0 = 0$, so that $0$ is an absorbing state. The probability of absorption into state $0$ from the initial state $m$ is
$$u_m = \begin{cases} \dfrac{\sum_{i=m}^{\infty} \rho_i}{1 + \sum_{i=1}^{\infty} \rho_i} & \text{if } \sum_{i=1}^{\infty} \rho_i < \infty, \\[1ex] 1 & \text{if } \sum_{i=1}^{\infty} \rho_i = \infty, \end{cases}$$
and the mean time to absorption is
$$w_m = \begin{cases} \sum_{k=0}^{m-1} \rho_k \sum_{j=k+1}^{\infty} \dfrac{1}{\lambda_j \rho_j} & \text{if } \sum_{i=1}^{\infty} \dfrac{1}{\lambda_i \rho_i} < \infty, \\[1ex] \infty & \text{if } \sum_{i=1}^{\infty} \dfrac{1}{\lambda_i \rho_i} = \infty, \end{cases}$$
where $\rho_0 = 1$ and $\rho_i = \prod_{j=1}^{i} \frac{\mu_j}{\lambda_j}$.

Derivation of Absorption Probabilities

First step analysis, with the embedded Markov chain
$$P = \begin{pmatrix} 1 & 0 & 0 & 0 & \cdots \\ \frac{\mu_1}{\lambda_1+\mu_1} & 0 & \frac{\lambda_1}{\lambda_1+\mu_1} & 0 & \cdots \\ 0 & \frac{\mu_2}{\lambda_2+\mu_2} & 0 & \frac{\lambda_2}{\lambda_2+\mu_2} & \cdots \\ \vdots & & \ddots & \ddots & \ddots \end{pmatrix}.$$
Let $u_i$ be the probability of absorption when starting in state $i$:
$$u_i = \frac{\mu_i}{\lambda_i + \mu_i} u_{i-1} + \frac{\lambda_i}{\lambda_i + \mu_i} u_{i+1}, \quad i \ge 1, \qquad u_0 = 1,$$
so that
$$u_{i+1} - u_i = \frac{\mu_i}{\lambda_i}(u_i - u_{i-1}), \quad i \ge 1.$$

Derivation of Absorption Probabilities (cont.)

With $v_i = u_{i+1} - u_i$ we get $v_i = \frac{\mu_i}{\lambda_i} v_{i-1}$, leading to
$$v_i = \rho_i v_0 \qquad \text{with } \rho_i = \prod_{j=1}^{i} \frac{\mu_j}{\lambda_j}.$$
If $\sum_{i=1}^{\infty} \rho_i = \infty$, boundedness of the $u_i$ forces $v_0 = 0$, so $u_m = 1$ for all $m$. Otherwise we impose $u_m \to 0$ as $m \to \infty$ and get
$$u_1 = \frac{\sum_{j=1}^{\infty} \rho_j}{1 + \sum_{j=1}^{\infty} \rho_j}, \qquad u_m = \frac{\sum_{i=m}^{\infty} \rho_i}{1 + \sum_{i=1}^{\infty} \rho_i}, \quad m > 1.$$
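The absorption probability in the theorem can be verified numerically. The sketch below (illustrative, not from the lecture) truncates the sums of $\rho_i$ and checks the linear case $\lambda_n = n\lambda$, $\mu_n = n\mu$ with $\lambda > \mu$, where $\rho_i = (\mu/\lambda)^i$ is geometric and the formula collapses to $u_m = (\mu/\lambda)^m$.

```python
# Sketch: u_m = (sum_{i>=m} rho_i) / (1 + sum_{i>=1} rho_i),
# with rho_i = prod_{j<=i} mu_j / lambda_j, sums truncated at i_max.
# The linear rates below are illustrative assumptions.

def absorption_prob(lam_n, mu_n, m, i_max=2000):
    rho, rhos = 1.0, []
    for i in range(1, i_max + 1):
        rho *= mu_n(i) / lam_n(i)
        rhos.append(rho)              # rhos[i-1] = rho_i
    tail = sum(rhos[m - 1:])          # sum_{i=m}^{i_max} rho_i
    return tail / (1.0 + sum(rhos))

lam, mu = 2.0, 1.0                    # lam > mu, so sum(rho_i) converges
u3 = absorption_prob(lambda n: n * lam, lambda n: n * mu, m=3)
print(round(u3, 6))                   # closed form: (mu/lam)^3 = 0.125
```

For $\lambda \le \mu$ the truncated sums diverge as `i_max` grows, reflecting the $u_m = 1$ branch of the theorem.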
Derivation of Mean Time to Absorption

Let $w_i$ be the mean time to absorption starting in $i$; $w_0 = 0$ and
$$w_i = \frac{1}{\lambda_i + \mu_i} + \frac{\mu_i}{\lambda_i + \mu_i} w_{i-1} + \frac{\lambda_i}{\lambda_i + \mu_i} w_{i+1}, \quad i \ge 1.$$
With $z_i = w_i - w_{i+1}$ (negative!) this becomes
$$z_i = \frac{1}{\lambda_i} + \frac{\mu_i}{\lambda_i} z_{i-1}.$$

Derivation of Mean Time to Absorption (cont.)

Dividing by $\rho_i$ and iterating, with $z_0 = w_0 - w_1 = -w_1$, we get
$$\frac{z_m}{\rho_m} = z_0 + \sum_{j=1}^{m} \frac{1}{\lambda_j \rho_j}, \qquad \text{i.e.} \qquad z_m = \rho_m\left(-w_1 + \sum_{j=1}^{m} \frac{1}{\lambda_j \rho_j}\right).$$
Requiring $\lim_{m\to\infty} z_m/\rho_m = 0$ gives
$$w_1 = \sum_{j=1}^{\infty} \frac{1}{\lambda_j \rho_j},$$
and then, since $w_m = w_1 - \sum_{k=1}^{m-1} z_k$,
$$w_m = \sum_{k=0}^{m-1} \rho_k \sum_{j=k+1}^{\infty} \frac{1}{\lambda_j \rho_j}.$$

Population Processes: probability of absorption

$$\lambda_n = n\lambda, \qquad \mu_n = n\mu, \qquad X(0) = m, \qquad E[X(t)] = m\,e^{(\lambda - \mu)t}.$$
Here $\rho_i = (\mu/\lambda)^i$, so
$$u_m = \begin{cases} \left(\frac{\mu}{\lambda}\right)^m & \text{when } \lambda > \mu, \\ 1 & \text{when } \lambda \le \mu. \end{cases}$$

Population Processes: mean time to absorption

With $m = 1$ and $\lambda < \mu$,
$$w_1 = \sum_{i=1}^{\infty} \frac{1}{\lambda_i \rho_i} = \frac{1}{\lambda} \sum_{i=1}^{\infty} \frac{1}{i}\left(\frac{\lambda}{\mu}\right)^i.$$
Using $\sum_{i\ge 1} \frac{x^i}{i} = \sum_{i\ge 1} \int_0^x t^{i-1}\,dt = \int_0^x \frac{dt}{1-t} = -\log(1-x)$,
$$w_1 = -\frac{1}{\lambda}\log\left(1 - \frac{\lambda}{\mu}\right) = \frac{1}{\lambda}\log\frac{\mu}{\mu - \lambda}.$$
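The mean-time formula can likewise be checked against the closed form just derived. The sketch below (illustrative rates, truncated sums; not from the lecture) evaluates $w_m = \sum_{k=0}^{m-1} \rho_k \sum_{j>k} 1/(\lambda_j \rho_j)$ for the linear process with $\lambda < \mu$ and compares $w_1$ to $-(1/\lambda)\log(1-\lambda/\mu)$.

```python
import math

# Sketch: mean time to absorption for a birth-death process,
# w_m = sum_{k=0}^{m-1} rho_k * sum_{j=k+1}^{inf} 1/(lambda_j rho_j),
# truncated at i_max. Linear rates below are illustrative assumptions.

def mean_absorption_time(lam_n, mu_n, m, i_max=500):
    rho = [1.0]                                   # rho_0 = 1
    for i in range(1, i_max + 1):
        rho.append(rho[-1] * mu_n(i) / lam_n(i))
    inv = [1.0 / (lam_n(j) * rho[j]) for j in range(1, i_max + 1)]
    # inv[k:] covers j = k+1 .. i_max, since inv[0] corresponds to j = 1
    return sum(rho[k] * sum(inv[k:]) for k in range(m))

lam, mu = 1.0, 2.0                                # lam < mu: certain absorption
w1 = mean_absorption_time(lambda n: n * lam, lambda n: n * mu, m=1)
print(round(w1, 6))   # -(1/lam) * log(1 - lam/mu) = log(2) ≈ 0.693147
```

With $\lambda/\mu = 1/2$ the inner sum is $\sum_i (1/2)^i/i = \log 2$, so the truncation error at `i_max=500` is far below floating-point precision.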
Finite Continuous Time Markov Chains

$$P_{ij}(t) = P\{X(t+s) = j \mid X(s) = i\}$$
(a) $P_{ij}(t) \ge 0$
(b) $\sum_{j=0}^{N} P_{ij}(t) = 1$
(c) $P_{ik}(s+t) = \sum_{j=0}^{N} P_{ij}(s) P_{jk}(t)$
(d) $\lim_{t \to 0^+} P_{ij}(t) = \begin{cases} 1, & i = j \\ 0, & i \ne j \end{cases}$

(c) is the Chapman-Kolmogorov equation; in matrix form $P(t+s) = P(t)P(s)$.
$$P'(t) = P(t)A \quad \text{(forward equations)}, \qquad P'(t) = A P(t) \quad \text{(backward equations)}$$
$$P(t) = e^{At} = \sum_{n=0}^{\infty} \frac{A^n t^n}{n!} = I + \sum_{n=1}^{\infty} \frac{A^n t^n}{n!}.$$

Stationary Distribution

$\pi A = 0$, $\pi = (\pi_0, \pi_1, \dots, \pi_N)$; elementwise
$$\pi_j q_j = \sum_{i \ne j} \pi_i q_{ij}, \qquad q_j = \sum_{k \ne j} q_{jk},$$
with
$$A = \begin{pmatrix} -q_0 & q_{01} & \cdots & q_{0N} \\ q_{10} & -q_1 & \cdots & q_{1N} \\ \vdots & & \ddots & \vdots \\ q_{N0} & q_{N1} & \cdots & -q_N \end{pmatrix}.$$

Infinitesimal Description

$$P\{X(t+h) = j \mid X(t) = i\} = q_{ij} h + o(h) \quad \text{for } i \ne j,$$
$$P\{X(t+h) = i \mid X(t) = i\} = 1 - q_i h + o(h).$$

Sojourn Description

1. The embedded Markov chain of state sequences $\xi_i$ has one-step transition probabilities $p_{ij} = q_{ij}/q_i$.
2. Successive sojourn times $S_{\xi_i}$ are exponentially distributed with mean $1/q_{\xi_i}$.
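For a finite chain the series $P(t) = e^{At} = I + \sum_{n\ge 1} A^n t^n/n!$ can be evaluated directly. The sketch below (the $3\times 3$ generator is an arbitrary illustrative choice, not from the lecture) truncates the series and checks two of the listed properties: row sums equal 1, and the semigroup/Chapman-Kolmogorov identity $P(s+t) = P(s)P(t)$.

```python
# Sketch: truncated power series for P(t) = e^{At}, pure-Python matrices.
# The generator A below is an illustrative assumption (rows sum to 0,
# off-diagonal entries q_ij >= 0, diagonal -q_i).

def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def expm(A, t, terms=60):
    n = len(A)
    P = [[float(i == j) for j in range(n)] for i in range(n)]   # I
    term = [row[:] for row in P]                                # (At)^k / k!
    for k in range(1, terms):
        term = mat_mul(term, [[a * t / k for a in row] for row in A])
        P = [[P[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    return P

A = [[-1.0, 1.0, 0.0],
     [0.5, -1.5, 1.0],
     [0.0, 2.0, -2.0]]

P1 = expm(A, 1.0)
print([round(sum(row), 6) for row in P1])   # each row of P(t) sums to 1
```

In practice one would use a library routine (e.g. `scipy.linalg.expm`); the truncated series is shown only because it mirrors the formula on the slide.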
Two State Markov Chain

$$A = \begin{pmatrix} -\alpha_1 & \alpha_1 \\ \alpha_2 & -\alpha_2 \end{pmatrix}, \qquad A^2 = -(\alpha_1 + \alpha_2) A, \qquad \pi = \left(\frac{\alpha_2}{\alpha_1 + \alpha_2}, \frac{\alpha_1}{\alpha_1 + \alpha_2}\right).$$
Thus $A^n = [-(\alpha_1 + \alpha_2)]^{n-1} A$, and
$$P(t) = I + \sum_{n=1}^{\infty} \frac{t^n}{n!} A^n = I - \frac{A}{\alpha_1 + \alpha_2} \sum_{n=1}^{\infty} \frac{[-(\alpha_1 + \alpha_2)t]^n}{n!} = I + \frac{A}{\alpha_1 + \alpha_2} - \frac{A}{\alpha_1 + \alpha_2} e^{-(\alpha_1 + \alpha_2)t}.$$

Summary of Most Important Results

$$P(t) = e^{At}.$$
Under an assumption of irreducibility, $P_{ij}(t) \to \pi_j$ for $t \to \infty$.

Additional Reading

Kai Lai Chung: Markov Chains with Stationary Transition Probabilities.
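The two-state closed form is simple enough to exercise end to end. The sketch below (illustrative rates $\alpha_1, \alpha_2$; not lecture values) evaluates $P(t) = I + \big(1 - e^{-(\alpha_1+\alpha_2)t}\big) A/(\alpha_1+\alpha_2)$ and shows that for large $t$ both rows approach the stationary distribution $\pi = (\alpha_2, \alpha_1)/(\alpha_1+\alpha_2)$, as the summary slide states.

```python
import math

# Sketch: closed-form transition matrix of the two-state chain.
# Rates a1 (0 -> 1) and a2 (1 -> 0) are illustrative assumptions.

a1, a2 = 0.3, 0.7
A = [[-a1, a1], [a2, -a2]]

def P(t):
    """P(t) = I + (1 - exp(-(a1+a2) t)) * A / (a1 + a2)."""
    c = (1.0 - math.exp(-(a1 + a2) * t)) / (a1 + a2)
    return [[(1.0 if i == j else 0.0) + c * A[i][j] for j in range(2)]
            for i in range(2)]

pi = (a2 / (a1 + a2), a1 / (a1 + a2))
print([[round(x, 4) for x in row] for row in P(50.0)])
# both rows approach pi = (0.7, 0.3), printing [[0.7, 0.3], [0.7, 0.3]]
```

Because $A^2 = -(\alpha_1+\alpha_2)A$, the closed form satisfies the semigroup property $P(s+t) = P(s)P(t)$ exactly, which makes a handy sanity check.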