MATH37012   Week 10   Dr Jonathan Bagley   Semester 2-2018

2.18 a) Finding $\lambda_j$ and $\mu_j$ for a particular category of B.D. processes.

Consider a process where the destination of the next transition is determined by two independent, competing processes; for example: birth versus death; or, in a queue, arrivals versus departures. Further suppose, when in state $j$, the time to the next birth/arrival is $V \sim \exp(\lambda_j)$ and the time to the next death/departure is $W \sim \exp(\mu_j)$. Then, using l.o.m. (lack of memory), the holding time in state $j$ is $\min(V, W) \sim \exp(\lambda_j + \mu_j)$, which gives $q_j = \lambda_j + \mu_j$.

Next, observe that $r_{j,j+1} = P(V < W)$. We have
$$P(V < W \mid \min(V, W) \in (t, t + \delta t]) = \frac{P(V < W,\ \min(V, W) \in (t, t + \delta t])}{P(\min(V, W) \in (t, t + \delta t])} \approx \frac{P(V \in (t, t + \delta t],\ W > t + \delta t)}{(\lambda_j + \mu_j)\exp\{-(\lambda_j + \mu_j)t\}\,\delta t} \approx \frac{\lambda_j \exp(-\lambda_j t)\,\delta t \; \exp\{-\mu_j (t + \delta t)\}}{(\lambda_j + \mu_j)\exp\{-(\lambda_j + \mu_j)t\}\,\delta t} \approx \frac{\lambda_j}{\lambda_j + \mu_j},$$
which is independent of $t$. We have shown that the jump destination is independent of the holding time, as required for the Markov property. We have also shown that
$$r_{j,j+1} = P(V < W) = \frac{\lambda_j}{\lambda_j + \mu_j}.$$
Consequently
$$q_{j,j+1} = r_{j,j+1}\, q_j = \frac{\lambda_j}{\lambda_j + \mu_j}(\lambda_j + \mu_j) = \lambda_j.$$
It then follows that $r_{j,j-1} = \frac{\mu_j}{\lambda_j + \mu_j}$ and hence that $q_{j,j-1} = \frac{\mu_j}{\lambda_j + \mu_j}(\lambda_j + \mu_j) = \mu_j$. Now observe that the $\lambda_j$ and $\mu_j$ above are equal to those defined in 2.15. We will use this when modelling queues as B.D. processes.
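As a quick numerical check of 2.18 a), the following is a minimal simulation sketch (not part of the notes); the rates lam and mu and the sample size are arbitrary illustrative values standing in for $\lambda_j$ and $\mu_j$.

```python
import numpy as np

rng = np.random.default_rng(0)
lam, mu = 2.0, 3.0                      # illustrative rates lambda_j and mu_j
n = 200_000

V = rng.exponential(1 / lam, n)         # time to next birth/arrival,  V ~ exp(lam)
W = rng.exponential(1 / mu, n)          # time to next death/departure, W ~ exp(mu)
hold = np.minimum(V, W)                 # holding time in state j

print(hold.mean(), 1 / (lam + mu))          # holding time behaves like exp(lam + mu)
print((V < W).mean(), lam / (lam + mu))     # r_{j,j+1} = lam / (lam + mu)

# Jump destination independent of holding time: the proportion of upward
# jumps is (numerically) the same on short and on long holding times.
short = hold < np.median(hold)
print((V < W)[short].mean(), (V < W)[~short].mean())
```

The last line prints two nearly equal proportions, illustrating the independence of the jump destination and the holding time derived above.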
For the Yule process, we have $r_{j,j-1} \equiv 0$, which is equivalent to $\mu_j \equiv 0$, or $\min(V, W) = V$.

We shall also need the following.

2.18 b) The thinned Poisson process. Suppose we have a Poisson process, rate $\lambda$, where each point is independently retained with probability $p$. It can be shown (perhaps you did so in Probability 2) that this results in a new Poisson process with rate $p\lambda$. The intervals between points in this new Poisson process are independent and each $\exp(p\lambda)$ distributed. Using l.o.m., the time from when we begin observing the process up to the next point is $\exp(p\lambda)$ distributed.

Apply 2.18 a) and b) to the telephone conversation question on example sheet 7. This process can be viewed as a B.D. process with $S = \{0, 1, 2\}$.
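The thinning property quoted in 2.18 b) is easy to see empirically. The following is a small simulation sketch (not part of the notes); the rate lam, the retention probability p, and the sample size are arbitrary illustrative values.

```python
import numpy as np

rng = np.random.default_rng(1)
lam, p = 5.0, 0.3                 # illustrative rate and retention probability
n = 500_000

# Poisson process of rate lam: cumulative sums of exp(lam) inter-arrival times.
arrivals = np.cumsum(rng.exponential(1 / lam, n))

# Thin it: keep each point independently with probability p.
kept = arrivals[rng.random(n) < p]

# Gaps between retained points should look exp(p * lam):
# mean and standard deviation both close to 1 / (p * lam).
gaps = np.diff(kept)
print(gaps.mean(), gaps.std(), 1 / (p * lam))
```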
Continuous time Markov chains: stationary and asymptotic behaviour.

Recall, in the discrete case, that under certain conditions, $\forall\, i, j \in S$, $p_{ij}(n) \to \pi_j$ as $n \to \infty$, where $\pi$ is the stationary distribution. We now derive a similar result in continuous time.

2.19 Definition $X(t)$ is irreducible if, for any pair $i, j$ of states, we have that $p_{ij}(t) > 0$ for some $t > 0$. Note that it is known that either $p_{ij}(t) > 0$ $\forall\, t > 0$, or $p_{ij}(t) = 0$ $\forall\, t > 0$.

2.20 Lemma $X(t)$ irreducible implies, $\forall\, h > 0$, $\{X(nh)\}_{n \geq 1}$ is an irreducible, aperiodic, discrete time M.C.

Proof Clearly $X(nh)$ has the Markov property. Second, $X(t)$ irreducible implies $p_{ij}(nh) > 0$ for all $n \geq 1$ and all $i, j$; and therefore $X(nh)$ is also irreducible. Finally, $p_{ii}(h) > 0$ for all $h > 0$ gives $X(nh)$ aperiodic.

$X(nh)$ is called the skeleton chain (not to be confused with the jump chain defined earlier).

2.21 Definition A distribution $\pi$ is a stationary distribution for $X(t)$ if $\pi = \pi P(t)$ $\forall\, t \geq 0$.

We can now state

2.22 Theorem Let $X(t)$ be an irreducible continuous time finite state space M.C. or honest B.D. process. Then a distribution $\pi$ is a stationary distribution for $X(t)$ if and only if $\pi Q = 0$. Furthermore, if a stationary distribution exists, it is unique.

The proof requires the following:

2.23 Theorem Under the hypotheses of 2.22, $\lim_{t \to \infty} p_{ij}(t)$ exists $\forall\, i, j \in S$ and is independent of $i$.

Proof The proof is omitted. It makes use of the skeleton chain (Kingman 1963). If you are interested in the proof, there are links to Kingman's paper and a biography of the man, on the course materials page.
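Before the proof, here is a small numerical illustration of 2.22 and 2.23 (not from the notes). The three-state birth-death generator below is a hypothetical example with arbitrary rates, not the example-sheet model; the sketch solves $\pi Q = 0$ numerically, checks $\pi = \pi P(t)$, and shows the rows of $P(t) = e^{Qt}$ approaching a common limit.

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical 3-state birth-death generator Q (illustrative rates only);
# each row sums to zero, with off-diagonal entries q_{j,j+1} = lambda_j
# and q_{j,j-1} = mu_j as in 2.18 a).
lam0, lam1 = 1.0, 1.0              # birth rates lambda_0, lambda_1
mu1,  mu2  = 2.0, 4.0              # death rates mu_1, mu_2
Q = np.array([[-lam0,          lam0,   0.0],
              [  mu1, -(mu1 + lam1),  lam1],
              [  0.0,           mu2,  -mu2]])

# Solve pi Q = 0 together with sum(pi) = 1 as one stacked linear system.
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print("pi =", pi)

# 2.22: pi is stationary, i.e. pi P(t) = pi for every t >= 0.
for t in (0.5, 2.0, 10.0):
    print(np.allclose(pi @ expm(Q * t), pi))

# 2.23: p_ij(t) has a limit independent of i, so the rows of P(t) agree for large t.
print(expm(Q * 50.0))
```

With these rates the printed rows all approach the same vector, which is also the computed solution of $\pi Q = 0$.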
Proof (of 2.22) We consider separately the cases a) $|S| < \infty$ and b) honest B.D.

a) When $|S| < \infty$ we have
$$\pi Q = 0 \;\Longrightarrow\; \pi Q^n = 0 \quad \forall\, n \geq 1$$
$$\Longrightarrow\; \pi \sum_{n=1}^{\infty} \frac{Q^n t^n}{n!} = 0 \quad \forall\, t$$
$$\Longrightarrow\; \pi \Big\{ \sum_{n=0}^{\infty} \frac{Q^n t^n}{n!} - I \Big\} = 0 \quad \forall\, t$$
$$\Longrightarrow\; \pi \sum_{n=0}^{\infty} \frac{Q^n t^n}{n!} = \pi \quad \forall\, t,$$
which, by the definition in 2.14, implies $\pi P(t) = \pi$ $\forall\, t \geq 0$; and so $\pi$ is a stationary distribution.

b) When $X(t)$ is a B.D. process with $|S| = \infty$, we cannot use the argument of 2.14. First, by construction (see later), any distribution satisfying $\pi Q = 0$ is unique. Now $\pi Q = 0$ implies $\pi Q P(t) = 0$ $\forall\, t \geq 0$, which (2.13) implies $\pi P(t) Q = 0$ $\forall\, t \geq 0$. Since $X(t)$ is honest, we have $\sum_{j \in S} p_{ij}(t) = 1$ $\forall\, i \in S$, $t \geq 0$. This, together with $\pi$ being a distribution, implies that $\pi P(t)$ is also a distribution. Hence, by the above uniqueness, $\pi = \pi P(t)$ $\forall\, t \geq 0$; and so $\pi$ is a stationary distribution for $X(t)$.

Conversely, suppose $\pi$ is a stationary distribution for $X(t)$. Then $\pi = \pi P(t)$ $\forall\, t \geq 0$ implies $\pi = \pi P(nh)$ $\forall\, h > 0$ and $n \geq 1$; and therefore $\pi$ is a stationary distribution for $X(nh)$. Now, by the discrete theory, we have, for each $i, j \in S$, $p_{ij}(nh) \to \pi_j$ as $n \to \infty$.
But 2.23 tells us that $\lim_{t \to \infty} p_{ij}(t)$ exists; and so it must equal $\pi_j$. Letting $t \to \infty$ in $P'(t) = P(t)Q$ now gives $\lim_{t \to \infty} P'(t) = \Pi Q$, where each row of $\Pi$ is equal to $\pi$. But if $\lim_{t \to \infty} P'(t)$ is a matrix of constants, and all the $\lim_{t \to \infty} p_{ij}(t)$ exist, these constants must equal $0$. Consequently $0 = \pi Q$, as required.

Finally, if $X(t)$ has more than one stationary distribution, then so does $X(nh)$, which we know, by discrete theory, is not true.

We can now state a limit theorem.

2.24 Theorem Under the hypotheses of 2.22, if $X(t)$ has a (unique) stationary distribution $\pi$, then $\forall\, i, j \in S$, $p_{ij}(t) \to \pi_j$ as $t \to \infty$. Otherwise, $\forall\, i, j \in S$, $p_{ij}(t) \to 0$ as $t \to \infty$.

Proof The proof of the first claim is contained in the above. For the second claim, let $\alpha_j = \lim_{t \to \infty} p_{ij}(t)$. Then also, for $h > 0$, $\alpha_j = \lim_{n \to \infty} p_{ij}(nh)$. Suppose now that at least one $\alpha_j$ is not zero. Then, by discrete theory applied to $X(nh)$, we have that $\alpha$ is a distribution. Consequently, as above, $\lim_{t \to \infty} P'(t) = AQ$, where each row of $A$ is equal to $\alpha$. Then it follows, as above, that $0 = \alpha Q$. That is, $\alpha$ is a stationary distribution.