Continuous time Markov chains


Chapter 2

Continuous time Markov chains

As before we assume that we have a finite or countable statespace $I$, but now the Markov chains $X = \{X(t) : t \ge 0\}$ have a continuous time parameter $t \in [0,\infty)$. In some cases, but not the ones of interest to us, this may lead to analytical problems, which we skip in this lecture.

2.1 Q-Matrices

In continuous time there are no smallest time steps and hence we cannot speak about one-step transition matrices any more. If we are in state $j \in I$ at time $t$, then we can ask for the probability of being in a different state $k \in I$ at time $t+h$,
$$f(h) = \mathbb{P}\{X(t+h) = k \mid X(t) = j\}.$$
We are interested in small time steps, i.e. small values of $h > 0$. Clearly, $f(0) = 0$. Assuming that $f$ is differentiable at $0$, the most interesting information we can obtain about $f$ at the origin is its derivative $f'(0)$, which we call $q_{jk}$:
$$\lim_{h \downarrow 0} \frac{\mathbb{P}\{X(t+h) = k \mid X(t) = j\}}{h} = \lim_{h \downarrow 0} \frac{f(h) - f(0)}{h} = f'(0) = q_{jk}.$$
We can write this as
$$\mathbb{P}\{X(t+h) = k \mid X(t) = j\} = q_{jk}\, h + o(h).$$
Here $o(h)$ is a convenient abbreviation for any function with the property that $\lim_{h \downarrow 0} o(h)/h = 0$; we say that the function is of smaller order than $h$. An advantage of this notation is that the actual function $o$ might be a different one in each line, but we do not need to invent a new name for each occurrence.

Again we have the Markov property, which states that if we know the state $X(t)$ then all additional information about $X$ at times prior to $t$ is irrelevant for the future: for all $k \ne j$, $t_0 < t_1 < \dots < t_n < t$ and $x_0,\dots,x_n \in I$,
$$\mathbb{P}\{X(t+h) = k \mid X(t) = j,\, X(t_i) = x_i \ \forall i\} = \mathbb{P}\{X(t+h) = k \mid X(t) = j\} = q_{jk}\, h + o(h).$$
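As a quick numerical illustration (a sketch, not part of the formal development): once one knows that $P(h) = e^{Qh}$, which is derived in Section 2.2 below, the defining limit can be checked in a few lines of Python. The three-state matrix $Q$ here is an arbitrary example of my own choosing.

import numpy as np
from scipy.linalg import expm

# An arbitrary example Q-matrix on three states: off-diagonal entries are
# the jump rates, each row sums to zero.
Q = np.array([[-3., 1., 2.],
              [ 0., -2., 2.],
              [ 2., 1., -3.]])

# With P(h) = e^{Qh} (Section 2.2), the finite difference (P(h) - I)/h
# should converge to Q as h -> 0.
for h in [0.1, 0.01, 0.001]:
    diff = (expm(Q * h) - np.eye(3)) / h
    print(h, np.max(np.abs(diff - Q)))   # error decreases linearly in h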

From this we get, for every $j \in I$,
$$\mathbb{P}\{X(t+h) = j \mid X(t) = j\} = 1 - \sum_{k \ne j} q_{jk}\, h + o(h) = 1 + q_{jj}\, h + o(h),$$
if we define $q_{jj} = -\sum_{k \ne j} q_{jk}$. As above we have $q_{jj} = f'(0)$ for $f(h) = \mathbb{P}\{X(t+h) = j \mid X(t) = j\}$, but now $f(0) = 1$.

We can enter this information into a matrix $Q = (q_{ij} : i,j \in I)$, which contains all the information about the transitions of the Markov chain $X$. This matrix is called the Q-matrix of the Markov chain. Its necessary properties are:

- all off-diagonal entries $q_{ij}$, $i \ne j$, are non-negative,
- all diagonal entries $q_{ii}$ are nonpositive, and
- the sum over the entries in each row is zero (!).

We say that $q_{ij}$ gives the rate at which we try to enter state $j$ when we are in state $i$, or the jump intensity from $i$ to $j$.

Example 2.1: The Poisson process

Certain random occurrences (for example claim times to an insurance company, arrival times of customers in a shop, ...) happen randomly in such a way that

- the probability of one occurrence in the time interval $(t, t+h)$ is $\lambda h + o(h)$,
- the probability of more than one occurrence is $o(h)$,
- occurrences in the time interval $(t, t+h)$ are independent of what happened before time $t$.

Suppose now that $X(t)$ is the number of occurrences by time $t$. Then $X$ is a continuous time Markov chain with statespace $I = \{0, 1, 2, \dots\}$. Because, by our assumptions,
$$\mathbb{P}\{X(t+h) = k \mid X(t) = j\} = \begin{cases} \lambda h + o(h) & \text{if } k = j+1, \\ o(h) & \text{if } k > j+1, \\ 0 & \text{if } k < j, \end{cases}$$
we get the Q-matrix
$$Q = \begin{pmatrix} -\lambda & \lambda & 0 & 0 & \cdots \\ 0 & -\lambda & \lambda & 0 & \cdots \\ 0 & 0 & -\lambda & \lambda & \cdots \\ \vdots & & & \ddots & \ddots \end{pmatrix}.$$
This process is called a Poisson process with rate $\lambda$. Reason for this name: an analysis argument shows that, if $X(0) = 0$, then
$$\mathbb{P}\{X(t) = n\} = e^{-\lambda t}\, \frac{(\lambda t)^n}{n!} \qquad \text{for } n = 0,1,2,\dots,$$
i.e. $X(t)$ is Poisson distributed with parameter $\lambda t$.
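A minimal simulation sketch of the Poisson process, assuming (as established in Section 2.1.1 below) that the times between occurrences are independent exponentials with rate $\lambda$; the empirical law of $X(t)$ should match the Poisson$(\lambda t)$ probabilities:

import numpy as np
from math import exp, factorial

rng = np.random.default_rng(0)
lam, t, runs = 2.0, 3.0, 100_000

# Gaps between occurrences are i.i.d. Exp(lam), so X(t) is the number of
# partial sums of such gaps that fall below t.
counts = np.empty(runs, dtype=int)
for r in range(runs):
    s, n = 0.0, 0
    while True:
        s += rng.exponential(1.0 / lam)
        if s > t:
            break
        n += 1
    counts[r] = n

for n in range(4):
    empirical = float(np.mean(counts == n))
    poisson = exp(-lam * t) * (lam * t) ** n / factorial(n)
    print(n, round(empirical, 4), round(poisson, 4))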

Example 2.2: The pure birth-process

Suppose we have a population of (immortal) animals reproducing in such a way that, independent of what happened before time $t$ and of what happens to the other animals, in the interval $(t, t+h)$ each animal

- gives birth to 1 child with probability $\lambda h + o(h)$,
- gives birth to more than 1 child with probability $o(h)$,
- gives birth to no children with probability $1 - \lambda h + o(h)$.

Let $p = \lambda h + o(h)$ and $q = 1 - p$ and $X(t)$ the number of animals at time $t$. Then
$$\mathbb{P}\{X(t+h) = n \mid X(t) = n\} = \mathbb{P}\{\text{no births in } (t,t+h) \mid X(t) = n\} = q^n = [1 - \lambda h + o(h)]^n = 1 - n\lambda h + o(h),$$
and
$$\mathbb{P}\{X(t+h) = n+1 \mid X(t) = n\} = \mathbb{P}\{\text{one birth in } (t,t+h) \mid X(t) = n\} = npq^{n-1} = n\lambda h\,[1 - \lambda h + o(h)]^{n-1} = n\lambda h + o(h),$$
and finally, for $k > 1$,
$$\mathbb{P}\{X(t+h) = n+k \mid X(t) = n\} = \mathbb{P}\{k \text{ births in } (t,t+h) \mid X(t) = n\} = \binom{n}{k} p^k q^{n-k} + o(h) = \binom{n}{k} \lambda^k h^k [1 - \lambda h + o(h)]^{n-k} + o(h) = o(h).$$
Hence, $X$ is a continuous time Markov chain with Q-matrix
$$Q = \begin{pmatrix} -\lambda & \lambda & 0 & \cdots \\ 0 & -2\lambda & 2\lambda & \cdots \\ 0 & 0 & -3\lambda & 3\lambda \\ \vdots & & & \ddots \end{pmatrix}.$$
An analysis argument shows that, if $X(0) = 1$, then
$$\mathbb{P}\{X(t) = n\} = e^{-\lambda t}(1 - e^{-\lambda t})^{n-1} \qquad \text{for } n = 1,2,3,\dots,$$
i.e. $X(t)$ is geometrically distributed with parameter $e^{-\lambda t}$.

2.1.1 Construction of the Markov chain

Given the Q-matrix one can construct the paths of a continuous time Markov chain as follows. Suppose the chain starts in a fixed state $X(0) = i$ for $i \in I$. Let
$$J = \min\{t \ge 0 : X(t) \ne X(0)\}$$
be the first jump time of $X$ (we always assume that the minimum exists).

4 Theorem 2.1 Under the law IP i of the Markov chain started in X i the random variables J and X(J) are independent. The distribution of J is exponential with rate q i : j i q ij, which means IP i {J > t} e q it for t. Moreover, and the chain starts afresh at time J. IP i {X(J) j} q ij q i, Picture! Let J 1,J 2,J 3,... be the times between successive jumps. Then Y n X(J J n ) defines a discrete time Markov chain with one-step transition matrix P given by { qij p ij q i if i j, if i j. Given Y n i the next waiting time J n+1 is exponentially distributed with rate q i and independent of Y 1,...,Y n 1 and J 1,...,J n Exponential times Why does the exponential distribution play a special role for continuous Markov chains? Recall the Markov property in the following way: Suppose X() i and let J be the time before the first jump and t,h >. Then, using the Markov property in the second step, IP{J > t + h J > t} IP{J > t + h J > t,x(t) i} IP{J > t + h X(t) i} IP{J > h}. Hence the time J we have to wait for a jump satisfies the lack of memory property: if you have waited for t time units and no jump has occurred, the remaining waiting time has the same distribution as the original waiting time. This is sometimes called the waiting time paradox or constant failure rate property. The only distribution with the lack of memory property is the exponential distribution. Check that, for IP{J > x} e µx, we have IP{J > t + h J > t} e µ(t+h) e µt e µh IP{J > h}. Another important property of the exponential distribution is the following: Theorem 2.2 If S and T are independent exponentially distributed random variables with rate α resp. β, then their minimum S T is also exponentially distributed with rate α + β and it is independent of the event {S T S}. Moreover, IP{S T S} Prove this as an exercise, see Sheet 9! α α + β and IP{S T T } β α + β. 4
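Theorem 2.1 translates directly into a simulation recipe: hold an exponential time with rate $q_i$ in the current state $i$, then jump to $j$ with probability $q_{ij}/q_i$. A minimal sketch (the three-state $Q$ below is just an illustrative example):

import numpy as np

def simulate_ctmc(Q, x0, t_max, rng):
    # Theorem 2.1 as an algorithm: hold an Exp(q_i) time in state i,
    # then jump to j != i with probability q_ij / q_i.
    times, path = [0.0], [x0]
    x, t = x0, 0.0
    while True:
        qi = -Q[x, x]                       # total rate of leaving x
        if qi == 0:                         # absorbing state
            break
        t += rng.exponential(1.0 / qi)      # exponential holding time
        if t > t_max:
            break
        x = rng.choice(len(Q), p=Q[x].clip(min=0) / qi)
        times.append(t)
        path.append(x)
    return times, path

rng = np.random.default_rng(1)
Q = np.array([[-3., 1., 2.], [0., -2., 2.], [2., 1., -3.]])  # example chain
print(simulate_ctmc(Q, 0, 2.0, rng))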

Example 2.3: The M/M/1 queue. Suppose customers arrive according to a Poisson process with rate $\lambda$ at a single server. Each customer requires an independent random service time, which is exponential with mean $1/\mu$ (i.e. with rate $\mu$). Let $X(t)$ be the number of people in the queue (including people currently being served). Then $X$ is a continuous-time Markov chain with
$$Q = \begin{pmatrix} -\lambda & \lambda & 0 & \cdots \\ \mu & -(\lambda+\mu) & \lambda & \cdots \\ 0 & \mu & -(\lambda+\mu) & \lambda \\ \vdots & & \ddots & \ddots \end{pmatrix}.$$
The previous theorem says: if there is at least one person in the queue, the distribution of the jump time (i.e. the time until the queue changes) is exponential with rate $\lambda+\mu$, and the probability that at the jump time the queue is getting shorter is $\mu/(\lambda+\mu)$.

2.2 Kolmogorov's equations and global Markov property

Apart from the local version of the Markov property, for small $h > 0$,
$$\mathbb{P}\{X(t+h) = k \mid X(t) = j,\, (X(s),\, 0 \le s \le t)\} = \begin{cases} q_{jk} h + o(h) & \text{if } j \ne k, \\ 1 + q_{jj} h + o(h) & \text{if } j = k, \end{cases}$$
continuous-time Markov chains also satisfy a global Markov property. To describe it let
$$P(t) = \big(p_{ij}(t) : i,j \in I\big), \quad t \ge 0, \qquad \text{given by} \quad p_{ij}(t) = \mathbb{P}_i\{X(t) = j\},$$
be the transition matrix function of the Markov chain. From $P$ one can get the full information about the law of the Markov chain: for $0 < t_1 < \dots < t_n$ and $j_1,\dots,j_n \in I$,
$$\mathbb{P}_i\{X(t_1) = j_1, \dots, X(t_n) = j_n\} = p_{ij_1}(t_1)\, p_{j_1 j_2}(t_2 - t_1) \cdots p_{j_{n-1} j_n}(t_n - t_{n-1}).$$
$P$ has the following properties:

(a) $p_{ij}(t) \ge 0$ and $\sum_{j \in I} p_{ij}(t) = 1$ for all $i \in I$. Also
$$\lim_{t \downarrow 0} p_{ij}(t) = \begin{cases} 1 & \text{if } i = j, \\ 0 & \text{otherwise.} \end{cases}$$

(b) $\sum_{k \in I} p_{ik}(t)\, p_{kj}(s) = p_{ij}(t+s)$ for all $i,j \in I$ and $s,t \ge 0$.

The equation in (b) is called the Chapman-Kolmogorov equation. It is easily proved:
$$\mathbb{P}_i\{X(t+s) = j\} = \sum_{k \in I} \mathbb{P}_i\{X(t) = k,\, X(t+s) = j\} = \sum_{k \in I} \mathbb{P}_i\{X(t) = k\}\,\mathbb{P}_i\{X(t+s) = j \mid X(t) = k\} = \sum_{k \in I} \mathbb{P}_i\{X(t) = k\}\,\mathbb{P}_k\{X(s) = j\}.$$
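The Chapman-Kolmogorov equation is easy to verify numerically; here is a sketch using the two-state chain with rates $a$ and $b$, whose transition function has a well-known closed form (a standard fact, quoted without proof):

import numpy as np

a, b = 1.5, 0.5   # two-state chain: 0 -> 1 at rate a, 1 -> 0 at rate b

def P(t):
    # Closed-form transition matrix for Q = [[-a, a], [b, -b]].
    e = np.exp(-(a + b) * t)
    return np.array([[b + a * e, a - a * e],
                     [b - b * e, a + b * e]]) / (a + b)

for s, t in [(0.3, 0.7), (1.0, 2.5)]:
    # sum_k p_ik(t) p_kj(s) versus p_ij(t+s): the difference is ~1e-16
    print(np.max(np.abs(P(t) @ P(s) - P(t + s))))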

Note that, by definition, $p_{jk}(h) = q_{jk} h + o(h)$ for $j \ne k$, and hence $p_{jk}'(0) = q_{jk}$ and $p_{jj}'(0) = q_{jj}$. From this we can see how the Q-matrix is obtainable from the transition matrix function. The converse operation is more involved; we restrict attention to the case of finite statespace $I$. Consider $P(t)$ as a matrix; then the Chapman-Kolmogorov equation can be written as
$$P(t+s) = P(t)P(s) \qquad \text{for all } s,t \ge 0.$$
One can differentiate this equation with respect to $s$ or $t$ and gets
$$P'(t) = QP(t) \quad \text{and} \quad P'(t) = P(t)Q.$$
These equations are called the Kolmogorov backward resp. Kolmogorov forward equations. We also know $P(0) = I$, the identity matrix. This matrix-valued differential equation has a unique solution $P(t)$, $t \ge 0$. For the case of a scalar function $p(t)$ and a scalar $q$ instead of the matrix-valued function $P$ and matrix $Q$ the corresponding equation $p'(t) = qp(t)$, $p(0) = 1$, would have the unique solution $p(t) = e^{tq}$. In the matrix case one can argue similarly, defining
$$e^{Qt} = \sum_{k=0}^\infty \frac{Q^k t^k}{k!},$$
with $Q^0 = I$. Then we get the transition matrix function $P(t) = e^{Qt}$ which satisfies the Kolmogorov forward and backward equations.

2.3 Resolvents

Let $Q$ be the Q-matrix of the Markov chain $X$ and $P(t)$, $t \ge 0$, the transition matrix function.

2.3.1 Laplace transforms

A function $f : [0,\infty) \to \mathbb{R}$ is called good if $f$ is continuous and either bounded (there exists $K > 0$ such that $|f(t)| \le K$ for all $t$) or integrable ($\int_0^\infty |f(t)|\,dt < \infty$) or both. If $f$ is good we can associate the Laplace transform $\hat f : (0,\infty) \to \mathbb{R}$, which is defined by
$$\hat f(\lambda) = \int_0^\infty e^{-\lambda t} f(t)\,dt.$$
An important example is the case $f(t) = e^{-\alpha t}$. Then
$$\hat f(\lambda) = \int_0^\infty e^{-\lambda t} e^{-\alpha t}\,dt = \frac{1}{\lambda + \alpha}.$$
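Before moving on, a numerical sanity check of the representation $P(t) = e^{Qt}$ derived above (a sketch; $Q$ is the arbitrary three-state example used earlier): summing the series $\sum_k Q^k t^k / k!$ directly and comparing with scipy's matrix exponential.

import numpy as np
from scipy.linalg import expm

Q = np.array([[-3., 1., 2.], [0., -2., 2.], [2., 1., -3.]])
t = 0.8

P_series = np.eye(3)     # k = 0 term, Q^0 = I
term = np.eye(3)
for k in range(1, 30):   # partial sums of sum_k (Qt)^k / k!
    term = term @ (Q * t) / k
    P_series = P_series + term

print(np.max(np.abs(P_series - expm(Q * t))))   # ~1e-16
print(P_series.sum(axis=1))                     # each row sums to one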

An important result about Laplace transforms is the following:

Theorem 2.3 (Uniqueness Theorem) If $f$ and $g$ are good functions on $[0,\infty)$ and $\hat f(\lambda) = \hat g(\lambda)$ for all $\lambda > 0$, then $f = g$.

This implies that we can invert Laplace transforms uniquely, at least when restricting attention to good functions.

2.3.2 Resolvents

The basic idea here is to calculate the exponential $P(t) = e^{Qt}$ of the Q-matrix using the Laplace transform. Let $\lambda > 0$ and argue that
$$R(\lambda) := \int_0^\infty e^{-\lambda t} P(t)\,dt = \int_0^\infty e^{-\lambda t} e^{Qt}\,dt = \int_0^\infty e^{-(\lambda I - Q)t}\,dt = (\lambda I - Q)^{-1}.$$
Here the integrals over matrix-valued functions are understood componentwise, and the last step is done by analogy with the real-valued situation, but can be made rigorous. Note that the inverse of the matrix $\lambda I - Q$ can be calculated for all values $\lambda$ which are not an eigenvalue of $Q$. Recall that the inverse fails to exist if and only if the characteristic polynomial $\det(\lambda I - Q)$ is zero, which is also the criterion for $\lambda$ to be an eigenvalue of $Q$.

Now $R(\lambda) = (\lambda I - Q)^{-1}$, if it exists, can be calculated and $P(t)$ can be recovered by inverting the Laplace transform. The matrix function $R(\lambda)$ is called the resolvent of $Q$ (or of $X$). Its components are the Laplace transforms of the functions $p_{ij}$,
$$r_{ij}(\lambda) := \int_0^\infty e^{-\lambda t} p_{ij}(t)\,dt = \hat p_{ij}(\lambda).$$
Resolvents also have a probabilistic interpretation: Suppose we have an alarm-clock which rings, independently of the Markov chain, at a random time $A$, which is exponentially distributed with rate $\lambda$, i.e.
$$\mathbb{P}\{A > t\} = e^{-\lambda t}, \qquad \mathbb{P}\{A \in (t, t+dt)\} = \lambda e^{-\lambda t}\,dt.$$
What is the state of the Markov chain when the alarm clock rings? The probability of being in state $j$ is
$$\mathbb{P}_i\{X(A) = j\} = \int_0^\infty \mathbb{P}_i\{X(t) = j,\, A \in (t,t+dt)\} = \int_0^\infty p_{ij}(t)\, \lambda e^{-\lambda t}\,dt = \lambda\, r_{ij}(\lambda),$$
hence
$$\mathbb{P}_i\{X(A) = j\} = \lambda\, r_{ij}(\lambda) \qquad \text{for } A \text{ independent of } X, \text{ exponential with rate } \lambda.$$
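The two descriptions of the resolvent can be compared numerically: the matrix inverse $(\lambda I - Q)^{-1}$ against a discretisation of the defining integral. A sketch (the integral is truncated at $t = 6$, which is harmless here since $e^{-\lambda t}$ decays quickly):

import numpy as np
from scipy.linalg import expm

Q = np.array([[-3., 1., 2.], [0., -2., 2.], [2., 1., -3.]])
lam = 4.0

R = np.linalg.inv(lam * np.eye(3) - Q)       # resolvent (lam I - Q)^{-1}

# Midpoint-rule discretisation of int_0^inf e^{-lam t} P(t) dt.
dt = 0.001
R_num = sum(np.exp(-lam * t) * expm(Q * t)
            for t in np.arange(dt / 2, 6, dt)) * dt

print(np.max(np.abs(R - R_num)))     # small discretisation error
print((lam * R).sum(axis=1))         # rows of lam*R(lam) sum to one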

We now discuss the use of resolvents in calculations for continuous-time Markov chains using the following problem:

Problem: Consider a continuous-time Markov chain $X$ with state space $I := \{A, B, C\}$ and transitions between the states described as follows. When currently in state $A$ at time $t$, in the next time interval $(t,t+h)$ of small length $h$ and independent of the past behaviour of the chain, with

- probability $h + o(h)$ the chain jumps into state $B$,
- probability $2h + o(h)$ the chain jumps into state $C$,
- probability $1 - 3h + o(h)$ the chain remains in state $A$.

Whilst in state $B$, the chain tries to enter state $C$ at rate 2. The chain cannot jump into state $A$ directly from state $B$. On entering state $C$, the chain remains there for an independent exponential amount of time of rate 3 before jumping. When the jump occurs, with probability 2/3 it is into state $A$, and with probability 1/3 the jump is to state $B$.

Questions:

(a) Give the Q-matrix of the continuous-time Markov chain $X$.

(b) If the Markov chain is initially in state $A$, that is $X(0) = A$, what is the probability that the Markov chain is still in state $A$ at time $t$, that is, $\mathbb{P}_A\{X(t) = A\}$?

(c) Starting from state $C$ at time 0, show that the probability $X$ is in state $B$ at time $t$ is given by $\frac13(1 - e^{-3t})$ for $t \ge 0$.

(d) Initially, $X$ is in state $C$. Find the distribution for the position of $X$ after an independent exponential time, $T$, of rate 4 has elapsed. In particular, you should find $\mathbb{P}_C\{X(T) = B\} = \frac17$.

(e) Given that $X$ starts in state $C$, find the probability that $X$ enters state $A$ at some point before time $t$. [See Section 2.3.3 for the solution.]

(f) What is the probability that we have reached state $C$ at some point prior to time $U$, where $U$ is an independent exponential random variable of rate $\mu = 2.5$, given that the chain is in state $B$ at time zero? [See Section 2.3.3 for the solution.]

The method to obtain (a) should be familiar to you by now. For the jumps from state $C$ we use Theorem 2.2: recall $q_{CA}$ is the rate of jumping from $C$ to $A$ and $q_{CB}$ is the rate of jumping from $C$ to $B$. We are given $3 = q_{CA} + q_{CB}$, the rate of leaving state $C$. The probability that the first jump goes to state $A$ is, again by Theorem 2.2, $q_{CA}/(q_{CA} + q_{CB})$, which is given as 2/3. Hence we get $q_{CA} = 2$ and $q_{CB} = 1$. Altogether,
$$Q = \begin{pmatrix} -3 & 1 & 2 \\ 0 & -2 & 2 \\ 2 & 1 & -3 \end{pmatrix}.$$

We now solve (b) with the resolvent method. Recall $p_{AA}(t) = \mathbb{P}_A\{X(t) = A\}$, and
$$r_{AA}(\lambda) = \int_0^\infty e^{-\lambda t} p_{AA}(t)\,dt = \hat p_{AA}(\lambda).$$
As $R(\lambda) = (\lambda I - Q)^{-1}$ we start by inverting
$$\lambda I - Q = \begin{pmatrix} \lambda+3 & -1 & -2 \\ 0 & \lambda+2 & -2 \\ -2 & -1 & \lambda+3 \end{pmatrix}.$$
First, expanding along the second row,
$$\det(\lambda I - Q) = (\lambda+2)\det\begin{pmatrix} \lambda+3 & -2 \\ -2 & \lambda+3 \end{pmatrix} + 2\det\begin{pmatrix} \lambda+3 & -1 \\ -2 & -1 \end{pmatrix} = \lambda(\lambda+3)(\lambda+5).$$
(Side remark: $\lambda$ is always a factor in this determinant, because $Q$ is a singular matrix. Recall that the sum over each row is zero!)

Now the inverse of a matrix $A$ is given by
$$(A^{-1})_{ij} = \frac{(-1)^{i+j}}{\det A}\, \det(M_{ji}),$$
where $M_{ji}$ is the matrix with row $j$ and column $i$ removed. Hence, in our example, the entry in position $AA$ is obtained by
$$r_{AA}(\lambda) = \big((\lambda I - Q)^{-1}\big)_{AA} = \frac{\det\begin{pmatrix} \lambda+2 & -2 \\ -1 & \lambda+3 \end{pmatrix}}{\det(\lambda I - Q)} = \frac{\lambda^2 + 5\lambda + 4}{\lambda(\lambda+3)(\lambda+5)}.$$
It remains to invert the Laplace transform. For this purpose we need to form partial fractions. Solving
$$\lambda^2 + 5\lambda + 4 = \alpha(\lambda+3)(\lambda+5) + \beta\lambda(\lambda+5) + \gamma\lambda(\lambda+3)$$
gives $\alpha = 4/15$ (plug in $\lambda = 0$), $\beta = 1/3$ (plug in $\lambda = -3$) and $\gamma = 2/5$ (plug in $\lambda = -5$). Hence
$$r_{AA}(\lambda) = \frac{4}{15\lambda} + \frac{1}{3(\lambda+3)} + \frac{2}{5(\lambda+5)}.$$
Now the inverse Laplace transform of $1/(\lambda+\beta)$ is $e^{-\beta t}$. We thus get
$$p_{AA}(t) = \frac{4}{15} + \frac13 e^{-3t} + \frac25 e^{-5t},$$
and note that $p_{AA}(0) = 1$, as it should be!
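The inversion can be double-checked against the matrix exponential $P(t) = e^{Qt}$; a minimal Python sketch (states $A, B, C$ indexed $0, 1, 2$):

import numpy as np
from scipy.linalg import expm

Q = np.array([[-3., 1., 2.], [0., -2., 2.], [2., 1., -3.]])  # states A, B, C

for t in [0.0, 0.5, 1.0, 2.0]:
    exact = 4/15 + np.exp(-3*t)/3 + 2*np.exp(-5*t)/5   # inverted transform
    print(t, round(expm(Q * t)[0, 0], 6), round(exact, 6))  # entry AA of P(t)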

Now solve part (c). To find $\mathbb{P}_C\{X(t) = B\}$ consider
$$r_{CB}(\lambda) = \int_0^\infty e^{-\lambda t} p_{CB}(t)\,dt = \hat p_{CB}(\lambda).$$
We need to find the $CB$ entry of the matrix $(\lambda I - Q)^{-1}$; recalling the inversion formula we get
$$r_{CB}(\lambda) = \frac{-\det\begin{pmatrix} \lambda+3 & -1 \\ -2 & -1 \end{pmatrix}}{\lambda(\lambda+3)(\lambda+5)} = \frac{\lambda+5}{\lambda(\lambda+3)(\lambda+5)} = \frac{1}{\lambda(\lambda+3)} = \frac13\Big(\frac1\lambda - \frac{1}{\lambda+3}\Big),$$
and inverting Laplace transforms gives
$$\mathbb{P}_C\{X(t) = B\} = p_{CB}(t) = \frac13\big(1 - e^{-3t}\big).$$
Suppose for part (d) that $T$ is exponentially distributed with rate $\lambda$ and independent of the chain. Recall $\mathbb{P}_i\{X(T) = j\} = \lambda r_{ij}(\lambda)$, and by matrix inversion,
$$r_{CA}(\lambda) = \frac{\det\begin{pmatrix} 0 & \lambda+2 \\ -2 & -1 \end{pmatrix}}{\det(\lambda I - Q)} = \frac{2(\lambda+2)}{\lambda(\lambda+3)(\lambda+5)}.$$
As $T$ is exponential with parameter $\lambda = 4$ we get (no Laplace inversion needed!)
$$\mathbb{P}_C\{X(T) = A\} = 4\, r_{CA}(4) = \frac{2(4+2)}{(4+3)(4+5)} = \frac{4}{21}.$$
Similarly,
$$r_{CB}(\lambda) = \frac{\lambda+5}{\lambda(\lambda+3)(\lambda+5)} \qquad \text{and} \qquad r_{CC}(\lambda) = \frac{\det\begin{pmatrix} \lambda+3 & -1 \\ 0 & \lambda+2 \end{pmatrix}}{\det(\lambda I - Q)} = \frac{(\lambda+3)(\lambda+2)}{\lambda(\lambda+3)(\lambda+5)}.$$
Plugging in $\lambda = 4$ we get
$$\mathbb{P}_C\{X(T) = B\} = 4\, r_{CB}(4) = \frac17 \qquad \text{and} \qquad \mathbb{P}_C\{X(T) = C\} = 4\, r_{CC}(4) = \frac23.$$
At this point it is good to check that the three values add up to one and thus define a probability distribution on the statespace $I = \{A, B, C\}$.
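The whole distribution in (d) comes out of one matrix inverse: $\lambda R(\lambda) = \lambda(\lambda I - Q)^{-1}$ with $\lambda = 4$, row $C$, should reproduce $(4/21, 1/7, 2/3)$. A sketch:

import numpy as np

Q = np.array([[-3., 1., 2.], [0., -2., 2.], [2., 1., -3.]])  # states A, B, C
lam = 4.0

dist = lam * np.linalg.inv(lam * np.eye(3) - Q)[2]   # row C of lam * R(lam)
print(dist)                      # [0.1904..., 0.1428..., 0.6666...]
print([4/21, 1/7, 2/3])          # the values computed above
print(dist.sum())                # 1.0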

2.3.3 First hitting times

Let $T_j = \inf\{t > 0 : X(t) = j\}$ be the first positive time we are in state $j$. Define, for $i,j \in I$,
$$F_{ij}(t) = \mathbb{P}_i\{T_j \le t\} = \mathbb{P}\{T_j \le t \mid X(0) = i\};$$
then $F_{ij}(t)$ is the probability that we have entered state $j$ at some time prior to time $t$, if we start the chain in $i$. We have the first hitting time matrix function
$$F(t) = \big(F_{ij}(t) : i,j \in I\big), \quad t \ge 0,$$
with $F(0) = I$ and
$$F_{ij}(t) = \int_0^t f_{ij}(s)\,ds, \qquad f_{ij}(s)\,ds = \mathbb{P}_i\{T_j \in ds\},$$
where $f_{ij}(t)$ is the first hitting time density. The matrix functions $f = (f_{ij}(t),\, t \ge 0)$ and $P$ are linked by the integral equation
$$p_{ij}(t) = \int_0^t f_{ij}(s)\, p_{jj}(t-s)\,ds, \qquad \text{for } i,j \in I \text{ and } t \ge 0. \tag{2.3.1}$$
This can be checked as follows: use the strong Markov property, which is the fact that the chain starts afresh at time $T_j$ in state $j$ and evolves independently of everything that happened before time $T_j$, to get
$$p_{ij}(t) = \mathbb{P}_i\{X(t) = j\} = \mathbb{P}_i\{X(t) = j;\, T_j \le t\} = \int_0^t \mathbb{P}_i\{T_j \in ds\}\, \mathbb{P}_j\{X(t-s) = j\} = \int_0^t f_{ij}(s)\, p_{jj}(t-s)\,ds.$$
Given two functions $f, g : [0,\infty) \to \mathbb{R}$ we can define the convolution function $f * g$ by
$$f * g(t) = \int_0^t f(s)\, g(t-s)\,ds,$$
and check by substituting $u = t - s$ that $f * g = g * f$. We can rewrite the integral equation (2.3.1) as $p_{ij}(t) = f_{ij} * p_{jj}(t)$. In order to solve this equation for the unknown $f_{ij}$ we use Laplace transforms again.

Theorem 2.4 (Convolution Theorem) If $f, g$ are good functions, then $\widehat{f * g}(\lambda) = \hat f(\lambda)\, \hat g(\lambda)$.

To make this plausible look at the example of two functions $f(t) = e^{-\alpha t}$ and $g(t) = e^{-\beta t}$. Then
$$f * g(t) = \int_0^t f(s)\, g(t-s)\,ds = e^{-\beta t} \int_0^t e^{-(\alpha - \beta)s}\,ds = \frac{e^{-\beta t} - e^{-\alpha t}}{\alpha - \beta}.$$
Then
$$\widehat{f * g}(\lambda) = \int_0^\infty e^{-\lambda s}(f * g)(s)\,ds = \frac{1}{\alpha - \beta}\Big\{\frac{1}{\lambda+\beta} - \frac{1}{\lambda+\alpha}\Big\} = \frac{1}{\lambda+\alpha} \cdot \frac{1}{\lambda+\beta} = \hat f(\lambda)\,\hat g(\lambda).$$
The convolution theorem applied to the integral equation (2.3.1) gives a formula for the Laplace transform of the first hitting time density,
$$\hat f_{ij}(\lambda) = \frac{r_{ij}(\lambda)}{r_{jj}(\lambda)} \qquad \text{for } i,j \in I \text{ and } \lambda > 0. \tag{2.3.2}$$
We use this now to give an answer to (e) in our problem. We are looking for
$$\mathbb{P}_C\{T_A \le t\} = F_{CA}(t) = \int_0^t f_{CA}(s)\,ds.$$

We first find the Laplace transform $\hat f_{CA}$ of $f_{CA}$, using $r_{CA}$ and $r_{AA}$,
$$\hat f_{CA}(\lambda) = \frac{r_{CA}(\lambda)}{r_{AA}(\lambda)} = \frac{\det\begin{pmatrix} 0 & \lambda+2 \\ -2 & -1 \end{pmatrix}}{\det\begin{pmatrix} \lambda+2 & -2 \\ -1 & \lambda+3 \end{pmatrix}} = \frac{2(\lambda+2)}{\lambda^2 + 5\lambda + 4},$$
and by partial fractions
$$\hat f_{CA}(\lambda) = \frac{2}{3(\lambda+1)} + \frac{4}{3(\lambda+4)}.$$
In this form, the Laplace transform can be inverted, which gives
$$f_{CA}(t) = \frac23 e^{-t} + \frac43 e^{-4t} \qquad \text{for } t \ge 0.$$
Now, by integration, we get
$$\mathbb{P}_C\{T_A \le t\} = \int_0^t f_{CA}(s)\,ds = \frac23 \int_0^t e^{-s}\,ds + \frac43 \int_0^t e^{-4s}\,ds = 1 - \frac23 e^{-t} - \frac13 e^{-4t}.$$
Check that $\mathbb{P}_C\{T_A < \infty\} = 1$, as expected.

As in the case of $r_{ij} = \hat p_{ij}$, the Laplace transforms $\hat f_{ij}$ also admit a direct probabilistic interpretation with the help of an alarm clock, which goes off at a random time $A$, which is independent of the Markov chain $X$ and exponentially distributed with rate $\lambda$. Indeed, recalling that $\lambda e^{-\lambda x}$, $x > 0$, is the density of $A$ and $f_{ij}$ is the density of $T_j$,
$$\mathbb{P}_i\{T_j < A\} = \int_0^\infty \mathbb{P}_i\{T_j < a\}\, \lambda e^{-\lambda a}\,da = \int_0^\infty \int_0^a f_{ij}(s)\,ds\; \lambda e^{-\lambda a}\,da = \int_0^\infty f_{ij}(s) \int_s^\infty \lambda e^{-\lambda a}\,da\,ds = \int_0^\infty f_{ij}(s)\, e^{-\lambda s}\,ds = \hat f_{ij}(\lambda).$$
Altogether,
$$\hat f_{ij}(\lambda) = \mathbb{P}_i\{T_j < A\} \qquad \text{for } i,j \in I \text{ and } \lambda > 0.$$
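Before turning to (f), the answer to (e) can also be checked by Monte Carlo, running the jump-chain construction of Section 2.1.1 from state $C$ until $A$ is hit and comparing the empirical distribution function of $T_A$ with $1 - \frac23 e^{-t} - \frac13 e^{-4t}$. A sketch:

import numpy as np

Q = np.array([[-3., 1., 2.], [0., -2., 2.], [2., 1., -3.]])  # states A, B, C
rng = np.random.default_rng(2)

def hitting_time_of_A_from_C():
    # Jump-chain construction (Section 2.1.1), run until state A is reached.
    x, t = 2, 0.0                              # start in C
    while x != 0:                              # 0 is state A
        qi = -Q[x, x]
        t += rng.exponential(1.0 / qi)
        x = rng.choice(3, p=Q[x].clip(min=0) / qi)
    return t

samples = np.array([hitting_time_of_A_from_C() for _ in range(50_000)])
for t in [0.5, 1.0, 2.0]:
    exact = 1 - (2/3) * np.exp(-t) - (1/3) * np.exp(-4 * t)
    print(t, float(np.mean(samples <= t)), round(exact, 4))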

We use this now to give an answer to (f) in our problem. The question asks for $\mathbb{P}_B\{T_C < U\}$, where $U$ is exponential with rate $\mu = 2.5$ and independent of the Markov chain. We now know that
$$\mathbb{P}_B\{T_C < U\} = \hat f_{BC}(\mu) = \frac{r_{BC}(\mu)}{r_{CC}(\mu)} = \frac{-\det\begin{pmatrix} \mu+3 & -2 \\ 0 & -2 \end{pmatrix}}{\det\begin{pmatrix} \mu+3 & -1 \\ 0 & \mu+2 \end{pmatrix}} = \frac{2(\mu+3)}{(\mu+3)(\mu+2)} = \frac{2}{\mu+2},$$
and plugging in $\mu = 2.5$ gives $\mathbb{P}_B\{T_C < U\} = \frac49$.

Example: A machine can be in three states $A, B, C$ and the transition between the states is given by a continuous-time Markov chain with Q-matrix
$$Q = \begin{pmatrix} -3 & 1 & 2 \\ 3 & -3 & 0 \\ 1 & 2 & -3 \end{pmatrix}.$$
At time zero the machine is in state $A$. Suppose a supervisor arrives for inspection at the site of the machine at an independent exponential random time $T$, with rate $\lambda = 2$. What is the probability that he has arrived when the machine is in state $C$ for the first time?

We write $T_C = \inf\{t > 0 : X(t) = C\}$ for the first entry and $S_C = \inf\{t > T_C : X(t) \ne C\}$ for the first exit time from $C$. Then the required probability is $\mathbb{P}_A\{T_C < T,\, S_C > T\}$. Note that $J = S_C - T_C$ is the length of time spent by the chain in state $C$ during the first visit. This time is exponentially distributed with rate $q_{CA} + q_{CB} = 3$ and we get
$$\mathbb{P}_A\{T_C < T,\, S_C > T\} = \mathbb{P}_A\{T_C < T\}\,\mathbb{P}_A\{S_C > T \mid T_C < T\} = \mathbb{P}_A\{T_C < T\}\,\mathbb{P}_A\{T - T_C < J \mid T > T_C\}.$$
By the lack of memory property, given $T > T_C$ the law of $\tilde T = T - T_C$ is exponential with rate $\lambda = 2$ again. Hence
$$\mathbb{P}_A\{T_C < T,\, S_C > T\} = \mathbb{P}_A\{T_C < T\}\,\mathbb{P}_C\{\tilde T < J\},$$
where $\tilde T$ is an independent exponential random variable with rate $\lambda = 2$. The right hand side can be calculated. We write down
$$\lambda I - Q = \begin{pmatrix} \lambda+3 & -1 & -2 \\ -3 & \lambda+3 & 0 \\ -1 & -2 & \lambda+3 \end{pmatrix},$$
and derive
$$\mathbb{P}_A\{T_C < T\} = \hat f_{AC}(\lambda) = \frac{r_{AC}(\lambda)}{r_{CC}(\lambda)} = \frac{\det\begin{pmatrix} -1 & -2 \\ \lambda+3 & 0 \end{pmatrix}}{\det\begin{pmatrix} \lambda+3 & -1 \\ -3 & \lambda+3 \end{pmatrix}} = \frac{2(\lambda+3)}{\lambda^2 + 6\lambda + 6},$$
and, by Theorem 2.2,
$$\mathbb{P}_C\{J > \tilde T\} = \frac{\lambda}{\lambda+3},$$
recalling that $J$ and $\tilde T$ are independent exponentially distributed with rates 3 resp. $\lambda = 2$. Hence,
$$\mathbb{P}_A\{T_C < T,\, S_C > T\} = \frac{2(\lambda+3)}{\lambda^2+6\lambda+6} \cdot \frac{\lambda}{\lambda+3} = \frac{2\lambda}{\lambda^2 + 6\lambda + 6},$$
which for $\lambda = 2$ equals $\frac{4}{22} = \frac{2}{11}$.

2.4 Long-term behaviour and invariant distribution

We now look at the long-term behaviour of continuous time Markov chains. The notions of an irreducible and recurrent Markov chain can be defined in the same manner as in the discrete case:

A chain is irreducible if one can get from any state to any other state in finite time, and an irreducible chain is recurrent if the probability that we return to the starting state in finite time is one. We assume in this chapter that our chain satisfies these two conditions and also a technical condition called non-explosive, which ensures that the process is defined at all times; see Norris for further details.

2.4.1 The invariant distribution

Recall that in the discrete time case the stationary distribution $\pi$ on $I$ was given as the solution of the equation $\pi P = \pi$ and, by iteration, $\pi = \pi P = \pi P^2 = \pi P^3 = \dots = \pi P^n$ for all $n$. Hence we would expect an invariant distribution $\pi$ of a continuous time Markov chain with transition matrix function $(P(t) : t \ge 0)$ to satisfy
$$\pi P(t) = \pi \qquad \text{for all } t \ge 0.$$
If the statespace is finite, we can differentiate with respect to time to get $\pi P'(t) = 0$. Setting $t = 0$ and recalling $P'(0) = Q$ we get
$$\pi Q = 0.$$
These two characterisations can be shown to be equivalent in the general case (see Norris). Any probability distribution $\pi$ satisfying $\pi Q = 0$ is called an invariant distribution, or sometimes equilibrium or stationary distribution.

Theorem 2.5 Suppose $(X(t) : t \ge 0)$ satisfies our assumptions and $\pi$ solves
$$\sum_{i \in I} \pi_i q_{ij} = 0, \quad \pi_j > 0 \text{ for all } j \in I, \quad \text{and} \quad \sum_{i \in I} \pi_i = 1;$$
then
$$\lim_{t \to \infty} p_{ij}(t) = \pi_j \qquad \text{for all } i \in I,$$
and if $V_j(t) = \int_0^t 1_{\{X(s) = j\}}\,ds$ is the time spent in state $j$ up to time $t$, we have
$$\lim_{t \to \infty} \frac{V_j(t)}{t} = \pi_j \qquad \text{with probability one.}$$
This theorem is analogous to (parts of) the Big Theorem in the discrete time case. Note that it also implies that the invariant distribution, if it exists, is unique.
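Numerically, finding $\pi$ amounts to solving the linear system $\pi Q = 0$, $\sum_i \pi_i = 1$. A sketch for the three-state chain from the problem of Section 2.3 (states $A, B, C$ indexed $0, 1, 2$); note that the entry for $A$ comes out as $4/15$, matching $\lim_{t\to\infty} p_{AA}(t)$ found in part (b):

import numpy as np

Q = np.array([[-3., 1., 2.], [0., -2., 2.], [2., 1., -3.]])  # states A, B, C

# Solve pi Q = 0 subject to sum(pi) = 1: stack the normalisation onto
# Q^T and solve the overdetermined system by least squares.
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0., 0., 0., 1.])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pi)        # [4/15, 1/3, 2/5]
print(pi @ Q)    # ~ [0, 0, 0]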

2.4.2 Symmetrisability

We say that $Q$ is symmetrisable if we can find $m = (m_i : i \in I)$ with $m_i > 0$ such that
$$m_i q_{ij} = m_j q_{ji} \qquad \text{for all } i,j \in I.$$
These are the detailed balance equations.

Theorem 2.6 If we find an $m$ solving the detailed balance equations such that $M = \sum_{i \in I} m_i < \infty$, then $\pi_i = m_i/M$ defines the unique invariant distribution.

For the proof note that, for all $j \in I$,
$$(\pi Q)_j = \frac1M \sum_{i \in I} m_i q_{ij} = \frac1M \sum_{i \in I} m_j q_{ji} = \frac{m_j}{M} \sum_{i \in I} q_{ji} = 0,$$
since the rows of $Q$ sum to zero.

Note that, as in the discrete case, the matrix $Q$ may not be symmetrisable, but the invariant distribution may still exist. Then the invariant distribution may be found using generating functions.

Example: The M/M/1 queue. A single server has a service rate $\mu$, customers arrive individually at a rate $\lambda$. Let $X(t)$ be the number of customers in the queue (including the customer currently served) and $I = \{0, 1, 2, \dots\}$. The Q-matrix is given by
$$Q = \begin{pmatrix} -\lambda & \lambda & 0 & \cdots \\ \mu & -(\lambda+\mu) & \lambda & \cdots \\ 0 & \mu & -(\lambda+\mu) & \lambda \\ \vdots & & \ddots & \ddots \end{pmatrix}.$$
We first try to find the invariant distribution using symmetrisability. For this purpose we have to solve the detailed balance equations $m_i q_{ij} = m_j q_{ji}$, explicitly
$$m_0 \lambda = m_1 \mu \iff m_1 = \frac\lambda\mu m_0, \qquad m_1 \lambda = m_2 \mu \iff m_2 = \frac\lambda\mu m_1, \qquad m_n \lambda = m_{n+1} \mu \iff m_{n+1} = \frac\lambda\mu m_n \quad \text{for all } n \ge 2.$$
Hence, denoting by $\rho = \lambda/\mu$ the traffic intensity, we have $m_i = \rho^i m_0$. If $\rho < 1$, then
$$M = \sum_{i=0}^\infty m_i = m_0 \sum_{i=0}^\infty \rho^i = \frac{m_0}{1-\rho} < \infty,$$
hence we get that
$$\pi_i = \frac{m_i}{M} = (1-\rho)\rho^i \qquad \text{for all } i \ge 0.$$
In other words, if $\rho < 1$ the invariant distribution is the geometric distribution with parameter $\rho$. Over a long time range the mean queue length is
$$\sum_{i=1}^\infty i\pi_i = (1-\rho)\sum_{i=1}^\infty i\rho^i = (1-\rho)\rho \sum_{i=1}^\infty i\rho^{i-1} = \frac{\rho}{1-\rho}.$$
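A sketch confirming the geometric invariant distribution on a truncated version of the M/M/1 queue (the truncation at $N$ customers is an approximation introduced only for the computation; for $\rho < 1$ and large $N$ the boundary effect is negligible):

import numpy as np

lam, mu, N = 1.0, 2.0, 60      # rho = 1/2; truncate the queue at N customers
rho = lam / mu

# Generator of the truncated M/M/1 queue (arrivals blocked in state N).
Q = np.zeros((N + 1, N + 1))
for i in range(N):
    Q[i, i + 1] = lam          # arrival
    Q[i + 1, i] = mu           # service completion
Q -= np.diag(Q.sum(axis=1))

A = np.vstack([Q.T, np.ones(N + 1)])
b = np.zeros(N + 2); b[-1] = 1.0
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

print(pi[:4])                                    # ~ (1 - rho) rho^i
print([(1 - rho) * rho**i for i in range(4)])
print(pi @ np.arange(N + 1), rho / (1 - rho))    # mean queue length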

PS: In the case $\rho > 1$ the chain is not recurrent, in the case $\rho = 1$ the chain is null recurrent, and no invariant distribution exists.

As a (more widely applicable) alternative one can also solve the equation $\pi Q = 0$ using the generating function
$$\hat\pi(s) := \sum_{n=0}^\infty s^n \pi_n = \pi_0 + \pi_1 s + \pi_2 s^2 + \cdots.$$
We write out the equations $\pi Q = 0$ as
$$-\lambda\pi_0 + \mu\pi_1 = 0, \qquad \lambda\pi_0 - (\lambda+\mu)\pi_1 + \mu\pi_2 = 0, \qquad \lambda\pi_1 - (\lambda+\mu)\pi_2 + \mu\pi_3 = 0,$$
and so on. Multiplying the first equation by $s$, the second by $s^2$ and so forth, and adding up, we get
$$\lambda s^2 \hat\pi(s) - (\lambda+\mu)s\big(\hat\pi(s) - \pi_0\big) + \mu\big(\hat\pi(s) - \pi_0\big) - \lambda\pi_0 s = 0,$$
hence
$$\hat\pi(s)\big(\lambda s^2 - (\lambda+\mu)s + \mu\big) = \pi_0\, \mu(1-s).$$
This implies
$$\hat\pi(s) = \frac{\pi_0\, \mu(1-s)}{\lambda s^2 - (\lambda+\mu)s + \mu} = \frac{\pi_0\, \mu(1-s)}{(s-1)(\lambda s - \mu)} = \frac{\pi_0\, \mu}{\mu - \lambda s}.$$
To find $\pi_0$ recall that $\hat\pi(1) = 1$, hence
$$\pi_0 = \frac{\mu - \lambda}{\mu} = 1 - \rho,$$
and we infer that
$$\hat\pi(s) = \frac{(1-\rho)\mu}{\mu - \lambda s} = \frac{1-\rho}{1 - \rho s}.$$
To recover the $\pi_n$ we expand $\hat\pi(s)$ as a power series and equate the coefficients,
$$\hat\pi(s) = (1-\rho) \sum_{n=0}^\infty \rho^n s^n,$$
hence $\pi_n = (1-\rho)\rho^n$. The mean queue length in the long term can be found using
$$\hat\pi'(s) = \sum_{n=1}^\infty n s^{n-1} \pi_n \qquad \text{and} \qquad \hat\pi'(s) = \frac{\rho(1-\rho)}{(1-\rho s)^2},$$
which implies that the mean queue length is
$$\sum_{n=1}^\infty n\pi_n = \hat\pi'(1) = \frac{\rho}{1-\rho}.$$

Last Example: The pure birth process. Recall that the pure birth process $X$ is a continuous time Markov chain with Q-matrix
$$Q = \begin{pmatrix} -\lambda & \lambda & 0 & \cdots \\ 0 & -2\lambda & 2\lambda & \cdots \\ 0 & 0 & -3\lambda & 3\lambda \\ \vdots & & & \ddots \end{pmatrix}.$$
This process is strictly increasing, and hence transient. We show that, if $X(0) = 1$, then
$$\mathbb{P}\{X(t) = n\} = e^{-\lambda t}\big(1 - e^{-\lambda t}\big)^{n-1} \qquad \text{for } n = 1,2,3,\dots, \tag{2.4.1}$$
i.e. $X(t)$ is geometrically distributed with parameter $e^{-\lambda t}$. Note that this shows that, as $t \to \infty$,
$$\mathbb{P}\{X(t) \le n\} = \sum_{k=1}^n e^{-\lambda t}\big(1 - e^{-\lambda t}\big)^{k-1} = 1 - \big(1 - e^{-\lambda t}\big)^n \longrightarrow 0,$$
hence the process $X(t)$ converges to infinity.

To prove (2.4.1) we use the resolvent method. We now write $\rho$ for the Laplace parameter (to avoid confusion with the birth rate $\lambda$), hence we have to invert
$$\rho I - Q = \begin{pmatrix} \rho+\lambda & -\lambda & 0 & \cdots \\ 0 & \rho+2\lambda & -2\lambda & \cdots \\ 0 & 0 & \rho+3\lambda & -3\lambda \\ \vdots & & & \ddots \end{pmatrix}.$$
The coefficients $r_{1n}$, $n = 1,2,\dots$, of the inverse matrix hence satisfy
$$r_{11}(\rho+\lambda) = 1, \qquad \text{and} \qquad r_{1,n-1}\big(-(n-1)\lambda\big) + r_{1n}(\rho + n\lambda) = 0 \quad \text{for } n \ge 2.$$
Solving this gives $r_{11} = (\rho+\lambda)^{-1}$, and
$$r_{1n} = \frac{(n-1)\lambda}{\rho + n\lambda}\, r_{1,n-1} = \lambda^{n-1}(n-1)!\, \prod_{k=1}^n \frac{1}{\rho + k\lambda}.$$
We use partial fractions, which means finding $a_k$, $k = 1,\dots,n$, such that
$$r_{1n} = \sum_{k=1}^n \frac{a_k}{\rho + k\lambda}.$$
We thus get the equations
$$\lambda^{n-1}(n-1)! = \sum_{k=1}^n a_k \prod_{j \ne k} (\rho + j\lambda).$$
Plugging in $\rho = -\lambda k$ gives
$$\lambda^{n-1}(n-1)! = a_k \prod_{j \ne k} \big(\lambda(j-k)\big),$$
hence
$$a_k = \frac{(n-1)!}{\prod_{j \ne k}(j-k)} = \binom{n-1}{k-1}(-1)^{k-1}.$$

Now we have
$$r_{1n} = \sum_{k=1}^n \binom{n-1}{k-1}(-1)^{k-1}\, \frac{1}{\rho + k\lambda}.$$
Inverting the Laplace transform yields
$$p_{1n}(t) = \sum_{k=1}^n \binom{n-1}{k-1}(-1)^{k-1} e^{-k\lambda t} = e^{-\lambda t} \sum_{k=1}^n \binom{n-1}{k-1}\big(-e^{-\lambda t}\big)^{k-1} = e^{-\lambda t}\big(1 - e^{-\lambda t}\big)^{n-1},$$
by the binomial formula. This verifies (2.4.1).
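As a final numerical check of (2.4.1), a sketch: truncate the pure birth chain at $N$ states (the mass above $N$ is negligible for moderate $t$, an approximation introduced only for the computation) and compare the first row of $e^{Qt}$ with the geometric probabilities.

import numpy as np
from scipy.linalg import expm

lam, N, t = 1.0, 200, 1.0          # truncate the state space at N animals

# Pure-birth generator on states {1,...,N}; the truncation makes state N
# absorbing, which only affects probabilities beyond N (negligible here).
Q = np.zeros((N, N))
for n in range(1, N):
    Q[n - 1, n - 1] = -n * lam
    Q[n - 1, n] = n * lam

p = expm(Q * t)[0]                  # distribution of X(t) given X(0) = 1
for n in range(1, 6):
    exact = np.exp(-lam * t) * (1 - np.exp(-lam * t)) ** (n - 1)
    print(n, round(p[n - 1], 6), round(exact, 6))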
