
Chapter 5. Continuous-Time Markov Chains

Prof. Shun-Ren Yang, Department of Computer Science, National Tsing Hua University, Taiwan

Continuous-Time Markov Chains

Consider a continuous-time stochastic process $\{X(t), t \ge 0\}$ taking on values in the set of nonnegative integers. We say that the process $\{X(t), t \ge 0\}$ is a continuous-time Markov chain if, for all $s, t \ge 0$ and nonnegative integers $i$, $j$, $x(u)$, $0 \le u < s$,

$$P\{X(t+s) = j \mid X(s) = i,\; X(u) = x(u),\; 0 \le u < s\} = P\{X(t+s) = j \mid X(s) = i\}.$$

In other words, a continuous-time Markov chain is a stochastic process having the Markovian property that the conditional distribution of the future state at time $t+s$, given the present state at time $s$ and all past states, depends only on the present state and is independent of the past.

Suppose that a continuous-time Markov chain enters state $i$ at some time, say time 0, and suppose that the process does not leave state $i$ (that is, a transition does not occur) during the next $s$ time units. What is the probability that the process will not leave state $i$ during the following $t$ time units? Note that since the process is in state $i$ at time $s$, it follows, by the Markovian property, that the probability it remains in that state during the interval $[s, s+t]$ is just the (unconditional) probability that it stays in state $i$ for at least $t$ time units. That is, if we let $\tau_i$ denote the amount of time that the process stays in state $i$ before making a transition into a different state, then

$$P\{\tau_i > s + t \mid \tau_i > s\} = P\{\tau_i > t\}$$

for all $s, t \ge 0$. Hence, the random variable $\tau_i$ is memoryless and must thus be exponentially distributed.
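A quick empirical illustration of this memoryless property (a minimal sketch; the rate $v_i = 2$ and the values of $s$ and $t$ are arbitrary choices, not from the text):

```python
import numpy as np

rng = np.random.default_rng(0)
v_i = 2.0                                          # assumed rate of leaving state i
tau = rng.exponential(1.0 / v_i, size=1_000_000)   # holding times tau_i ~ Exp(v_i)

s, t = 0.3, 0.5
cond = np.mean(tau[tau > s] > s + t)   # estimate of P{tau_i > s+t | tau_i > s}
uncond = np.mean(tau > t)              # estimate of P{tau_i > t}
print(cond, uncond, np.exp(-v_i * t))  # all three agree closely
```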

In fact, the above gives us a way of constructing a continuous-time Markov chain. Namely, it is a stochastic process having the properties that each time it enters state $i$:

1. the amount of time it spends in that state before making a transition into a different state is exponentially distributed with rate, say, $v_i$; and
2. when the process leaves state $i$, it will next enter state $j$ with some probability, call it $P_{ij}$, where $\sum_{j \ne i} P_{ij} = 1$.

A state $i$ for which $v_i = \infty$ is called an instantaneous state since, when entered, it is instantaneously left. If $v_i = 0$, then state $i$ is called absorbing since once entered it is never left.

Hence, a continuous-time Markov chain is a stochastic process that moves from state to state in accordance with a (discrete-time) Markov chain, but is such that the amount of time it spends in each state, before proceeding to the next state, is exponentially distributed. In addition, the amount of time the process spends in state $i$, and the next state visited, must be independent random variables. For if the next state visited were dependent on $\tau_i$, then information as to how long the process has already been in state $i$ would be relevant to the prediction of the next state, and this would contradict the Markovian assumption.

Let $q_{ij}$ be defined by

$$q_{ij} = v_i P_{ij}, \quad \text{all } i \ne j.$$

Since $v_i$ is the rate at which the process leaves state $i$ and $P_{ij}$ is the probability that it then goes to $j$, it follows that $q_{ij}$ is the rate, when in state $i$, at which the process makes a transition into state $j$; and in fact we call $q_{ij}$ the transition rate from $i$ to $j$.

Let us denote by $P_{ij}(t)$ the probability that a Markov chain, presently in state $i$, will be in state $j$ after an additional time $t$. That is,

$$P_{ij}(t) = P\{X(t+s) = j \mid X(s) = i\}.$$
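This construction translates directly into a simulation recipe: draw an exponential holding time with rate $v_i$, then jump according to $P_{ij}$. A minimal sketch (the three-state rates and jump probabilities below are illustrative assumptions, not from the text):

```python
import numpy as np

rng = np.random.default_rng(1)
v = np.array([1.0, 2.0, 0.5])          # v_i: rate of leaving each state
P = np.array([[0.0, 0.7, 0.3],         # P_ij: embedded jump probabilities,
              [0.5, 0.0, 0.5],         # rows sum to 1 with P_ii = 0
              [1.0, 0.0, 0.0]])

def simulate_ctmc(x0, horizon):
    """Simulate one path; return the list of (jump time, state) pairs."""
    t, x, path = 0.0, x0, [(0.0, x0)]
    while True:
        t += rng.exponential(1.0 / v[x])   # holding time tau_x ~ Exp(v_x)
        if t >= horizon:
            return path
        x = int(rng.choice(3, p=P[x]))     # next state drawn from P_ij
        path.append((t, x))

print(simulate_ctmc(0, horizon=5.0))
```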

Birth and Death Processes

A continuous-time Markov chain with states $0, 1, \ldots$ for which $q_{ij} = 0$ whenever $|i - j| > 1$ is called a birth and death process. Thus a birth and death process is a continuous-time Markov chain with states $0, 1, \ldots$ for which transitions from state $i$ can go only to either state $i-1$ or state $i+1$. The state of the process is usually thought of as representing the size of some population; when the state increases by 1 we say that a birth occurs, and when it decreases by 1 we say that a death occurs. Let $\lambda_i$ and $\mu_i$ be given by

$$\lambda_i = q_{i,i+1}, \qquad \mu_i = q_{i,i-1}.$$

The values $\{\lambda_i, i \ge 0\}$ and $\{\mu_i, i \ge 1\}$ are called, respectively, the birth rates and the death rates.

Since $\sum_j q_{ij} = v_i$, we see that

$$v_i = \lambda_i + \mu_i, \qquad P_{i,i+1} = \frac{\lambda_i}{\lambda_i + \mu_i} = 1 - P_{i,i-1}.$$

Hence, we can think of a birth and death process by supposing that whenever there are $i$ people in the system, the time until the next birth is exponential with rate $\lambda_i$ and is independent of the time until the next death, which is exponential with rate $\mu_i$.
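These rates can be collected into a generator matrix with $q_{i,i+1} = \lambda_i$ and $q_{i,i-1} = \mu_i$ off the diagonal and $-v_i = -(\lambda_i + \mu_i)$ on it. A small sketch for a chain truncated at four states (the numeric rates are assumptions for illustration):

```python
import numpy as np

lam = np.array([3.0, 2.0, 1.0, 0.0])   # birth rates lambda_i (lambda_3 = 0: truncation)
mu  = np.array([0.0, 1.0, 2.0, 3.0])   # death rates mu_i (mu_0 = 0: no deaths at 0)

n = len(lam)
Q = np.zeros((n, n))
for i in range(n):
    if i + 1 < n:
        Q[i, i + 1] = lam[i]           # q_{i,i+1} = lambda_i
    if i >= 1:
        Q[i, i - 1] = mu[i]            # q_{i,i-1} = mu_i
    Q[i, i] = -(lam[i] + mu[i])        # diagonal carries -v_i

print(Q)
print(lam[1:3] / (lam[1:3] + mu[1:3])) # P_{i,i+1} for the interior states
```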

An Example: The M/M/s Queue

Suppose that customers arrive at an $s$-server service station in accordance with a Poisson process having rate $\lambda$. Each customer, upon arrival, goes directly into service if any of the servers are free; if not, the customer joins the queue (that is, he waits in line). When a server finishes serving a customer, the customer leaves the system, and the next customer in line, if there are any waiting, enters service. The successive service times are assumed to be independent exponential random variables having mean $1/\mu$.

If we let $X(t)$ denote the number in the system at time $t$, then $\{X(t), t \ge 0\}$ is a birth and death process with

$$\mu_n = \begin{cases} n\mu, & 1 \le n \le s \\ s\mu, & n > s \end{cases} \qquad \lambda_n = \lambda, \quad n \ge 0.$$
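In code, the two branches of $\mu_n$ collapse to $\min(n, s)\,\mu$. A minimal sketch (the values of $s$, $\lambda$, and $\mu$ are assumed example parameters):

```python
s, lam, mu = 3, 4.0, 2.0          # assumed: servers, arrival rate, per-server rate

def birth_rate(n):
    return lam                     # lambda_n = lambda for all n >= 0

def death_rate(n):
    return min(n, s) * mu          # n*mu for 1 <= n <= s, s*mu for n > s

print([(n, birth_rate(n), death_rate(n)) for n in range(6)])
```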

The Kolmogorov Differential Equations

Recall that

$$P_{ij}(t) = P\{X(t+s) = j \mid X(s) = i\}$$

represents the probability that a process presently in state $i$ will be in state $j$ a time $t$ later. By exploiting the Markovian property, we will derive two sets of differential equations for $P_{ij}(t)$, which may sometimes be explicitly solved. However, before doing so we need the following lemmas.

Lemma 1.

1. $\displaystyle \lim_{t \to 0} \frac{1 - P_{ii}(t)}{t} = v_i$.
2. $\displaystyle \lim_{t \to 0} \frac{P_{ij}(t)}{t} = q_{ij}, \quad i \ne j$.

Lemma 2. For all $s, t \ge 0$,

$$P_{ij}(t+s) = \sum_{k=0}^{\infty} P_{ik}(t) P_{kj}(s).$$

From Lemma 2 we obtain

$$P_{ij}(t+h) = \sum_k P_{ik}(h) P_{kj}(t),$$

or, equivalently,

$$P_{ij}(t+h) - P_{ij}(t) = \sum_{k \ne i} P_{ik}(h) P_{kj}(t) - [1 - P_{ii}(h)] P_{ij}(t).$$

Dividing by $h$ and then taking the limit as $h \to 0$ yields, upon application of Lemma 1,

$$\lim_{h \to 0} \frac{P_{ij}(t+h) - P_{ij}(t)}{h} = \lim_{h \to 0} \sum_{k \ne i} \frac{P_{ik}(h)}{h} P_{kj}(t) - v_i P_{ij}(t).$$

Assuming that we can interchange the limit and summation on the right-hand side of the above equation, we thus obtain, again using Lemma 1, the following.

Theorem. (Kolmogorov's Backward Equations) For all $i$, $j$, and $t \ge 0$,

$$P'_{ij}(t) = \sum_{k \ne i} q_{ik} P_{kj}(t) - v_i P_{ij}(t).$$
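As in the birth-death sketch earlier, collect the rates into a generator matrix $Q$ (entries $q_{ik}$ off the diagonal, $-v_i$ on it); the backward equations then read $P'(t) = Q P(t)$, solved by the matrix exponential $P(t) = e^{Qt}$. A quick numerical sanity check, using an assumed two-state generator:

```python
import numpy as np
from scipy.linalg import expm

Q = np.array([[-1.0,  1.0],
              [ 2.0, -2.0]])          # assumed: q_01 = v_0 = 1, q_10 = v_1 = 2

t, h = 0.7, 1e-6
P = lambda u: expm(Q * u)             # P(t) = e^{Qt}
lhs = (P(t + h) - P(t)) / h           # finite-difference approximation of P'(t)
rhs = Q @ P(t)                        # backward-equation right-hand side
print(np.max(np.abs(lhs - rhs)))      # ~0 up to the finite-difference error
```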

The set of differential equations for $P_{ij}(t)$ given in this theorem is known as the Kolmogorov backward equations. They are called the backward equations because in computing the probability distribution of the state at time $t+h$ we conditioned on the state (all the way) back at time $h$. That is, we started our calculation with

$$P_{ij}(t+h) = \sum_k P\{X(t+h) = j \mid X(0) = i, X(h) = k\} \, P\{X(h) = k \mid X(0) = i\} = \sum_k P_{kj}(t) P_{ik}(h).$$

We may derive another set of equations, known as the Kolmogorov forward equations, by now conditioning on the state at time $t$. This yields

$$P_{ij}(t+h) = \sum_k P_{ik}(t) P_{kj}(h),$$

or

$$P_{ij}(t+h) - P_{ij}(t) = \sum_k P_{ik}(t) P_{kj}(h) - P_{ij}(t) = \sum_{k \ne j} P_{ik}(t) P_{kj}(h) - [1 - P_{jj}(h)] P_{ij}(t).$$

Therefore,

$$\lim_{h \to 0} \frac{P_{ij}(t+h) - P_{ij}(t)}{h} = \lim_{h \to 0} \left\{ \sum_{k \ne j} P_{ik}(t) \frac{P_{kj}(h)}{h} - \frac{1 - P_{jj}(h)}{h} P_{ij}(t) \right\}.$$

Assuming that we can interchange limit with summation, we obtain by Lemma 1 that

$$P'_{ij}(t) = \sum_{k \ne j} q_{kj} P_{ik}(t) - v_j P_{ij}(t).$$

Theorem. (Kolmogorov's Forward Equations) Under suitable regularity conditions,

$$P'_{ij}(t) = \sum_{k \ne j} q_{kj} P_{ik}(t) - v_j P_{ij}(t).$$
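In the same matrix notation, the forward equations read $P'(t) = P(t) Q$. As a sketch, they can be integrated directly with an off-the-shelf ODE solver and compared against $e^{Qt}$ (same assumed two-state generator as above):

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.linalg import expm

Q = np.array([[-1.0,  1.0],
              [ 2.0, -2.0]])          # assumed two-state generator

def forward(t, p_flat):
    P = p_flat.reshape(2, 2)
    return (P @ Q).ravel()            # forward equations: P'(t) = P(t) Q

sol = solve_ivp(forward, (0.0, 1.5), np.eye(2).ravel(), rtol=1e-9, atol=1e-12)
print(sol.y[:, -1].reshape(2, 2))     # P(1.5) from integrating the ODE
print(expm(Q * 1.5))                  # agrees with the matrix exponential
```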

Example. The Two-State Chain. Consider a two-state continuous-time Markov chain that spends an exponential time with rate $\lambda$ in state 0 before going to state 1, where it spends an exponential time with rate $\mu$ before returning to state 0. The forward equations yield

$$P'_{00}(t) = \mu P_{01}(t) - \lambda P_{00}(t) = -(\lambda + \mu) P_{00}(t) + \mu,$$

where the last equation follows from $P_{01}(t) = 1 - P_{00}(t)$. Hence,

$$e^{(\lambda+\mu)t} \left[ P'_{00}(t) + (\lambda + \mu) P_{00}(t) \right] = \mu e^{(\lambda+\mu)t}$$

or

$$\frac{d}{dt} \left[ e^{(\lambda+\mu)t} P_{00}(t) \right] = \mu e^{(\lambda+\mu)t}.$$

Thus,

$$e^{(\lambda+\mu)t} P_{00}(t) = \frac{\mu}{\lambda + \mu} e^{(\lambda+\mu)t} + c.$$

Since $P_{00}(0) = 1$, we see that $c = \lambda/(\lambda + \mu)$, and thus

$$P_{00}(t) = \frac{\mu}{\lambda + \mu} + \frac{\lambda}{\lambda + \mu} e^{-(\lambda+\mu)t}.$$

Similarly (or by symmetry),

$$P_{11}(t) = \frac{\lambda}{\lambda + \mu} + \frac{\mu}{\lambda + \mu} e^{-(\lambda+\mu)t}.$$
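The closed form is easy to check numerically against the matrix exponential of the chain's generator (a sketch; $\lambda = 1$, $\mu = 2$ are arbitrary test values):

```python
import numpy as np
from scipy.linalg import expm

lam, mu = 1.0, 2.0                    # arbitrary test rates
Q = np.array([[-lam,  lam],
              [  mu,  -mu]])          # generator of the two-state chain

for t in (0.0, 0.5, 2.0):
    closed = mu / (lam + mu) + lam / (lam + mu) * np.exp(-(lam + mu) * t)
    print(closed, expm(Q * t)[0, 0])  # P_00(t): the two columns agree
```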

Limiting Probabilities

Since a continuous-time Markov chain is a semi-Markov process with

$$F_{ij}(t) = 1 - e^{-v_i t},$$

it follows that if the discrete-time Markov chain with transition probabilities $P_{ij}$ is irreducible and positive recurrent, then the limiting probabilities $P_j = \lim_{t \to \infty} P_{ij}(t)$ are given by

$$P_j = \frac{\pi_j / v_j}{\sum_i \pi_i / v_i},$$

where the $\pi_j$ are the unique nonnegative solution of

$$\pi_j = \sum_i \pi_i P_{ij}, \qquad \sum_i \pi_i = 1.$$

From the above two equations, we see that the $P_j$ are the unique nonnegative solution of

$$v_j P_j = \sum_i v_i P_i P_{ij}, \qquad \sum_j P_j = 1,$$

or, equivalently, using $q_{ij} = v_i P_{ij}$,

$$v_j P_j = \sum_i P_i q_{ij}, \qquad \sum_j P_j = 1. \tag{1}$$
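Numerically, Equation (1) together with the normalization is the linear system $Q^\top P = 0$, $\sum_j P_j = 1$, which can be solved by least squares. A sketch with an assumed three-state generator:

```python
import numpy as np

Q = np.array([[-3.0,  2.0,  1.0],
              [ 1.0, -1.0,  0.0],
              [ 2.0,  2.0, -4.0]])    # assumed generator (rows sum to 0)

# Stack the balance equations Q^T P = 0 with the normalization sum(P) = 1.
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
P_limit, *_ = np.linalg.lstsq(A, b, rcond=None)

print(P_limit)                        # limiting probabilities P_j
print(P_limit @ Q)                    # ~0: the balance equations hold
```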

Limiting Probabilities: Remarks

1. It follows from the results for semi-Markov processes that $P_j$ also equals the long-run proportion of time the process is in state $j$.

2. If the initial state is chosen according to the limiting probabilities $\{P_j\}$, then the resultant process will be stationary. That is, $\sum_i P_i P_{ij}(t) = P_j$ for all $t$. This is proven as follows:

$$\sum_i P_{ij}(t) P_i = \sum_i P_{ij}(t) \lim_{s \to \infty} P_{ki}(s) = \lim_{s \to \infty} \sum_i P_{ij}(t) P_{ki}(s) = \lim_{s \to \infty} P_{kj}(t+s) = P_j.$$

3. Another way of obtaining Equations (1) is by way of the forward equations

$$P'_{ij}(t) = \sum_{k \ne j} q_{kj} P_{ik}(t) - v_j P_{ij}(t).$$

If we assume that the limiting probabilities $P_j = \lim_{t \to \infty} P_{ij}(t)$ exist, then $P'_{ij}(t)$ would necessarily converge to 0 as $t \to \infty$. (Why?) Hence, assuming that we can interchange limit and summation in the above, we obtain upon letting $t \to \infty$,

$$0 = \sum_{k \ne j} P_k q_{kj} - v_j P_j.$$

It is worth noting that the above is a more formal version of the following heuristic argument, which yields an equation for $P_j$, the probability of being in state $j$ at $t = \infty$, by conditioning on the state $h$ units prior in time:

$$P_j = \sum_i P_{ij}(h) P_i = \sum_{i \ne j} \big( q_{ij} h + o(h) \big) P_i + \big( 1 - v_j h + o(h) \big) P_j,$$

or

$$0 = \sum_{i \ne j} P_i q_{ij} - v_j P_j + \frac{o(h)}{h},$$

and the result follows by letting $h \to 0$.

4. Equation (1) has a nice interpretation, which is as follows: In any interval $(0, t)$, the number of transitions into state $j$ must equal, to within 1, the number of transitions out of state $j$. (Why?) Hence, in the long run, the rate at which transitions into state $j$ occur must equal the rate at which transitions out of state $j$ occur.

Now when the process is in state $j$ it leaves at rate $v_j$, and, since $P_j$ is the proportion of time it is in state $j$, it thus follows that

$$v_j P_j = \text{rate at which the process leaves state } j.$$

Similarly, when the process is in state $i$ it departs to $j$ at rate $q_{ij}$, and, since $P_i$ is the proportion of time in state $i$, we see that the rate at which transitions from $i$ to $j$ occur is equal to $q_{ij} P_i$. Hence,

$$\sum_i P_i q_{ij} = \text{rate at which the process enters state } j.$$

Therefore, Equation (1) is just a statement of the equality of the rates at which the process enters and leaves state $j$. Because it balances (that is, equates) these rates, Equations (1) are sometimes referred to as balance equations.

5. When the continuous-time Markov chain is irreducible and $P_j > 0$ for all $j$, we say that the chain is ergodic.

Limiting Probabilities: An Example

Let us now determine the limiting probabilities for a birth and death process. From Equations (1), or, equivalently, by equating the rate at which the process leaves a state with the rate at which it enters that state, we obtain:

State            Rate at which process leaves = rate at which it enters
0                $\lambda_0 P_0 = \mu_1 P_1$
$n$, $n > 0$     $(\lambda_n + \mu_n) P_n = \mu_{n+1} P_{n+1} + \lambda_{n-1} P_{n-1}$

Rewriting these equations gives

$$\lambda_0 P_0 = \mu_1 P_1,$$
$$\lambda_n P_n = \mu_{n+1} P_{n+1} + (\lambda_{n-1} P_{n-1} - \mu_n P_n), \quad n \ge 1,$$

or, equivalently,

$$\lambda_0 P_0 = \mu_1 P_1,$$
$$\lambda_1 P_1 = \mu_2 P_2 + (\lambda_0 P_0 - \mu_1 P_1) = \mu_2 P_2,$$
$$\lambda_2 P_2 = \mu_3 P_3 + (\lambda_1 P_1 - \mu_2 P_2) = \mu_3 P_3,$$
$$\vdots$$
$$\lambda_n P_n = \mu_{n+1} P_{n+1} + (\lambda_{n-1} P_{n-1} - \mu_n P_n) = \mu_{n+1} P_{n+1}.$$

Solving in terms of $P_0$ yields

$$P_1 = \frac{\lambda_0}{\mu_1} P_0,$$
$$P_2 = \frac{\lambda_1}{\mu_2} P_1 = \frac{\lambda_1 \lambda_0}{\mu_2 \mu_1} P_0,$$
$$P_3 = \frac{\lambda_2}{\mu_3} P_2 = \frac{\lambda_2 \lambda_1 \lambda_0}{\mu_3 \mu_2 \mu_1} P_0,$$

$$\vdots$$
$$P_n = \frac{\lambda_{n-1}}{\mu_n} P_{n-1} = \frac{\lambda_{n-1} \lambda_{n-2} \cdots \lambda_1 \lambda_0}{\mu_n \mu_{n-1} \cdots \mu_2 \mu_1} P_0.$$

Using $\sum_{n=0}^{\infty} P_n = 1$, we obtain

$$1 = P_0 + P_0 \sum_{n=1}^{\infty} \frac{\lambda_0 \lambda_1 \cdots \lambda_{n-1}}{\mu_1 \mu_2 \cdots \mu_n}$$

or

$$P_0 = \left[ 1 + \sum_{n=1}^{\infty} \frac{\lambda_0 \lambda_1 \cdots \lambda_{n-1}}{\mu_1 \mu_2 \cdots \mu_n} \right]^{-1},$$

and hence

$$P_n = \frac{\lambda_0 \lambda_1 \cdots \lambda_{n-1}}{\mu_1 \mu_2 \cdots \mu_n \left( 1 + \displaystyle\sum_{m=1}^{\infty} \frac{\lambda_0 \lambda_1 \cdots \lambda_{m-1}}{\mu_1 \mu_2 \cdots \mu_m} \right)}, \quad n \ge 1.$$

The above equations also show us what condition is needed for the limiting probabilities to exist. Namely,

$$\sum_{n=1}^{\infty} \frac{\lambda_0 \lambda_1 \cdots \lambda_{n-1}}{\mu_1 \mu_2 \cdots \mu_n} < \infty.$$
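As a sketch of the product-form solution in code, truncate the infinite sum at a large $N$. For the M/M/1 queue ($\lambda_n = \lambda$, $\mu_n = \mu$) the series converges iff $\rho = \lambda/\mu < 1$, and the formula reduces to the geometric distribution $P_n = (1-\rho)\rho^n$ (the numeric rates below are assumptions):

```python
import numpy as np

lam, mu, N = 2.0, 3.0, 200            # assumed M/M/1 rates; rho = 2/3 < 1

ratios = np.full(N, lam / mu)         # lambda_{n-1}/mu_n for n = 1..N
products = np.cumprod(ratios)         # lambda_0...lambda_{n-1} / (mu_1...mu_n)
P0 = 1.0 / (1.0 + products.sum())     # normalization constant
P = np.concatenate([[P0], P0 * products])

rho = lam / mu
print(P[:4])                          # matches (1 - rho) * rho**n
print((1 - rho) * rho ** np.arange(4))
```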