Continuous Time Markov Chains


Continuous Time Markov Chains. Stochastic Processes Lecture Notes, Fatih Cavdur, to accompany Introduction to Probability Models by Sheldon M. Ross, Fall 2015.

Outline: Introduction; Continuous-Time Markov Chains; Birth and Death Processes; The Transition Probability Function; Limiting Probabilities.

Introduction. In this chapter we consider a class of probability models with a wide variety of applications in the real world. The members of this class are the continuous-time analogs of the Markov chains of Chapter 4 and, as such, are characterized by the Markovian property that, given the present state, the future is independent of the past. One example of a continuous-time Markov chain is the Poisson process of Chapter 5.

Continuous-Time Markov Chains. Suppose that we have a continuous-time stochastic process {X(t), t ≥ 0} taking values in the set of non-negative integers. We say that {X(t), t ≥ 0} is a continuous-time Markov chain (CTMC) if, for all s, t ≥ 0 and non-negative integers i, j, x(u), 0 ≤ u < s,

P{X(t+s) = j | X(s) = i, X(u) = x(u), 0 ≤ u < s} = P{X(t+s) = j | X(s) = i}

Continuous-Time Markov Chains. In addition, if P{X(t+s) = j | X(s) = i} is independent of s, then the CTMC is said to have stationary or homogeneous transition probabilities. We assume that all Markov chains in this section have stationary transition probabilities.

Another definition of a CTMC is a stochastic process having the properties that, each time it enters state i, (i) the amount of time it spends in that state before making a transition into a different state is exponentially distributed with mean 1/v_i, and (ii) when the process leaves state i, it next enters state j with some probability P_ij, where

P_ii = 0  and  Σ_{j≠i} P_ij = 1  for all i

Continuous-Time Markov Chains. In other words, a CTMC is a stochastic process that moves from state to state in accordance with a (discrete-time) Markov chain, but such that the amount of time it spends in each state, before proceeding to the next state, is exponentially distributed. In addition, the amount of time the process spends in state i and the next state it enters must be independent random variables.

Example (A Shoeshine Shop). Consider a shoeshine establishment consisting of two chairs, chair 1 and chair 2. A customer upon arrival goes initially to chair 1, where his shoes are cleaned and polish is applied. After this is done, the customer moves on to chair 2, where the polish is buffed. The service times at the two chairs are assumed to be independent random variables that are exponentially distributed with respective rates µ_1 and µ_2, and customers arrive in accordance with a Poisson process with rate λ. We also assume that a potential customer will enter the system only if both chairs are empty.

Example. We can analyze this system as a CTMC with state space:

State 0: the system is empty
State 1: a customer is in chair 1
State 2: a customer is in chair 2

We then have

v_0 = λ,  v_1 = µ_1,  v_2 = µ_2

and

P_01 = P_12 = P_20 = 1
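As a quick sanity check on this description, the chain can be simulated directly: hold in each state for an exponential time with rate v_i, then jump according to P_ij. A minimal sketch in Python; the numeric rates λ = 1, µ_1 = 2, µ_2 = 3 are illustrative choices, not from the notes:

```python
import random

def simulate_shoeshine(lam, mu1, mu2, n_jumps, seed=0):
    """Simulate the shoeshine CTMC; return the fraction of time spent in each state."""
    rng = random.Random(seed)
    rates = {0: lam, 1: mu1, 2: mu2}   # leaving rates v_0 = λ, v_1 = µ_1, v_2 = µ_2
    nxt = {0: 1, 1: 2, 2: 0}           # deterministic jumps: P_01 = P_12 = P_20 = 1
    time_in = {0: 0.0, 1: 0.0, 2: 0.0}
    state = 0
    for _ in range(n_jumps):
        time_in[state] += rng.expovariate(rates[state])  # exponential holding time
        state = nxt[state]
    total = sum(time_in.values())
    return {s: t / total for s, t in time_in.items()}

fracs = simulate_shoeshine(lam=1.0, mu1=2.0, mu2=3.0, n_jumps=300_000)
```

With these rates, the simulated long-run fractions of time come out near 6/11, 3/11 and 2/11, consistent with the limiting probabilities derived for this example at the end of the notes.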

Birth and Death Processes. Consider a system whose state at any time is represented by the number of people in the system at that time. Suppose that whenever there are n people in the system, new arrivals enter the system at an exponential rate λ_n and people leave the system at an exponential rate µ_n. Such a system is called a birth and death (arrival and departure) process (BDP), and the parameters {λ_n, n ≥ 0} and {µ_n, n ≥ 1} are called the birth (arrival) and death (departure) rates, respectively.

Birth and Death Processes. A BDP is thus a CTMC with states {0, 1, ...} for which transitions from state n may go only to state n − 1 or state n + 1, with

v_0 = λ_0;  v_i = λ_i + µ_i, i > 0

and

P_01 = 1;  P_{i,i+1} = λ_i/(λ_i + µ_i), i > 0;  P_{i,i−1} = µ_i/(λ_i + µ_i), i > 0
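The leaving rates v_i and the embedded-chain jump probabilities follow mechanically from the birth and death rates. A small helper (function names are my own) makes the correspondence explicit:

```python
def bdp_embedded(lam, mu, i):
    """Leaving rate v_i and jump probabilities (P_{i,i+1}, P_{i,i-1}) of a BDP.

    lam, mu: functions mapping a state n to its birth rate λ_n / death rate µ_n.
    """
    if i == 0:
        return lam(0), 1.0, 0.0            # v_0 = λ_0 and P_01 = 1
    v = lam(i) + mu(i)                     # v_i = λ_i + µ_i
    return v, lam(i) / v, mu(i) / v        # P_{i,i+1} = λ_i/v_i, P_{i,i-1} = µ_i/v_i

# Example: constant rates λ = 2, µ = 3 (illustrative values) in state 1.
v, up, down = bdp_embedded(lambda n: 2.0, lambda n: 3.0, 1)
```

Here v = 5 and the embedded chain moves up with probability 0.4 and down with probability 0.6.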

Example (Poisson Process). Consider a BDP for which

µ_n = 0, n ≥ 0;  λ_n = λ, n ≥ 0

This is a process in which no departures occur, and the time between successive arrivals is exponentially distributed with mean 1/λ. Hence, this is the Poisson process.

Birth and Death Processes. A BDP for which µ_n = 0 for all n is said to be a pure birth process, and a BDP for which λ_n = 0 for all n is said to be a pure death process.

Example. A model in which

µ_n = nµ, n ≥ 1;  λ_n = nλ + θ, n ≥ 0

is called a linear growth process with immigration and is used to study biological systems. Suppose X(t) is the population size at time t, assume X(0) = i, and let M(t) = E[X(t)].

Example. Derive and solve a differential equation to determine M(t). We can write

M(t + h) = E[X(t + h)] = E{E[X(t + h) | X(t)]}

where

X(t+h) = X(t) + 1  w.p. [θ + X(t)λ]h + o(h)
X(t+h) = X(t) − 1  w.p. X(t)µh + o(h)
X(t+h) = X(t)      w.p. 1 − [θ + X(t)λ + X(t)µ]h + o(h)

Example. We then have

E[X(t + h) | X(t)] = X(t) + [θ + X(t)λ − X(t)µ]h + o(h)

Taking expectations,

M(t + h) = M(t) + (λ − µ)M(t)h + θh + o(h)
[M(t + h) − M(t)]/h = (λ − µ)M(t) + θ + o(h)/h

and, letting h → 0,

dM(t)/dt = (λ − µ)M(t) + θ

Example. If we define h(t) = θ + (λ − µ)M(t), then

dh(t)/dt = (λ − µ) dM(t)/dt = (λ − µ)h(t)

We can hence write

h′(t)/h(t) = λ − µ
log[h(t)] = (λ − µ)t + c
h(t) = K e^{(λ−µ)t}

Example. In terms of M(t),

θ + (λ − µ)M(t) = K e^{(λ−µ)t}

To find K, we use M(0) = i: at t = 0, θ + (λ − µ)i = K, and so

M(t) = θ/(λ − µ) [e^{(λ−µ)t} − 1] + i e^{(λ−µ)t}

Note that we have assumed λ ≠ µ. If λ = µ, then dM(t)/dt = θ and M(t) = θt + i.

Example. Suppose that customers arrive at a single-server service station in accordance with a Poisson process with rate λ, and that the successive service times are independent exponential random variables with mean 1/µ. This is known as the M/M/1 queuing system, where the first and second M refer to the Poisson arrivals and the exponential service times (both Markovian), and 1 refers to the number of servers.
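The closed form for M(t) can be checked numerically against the differential equation it is supposed to solve. A sketch with a central finite difference; the parameter values are illustrative, not from the notes:

```python
import math

def M(t, lam, mu, theta, i0):
    """Mean population M(t) = E[X(t)] of the linear growth process with immigration, X(0) = i0."""
    if math.isclose(lam, mu):
        return theta * t + i0                      # the λ = µ special case
    g = math.exp((lam - mu) * t)
    return (theta / (lam - mu)) * (g - 1) + i0 * g

# Verify M'(t) = (λ - µ)M(t) + θ at one point via a central difference.
lam, mu, theta, i0 = 0.8, 0.5, 0.3, 10
t, h = 2.0, 1e-6
deriv = (M(t + h, lam, mu, theta, i0) - M(t - h, lam, mu, theta, i0)) / (2 * h)
rhs = (lam - mu) * M(t, lam, mu, theta, i0) + theta
```

The difference between `deriv` and `rhs` is at the level of finite-difference noise, and M(0) returns the initial population i0 exactly.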

Example. If we let X(t) be the number of customers in the system at time t, then {X(t), t ≥ 0} is a BDP with

µ_n = µ, n ≥ 1;  λ_n = λ, n ≥ 0

Example. For a multi-server exponential queuing system with s servers (the M/M/s queue), if we let X(t) be the number of customers in the system at time t, then {X(t), t ≥ 0} is a BDP with

µ_n = nµ for 1 ≤ n ≤ s,  µ_n = sµ for n > s

and

λ_n = λ, n ≥ 0
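The piecewise death rates of the multi-server system are easy to get wrong by one, so it can help to state them as code. A two-line helper (names are mine):

```python
def mms_rates(n, lam, mu, s):
    """Birth and death rates of the multi-server (M/M/s) queue viewed as a BDP."""
    birth = lam                # λ_n = λ for all n ≥ 0
    death = min(n, s) * mu     # µ_n = nµ for n ≤ s, and sµ for n > s (µ_0 = 0)
    return birth, death
```

For example, with s = 3 servers only min(n, 3) of them can be busy, so the death rate grows linearly up to 3µ and then saturates.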

The Transition Probability Function. We let

P_ij(t) = P{X(t + s) = j | X(s) = i}

be the probability that a process presently in state i will be in state j a time t later. The P_ij(t) are often called the transition probabilities of the CTMC.

The Transition Probability Function. Proposition. For a pure birth process with λ_i ≠ λ_j when i ≠ j, we have, for i < j,

P_ij(t) = Σ_{k=i}^{j} e^{−λ_k t} Π_{r=i, r≠k}^{j} λ_r/(λ_r − λ_k)  −  Σ_{k=i}^{j−1} e^{−λ_k t} Π_{r=i, r≠k}^{j−1} λ_r/(λ_r − λ_k)

and

P_ii(t) = e^{−λ_i t}
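The proposition can be exercised numerically. The Yule process (λ_n = nλ) is a pure birth process with distinct rates whose transition probabilities are also known in closed form, P_ij(t) = C(j−1, j−i) e^{−iλt}(1 − e^{−λt})^{j−i}, so it makes a convenient cross-check. A sketch; the Yule comparison is my addition, not part of the notes:

```python
import math

def pure_birth_pij(t, i, j, lam):
    """P_ij(t) for a pure birth process with distinct rates lam[n], via the proposition above."""
    def survival(top):
        # P{T_i + ... + T_top > t} = sum_{k=i}^{top} e^{-λ_k t} prod_{r≠k} λ_r/(λ_r - λ_k)
        total = 0.0
        for k in range(i, top + 1):
            prod = 1.0
            for r in range(i, top + 1):
                if r != k:
                    prod *= lam[r] / (lam[r] - lam[k])
            total += math.exp(-lam[k] * t) * prod
        return total
    if i == j:
        return math.exp(-lam[i] * t)
    return survival(j) - survival(j - 1)   # the two sums in the proposition

# Cross-check against the Yule process λ_n = nλ, whose transition probabilities
# are negative binomial: P_ij(t) = C(j-1, j-i) e^{-iλt} (1 - e^{-λt})^{j-i}.
rate = 0.7
lam = [n * rate for n in range(50)]
i, j, t = 2, 5, 1.3
yule = math.comb(j - 1, j - i) * math.exp(-i * rate * t) * (1 - math.exp(-rate * t)) ** (j - i)
```

The two expressions agree to floating-point accuracy; the two sums are exactly P{X(t) ≤ j} − P{X(t) ≤ j − 1} expressed through sums of independent exponentials.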

Kolmogorov's Backward Equations. Theorem (Kolmogorov's Backward Equations). For all states i, j and times t ≥ 0,

P′_ij(t) = Σ_{k≠i} q_ik P_kj(t) − v_i P_ij(t)

Example. The backward equations for the pure birth process are

P′_ij(t) = λ_i P_{i+1,j}(t) − λ_i P_ij(t)

Example (Backward Equations for the BDP).

P′_0j(t) = λ_0 P_1j(t) − λ_0 P_0j(t) = λ_0 [P_1j(t) − P_0j(t)]

and, for i > 0,

P′_ij(t) = (λ_i + µ_i) [ λ_i/(λ_i + µ_i) P_{i+1,j}(t) + µ_i/(λ_i + µ_i) P_{i−1,j}(t) ] − (λ_i + µ_i) P_ij(t)

or

P′_ij(t) = λ_i P_{i+1,j}(t) + µ_i P_{i−1,j}(t) − (λ_i + µ_i) P_ij(t)

Kolmogorov's Forward Equations. Theorem (Kolmogorov's Forward Equations). Under suitable regularity conditions,

P′_ij(t) = Σ_{k≠j} q_kj P_ik(t) − v_j P_ij(t)

Kolmogorov's Forward Equations. For the pure birth process, we have

P′_ij(t) = λ_{j−1} P_{i,j−1}(t) − λ_j P_ij(t)

Noting that P_ij(t) = 0 whenever j < i, we can write

P′_ii(t) = −λ_i P_ii(t)

and

P′_ij(t) = λ_{j−1} P_{i,j−1}(t) − λ_j P_ij(t), j ≥ i + 1

Kolmogorov's Forward Equations. For the BDP, we have

P′_i0(t) = Σ_{k≠0} q_k0 P_ik(t) − λ_0 P_i0(t) = µ_1 P_i1(t) − λ_0 P_i0(t)

and, for j > 0,

P′_ij(t) = Σ_{k≠j} q_kj P_ik(t) − (λ_j + µ_j) P_ij(t) = λ_{j−1} P_{i,j−1}(t) + µ_{j+1} P_{i,j+1}(t) − (λ_j + µ_j) P_ij(t)
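The forward equations can be integrated numerically on a truncated state space. As a sanity check, taking µ_n = 0 and λ_n = λ turns the BDP into a Poisson process, so P_0j(t) should reproduce the Poisson(λt) pmf. A rough Euler-step sketch; step size and truncation level are arbitrary choices of mine:

```python
import math

def bdp_forward(t, i, lam, mu, n_states, dt=1e-4):
    """Euler-integrate Kolmogorov's forward equations for a BDP truncated to
    states 0..n_states-1, starting from P_ij(0) = 1 if j == i else 0."""
    P = [0.0] * n_states
    P[i] = 1.0
    for _ in range(int(t / dt)):
        dP = []
        for j in range(n_states):
            # λ_{j-1} P_{i,j-1} + µ_{j+1} P_{i,j+1} - (λ_j + µ_j) P_ij
            inflow = (lam[j - 1] * P[j - 1] if j > 0 else 0.0) \
                   + (mu[j + 1] * P[j + 1] if j + 1 < n_states else 0.0)
            dP.append(inflow - (lam[j] + mu[j]) * P[j])
        P = [p + dt * d for p, d in zip(P, dP)]
    return P

# Sanity check: with µ_n = 0 and λ_n = λ, P_0j(t) is the Poisson(λt) pmf.
rate, t, N = 1.0, 2.0, 30
P = bdp_forward(t, 0, [rate] * N, [0.0] * N, N)
poisson = [math.exp(-rate * t) * (rate * t) ** j / math.factorial(j) for j in range(N)]
```

The truncation at N = 30 states is harmless here because Poisson(2) mass beyond state 29 is negligible; for a chain with larger λt the truncation level would need to grow.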

Limiting Probabilities. In analogy with a basic result for discrete-time Markov chains, the probability that a continuous-time Markov chain will be in state j at time t often converges to a limiting value that is independent of the initial state. If we call this value P_j, then

P_j ≡ lim_{t→∞} P_ij(t)

where we assume that the limit exists and is independent of the initial state i.

Limiting Probabilities. To derive the P_j using the forward equations, we can write

P′_ij(t) = Σ_{k≠j} q_kj P_ik(t) − v_j P_ij(t)

and

lim_{t→∞} P′_ij(t) = lim_{t→∞} [ Σ_{k≠j} q_kj P_ik(t) − v_j P_ij(t) ] = Σ_{k≠j} q_kj P_k − v_j P_j

Limiting Probabilities. We note that, since P_ij(t) converges, its derivative P′_ij(t) must converge to 0, so

0 = Σ_{k≠j} q_kj P_k − v_j P_j,  i.e.,  v_j P_j = Σ_{k≠j} q_kj P_k

We can then use the following system (the balance equations) to find the limiting probabilities:

v_j P_j = Σ_{k≠j} q_kj P_k for all j,  and  1 = Σ_j P_j

Limiting Probabilities for the BDP. We can write:

State 0: λ_0 P_0 = µ_1 P_1
State 1: (λ_1 + µ_1) P_1 = µ_2 P_2 + λ_0 P_0
State 2: (λ_2 + µ_2) P_2 = µ_3 P_3 + λ_1 P_1
...
State n: (λ_n + µ_n) P_n = µ_{n+1} P_{n+1} + λ_{n−1} P_{n−1}

Limiting Probabilities for the BDP. By adding to each equation the equation preceding it, we obtain

λ_0 P_0 = µ_1 P_1
λ_1 P_1 = µ_2 P_2
λ_2 P_2 = µ_3 P_3
...
λ_n P_n = µ_{n+1} P_{n+1}

Limiting Probabilities for the BDP. Solving in terms of P_0:

P_1 = (λ_0/µ_1) P_0
P_2 = (λ_1 λ_0)/(µ_2 µ_1) P_0
P_3 = (λ_2 λ_1 λ_0)/(µ_3 µ_2 µ_1) P_0
...
P_n = (λ_{n−1} ··· λ_1 λ_0)/(µ_n ··· µ_2 µ_1) P_0

Limiting Probabilities for the BDP. Using Σ_{n=0}^∞ P_n = 1, we obtain

1 = P_0 + P_0 Σ_{n=1}^∞ (λ_{n−1} ··· λ_1 λ_0)/(µ_n ··· µ_2 µ_1)

and so

P_0 = 1 / [1 + Σ_{n=1}^∞ (λ_{n−1} ··· λ_1 λ_0)/(µ_n ··· µ_2 µ_1)]

and, for n ≥ 1,

P_n = (λ_0 λ_1 ··· λ_{n−1})/(µ_1 µ_2 ··· µ_n) · 1/[1 + Σ_{m=1}^∞ (λ_{m−1} ··· λ_1 λ_0)/(µ_m ··· µ_2 µ_1)]

Limiting Probabilities for the BDP. The foregoing equations also show the necessary condition for the limiting probabilities to exist:

Σ_{n=1}^∞ (λ_{n−1} ··· λ_1 λ_0)/(µ_n ··· µ_2 µ_1) < ∞

This condition can also be shown to be sufficient.
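The product formula is easy to evaluate numerically by truncating the sum. For the M/M/1 queue (λ_n = λ, µ_n = µ with λ < µ) it collapses to the geometric law P_n = (1 − ρ)ρ^n with ρ = λ/µ, which gives a check. A sketch; the truncation level is an arbitrary choice:

```python
def bdp_limiting(lam, mu, n_states):
    """Limiting probabilities P_n ∝ (λ_0 ... λ_{n-1}) / (µ_1 ... µ_n), truncated."""
    weights = [1.0]                                # the n = 0 term
    for n in range(1, n_states):
        weights.append(weights[-1] * lam[n - 1] / mu[n])
    total = sum(weights)                           # truncated normalizing constant
    return [w / total for w in weights]

# For M/M/1 with λ = 1, µ = 2 this recovers P_n = (1 - ρ)ρ^n, ρ = 1/2.
lam_rate, mu_rate, N = 1.0, 2.0, 200
P = bdp_limiting([lam_rate] * N, [mu_rate] * N, N)
```

With ρ = 1/2 the tail beyond 200 states is astronomically small, so the truncated answer matches the geometric distribution to full floating-point precision; when ρ is close to 1 (or the existence condition fails), the truncation level matters and must be chosen with care.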

Limiting Probabilities. Let us reconsider the shoeshine shop example above and determine the proportion of time the process spends in each of the states 0, 1, 2. Because this is not a birth and death process (the process can go directly from state 2 to state 0), we start with the balance equations for the limiting probabilities:

State 0: λ P_0 = µ_2 P_2
State 1: µ_1 P_1 = λ P_0
State 2: µ_2 P_2 = µ_1 P_1

Limiting Probabilities. We then write

P_1 = (λ/µ_1) P_0,  P_2 = (λ/µ_2) P_0

and, using Σ_{i=0}^{2} P_i = 1, we obtain

P_0 = µ_1 µ_2 / [µ_1 µ_2 + λ(µ_1 + µ_2)]
P_1 = λ µ_2 / [µ_1 µ_2 + λ(µ_1 + µ_2)]
P_2 = λ µ_1 / [µ_1 µ_2 + λ(µ_1 + µ_2)]

Thanks! Questions?