IEOR 6711, HMWK 5, Professor Sigman

1. Semi-Markov processes: Consider an irreducible positive recurrent discrete-time Markov chain {X_n} with transition matrix P = (P_{i,j}), i, j ∈ S, and finite state space. Suppose that, just as for a CTMC, it is embedded into continuous time to get a continuous-time process {X(t) : t ≥ 0}, but the holding times H_i are positive rvs with a general distribution F_i(x) = P(H_i ≤ x), x ≥ 0, having finite first moment 0 < E(H_i) = 1/a_i < ∞, i ∈ S. So, just as for a CTMC, the process makes transitions according to a Markov chain {X_n}, and when making a transition from i to j (probability P_{i,j}, independent of the past) the chain remains in state j, independent of the past, for an amount of time H_j ~ F_j; but because the holding times are not assumed exponential, the Markov property does not hold in continuous time for {X(t)}. Nonetheless, show that the limiting distribution

P_j = lim_{t→∞} (1/t) ∫_0^t I{X(s) = j} ds, wp1, j ∈ S,

still exists and is exactly the same as when the F_i are exponential at rate a_i, i ∈ S (the CTMC case):

P_j = (π_j / a_j) / Σ_i (π_i / a_i).   (1)

(Recall Proposition 1.3, Page 1 of your Lecture Notes on Continuous-Time Markov Chains.) Thus the limiting distribution of a semi-Markov process depends on the F_i only through their means.

SOLUTION: The proof is exactly that given in Proposition 1.3 mentioned above; nowhere did the proof use the exponential distribution of the holding times, only that they have a finite and non-zero first moment. The Renewal Reward Theorem still yields

P_j = (1/a_j) / E(T_{j,j}),

where T_{j,j} is the continuous-time return time back to state j given that initially X(0) = j with holding time H_j. Here we are also assuming a finite state space, so that E(T_{j,j}) < ∞ and the denominator in formula (1) is always finite.

2. Consider a stable FIFO M/M/1 queue, 0 < ρ < 1. Let X(t) denote the number in system at time t, and let P_n = (1 − ρ)ρ^n, n ≥ 0, denote the stationary distribution.
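Formula (1) from Problem 1 is easy to sanity-check by Monte Carlo. A minimal sketch (not part of the assignment), using an assumed two-state alternating chain with uniform, hence non-exponential, holding times; formula (1) predicts P_0 = 1/3 and P_1 = 2/3 here:

```python
import random

random.seed(1)

# Two-state semi-Markov process: the embedded chain alternates 0 -> 1 -> 0,
# so pi = (1/2, 1/2).  Holding times (assumed for illustration only):
# state 0 ~ Uniform(0, 2) (mean 1), state 1 ~ Uniform(0, 4) (mean 2).
# Formula (1) predicts P_j proportional to pi_j / a_j = pi_j * E[H_j]:
#   P_0 = 1/3,  P_1 = 2/3  -- depending on each F_j only through its mean.
n_cycles = 200_000
time_in = [0.0, 0.0]
for _ in range(n_cycles):
    time_in[0] += random.uniform(0.0, 2.0)   # one visit to state 0 per cycle
    time_in[1] += random.uniform(0.0, 4.0)   # one visit to state 1 per cycle

total = time_in[0] + time_in[1]
P0_hat, P1_hat = time_in[0] / total, time_in[1] / total
print(round(P0_hat, 3), round(P1_hat, 3))    # close to 0.333, 0.667
```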
Show (by direct calculation) that if X(0) ~ (P_n) (i.e., the chain is started off with its stationary distribution, hence is a stationary process), then the time until the first departure (after time t = 0), t_1^d, has an exponential distribution at rate λ.

SOLUTION: We condition on whether the server is busy or not at time t = 0. If busy (probability P(X(0) > 0) = ρ = 1 − P_0), then the next departure occurs after the (remaining) service completion of length S, which is exponential at rate µ by the memoryless property; t_1^d = S ~ exp(µ). If not busy (probability P(X(0) = 0) = 1 − ρ = P_0), then first an arrival must come (remaining length T ~ exp(λ) by the memoryless property), and then this customer must enter service for an independent amount of time S ~ exp(µ); so in this case t_1^d = T + S ~ exp(λ) ∗ exp(µ) (a convolution). We can now compute the Laplace transform of t_1^d, yielding (after some algebra) that for any s ≥ 0,

E[e^{−s t_1^d}] = ρ · µ/(µ + s) + (1 − ρ) · λ/(λ + s) · µ/(µ + s) = λ/(λ + s),

and indeed t_1^d ~ exp(λ).

3. Consider the M/M/1 queue (arrival rate λ, service-time rate µ) with the following twist: each customer independently will get impatient after an amount of time that is exponentially distributed at rate γ while waiting in line (queue), and will leave before ever entering service, without ever returning. A customer who does enter service completes service (i.e., customers are only impatient while waiting in the line, not while in service).

(a) You arrive finding exactly one customer in the system (hence they are in service) and you join the queue to wait. What is the probability that you will get served?

SOLUTION: Letting C denote your exp(γ) impatience time, independent of the exp(µ) remaining service time S, we want P(C > S) = µ/(µ + γ).

(b) You arrive finding exactly two (2) customers in the system (one in service, one in line) and you join the end of the queue to wait. What is the probability that you will get served?

SOLUTION: Let S denote the remaining service time of the customer in service (exp(µ) by the memoryless property), and let S_1 denote the service time of the customer waiting in front of you (with C_1 their exponential impatience time, and C_2 yours). S, S_1, C_1, C_2 are independent exponential rvs. Let D denote the length of time until the server is free to serve you: D = S + S_1 I{C_1 > S}. We need to compute P(D < C_2). Denote the desired probability by p. Observe that p = p_1 p_2, where p_1 is the probability that you move into the next position (i.e., that of the customer in front of you, first in line), and p_2 is the (conditional) probability that you get served given that you do move into the next position. By the memoryless property of exponential distributions, p_2 is simply the same as the answer to (a): p_2 = µ/(µ + γ).
Moreover, p_1 = P(C_2 > min{S, C_1}), because {C_2 > min{S, C_1}} is the event that you move into the next position: either you do so because the customer in service departs (S completes) before either of the two of you in line gets impatient, or because the customer in front of you gets impatient before S is complete and before you get impatient. (Both scenarios cause you to move to the next position, the head of the line.) Since min{S, C_1} ~ exp(µ + γ), independent of C_2 ~ exp(γ), we get p_1 = (µ + γ)/(µ + 2γ). Thus

p = p_1 p_2 = µ/(µ + 2γ).

This method can be generalized (by induction) to compute the probability that you get served if upon arrival you find n ≥ 1 customers in front of you; the answer is

p = µ/(µ + nγ).

(c) Model as a birth and death process; give the birth and death rates λ_n, µ_n, n ≥ 0.

SOLUTION: X(t) = the number of customers in the system at time t forms a B&D process with

λ_n = λ, n ≥ 0; µ_n = (n − 1)γ + µ, n ≥ 1 (µ_0 = 0).
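The answer p = µ/(µ + nγ) can be checked by simulation. Since all the clocks are exponential, memorylessness lets us simulate the race event-by-event with categorical draws rather than sampling the exponential times themselves. A sketch, with rate values assumed for illustration:

```python
import random

random.seed(2)

def prob_served(n_ahead, mu, gamma, trials=50_000):
    """Estimate the probability that you are eventually served when you arrive
    finding n_ahead customers ahead of you (one in service, the rest waiting
    in line ahead of you).  Only waiting customers renege, each at rate gamma."""
    served = 0
    for _ in range(trials):
        a = n_ahead - 1                 # customers waiting ahead of you
        while True:
            total = mu + a * gamma + gamma
            u = random.random() * total
            if u < mu:                  # service completes first
                if a == 0:              # you were at the head: you enter service
                    served += 1
                    break
                a -= 1                  # head of the line enters service
            elif u < mu + a * gamma:
                a -= 1                  # someone ahead of you reneges
            else:
                break                   # you renege: never served
    return served / trials

mu, gamma = 1.0, 0.5                    # assumed values for the check
results = {n: prob_served(n, mu, gamma) for n in (1, 2, 3)}
for n, est in results.items():
    print(n, round(est, 3), round(mu / (mu + n * gamma), 3))
```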

(d) Set up the birth and death balance equations for the limiting probabilities P_n (but do not try to solve them).

SOLUTION: λ P_n = (nγ + µ) P_{n+1}, n ≥ 0.

(e) Compute the ratio P_{n+1}/P_n and prove, using the ratio test from calculus, that the limiting probabilities exist for all values of λ > 0, µ > 0, γ > 0. Thus this chain is always positive recurrent (i.e., a condition such as ρ < 1 is not needed); explain intuitively why this should be so.

SOLUTION: P_{n+1}/P_n = λ/(nγ + µ) → 0 as n → ∞. In particular, the ratio is eventually bounded strictly below a constant less than 1; hence Σ_n P_n < ∞. Intuitively, the system is always stable because if the line gets too long, then more and more customers get impatient and leave, reducing the congestion.

4. Time-reversible CTMCs: Just as in discrete time, if we consider a positive recurrent CTMC in stationarity that has been started in the infinite past, {X*(t) : −∞ < t < ∞}, then the (stationary) time-reversal X^(r)(t) = X*(−t), t ≥ 0, is itself a CTMC. It has the same holding-time rates {a_i} and the same stationary distribution P = (P_j) as the original forward-time CTMC. The only things that can differ are the infinitesimal generators Q and Q^(r) (equivalently, the embedded chain transition matrices). A positive recurrent CTMC is called time-reversible if the time-reversed process {X^(r)(t) : t ≥ 0} has the same distribution as the forward-time process {X*(t) : t ≥ 0}. In words: the long-run rate at which the chain moves from i to j equals the long-run rate at which the chain moves from j to i, for any two states i, j ∈ S. Thus, because the holding-time rates {a_i} and the stationary distribution P = (P_j) are the same, this is equivalent to saying that a positive recurrent CTMC is time-reversible if for all pairs of states i, j ∈ S,

a_i P_i P_{i,j} = a_j P_j P_{j,i}.   (2)

(a) Show that if the embedded chain {X_n} is also positive recurrent, then (2) reduces to {X_n} being time-reversible, π_i P_{i,j} = π_j P_{j,i}, as we would expect.
SOLUTION: Recall (Problem 1 above) that in this case

P_j = (π_j / a_j) / Σ_i (π_i / a_i), j ∈ S.

Using this formula for P_i and P_j in Equation (2) above yields the result.

(b) Recall that, in general, if {X_n} is a positive recurrent MC, then its time-reversal MC exists and has transition matrix

P^(r)_{i,j} = (π_j / π_i) P_{j,i}.

Now let us consider our positive recurrent CTMC {X(t)} and suppose that its embedded chain {X_n} is null recurrent, so that π does not exist. Nonetheless, {X^(r)(t)} must have an embedded Markov chain. Find its transition matrix in this case (denote it by P^(r)_{i,j}).

SOLUTION: From the Equation (2) discussion, the general (time-reversible or not) version would be: the long-run rate at which the reversed continuous-time chain moves from i to j equals the long-run rate at which the forward continuous-time chain moves from j to i, for any two states i, j ∈ S. This yields

a_i P_i P^(r)_{i,j} = a_j P_j P_{j,i},   (3)

and thus

P^(r)_{i,j} = (a_j P_j / (a_i P_i)) P_{j,i}.

Notice that when {X_n} is positive recurrent, then a_j P_j / (a_i P_i) = π_j / π_i, via P_j = (π_j / a_j) / Σ_i (π_i / a_i), j ∈ S.

(c) Since a birth and death process can only make transitions of magnitude ±1, in that case Equation (2) reduces to "the long-run rate at which the chain moves from i to i + 1 equals the long-run rate at which the chain moves from i + 1 to i, for all states i ∈ S": the birth and death balance equations. We conclude: every positive recurrent birth and death (B&D) process is time-reversible.

We now apply this to the M/M/1 queue: X(t) = the number of customers in a FIFO M/M/1 queue is a B&D process, so we conclude that when ρ < 1 it is time-reversible. Assume that it is started at time t = 0 with its stationary distribution. Let ψ = {t_n : n ≥ 1} denote the Poisson arrival times starting from time t = 0, 0 < t_1 < t_2 < ..., and let ψ^(d) = {t_n^d : n ≥ 1} denote the point process of departure times after time t = 0, 0 < t_1^d < t_2^d < .... We know from Exercise 2 above that t_1^d has an exponential distribution at rate λ, but here you will deduce more. Argue (from time-reversibility) that ψ^(d) must have the same distribution as ψ, and hence must itself be a Poisson process at rate λ: the departure process from a stationary M/M/1 queue is itself a Poisson process.
SOLUTION: The points of ψ are precisely the times at which the forward process jumps up by 1 (i.e., the times at which births occur). Thus they must have the same distribution (from time-reversibility) as the points in time at which the time-reversed process jumps up by 1; but such times have the same distribution as the departure times ψ^(d) (i.e., the times at which deaths occur).
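As an illustration (a consistency check, not a proof), one can simulate the stationary M/M/1 queue, collect interdeparture times, and verify that their mean and squared coefficient of variation match an exp(λ) distribution. Rates are assumed for illustration:

```python
import random

random.seed(3)

lam, mu = 0.6, 1.0                        # assumed rates, rho = 0.6 < 1
rho = lam / mu

def stationary_draw():
    """Draw X(0) from the stationary distribution P_n = (1 - rho) rho^n."""
    n = 0
    while random.random() < rho:
        n += 1
    return n

x, t, last_dep = stationary_draw(), 0.0, 0.0
gaps = []                                 # interdeparture times
while len(gaps) < 100_000:
    if x == 0:
        t += random.expovariate(lam)      # idle: wait for an arrival
        x = 1
    else:
        t += random.expovariate(lam + mu) # time to next event (arrival/departure)
        if random.random() < lam / (lam + mu):
            x += 1                        # arrival
        else:
            x -= 1                        # departure: record the gap
            gaps.append(t - last_dep)
            last_dep = t

mean = sum(gaps) / len(gaps)
var = sum((g - mean) ** 2 for g in gaps) / len(gaps)
scv = var / mean**2                       # squared coefficient of variation
print(round(mean, 3), round(scv, 3))      # exp(lam): mean = 1/lam, scv = 1
```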

5. Consider a CTMC {X(t)} (with P_{i,i} = 0, i ∈ S), with embedded chain transition matrix P = (P_{i,j}) and holding-time rates {a_i}. Assume that a = sup{a_i : i ∈ S} < ∞. Consider an alternative CTMC {X̄(t)} for which all holding-time rates are fixed at the constant rate a, and whose embedded chain has transition probabilities given by

P̄_{i,j} = (a_i / a) P_{i,j} if j ≠ i; P̄_{i,i} = 1 − a_i / a.

(So P̄_{i,i} > 0 is possible.)

(a) Let N(t) denote the number of transitions by time t for {X̄(t)}. Explain why {N(t) : t ≥ 0} forms a Poisson process at rate a.

SOLUTION: By the Markov property, for any state i, given X̄(t) = i, the chain spends an exponential amount of time at rate a in state i, independent of the past, and then moves. But this rate a does not even depend on i, so the sequence of consecutive holding times forms an iid sequence of rvs distributed as exponential at rate a: a Poisson process at rate a.

(b) Show that the balance equations are the same for the two chains.

SOLUTION: The balance equations for {X̄(t)} are

a P_j = a P_j (1 − a_j/a) + Σ_{i ≠ j} P_i a (a_i/a) P_{i,j}, j ∈ S,   (4)

which algebraically reduce to

a_j P_j = Σ_{i ≠ j} P_i a_i P_{i,j}, j ∈ S.   (5)

(c) Explain why {X(t)} and {X̄(t)} have the same distribution as stochastic processes: P_{i,j}(t) = P̄_{i,j}(t), t ≥ 0, for all pairs i, j.

SOLUTION: Recall that a geometric sum of iid exponentials is yet again exponential. So all that is happening here is that each original holding time H_i ~ exp(a_i) has been broken down and re-expressed as a geometric sum of iid exponentials at rate a, in which the geometric has success probability a_i/a. That is what the P̄_{i,j} accomplish. So when state i is entered, in both models the total amount of time spent there until changing to a state j ≠ i is exponential at rate a_i.

6. Sampling a CTMC to obtain a discrete-time Markov chain: Suppose that {X(t)} is an irreducible (non-explosive) CTMC with infinitesimal generator (rates matrix) Q = (q_{i,j}) and P(t) = (P_{i,j}(t)) = e^{Qt}, t ≥ 0.
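Before turning to part (a), the uniformization claim of Problem 5(b)-(c) can be verified numerically on a small example (embedded matrix and rates assumed for illustration): the CTMC balance equations and the stationarity equations of the uniformized embedded chain P̄ have the same solution.

```python
import numpy as np

# Small illustrative CTMC (values assumed): embedded transition matrix P
# with zero diagonal, and holding-time rates a_i.
P = np.array([[0.0, 0.5, 0.5],
              [0.3, 0.0, 0.7],
              [0.6, 0.4, 0.0]])
a = np.array([1.0, 2.0, 3.0])
Q = np.diag(a) @ (P - np.eye(3))          # generator: q_ij = a_i P_ij, q_ii = -a_i

# Uniformized chain: constant rate abar = sup a_i, embedded matrix Pbar.
abar = a.max()
Pbar = (a / abar)[:, None] * P
np.fill_diagonal(Pbar, 1.0 - a / abar)

# Stationary distribution of the CTMC: solve pi Q = 0 with sum(pi) = 1.
A = np.vstack([Q.T, np.ones(3)])
pi_ctmc = np.linalg.lstsq(A, np.array([0.0, 0.0, 0.0, 1.0]), rcond=None)[0]

# Stationary distribution of the uniformized embedded chain: pi = pi Pbar.
A2 = np.vstack([Pbar.T - np.eye(3), np.ones(3)])
pi_unif = np.linalg.lstsq(A2, np.array([0.0, 0.0, 0.0, 1.0]), rcond=None)[0]

print(np.round(pi_ctmc, 4))
print(np.round(pi_unif, 4))               # identical: same balance equations
```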
(a) Argue that Z_n = X(n), n ≥ 0, is an irreducible discrete-time Markov chain with transition matrix P̂ = (P̂_{i,j}) given by P̂_{i,j} = P_{i,j}(1); that is, P̂ = e^Q.

SOLUTION: It is Markov because

P(X(n+1) = j | X(n) = i, X(n−1) = i_{n−1}, ..., X(0) = i_0) = P(X(n+1) = j | X(n) = i),

via the Markov property assumed for {X(t)}: given the present state X(n), the future X(n+1) is independent of all of the past {X(u) : u < n}. Moreover,

P(X(n+1) = j | X(n) = i) = P(X(1) = j | X(0) = i) = P_{i,j}(1), so P̂ = e^Q.
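A quick numerical illustration (3-state generator assumed): compute e^Q by a truncated Taylor series and compare its first row against the empirical distribution of X(1) given X(0) = 0, estimated by simulating the CTMC.

```python
import numpy as np

rng = np.random.default_rng(4)

# Illustrative 3-state generator Q (values assumed): rows sum to 0.
Q = np.array([[-1.0,  0.6,  0.4],
              [ 0.5, -1.5,  1.0],
              [ 0.2,  0.8, -1.0]])

def expm_taylor(M, terms=60):
    """Matrix exponential via truncated Taylor series (adequate for small, mild M)."""
    out, term = np.eye(len(M)), np.eye(len(M))
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

Phat = expm_taylor(Q)                 # claimed one-step matrix of Z_n = X(n)

a = -np.diag(Q)                       # holding-time rates a_i
P_emb = Q / a[:, None]
np.fill_diagonal(P_emb, 0.0)          # embedded jump chain (zero diagonal)

# Monte Carlo check of row 0: where is the CTMC at time 1, given X(0) = 0?
counts = np.zeros(3)
trials = 50_000
for _ in range(trials):
    state, t = 0, 0.0
    while True:
        t += rng.exponential(1.0 / a[state])   # holding time in current state
        if t >= 1.0:
            break                              # still here at time 1
        state = rng.choice(3, p=P_emb[state])  # jump per the embedded chain
    counts[state] += 1

print(np.round(Phat[0], 3))
print(np.round(counts / trials, 3))   # the two rows agree
```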

(b) We now generalize (a) to sampling at the times of a Poisson process. Independently of {X(t)}, let {t_n : n ≥ 1} be a Poisson process at rate λ. Let Z_n = X(t_n), n ≥ 1, with Z_0 = X(0). Argue that {Z_n : n ≥ 0} is an irreducible discrete-time Markov chain with transition matrix P̂ = (P̂_{i,j}) given by

P̂ = [I − (Q/λ)]^{−1}.

SOLUTION: By independence, each t_n is a stopping time, and hence (via the strong Markov property), given Z_n, {X(t_n + t) : t > 0} is again the same CTMC but with initial condition Z_n, independent of the past. This holds for any independent renewal process used for sampling. If we let T_n = t_{n+1} − t_n denote the iid interarrival times of the renewal process, then given Z_n we only need the independent (and independent of the past) rv T_n to predict the future value Z_{n+1}. P̂_{i,j} = P(Z_{n+1} = j | Z_n = i) does not depend on n because the T_n, n ≥ 0, are iid. Let T = t_1 ~ exp(λ); it has density λ e^{−λt}, t ≥ 0. We know that P(t) = e^{Qt}, t ≥ 0; thus, conditional on T = t, the transition matrix of one sampling step is P(t) = e^{Qt}. Taking expected values then yields

P̂ = E(P(T))
  = ∫_0^∞ P(t) λ e^{−λt} dt
  = ∫_0^∞ P(u/λ) e^{−u} du
  = ∫_0^∞ e^{Qu/λ} e^{−uI} du
  = ∫_0^∞ e^{−u(I − (Q/λ))} du
  = [I − (Q/λ)]^{−1}.

Irreducibility follows since in fact P̂_{i,j} > 0 for all pairs (i, j), a condition known as strong irreducibility: we know that P_{i,j}(t) > 0 for some t, and since holding times are continuous (exponential), it follows that P_{i,j}(t) > 0 over a whole interval t ∈ (s_1, s_2), s_1 < s_2, and hence

P̂_{i,j} = ∫_0^∞ P_{i,j}(t) λ e^{−λt} dt ≥ ∫_{s_1}^{s_2} P_{i,j}(t) λ e^{−λt} dt > 0.

(c) Prove that {X(t)} is positive recurrent if and only if {Z_n} is positive recurrent, in which case π = P: they share the same stationary distribution.

SOLUTION: By irreducibility, it suffices to prove that a probability solution to π = π P̂ (for {Z_n}) exists if and only if a probability solution to PQ = 0 (for {X(t)}) exists, and that π = P. This is immediate: π = π P̂ if and only if π = π[I − (Q/λ)]^{−1} if and only if π[I − (Q/λ)] = π if and only if πQ = 0.
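The identity P̂ = [I − (Q/λ)]^{−1}, its strict positivity, and the shared stationary distribution can all be checked numerically on a small assumed generator (midpoint-rule quadrature approximates the integral E[P(T)]):

```python
import numpy as np

# Illustrative 3-state generator (values assumed) and sampling rate lambda.
Q = np.array([[-1.0,  0.6,  0.4],
              [ 0.5, -1.5,  1.0],
              [ 0.2,  0.8, -1.0]])
lam = 2.0

Phat = np.linalg.inv(np.eye(3) - Q / lam)   # claimed: Phat = E[e^{QT}], T ~ exp(lam)

def expm_taylor(M, terms=40):
    """Matrix exponential via truncated Taylor series (M is tiny here)."""
    out, term = np.eye(len(M)), np.eye(len(M))
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

# Midpoint-rule quadrature of  integral_0^inf  e^{Qt} lam e^{-lam t} dt,
# stepping e^{Qt} incrementally so only tiny exponentials are ever computed.
dt, horizon = 1e-3, 20.0
step = expm_taylor(Q * dt)
M = expm_taylor(Q * dt / 2.0)               # e^{Qt} at the first midpoint
integral, t = np.zeros((3, 3)), dt / 2.0
while t < horizon:
    integral += M * lam * np.exp(-lam * t) * dt
    M = M @ step
    t += dt

print(np.abs(integral - Phat).max() < 1e-4)                  # True: identity holds
print((Phat > 0).all(), np.allclose(Phat.sum(axis=1), 1.0))  # positive, stochastic

# pi Q = 0 iff pi = pi Phat: solve for the CTMC stationary distribution ...
A = np.vstack([Q.T, np.ones(3)])
pi = np.linalg.lstsq(A, np.array([0.0, 0.0, 0.0, 1.0]), rcond=None)[0]
print(np.allclose(pi @ Phat, pi))           # ... it is stationary for Z_n too
```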