Introduction to Queuing Networks Solutions to Problem Sheet 3


1. (a) The state space is the whole numbers {0, 1, 2, ...}. The transition rates are q_{i,i+1} = λ for all i ≥ 0 and q_{i,0} = μ for all i ≥ 1 since, when a bus arrives, the bus stop empties. Thus, the transition rate matrix is given by

    Q =
    [ -λ        λ         0         0      ... ]
    [  μ     -(λ+μ)       λ         0      ... ]
    [  μ        0      -(λ+μ)       λ      ... ]
    [ ...      ...       ...       ...     ... ]

(b) The Markov chain is not reversible since there is a transition from 2 to 0, but not from 0 to 2, for example. In more detail, the local balance equation corresponding to those two states reads μπ_2 = 0 · π_0 = 0, which implies that π_2 = 0. (We assume that μ > 0.) Similarly, π_3, π_4, ... are all zero. Next, there is a transition from 1 to 2, but not the other way round. Writing down the corresponding local balance equation, we get λπ_1 = 0 · π_2 = 0, which implies that π_1 = 0. Finally, looking at the local balance equation for states 0 and 1, we have λπ_0 = μπ_1, which implies that π_0 = 0 because π_1 = 0. Hence, all π_n are zero. But there is no such probability distribution!

(c) As the Markov chain isn't reversible, we need to solve the global balance equations πQ = 0 to find the invariant distribution. Writing out the components of this vector equality, we get

    λπ_0 = μ(π_1 + π_2 + π_3 + ...),
    λπ_j = (λ + μ) π_{j+1},   j = 0, 1, 2, ...,

from which it follows that π_j = (λ/(λ+μ))^j π_0 for all j. Combining this with the condition that Σ_j π_j = 1, we get

    π_j = (μ/(λ+μ)) (λ/(λ+μ))^j,   j ≥ 0.   (1)

The only condition required for there to be an invariant distribution is μ > 0; since buses have infinite capacity, the queue is stable so long as buses arrive at any positive rate.

(d) Notice that the invariant distribution given in (1) is geometric, of the form (1-ρ)ρ^j with ρ = λ/(λ+μ). Recall that the mean queue length in such a queue is given by E[N] = ρ/(1-ρ). (This formula can be derived using generating functions but may be worth remembering.) Thus, E[N] = λ/μ. Hence, by Little's law, the mean sojourn time is given by E[W] = E[N]/λ = 1/μ. The direct way to reach this answer is to note that the times between buses are exponentially distributed with mean 1/μ and so, by the memoryless property of the exponential distribution (established in Problem 1), the typical customer has to wait for a mean time of 1/μ irrespective of how long it has been since the last bus arrived.

2. Let us condition on the event W = w. Since the arrival process N(t) is a Poisson process of rate λ, the number of arrivals in a period of length w is a Poisson random variable with mean λw. In other words,

    P(A = k | W = w) = ((λw)^k / k!) e^{-λw}.

Now, we are told that W has an Exp(μ) distribution, i.e., W has density f(w) = μe^{-μw} on w ≥ 0. Hence, we get

    P(A = k) = ∫_0^∞ P(A = k | W = w) f(w) dw = ∫_0^∞ ((λw)^k / k!) e^{-λw} μe^{-μw} dw
             = (μ/(λ+μ)) (λ/(λ+μ))^k ∫_0^∞ (((λ+μ)w)^k / k!) (λ+μ) e^{-(λ+μ)w} dw
             = (μ/(λ+μ)) (λ/(λ+μ))^k.

The last equality holds because the integrand is the density of a Gamma random variable with shape parameter k+1 and rate parameter λ+μ. Alternatively, to evaluate this integral, make the change of variables x = (λ+μ)w and rewrite it as

    I_k = ∫_0^∞ (x^k / k!) e^{-x} dx.

Integrating by parts, it is easy to verify that I_k = I_{k-1}. By induction, I_k = I_0. But I_0 = ∫_0^∞ e^{-x} dx = 1, and so I_k = 1 for all k = 1, 2, ... We have thus shown that

    P(A = k) = (μ/(λ+μ)) (λ/(λ+μ))^k   for all k = 0, 1, 2, ...,

which is what we were asked to show. Compare this with (1) in the solution to the last problem. Do you see why they are the same?
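The two geometric answers above can be checked numerically. The sketch below is not part of the original sheet; it assumes the illustrative values λ = 1.0 and μ = 1.5, a truncation level of 200 states, and the availability of numpy and scipy. It solves a truncated version of the global balance equations of Problem 1(c) and evaluates the mixture integral of Problem 2 by quadrature, comparing both against (μ/(λ+μ))(λ/(λ+μ))^k.

    import numpy as np
    from math import factorial
    from scipy import integrate

    lam, mu = 1.0, 1.5          # illustrative rates, not from the problem sheet
    N = 200                     # truncation level for the bus-stop chain

    # Problem 1(c): build the truncated generator and solve pi Q = 0 with sum(pi) = 1.
    Q = np.zeros((N + 1, N + 1))
    for i in range(N):
        Q[i, i + 1] = lam       # a customer arrives
    for i in range(1, N + 1):
        Q[i, 0] = mu            # a bus arrives and empties the stop
    np.fill_diagonal(Q, -Q.sum(axis=1))

    A = np.vstack([Q.T, np.ones(N + 1)])
    b = np.zeros(N + 2); b[-1] = 1.0
    pi = np.linalg.lstsq(A, b, rcond=None)[0]

    geom = lambda k: (mu / (lam + mu)) * (lam / (lam + mu)) ** k
    print(max(abs(pi[k] - geom(k)) for k in range(20)))    # should be tiny

    # Problem 2: P(A = k) by numerical integration of the Poisson-exponential mixture.
    for k in range(5):
        val, _ = integrate.quad(
            lambda w: (lam * w) ** k / factorial(k) * np.exp(-lam * w) * mu * np.exp(-mu * w),
            0, np.inf)
        print(k, val, geom(k))                             # the last two columns agree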

3. (a) The transition rates are given by

    q_{i,i+1} = λ/(1+i),   q_{i,i-1} = μ · 1(i ≥ 1).

Hence, the jump probabilities are given by

    p_{i,i+1} = q_{i,i+1}/q_i = (λ/(1+i)) / (λ/(1+i) + μ · 1(i ≥ 1)) = λ/(λ + μ(i+1) · 1(i ≥ 1)) = 1 - p_{i,i-1}.

In order to compute the stationary distribution π, we use the fact that the Markov process is a birth-death process. Hence, it is reversible if it has an invariant distribution. Assuming that there is an invariant distribution π, it must solve the detailed balance equations π_i q_{ij} = π_j q_{ji} for all i, j in the state space, i.e.,

    π_i λ/(i+1) = μ π_{i+1},   i.e.,   π_{i+1} = (ρ/(i+1)) π_i,

where ρ is defined as λ/μ. Iterating the above equation, we get π_i = (ρ^i/i!) π_0. The invariant distribution π must also be a probability distribution, i.e., it should sum to 1. It is clear that this can be achieved for any ρ by setting π_0 = e^{-ρ}. Hence,

    π_i = (ρ^i/i!) e^{-ρ},   i = 0, 1, 2, ...,

is an invariant distribution for the reversible Markov process X(t). Moreover, it is the unique invariant distribution since the chain is irreducible.
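The Poisson form of π can be verified directly from the detailed balance equations. A minimal sketch (not from the original sheet), assuming the arbitrary test values λ = 2.0 and μ = 1.0, so that ρ = 2:

    from math import exp, factorial

    lam, mu = 2.0, 1.0
    rho = lam / mu
    pi = lambda i: rho ** i / factorial(i) * exp(-rho)

    for i in range(10):
        flow_up = pi(i) * lam / (1 + i)      # probability flow from i to i+1
        flow_down = pi(i + 1) * mu           # probability flow from i+1 to i
        assert abs(flow_up - flow_down) < 1e-12

    print("detailed balance holds; total mass:", sum(pi(i) for i in range(60)))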

(b) i. We consider the queue in equilibrium and condition on there being an arrival in time (t, t+dt) which decides to join the queue. Using Bayes' formula, we get

    P(X_t = i | job arrives in (t, t+dt) and decides to join queue)
    = P(X_t = i, and job arrives in (t, t+dt) and decides to join queue) / P(job arrives in (t, t+dt) and decides to join queue)
    = P(X_t = i) P(job arrives in (t, t+dt) and decides to join queue | X_t = i) / Σ_j P(X_t = j) P(job arrives in (t, t+dt) and decides to join queue | X_t = j)
    = π_i λdt (1/(1+i)) / Σ_j π_j λdt (1/(1+j))
    = (ρ^i/(i+1)!) / Σ_j (ρ^j/(j+1)!).

In order to obtain the third equality above, we have used the fact that the arrival process is Poisson of rate λ, and so the probability of an arrival in (t, t+dt) is λdt, independent of the past (and hence of the current state X_t, which is a function of the past of the arrival and service processes). The probability that the arriving job decides to enter the queue does of course depend on X_t, which we have taken into account. Now,

    Σ_{j≥0} ρ^j/(j+1)! = (1/ρ) Σ_{k≥1} ρ^k/k! = (1/ρ) (Σ_{k≥0} ρ^k/k! - 1) = (e^ρ - 1)/ρ,

since 0! = 1 by convention. (The empty product is taken to be the multiplicative identity, by convention, just as the empty sum is taken to be the additive identity, zero.) Substituting this above, we obtain

    P(X_t = i | job arrives in (t, t+dt) and decides to join queue) = ρ^{i+1} / ((e^ρ - 1)(i+1)!).

This is the distribution of the number of customers already present in the system, as seen by a typical arrival who decides to join the queue. Note that it is different from the invariant distribution.

ii. The calculation here is very similar, except that we condition on there being an arrival in time (t, t+dt) who decides not to join the queue. Using Bayes' formula, we get

    P(X_t = i | job arrives in (t, t+dt) and decides not to join queue)
    = P(X_t = i, and job arrives in (t, t+dt) and decides not to join queue) / P(job arrives in (t, t+dt) and decides not to join queue)
    = P(X_t = i) P(job arrives in (t, t+dt) and decides not to join queue | X_t = i) / Σ_j P(X_t = j) P(job arrives in (t, t+dt) and decides not to join queue | X_t = j)
    = π_i λdt (i/(1+i)) / Σ_j π_j λdt (j/(1+j))
    = (i ρ^i/(i+1)!) / Σ_j (j ρ^j/(j+1)!).

Now,

    Σ_j j ρ^j/(j+1)! = Σ_j (j+1-1) ρ^j/(j+1)! = Σ_j ρ^j/j! - Σ_j ρ^j/(j+1)!.

The first sum above is e^ρ, while the second sum was evaluated in the last part, and was found to be (e^ρ - 1)/ρ. Hence,

    Σ_j j ρ^j/(j+1)! = ((ρ - 1)e^ρ + 1)/ρ.

Substituting this in the expression for the conditional probability of seeing i customers, conditioned on the arrival deciding not to join the queue, we get

    P(X_t = i | job arrives in (t, t+dt) and decides not to join queue) = i ρ^{i+1} / ((i+1)! ((ρ-1)e^ρ + 1)).

This is the distribution as seen by a typical arrival who decides to balk. Note that it is different both from the invariant distribution, and from the distribution seen by an arrival who decides to join the queue.
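As a sanity check (not part of the original solution), the two closed forms derived above can be compared with a direct Bayes computation from π_i and the join probability 1/(1+i). The value ρ = 1.5 and the truncation level are arbitrary choices for illustration.

    from math import exp, factorial

    rho = 1.5
    N = 80                                   # truncation level for the sums
    pi = [rho ** i / factorial(i) * exp(-rho) for i in range(N)]

    join = [pi[i] / (1 + i) for i in range(N)]       # arrive in (t, t+dt) and join
    balk = [pi[i] * i / (1 + i) for i in range(N)]   # arrive in (t, t+dt) and balk
    s_join, s_balk = sum(join), sum(balk)
    join = [x / s_join for x in join]
    balk = [x / s_balk for x in balk]

    # Closed forms from parts (b)i and (b)ii.
    join_cf = [rho ** (i + 1) / ((exp(rho) - 1) * factorial(i + 1)) for i in range(N)]
    balk_cf = [i * rho ** (i + 1) / (((rho - 1) * exp(rho) + 1) * factorial(i + 1))
               for i in range(N)]

    print(max(abs(a - b) for a, b in zip(join, join_cf)))   # should be tiny
    print(max(abs(a - b) for a, b in zip(balk, balk_cf)))   # should be tiny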

4. (a) Let X(t) denote the number of customers in the system at time t. Then X(t) is a CTMC with transition rates given by

    q_{i,i+1} = α if i = 0, and λ if i ≥ 1;
    q_{i,i-1} = μ if i = 1, and 2μ if i ≥ 2.

Since this is a birth and death process, it is reversible if it has an invariant distribution. Assuming that there is one, and denoting it by π, it must solve the detailed balance equations for all i. Thus,

    π_1 = π_0 q_{0,1}/q_{1,0} = (α/μ) π_0,
    π_i q_{i,i+1} = π_{i+1} q_{i+1,i},   i.e.,   π_{i+1} = (λ/(2μ)) π_i,   i ≥ 1.

Hence, π_i = (α/μ)(λ/(2μ))^{i-1} π_0 for i ≥ 1. Since π has to be a probability vector, we have

    Σ_i π_i = π_0 (1 + (α/μ) · 1/(1 - λ/(2μ))) = π_0 (1 + 2α/(2μ - λ)) = 1,

which implies that π_0 = (2μ - λ)/(2α + 2μ - λ), as required.

(b) We have by Bayes' theorem that

    P(X^A = 0) = P(X(t) = 0 and arrival in (t, t+dt)) / P(arrival in (t, t+dt)) = π_0 α / (π_0 α + Σ_{i≥1} π_i λ),   (2)

whereas, for i ≥ 1,

    P(X^A = i) = P(X(t) = i and arrival in (t, t+dt)) / P(arrival in (t, t+dt)) = π_i λ / (π_0 α + Σ_{i≥1} π_i λ).   (3)

Now the denominator on the RHS in (2) and (3) is the same, but π_i is multiplied by a different constant (α or λ) in the numerator depending on whether i = 0 or i ≥ 1. Since π_0 is neither 0 nor 1, P(X^A = i) can't be the same as π_i for all i (provided α ≠ λ). Thus, the distribution seen by arrivals is not the same as the invariant distribution.
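The closed form for π_0 and the claim that arrivals do not see time averages can both be checked numerically. The following sketch is not part of the original solution; it assumes the illustrative rates α = 0.7, λ = 1.0, μ = 0.9 (so λ < 2μ and the queue is stable), a finite truncation level, and the availability of numpy.

    import numpy as np

    alpha, lam, mu = 0.7, 1.0, 0.9
    N = 400                                  # truncation level

    Q = np.zeros((N + 1, N + 1))
    Q[0, 1] = alpha                          # arrival rate alpha from the empty state
    for i in range(1, N):
        Q[i, i + 1] = lam                    # arrival rate lambda otherwise
    Q[1, 0] = mu                             # service rate mu with one customer present
    for i in range(2, N + 1):
        Q[i, i - 1] = 2 * mu                 # service rate 2*mu with two or more present
    np.fill_diagonal(Q, -Q.sum(axis=1))

    A = np.vstack([Q.T, np.ones(N + 1)])
    b = np.zeros(N + 2); b[-1] = 1.0
    pi = np.linalg.lstsq(A, b, rcond=None)[0]

    print(pi[0], (2 * mu - lam) / (2 * alpha + 2 * mu - lam))   # should agree

    # Distribution seen by arrivals, equations (2) and (3).
    arr_rate = np.where(np.arange(N + 1) == 0, alpha, lam)
    pi_A = pi * arr_rate / np.sum(pi * arr_rate)
    print(pi[:4])                            # invariant distribution
    print(pi_A[:4])                          # differs whenever alpha != lambda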

5. (a) The stationary distribution is π_n = (1 - ρ)ρ^n, n ≥ 0, where ρ = λ/μ.

(b) Let X(t) denote the number of customers in the system at time t. Since the system is in equilibrium, X(t) is a random variable with distribution π. Between time t and the arrival of C, which happens an Exp(λ) time after t, there are clearly no arrivals (since C is defined as the first arrival after t), but there may be some service completions. Let X denote the number of customers in the system at the arrival of C, excluding C. Then,

    P(X = j) = Σ_{k≥j} P(X(t) = k, X = j) = Σ_{k≥j} π_k P(X = j | X(t) = k).   (4)

We evaluate the last conditional probability as follows. Suppose there are currently n ≥ 1 jobs in the system. What is the probability that the next service completion happens before the next arrival? Since the time to the next arrival is Exp(λ), irrespective of how long it has been since the last arrival, and the time to the next service completion is Exp(μ), this probability is μ/(λ+μ) (check this for yourself). Hence, for the event {X = j} to occur, conditional on X(t) = k, there must be exactly k-j service completions before the arrival of C. If j ≥ 1, this means that the arrival must happen before the (k-j+1)th service completion; if j = 0, all k customers should be served before the arrival of C. Thus, we have

    P(X = 0 | X(t) = k) = (μ/(μ+λ))^k,
    P(X = j | X(t) = k) = (μ/(μ+λ))^{k-j} · λ/(μ+λ),   j ≥ 1.

(Compare this with Problem 2. There, we calculated the number of arrivals during a service time. Here, we are calculating the number of service completions during an inter-arrival period. Both are geometrically distributed, but the geometric distribution here is truncated at the number of customers in the system, since there can't be more service completions than that.)

Substituting the above formulae in (4), we get

    P(X = 0) = Σ_{k≥0} (1-ρ)ρ^k (μ/(μ+λ))^k = (1-ρ) Σ_{k≥0} (ρ/(1+ρ))^k = (1-ρ) (1 - ρ/(1+ρ))^{-1} = (1-ρ)(1+ρ) = 1 - ρ²,

whereas, for j ≥ 1,

    P(X = j) = Σ_{k≥j} (1-ρ)ρ^k (μ/(μ+λ))^{k-j} (λ/(μ+λ))
             = Σ_{i≥0} (1-ρ) ρ^{j+i} (1/(1+ρ))^i (ρ/(1+ρ))
             = (1-ρ) ρ^j (ρ/(1+ρ)) (1 - ρ/(1+ρ))^{-1}
             = (1-ρ) ρ^{j+1},

as we were asked to show.
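A quick numerical cross-check of (b), not part of the original sheet: the sum in (4), with the truncated-geometric conditional probabilities, is evaluated directly and compared with the closed forms 1 - ρ² (for j = 0) and (1-ρ)ρ^{j+1} (for j ≥ 1). The values λ = 0.6, μ = 1.0 and the truncation level are arbitrary illustrative choices.

    lam, mu = 0.6, 1.0
    rho = lam / mu
    p = mu / (mu + lam)          # next event is a service completion
    q = lam / (mu + lam)         # next event is the arrival of C

    K = 500                      # truncation level for the sum over k in (4)

    def P_X(j):
        total = 0.0
        for k in range(j, K):
            pi_k = (1 - rho) * rho ** k
            cond = p ** k if j == 0 else p ** (k - j) * q
            total += pi_k * cond
        return total

    print(0, P_X(0), 1 - rho ** 2)
    for j in range(1, 5):
        print(j, P_X(j), (1 - rho) * rho ** (j + 1))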