Solutions to Homework 10, 6.262 Discrete Stochastic Processes, MIT, Spring 2011

Exercise 6.5: Consider the Markov process illustrated below. The transitions are labelled by the rate $q_{ij}$ at which those transitions occur. The process can be viewed as a single-server queue where arrivals become increasingly discouraged as the queue lengthens. The phrase time-average below refers to the limiting time-average over each sample path of the process, except for a set of sample paths of probability 0.

[Figure: birth-death process on states $0, 1, 2, 3, 4, \ldots$ with upward rates $\lambda, \lambda/2, \lambda/3, \lambda/4, \lambda/5, \ldots$ and downward rate $\mu$ on every transition.]

Part a) Find the time-average fraction of time $p_i$ spent in each state $i > 0$ in terms of $p_0$, and then solve for $p_0$. Hint: First find an equation relating $p_i$ to $p_{i+1}$ for each $i$. It may also help to recall the power series expansion of $e^x$.

Solution: From equation (6.36) we know that
$$p_i \,\frac{\lambda}{i+1} = p_{i+1}\,\mu, \qquad i \ge 0.$$
Iterating over $i$,
$$p_{i+1} = p_i\,\frac{\lambda}{\mu(i+1)} = p_0\,\frac{(\lambda/\mu)^{i+1}}{(i+1)!}, \qquad i \ge 0.$$
Normalizing,
$$1 = \sum_{i=0}^{\infty} p_i = p_0\Bigg[1 + \sum_{i=1}^{\infty}\frac{(\lambda/\mu)^{i}}{i!}\Bigg] = p_0\,e^{\lambda/\mu},$$
where the last step is the Taylor expansion of $e^x$ at $x = \lambda/\mu$. Thus
$$p_0 = e^{-\lambda/\mu}, \qquad p_i = e^{-\lambda/\mu}\,\frac{(\lambda/\mu)^i}{i!}, \quad i \ge 1.$$

Part b) Find a closed-form solution to $\sum_i p_i \nu_i$, where $\nu_i$ is the departure rate from state $i$. Show that the process is positive recurrent for all choices of $\lambda > 0$ and $\mu > 0$, and explain intuitively why this must be so.

Solution: We observe that $\nu_0 = \lambda$ and $\nu_i = \mu + \lambda/(i+1)$ for $i \ge 1$. Thus
$$\sum_i p_i\nu_i = \lambda e^{-\lambda/\mu} + \sum_{i\ge 1}\Big[\mu + \frac{\lambda}{i+1}\Big]\, e^{-\lambda/\mu}\,\frac{(\lambda/\mu)^i}{i!}
= \lambda e^{-\lambda/\mu} + \mu e^{-\lambda/\mu}\sum_{i\ge 1}\frac{(\lambda/\mu)^i}{i!} + \mu e^{-\lambda/\mu}\sum_{i\ge 1}\frac{(\lambda/\mu)^{i+1}}{(i+1)!}$$
$$= \lambda e^{-\lambda/\mu} + \mu e^{-\lambda/\mu}\big(e^{\lambda/\mu}-1\big) + \mu e^{-\lambda/\mu}\Big(e^{\lambda/\mu}-1-\frac{\lambda}{\mu}\Big)
= 2\mu\big(1 - e^{-\lambda/\mu}\big).$$
The value of $\sum_i p_i\nu_i$ is finite for all values of $\lambda > 0$ and $\mu > 0$, so the process is positive recurrent for all choices of the transition rates $\lambda$ and $\mu$.

We saw that $p_{i+1} = \lambda p_i/\big(\mu(i+1)\big)$, so $p_i$ must decrease rapidly in $i$ for sufficiently large $i$. Thus the fraction of time spent in very high-numbered states must be negligible, which suggests that the steady-state equations for the $p_i$ must have a solution. Since $\nu_i$ is bounded between $\mu$ and $\mu+\lambda$ for all $i \ge 1$, it is intuitively clear that $\sum_i \nu_i p_i$ is finite, so the embedded chain must be positive recurrent.
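As a quick sanity check, the following Python sketch (with arbitrary example rates $\lambda = 2$ and $\mu = 1.5$) truncates the birth-death chain at a large state, solves the detailed-balance relation, and compares the result with $p_i = e^{-\rho}\rho^i/i!$ and with $\sum_i p_i\nu_i = 2\mu(1-e^{-\lambda/\mu})$.

```python
import math

# Truncate the discouraged-arrival birth-death process at state N and solve
# detailed balance: p_i * lam/(i+1) = p_{i+1} * mu.  lam and mu are example values.
lam, mu, N = 2.0, 1.5, 60

p = [1.0]
for i in range(N):
    p.append(p[i] * lam / (mu * (i + 1)))
total = sum(p)
p = [x / total for x in p]

rho = lam / mu
for i in range(5):
    closed = math.exp(-rho) * rho**i / math.factorial(i)
    print(f"p_{i}: truncated {p[i]:.6f}   closed form {closed:.6f}")

# Departure rates nu_i and the closed form 2*mu*(1 - e^{-rho}) from part b)
nu = [lam] + [mu + lam / (i + 1) for i in range(1, N + 1)]
print(sum(pi * ni for pi, ni in zip(p, nu)), 2 * mu * (1 - math.exp(-rho)))
```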

Part c) For the embedded Markov chain corresponding to this process, find the steady-state probabilities $\pi_i$ for each $i \ge 0$ and the transition probabilities $P_{ij}$ for each $i, j$.

Solution: Since, as shown in part (b), $\sum_{i\ge 0} p_i\nu_i = 1\big/\sum_{i\ge 0}(\pi_i/\nu_i) < \infty$, we know that for all $j \ge 0$, $\pi_j$ is proportional to $p_j\nu_j$:
$$\pi_j = \frac{p_j\nu_j}{\sum_{k\ge 0} p_k\nu_k}, \qquad j \ge 0.$$
Writing $\rho = \lambda/\mu$,
$$\pi_0 = \frac{\lambda e^{-\lambda/\mu}}{2\mu\big(1-e^{-\lambda/\mu}\big)} = \frac{\rho}{2(e^{\rho}-1)},$$
and for $j \ge 1$,
$$\pi_j = \frac{\big[\mu + \lambda/(j+1)\big]\,\rho^j e^{-\rho}/j!}{2\mu\big(1-e^{-\rho}\big)}
= \frac{\big[1 + \rho/(j+1)\big]\,\rho^j e^{-\rho}}{2\big(1-e^{-\rho}\big)\,j!}
= \frac{\rho^j}{j!}\Big[1 + \frac{\rho}{j+1}\Big]\frac{1}{2(e^{\rho}-1)}.$$
The embedded Markov chain is the birth-death chain on $\{0, 1, 2, \ldots\}$ with transition probabilities
$$P_{01} = 1, \qquad P_{i,i-1} = \frac{(i+1)\mu}{\lambda + (i+1)\mu}, \qquad P_{i,i+1} = \frac{\lambda}{\lambda + (i+1)\mu}, \qquad i \ge 1.$$
Finding the steady-state distribution of this Markov chain directly gives the same result as found above.

Part d) For each $i$, find both the time-average interval and the time-average number of overall state transitions between successive visits to $i$.

Solution: View this process as a delayed renewal-reward process in which each entry to state $i$ is a renewal; the inter-renewal intervals are independent. The reward is equal to 1 whenever the process is in state $i$. Given that transition $n$ of the embedded chain enters state $i$, the interval $U_n$ is exponential with rate $\nu_i$, so $E[U_n \mid X_n = i] = 1/\nu_i$. During this interval $U_n$ the reward is 1, and it is then 0 until the next renewal. The time-average fraction of time spent in state $i$ is $p_i$ with probability 1, so in steady state $p_i$ must equal the expected fraction of an inter-renewal interval spent in state $i$. The expected time spent in state $i$ within one inter-renewal interval is $1/\nu_i$, and the expected inter-renewal interval $W_i$ is what we want to find:
$$p_i = \frac{E[U_n]}{W_i} = \frac{1}{\nu_i W_i}, \qquad\text{so}\qquad W_i = \frac{1}{p_i\,\nu_i}.$$
Thus
$$W_0 = \frac{e^{\rho}}{\lambda}, \qquad W_i = \frac{(i+1)!\,e^{\rho}}{\mu\,\rho^i\,(i+1+\rho)}, \quad i \ge 1.$$
Applying Theorem 5.1.4 to the embedded chain, the expected number of transitions, $E[T_{ii}]$, from one visit to state $i$ to the next is $E[T_{ii}] = 1/\pi_i$.
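These closed forms can also be checked numerically. The sketch below (again with arbitrary example rates) builds $p_i$ and $\nu_i$ on a truncated state space, forms the embedded-chain probabilities $\pi_j = p_j\nu_j/\sum_k p_k\nu_k$, and compares them and $W_i = 1/(p_i\nu_i)$ against the expressions above.

```python
import math

# Example rates for illustration only.
lam, mu, N = 2.0, 1.5, 60
rho = lam / mu

p = [math.exp(-rho) * rho**i / math.factorial(i) for i in range(N + 1)]
nu = [lam] + [mu + lam / (i + 1) for i in range(1, N + 1)]

norm = sum(pi * ni for pi, ni in zip(p, nu))        # ~ 2*mu*(1 - e^{-rho})
pi_emb = [pi * ni / norm for pi, ni in zip(p, nu)]  # embedded-chain probabilities

print(f"pi_0: {pi_emb[0]:.6f}   closed form {rho / (2 * (math.exp(rho) - 1)):.6f}")
for j in range(1, 5):
    closed = (rho**j / math.factorial(j)) * (1 + rho / (j + 1)) / (2 * (math.exp(rho) - 1))
    print(f"pi_{j}: {pi_emb[j]:.6f}   closed form {closed:.6f}")

print(f"W_0: {1 / (p[0] * nu[0]):.6f}   closed form {math.exp(rho) / lam:.6f}")
for i in range(1, 4):
    closed = math.factorial(i + 1) * math.exp(rho) / (mu * rho**i * (i + 1 + rho))
    print(f"W_{i}: {1 / (p[i] * nu[i]):.6f}   closed form {closed:.6f}")
```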

Exercise 6.9: Let $q_{i,i+1} = 2^{\,i-1}$ for all $i \ge 0$ and let $q_{i,i-1} = 2^{\,i-1}$ for all $i \ge 1$. All other transition rates are 0.

Part a) Solve the steady-state equations and show that $p_i = 2^{-i-1}$ for all $i \ge 0$.

Solution: The defined Markov process is the birth-death process on states $0, 1, 2, 3, \ldots$ with upward rates $1/2, 1, 2, 4, \ldots$ and downward rates $1, 2, 4, 8, \ldots$. For each $i \ge 0$, detailed balance gives $p_i q_{i,i+1} = p_{i+1} q_{i+1,i}$, i.e. $p_i\,2^{\,i-1} = p_{i+1}\,2^{\,i}$, so
$$p_i = p_0\,2^{-i}, \qquad i \ge 1.$$
Normalizing,
$$1 = \sum_{j \ge 0} p_j = p_0\Big(1 + \tfrac12 + \tfrac14 + \cdots\Big) = 2p_0,$$
so $p_0 = 1/2$ and $p_i = 2^{-i-1}$ for all $i \ge 0$.

Part b) Find the transition probabilities for the embedded Markov chain and show that the chain is null recurrent.

Solution: The departure rates are $\nu_0 = 1/2$ and $\nu_i = 2^{\,i-1} + 2^{\,i-1} = 2^{\,i}$ for $i \ge 1$, so the embedded Markov chain has
$$P_{01} = 1, \qquad P_{i,i+1} = P_{i,i-1} = \tfrac12, \quad i \ge 1.$$
Any steady-state probabilities would have to satisfy $\pi_0 = \tfrac12\pi_1$ and $\tfrac12\pi_i = \tfrac12\pi_{i+1}$ for $i \ge 1$, so $2\pi_0 = \pi_1 = \pi_2 = \pi_3 = \cdots$, which cannot be normalized. The chain is recurrent (it is a simple symmetric random walk reflected at 0) but has no steady-state distribution, so it is null recurrent, as essentially shown in Exercise 5.2.
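As a sanity check on parts a) and b), the following sketch truncates the chain at a large state, solves the detailed-balance equations with the rates $q_{i,i\pm1} = 2^{\,i-1}$, and prints the embedded-chain transition probabilities.

```python
# Truncated check of Exercise 6.9 a) and b) with rates q_{i,i+1} = q_{i,i-1} = 2^(i-1).
N = 40
up = [2.0**(i - 1) for i in range(N + 1)]                 # q_{i,i+1}
down = [None] + [2.0**(i - 1) for i in range(1, N + 1)]   # q_{i,i-1}, defined for i >= 1

p = [1.0]
for i in range(N):
    p.append(p[i] * up[i] / down[i + 1])   # detailed balance: p_{i+1} = p_i q_{i,i+1}/q_{i+1,i}
s = sum(p)
p = [x / s for x in p]

for i in range(5):
    print(f"p_{i}: {p[i]:.6f}   vs 2^-(i+1) = {2.0**-(i + 1):.6f}")

for i in range(1, 5):
    nu = up[i] + down[i]                   # nu_i = 2^i
    print(f"state {i}: P(up) = {up[i] / nu:.2f}, P(down) = {down[i] / nu:.2f}")
```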

Part c) For any state $i$, consider the renewal process for which the Markov process starts in state $i$ and renewals occur on each transition to state $i$. Show that, for each $i \ge 1$, the expected inter-renewal interval is equal to 2. Hint: Use renewal-reward theory.

Solution: As explained in Exercise 6.5 part (d), the expected inter-renewal interval $W_i$ for recurrences of state $i$ satisfies $p_i = 1/(\nu_i W_i)$. Since $\nu_i = q_{i,i+1} + q_{i,i-1} = 2^{\,i}$ for $i \ge 1$,
$$W_i = \frac{1}{\nu_i\,p_i} = \frac{1}{2^{\,i}\cdot 2^{-(i+1)}} = 2.$$

Part d) Show that the expected number of transitions between each entry into state $i$ is infinite. Explain why this does not mean that an infinite number of transitions can occur in a finite time.

Solution: We saw in part b) that the embedded chain is null recurrent, so the expected number of transitions between successive entries into state $i$ is infinite. Null recurrence still means that, given $X_0 = i$, a return to $i$ must happen within a finite number of transitions with probability 1 (i.e., $\lim_{n\to\infty} F_{ii}(n) = 1$). We have seen many rv's that have an infinite expectation but, being rv's, have a finite sample value with probability 1.

Exercise 6.4: A small bookie shop has room for at most two customers. Potential customers arrive at a Poisson rate of 10 customers per hour; they enter if there is room and otherwise are turned away, never to return. The bookie serves the admitted customers in order, requiring an exponentially distributed time of mean 4 minutes per customer.

Part a) Find the steady-state distribution of the number of customers in the shop.

Solution: The arrival rate is 10 customers per hour and the service time is exponentially distributed with rate 15 customers per hour (equivalently, mean 4 minutes per customer). The Markov process corresponding to this bookie shop is the birth-death chain on $\{0, 1, 2\}$ with upward rates 10 and downward rates 15. To find the steady-state distribution we use the facts that $p_0 q_{01} = p_1 q_{10}$ and $p_1 q_{12} = p_2 q_{21}$:
$$10\,p_0 = 15\,p_1, \qquad 10\,p_1 = 15\,p_2, \qquad 1 = p_0 + p_1 + p_2.$$
Thus $p_0 = \tfrac{9}{19}$, $p_1 = \tfrac{6}{19}$, and $p_2 = \tfrac{4}{19}$.

Part b) Find the rate at which potential customers are turned away.

Solution: Customers are turned away only when the process is in state 2, and while in state 2 they are turned away at rate $\lambda = 10$. So the overall rate at which customers are turned away is $\lambda p_2 = \tfrac{40}{19}$ customers per hour.

Part c) Suppose the bookie hires an assistant; the bookie and assistant, working together, now serve each customer in an exponentially distributed time of mean 2 minutes, but there is only room for one customer (i.e., the customer being served) in the shop. Find the new rate at which customers are turned away.

Solution: The new Markov process is the birth-death chain on $\{0, 1\}$ with upward rate 10 and downward rate 30. The new steady-state probabilities satisfy $10\,p_0 = 30\,p_1$ and $p_0 + p_1 = 1$. Thus $p_1 = \tfrac14$ and $p_0 = \tfrac34$. Customers are turned away when the process is in state 1, and this happens at rate $\lambda = 10$. Thus the overall rate at which customers are turned away is $\lambda p_1 = \tfrac{10}{4} = 2.5$ customers per hour.
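Both configurations are small enough to check with exact arithmetic. The sketch below treats each shop as an M/M/1/K birth-death chain and reproduces the steady-state probabilities and turn-away rates found above.

```python
from fractions import Fraction

def blocking_rate(lam, mu, capacity):
    """Steady state and turn-away rate of an M/M/1 queue with 'capacity' places."""
    # Detailed balance: p_{n+1} = p_n * lam/mu for n < capacity.
    p = [Fraction(1)]
    for _ in range(capacity):
        p.append(p[-1] * Fraction(lam, mu))
    total = sum(p)
    p = [x / total for x in p]
    return p, lam * p[-1]     # customers are lost only when the shop is full

# Original shop: lambda = 10/hr, mean service 4 min (mu = 15/hr), room for 2.
print(blocking_rate(10, 15, 2))   # ([9/19, 6/19, 4/19], 40/19 per hour)

# With the assistant: mean service 2 min (mu = 30/hr), room for 1.
print(blocking_rate(10, 30, 1))   # ([3/4, 1/4], 5/2 per hour)
```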

Exercise 6.6: Consider the job-sharing computer system illustrated below. Incoming jobs arrive from the left in a Poisson stream of rate $\lambda$. Each job, independently of other jobs, requires pre-processing in system 1 with probability $Q$. Jobs in system 1 are served FCFS and the service times for successive jobs entering system 1 are IID with an exponential distribution of mean $1/\mu_1$. The jobs entering system 2 are also served FCFS and successive service times are IID with an exponential distribution of mean $1/\mu_2$. The service times in the two systems are independent of each other and of the arrival times. Assume that $\mu_1 > \lambda Q$, that $\mu_2 > \lambda$, and that the combined system is in steady state.

Part a) Is the input to system 1 Poisson? Explain.

Solution: Yes. The incoming jobs from the left form a Poisson process, and this process is split into two processes, each job independently requiring pre-processing in system 1 with probability $Q$. When a Poisson process is split in this way, each of the resulting processes is also Poisson. So the stream of jobs entering system 1 is Poisson with rate $\lambda Q$.

Part b) Are each of the two input processes coming into system 2 Poisson?

Solution: By Burke's theorem, the output process of an M/M/1 queue in steady state is a Poisson process with the same rate as the input process. So both streams entering system 2 are Poisson; the first has rate $Q\lambda$ and the second has rate $(1-Q)\lambda$. The overall input to system 2 is the merging of these two streams, which is Poisson with rate $\lambda$, since the two processes are independent of each other.
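Burke's theorem can be illustrated with a short simulation sketch (arbitrary example rates): a FCFS M/M/1 queue is simulated through the recursion $D_n = \max(A_n, D_{n-1}) + S_n$, and the empirical mean inter-departure time is compared with $1/\lambda$, as a Poisson($\lambda$) departure process requires.

```python
import random

random.seed(1)
lam, mu, n_jobs = 1.0, 1.6, 200_000   # example rates with lam < mu

arrivals, t = [], 0.0
for _ in range(n_jobs):
    t += random.expovariate(lam)      # Poisson arrivals
    arrivals.append(t)

departures, prev = [], 0.0
for a in arrivals:
    prev = max(a, prev) + random.expovariate(mu)   # FCFS: wait for the previous departure
    departures.append(prev)

# Discard a warm-up period, then compare the mean inter-departure time with 1/lam.
warm = 10_000
gaps = [departures[i] - departures[i - 1] for i in range(warm, n_jobs)]
print(sum(gaps) / len(gaps), 1 / lam)
```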

Part d) Give the joint steady-state PMF of the number of jobs in the two systems. Explain briefly.

Solution: Let $X_1(t)$ be the number of jobs in system 1 at time $t$ and $X_2(t)$ the number of jobs in system 2 at time $t$. The splitting of the arrivals from the left produces two independent Poisson processes, of rates $Q\lambda$ and $(1-Q)\lambda$. The first process goes into system 1 and determines $X_1(t)$. By Burke's theorem, the departure process of system 1 prior to time $t$ is independent of $X_1(t)$, so the input to system 2 prior to $t$ is independent of the state of system 1; the two input processes of system 2 are also independent of each other. Thus $X_1(t)$ is the number of customers in an M/M/1 queue with input rate $Q\lambda$ and service rate $\mu_1$, $X_2(t)$ is the number of customers in an M/M/1 queue with input rate $\lambda$ and service rate $\mu_2$, and $X_1(t)$ is independent of $X_2(t)$. The total number of customers in the system is $X(t) = X_1(t) + X_2(t)$.

System 1 can be modeled as a birth-death process with $q_{i,i+1} = Q\lambda$ and $q_{i,i-1} = \mu_1$, so in steady state $\Pr\{X_1 = i\} = (1-\rho_1)\rho_1^{\,i}$, where $\rho_1 = Q\lambda/\mu_1$. The same holds for system 2: $\Pr\{X_2 = j\} = (1-\rho_2)\rho_2^{\,j}$, where $\rho_2 = \lambda/\mu_2$. By the independence of $X_1$ and $X_2$, the joint steady-state PMF is
$$\Pr\{X_1 = i,\, X_2 = j\} = (1-\rho_1)\rho_1^{\,i}\,(1-\rho_2)\rho_2^{\,j},$$
and for the total number of jobs,
$$\Pr\{X = k\} = \sum_{i=0}^{k}\Pr\{X_1 = i\}\Pr\{X_2 = k-i\}
= \sum_{i=0}^{k}(1-\rho_1)\rho_1^{\,i}\,(1-\rho_2)\rho_2^{\,k-i}
= (1-\rho_1)(1-\rho_2)\,\rho_2^{\,k}\sum_{i=0}^{k}\Big(\frac{\rho_1}{\rho_2}\Big)^{i}
= (1-\rho_1)(1-\rho_2)\,\frac{\rho_2^{\,k+1}-\rho_1^{\,k+1}}{\rho_2-\rho_1}.$$
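The convolution above is easy to verify numerically. The sketch below (with arbitrary example utilizations $\rho_1 \ne \rho_2$) compares the direct convolution of the two geometric marginals with the closed form.

```python
# Example utilizations; any 0 < rho1, rho2 < 1 with rho1 != rho2 will do.
rho1, rho2 = 0.3, 0.6

def p_total(k):
    """Pr{X1 + X2 = k} by direct convolution of the two geometric PMFs."""
    return sum((1 - rho1) * rho1**i * (1 - rho2) * rho2**(k - i) for i in range(k + 1))

for k in range(6):
    closed = (1 - rho1) * (1 - rho2) * (rho2**(k + 1) - rho1**(k + 1)) / (rho2 - rho1)
    print(k, round(p_total(k), 10), round(closed, 10))

# The probabilities sum to 1 (up to truncation of the infinite tail).
print(sum(p_total(k) for k in range(200)))
```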

Part e) What is the probability that the first job to leave system 1 after time $t$ is the same as the first job that entered the entire system after time $t$?

Solution: The first job that enters the entire system after time $t$ is the same as the first job to leave system 1 after time $t$ if and only if system 1 is empty at time $t$, i.e. $X_1(t) = 0$ (otherwise a job already in system 1 would leave first), and the first entering job needs pre-processing and is routed to system 1, which happens with probability $Q$. Since these two events are independent, the probability of the desired event is
$$(1-\rho_1)\,Q = \Big(1 - \frac{Q\lambda}{\mu_1}\Big)Q.$$

Part f) What is the probability that the first job to leave system 2 after time $t$ both passed through system 1 and arrived at system 1 after time $t$?

Solution: This is the event that both systems are empty at time $t$, that the first arriving job is routed to system 1, and that this job finishes its service in system 1 before the first job that needs no pre-processing enters system 2. These three events are independent of each other. The probability that both systems are empty at time $t$ in steady state is $\Pr\{X_1(t)=0,\,X_2(t)=0\} = (1-\rho_1)(1-\rho_2)$. The probability that the first arriving job is routed to system 1 is $Q$. Let $Y$ be the service time of that job in system 1, which is exponential with rate $\mu_1$, and let $Z$ be the arrival time after $t$ of the first job that does not need pre-processing, which is exponential with rate $(1-Q)\lambda$. Then
$$\Pr\{Y < Z\} = \frac{\mu_1}{\mu_1 + (1-Q)\lambda}.$$
Hence the total probability of the described event is
$$\Big(1 - \frac{Q\lambda}{\mu_1}\Big)\Big(1 - \frac{\lambda}{\mu_2}\Big)\,Q\,\frac{\mu_1}{\mu_1 + (1-Q)\lambda}.$$
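The competing-exponentials step $\Pr\{Y < Z\} = \mu_1/\big(\mu_1 + (1-Q)\lambda\big)$ can be checked with a quick Monte Carlo sketch (parameter values chosen arbitrarily for the experiment).

```python
import random

random.seed(0)
mu1, lam, Q, n = 2.0, 1.5, 0.4, 500_000   # example parameters
hits = sum(random.expovariate(mu1) < random.expovariate((1 - Q) * lam) for _ in range(n))
print(hits / n, mu1 / (mu1 + (1 - Q) * lam))
```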

MIT OpenCourseWare
http://ocw.mit.edu

6.262 Discrete Stochastic Processes
Spring 2011

For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.