IEOR 4106: Introduction to Operations Research: Stochastic Models
Spring 2011, Professor Whitt
Class Lecture Notes: Tuesday, March 1.


Continuous-Time Markov Chains, Ross Chapter 6
Problems for Discussion and Solutions

1. Pooh Bear and the Three Honey Trees.

A bear of little brain named Pooh is fond of honey. Bees producing honey are located in three trees: tree A, tree B and tree C. Tending to be somewhat forgetful, Pooh goes back and forth among these three honey trees randomly (in a Markovian manner) as follows: from A, Pooh goes next to B or C with probability 1/2 each; from B, Pooh goes next to A with probability 3/4, and to C with probability 1/4; from C, Pooh always goes next to A. Pooh stays a random time at each tree. (Assume that the travel times can be ignored.) Pooh stays at each tree an exponential length of time, with the mean being 5 hours at tree A or B, but with mean 4 hours at tree C.

(a) Construct a CTMC enabling you to find the limiting proportion of time that Pooh spends at each honey tree.

(b) What is the average number of trips per day Pooh makes from tree B to tree A?

ANSWERS:

(a) Find the limiting fraction of time that Pooh spends at each tree.

These problems are part of a longer set of lecture notes, which have been posted on the web page. The focus here is on different ways to model; we are thus focusing on Section 3 of the notes. For this problem formulation, it is natural to use the SMP (semi-Markov process) formulation of a CTMC (continuous-time Markov chain), involving the embedded DTMC and the mean holding times in each state. (See Sections 6.2 and 7.6 of Ross.) We thus define the embedded transition matrix P directly and the mean holding times 1/ν_i directly. Note that this problem is formulated directly in terms of the DTMC, describing the random motion at successive transitions, so it is natural to use this initial modelling approach.
Here the transition matrix for the DTMC is

        P =
          A  [  0    1/2   1/2 ]
          B  [ 3/4    0    1/4 ]
          C  [  1     0     0  ]

In the displayed transition matrix P, we have only labelled the rows; the columns are assumed to be labelled in the same order (A, B, C). In general, the steady-state probability is

        α_i = π_i (1/ν_i) / Σ_j π_j (1/ν_j).

In this case, the steady-state probability vector of the discrete-time Markov chain is obtained by solving π = πP, yielding

        π = (8/17, 4/17, 5/17).

Then the final steady-state distribution, accounting for the random holding times (weighting π_i by the mean holding times 5, 5 and 4 hours and renormalizing), is

        α = (1/2, 1/4, 1/4).

You could alternatively work with the infinitesimal transition-rate matrix Q. If we want to define the infinitesimal transition matrix Q (Ross uses lower case q), then we can do so by setting Q_{i,j} = ν_i P_{i,j} for i ≠ j. As usual, the diagonal elements Q_{i,i} are set equal to minus the i-th row sum; i.e.,

        Q_{i,i} = - Σ_{j: j ≠ i} Q_{i,j}.

But we do not need the Q matrix to solve for the steady-state distribution; we can use the SMP representation. If we do define Q, then we can alternatively obtain the steady-state probability vector α above by solving the equation αQ = 0, where the elements α_i are required to sum to 1. Note that this is just a system of linear equations, just like π = πP. You should work to understand why here we have 0 instead of α for the vector on the right-hand side.

(b) What is the average number of trips Pooh makes per day from tree B to tree A?

The long-run fraction of time spent at B is 1/4, by part (a). Thus, on average, Pooh spends 6 hours per day at tree B. When at tree B, the rate of trips from B is 1/5 per hour (the reciprocal of the 5-hour mean holding time), and thus, on average, Pooh makes 6/5 = 1.2 trips per day from tree B. However, 3/4 of the trips from tree B are to tree A, so the average number of trips per day from B to A is (6/5) x (3/4) = 18/20 = 0.9.
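The SMP calculation above is easy to verify numerically. Below is a minimal sketch using exact rational arithmetic; the generic solver and all variable names are my own, not part of the notes (the same routine handles x(P - I) = 0 for a DTMC and xQ = 0 for a CTMC).

```python
from fractions import Fraction as F

def stationary(M):
    """Solve x M = 0 with sum(x) = 1 by exact Gaussian elimination.
    M is P - I for a DTMC (or a rate matrix Q for a CTMC); one balance
    equation is redundant and is replaced by the normalization."""
    n = len(M)
    A = [[F(M[j][i]) for j in range(n)] for i in range(n)]  # transpose
    A[-1] = [F(1)] * n                  # last equation: sum(x) = 1
    b = [F(0)] * (n - 1) + [F(1)]
    for c in range(n):                  # forward elimination with pivoting
        p = next(r for r in range(c, n) if A[r][c] != 0)
        A[c], A[p], b[c], b[p] = A[p], A[c], b[p], b[c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            A[r] = [A[r][k] - f * A[c][k] for k in range(n)]
            b[r] -= f * b[c]
    x = [F(0)] * n
    for r in range(n - 1, -1, -1):      # back substitution
        x[r] = (b[r] - sum(A[r][k] * x[k] for k in range(r + 1, n))) / A[r][r]
    return x

# Embedded DTMC, rows/columns ordered A, B, C.
P = [[F(0), F(1, 2), F(1, 2)],
     [F(3, 4), F(0), F(1, 4)],
     [F(1), F(0), F(0)]]
hold = [F(5), F(5), F(4)]  # mean holding times: 5 h at A and B, 4 h at C

pi = stationary([[P[i][j] - (1 if i == j else 0) for j in range(3)]
                 for i in range(3)])
print([str(p) for p in pi])    # ['8/17', '4/17', '5/17']

tot = sum(p * h for p, h in zip(pi, hold))
alpha = [p * h / tot for p, h in zip(pi, hold)]
print([str(a) for a in alpha])  # ['1/2', '1/4', '1/4']

# Part (b): trips per day from B to A
#   = 24 h/day * (fraction of time at B) * (departure rate from B) * P[B -> A]
print(float(24 * alpha[1] / hold[1] * P[1][0]))  # 0.9
```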

2. Copier Breakdown and Repair.

Consider two copier machines that are maintained by a single repairman. Machine i functions for an exponentially distributed amount of time with mean 1/γ_i, and thus failure rate γ_i, before it breaks down. The repair times for copier i are exponential with mean 1/β_i, and thus repair rate β_i, but the repairman can only work on one machine at a time. Assume that the machines are repaired in the order in which they fail. Suppose that we wish to construct a CTMC model of this system, with the goal of finding the long-run proportions of time that each copier is working and the repairman is busy. How can we proceed?

(a) Let {X(t) : t ≥ 0} be a stochastic process, where X(t) represents the number of working machines at time t. Is {X(t) : t ≥ 0} a Markov process?

(b) Formulate a CTMC describing the evolution of the system.

(c) Suppose that γ_1 = 1, β_1 = 2, γ_2 = 3 and β_2 = 4. Find the stationary distribution.

(d) Now suppose, instead, that machine 1 is much more important than machine 2, so that the repairman will always service machine 1 if it is down, regardless of the state of machine 2. Formulate a CTMC for this modified problem and find the stationary distribution.

ANSWERS:

(a) Let {X(t) : t ≥ 0} be a stochastic process, where X(t) represents the number of working machines at time t. Is {X(t) : t ≥ 0} a Markov process?

This process is not a Markov process. To be a Markov process, we need the conditional distribution of a future state, given the present state and past states, to depend only upon the present state; i.e., we need

        P(X(t) = j | X(s) = i, X(u), 0 ≤ u < s) = P(X(t) = j | X(s) = i)

for all s and t with s < t, and for all i and j. Here, however, the Markov property does not hold: when both machines are down, the next transition depends on which of the two machines failed first.

(b) Formulate a CTMC describing the evolution of the system.
However, we can use 5 states, with the states being: 0 for no copiers failed, 1 for copier 1 failed (and copier 2 working), 2 for copier 2 failed (and copier 1 working), (1,2) for both copiers down (failed) with copier 1 having failed first and being repaired, and (2,1) for both copiers down with copier 2 having failed first and being repaired. (Of course, these states could be relabelled 0, 1, 2, 3 and 4, but we do not do that.) From the problem specification, it is natural to work with transition rates, where these transition rates are obtained directly from the originally-specified failure rates and repair rates (the rates of the exponential random variables). In Figure 1 we display a rate diagram showing the possible transitions with these 5 states together with the appropriate rates. It can be helpful to construct such rate diagrams as part of the modelling process. From Figure 1, we see that there are 8 possible transitions. The 8 possible transitions should clearly have transition rates

        Q_{0,1} = γ_1,        Q_{0,2} = γ_2,
        Q_{1,0} = β_1,        Q_{1,(1,2)} = γ_2,
        Q_{2,0} = β_2,        Q_{2,(2,1)} = γ_1,
        Q_{(1,2),2} = β_1,    Q_{(2,1),1} = β_2.
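These eight rates already determine the full rate matrix, once each diagonal entry is set to minus its own row sum (the standard CTMC convention, used below). A minimal sketch of that construction, using the part (c) values γ_1 = 1, β_1 = 2, γ_2 = 3, β_2 = 4; the dictionary layout and all names are my own:

```python
from fractions import Fraction as F

# States in a fixed order; "(1,2)" means both down, copier 1 being repaired.
states = ["0", "1", "2", "(1,2)", "(2,1)"]
g1, b1, g2, b2 = F(1), F(2), F(3), F(4)  # failure/repair rates from part (c)

# The 8 possible transitions and their rates, as specified in the text.
rates = {
    ("0", "1"): g1, ("0", "2"): g2,
    ("1", "0"): b1, ("1", "(1,2)"): g2,
    ("2", "0"): b2, ("2", "(2,1)"): g1,
    ("(1,2)", "2"): b1, ("(2,1)", "1"): b2,
}

# Build Q: off-diagonal entries from the rate list, each diagonal entry
# set to minus the sum of the other entries in its row.
idx = {s: k for k, s in enumerate(states)}
n = len(states)
Q = [[F(0)] * n for _ in range(n)]
for (i, j), r in rates.items():
    Q[idx[i]][idx[j]] = r
for k in range(n):
    Q[k][k] = -sum(Q[k][j] for j in range(n) if j != k)

# Every row of a rate matrix sums to zero.
assert all(sum(row) == 0 for row in Q)
print([int(x) for x in Q[0]])  # [-4, 1, 3, 0, 0]
```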

Figure 1: A rate diagram showing the transition rates among the 5 states in Problem 2, involving copier breakdown and repair (γ_j = rate copier j fails, β_j = rate copier j is repaired).

We can thus define the transition-rate matrix Q. For this purpose, recall that the diagonal entries are minus the sum of the non-diagonal row elements, i.e.,

        Q_{i,i} = - Σ_{j: j ≠ i} Q_{i,j}   for all i.

That can be explained by the fact that the rate matrix Q is defined to be the derivative (from above) of the transition matrix P(t) at t = 0; see the CTMC notes. In other words, the rate matrix should be

        Q =
                   0            1            2         (1,2)   (2,1)
           0   [ -(γ_1+γ_2)    γ_1          γ_2          0       0   ]
           1   [    β_1     -(γ_2+β_1)       0          γ_2      0   ]
           2   [    β_2         0        -(γ_1+β_2)      0      γ_1  ]
        (1,2)  [     0          0           β_1        -β_1      0   ]
        (2,1)  [     0         β_2           0           0     -β_2  ]

(c) Suppose that γ_1 = 1, β_1 = 2, γ_2 = 3 and β_2 = 4. Find the stationary distribution.

We first substitute the specified numbers for the rates γ_i and β_i in the rate matrix Q above,

obtaining

        Q =
                  0    1    2  (1,2) (2,1)
           0   [ -4    1    3    0     0  ]
           1   [  2   -5    0    3     0  ]
           2   [  4    0   -5    0     1  ]
        (1,2)  [  0    0    2   -2     0  ]
        (2,1)  [  0    4    0    0    -4  ]

Then we solve the system of linear equations αQ = 0 with αe = 1, which is easy to do with a computer and is not too hard by hand. Just as with DTMCs, one of the equations in αQ = 0 is redundant, so that with the extra added equation αe = 1, there is a unique solution. Performing the calculation, we see that the limiting probability vector is

        α ≡ (α_0, α_1, α_2, α_{(1,2)}, α_{(2,1)}) = (44/129, 16/129, 36/129, 24/129, 9/129).

Thus, the long-run proportion of time that copier 1 is working is α_0 + α_2 = 80/129 ≈ 0.62, while the long-run proportion of time that copier 2 is working is α_0 + α_1 = 60/129 ≈ 0.47. The long-run proportion of time that the repairman is busy is

        α_1 + α_2 + α_{(1,2)} + α_{(2,1)} = 1 - α_0 = 85/129 ≈ 0.659.

(d) Now suppose, instead, that machine 1 is much more important than machine 2, so that the repairman will always service machine 1 if it is down, regardless of the state of machine 2. Formulate a CTMC for this modified problem and find the stationary distribution.

With this alternative repair strategy, we can revise the state space. Now it does suffice to use 4 states, letting the state correspond to the set of failed copiers, because now we know what the repairman will do when both copiers are down; he will always work on copier 1. Thus it suffices to use the single state (1,2) to indicate that both machines have failed. There now is only one possible transition from state (1,2): Q_{(1,2),2} = β_1. We display the revised rate diagram in Figure 2 below. The associated rate matrix is now

        Q =
                   0            1            2         (1,2)
           0   [ -(γ_1+γ_2)    γ_1          γ_2          0  ]
           1   [    β_1     -(γ_2+β_1)       0          γ_2 ]
           2   [    β_2         0        -(γ_1+β_2)     γ_1 ]
        (1,2)  [     0          0           β_1        -β_1 ]

or, with the numbers assigned to the parameters,

        Q =
                  0    1    2  (1,2)
           0   [ -4    1    3    0  ]
           1   [  2   -5    0    3  ]
           2   [  4    0   -5    1  ]
        (1,2)  [  0    0    2   -2  ]

Just as before, we obtain the limiting probabilities by solving αQ = 0 with αe = 1. Now we obtain

        α ≡ (α_0, α_1, α_2, α_{(1,2)}) = (20/57, 4/57, 18/57, 15/57).

Thus, the long-run proportion of time that copier 1 is working is α_0 + α_2 = 38/57 = 2/3 ≈ 0.67, while the long-run proportion of time that copier 2 is working is α_0 + α_1 = 24/57 ≈ 0.42.
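The "easy to do with a computer" step for part (c) can be sketched concretely. Below, a short exact-fraction solver replaces the redundant balance equation by the normalization αe = 1; the solver and all names are my own, not part of the notes:

```python
from fractions import Fraction as F

def stationary(Q):
    """Solve alpha Q = 0 with sum(alpha) = 1 by exact Gaussian elimination;
    one balance equation is redundant and is replaced by the normalization."""
    n = len(Q)
    A = [[F(Q[j][i]) for j in range(n)] for i in range(n)]  # transpose
    A[-1] = [F(1)] * n                  # last equation: sum(alpha) = 1
    b = [F(0)] * (n - 1) + [F(1)]
    for c in range(n):                  # forward elimination with pivoting
        p = next(r for r in range(c, n) if A[r][c] != 0)
        A[c], A[p], b[c], b[p] = A[p], A[c], b[p], b[c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            A[r] = [A[r][k] - f * A[c][k] for k in range(n)]
            b[r] -= f * b[c]
    x = [F(0)] * n
    for r in range(n - 1, -1, -1):      # back substitution
        x[r] = (b[r] - sum(A[r][k] * x[k] for k in range(r + 1, n))) / A[r][r]
    return x

# Rate matrix for part (c), states ordered 0, 1, 2, (1,2), (2,1).
Q = [[-4, 1, 3, 0, 0],
     [2, -5, 0, 3, 0],
     [4, 0, -5, 0, 1],
     [0, 0, 2, -2, 0],
     [0, 4, 0, 0, -4]]

alpha = stationary(Q)                    # (44, 16, 36, 24, 9) / 129
print(float(alpha[0] + alpha[2]))        # copier 1 uptime: 80/129, approx 0.62
print(float(alpha[0] + alpha[1]))        # copier 2 uptime: 60/129, approx 0.47
print(float(1 - alpha[0]))               # repairman busy: 85/129, approx 0.659
```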

Figure 2: A revised rate diagram showing the transition rates among the 4 states in Problem 2, where the repairman always works on copier 1 first when both have failed (γ_j = rate copier j fails, β_j = rate copier j is repaired).

The new strategy has increased the long-run proportion of time copier 1 is working from 0.62 to 0.67, at the expense of decreasing the long-run proportion of time copier 2 is working from 0.47 to 0.42. The long-run proportion of time the repairman is busy is 1 - α_0 = 37/57 ≈ 0.649, which is very slightly less than before.

We conclude by making some further commentary. We might think that the revised strategy is wasteful, because the repairman quits working on copier 2 when copier 1 fails after copier 2 previously failed. By shifting to work on copier 1, we might think that the repairman is being inefficient, wasting his expended effort working on copier 2 and making it more likely that both copiers will remain failed. In practice, under other assumptions, that might indeed be true, but here, because of the lack-of-memory property of the exponential distribution, the expended work on copier 2 has no influence on the remaining required repair times. From a pure efficiency perspective, it might be advantageous to give one of the two copiers priority at this point, but not because of the expended work on copier 2. On the other hand, we might prefer the original strategy from a fairness perspective. In any case, the CTMC model lets us analyze the consequences of alternative strategies. As always, the relevance of the conclusions depends on the validity of the model assumptions. But even when the model assumptions are not completely realistic or not strongly verified, the analysis can provide insight.
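The four-state model of part (d) can be checked the same way. A sketch repeating the same exact-fraction solver so that it stands alone (the solver and names are my own):

```python
from fractions import Fraction as F

def stationary(Q):
    """Solve alpha Q = 0 with sum(alpha) = 1 by exact Gaussian elimination;
    one balance equation is redundant and is replaced by the normalization."""
    n = len(Q)
    A = [[F(Q[j][i]) for j in range(n)] for i in range(n)]  # transpose
    A[-1] = [F(1)] * n                  # last equation: sum(alpha) = 1
    b = [F(0)] * (n - 1) + [F(1)]
    for c in range(n):                  # forward elimination with pivoting
        p = next(r for r in range(c, n) if A[r][c] != 0)
        A[c], A[p], b[c], b[p] = A[p], A[c], b[p], b[c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            A[r] = [A[r][k] - f * A[c][k] for k in range(n)]
            b[r] -= f * b[c]
    x = [F(0)] * n
    for r in range(n - 1, -1, -1):      # back substitution
        x[r] = (b[r] - sum(A[r][k] * x[k] for k in range(r + 1, n))) / A[r][r]
    return x

# Revised rate matrix for part (d), states ordered 0, 1, 2, (1,2);
# from (1,2) the repairman always works on copier 1, so the only exit is
# to state 2 at rate beta_1 = 2.
Q = [[-4, 1, 3, 0],
     [2, -5, 0, 3],
     [4, 0, -5, 1],
     [0, 0, 2, -2]]

alpha = stationary(Q)                    # (20, 4, 18, 15) / 57
print(float(alpha[0] + alpha[2]))        # copier 1 uptime: 2/3, up from approx 0.62
print(float(alpha[0] + alpha[1]))        # copier 2 uptime: 24/57, down from approx 0.47
print(float(1 - alpha[0]))               # repairman busy: 37/57, approx 0.649
```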