Markov Processes and Queues


MIT 2.853/2.854 Introduction to Manufacturing Systems. Markov Processes and Queues. Stanley B. Gershwin, Laboratory for Manufacturing and Productivity, Massachusetts Institute of Technology. Markov Processes 1 Copyright © 2016 Stanley B. Gershwin.

Stochastic processes. t is time. X(·) is a stochastic process if X(t) is a random variable for every t. t is a scalar; it can be discrete or continuous. X(t) can be discrete or continuous, scalar or vector. Markov Processes 2 Copyright © 2016 Stanley B. Gershwin.

Stochastic processes: Markov processes. A Markov process is a stochastic process in which the probability of finding X at some value at time t + δt depends only on the value of X at time t. Or, let x(s), s ≤ t, be the history of the values of X before time t and let A be a possible value of X. Then P{X(t + δt) = A | X(s) = x(s), s ≤ t} = P{X(t + δt) = A | X(t) = x(t)}. Markov Processes 3 Copyright © 2016 Stanley B. Gershwin.

Stochastic processes: Markov processes. In words: if we know what X was at time t, we don't gain any more useful information about X(t + δt) by also knowing what X was at any time earlier than t. This is the definition of a class of mathematical models. It is NOT a statement about reality!! That is, not everything is a Markov process. Markov Processes 4 Copyright © 2016 Stanley B. Gershwin.

Markov processes: Example. I have $100 at time t = 0. At every time t ≥ 1, I have $N(t). A (possibly biased) coin is flipped. If it lands with H showing, N(t + 1) = N(t) + 1. If it lands with T showing, N(t + 1) = N(t) − 1. N(t) is a Markov process. Why? Markov Processes 5 Copyright © 2016 Stanley B. Gershwin.
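To make the example concrete, here is a minimal Python sketch of this coin-flip wealth process; the bias p, the horizon, and the random seed are illustrative choices, not part of the slides.

```python
# Simulate the coin-flip wealth process N(t): start with $100,
# win or lose $1 per flip. Parameters here are illustrative.
import random

def simulate_wealth(n_steps=20, p=0.5, n0=100, seed=1):
    random.seed(seed)
    n = n0
    path = [n]
    for _ in range(n_steps):
        n += 1 if random.random() < p else -1   # H: +$1, T: -$1
        path.append(n)
    return path

print(simulate_wealth())
# N(t+1) depends only on N(t) and the fresh coin flip, not on the
# earlier history -- that is why N(t) is a Markov process.
```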

Discrete state, discrete time: States and transitions. States can be numbered 0, 1, 2, 3, ... (or with multiple indices if that is more convenient). Time can be numbered 0, 1, 2, 3, ... (or 0, Δ, 2Δ, 3Δ, ... if more convenient). The probability of a transition from j to i in one time unit is often written P_ij, where P_ij = P{X(t + 1) = i | X(t) = j}. Markov Processes 6 Copyright © 2016 Stanley B. Gershwin.

Discrete state, discrete time: States and transitions. Transition graph. [Figure: transition graph on seven states, 1 through 7, with arcs labeled by transition probabilities such as P_14, P_24, P_64, and P_45.] P_ij is a probability. Note that P_ii = 1 − Σ_{m, m≠i} P_mi. Markov Processes 7 Copyright © 2016 Stanley B. Gershwin.

Discrete state, discrete time: States and transitions. Example: H(t) is the number of Hs after t coin flips. Assume the probability of H is p. [Figure: transition graph on states 0, 1, 2, 3, 4, ...; each state n moves to n + 1 with probability p and stays at n with probability 1 − p.] Markov Processes 8 Copyright © 2016 Stanley B. Gershwin.

Discrete state, discrete time: States and transitions. Example: coin flip bets on Slide 5. Assume the probability of H is p. [Figure: transition graph on states ..., 96, 97, 98, ..., 103, ...; each state n moves to n + 1 with probability p and to n − 1 with probability 1 − p.] Markov Processes 9 Copyright © 2016 Stanley B. Gershwin.

Markov processes: Notation. {X(t) = i} is the event that the random quantity X(t) has value i. Example: X(t) is any state in the graph on Slide 7; i is a particular state. Define π_i(t) = P{X(t) = i}. Normalization equation: Σ_i π_i(t) = 1. Markov Processes 10 Copyright © 2016 Stanley B. Gershwin.

Markov processes: Transition equations. Transition equations are an application of the law of total probability. [Detail of the graph on Slide 7, showing state 4 and its neighbors.] π_4(t + 1) = π_5(t)P_45 + π_4(t)(1 − P_14 − P_24 − P_64). (Remember that P_45 = P{X(t + 1) = 4 | X(t) = 5} and P_44 = P{X(t + 1) = 4 | X(t) = 4} = 1 − P_14 − P_24 − P_64.) Markov Processes 11 Copyright © 2016 Stanley B. Gershwin.

Markov processes: Transition equations. [Figure: the transition graph from Slide 7.] P{X(t + 1) = 2} = P{X(t + 1) = 2 | X(t) = 1}P{X(t) = 1} + P{X(t + 1) = 2 | X(t) = 2}P{X(t) = 2} + P{X(t + 1) = 2 | X(t) = 4}P{X(t) = 4} + P{X(t + 1) = 2 | X(t) = 5}P{X(t) = 5}. Markov Processes 12 Copyright © 2016 Stanley B. Gershwin.

Markov processes: Transition equations. Define P_ij = P{X(t + 1) = i | X(t) = j}. Transition equations: π_i(t + 1) = Σ_j P_ij π_j(t). (Law of Total Probability.) Normalization equation: Σ_i π_i(t) = 1. Markov Processes 13 Copyright © 2016 Stanley B. Gershwin.

Markov processes: Transition equations. [Figure: the transition graph from Slide 7.] Therefore, since P_ij = P{X(t + 1) = i | X(t) = j} and π_i(t) = P{X(t) = i}, π_2(t + 1) = P_21 π_1(t) + P_22 π_2(t) + P_24 π_4(t) + P_25 π_5(t). Note that P_22 = 1 − P_52. Markov Processes 14 Copyright © 2016 Stanley B. Gershwin.

Discrete state, discrete time: Transition equations, matrix-vector form. For an n-state system, define π(t) = (π_1(t), π_2(t), ..., π_n(t))^T, P = the n × n matrix with (i, j) entry P_ij, and ν = (1, 1, ..., 1)^T. Transition equations: π(t + 1) = Pπ(t). Normalization equation: ν^T π(t) = 1. Other facts: ν^T P = ν^T (each column of P sums to 1); π(t) = P^t π(0). Markov Processes 15 Copyright © 2016 Stanley B. Gershwin.
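As a concrete illustration of π(t + 1) = Pπ(t), here is a short sketch; the 3-state matrix is made up for illustration and follows the column convention of these slides (P[i, j] = P{X(t + 1) = i | X(t) = j}, so each column sums to 1).

```python
# Propagate pi(t+1) = P pi(t) for an illustrative 3-state chain.
import numpy as np

P = np.array([[0.9, 0.2, 0.0],
              [0.1, 0.7, 0.3],
              [0.0, 0.1, 0.7]])       # column j holds the outflows of state j
assert np.allclose(P.sum(axis=0), 1.0)  # each column sums to 1

pi = np.array([1.0, 0.0, 0.0])        # pi(0): start in state 0 with certainty
for _ in range(50):
    pi = P @ pi                       # one step of the transition equations
print(pi)                             # approaches the steady-state vector
```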

Markov processes: Steady state. Steady state: π_i = lim_{t→∞} π_i(t), if it exists. Steady-state transition equations: π_i = Σ_j P_ij π_j. Alternatively, steady-state balance equations: π_i Σ_{m, m≠i} P_mi = Σ_{j, j≠i} P_ij π_j. Normalization equation: Σ_i π_i = 1. Markov Processes 16 Copyright © 2016 Stanley B. Gershwin.

Discrete state, discrete time: Steady state, matrix-vector form. Steady state: π = lim_{t→∞} π(t), if it exists. Steady-state transition equations: π = Pπ. Normalization equation: ν^T π = 1. Fact: π = lim_{t→∞} P^t π(0), if it exists. Markov Processes 17 Copyright © 2016 Stanley B. Gershwin.
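A common way to compute the steady state directly is to solve (I − P)π = 0 together with ν^T π = 1, replacing one redundant equation with the normalization; a sketch, using the same illustrative matrix as above:

```python
# Solve pi = P pi with sum(pi) = 1 by overwriting one redundant row of (I - P).
import numpy as np

P = np.array([[0.9, 0.2, 0.0],
              [0.1, 0.7, 0.3],
              [0.0, 0.1, 0.7]])   # illustrative column-stochastic matrix
n = P.shape[0]

A = np.eye(n) - P
A[-1, :] = 1.0                   # replace last equation with normalization
b = np.zeros(n)
b[-1] = 1.0
pi = np.linalg.solve(A, b)
print(pi, pi.sum())              # steady-state probabilities, summing to 1
```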

Markov processes: Balance equations. [Detail of the graph on Slide 7: a circle drawn around state 4, with arc P_45 entering and arcs P_14, P_24, P_64 leaving.] Balance equation: π_4(P_14 + P_24 + P_64) = π_5 P_45, in steady state only. Intuitive meaning: the average number of transitions into the circle per unit time equals the average number of transitions out of the circle per unit time. Markov Processes 18 Copyright © 2016 Stanley B. Gershwin.

Markov processes: Geometric distribution. Consider a two-state system. The system can go from 1 to 0, but not from 0 to 1. [Figure: two states, 1 and 0; arc from 1 to 0 with probability p; self-loop at 1 with probability 1 − p.] Let p be the conditional probability that the system is in state 0 at time t + 1, given that it is in state 1 at time t. Then p = P[α(t + 1) = 0 | α(t) = 1]. Markov Processes 19 Copyright © 2016 Stanley B. Gershwin.

Discrete state, discrete time: Geometric distribution, transition equations. Let π(α, t) be the probability of being in state α at time t. Then, since π(0, t + 1) = P[α(t + 1) = 0 | α(t) = 1]P[α(t) = 1] + P[α(t + 1) = 0 | α(t) = 0]P[α(t) = 0], we have π(0, t + 1) = pπ(1, t) + π(0, t), π(1, t + 1) = (1 − p)π(1, t), and the normalization equation π(1, t) + π(0, t) = 1. Markov Processes 20 Copyright © 2016 Stanley B. Gershwin.

Discrete state, discrete time: Geometric distribution, transient probability distribution. Assume that π(1, 0) = 1. Then the solution is π(0, t) = 1 − (1 − p)^t, π(1, t) = (1 − p)^t. Markov Processes 21 Copyright © 2016 Stanley B. Gershwin.

Discrete state, discrete time: Geometric distribution, transient probability distribution. [Figure: Geometric distribution. Plot of the probabilities π(0, t) and π(1, t) versus t for 0 ≤ t ≤ 30.] Markov Processes 22 Copyright © 2016 Stanley B. Gershwin.

Markov processes: Unreliable machine. 1 = up; 0 = down. [Figure: two states; arc from 1 to 0 with probability p, arc from 0 to 1 with probability r; self-loops with probabilities 1 − p at state 1 and 1 − r at state 0.] Markov Processes 23 Copyright © 2016 Stanley B. Gershwin.

Discrete state, discrete time: Unreliable machine, transient probability distribution. The probability distribution satisfies π(0, t + 1) = π(0, t)(1 − r) + π(1, t)p, π(1, t + 1) = π(0, t)r + π(1, t)(1 − p). Markov Processes 24 Copyright © 2016 Stanley B. Gershwin.

Discrete state, discrete time: Unreliable machine, transient probability distribution. It is not hard to show that π(0, t) = π(0, 0)(1 − p − r)^t + (p/(r + p))[1 − (1 − p − r)^t], π(1, t) = π(1, 0)(1 − p − r)^t + (r/(r + p))[1 − (1 − p − r)^t]. Markov Processes 25 Copyright © 2016 Stanley B. Gershwin.
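The closed form can be checked numerically against the transition equations on the previous slide; p and r below are illustrative values.

```python
# Verify the closed-form transient solution of the discrete-time unreliable
# machine against direct iteration. Start with the machine up: pi(1,0) = 1.
p, r = 0.1, 0.3                # illustrative failure/repair probabilities
pi0, pi1 = 0.0, 1.0

for t in range(1, 21):
    pi0, pi1 = pi0*(1 - r) + pi1*p, pi0*r + pi1*(1 - p)
    closed = (p/(r + p)) * (1 - (1 - p - r)**t)  # pi(0,0) = 0 term vanishes
    assert abs(pi0 - closed) < 1e-12
print(pi0, p/(r + p))          # transient value approaching the steady state
```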

Discrete state, discrete time: Unreliable machine, transient probability distribution. [Figure: Discrete-time unreliable machine. Plot of the probabilities π(0, t) and π(1, t) versus t for 0 ≤ t ≤ 100.] Markov Processes 26 Copyright © 2016 Stanley B. Gershwin.

Discrete state, discrete time: Unreliable machine, steady-state probability distribution. As t → ∞, π(0, t) → p/(r + p), π(1, t) → r/(r + p), which is the solution of π(0) = π(0)(1 − r) + π(1)p, π(1) = π(0)r + π(1)(1 − p). Markov Processes 27 Copyright © 2016 Stanley B. Gershwin.

Discrete state, discrete time: Unreliable machine, efficiency. If a machine makes one part per time unit when it is operational, its average production rate is π(1) = r/(r + p). This quantity is the efficiency of the machine. If the machine makes one part per τ time units when it is operational, its average production rate is P = (1/τ)(r/(r + p)). Markov Processes 28 Copyright © 2016 Stanley B. Gershwin.

States and transitions. States can be numbered 0, 1, 2, 3, ... (or with multiple indices if that is more convenient). Time is a real number, defined on (−∞, ∞) or a smaller interval. The probability of a transition from j to i during [t, t + δt] is approximately λ_ij δt, where δt is small, and λ_ij δt ≈ P{X(t + δt) = i | X(t) = j} for i ≠ j. Markov Processes 29 Copyright © 2016 Stanley B. Gershwin.

States and transitions. More precisely, λ_ij δt = P{X(t + δt) = i | X(t) = j} + o(δt) for i ≠ j, where o(δt) is a function that satisfies lim_{δt→0} o(δt)/δt = 0. This implies that for small δt, o(δt) ≪ δt. Markov Processes 30 Copyright © 2016 Stanley B. Gershwin.

States and transitions: Transition graph. [Figure: transition graph on seven states, 1 through 7, with arcs labeled by transition rates such as λ_14, λ_24, λ_64, and λ_45.] λ_ij is a probability rate. λ_ij δt is a probability. Compare with the discrete-time graph. Markov Processes 31 Copyright © 2016 Stanley B. Gershwin.

States and transitions. One of the transition equations: define π_i(t) = P{X(t) = i}. Then for δt small, π_5(t + δt) ≈ (1 − λ_25 δt − λ_45 δt − λ_65 δt)π_5(t) + λ_52 δtπ_2(t) + λ_53 δtπ_3(t) + λ_56 δtπ_6(t) + λ_57 δtπ_7(t). Markov Processes 32 Copyright © 2016 Stanley B. Gershwin.

States and transitions. Or, π_5(t + δt) − π_5(t) ≈ −(λ_25 + λ_45 + λ_65)π_5(t)δt + (λ_52 π_2(t) + λ_53 π_3(t) + λ_56 π_6(t) + λ_57 π_7(t))δt. Markov Processes 33 Copyright © 2016 Stanley B. Gershwin.

States and transitions. Or, lim_{δt→0} [π_5(t + δt) − π_5(t)]/δt = dπ_5(t)/dt = −(λ_25 + λ_45 + λ_65)π_5(t) + λ_52 π_2(t) + λ_53 π_3(t) + λ_56 π_6(t) + λ_57 π_7(t). Markov Processes 34 Copyright © 2016 Stanley B. Gershwin.

States and transitions. Define for convenience λ_55 = −(λ_25 + λ_45 + λ_65). Then dπ_5(t)/dt = λ_55 π_5(t) + λ_52 π_2(t) + λ_53 π_3(t) + λ_56 π_6(t) + λ_57 π_7(t). Markov Processes 35 Copyright © 2016 Stanley B. Gershwin.

States and transitions. Define π_i(t) = P{X(t) = i}. It is convenient to define λ_ii = −Σ_{j≠i} λ_ji. Transition equations: dπ_i(t)/dt = Σ_j λ_ij π_j(t). Normalization equation: Σ_i π_i(t) = 1. Often confusing!!! Markov Processes 36 Copyright © 2016 Stanley B. Gershwin.

Transition equations, matrix-vector form. Define π(t) and ν as before. Define Λ = the n × n matrix with (i, j) entry λ_ij. Transition equations: dπ(t)/dt = Λπ(t). Normalization equation: ν^T π = 1. Markov Processes 37 Copyright © 2016 Stanley B. Gershwin.
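The transition equations dπ(t)/dt = Λπ(t) have the solution π(t) = e^{Λt}π(0), which is easy to evaluate numerically. A sketch, using the continuous-time unreliable machine of a later slide with illustrative rates and scipy's matrix exponential:

```python
# Transient solution pi(t) = expm(Lambda t) pi(0) for a 2-state chain
# (0 = down, 1 = up). Rates r (repair) and p (failure) are illustrative.
import numpy as np
from scipy.linalg import expm

r, p = 0.2, 0.05
Lam = np.array([[-r,  p],    # column j holds the rates out of state j;
                [ r, -p]])   # columns sum to 0 by the lambda_ii convention
assert np.allclose(Lam.sum(axis=0), 0.0)

pi0 = np.array([0.0, 1.0])   # machine starts up
for t in (1.0, 10.0, 100.0):
    print(t, expm(Lam * t) @ pi0)  # tends to (p/(r+p), r/(r+p)) = (0.2, 0.8)
```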

Steady state. Steady state: π_i = lim_{t→∞} π_i(t), if it exists. Steady-state transition equations: 0 = Σ_j λ_ij π_j. Alternatively, steady-state balance equations: π_i Σ_{m, m≠i} λ_mi = Σ_{j, j≠i} λ_ij π_j. Normalization equation: Σ_i π_i = 1. Markov Processes 38 Copyright © 2016 Stanley B. Gershwin.

Steady state, matrix-vector form. Steady state: π = lim_{t→∞} π(t), if it exists. Steady-state transition equations: 0 = Λπ. Normalization equation: ν^T π = 1. Markov Processes 39 Copyright © 2016 Stanley B. Gershwin.

Sources of confusion in continuous-time models. Never draw self-loops in continuous-time Markov process graphs. Never write 1 − λ_14 − λ_24 − λ_64. Write 1 − (λ_14 + λ_24 + λ_64)δt, or −(λ_14 + λ_24 + λ_64). λ_ii = −Σ_{j≠i} λ_ji is NOT a rate and NOT a probability. It is ONLY a convenient notation. Markov Processes 40 Copyright © 2016 Stanley B. Gershwin.

Exponential distribution. Exponential random variable T: the time to move from state 1 to state 0. [Figure: two states, 1 and 0, with an arc from 1 to 0 labeled with rate µ.] Markov Processes 41 Copyright © 2016 Stanley B. Gershwin.

Exponential distribution. π(0, t + δt) = P[α(t + δt) = 0 | α(t) = 1]P[α(t) = 1] + P[α(t + δt) = 0 | α(t) = 0]P[α(t) = 0], or π(0, t + δt) = µδtπ(1, t) + π(0, t) + o(δt), or dπ(0, t)/dt = µπ(1, t). Markov Processes 42 Copyright © 2016 Stanley B. Gershwin.

Exponential distribution. Or, dπ(1, t)/dt = −µπ(1, t). If π(1, 0) = 1, then π(1, t) = e^{−µt} and π(0, t) = 1 − e^{−µt}. Markov Processes 43 Copyright © 2016 Stanley B. Gershwin.

Exponential distribution. The probability that the transition takes place at some T ∈ [t, t + δt] is P[α(t + δt) = 0 and α(t) = 1] = P[α(t + δt) = 0 | α(t) = 1]P[α(t) = 1] = (µδt)(e^{−µt}). The exponential density function is therefore µe^{−µt} for t ≥ 0 and 0 for t < 0. The time of the transition from 1 to 0 is said to be exponentially distributed with rate µ. The expected transition time is 1/µ. (Prove it!) Markov Processes 44 Copyright © 2016 Stanley B. Gershwin.

Exponential distribution. f(t) = µe^{−µt} for t ≥ 0; f(t) = 0 otherwise. F(t) = 1 − e^{−µt} for t ≥ 0; F(t) = 0 otherwise. E[T] = 1/µ, V[T] = 1/µ². Therefore σ = E[T], so cv = 1. [Figure: plots of the density f(t) and the distribution function F(t) versus t.] Markov Processes 45 Copyright © 2016 Stanley B. Gershwin.

Markov processes: Exponential density function. [Figure: exponential density function and a small number of samples.] Markov Processes 46 Copyright © 2016 Stanley B. Gershwin.

Exponential distribution: some properties. Memorylessness: P(T > t + x | T > x) = P(T > t). P(t ≤ T ≤ t + δt | T ≥ t) ≈ µδt for small δt. Markov Processes 47 Copyright © 2016 Stanley B. Gershwin.

Exponential distribution: some properties. If T_1, ..., T_n are independent exponentially distributed random variables with parameters µ_1, ..., µ_n, and T = min(T_1, ..., T_n), then T is an exponentially distributed random variable with parameter µ = µ_1 + ... + µ_n. Markov Processes 48 Copyright © 2016 Stanley B. Gershwin.
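A quick Monte Carlo check of this minimum property, with illustrative rates:

```python
# The minimum of independent exponentials is exponential with the summed rate.
import numpy as np

rng = np.random.default_rng(0)
mus = np.array([1.0, 2.0, 3.0])            # illustrative rates
T = rng.exponential(1.0 / mus, size=(100_000, 3)).min(axis=1)
print(T.mean(), 1.0 / mus.sum())           # both close to 1/6
print(T.std() / T.mean())                  # cv close to 1, as for an exponential
```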

Unreliable machine. Continuous-time unreliable machine. [Figure: two states, up and down; arc from up to down with rate p, arc from down to up with rate r.] Markov Processes 49 Copyright © 2016 Stanley B. Gershwin.

Unreliable machine. From the Law of Total Probability: P({the machine is up at time t + δt}) = P({the machine is up at time t + δt | the machine was up at time t}) P({the machine was up at time t}) + P({the machine is up at time t + δt | the machine was down at time t}) P({the machine was down at time t}) + o(δt), and similarly for P({the machine is down at time t + δt}). Markov Processes 50 Copyright © 2016 Stanley B. Gershwin.

Unreliable machine. Probability distribution notation and dynamics: π(1, t) = the probability that the machine is up at time t; π(0, t) = the probability that the machine is down at time t. P(the machine is up at time t + δt | the machine was up at time t) = 1 − pδt. P(the machine is up at time t + δt | the machine was down at time t) = rδt. Markov Processes 51 Copyright © 2016 Stanley B. Gershwin.

Unreliable machine. Therefore π(1, t + δt) = (1 − pδt)π(1, t) + rδtπ(0, t) + o(δt). Similarly, π(0, t + δt) = pδtπ(1, t) + (1 − rδt)π(0, t) + o(δt). Markov Processes 52 Copyright © 2016 Stanley B. Gershwin.

Unreliable machine. Or, π(1, t + δt) − π(1, t) = −pδtπ(1, t) + rδtπ(0, t) + o(δt), or [π(1, t + δt) − π(1, t)]/δt = −pπ(1, t) + rπ(0, t) + o(δt)/δt. Markov Processes 53 Copyright © 2016 Stanley B. Gershwin.

Or, dπ(0, t)/dt = −π(0, t)r + π(1, t)p, dπ(1, t)/dt = π(0, t)r − π(1, t)p. Markov Processes 54 Copyright © 2016 Stanley B. Gershwin.

Markov processes: Unreliable machine, solution. π(0, t) = p/(r + p) + [π(0, 0) − p/(r + p)] e^{−(r+p)t}, π(1, t) = 1 − π(0, t). As t → ∞, π(0) → p/(r + p), π(1) → r/(r + p). Markov Processes 55 Copyright © 2016 Stanley B. Gershwin.

Markov processes: Unreliable machine, steady-state solution. If the machine makes µ parts per time unit on the average when it is operational, the overall average production rate is µπ(1) = µr/(r + p). Markov Processes 56 Copyright © 2016 Stanley B. Gershwin.

Poisson Process. [Figure: timeline starting at 0 with event times T_1, T_1 + T_2, T_1 + T_2 + T_3, T_1 + T_2 + T_3 + T_4.] Let T_i, i = 1, ..., be a set of independent exponentially distributed random variables with parameter λ. Each random variable may represent the time between occurrences of a repeating event. Examples: customer arrivals, clicks of a Geiger counter. Then Σ_{i=1}^n T_i is the time required for n such events. Markov Processes 57 Copyright © 2016 Stanley B. Gershwin.

Poisson Process. [Figure: the same timeline.] Informally: N(t) is the number of events that occur between 0 and t. Formally: define N(t) = 0 if T_1 > t, and N(t) = n such that Σ_{i=1}^n T_i ≤ t and Σ_{i=1}^{n+1} T_i > t. Then N(t) is a Poisson process with parameter λ. Markov Processes 58 Copyright © 2016 Stanley B. Gershwin.

Poisson Process. P(N(t) = n) = e^{−λt}(λt)^n / n!. [Figure: Poisson distribution, P(N(t) = n) versus n for λt = 6.] Markov Processes 59 Copyright © 2016 Stanley B. Gershwin.

Poisson Process. P(N(t) = n) = e^{−λt}(λt)^n / n!, λ = 2. [Figure: P(N(t) = n) versus t for n = 1, 2, 3, 4, 5, 10.] Markov Processes 60 Copyright © 2016 Stanley B. Gershwin.
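The definition above suggests a direct simulation: draw exponential inter-event times, count the events in [0, t], and compare with the formula. The values of λ, t, and the number of trials below are illustrative.

```python
# Build N(t) from exponential inter-event times and compare its empirical
# distribution with P(N(t) = n) = exp(-lt) (lt)^n / n!.
import math
import numpy as np

rng = np.random.default_rng(0)
lam, t, trials = 2.0, 3.0, 100_000

counts = np.empty(trials, dtype=int)
for k in range(trials):
    total, n = rng.exponential(1.0 / lam), 0
    while total <= t:                     # accumulate T_1 + T_2 + ... past t
        n += 1
        total += rng.exponential(1.0 / lam)
    counts[k] = n

for n in range(4, 9):
    pmf = math.exp(-lam*t) * (lam*t)**n / math.factorial(n)
    print(n, round((counts == n).mean(), 4), round(pmf, 4))
```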

Queueing theory: M/M/1 Queue. [Figure: a queue with arrival rate λ feeding a single server with service rate µ.] The simplest model is the M/M/1 queue: exponentially distributed inter-arrival times, with mean 1/λ; λ is the arrival rate (customers/time). (Poisson arrival process.) Exponentially distributed service times, with mean 1/µ; µ is the service rate (customers/time). 1 server. Infinite waiting area. Define the utilization ρ = λ/µ. Markov Processes 61 Copyright © 2016 Stanley B. Gershwin.

Queueing theory: M/M/1 Queue. [Figure: number of customers in the system as a function of time for an M/M/1 queue.] Markov Processes 62 Copyright © 2016 Stanley B. Gershwin.

Queueing theory: D/D/1 Queue. [Figure: number of customers in the system as a function of time for a D/D/1 queue.] Markov Processes 63 Copyright © 2016 Stanley B. Gershwin.

Queueing theory: M/M/1 Queue, state space. [Figure: birth-death transition graph on states 0, 1, 2, ..., n − 1, n, n + 1, ...; each state moves up by one at rate λ and down by one at rate µ.] Markov Processes 64 Copyright © 2016 Stanley B. Gershwin.

Queueing theory: M/M/1 Queue. Let π(n, t) be the probability that there are n parts in the system at time t. Then π(n, t + δt) = π(n − 1, t)λδt + π(n + 1, t)µδt + π(n, t)(1 − (λδt + µδt)) + o(δt) for n > 0, and π(0, t + δt) = π(1, t)µδt + π(0, t)(1 − λδt) + o(δt). Markov Processes 65 Copyright © 2016 Stanley B. Gershwin.

Queueing theory: M/M/1 Queue. Or, dπ(n, t)/dt = π(n − 1, t)λ + π(n + 1, t)µ − π(n, t)(λ + µ), n > 0; dπ(0, t)/dt = π(1, t)µ − π(0, t)λ. If a steady-state distribution exists, it satisfies 0 = π(n − 1)λ + π(n + 1)µ − π(n)(λ + µ), n > 0; 0 = π(1)µ − π(0)λ. (Why "if"?) Markov Processes 66 Copyright © 2016 Stanley B. Gershwin.

Queueing theory: M/M/1 Queue, steady state. Let ρ = λ/µ. These equations are satisfied by π(n) = (1 − ρ)ρ^n, n ≥ 0, if ρ < 1. The average number of parts in the system is n̄ = Σ_n nπ(n) = ρ/(1 − ρ) = λ/(µ − λ). Markov Processes 67 Copyright © 2016 Stanley B. Gershwin.
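A sketch evaluating these formulas numerically; the rates are illustrative, chosen so that ρ = 0.9 (which gives the n̄ = 9 of the sample-path example a few slides below), and the last line anticipates Little's law on the next slide.

```python
# M/M/1 steady state: pi(n) = (1 - rho) rho^n and n_bar = rho / (1 - rho).
lam, mu = 0.9, 1.0                    # illustrative rates; rho = 0.9 < 1
rho = lam / mu

pi = [(1 - rho) * rho**n for n in range(2000)]   # truncated; tail negligible
n_bar = sum(n * p for n, p in enumerate(pi))
print(n_bar, lam / (mu - lam))        # both close to 9
print(n_bar / lam, 1 / (mu - lam))    # W from Little's law L = lam * W
```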

Queueing theory: Little's Law. True for most systems of practical interest (not just M/M/1). Steady state only. L = the average number of customers in a system. W = the average delay experienced by a customer in the system. L = λW. In the M/M/1 queue, L = n̄ and W = 1/(µ − λ). Markov Processes 68 Copyright © 2016 Stanley B. Gershwin.

Queueing theory: M/M/1 sample path. Suppose customers arrive in a Poisson process with average inter-arrival time 1/λ = 1 minute, and that service time is exponentially distributed with average service time 1/µ = 54 seconds. The average number of customers in the system is 9. [Figure: n(t) versus t; queue behavior over a short time interval, showing the initial transient.] Markov Processes 69 Copyright © 2016 Stanley B. Gershwin.

Queueing theory: M/M/1 sample path. [Figure: n(t) versus t; queue behavior over a long time interval.] Markov Processes 70 Copyright © 2016 Stanley B. Gershwin.

Queueing theory: M/M/1 Queue capacity. [Figure: W versus λ for µ = 1.] µ is the capacity of the system. If λ < µ, the system is stable and the waiting time remains bounded. If λ > µ, the waiting time grows over time. Markov Processes 71 Copyright © 2016 Stanley B. Gershwin.

Queueing theory: M/M/1 Queue capacity. [Figure: W versus λ for µ = 1 and µ = 2.] To increase capacity, increase µ. To decrease delay for a given λ, increase µ. Markov Processes 72 Copyright © 2016 Stanley B. Gershwin.

Queueing theory: Other single-stage models. Things get more complicated when: there are multiple servers; there is finite space for queueing; the arrival process is not Poisson; the service process is not exponential. Closed formulas and approximations exist for some, but not all, cases. Markov Processes 73 Copyright © 2016 Stanley B. Gershwin.

Queueing theory: M/M/s Queue. [Figure: a single queue with arrival rate λ feeding s parallel servers, each with rate µ; here s = 3.] Markov Processes 74 Copyright © 2016 Stanley B. Gershwin.

Queueing theory: M/M/s Queue. [Figure: birth-death transition graph on states 0, 1, 2, ..., s − 1, s, s + 1, ...; upward rate λ everywhere; downward rates µ, 2µ, 3µ, ..., (s − 1)µ, sµ, sµ, sµ, ....] The service rate when there are k > s customers in the system is sµ, since all s servers are always busy. The service rate when there are k ≤ s customers in the system is kµ, since only k of the servers are busy. Markov Processes 75 Copyright © 2016 Stanley B. Gershwin.

Queueing theory: M/M/s Queue. P(k) = π(0)(sρ)^k / k! for k ≤ s; P(k) = π(0)s^s ρ^k / s! for k > s; where ρ = λ/(sµ) < 1 and π(0) is chosen so that Σ_k P(k) = 1. Markov Processes 76 Copyright © 2016 Stanley B. Gershwin.
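A sketch computing this distribution by truncating the normalizing sum for π(0); the parameter values are illustrative.

```python
# M/M/s steady state: P(k) = pi(0)(s rho)^k / k! for k <= s,
# P(k) = pi(0) s^s rho^k / s! for k > s, with rho = lam / (s mu) < 1.
from math import factorial

def mms_probabilities(lam, mu, s, kmax=500):
    rho = lam / (s * mu)
    assert rho < 1.0, "queue must be stable"
    unnorm = [(s*rho)**k / factorial(k) if k <= s
              else s**s * rho**k / factorial(s)
              for k in range(kmax + 1)]       # truncated; tail is negligible
    pi0 = 1.0 / sum(unnorm)
    return [pi0 * u for u in unnorm]

lam = 3.0
P = mms_probabilities(lam, mu=1.0, s=4)       # illustrative parameters
L = sum(k * p for k, p in enumerate(P))
print(L, L / lam)                             # L, and W = L / lam by Little's law
```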

Queueing theory: M/M/s Queue. [Figure: W versus λ with sµ constant, for (µ, s) = (4, 1), (2, 2), (1, 4), (0.5, 8).] Markov Processes 77 Copyright © 2016 Stanley B. Gershwin.

Queueing theory: M/M/s Queue. [Figure: L versus λ with sµ constant, for (µ, s) = (4, 1), (2, 2), (1, 4), (0.5, 8).] Markov Processes 78 Copyright © 2016 Stanley B. Gershwin.

Queueing theory: M/M/s Queue. [Figure: the W and L curves from the previous two slides, side by side.] Why do all the curves go to infinity at the same value of λ? Why does L → 0 when λ → 0? Why is the (µ, s) = (0.5, 8) curve the highest, followed by (µ, s) = (1, 4), etc.? Markov Processes 79 Copyright © 2016 Stanley B. Gershwin.

Queueing theory: Networks of Queues. A set of queues in which customers can go to another queue after completing service at a queue. Open network: customers enter and leave the system. λ is known and we must find L and W. Closed network: the population of the system is constant. L is known and we must find λ and W. Markov Processes 80 Copyright © 2016 Stanley B. Gershwin.

Networks of Queues: Examples. Examples of open networks: internet traffic; emergency room; food court; airport (arrive, ticket counter, security, passport control, gate, board plane); factory with no centralized material flow control after material enters. Markov Processes 81 Copyright © 2016 Stanley B. Gershwin.

Networks of Queues: Examples. [Figure: a food court as an open network. People enter, queue at servers such as Sbarro's Pizza, TCBY Frozen Yogurt, and McDonald's, proceed with trays to the tables, and exit.] Markov Processes 82 Copyright © 2016 Stanley B. Gershwin.

Networks of Queues: Examples. Examples of closed networks: factory with material controlled by keeping the number of items constant (CONWIP); factory with limited fixtures or pallets. [Figure: a production line in which pallets circulate; raw parts are loaded onto pallets from an empty pallet buffer, and finished parts are unloaded at the output.] Markov Processes 83 Copyright © 2016 Stanley B. Gershwin.

Networks of Queues: Jackson Networks. Queueing networks are often modeled as Jackson networks. Relatively easy to compute performance measures (capacity, average time in system, average queue lengths). Easily provides intuition. Easy to optimize and to use for design. Valid (or a good approximation) for a large class of systems... Markov Processes 84 Copyright © 2016 Stanley B. Gershwin.

Networks of Queues: Jackson Networks. ... but not all. Storage areas must be infinite (i.e., blocking never occurs). This assumption leads to bad results for systems with bottlenecks at locations other than the first station. Markov Processes 85 Copyright © 2016 Stanley B. Gershwin.

Networks of Queues: Open Jackson Networks. [Figure: an open network of single-server queues; arrivals enter the network, items move between stations according to routing probabilities, and departures leave the system.] Goal of analysis: to say something about how much inventory there is in this system and how it is distributed. Markov Processes 86 Copyright © 2016 Stanley B. Gershwin.

Networks of Queues: Open Jackson Networks. Items arrive from outside the system to node i according to a Poisson process with rate α_i; α_i > 0 for at least one i. When an item's service at node i is finished, it goes to node j next with probability p_ij. If p_i0 = 1 − Σ_j p_ij > 0, then items depart from the network from node i; p_i0 > 0 for at least one i. We will focus on the special case in which each node has a single server with exponential processing time. The service rate of node i is µ_i. Markov Processes 87 Copyright © 2016 Stanley B. Gershwin.

Networks of Queues: Open Jackson Networks. Define λ_i as the total arrival rate of items to node i. This includes items entering the network at i and items coming from all other nodes. Then λ_i = α_i + Σ_j p_ji λ_j. In matrix form, let λ be the vector of λ_i, α be the vector of α_i, and P be the matrix of p_ij. Then λ = α + P^T λ, or λ = (I − P^T)^{−1} α. Markov Processes 88 Copyright © 2016 Stanley B. Gershwin.
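A sketch of these traffic equations for a small network; the routing matrix, external arrival rates, and service rates below are all made up for illustration.

```python
# Solve lambda = (I - P^T)^{-1} alpha for a 3-node open Jackson network.
# Prout[i][j] = p_ij; row sums below 1 mean departures (p_i0 > 0).
import numpy as np

Prout = np.array([[0.0, 0.6, 0.3],    # node 0: 10% of items leave here
                  [0.0, 0.0, 0.8],    # node 1: 20% leave here
                  [0.2, 0.0, 0.0]])   # node 2: 80% leave here
alpha = np.array([1.0, 0.0, 0.0])     # external arrivals only at node 0

lam = np.linalg.solve(np.eye(3) - Prout.T, alpha)
mu = np.array([2.0, 1.5, 2.5])        # service rates; need rho_i < 1
rho = lam / mu
print(lam)                            # total arrival rates lambda_i
print(rho / (1 - rho))                # n_bar_i, by the product form (next slide)
```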

Networks of Queues: Open Jackson Networks. Define π(n_1, n_2, ..., n_k) to be the steady-state probability that there are n_i items at node i, i = 1, ..., k. Define ρ_i = λ_i/µ_i and π_i(n_i) = (1 − ρ_i)ρ_i^{n_i}. Then π(n_1, n_2, ..., n_k) = Π_i π_i(n_i) and n̄_i = E[n_i] = ρ_i/(1 − ρ_i). Does this look familiar? Markov Processes 89 Copyright © 2016 Stanley B. Gershwin.

Networks of Queues: Open Jackson Networks. This looks as though each station is an M/M/1 queue. But even though this is NOT in general true, the formula holds. The product-form solution holds for some more general cases. This exact analytic formula is the reason that the Jackson network model is very widely used, sometimes where it does not belong! Markov Processes 90 Copyright © 2016 Stanley B. Gershwin.

Networks of Queues: Closed Jackson Networks. Consider an extension in which α_i = 0 for all nodes i, and p_i0 = 1 − Σ_j p_ij = 0 for all nodes i. Since nothing is entering and nothing is departing from the network, the number of items in the network is constant. That is, Σ_i n_i(t) = N for all t. λ_i = Σ_j p_ji λ_j does not have a unique solution: if {λ_1, λ_2, ..., λ_k} is a solution, then {sλ_1, sλ_2, ..., sλ_k} is also a solution for any s ≥ 0. Markov Processes 91 Copyright © 2016 Stanley B. Gershwin.

Networks of Queues: Closed Jackson Networks. For some s, define π°(n_1, n_2, ..., n_k) = Π_i [(1 − ρ_i)ρ_i^{n_i}], where ρ_i = sλ_i/µ_i. This looks like the open network probability distribution (Slide 89), but it is a function of s. Markov Processes 92 Copyright © 2016 Stanley B. Gershwin.

Networks of Queues: Closed Jackson Networks. Consider a closed network with a population of N. Then if Σ_i n_i = N, π(n_1, n_2, ..., n_k) = π°(n_1, n_2, ..., n_k) / Σ_{m_1+m_2+...+m_k=N} π°(m_1, m_2, ..., m_k). Since π° is a function of s, it looks like π is a function of s. But it is not, because all the s's cancel! There are nice ways of calculating C(k, N) = Σ_{m_1+m_2+...+m_k=N} π°(m_1, m_2, ..., m_k). Markov Processes 93 Copyright © 2016 Stanley B. Gershwin.
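For small k and N, this distribution can be evaluated by brute-force enumeration (more efficient recursive schemes exist, but enumeration makes the cancellation of s easy to see). The ρ_i values below are illustrative; the common factors Π_i(1 − ρ_i) cancel between numerator and denominator, so they are dropped.

```python
# Enumerate pi(n_1,...,n_k) for a closed network of k stations and N items.
from itertools import combinations_with_replacement
from collections import Counter
from math import prod

def states(k, N):
    """All (m_1, ..., m_k) with m_1 + ... + m_k = N."""
    for assign in combinations_with_replacement(range(k), N):
        c = Counter(assign)
        yield tuple(c.get(i, 0) for i in range(k))

def pi(n, rhos, N):
    w = lambda m: prod(r**mi for r, mi in zip(rhos, m))    # prod_i rho_i^{m_i}
    return w(n) / sum(w(m) for m in states(len(rhos), N))  # divide by C(k, N)

rhos, N = [0.5, 0.8, 0.3], 5         # rho_i = s lam_i / mu_i for any fixed s
print(pi((5, 0, 0), rhos, N))
print(sum(pi(m, rhos, N) for m in states(3, N)))   # sums to 1
# Rescaling every rho_i by s multiplies numerator and denominator by s^N,
# so pi is unchanged -- the s's cancel, as the slide says.
```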

Networks of Queues: Application, simple Flexible Manufacturing System model. Closed Jackson network model of an FMS: Solberg's CANQ model. [Figure: stations 1, 2, 3, ..., M − 1 in parallel, served by station M, the transport station with load/unload; after transport, a part goes to station j with probability q_j.] Let {p_ij} be the set of routing probabilities, as defined on Slide 87: p_iM = 1 if i ≠ M; p_Mj = q_j if j ≠ M; p_ij = 0 otherwise. The service rate at Station i is µ_i. Markov Processes 94 Copyright © 2016 Stanley B. Gershwin.

Networks of Queues: Application, simple Flexible Manufacturing System model. Let N be the number of pallets. The production rate is P = [C(M, N − 1)/C(M, N)] µ_M, and C(M, N) is easy to calculate in this case. Input data: M, N, q_j, µ_j (j = 1, ..., M). Output data: P, W, ρ_j (j = 1, ..., M). Markov Processes 95 Copyright © 2016 Stanley B. Gershwin.

Networks of Queues: Closed Jackson network model of an FMS. [Figure: production rate P versus number of pallets.] Markov Processes 96 Copyright © 2016 Stanley B. Gershwin.

Networks of Queues: Closed Jackson network model of an FMS. [Figure: average time in system versus number of pallets.] Markov Processes 97 Copyright © 2016 Stanley B. Gershwin.

Networks of Queues: Closed Jackson network model of an FMS. [Figure: utilization of Station 2 versus number of pallets.] Markov Processes 98 Copyright © 2016 Stanley B. Gershwin.

Networks of Queues: Closed Jackson network model of an FMS. [Figure: production rate P versus Station 2 operation time.] Markov Processes 99 Copyright © 2016 Stanley B. Gershwin.

Networks of Queues: Closed Jackson network model of an FMS. [Figure: average time in system versus Station 2 operation time.] Markov Processes 100 Copyright © 2016 Stanley B. Gershwin.

Networks of Queues: Closed Jackson network model of an FMS. [Figure: utilization of Station 2 versus Station 2 operation time.] Markov Processes 101 Copyright © 2016 Stanley B. Gershwin.

MIT OpenCourseWare https://ocw.mit.edu 2.854 / 2.853 Introduction To Manufacturing Systems Fall 2016 For information about citing these materials or our Terms of Use, visit: https://ocw.mit.edu/terms.