MIT Manufacturing Systems Analysis Lectures 18-19


MIT 2.852 Manufacturing Systems Analysis, Lectures 18-19: Loops. Stanley B. Gershwin. Spring, 2007. Copyright © 2007 Stanley B. Gershwin.

Problem Statement. [Figure: six-machine closed loop M1 → B1 → M2 → B2 → M3 → B3 → M4 → B4 → M5 → B5 → M6 → B6 → M1.] Finite buffers (0 ≤ n_i(t) ≤ N_i). Single closed loop with fixed population (Σ_i n_i(t) = N). Focus is on the Buzacott model (deterministic processing time; geometric up and down times). Repair probability = r_i; failure probability = p_i. Many results are true for more general loops. Goal: calculate the production rate and the inventory distribution.

Problem Statement: Motivation. Limited pallets/fixtures. CONWIP (or hybrid) control systems. Extension to more complex systems and policies.

Special Case: Two-Machine Loops. Refer to MSE Section 5.6, page 205. [Figure: a two-machine loop M1 → B1 → M2 → B2 → M1, with buffer sizes N1 and N2 and population N, reduces to a two-machine line M1 → B → M2 with buffer size N*.]

P_loop(r1, p1, r2, p2, N1, N2) = P_line(r1, p1, r2, p2, N*), where N* = min(N, N1) - max(0, N - N2).
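The equivalent-line buffer size N* = min(N, N1) - max(0, N - N2) from this slide is easy to compute directly. A minimal sketch (ours, not code from the lecture):

```python
def equivalent_line_buffer(N, N1, N2):
    """Effective buffer size N* of the two-machine line equivalent to a
    two-machine loop with buffer sizes N1, N2 and fixed population N,
    per the formula on this slide."""
    # B1 can hold at most min(N, N1) parts, and never fewer than N - N2
    # (the rest cannot all fit in B2), so only the slots between those
    # two levels act as a real buffer.
    return min(N, N1) - max(0, N - N2)

# A small population that fits in either buffer acts as the buffer itself.
print(equivalent_line_buffer(5, 20, 20))   # 5
# A large population leaves only N1 - (N - N2) usable slots.
print(equivalent_line_buffer(30, 20, 15))  # 5
```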

Special Case: Two-Machine Loops. [Figure: production rate vs. population (0 to 60) for a two-machine loop with N1 = 20 and curves for N2 = 10, 15, 20, 30, 40; the production rate varies between roughly 0.86 and 0.89.]

Expected population method. Treat the loop as a line in which the first machine and the last are the same. In the resulting decomposition, one equation is missing. The missing equation is replaced by the expected population constraint (Σ_i n̄_i = N).

Expected population method. Evaluate i, i - 1, i + 1 modulo k (i.e., replace 0 by k and replace k + 1 by 1).

r_u(i) = r_u(i-1) X(i) + r_i (1 - X(i)), where X(i) = p_s(i-1) r_u(i) / (p_u(i) E(i-1));
p_u(i) = r_u(i) [ 1/E(i-1) + 1/e_i - p_d(i-1)/r_d(i-1) - 2 ], i = 1, ..., k;

r_d(i) = r_d(i+1) Y(i+1) + r_{i+1} (1 - Y(i+1)), where Y(i+1) = p_b(i+1) r_d(i) / (p_d(i) E(i+1));
p_d(i) = r_d(i) [ 1/E(i+1) + 1/e_{i+1} - p_u(i+1)/r_u(i+1) - 2 ], i = k, ..., 1.

This is 4k equations in 4k unknowns. But only 4k - 1 of them are independent, because the derivation uses E(i) = E(i+1) for i = 1, ..., k. The first k - 1 of these are E(1) = E(2), E(2) = E(3), ..., E(k-1) = E(k). But they imply E(k) = E(1), which is the same as the kth equation.

Expected population method. Therefore, we need one more equation. We can use Σ_i n̄_i = N. If we guess p_u(1) (say), we can evaluate n_TOTAL = Σ_i n̄_i as a function of p_u(1). We search for the value of p_u(1) such that n_TOTAL = N.
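The last step is a one-dimensional root search. A minimal sketch by bisection, assuming only that the predicted total population is monotonic in the guessed p_u(1); the stand-in n_total function below is purely illustrative and is not the real decomposition:

```python
def find_pu1(n_total, N, lo=1e-6, hi=0.5, tol=1e-9):
    """Bisection search for the value of p_u(1) at which the
    decomposition's predicted total population equals N.
    `n_total(p)` stands in for one full decomposition evaluation."""
    f = lambda p: n_total(p) - N
    a, b = lo, hi
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        fm = f(m)
        if (fm > 0) == (fa > 0):   # same sign: root is in [m, b]
            a, fa = m, fm
        else:                      # sign change: root is in [a, m]
            b = m
    return 0.5 * (a + b)

# Toy stand-in: predicted population falls as the first pseudo-machine
# fails more often (illustrative only, not the real model).
n_total = lambda p: 40.0 * (1.0 - p)
p = find_pu1(n_total, N=30.0)
print(round(p, 6))  # 0.25, since 40 * (1 - 0.25) = 30
```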

Expected population method. Behavior: Accuracy is good for large systems, not so good for small systems. Accuracy is good for intermediate-size populations, not so good for very small or very large populations.

Expected population method. Hypothesis: the reason for the accuracy behavior of the population constraint method is the correlation among the buffer levels. The number of parts in the system is actually constant, but the expected population method treats the population as random, with a specified mean. If we know that a buffer is almost full, we know that there are fewer parts in the rest of the network, so probabilities of blockage are reduced and probabilities of starvation are increased. (Similarly if it is almost empty.) Suppose the population is smaller than the smallest buffer. Then there will be no blockage. The expected population method does not take this into account.

Loop Behavior. To construct a method that deals with the invariant (rather than the expected value of the invariant), we investigate how buffer levels are related to one another and to the starvation and blocking of machines. In a line, every downstream machine could block a given machine, and every upstream machine could starve it. In a loop, blocking and starvation are more complicated.

Loop Behavior: Ranges. [Figure: the six-machine loop.] The range of blocking of a machine is the set of all machines that could block it if they stayed down for a long enough time. The range of starvation of a machine is the set of all machines that could starve it if they stayed down for a long enough time.

Loop Behavior: Ranges. Range of blocking. [Figure: the six-machine loop with M4 down; buffer levels B1 = B2 = B3 = 10, B6 = 7, B4 = B5 = 0.] All buffer sizes are 10. Population is 37. If M4 stays down for a long time, it will block M1. Therefore M4 is in the range of blocking of M1. Similarly, M2 and M3 are in the range of blocking of M1.

Loop Behavior: Ranges. Range of starvation. [Figure: the six-machine loop with M5 down; buffer levels B1 = 7, B2 = B3 = B4 = 10, B5 = B6 = 0.] If M5 stays down for a long time, it will starve M1. Therefore M5 is in the range of starvation of M1. Similarly, M6 is in the range of starvation of M1.
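For the Buzacott loop, these ranges can be computed from the buffer sizes and the population alone. The sketch below is our formalization, not part of the lecture: M_j is in the range of blocking of M_i when the buffers on the path from M_i around to M_j cannot hold the whole population, and in its range of starvation when they can.

```python
def ranges(buffer_sizes, N, i):
    """Ranges of blocking and starvation of machine i (1-based) in a closed
    loop M1 -> B1 -> M2 -> ... -> Mk -> Bk -> M1 with fixed population N.
    If the buffers from M_i downstream to M_j cannot hold all N parts, a
    long failure of M_j backs material up and blocks M_i; otherwise M_i's
    input buffer eventually drains and M_i is starved."""
    k = len(buffer_sizes)
    blocking, starvation = set(), set()
    cap = 0
    for step in range(1, k):
        j = (i - 1 + step) % k + 1                   # machine `step` positions downstream
        cap += buffer_sizes[(i - 1 + step - 1) % k]  # buffer just before M_j on the path
        (blocking if cap < N else starvation).add(j)
    return blocking, starvation

# The lecture's example: six machines, all buffers of size 10, population 37.
b, s = ranges([10] * 6, 37, 1)
print(b, s)  # {2, 3, 4} {5, 6}
```

This reproduces the two slides above: M2, M3, M4 can block M1, while M5 and M6 can starve it.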

Loop Behavior: Ranges. Line. The range of blocking of a machine in a line is the entire downstream part of the line. The range of starvation of a machine in a line is the entire upstream part of the line.

Loop Behavior: Ranges. In an acyclic network, if M_j is downstream of M_i, then the range of blocking of M_j is a subset of the range of blocking of M_i. [Figure: M_i upstream of M_j; the range of blocking of M_j is contained in the range of blocking of M_i.] Similarly for the range of starvation.

Loop Behavior: Ranges. In an acyclic network, if M_j is downstream of M_i, any real machine whose failure could cause M_d(j) to appear to be down could also cause M_d(i) to appear to be down. [Figure: line segment M_i → B_i → M_j, with pseudo-machines M_d(i) and M_d(j).] Consequently, we can express r_d(i) as a function of the parameters of L(j). This is not possible in a network with a loop, because some machine that blocks M_j does not block M_i.

Loop Behavior: Ranges. Difficulty for decomposition. [Figure: the six-machine loop with two decomposition building blocks, L(1) = M_u(1) → B(1) → M_d(1) and L(6) = M_u(6) → B(6) → M_d(6), showing the ranges of blocking of M6 and M1 and the ranges of starvation of M1 and M2.] Ranges of blocking and starvation of adjacent machines are not subsets or supersets of one another in a loop.

Loop Behavior: Ranges. Difficulty for decomposition. [Figure: the six-machine loop, all buffer sizes 10, population 37, showing the ranges of blocking of M1 and M2.] M5 can block M2. Therefore the parameters of M5 should directly affect the parameters of M_d(1) in a decomposition. However, M5 cannot block M1, so the parameters of M5 should not directly affect the parameters of M_d(6). Therefore, the parameters of M_d(6) cannot be functions of the parameters of M_d(1).

Multiple Failure Mode Line Decomposition. To deal with this issue, we introduce a new decomposition. In this decomposition, we do not create failures of virtual machines that are mixtures of failures of real machines. Instead, we allow the virtual machines to have distinct failure modes, each one corresponding to the failure mode of a real machine.

Multiple Failure Mode Line Decomposition. [Figure: a line whose machines have failure mode sets {1,2}, {3}, {4}, {5,6,7}, {8}, {9,10}; in the decomposition, the two-machine line for the middle buffer has an upstream machine with modes 1, 2, 3, 4 and a downstream machine with modes 5, 6, 7, 8, 9, 10.] There is an observer in each buffer who is told that he is actually in the buffer of a two-machine line.

Multiple Failure Mode Line Decomposition. [Figure: as above.] Each machine in the original line may, and each machine in the two-machine lines must, have multiple failure modes. For each failure mode downstream of a given buffer, there is a corresponding mode in the downstream machine of its two-machine line. Similarly for upstream modes.

Multiple Failure Mode Line Decomposition. [Figure: as above.] The downstream failure modes appear to the observer after propagation through blockage. The upstream failure modes appear to the observer after propagation through starvation. The two-machine lines are more complex than in earlier decompositions, but the decomposition equations are simpler.

Multiple Failure Mode Line Decomposition: Two-Machine Line. [Figure: a two-machine line whose machines each have one up state and several down modes.] Form a Markov chain and find the steady-state probability distribution. The solution technique is very similar to that of the two-machine line with two-state machines. Determine the production rate, the probability of starvation, the probability of blocking in each down mode, and the average inventory.
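As a concrete (and much smaller) illustration of the same steady-state computation, consider a single machine with geometric up and down times, as in the Buzacott model: failure probability p, repair probability r. Iterating its two-state transition matrix recovers the isolated efficiency e = r/(r + p). This toy chain is ours; the lecture's chains have one state per combination of buffer level and machine modes.

```python
def steady_state(P, n_iter=10_000):
    """Steady-state distribution of a small discrete-time Markov chain:
    power iteration on the row-stochastic matrix P from a uniform start."""
    k = len(P)
    pi = [1.0 / k] * k
    for _ in range(n_iter):
        pi = [sum(pi[i] * P[i][j] for i in range(k)) for j in range(k)]
    return pi

# Two-state machine: state 0 = up, state 1 = down.
p, r = 0.01, 0.1   # geometric failure and repair probabilities
P = [[1 - p, p],
     [r, 1 - r]]
pi_up, pi_down = steady_state(P)
print(round(pi_up, 6))  # 0.909091, i.e. r / (r + p)
```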

Multiple Failure Mode Line Decomposition: Line Decomposition. A set of decomposition equations is formulated. They are solved by a Dallery-David-Xie-like algorithm. The results are a little more accurate than those of earlier methods, especially for machines with very different failures.

Multiple Failure Mode Line Decomposition: Line Decomposition. [Figure: as above.] In the upstream machine of the building block, failure mode 4 is a local mode; modes 1, 2, and 3 are remote modes. Modes 5, 6, and 7 are local modes of the downstream machine; 8, 9, and 10 are remote modes. For every mode, the repair probability is the same as the repair probability of the corresponding mode in the real line. Local modes: the probability of failure into a local mode is the same as the probability of failure in that mode of the real machine.

Multiple Failure Mode Line Decomposition: Line Decomposition. [Figure: as above.] Remote modes: i is the building block number; j and f are the machine number and mode number of a remote failure mode. Then

p_u,jf(i) = r_jf P_s,jf(i-1) / E(i);  p_d,jf(i-1) = r_jf P_b,jf(i) / E(i-1),

where p_u,jf(i) is the probability of failure of the upstream machine into mode jf; P_s,jf(i-1) is the probability of starvation of line i-1 due to mode jf; r_jf is the probability of repair of the upstream machine from mode jf; etc. Also, E(i-1) = E(i). The quantities p_u,jf(i) and p_d,jf(i) are used to evaluate E(i), P_s,jf(i), and P_b,jf(i) from two-machine line i in an iterative method.

Multiple Failure Mode Line Decomposition: Line Decomposition. Consider

p_u,jf(i) = r_jf P_s,jf(i-1) / E(i);  p_d,jf(i-1) = r_jf P_b,jf(i) / E(i-1).

In a line, jf refers to all modes of all upstream machines in the first equation, and to all modes of all downstream machines in the second. We can interpret the upstream machines as the range of starvation, and the downstream machines as the range of blockage, of the line.

Multiple Failure Mode Line Decomposition: Extension to Loops. [Figure: the six-machine loop.] Use the multiple-mode decomposition, but adjust the ranges of blocking and starvation accordingly. However, this does not take into account the local information that the observer has.

Thresholds. [Figure: the six-machine loop, all buffer sizes 10, with buffer levels B1 = B2 = B3 = 10, B6 = 5, B5 = 2, B4 = 0; the building block M_u(6) → B6 → M_d(6) is shown with 5 parts in B6.] The B6 observer knows how many parts there are in his buffer. If there are 5, he knows that the modes he sees in M_d(6) could be those corresponding to the modes of M1, M2, M3, and M4.

Thresholds. [Figure: the same loop with buffer levels B1 = 10, B2 = 10, B3 = 9, B6 = 8, B5 = 0, B4 = 0; 8 parts in B6.] However, if there are 8, he knows that the modes he sees in M_d(6) could only be those corresponding to the modes of M1, M2, and M3, and not those of M4. The transition probabilities of the two-machine line therefore depend on whether the buffer level is less than 7 or not.

Thresholds. This would require a new model of a two-machine line. The same issue arises for starvation. In general, there can be more than one threshold in a buffer. Consequently, this makes the two-machine line very complicated.

Transformation. Purpose: to avoid the complexities caused by thresholds. Idea: wherever there is a threshold in a buffer, break up the buffer into smaller buffers separated by perfectly reliable machines.

Transformation. [Figure: four-machine loop M1 → B1 → M2 → B2 → M3 → B3 → M4 → B4 → M1 with buffer sizes B1 = 20, B2 = 3, B3 = 5, B4 = 15; population = 21; thresholds marked at 18 and 13 in B1.] When M1 fails for a long time, B4 and B3 fill up, and there is one part in B2; therefore there is a threshold of 1 in B2. When M2 fails for a long time, B1 fills up, and there is one part in B4; therefore there is a threshold of 1 in B4. When M3 fails for a long time, B2 fills up, and there are 18 parts in B1; therefore there is a threshold of 18 in B1.

Transformation. [Figure: the same four-machine loop.] When M4 fails for a long time, B3 and B2 fill up, and there are 13 parts in B1; therefore there is a threshold of 13 in B1. Note: B1 has two thresholds and B3 has none. Note: the number of thresholds equals the number of machines.
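The threshold bookkeeping above follows a simple rule: when a machine stays down, the buffers upstream of it fill in order, and the leftover parts mark a threshold in the first buffer that does not fill. A sketch of that rule (our formalization), checked against the slide's four-machine example:

```python
def thresholds(buffer_sizes, N):
    """For each machine M_m in a loop M1 -> B1 -> ... -> Mk -> Bk -> M1,
    fill the buffers upstream of M_m (B_{m-1}, B_{m-2}, ...) until the
    population N runs out; the remainder is a threshold in the first
    buffer that does not fill completely.
    Returns {buffer index: [threshold levels]}."""
    k = len(buffer_sizes)
    result = {}
    for m in range(1, k + 1):
        remaining = N
        for step in range(1, k + 1):
            b = (m - 1 - step) % k + 1       # buffer `step` positions upstream of M_m
            if remaining > buffer_sizes[b - 1]:
                remaining -= buffer_sizes[b - 1]   # this buffer fills completely
            else:
                result.setdefault(b, []).append(remaining)
                break
    return result

# Slide example: B1..B4 = 20, 3, 5, 15; population 21.
print(thresholds([20, 3, 5, 15], 21))
# {2: [1], 4: [1], 1: [18, 13]}
```

As on the slide, B1 collects two thresholds (18 and 13), B3 collects none, and the total number of thresholds equals the number of machines.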

Transformation. [Figure: the four-machine loop redrawn with every buffer expanded into buffers of size 1 separated by reliable machines; the reliable machines M*1, M*2, M*3, M*4 reached by the counting rule are marked.] Break up each buffer into a sequence of buffers of size 1 and reliable machines. Count backwards from each real machine a number of buffers equal to the population. Identify the reliable machine that the count ends at.

Transformation. [Figure: the transformed loop: the marked reliable machines M*1 through M*4 remain, and the runs of unmarked reliable machines and unit buffers between them are merged.] Collapse all the sequences of unmarked reliable machines and buffers of size 1 into larger buffers.

Transformation. Ideally, this would be equivalent to the original system. However, the reliable machines cause a delay, so the transformation is not exact for the discrete/deterministic case. The transformation is exact for continuous-material machines.

Transformation: Small populations. If the population is smaller than the largest buffer, at least one machine will never be blocked. However, that violates the assumptions of the two-machine lines. We can reduce the sizes of the larger buffers so that no buffer is larger than the population. This does not change performance.
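Since no buffer can ever hold more than the whole population, the pre-processing step above amounts to a clamp (our illustration):

```python
def clamp_buffers(buffer_sizes, N):
    """Reduce every buffer size to at most the population N.  A buffer
    larger than N can never fill, so clamping cannot change performance."""
    return [min(size, N) for size in buffer_sizes]

print(clamp_buffers([20, 3, 5, 15], 4))  # [4, 3, 4, 4]
```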

Numerical Results: Accuracy. Many cases were compared with simulation. Three-machine cases: all throughput errors under 1%; buffer level errors averaged 3%, but were as high as 10%. Six-machine cases: mean throughput error 1.1%, with a maximum of 2.7%; average buffer level error 5%, with a maximum of 21%. Ten-machine cases: mean throughput error 1.4%, with a maximum of 4%; average buffer level error 6%, with a maximum of 44%.

Numerical Results: Other algorithm attributes. Convergence reliability: the algorithm almost always converges. Speed: execution time increases rapidly with loop size. Maximum system size so far: 18 machines. Memory requirements also grow rapidly.

Numerical Results: The Batman Effect. [Figure: throughput vs. population (0 to 30) for the decomposition and for simulation; throughput varies between about 0.77 and 0.815.] The error is very small, but there are apparent discontinuities. This is because we cannot deal with buffers of size 1, and because we do not need to introduce reliable machines in cases where there would be no thresholds.

Numerical Results: Behavior. [Figure: four-machine loop M1 → B1 → M2 → B2 → M3 → B3 → M4 → B4 → M1, with M1 marked.] All buffer sizes are 10. Population is 15. The machines are identical except for M1. Observe the average buffer levels and the production rate as a function of r1.

Numerical Results: Behavior. [Figure: production rate vs. r1, rising steeply from 0 and saturating as r1 increases toward 1.] Production rate vs. r1: the usual saturating graph.

Numerical Results: Behavior. [Figure: average levels of B1, B2, B3, and B4 vs. r1 for the four-machine loop.] When r1 is small, M1 is a bottleneck, so B4 holds 10 parts, B3 holds 5 parts, and the others are empty. As r1 increases, material is more evenly distributed. When r1 = 0.1, the network is totally symmetrical.

Applications. Design of systems with pallets/fixtures: the fixtures, and the space to hold them, are expensive. Design of systems with tokens/kanbans (CONWIP): by limiting the population, we reduce the production rate, but we also reduce inventory.

MIT OpenCourseWare, http://ocw.mit.edu. 2.852 Manufacturing Systems Analysis, Spring 2010. For information about citing these materials or our Terms of Use, visit http://ocw.mit.edu/terms.