MIT 2.852 Manufacturing Systems Analysis
Lectures 14-16: Line Optimization
Stanley B. Gershwin
Spring 2007
Copyright © 2007 Stanley B. Gershwin.

Line Design

Given a process, find the best set of machines and buffers on which it can be implemented. Best: least capital cost; least operating cost; least average inventory; greatest profit; etc. Constraints: minimal production rate, maximal stockout probability, maximal floor space, maximal inventory, etc. To be practical, computation time must be limited. Exact optimality is not necessary, especially since the parameters are not known perfectly.

Optimization

Optimization may be performed in two ways: analytical solution of the optimality conditions, or searching. For most problems, searching is the only realistic possibility. For some problems, optimality cannot be achieved in a reasonable amount of time.

Optimization: Search

The search loop: propose a design D_n (starting with n = 0); evaluate the design, P_n = P(D_n); if the performance is satisfactory, quit. Otherwise, modify the design as a function of past designs and performance,

    D_{n+1} = D(D_0, D_1, ..., D_n, P_0, P_1, ..., P_n),

set n ← n + 1, and evaluate again. Typically, many designs are tested.

Optimization: Issues

For this to be practical, total computation time must be limited. Therefore, we must control both the computation time per iteration and the number of iterations. Computation time per iteration includes evaluation time and the time to determine the next design to be evaluated. The technical literature generally focuses on limiting the number of iterations by proposing designs efficiently. The number of iterations is also limited by choosing a reasonable termination criterion (i.e., the required accuracy). Reducing computation time per iteration is accomplished by using analytical models rather than simulations, and by using coarser approximations in early iterations and more accurate evaluations later.

Problem Statement

X is a set of possible choices. J is a scalar function defined on X. h and g are vector functions defined on X.

Problem: Find x ∈ X that satisfies:

    maximize (or minimize) J(x)   (the objective)
    subject to h(x) = 0           (equality constraints)
               g(x) ≥ 0           (inequality constraints)

Taxonomy

static/dynamic; deterministic/stochastic; the set X: continuous/discrete/mixed. (Extensions: multi-criteria optimization, in which the set of all good compromises between different objectives is sought; games, in which there are multiple optimizers, each preferring different x's but none having complete control; etc.)

Continuous Variables and Objective

X = R^n. J is a scalar function defined on R^n. h (with values in R^m) and g (with values in R^k) are vector functions defined on R^n.

Problem: Find x ∈ R^n that satisfies:

    maximize (or minimize) J(x)
    subject to h(x) = 0
               g(x) ≥ 0

Unconstrained One-dimensional Search

Find t such that f(t) = 0. This is equivalent to: find t to maximize (or minimize) F(t), when F(t) is differentiable and f(t) = dF(t)/dt is continuous. If f(t) is differentiable, whether t is a maximum or a minimum depends on the sign of d²F(t)/dt².

Binary search: Assume f(t) is decreasing. Guess t_0 and t_1 such that f(t_0) > 0 and f(t_1) < 0. Let t_2 = (t_0 + t_1)/2. If f(t_2) < 0, then repeat with t_0 = t_0 and t_1 = t_2. If f(t_2) > 0, then repeat with t_0 = t_2 and t_1 = t_1. [Figure: f(t) with t_0, t_2, t_1 bracketing the root.]

Example: f(t) = 4 − t².

    t_0               t_2               t_1
    0                 1.5               3
    1.5               2.25              3
    1.5               1.875             2.25
    1.875             2.0625            2.25
    1.875             1.96875           2.0625
    1.96875           2.015625          2.0625
    1.96875           1.9921875         2.015625
    1.9921875         2.00390625        2.015625
    1.9921875         1.998046875       2.00390625
    1.998046875       2.0009765625      2.00390625
    1.998046875       1.99951171875     2.0009765625
    1.99951171875     2.000244140625    2.0009765625
    1.99951171875     1.9998779296875   2.000244140625
    1.9998779296875   2.00006103515625  2.000244140625
    1.9998779296875   1.99996948242188  2.00006103515625
    1.99996948242188  2.00001525878906  2.00006103515625
    1.99996948242188  1.99999237060547  2.00001525878906
    1.99999237060547  2.00000381469727  2.00001525878906
    1.99999237060547  1.99999809265137  2.00000381469727
    1.99999809265137  2.00000095367432  2.00000381469727
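The iteration is short enough to sketch directly. A minimal Python version, assuming (as the slide does) that f is decreasing and that the initial guesses bracket the root; the tolerance and function names are illustrative:

    def bisect(f, t0, t1, tol=1e-6):
        """Binary search for f(t) = 0, with f decreasing and f(t0) > 0 > f(t1)."""
        while t1 - t0 > tol:
            t2 = (t0 + t1) / 2.0
            if f(t2) < 0:
                t1 = t2   # root lies in [t0, t2]
            else:
                t0 = t2   # root lies in [t2, t1]
        return (t0 + t1) / 2.0

    print(bisect(lambda t: 4 - t**2, 0.0, 3.0))  # converges to 2, as in the table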

Newton search, exact tangent: Guess t_0. Calculate df(t_0)/dt. Choose t_1 so that

    f(t_0) + (t_1 − t_0) df(t_0)/dt = 0.

Repeat with t_0 = t_1 until |f(t_0)| is small enough. [Figure: tangent at t_0 crossing the t-axis at t_1.]

Example: f(t) = 4 − t². Successive values of t_0:

    3
    2.16666666666667
    2.00641025641026
    2.00001024002621
    2.00000000002621
    2
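A corresponding Python sketch of the exact-tangent iteration; the stopping tolerance and iteration cap are illustrative safeguards, not part of the slide:

    def newton(f, df, t0, tol=1e-9, max_iter=50):
        """Solve f(t) = 0 by repeatedly solving f(t0) + (t1 - t0) df(t0)/dt = 0."""
        for _ in range(max_iter):
            if abs(f(t0)) < tol:
                break
            t0 -= f(t0) / df(t0)
        return t0

    print(newton(lambda t: 4 - t**2, lambda t: -2.0*t, 3.0))  # 3, 2.1667, ... -> 2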

Newton search, approximate tangent (the secant method): Guess t_0 and t_1. Calculate the approximate slope

    s = (f(t_1) − f(t_0)) / (t_1 − t_0).

Choose t_2 so that

    f(t_0) + (t_2 − t_0) s = 0.

Repeat with t_0 = t_1 and t_1 = t_2 until |f(t_0)| is small enough. [Figure: secant through (t_0, f(t_0)) and (t_1, f(t_1)) crossing the t-axis at t_2.]

Example: f(t) = 4 − t². Successive values of t_0:

    0
    3
    1.33333333333333
    1.84615384615385
    2.03225806451613
    1.99872040946897
    1.99998976002621
    2.0000000032768
    1.99999999999999
    2
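The approximate-tangent version differs from Newton only in replacing df/dt by the slope s through the two most recent points (again a sketch with illustrative tolerances):

    def secant(f, t0, t1, tol=1e-9, max_iter=50):
        """Solve f(t) = 0 using the secant through (t0, f(t0)) and (t1, f(t1))."""
        for _ in range(max_iter):
            if abs(f(t1)) < tol:
                break
            s = (f(t1) - f(t0)) / (t1 - t0)  # approximate slope
            t0, t1 = t1, t1 - f(t1) / s      # zero of the secant line
        return t1

    print(secant(lambda t: 4 - t**2, 0.0, 3.0))  # 0, 3, 1.3333, ... -> 2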

Unconstrained Multi-dimensional Search

The optimum is often found by steepest ascent or hill-climbing methods. [Figure: contours of J over (x_1, x_2), with steepest ascent directions pointing toward the optimum.]

Gradient search (also called steepest ascent or steepest descent): To maximize J(x), where x is a vector (and J is a scalar function that has nice properties):

0. Set n = 0. Guess x_0.
1. Evaluate ∂J/∂x (x_n).
2. Let t be a scalar. Define J_n(t) = J(x_n + t ∂J/∂x(x_n)). Find (by one-dimensional search) t_n, the value of t that maximizes J_n(t).
3. Set x_{n+1} = x_n + t_n ∂J/∂x(x_n).
4. Set n ← n + 1. Go to Step 1.
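A compact Python sketch of this loop, using a finite-difference gradient and a coarse grid for the one-dimensional search in Step 2 (both are illustrative substitutes for whatever gradient and line search one actually has):

    import numpy as np

    def gradient_search(J, x0, tol=1e-6, max_iter=200, h=1e-6):
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            # Step 1: estimate dJ/dx at x_n by finite differences.
            g = np.array([(J(x + h*e) - J(x)) / h for e in np.eye(len(x))])
            if np.linalg.norm(g) < tol:
                break
            # Step 2: one-dimensional search over t on a coarse grid.
            t = max(np.logspace(-6, 1, 50), key=lambda t: J(x + t*g))
            # Step 3: move to x_{n+1}.
            x = x + t*g
        return x

    # Maximize J(x) = -(x1 - 1)^2 - (x2 + 2)^2; the optimum is (1, -2).
    print(gradient_search(lambda x: -(x[0]-1)**2 - (x[1]+2)**2, [0.0, 0.0]))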

Constrained

Equality constrained: the solution is on the constraint surface h(x_1, x_2) = 0. Problems are much easier when the constraint is linear, i.e., when the surface is a plane. In that case, replace ∂J/∂x by its projection onto the constraint plane. But first: find an initial feasible guess. [Figure: constrained optimum on the curve h(x_1, x_2) = 0.]

Inequality constrained: the solution is required to be on one side of the plane, g(x_1, x_2) ≥ 0. Inequality constraints that are satisfied with equality are called effective or active constraints. If we knew which constraints would be effective, the problem would reduce to an equality-constrained optimization. [Figure: constrained optimum on the boundary of the region g(x_1, x_2) > 0.]

Example: Minimize 8(x + y) − (x⁴ + y⁴)/4 subject to x + y ≥ 0. Solving a linearly-constrained problem is relatively easy: if the solution is not in the interior, search within the boundary plane. [Figure: contours of 8(x + y) − (x⁴ + y⁴)/4 with the linear constraint boundary.]

Example: Minimize 8(x + y) − (x⁴ + y⁴)/4 subject to x − (x − y)² + 1 ≥ 0. Solving a nonlinearly-constrained problem is not so easy: searching within the boundary is numerically difficult. [Figure: same contours with the nonlinear constraint boundary.]

Nonlinear and Linear Programming

Optimization problems with continuous variables, objective, and constraints are called nonlinear programming problems, especially when at least one of J, h, g is not linear. When all of J, h, and g are linear, the problem is a linear programming problem.

Multiple Optima

Danger: a search might find a local, rather than the global, optimum. [Figure: surface of J over (x_1, x_2) with a global maximum and several local (or relative) maxima.]

Primals and Duals

Consider the two problems:

    min f(x) subject to j(x) ≥ J
    max j(x) subject to f(x) ≤ F

f(x), F, j(x), and J are scalars. We will call these problems duals of one another. (However, this is not the conventional use of the term.) Under certain conditions, when the last inequalities are effective, the same x satisfies both problems. We will call one the primal problem and the other the dual problem.

Generalization:

    min f(x) subject to h(x) = 0, g(x) ≥ 0, j(x) ≥ J
    max j(x) subject to h(x) = 0, g(x) ≥ 0, f(x) ≤ F

Problem Statement

[Line: M_1 → B_1 → M_2 → B_2 → M_3 → B_3 → M_4 → B_4 → M_5 → B_5 → M_6]

Problem: Design the buffer space for a line. The machines have already been selected. Minimize the total buffer space needed to achieve a target production rate. Other problems: minimize total average inventory; maximize profit (revenue − inventory cost − buffer space cost); choose machines as well as buffer sizes; etc.

Assume a deterministic processing time line with k machines, with r_i and p_i known for all i = 1, ..., k. Assume a minimum buffer size N_MIN and a target production rate P*. Then the function P(N_1, ..., N_{k−1}) is known: it can be evaluated using the decomposition method. The problem is:

Primal problem:

    Minimize Σ_{i=1}^{k−1} N_i
    subject to P(N_1, ..., N_{k−1}) ≥ P*
               N_i ≥ N_MIN, i = 1, ..., k−1.

In the following, we treat the N_i like a set of continuous variables.

Properties of P(N_1, ..., N_{k−1})

    P(∞, ..., ∞) = min_{i=1,...,k} e_i, where e_i = r_i / (r_i + p_i).

    P(N_MIN, ..., N_MIN) = 1 / (1 + Σ_{i=1}^{k} p_i/r_i) ≪ P(∞, ..., ∞).

Continuity: a small change in any N_i creates a small change in P.
Monotonicity: the production rate increases monotonically in each N_i.
Concavity: the production rate appears to be a concave function of the vector (N_1, ..., N_{k−1}).

Example: 3-machine line, with r_1 = .35, r_2 = .15, r_3 = .4; p_1 = .037, p_2 = .015, p_3 = .02; e_1 = .904, e_2 = .909, e_3 = .952. [Figure: surface and contour plots of P(N_1, N_2), with the optimal curve and contours from P = 0.8800 to P = 0.9000.]
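For reference, these efficiencies follow from the isolated efficiency formula e_i = r_i/(r_i + p_i) quoted above; a one-line check:

    rs, ps = [.35, .15, .4], [.037, .015, .02]
    print([round(r / (r + p), 3) for r, p in zip(rs, ps)])  # [0.904, 0.909, 0.952]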

Solution: Primal Problem

    Minimize Σ_{i=1}^{k−1} N_i
    subject to P(N_1, ..., N_{k−1}) ≥ P*
               N_i ≥ N_MIN, i = 1, ..., k−1.

Difficulty: If all the buffers are larger than N_MIN, the solution will satisfy P(N_1, ..., N_{k−1}) = P*. (Why?) But P(N_1, ..., N_{k−1}) is nonlinear and cannot be expressed in closed form. Therefore any solution method will have to search within this constraint, so all steps will be small and there will be many iterations. It would be desirable to transform this problem into one with linear constraints.

Solution: Dual Problem

    Maximize P(N_1, ..., N_{k−1})
    subject to Σ_{i=1}^{k−1} N_i ≤ N_TOTAL (specified)
               N_i ≥ N_MIN, i = 1, ..., k−1.

All the constraints are linear. The solution will satisfy the N_TOTAL constraint with equality (assuming the problem is feasible). (1. Why? 2. When would the problem be infeasible?) This problem is consequently relatively easy to solve.

[Figure: the constraint set in buffer space, the plane N_1 + N_2 + N_3 = N_TOTAL (if N_MIN = 0).]

Solution: Primal Strategy

Solution of the primal problem:
1. Guess N_TOTAL.
2. Solve the dual problem. Evaluate P = P_MAX(N_TOTAL).
3. Use a one-dimensional search method to find N_TOTAL such that P_MAX(N_TOTAL) = P*.

Solution: Dual Algorithm

    Maximize P(N_1, ..., N_{k−1})
    subject to Σ_{i=1}^{k−1} N_i = N_TOTAL (specified)
               N_i ≥ N_MIN, i = 1, ..., k−1.

Start with an initial guess (N_1, ..., N_{k−1}) that satisfies Σ_{i=1}^{k−1} N_i = N_TOTAL. Calculate the gradient vector (g_1, ..., g_{k−1}):

    g_i = [P(N_1, ..., N_i + δN, ..., N_{k−1}) − P(N_1, ..., N_i, ..., N_{k−1})] / δN

Calculate the projected gradient vector (ĝ_1, ..., ĝ_{k−1}):

    ĝ_i = g_i − ḡ, where ḡ = (1/(k−1)) Σ_{i=1}^{k−1} g_i.

The projected gradient ĝ satisfies

    Σ_{i=1}^{k−1} ĝ_i = Σ_{i=1}^{k−1} (g_i − ḡ) = Σ_{i=1}^{k−1} g_i − (k−1)ḡ = 0.

Therefore, if A is a scalar, then

    Σ_{i=1}^{k−1} (N_i + Aĝ_i) = Σ_{i=1}^{k−1} N_i + A Σ_{i=1}^{k−1} ĝ_i = Σ_{i=1}^{k−1} N_i,

so if (N_1, ..., N_{k−1}) satisfies Σ_{i=1}^{k−1} N_i = N_TOTAL and N'_i = N_i + Aĝ_i for any scalar A, then (N'_1, ..., N'_{k−1}) satisfies Σ_{i=1}^{k−1} N'_i = N_TOTAL. That is, if N is on the constraint, then N + Aĝ is also on the constraint (as long as all elements are ≥ N_MIN).

The gradient g is the direction of greatest increase of P. The projected gradient ĝ is the direction of greatest increase of P within the constraint plane. Therefore, once we have a point N on the constraint plane, the best improvement is to move in the direction of ĝ, that is, to N + Aĝ. To find the best possible improvement, we find A*, the value of A that maximizes P(N + Aĝ). A is a scalar, so this is a one-dimensional search. N + A*ĝ is the next guess for N, and the process repeats.

Dual algorithm flowchart:

1. Specify an initial guess N = (N_1, ..., N_{k−1}) and search parameters.
2. Calculate the gradient g and the search direction p. (Here, p = ĝ.)
3. Find A* such that P(N + A*p) is maximized. Define N̂ = N + A*p.
4. If N̂ is close to N, then N is the solution; terminate. Otherwise set N = N̂ and go to Step 2.
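A sketch of this loop in Python. Here P_eval stands in for the decomposition-based evaluator of P(N_1, ..., N_{k−1}) (not shown), and the step size δN, the grid for the one-dimensional search over A, and the tolerances are all illustrative choices:

    import numpy as np

    def dual_algorithm(P_eval, N0, N_min, dN=1.0, tol=1e-3, max_iter=100):
        N = np.asarray(N0, dtype=float)  # initial guess; must sum to N_TOTAL
        for _ in range(max_iter):
            # Gradient g_i by finite differences, as on the previous slide.
            g = np.array([(P_eval(N + dN*e) - P_eval(N)) / dN
                          for e in np.eye(len(N))])
            ghat = g - g.mean()          # projected gradient; sum(ghat) = 0
            # One-dimensional search over A, staying feasible (N_i >= N_min).
            grid = np.linspace(0.0, 50.0, 200)
            A = max((a for a in grid if np.all(N + a*ghat >= N_min)),
                    key=lambda a: P_eval(N + a*ghat))
            N_hat = N + A*ghat
            if np.linalg.norm(N_hat - N) < tol:   # is N_hat close to N?
                return N_hat
            N = N_hat
        return N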

[Figure: trajectory of the dual algorithm on the contour plot, starting from the initial guess.]

Solution: Primal Algorithm

We can solve the dual problem for any N_TOTAL. We can calculate N_1(N_TOTAL), N_2(N_TOTAL), ..., N_{k−1}(N_TOTAL), and P_MAX(N_TOTAL). [Figure: the optimal curve traced over the contours P = 0.8800 to P = 0.9000 in the (N_1, N_2) plane.]

[Figure: P_MAX(N_TOTAL) as a function of N_TOTAL; the maximum average production rate rises from about .78 to about .92 as the total buffer space grows from 0 to 200.]

Then, we can find, by one-dimensional search, N_TOTAL such that P_MAX(N_TOTAL) = P*.

Primal algorithm flowchart:

1. Set N⁰ = (k−1) N_MIN. Calculate P_MAX(N⁰).
2. Specify an initial guess N¹ and search parameters. Solve the dual to obtain P_MAX(N¹). Set j = 2.
3. Calculate N^j from (7). (Here, (7) refers to a one-dimensional search.)
4. Call the dual algorithm to evaluate P_MAX(N^j).
5. If P_MAX(N^j) is close enough to P*, the minimum total buffer space is N^j; terminate. Otherwise, increment j by 1 and go to Step 3.
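Since P_MAX(N_TOTAL) increases monotonically, the outer loop is itself a one-dimensional search; a bisection-style sketch (dual_solve is assumed to return P_MAX(N_TOTAL), e.g., by running the dual algorithm above):

    def primal_algorithm(dual_solve, P_star, N_lo, N_hi, tol=0.5):
        """Find the smallest N_TOTAL with P_MAX(N_TOTAL) >= P_star.
        Assumes P_MAX(N_lo) < P_star <= P_MAX(N_hi)."""
        while N_hi - N_lo > tol:
            N_mid = (N_lo + N_hi) / 2.0
            if dual_solve(N_mid) < P_star:
                N_lo = N_mid   # not enough space: target unmet
            else:
                N_hi = N_mid   # target met: try less space
        return N_hi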

Example: The Bowl Phenomenon

Problem: how to allocate space in a line with identical machines. Case: 20-machine continuous material line, r_i = .0952, p_i = .005, and µ_i = 1, i = 1, ..., 20. First, we show the average WIP distribution when all buffers are the same size: N_i = 53, i = 1, ..., 19. [Bar chart: average buffer level for buffers 1-19.]

This shows the optimal distribution of buffer space and the resulting distribution of average inventory. [Bar chart: buffer size and average buffer level for buffers 1-19.]

Observations:
- The optimal distribution of buffer space does not look like the distribution of inventory in the line with equal buffers. Why not?
- Explain the shape of the optimal distribution.
- The distribution of average inventory is not symmetric.

This shows the ratios of average inventory to buffer size with equal buffers and with optimal buffers. [Chart: ratio of average buffer level to buffer size for buffers 1-19, under the equal-buffers and optimal-buffers allocations.]

Example

Design the buffers for a 20-machine production line. The machines have been selected, and the only decision remaining is the amount of space to allocate for in-process inventory. The goal is to determine the smallest amount of in-process inventory space so that the line meets a production rate target.

The common operation time is one operation per minute. The target production rate is .88 parts per minute.

Case 1: MTTF = 200 minutes and MTTR = 10.5 minutes for all machines (P = .95 parts per minute).

Case 2: Like Case 1 except Machine 5. For Machine 5, MTTF = 100 minutes and MTTR = 10.5 minutes (P = .905 parts per minute).

Case 3: Like Case 1 except Machine 5. For Machine 5, MTTF = 200 minutes and MTTR = 21 minutes (P = .905 parts per minute).

Are buffers really needed?

    Line                                          Case 1   Case 2   Case 3
    Production rate with no buffers (parts/min)   .487     .475     .475

Yes. How were these numbers calculated?
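They follow from the zero-buffer formula for a deterministic processing time line, P = 1/(1 + Σ_i p_i/r_i), quoted earlier among the properties of P. A quick check against the table (the slide truncates to three digits):

    def zero_buffer_rate(ps, rs):
        # P(0, ..., 0) = 1 / (1 + sum of p_i / r_i)
        return 1.0 / (1.0 + sum(p / r for p, r in zip(ps, rs)))

    k, p, r = 20, 1/200, 1/10.5             # MTTF = 200, MTTR = 10.5
    print(zero_buffer_rate([p]*k, [r]*k))   # Case 1: 0.4878...
    ps2 = [p]*k; ps2[4] = 1/100             # Machine 5: MTTF = 100
    print(zero_buffer_rate(ps2, [r]*k))     # Case 2: 0.4756...
    rs3 = [r]*k; rs3[4] = 1/21              # Machine 5: MTTR = 21
    print(zero_buffer_rate([p]*k, rs3))     # Case 3: 0.4756...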

Solution

    Line     Description                          Total space
    Case 1   All machines identical               430
    Case 2   Machine 5 bottleneck (MTTF = 100)    485
    Case 3   Machine 5 bottleneck (MTTR = 21)     523

[Bar chart: optimal buffer sizes for buffers 1-19 in each of the three cases.]

This shows the optimal distribution of buffer space and the resulting distribution of average inventory for Case 3. [Bar chart: buffer size and average buffer level for buffers 1-19.]

This shows the ratio of average inventory to buffer size with optimal buffers for Case 3. [Chart: ratio of average buffer level to buffer size by buffer.]

Case 4: Same as Case 3 except the bottleneck is at Machine 15. This shows the optimal distribution of buffer space and the resulting distribution of average inventory for Case 4. [Bar chart: buffer size and average buffer level for buffers 1-19.]

This shows the ratio of average inventory to buffer size with optimal buffers for Case 4. [Chart: ratio of average buffer level to buffer size by buffer.]

Case 5: MTTF bottleneck at Machine 5, MTTR bottleneck at Machine 15. This shows the optimal distribution of buffer space and the resulting distribution of average inventory for Case 5. [Bar chart: buffer size and average buffer level for buffers 1-19.]

This shows the ratio of average inventory to buffer size with optimal buffers for Case 5. [Chart: ratio of average buffer level to buffer size by buffer.]

Case 6: Like Case 5, but with 50 machines and the MTTR bottleneck at Machine 45. This shows the optimal distribution of buffer space and the resulting distribution of average inventory for Case 6. [Bar chart: buffer size and average buffer level for buffers 1-49.]

This shows the ratio of average inventory to buffer size with optimal buffers for Case 6. [Chart: ratio of average buffer level to buffer size by buffer.]

Observation from studying buffer space allocation problems: buffer space is needed most where buffer level variability is greatest!

Profit as a Function of Buffer Sizes

Three-machine, continuous material line: r_i = .1, p_i = .01, µ_i = 1.

    Π = 1000 P(N_1, N_2) − (n̄_1 + n̄_2),

where n̄_i is the average level of buffer i. [Surface plot: profit Π over (N_1, N_2), with contour labels 800, 810, 820, 827, 830, 832.]

MIT OpenCourseWare http://ocw.mit.edu 2.852 Manufacturing Systems Analysis Spring 2010 For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.