Topic one: Production line profit maximization subject to a production rate constraint. © 2010 Chuan Shi (Topic one: Line optimization, slide 22/79).


Production line profit maximization

The profit maximization problem:

    max J(N) = A P(N) - \sum_{i=1}^{k-1} b_i N_i - \sum_{i=1}^{k-1} c_i \bar{n}_i(N)
    s.t. P(N) \ge \hat{P},
         N_i \ge N_{\min}, i = 1, ..., k-1,

where
    P(N) = production rate, parts/time unit
    \hat{P} = required production rate, parts/time unit
    A = profit coefficient, $/part
    \bar{n}_i(N) = average inventory of buffer i, i = 1, ..., k-1
    b_i = buffer cost coefficient, $/part/time unit
    c_i = inventory cost coefficient, $/part/time unit
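
To make the objective concrete, here is a minimal sketch of how J(N) could be evaluated. The evaluators `production_rate` and `avg_inventory` are hypothetical stand-ins for a line-evaluation method (e.g., a decomposition-based evaluator); they and the function name are not from the slides.

```python
from typing import Callable, Sequence

def profit(N: Sequence[float],
           A: float,
           b: Sequence[float],
           c: Sequence[float],
           production_rate: Callable[[Sequence[float]], float],
           avg_inventory: Callable[[Sequence[float], int], float]) -> float:
    """J(N) = A*P(N) - sum_i b_i*N_i - sum_i c_i*nbar_i(N).

    production_rate(N) and avg_inventory(N, i) are assumed to be supplied
    by a production-line evaluator (hypothetical stand-ins here).
    """
    P = production_rate(N)
    buffer_cost = sum(bi * Ni for bi, Ni in zip(b, N))
    inventory_cost = sum(ci * avg_inventory(N, i) for i, ci in enumerate(c))
    return A * P - buffer_cost - inventory_cost
```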

An example about the research goal

Figure 2: J(N) vs. N_1 and N_2 (feasible allocations and the optimal boundary are marked).

Two problems

Original constrained problem:

    max J(N) = A P(N) - \sum_{i=1}^{k-1} b_i N_i - \sum_{i=1}^{k-1} c_i \bar{n}_i(N)
    s.t. P(N) \ge \hat{P},
         N_i \ge N_{\min}, i = 1, ..., k-1.

Simpler unconstrained problem (Schor's problem):

    max J(N) = A P(N) - \sum_{i=1}^{k-1} b_i N_i - \sum_{i=1}^{k-1} c_i \bar{n}_i(N)
    s.t. N_i \ge N_{\min}, i = 1, ..., k-1.

("Unconstrained" here means the production rate constraint is dropped; the lower bounds N_i \ge N_{\min} remain.)

An example for algorithm derivation

Data (three-machine, two-buffer line): r_1 = .1, p_1 = .1, r_2 = .11, p_2 = .1, r_3 = .1, p_3 = .9, \hat{P} = .88

Cost function:

    J(N) = 2 P(N) - N_1 - N_2 - \bar{n}_1(N) - \bar{n}_2(N)

Figure 3: J(N) vs. N_1 and N_2. Figure 4: the (N_1, N_2) plane divided by the curve P(N_1, N_2) = \hat{P} into the regions where P(N_1, N_2) > \hat{P} and P(N_1, N_2) < \hat{P}.

An example for algorithm derivation

Figure 5: J(N) vs. N_1 and N_2.

Algorithm derivation: two cases

Case 1: The solution N^u of the unconstrained problem

    max J(N) = A P(N) - \sum_{i=1}^{k-1} b_i N_i - \sum_{i=1}^{k-1} c_i \bar{n}_i(N)
    s.t. N_i \ge N_{\min}, i = 1, ..., k-1,

satisfies P(N^u) \ge \hat{P}. In this case, the solution of the constrained problem is the same as the solution of the unconstrained problem. We are done.

(Figure: N^u = (N_1^u, N_2^u) lies inside the region where P(N_1, N_2) > \hat{P}.)

Algorithm derivation: two cases (continued)

Case 2: N^u satisfies P(N^u) < \hat{P}. This is not the solution of the constrained problem.

(Figure: N^u = (N_1^u, N_2^u) lies outside the region where P(N_1, N_2) > \hat{P}.)

Algorithm derivation: two cases (continued)

Case 2 (continued): In this case, we consider the following unconstrained problem:

    max J'(N) = A' P(N) - \sum_{i=1}^{k-1} b_i N_i - \sum_{i=1}^{k-1} c_i \bar{n}_i(N)
    s.t. N_i \ge N_{\min}, i = 1, ..., k-1,

in which A is replaced by A'. Let N^*(A') be the solution to this problem and P^*(A') = P(N^*(A')).

Assertion

The constrained problem

    max J(N) = A' P(N) - \sum_{i=1}^{k-1} b_i N_i - \sum_{i=1}^{k-1} c_i \bar{n}_i(N)
    s.t. P(N) \ge \hat{P},
         N_i \ge N_{\min}, i = 1, ..., k-1,

has the same solution for all A' for which the solution of the corresponding unconstrained problem

    max J(N) = A' P(N) - \sum_{i=1}^{k-1} b_i N_i - \sum_{i=1}^{k-1} c_i \bar{n}_i(N)
    s.t. N_i \ge N_{\min}, i = 1, ..., k-1,

has P^*(A') \le \hat{P}.

Interpretation of the assertion

We claim: if the optimal solution of the unconstrained problem is not that of the constrained problem, then the solution of the constrained problem, N^* = (N_1^*, ..., N_{k-1}^*), satisfies P(N_1^*, ..., N_{k-1}^*) = \hat{P}.

(Figure: illustration of the claim, marking the unconstrained optimum N^*(A') for which P(N^*(A')) < \hat{P}.)

We formally prove this by the Karush-Kuhn-Tucker (KKT) conditions of nonlinear programming.

Interpretation of the assertion

(Figure: four panels of J(N) vs. N_1 and N_2, for the original A and for increasing values of A', ending with the final A' = 4535.82; each panel marks the feasible and infeasible regions and the optimal boundary.)

Karush-Kuhn-Tucker (KKT) conditions

Let x^* be a local minimum of the problem

    min f(x)
    s.t. h_1(x) = 0, ..., h_m(x) = 0,
         g_1(x) \le 0, ..., g_r(x) \le 0,

where f, h_i, and g_j are continuously differentiable functions from R^n to R (and a constraint qualification holds at x^*, so that x^* is regular). Then there exist unique Lagrange multipliers \lambda_1^*, ..., \lambda_m^* and \mu_1^*, ..., \mu_r^* satisfying the following conditions:

    \nabla_x L(x^*, \lambda^*, \mu^*) = 0,
    \mu_j^* \ge 0,  j = 1, ..., r,
    \mu_j^* g_j(x^*) = 0,  j = 1, ..., r,

where L(x, \lambda, \mu) = f(x) + \sum_{i=1}^{m} \lambda_i h_i(x) + \sum_{j=1}^{r} \mu_j g_j(x) is called the Lagrangian function.
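
As a quick illustration (a small example added here, not from the slides), consider the one-variable problem min x^2 subject to 1 - x \le 0:

    L(x, \mu) = x^2 + \mu (1 - x),
    \nabla_x L = 2x - \mu = 0,  \mu \ge 0,  \mu (1 - x^*) = 0.

If the constraint were inactive (\mu = 0), stationarity would give x^* = 0, which violates x \ge 1; so the constraint is active, x^* = 1 and \mu^* = 2 > 0. The same pattern appears below: an active production-rate constraint forces P(N^*) = \hat{P} with a strictly positive multiplier.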

Convert the constrained problem to minimization form

Minimization form: the constrained problem becomes

    min -J(N) = -A P(N) + \sum_{i=1}^{k-1} b_i N_i + \sum_{i=1}^{k-1} c_i \bar{n}_i(N)
    s.t. \hat{P} - P(N) \le 0,
         N_{\min} - N_i \le 0, i = 1, ..., k-1.

We have argued that we treat the N_i as continuous variables, and P(N) and J(N) as continuously differentiable functions.

Applying KKT conditions

The Slater constraint qualification for convex inequalities guarantees the existence of Lagrange multipliers for our problem. So, there exist unique Lagrange multipliers \mu and \mu_i, i = 1, ..., k-1, for the constrained problem to satisfy the KKT conditions:

    \nabla(-J(N^*)) + \mu \nabla(\hat{P} - P(N^*)) + \sum_{i=1}^{k-1} \mu_i \nabla(N_{\min} - N_i^*) = 0     (1)

or, componentwise,

    \partial J(N^*)/\partial N_i + \mu \, \partial P(N^*)/\partial N_i + \mu_i = 0,  i = 1, ..., k-1,     (2)

Applying KKT conditions

and

    \mu \ge 0,  \mu_i \ge 0,  i = 1, ..., k-1,     (3)
    \mu (\hat{P} - P(N^*)) = 0,     (4)
    \mu_i (N_{\min} - N_i^*) = 0,  i = 1, ..., k-1,     (5)

where N^* is the optimal solution of the constrained problem.

Assume that N_i^* > N_{\min} for all i. In this case, by equation (5), we know that \mu_i = 0, i = 1, ..., k-1.

Applying KKT conditions

The KKT conditions are simplified to

    \partial J(N^*)/\partial N_i + \mu \, \partial P(N^*)/\partial N_i = 0,  i = 1, ..., k-1,     (6)
    \mu (\hat{P} - P(N^*)) = 0,     (7)

where \mu \ge 0. Since N^* is not the optimal solution of the unconstrained problem, \nabla J(N^*) \ne 0. Thus \mu \ne 0, since otherwise condition (6) would be violated. By condition (7), the optimal solution satisfies P(N^*) = \hat{P}.

Applying KKT conditions

In addition, conditions (6) and (7) reveal how we could find \mu and N^*. For every \mu, condition (6) determines N^*, since there are k-1 equations and k-1 unknowns. Therefore, we can think of N^* = N^*(\mu). We search for a value of \mu such that P(N^*(\mu)) = \hat{P}. As we indicate in the following, this is exactly what the algorithm does.
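
Stated as a scalar equation (a restatement, with the notation \varphi introduced here for convenience, not taken from the slides):

    \varphi(\mu) = P(N^*(\mu)),  where N^*(\mu) solves  \nabla J(N^*(\mu)) + \mu \nabla P(N^*(\mu)) = 0,

and the algorithm seeks \mu^* \ge 0 with \varphi(\mu^*) = \hat{P}. The search over \mu is equivalent to the search over A' = A + \mu described in the following slides.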

Applying KKT conditions

Replacing \mu by \mu' > 0 in condition (6) gives

    \partial J(N^c)/\partial N_i + \mu' \, \partial P(N^c)/\partial N_i = 0,  i = 1, ..., k-1,     (8)

where N^c is the unique solution of (8). Note that N^c is the solution of the following optimization problem:

    min \tilde{J}(N) = -J(N) + \mu' (\hat{P} - P(N))
    s.t. N_i \ge N_{\min}, i = 1, ..., k-1.     (9)

Applying KKT conditions

The problem above is equivalent to

    max J'(N) = J(N) - \mu' (\hat{P} - P(N))
    s.t. N_i \ge N_{\min}, i = 1, ..., k-1,     (10)

or

    max J'(N) = A P(N) - \sum_{i=1}^{k-1} b_i N_i - \sum_{i=1}^{k-1} c_i \bar{n}_i(N) - \mu' (\hat{P} - P(N))
    s.t. N_i \ge N_{\min}, i = 1, ..., k-1,     (11)

or, dropping the constant term -\mu' \hat{P}, which does not affect the maximizer,

    max J'(N) = (A + \mu') P(N) - \sum_{i=1}^{k-1} b_i N_i - \sum_{i=1}^{k-1} c_i \bar{n}_i(N)
    s.t. N_i \ge N_{\min}, i = 1, ..., k-1,     (12)

Applying KKT conditions

or, finally,

    max J'(N) = A' P(N) - \sum_{i=1}^{k-1} b_i N_i - \sum_{i=1}^{k-1} c_i \bar{n}_i(N)
    s.t. N_i \ge N_{\min}, i = 1, ..., k-1,     (13)

where A' = A + \mu'. This is exactly the unconstrained problem, and N^c is its optimal solution. Note that \mu' > 0 implies A' > A. In addition, the KKT conditions indicate that the optimal solution of the constrained problem satisfies P(N^*) = \hat{P}. This means that, for every A' > A (or \mu' > 0), we can find the corresponding solution N^c satisfying condition (8) by solving problem (13). We need to find the A' such that the solution to problem (13), denoted N^*(A'), satisfies P(N^*(A')) = \hat{P}.

Applying KKT conditions

Then \mu^* = A' - A and N^*(A') satisfy conditions (6) and (7):

    \nabla J(N^*(A')) + \mu^* \nabla P(N^*(A')) = 0,
    \mu^* (\hat{P} - P(N^*(A'))) = 0.

Hence \mu^* = A' - A is exactly the Lagrange multiplier satisfying the KKT conditions of the constrained problem, and N^* = N^*(A') is the optimal solution of the constrained problem.

Consequently, solving the constrained problem through our algorithm is essentially finding the unique Lagrange multipliers and optimal solution of the problem.

Algorithm summary for case 2

Solve unconstrained problem: Solve, by a gradient method, the unconstrained problem for fixed A':

    max J'(N) = A' P(N) - \sum_{i=1}^{k-1} b_i N_i - \sum_{i=1}^{k-1} c_i \bar{n}_i(N)
    s.t. N_i \ge N_{\min}, i = 1, ..., k-1.

Search: Do a one-dimensional search on A' > A to find the A' such that the solution of the unconstrained problem, N^*(A'), satisfies P(N^*(A')) = \hat{P}.

Flowchart: Choose A' -> Solve unconstrained problem -> Is P(N^*(A')) = \hat{P}? If no, choose a new A' and repeat; if yes, quit. A sketch of this loop in code follows below.
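
A minimal sketch of the case-2 loop, under stated assumptions: `solve_unconstrained(A_prime)` stands in for the gradient method applied to problem (13) and returns the maximizing buffer vector, `production_rate` evaluates P(N), and P(N^*(A')) is assumed to increase continuously with A' so that bisection can bracket the target rate. None of these names come from the slides.

```python
from typing import Callable, Sequence

def solve_case_2(A: float,
                 P_hat: float,
                 solve_unconstrained: Callable[[float], Sequence[float]],
                 production_rate: Callable[[Sequence[float]], float],
                 tol: float = 1e-4,
                 max_iter: int = 100) -> tuple[Sequence[float], float]:
    """One-dimensional search on A' > A so that P(N*(A')) = P_hat.

    solve_unconstrained(A') solves problem (13) for the given A'
    (hypothetical stand-in for the gradient method); production_rate(N)
    evaluates P(N).  Returns (N*(A'), A')."""
    # Grow A' until the unconstrained optimum meets the required rate,
    # giving an upper bracket for the bisection.
    lo, hi = A, 2.0 * A
    while production_rate(solve_unconstrained(hi)) < P_hat:
        lo, hi = hi, 2.0 * hi
    # Bisection on A' (assumes P(N*(A')) is continuous and increasing in A').
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        N = solve_unconstrained(mid)
        if abs(production_rate(N) - P_hat) <= tol:
            return N, mid
        if production_rate(N) < P_hat:
            lo = mid
        else:
            hi = mid
    return solve_unconstrained(hi), hi
```

The returned A' also gives the multiplier estimate \mu^* = A' - A discussed above.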

Numerical results

Numerical experiment outline:
Experiments on short lines.
Experiments on long lines.
Computation speed.

Method we use to check the algorithm: \hat{P} surface search in (N_1, ..., N_{k-1}) space. All buffer size allocations N such that P(N) = \hat{P} compose the \hat{P} surface. (A sketch of this check appears below.)
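
For a two-buffer line, a brute-force version of this check could look like the sketch below; `production_rate(N)` and `profit(N)` are hypothetical evaluators, and the integer grid and tolerance are illustrative choices, not values from the slides.

```python
from itertools import product
from typing import Callable, Sequence

def p_hat_surface_search(P_hat: float,
                         N_min: int,
                         N_max: int,
                         production_rate: Callable[[Sequence[float]], float],
                         profit: Callable[[Sequence[float]], float],
                         tol: float = 1e-3):
    """Enumerate integer allocations (N1, N2), keep those near the P_hat
    surface (|P(N) - P_hat| <= tol), and return the most profitable one."""
    best_N, best_J = None, float("-inf")
    for N in product(range(N_min, N_max + 1), repeat=2):
        if abs(production_rate(N) - P_hat) <= tol:
            J = profit(N)
            if J > best_J:
                best_N, best_J = N, J
    return best_N, best_J
```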

\hat{P} surface search

Figure 6: \hat{P} surface search — the \hat{P} surface in (N_1, N_2) space and the optimal solution on it.

Experiment on short lines (4-buffer line)

Line parameters: \hat{P} = .88

    machine   M_1    M_2    M_3    M_4    M_5
    r         .11    .12    .1     .9     .1
    p         .8     .1     .1     .1     .1

Machine 4 is the least reliable machine (the bottleneck) of the line.

Cost function:

    J(N) = 2500 P(N) - \sum_{i=1}^{4} N_i - \sum_{i=1}^{4} \bar{n}_i(N)

Experiment on short lines (4-buffer line): results

Optimal solutions:

                 P̂ surface search   The algorithm   Error    Rounded
    Prod. rate   .88                 .88                      .88
    N_1          28.85               28.857          .2%      29.
    N_2          58.46               58.5694         .19%     59.
    N_3          92.98               92.968          .8%      93.
    N_4          87.39               87.4415         .6%      87.
    n̄_1          19.682              19.726          .2%      19.1791
    n̄_2          34.384              34.3835         .23%     34.7289
    n̄_3          48.72               48.6981         .4%      48.9123
    n̄_4          31.9894             32.63           .5%      31.9485
    Profit ($)   1798.2              1798.1          .6%      1797.4

The maximal error is .23% and appears in n̄_2. Computer time for this experiment is 2.69 seconds.

Experiment on long lines (11-buffer line)

Line parameters: \hat{P} = .88

    machine   M_1    M_2    M_3    M_4    M_5    M_6
    r         .11    .12    .1     .9     .1     .11
    p         .8     .1     .1     .1     .1     .1

    machine   M_7    M_8    M_9    M_10   M_11   M_12
    r         .1     .11    .12    .1     .12    .9
    p         .9     .1     .9     .8     .1     .9

Cost function:

    J(N) = 6000 P(N) - \sum_{i=1}^{11} N_i - \sum_{i=1}^{11} \bar{n}_i(N)

Experiment on long lines (11-buffer line): results

Optimal solutions, buffer sizes:

                 P̂ surface search   The algorithm   Error    Rounded
    Prod. rate   .88                 .88                      .8799
    N_1          29.1                29.1769         .26%     29.
    N_2          59.2                59.283          .14%     59.
    N_3          97.8                97.798          .2%      98.
    N_4          107.5               107.4176        .8%      107.
    N_5          84.5                84.484          .2%      84.
    N_6          70.8                70.6892         .17%     71.
    N_7          63.1                63.1893         .14%     63.
    N_8          53.1                52.9274         .33%     53.
    N_9          47.2                47.2232         .5%      47.
    N_10         47.9                47.7967         .22%     48.
    N_11         48.8                48.7716         .6%      49.

Experiment on long lines (11-buffer line): results (continued)

Optimal solutions, average inventories:

                 P̂ surface search   The algorithm   Error    Rounded
    n̄_1          19.2388             19.2986         .31%     19.1979
    n̄_2          34.9561             35.423          .25%     34.8194
    n̄_3          52.5423             52.632          .12%     52.6833
    n̄_4          45.1528             45.184          .7%      45.835
    n̄_5          34.4289             34.477          .14%     34.279
    n̄_6          3.773               3.748           .1%      3.8229
    n̄_7          28.446              28.1299         .3%      28.92
    n̄_8          21.5666             21.5438         .11%     21.5932
    n̄_9          21.559              21.5442         .18%     21.4299
    n̄_10         22.6756             22.6496         .11%     22.733
    n̄_11         2.8692              2.8615          .4%      2.9613
    Profit ($)   4239.3              4239.2          .2%      4239.5

Computer time is 91.47 seconds.

Experiments for the Tolio, Matta, and Gershwin (2002) model

Consider a 4-machine, 3-buffer line with constraint \hat{P} = .87. In addition, A = 2000 and all b_i and c_i are 1.

    machine   M_1    M_2    M_3    M_4
    r_{i1}    .1     .12    .1     .2
    p_{i1}    .1     .8     .1     .7
    r_{i2}           .2            .16
    p_{i2}           .5            .4

                 P̂ surface search   The algorithm   Error
    P(N^*)       .8699               .8699
    N_1          29.86               29.993          .45%
    N_2          38.22               38.26           .52%
    N_3          20.68               20.7616         .39%
    n̄_1          17.2779             17.3674         .52%
    n̄_2          17.262              17.1792         .47%
    n̄_3          6.1996              6.2121          .2%
    Profit ($)   1610.3              1610.3          .0%

Experiments for the Levantesi, Matta, and Tolio (2003) model

Consider a 4-machine, 3-buffer line with constraint \hat{P} = .87. In addition, A = 2000 and all b_i and c_i are 1.

    machine   M_1    M_2    M_3    M_4
    \mu_i     1.     1.2    1.     1.
    r_{i1}    .1     .12    .1     .2
    p_{i1}    .1     .8     .1     .12
    r_{i2}           .2            .16
    p_{i2}           .5            .6

                 P̂ surface search   The algorithm   Error
    P(N^*)       .8699               .87
    N_1          27.72               27.942          .66%
    N_2          38.79               38.9281         .34%
    N_3          34.07               34.1574         .26%
    n̄_1          15.4288             15.5313         .66%
    n̄_2          19.8787             19.9711         .46%
    n̄_3          13.8937             13.9426         .35%
    Profit ($)   1590.0              1589.7          .2%

Computation speed

Experiment: Run the algorithm on a series of lines with identical machines to see how fast it can optimize longer lines.

The length of the line varies from 4 machines to 30 machines.
Machine parameters are p = .1 and r = .1.
In all cases, the required production rate is \hat{P} = .88.
The objective function is

    J(N) = A P(N) - \sum_{i=1}^{k-1} N_i - \sum_{i=1}^{k-1} \bar{n}_i(N),

where A = 5k for the line of length k.

Computation speed

Figure: computer time (seconds) versus length of the production line (4 to 30 machines), with reference lines at 1 minute, 10 minutes, and 30 minutes.

Algorithm reliability

We run the algorithm on 739 randomly generated 4-machine, 3-buffer lines. 98.92% of these experiments have a maximal error less than 6%.

Figure: histogram of the maximal error over the 739 experiments.

Algorithm reliability

Taking a closer look at those 98.92% of experiments, we find a more accurate distribution of the maximal error: out of the total 739 experiments, 83.9% have a maximal error less than 2%.

Figure: histogram of the maximal error (finer scale).

MIT OpenCourseWare
http://ocw.mit.edu

2.852 Manufacturing Systems Analysis
Spring 2010

For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.