Topic one: Production line profit maximization subject to a production rate constraint

© 2010 Chuan Shi — Topic one: Line optimization (22/79)
Production line profit maximization

The profit maximization problem:

$$\max\; J(\mathbf{N}) = A P(\mathbf{N}) - \sum_{i=1}^{k-1} b_i N_i - \sum_{i=1}^{k-1} c_i \bar{n}_i(\mathbf{N})$$
$$\text{s.t.}\;\; P(\mathbf{N}) \ge \hat{P},\quad N_i \ge N_{\min},\; i = 1, \ldots, k-1,$$

where
  $P(\mathbf{N})$ = production rate, parts/time unit
  $\hat{P}$ = required production rate, parts/time unit
  $A$ = profit coefficient, $/part
  $\bar{n}_i(\mathbf{N})$ = average inventory of buffer $i$, $i = 1, \ldots, k-1$
  $b_i$ = buffer cost coefficient, $/part/time unit
  $c_i$ = inventory cost coefficient, $/part/time unit
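To make the trade-off in $J(\mathbf{N})$ concrete, here is a minimal Python sketch of evaluating the objective for a candidate buffer allocation. The throughput model `P()` and inventory model `nbar()` are hypothetical stand-ins (a saturating function of total buffer space, and half-full buffers), not the line evaluator used in the lecture.

```python
# Sketch: evaluating the profit objective J(N) for a candidate buffer
# allocation N = (N_1, ..., N_{k-1}).  P() and nbar() are hypothetical
# stand-ins, NOT the decomposition/simulation evaluator of a real line.

def P(N, p_inf=0.9, beta=5.0):
    """Hypothetical saturating throughput: grows with total buffer space."""
    total = sum(N)
    return p_inf * total / (total + beta)

def nbar(N, fill=0.5):
    """Hypothetical average inventories: each buffer half full on average."""
    return [fill * Ni for Ni in N]

def J(N, A=200.0, b=None, c=None):
    """J(N) = A*P(N) - sum_i b_i*N_i - sum_i c_i*nbar_i(N)."""
    k1 = len(N)
    b = b if b is not None else [1.0] * k1
    c = c if c is not None else [1.0] * k1
    buffer_cost = sum(bi * Ni for bi, Ni in zip(b, N))
    inventory_cost = sum(ci * ni for ci, ni in zip(c, nbar(N)))
    return A * P(N) - buffer_cost - inventory_cost

# Bigger buffers raise P(N) but also raise buffer and inventory costs.
print(J([5, 5, 5]), J([40, 40, 40]))
```

This captures why the problem has an interior optimum: revenue $AP(\mathbf{N})$ saturates while the buffer and inventory costs grow linearly.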
An example about the research goal

[Figure 2: $J(\mathbf{N})$ vs. $N_1$ and $N_2$, with the optimal boundary separating feasible allocations from infeasible ones.]
Two problems

Original constrained problem:

$$\max\; J(\mathbf{N}) = A P(\mathbf{N}) - \sum_{i=1}^{k-1} b_i N_i - \sum_{i=1}^{k-1} c_i \bar{n}_i(\mathbf{N})$$
$$\text{s.t.}\;\; P(\mathbf{N}) \ge \hat{P},\quad N_i \ge N_{\min},\; i = 1, \ldots, k-1.$$

Simpler unconstrained problem (Schor's problem):

$$\max\; J(\mathbf{N}) = A P(\mathbf{N}) - \sum_{i=1}^{k-1} b_i N_i - \sum_{i=1}^{k-1} c_i \bar{n}_i(\mathbf{N})$$
$$\text{s.t.}\;\; N_i \ge N_{\min},\; i = 1, \ldots, k-1.$$
An example for algorithm derivation

Data (three-machine, two-buffer line): $r_1 = 0.1$, $p_1 = 0.01$; $r_2 = 0.11$, $p_2 = 0.01$; $r_3 = 0.1$, $p_3 = 0.009$; required rate $\hat{P} = 0.88$.

Cost function:

$$J(\mathbf{N}) = 200 P(\mathbf{N}) - N_1 - N_2 - \bar{n}_1(\mathbf{N}) - \bar{n}_2(\mathbf{N})$$

[Figure 3: $J(\mathbf{N})$ vs. $N_1$ and $N_2$. Figure 4: the $(N_1, N_2)$ plane divided by the curve $P(N_1, N_2) = \hat{P}$ into regions with $P(N_1, N_2) > \hat{P}$ and $P(N_1, N_2) < \hat{P}$.]
An example for algorithm derivation

[Figure 5: $J(\mathbf{N})$ vs. $N_1$ and $N_2$.]
Algorithm derivation — Two cases

Case 1: The solution $\mathbf{N}^u$ of the unconstrained problem

$$\max\; J(\mathbf{N}) = A P(\mathbf{N}) - \sum_{i=1}^{k-1} b_i N_i - \sum_{i=1}^{k-1} c_i \bar{n}_i(\mathbf{N})$$
$$\text{s.t.}\;\; N_i \ge N_{\min},\; i = 1, \ldots, k-1,$$

satisfies $P(\mathbf{N}^u) \ge \hat{P}$. In this case, the solution of the constrained problem is the same as the solution of the unconstrained problem. We are done.

[Figure: contour plot showing $\mathbf{N}^u = (N_1^u, N_2^u)$ inside the region $P(N_1, N_2) > \hat{P}$.]
Algorithm derivation — Two cases (continued)

Case 2: $\mathbf{N}^u$ satisfies $P(\mathbf{N}^u) < \hat{P}$. This is not the solution of the constrained problem.

[Figure: contour plot showing $\mathbf{N}^u = (N_1^u, N_2^u)$ outside the region $P(N_1, N_2) > \hat{P}$.]
Algorithm derivation — Two cases (continued)

Case 2 (continued): In this case, we consider the following unconstrained problem:

$$\max\; J'(\mathbf{N}) = A' P(\mathbf{N}) - \sum_{i=1}^{k-1} b_i N_i - \sum_{i=1}^{k-1} c_i \bar{n}_i(\mathbf{N})$$
$$\text{s.t.}\;\; N_i \ge N_{\min},\; i = 1, \ldots, k-1,$$

in which $A$ is replaced by $A'$. Let $\mathbf{N}^*(A')$ be the solution to this problem and $P^*(A') = P(\mathbf{N}^*(A'))$.
Assertion

The constrained problem

$$\max\; J(\mathbf{N}) = A' P(\mathbf{N}) - \sum_{i=1}^{k-1} b_i N_i - \sum_{i=1}^{k-1} c_i \bar{n}_i(\mathbf{N})$$
$$\text{s.t.}\;\; P(\mathbf{N}) \ge \hat{P},\quad N_i \ge N_{\min},\; i = 1, \ldots, k-1,$$

has the same solution for all $A'$ for which the solution of the corresponding unconstrained problem

$$\max\; J(\mathbf{N}) = A' P(\mathbf{N}) - \sum_{i=1}^{k-1} b_i N_i - \sum_{i=1}^{k-1} c_i \bar{n}_i(\mathbf{N})$$
$$\text{s.t.}\;\; N_i \ge N_{\min},\; i = 1, \ldots, k-1,$$

satisfies $P^*(A') \le \hat{P}$.
Interpretation of the assertion

We claim: if the optimal solution of the unconstrained problem is not that of the constrained problem, then the solution of the constrained problem, $\mathbf{N}^* = (N_1^*, \ldots, N_{k-1}^*)$, satisfies

$$P(N_1^*, \ldots, N_{k-1}^*) = \hat{P}.$$

[Figure: cost and $P(\mathbf{N})$ along a search direction, with $P(\mathbf{N}^*(A')) < \hat{P}$ at the unconstrained optimum.]

We formally prove this by the Karush-Kuhn-Tucker (KKT) conditions of nonlinear programming.
Interpretation of the assertion

[Figure: $J(\mathbf{N})$ surfaces and optimal boundaries for a sequence of profit coefficients, from the original problem through intermediate values of $A'$ to the final $A' = 4535.82$; the constrained optimum lies on the boundary $P(\mathbf{N}) = \hat{P}$ throughout.]
Karush-Kuhn-Tucker (KKT) conditions

Let $x^*$ be a local minimum of the problem

$$\min\; f(x)$$
$$\text{s.t.}\;\; h_1(x) = 0, \ldots, h_m(x) = 0,$$
$$\quad\;\; g_1(x) \le 0, \ldots, g_r(x) \le 0,$$

where $f$, $h_i$, and $g_j$ are continuously differentiable functions from $\mathbb{R}^n$ to $\mathbb{R}$. Then there exist unique Lagrange multipliers $\lambda_1^*, \ldots, \lambda_m^*$ and $\mu_1^*, \ldots, \mu_r^*$ satisfying the following conditions:

$$\nabla_x L(x^*, \lambda^*, \mu^*) = 0,$$
$$\mu_j^* \ge 0,\quad j = 1, \ldots, r,$$
$$\mu_j^* g_j(x^*) = 0,\quad j = 1, \ldots, r,$$

where $L(x, \lambda, \mu) = f(x) + \sum_{i=1}^m \lambda_i h_i(x) + \sum_{j=1}^r \mu_j g_j(x)$ is called the Lagrangian function.
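The three conditions can be checked numerically on a toy problem. The sketch below verifies stationarity, dual feasibility, and complementary slackness for $\min x^2$ subject to $1 - x \le 0$, whose minimum is $x^* = 1$ with multiplier $\mu^* = 2$ (a hypothetical illustration, not the production-line problem itself).

```python
# Toy check of the KKT conditions:  min f(x) = x**2  s.t.  g(x) = 1 - x <= 0.
# The minimum is x* = 1; stationarity 2*x* + mu*(-1) = 0 gives mu* = 2.

def f_grad(x):
    return 2.0 * x            # gradient of f(x) = x^2

def g(x):
    return 1.0 - x            # inequality constraint g(x) <= 0

def g_grad(x):
    return -1.0               # gradient of g

x_star, mu_star = 1.0, 2.0

stationarity = f_grad(x_star) + mu_star * g_grad(x_star)
assert abs(stationarity) < 1e-12          # grad_x L(x*, mu*) = 0
assert mu_star >= 0.0                     # dual feasibility
assert abs(mu_star * g(x_star)) < 1e-12   # complementary slackness
print("KKT conditions hold at x* = 1 with mu* = 2")
```

Note that the constraint is active ($g(x^*) = 0$) and $\mu^* > 0$, the same pattern the proof below establishes for the production-rate constraint in Case 2.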
Convert the constrained problem to minimization form

Minimization form: the constrained problem becomes

$$\min\; \tilde{J}(\mathbf{N}) = -A P(\mathbf{N}) + \sum_{i=1}^{k-1} b_i N_i + \sum_{i=1}^{k-1} c_i \bar{n}_i(\mathbf{N})$$
$$\text{s.t.}\;\; \hat{P} - P(\mathbf{N}) \le 0,$$
$$\quad\;\; N_{\min} - N_i \le 0,\; i = 1, \ldots, k-1.$$

We have argued that we may treat the $N_i$ as continuous variables, and $P(\mathbf{N})$ and $J(\mathbf{N})$ as continuously differentiable functions.
Applying KKT conditions

The Slater constraint qualification for convex inequalities guarantees the existence of Lagrange multipliers for our problem. So, there exist unique Lagrange multipliers $\mu^*$ and $\mu_i^*$, $i = 1, \ldots, k-1$, for the constrained problem satisfying the KKT conditions:

$$-\nabla J(\mathbf{N}^*) + \mu^* \nabla\big(\hat{P} - P(\mathbf{N}^*)\big) + \sum_{i=1}^{k-1} \mu_i^* \nabla\big(N_{\min} - N_i^*\big) = 0 \tag{1}$$

or, equivalently,

$$\begin{pmatrix} \dfrac{\partial J(\mathbf{N}^*)}{\partial N_1} \\ \vdots \\ \dfrac{\partial J(\mathbf{N}^*)}{\partial N_{k-1}} \end{pmatrix} + \mu^* \begin{pmatrix} \dfrac{\partial P(\mathbf{N}^*)}{\partial N_1} \\ \vdots \\ \dfrac{\partial P(\mathbf{N}^*)}{\partial N_{k-1}} \end{pmatrix} + \begin{pmatrix} \mu_1^* \\ \vdots \\ \mu_{k-1}^* \end{pmatrix} = \begin{pmatrix} 0 \\ \vdots \\ 0 \end{pmatrix}, \tag{2}$$
Applying KKT conditions

$$\mu^* \ge 0,\quad \mu_i^* \ge 0,\; i = 1, \ldots, k-1, \tag{3}$$
$$\mu^* \big(\hat{P} - P(\mathbf{N}^*)\big) = 0, \tag{4}$$
$$\mu_i^* \big(N_{\min} - N_i^*\big) = 0,\; i = 1, \ldots, k-1, \tag{5}$$

where $\mathbf{N}^*$ is the optimal solution of the constrained problem.

Assume that $N_i^* > N_{\min}$ for all $i$. In this case, by equation (5), we know that

$$\mu_i^* = 0,\quad i = 1, \ldots, k-1.$$
Applying KKT conditions

The KKT conditions are simplified to

$$\begin{pmatrix} \dfrac{\partial J(\mathbf{N}^*)}{\partial N_1} \\ \vdots \\ \dfrac{\partial J(\mathbf{N}^*)}{\partial N_{k-1}} \end{pmatrix} + \mu^* \begin{pmatrix} \dfrac{\partial P(\mathbf{N}^*)}{\partial N_1} \\ \vdots \\ \dfrac{\partial P(\mathbf{N}^*)}{\partial N_{k-1}} \end{pmatrix} = \begin{pmatrix} 0 \\ \vdots \\ 0 \end{pmatrix}, \tag{6}$$

$$\mu^* \big(\hat{P} - P(\mathbf{N}^*)\big) = 0, \tag{7}$$

where $\mu^* \ge 0$. Since $\mathbf{N}^*$ is not the optimal solution of the unconstrained problem, $\nabla J(\mathbf{N}^*) \ne 0$. Thus $\mu^* \ne 0$, since otherwise condition (6) would be violated. By condition (7), the optimal solution satisfies $P(\mathbf{N}^*) = \hat{P}$.
Applying KKT conditions

In addition, conditions (6) and (7) reveal how we can find $\mu^*$ and $\mathbf{N}^*$. For every $\mu \ge 0$, condition (6) determines $\mathbf{N}$, since there are $k-1$ equations and $k-1$ unknowns. Therefore, we can think of $\mathbf{N} = \mathbf{N}(\mu)$. We search for a value of $\mu^*$ such that $P(\mathbf{N}(\mu^*)) = \hat{P}$. As we indicate in the following, this is exactly what the algorithm does.
Applying KKT conditions

Replacing $\mu^*$ by $\mu > 0$ in condition (6) gives

$$\begin{pmatrix} \dfrac{\partial J(\mathbf{N}^c)}{\partial N_1} \\ \vdots \\ \dfrac{\partial J(\mathbf{N}^c)}{\partial N_{k-1}} \end{pmatrix} + \mu \begin{pmatrix} \dfrac{\partial P(\mathbf{N}^c)}{\partial N_1} \\ \vdots \\ \dfrac{\partial P(\mathbf{N}^c)}{\partial N_{k-1}} \end{pmatrix} = \begin{pmatrix} 0 \\ \vdots \\ 0 \end{pmatrix}, \tag{8}$$

where $\mathbf{N}^c$ is the unique solution of (8). Note that $\mathbf{N}^c$ is the solution of the following optimization problem:

$$\min\; \tilde{J}(\mathbf{N}) = -J(\mathbf{N}) + \mu\big(\hat{P} - P(\mathbf{N})\big)$$
$$\text{s.t.}\;\; N_{\min} - N_i \le 0,\; i = 1, \ldots, k-1. \tag{9}$$
Applying KKT conditions

The problem above is equivalent to

$$\max\; J'(\mathbf{N}) = J(\mathbf{N}) - \mu\big(\hat{P} - P(\mathbf{N})\big)\quad \text{s.t.}\;\; N_i \ge N_{\min},\; i = 1, \ldots, k-1, \tag{10}$$

or

$$\max\; J'(\mathbf{N}) = A P(\mathbf{N}) - \sum_{i=1}^{k-1} b_i N_i - \sum_{i=1}^{k-1} c_i \bar{n}_i(\mathbf{N}) - \mu\big(\hat{P} - P(\mathbf{N})\big)\quad \text{s.t.}\;\; N_i \ge N_{\min}, \tag{11}$$

or, dropping the constant $\mu\hat{P}$, which does not affect the maximizer,

$$\max\; J'(\mathbf{N}) = (A + \mu) P(\mathbf{N}) - \sum_{i=1}^{k-1} b_i N_i - \sum_{i=1}^{k-1} c_i \bar{n}_i(\mathbf{N})\quad \text{s.t.}\;\; N_i \ge N_{\min}. \tag{12}$$
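The step from (11) to (12) can be sanity-checked numerically: the penalized objective and the objective with coefficient $A + \mu$ differ only by the constant $\mu\hat{P}$, so they share the same maximizer. A sketch with a hypothetical one-dimensional throughput and cost model (stand-ins, not the real line):

```python
# Grid check that J(n) - mu*(P_hat - P(n)) and (A + mu)*P(n) - cost(n)
# have the same maximizer: the two objectives differ by the constant
# mu*P_hat.  P() and cost() are hypothetical one-dimensional stand-ins.

A, mu, P_hat = 200.0, 50.0, 0.85

def P(n):
    return 0.9 * n / (n + 5.0)       # hypothetical throughput model

def cost(n):
    return 2.0 * n                   # stand-in for buffer + inventory cost

def J_penalized(n):                  # form of problem (11)
    return A * P(n) - cost(n) - mu * (P_hat - P(n))

def J_A_plus_mu(n):                  # form of problem (12), A' = A + mu
    return (A + mu) * P(n) - cost(n)

grid = range(1, 200)
n_pen = max(grid, key=J_penalized)
n_apm = max(grid, key=J_A_plus_mu)
assert n_pen == n_apm                # same maximizer
assert abs(J_A_plus_mu(n_pen) - J_penalized(n_pen) - mu * P_hat) < 1e-9
print("common maximizer:", n_pen)
```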
Applying KKT conditions

or, finally,

$$\max\; J'(\mathbf{N}) = A' P(\mathbf{N}) - \sum_{i=1}^{k-1} b_i N_i - \sum_{i=1}^{k-1} c_i \bar{n}_i(\mathbf{N})\quad \text{s.t.}\;\; N_i \ge N_{\min},\; i = 1, \ldots, k-1, \tag{13}$$

where $A' = A + \mu$. This is exactly the unconstrained problem, and $\mathbf{N}^c$ is its optimal solution. Note that $\mu > 0$ implies $A' > A$. In addition, the KKT conditions indicate that the optimal solution $\mathbf{N}^*$ of the constrained problem satisfies $P(\mathbf{N}^*) = \hat{P}$.

This means that, for every $A' > A$ (or $\mu > 0$), we can find the corresponding solution $\mathbf{N}^c$ satisfying condition (8) by solving problem (13). We need to find the $A'$ such that the solution of problem (13), denoted $\mathbf{N}^*(A')$, satisfies $P(\mathbf{N}^*(A')) = \hat{P}$.
Applying KKT conditions

Then $\mu^* = A' - A$ and $\mathbf{N}^*(A')$ satisfy conditions (6) and (7):

$$\nabla J\big(\mathbf{N}^*(A')\big) + \mu^* \nabla P\big(\mathbf{N}^*(A')\big) = 0,$$
$$\mu^* \big(\hat{P} - P(\mathbf{N}^*(A'))\big) = 0.$$

Hence, $\mu^* = A' - A$ is exactly the Lagrange multiplier satisfying the KKT conditions of the constrained problem, and $\mathbf{N}^* = \mathbf{N}^*(A')$ is the optimal solution of the constrained problem.

Consequently, solving the constrained problem through our algorithm is essentially finding the unique Lagrange multipliers and optimal solution of the problem.
Algorithm summary for case 2

Solve unconstrained problem: Solve, by a gradient method, the unconstrained problem for fixed $A'$:

$$\max\; J'(\mathbf{N}) = A' P(\mathbf{N}) - \sum_{i=1}^{k-1} b_i N_i - \sum_{i=1}^{k-1} c_i \bar{n}_i(\mathbf{N})$$
$$\text{s.t.}\;\; N_i \ge N_{\min},\; i = 1, \ldots, k-1.$$

Search: Do a one-dimensional search on $A' > A$ to find the $A'$ such that the solution of the unconstrained problem, $\mathbf{N}^*(A')$, satisfies $P(\mathbf{N}^*(A')) = \hat{P}$.

[Flowchart: choose $A'$; solve the unconstrained problem; if $P(\mathbf{N}^*(A')) = \hat{P}$, quit; otherwise choose a new $A'$.]
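The two steps above can be sketched as a bisection on $A'$. The one-buffer model below is a hypothetical stand-in in which the unconstrained subproblem has a closed-form solution; the lecture's algorithm solves that step with a gradient method on the real line model instead.

```python
# Sketch of the Case-2 search as a bisection on A' > A.  P() and
# solve_unconstrained() below are a hypothetical one-buffer stand-in,
# chosen so the inner maximization has a closed form.

P_INF, BETA, B_COST = 0.9, 5.0, 2.0

def P(n):
    return P_INF * n / (n + BETA)    # hypothetical throughput model

def solve_unconstrained(A_prime):
    # argmax_n of A'*P(n) - B_COST*n; setting the derivative to zero:
    # A'*P_INF*BETA/(n+BETA)**2 = B_COST.
    n = (A_prime * P_INF * BETA / B_COST) ** 0.5 - BETA
    return max(n, 1.0)

def find_A_prime(A, P_hat, hi=1.0e6, tol=1e-9):
    # P(solve_unconstrained(A')) increases with A', so bisection works.
    lo = A
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if P(solve_unconstrained(mid)) < P_hat:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

A, P_hat = 100.0, 0.85
assert P(solve_unconstrained(A)) < P_hat   # Case 2: target not met at A

A_star = find_A_prime(A, P_hat)
n_star = solve_unconstrained(A_star)
assert A_star > A                          # mu* = A' - A > 0
assert abs(P(n_star) - P_hat) < 1e-6       # production-rate target met
print("A' =", A_star, " N*(A') =", n_star)
```

Bisection is one reasonable choice for the one-dimensional search because $P(\mathbf{N}^*(A'))$ is monotone in $A'$ in this sketch.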
Numerical results

Numerical experiment outline:
- Experiments on short lines.
- Experiments on long lines.
- Computation speed.

Method we use to check the algorithm: $\hat{P}$ surface search in $(N_1, \ldots, N_{k-1})$ space. All buffer size allocations $\mathbf{N}$ such that $P(\mathbf{N}) = \hat{P}$ compose the $\hat{P}$ surface.
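A brute-force version of the $\hat{P}$-surface check can be sketched as follows: scan all allocations whose production rate lies within a tolerance of $\hat{P}$ and keep the most profitable one. The throughput model `P()` is a hypothetical stand-in for the real two-buffer line evaluator.

```python
# Brute-force P_hat-surface check: scan buffer allocations whose rate is
# within TOL of P_hat and keep the most profitable.  P() is a hypothetical
# stand-in for the real line evaluator; b_i = c_i = 1 as in the experiments.
from itertools import product

A, P_hat, TOL = 500.0, 0.85, 0.005

def P(N):
    total = sum(N)                       # hypothetical throughput model
    return 0.9 * total / (total + 10.0)

def J(N):
    # Profit with b_i = c_i = 1 and (hypothetically) half-full buffers,
    # so buffer + inventory cost = sum(N) + 0.5*sum(N).
    return A * P(N) - 1.5 * sum(N)

best = None
for N in product(range(1, 120), repeat=2):
    if abs(P(N) - P_hat) <= TOL:         # N lies (nearly) on the P_hat surface
        if best is None or J(N) > J(best):
            best = N

print("best allocation on the surface:", best, "profit:", J(best))
```

The real check works the same way but evaluates $P(\mathbf{N})$ and $\bar{n}_i(\mathbf{N})$ with the line model, which is why it is far slower than the algorithm it validates.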
$\hat{P}$ surface search

[Figure 6: $\hat{P}$ surface search — the $\hat{P}$ surface and the optimal solution.]
Experiment on short lines (4-buffer line)

Line parameters: $\hat{P} = 0.88$

  machine   M1      M2     M3     M4     M5
  r         0.11    0.12   0.1    0.09   0.1
  p         0.008   0.01   0.01   0.01   0.01

Machine 4 is the least reliable machine (the bottleneck) of the line.

Cost function:

$$J(\mathbf{N}) = 2500 P(\mathbf{N}) - \sum_{i=1}^{4} N_i - \sum_{i=1}^{4} \bar{n}_i(\mathbf{N})$$
Experiment on short lines (4-buffer line)

Results — optimal solutions:

             P̂ Surface Search   The algorithm   Error    Rounded
  Prod. rate   0.88              0.88                     0.88
  N_1          28.85             28.857          0.02%    29
  N_2          58.46             58.5694         0.19%    59
  N_3          92.98             92.968          0.08%    93
  N_4          87.39             87.4415         0.06%    87
  n̄_1          19.682            19.726          0.2%     19.1791
  n̄_2          34.384            34.3835         0.23%    34.7289
  n̄_3          48.72             48.6981         0.04%    48.9123
  n̄_4          31.9894           32.63           0.05%    31.9485
  Profit ($)   1798.2            1798.1          0.006%   1797.4

The maximal error is 0.23% and appears in n̄_2. Computer time for this experiment is 2.69 seconds.
Experiment on long lines (11-buffer line)

Line parameters: $\hat{P} = 0.88$

  machine   M1      M2     M3     M4     M5     M6
  r         0.11    0.12   0.1    0.09   0.1    0.11
  p         0.008   0.01   0.01   0.01   0.01   0.01

  machine   M7      M8     M9     M10    M11    M12
  r         0.1     0.11   0.12   0.1    0.12   0.09
  p         0.009   0.01   0.009  0.008  0.01   0.009

Cost function:

$$J(\mathbf{N}) = 6000 P(\mathbf{N}) - \sum_{i=1}^{11} N_i - \sum_{i=1}^{11} \bar{n}_i(\mathbf{N})$$
Experiment on long lines (11-buffer line)

Results — optimal buffer sizes:

             P̂ Surface Search   The algorithm   Error    Rounded
  Prod. rate   0.88              0.88                     0.8799
  N_1          29.1              29.1769         0.26%    29
  N_2          59.2              59.283          0.14%    59
  N_3          97.8              97.798          0.002%   98
  N_4          107.5             107.4176        0.08%    107
  N_5          84.5              84.484          0.02%    84
  N_6          70.8              70.6892         0.17%    71
  N_7          63.1              63.1893         0.14%    63
  N_8          53.1              52.9274         0.33%    53
  N_9          47.2              47.2232         0.05%    47
  N_10         47.9              47.7967         0.22%    48
  N_11         48.8              48.7716         0.06%    49
Experiment on long lines (11-buffer line)

Results (continued) — optimal average inventories:

             P̂ Surface Search   The algorithm   Error    Rounded
  n̄_1          19.2388           19.2986         0.31%    19.1979
  n̄_2          34.9561           35.0423         0.25%    34.8194
  n̄_3          52.5423           52.632          0.12%    52.6833
  n̄_4          45.1528           45.184          0.07%    45.0835
  n̄_5          34.4289           34.477          0.14%    34.279
  n̄_6          30.773            30.748          0.1%     30.8229
  n̄_7          28.0446           28.1299         0.3%     28.092
  n̄_8          21.5666           21.5438         0.11%    21.5932
  n̄_9          21.5059           21.5442         0.18%    21.4299
  n̄_10         22.6756           22.6496         0.11%    22.733
  n̄_11         20.8692           20.8615         0.04%    20.9613
  Profit ($)   4239.3            4239.2          0.002%   4239.5

Computer time is 91.47 seconds.
Experiments for the Tolio, Matta, and Gershwin (2002) model

Consider a 4-machine, 3-buffer line with $\hat{P} = 0.87$. In addition, $A = 2000$ and all $b_i$ and $c_i$ are 1.

  machine   M1    M2    M3    M4
  r_i1      .1    .12   .1    .2
  p_i1      .1    .8    .1    .7
  r_i2            .2          .16
  p_i2            .5          .4

             P̂ Surf. Search   The algorithm   Error
  P(N*)        0.8699           0.8699
  N_1          29.86            29.993          0.45%
  N_2          38.22            38.26           0.52%
  N_3          20.68            20.7616         0.39%
  n̄_1          17.2779          17.3674         0.52%
  n̄_2          17.262           17.1792         0.47%
  n̄_3          6.1996           6.2121          0.2%
  Profit ($)   1610.3           1610.3          0.00%
Experiments for the Levantesi, Matta, and Tolio (2003) model

Consider a 4-machine, 3-buffer line with $\hat{P} = 0.87$. In addition, $A = 2000$ and all $b_i$ and $c_i$ are 1.

  machine   M1    M2    M3    M4
  μ_i       1.0   1.2   1.0   1.0
  r_i1      .1    .12   .1    .2
  p_i1      .1    .8    .1    .12
  r_i2            .2          .16
  p_i2            .5          .6

             P̂ Surf. Search   The algorithm   Error
  P(N*)        0.8699           0.87
  N_1          27.72            27.942          0.66%
  N_2          38.79            38.9281         0.34%
  N_3          34.07            34.1574         0.26%
  n̄_1          15.4288          15.5313         0.66%
  n̄_2          19.8787          19.9711         0.46%
  n̄_3          13.8937          13.9426         0.35%
  Profit ($)   1590.0           1589.7          0.02%
Computation speed

Experiment: Run the algorithm on a series of lines with identical machines to see how fast the algorithm can optimize longer lines.
- The length of the line varies from 4 machines to 30 machines.
- Machine parameters are p = 0.01 and r = 0.1.
- In all cases, the required production rate is $\hat{P} = 0.88$.
- The objective function is

$$J(\mathbf{N}) = A P(\mathbf{N}) - \sum_{i=1}^{k-1} N_i - \sum_{i=1}^{k-1} \bar{n}_i(\mathbf{N}),$$

where $A = 500k$ for the line of length $k$.
Computation speed

[Figure: computer time (seconds) vs. production line length, for lines of 4 to 30 machines, with reference marks at roughly 1 minute, 10 minutes, and 30 minutes.]
Algorithm reliability

We ran the algorithm on 739 randomly generated 4-machine, 3-buffer lines. 98.92% of these experiments have a maximal error less than 0.6%.

[Figure: histogram of the maximal error over the 739 experiments.]
Algorithm reliability

Taking a closer look at those 98.92% of the experiments, we find a more detailed distribution of the maximal error: out of the total 739 experiments, 83.9% have a maximal error less than 0.2%.

[Figure: zoomed histogram of the maximal error.]
MIT OpenCourseWare
http://ocw.mit.edu

2.852 Manufacturing Systems Analysis
Spring 2010

For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.