Properties of a Simple Variant of Klee-Minty's LP and Their Proof


Tomonari Kitahara (Graduate School of Decision Science and Technology, Tokyo Institute of Technology, 2-12-1-W9-62, Ookayama, Meguro-ku, Tokyo, Japan; kitahara.t.ab@m.titech.ac.jp)

Shinji Mizuno (Graduate School of Decision Science and Technology, Tokyo Institute of Technology, 2-12-1-W9-58, Ookayama, Meguro-ku, Tokyo, Japan; mizuno.s.ab@m.titech.ac.jp)

December 28, 2010

Abstract

Kitahara and Mizuno (2010) present a simple variant of Klee-Minty's LP which has some nice properties. We explain and prove the properties in this paper.

Keywords: Simplex method, Linear programming, Basic feasible solutions, Klee-Minty's LP

1 Introduction

Kitahara and Mizuno [2] extend Ye's analysis [5] of the Markov decision problem to general LPs (linear programming problems), and they obtain two upper bounds for the number of different BFSs (basic feasible solutions) generated by Dantzig's simplex method [1] for the standard form of LP

  min c^T x, subject to Ax = b, x >= 0,   (1)

where the number of equality constraints is m, the number of variables is n, A ∈ R^{m×n}, b ∈ R^m, c ∈ R^n, and x ∈ R^n. The upper bounds they obtain are

  ⌈ m (γ/δ) log( (c^T x^0 - z^*) / (c^T x̄ - z^*) ) ⌉   (2)

and

  n ⌈ m (γ/δ) log( m γ/δ ) ⌉,   (3)

where x^0 is an initial BFS, x̄ is a second optimal BFS, z^* is the optimal value, δ and γ are the minimum and the maximum values of all the positive elements of primal BFSs, respectively, and ⌈a⌉ denotes the smallest integer bigger than a ∈ R. The size of the bounds highly depends on the ratio γ/δ. A question one may have is how tight the bounds are.

Kitahara and Mizuno [3] partially answer this question. They present a simple variant of Klee-Minty's LP [4] and show that the number of iterations of Dantzig's simplex method for solving the variant is exactly γ/δ. So it is impossible to obtain an upper bound smaller than γ/δ. In this paper, we explain and prove the properties of the variant given in [3] more precisely.

2 A simple variant of Klee-Minty's LP

Klee-Minty's LP [4] is

  max  Σ_{i=1}^{m} 10^{m-i} x_i,
  subject to  x_1 <= 1,
              2 Σ_{i=1}^{k-1} 10^{k-i} x_i + x_k <= 100^{k-1}  for k = 2, 3, …, m,
              x >= 0,

where x = (x_1, x_2, …, x_m)^T is a variable vector. Kitahara and Mizuno [3] present a simple variant of Klee-Minty's LP, which is represented as

  max  Σ_{i=1}^{m} x_i,
  subject to  x_1 <= 1,
              2 Σ_{i=1}^{k-1} x_i + x_k <= 2^k - 1  for k = 2, 3, …, m,
              x >= 0.

The standard form using a vector y = (y_1, y_2, …, y_m)^T of slack variables is written as

  max  Σ_{i=1}^{m} x_i,
  subject to  x_1 + y_1 = 1,
              2 Σ_{i=1}^{k-1} x_i + x_k + y_k = 2^k - 1  for k = 2, 3, …, m,   (4)
              x >= 0,  y >= 0.

The number of equality constraints of this problem is m and the number of variables is n = 2m.
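For concreteness, the data of the standard form (4) are easy to generate and check numerically for small m. The sketch below is an illustration only, not part of the paper; it assumes NumPy and SciPy are available, builds c, A, b with the variable order (x_1, …, x_m, y_1, …, y_m), and asks a generic LP solver for the optimal value, which equals 2^m - 1 (the value established in Lemma 2.1 below).

    # A minimal sketch (not from the paper): build the standard form (4) for a
    # given m and solve it with a generic LP solver.  Variable order is
    # (x_1, ..., x_m, y_1, ..., y_m); default bounds in linprog are v >= 0.
    import numpy as np
    from scipy.optimize import linprog

    def build_variant(m):
        """Return (c, A, b) of problem (4): max c^T v s.t. A v = b, v >= 0."""
        n = 2 * m
        A = np.zeros((m, n))
        b = np.zeros(m)
        c = np.zeros(n)
        c[:m] = 1.0                      # objective: sum of the x_i
        for k in range(1, m + 1):        # k-th equality constraint of (4)
            A[k - 1, :k - 1] = 2.0       # 2 x_1 + ... + 2 x_{k-1}
            A[k - 1, k - 1] = 1.0        # + x_k
            A[k - 1, m + k - 1] = 1.0    # + y_k (slack)
            b[k - 1] = 2 ** k - 1
        return c, A, b

    if __name__ == "__main__":
        for m in range(1, 6):
            c, A, b = build_variant(m)
            res = linprog(-c, A_eq=A, b_eq=b, method="highs")  # maximize c^T v
            print(m, -res.fun)           # prints 2^m - 1 for each m

With m = 3, for instance, the solver reports 7 = 2^3 - 1.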

Lemma 2.1 The variant (4) has the following properties.

1. For each k ∈ {1, 2, …, m}, at any BFS (basic feasible solution), exactly one of x_k and y_k is a basic variable,
2. the problem has 2^m BFSs,
3. each component of any BFS is an integer,
4. the problem is nondegenerate,
5. the optimal BFS is

     x^* = (0, 0, …, 0, 2^m - 1)^T,  y^* = (1, 2^2 - 1, …, 2^{m-1} - 1, 0)^T,

   and the optimal value is 2^m - 1,
6. we have that δ = 1 and γ = 2^m - 1, where δ and γ are the minimum and the maximum values of all the positive elements of primal BFSs, respectively.

Proof. At first, we easily get from the equality constraints of (4) that

  0 <= x_k <= 2^k - 1  and  0 <= y_k <= 2^k - 1   (5)

for any feasible solution (x, y) and any k ∈ {1, 2, …, m}. Next we will show that

  x_k + y_k >= 1.   (6)

From the first equality constraint in (4), we have (6) for k = 1. When k >= 2, we have from the (k-1)-th and k-th constraints that

  2 Σ_{i=1}^{k-2} x_i + x_{k-1} + y_{k-1} = 2^{k-1} - 1,
  2 Σ_{i=1}^{k-1} x_i + x_k + y_k = 2^k - 1.

Subtracting the first equality from the second equality, we get

  x_k + y_k + x_{k-1} - y_{k-1} = 2^{k-1}.   (7)

Hence we obtain (6) from (5), (7), and y_{k-1} >= 0.

Let (x, y) be any BFS of (4). From (6), either x_k or y_k must be a basic variable for each k ∈ {1, 2, …, m}. Since the number of basic variables of any BFS is m, exactly one of x_k and y_k is a basic variable.
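This pairing also makes the remaining properties easy to check by brute force for small m: choose one variable from each pair {x_k, y_k}, set the other to zero, and compute the basic values from x_1 + y_1 = 1 and the recurrence (7). The following enumeration sketch is an illustration only, not part of the paper (plain Python; integrality, property 3, is automatic because only integer arithmetic is used).

    # Brute-force check of Lemma 2.1 for small m (illustration only).
    # choice[k-1] says whether x_k or y_k is basic; by (7) the basic value is
    # 1 for k = 1 and 2^(k-1) - x_{k-1} + y_{k-1} for k >= 2.
    from itertools import product

    def enumerate_bfs(m):
        solutions = []
        for choice in product("xy", repeat=m):
            x, y = [0] * m, [0] * m
            for k in range(1, m + 1):
                val = 1 if k == 1 else 2 ** (k - 1) - x[k - 2] + y[k - 2]
                (x if choice[k - 1] == "x" else y)[k - 1] = val
            solutions.append((tuple(x), tuple(y)))
        return solutions

    if __name__ == "__main__":
        m = 4
        sols = enumerate_bfs(m)
        for x, y in sols:                       # every basic solution is feasible
            for k in range(1, m + 1):
                assert 2 * sum(x[:k - 1]) + x[k - 1] + y[k - 1] == 2 ** k - 1
            assert all(v >= 0 for v in x + y)
        assert len(set(sols)) == 2 ** m                              # property 2
        assert all(sum(v > 0 for v in x + y) == m for x, y in sols)  # property 4
        positives = [v for x, y in sols for v in x + y if v > 0]
        assert min(positives) == 1 and max(positives) == 2 ** m - 1  # property 6
        assert max(sum(x) for x, _ in sols) == 2 ** m - 1            # value in property 5
        print("Lemma 2.1 checked for m =", m)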

If we choose one of x_k and y_k as a basic variable for each k ∈ {1, 2, …, m}, a basic solution (x̄, ȳ) is determined. The value of x̄_1 or ȳ_1 is 1 from the equality constraint x_1 + y_1 = 1, and the values of x̄_k or ȳ_k for k = 2, 3, …, m are easily computed from (7) in this order. Then the value of each basic variable is a positive integer from (6) and (7). So the basic solution (x̄, ȳ) is feasible and nondegenerate. The number of such basic feasible solutions is obviously 2^m.

We get from the last equality constraint of (4) that

  Σ_{i=1}^{m} x_i <= 2 Σ_{i=1}^{m-1} x_i + x_m + y_m = 2^m - 1.

Hence the objective function value is at most 2^m - 1. When y_i (i ∈ {1, 2, …, m-1}) and x_m are basic variables, the BFS is computed as in property 5 of the lemma. Since its objective function value is 2^m - 1, it is an optimal solution. The last property 6 in the lemma is obvious from the discussion above.

3 Dantzig's simplex method for the variant

Suppose that the initial BFS is x^0 = 0, y^0 = (1, 2^2 - 1, …, 2^m - 1)^T and that we generate a sequence of BFSs by Dantzig's simplex method [1], in which a nonbasic variable whose reduced cost is maximum (in the case of a maximization problem) is chosen as the entering variable; if two or more nonbasic variables have the maximum reduced cost, then the variable (either x_i or y_i) of minimum index among them is chosen as the entering variable.

At first, we explain the case of m = 1. Since x_1 = 0, y_1 = 1 at the initial BFS, y_1 is the basic variable and the dictionary for the initial BFS is

  z   = x_1,
  y_1 = 1 - x_1,   (8)

where z represents the objective function. Then Dantzig's simplex method updates the dictionaries as follows: x_1 is chosen as the entering variable from (8), and the second dictionary is calculated as

  z   = 1 - y_1,
  x_1 = 1 - y_1.

This dictionary is optimal. We get an optimal solution x_1 = 1, y_1 = 0 and an optimal value 1.

Then we explain the case of m = 2. Since x^0 = 0, y^0 = (1, 3)^T at the initial BFS, y_1 and y_2 are the basic variables and the dictionary for the initial BFS is

  z   = x_1 + x_2,
  y_1 = 1 - x_1,   (9)
  y_2 = 3 - 2x_1 - x_2,

where z represents the objective function. Then Dantzig's simplex method updates the dictionary as follows: x_1 is chosen as the entering variable from (9), and the second dictionary is calculated as

  z   = 1 - y_1 + x_2,
  x_1 = 1 - y_1,
  y_2 = 1 + 2y_1 - x_2.

The third dictionary is

  z   = 2 + y_1 - y_2,
  x_1 = 1 - y_1,
  x_2 = 1 + 2y_1 - y_2.

The 4th dictionary is

  z   = 3 - x_1 - y_2,
  y_1 = 1 - x_1,
  x_2 = 3 - 2x_1 - y_2.   (10)

This dictionary is optimal. We get an optimal solution (x_1, x_2) = (0, 3), (y_1, y_2) = (1, 0) and an optimal value 2^2 - 1 = 3. The x-parts of the BFSs generated are

  v^0 = (0, 0)^T,  v^1 = (1, 0)^T,  v^2 = (1, 1)^T,  v^3 = (0, 3)^T,

which are shown in Figure 1. We observe that

- the number of iterations is (2^2 - 1),
- any reduced cost of every dictionary is 1 or -1,
- the objective function value increases by 1 at each iteration.
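The same behavior can be reproduced mechanically. The sketch below is an illustration only, not part of the paper: a dictionary-form implementation of Dantzig's rule with the minimum-index tie-break described above, applied to problem (4), which checks the three observations above for small m using exact arithmetic.

    # Dictionary-form simplex with Dantzig's rule and the minimum-index
    # tie-break, applied to problem (4).  Illustration only; plain Python.
    from fractions import Fraction as F

    def initial_dictionary(m):
        """Initial dictionary of (4): y_1,...,y_m basic, x_1,...,x_m nonbasic."""
        rows, obj = {}, {"const": F(0)}
        for k in range(1, m + 1):
            row = {"const": F(2 ** k - 1), f"x{k}": F(-1)}
            for i in range(1, k):
                row[f"x{i}"] = F(-2)
            rows[f"y{k}"] = row
            obj[f"x{k}"] = F(1)
        return rows, obj

    def dantzig_step(rows, obj):
        """One pivot; return False if the current dictionary is optimal."""
        cand = [(v, c) for v, c in obj.items() if v != "const" and c > 0]
        if not cand:
            return False
        best = max(c for _, c in cand)
        enter = min((v for v, c in cand if c == best), key=lambda v: int(v[1:]))
        # ratio test over rows where the entering variable has a negative coefficient
        leave = min((b for b in rows if rows[b].get(enter, F(0)) < 0),
                    key=lambda b: rows[b]["const"] / -rows[b][enter])
        row = rows.pop(leave)
        a = row.pop(enter)
        expr = {"const": -row.pop("const") / a, leave: F(1) / a}
        expr.update({v: -c / a for v, c in row.items()})
        for target in list(rows.values()) + [obj]:     # substitute the entering variable
            if enter in target:
                ce = target.pop(enter)
                for v, c in expr.items():
                    target[v] = target.get(v, F(0)) + ce * c
                for v in [v for v, c in target.items() if v != "const" and c == 0]:
                    del target[v]
        rows[enter] = expr
        return True

    if __name__ == "__main__":
        for m in range(1, 7):
            rows, obj = initial_dictionary(m)
            values = [obj["const"]]
            while True:
                costs = [c for v, c in obj.items() if v != "const"]
                assert all(c in (F(1), F(-1)) for c in costs)    # unit reduced costs
                if not dantzig_step(rows, obj):
                    break
                values.append(obj["const"])
            assert len(values) - 1 == 2 ** m - 1                 # 2^m - 1 iterations
            assert all(b - a == 1 for a, b in zip(values, values[1:]))
            print(f"m={m}: {len(values) - 1} iterations, optimal value {values[-1]}")

For m = 3, for example, it reports 7 iterations and optimal value 7, matching the dictionaries listed in Section 5.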

Figure 1: Vertices generated by Dantzig's simplex method when m = 2

Now we consider the following variant of Klee-Minty's LP

  max  (2^m - 1) - Σ_{i=1}^{m} x_i,
  subject to  x_1 <= 1,
              2 Σ_{i=1}^{k-1} x_i + x_k <= 2^k - 1  for k = 2, 3, …, m,
              x >= 0.

The standard form using a vector y = (y_1, y_2, …, y_m)^T of slack variables is written as

  max  (2^m - 1) - Σ_{i=1}^{m} x_i,
  subject to  x_1 + y_1 = 1,
              2 Σ_{i=1}^{k-1} x_i + x_k + y_k = 2^k - 1  for k = 2, 3, …, m,   (11)
              x >= 0,  y >= 0.

It is easy to see that this problem (11) also has the properties in Lemma 2.1 except for the optimal solution. The optimal solution of (11) is x^* = 0, y^* = (1, 2^2 - 1, …, 2^m - 1)^T, and the optimal value is (2^m - 1). Suppose that the initial BFS is x^0 = (0, 0, …, 0, 2^m - 1)^T, y^0 = (1, 2^2 - 1, …, 2^{m-1} - 1, 0)^T and that we generate a sequence of BFSs by Dantzig's simplex method for the problem (11). When m = 1, Dantzig's simplex method finds the optimal solution in one iteration just as in the case of the problem (4). Next

we explain the case of m = 2. Since x^0 = (0, 3)^T, y^0 = (1, 0)^T at the initial BFS, y_1 and x_2 are the basic variables and the dictionary for the initial BFS is

  w   = x_1 + y_2,
  y_1 = 1 - x_1,   (12)
  x_2 = 3 - 2x_1 - y_2,

where w represents the objective function. Then Dantzig's simplex method updates the dictionary as follows: x_1 is chosen as the entering variable from (12), and the second dictionary is calculated as

  w   = 1 - y_1 + y_2,
  x_1 = 1 - y_1,
  x_2 = 1 + 2y_1 - y_2.

The third dictionary is

  w   = 2 + y_1 - x_2,
  x_1 = 1 - y_1,
  y_2 = 1 + 2y_1 - x_2.

The 4th dictionary is

  w   = 3 - x_1 - x_2,
  y_1 = 1 - x_1,
  y_2 = 3 - 2x_1 - x_2.   (13)

This dictionary is optimal. We get an optimal solution (x_1, x_2) = (0, 0), (y_1, y_2) = (1, 3) and an optimal value 3. The x-parts of the BFSs generated are v^3, v^2, v^1, v^0, which are just in the opposite order of the case of the problem (4). We observe that

- the number of iterations is (2^2 - 1),
- any reduced cost of every dictionary is 1 or -1,
- the objective function value increases by 1 at each iteration.

Theorem 3.1 When we generate a sequence of BFSs by Dantzig's simplex method for the problem (4) from the initial BFS x^0 = 0, y^0 = (1, 2^2 - 1, …, 2^m - 1)^T, then

- the number of iterations is (2^m - 1),
- any reduced cost of every dictionary is 1 or -1,

- the objective function value increases by 1 at each iteration.

When we generate a sequence of BFSs by Dantzig's simplex method for the problem (11) from the initial BFS x^0 = (0, 0, …, 0, 2^m - 1)^T, y^0 = (1, 2^2 - 1, …, 2^{m-1} - 1, 0)^T, then

- the number of iterations is (2^m - 1),
- any reduced cost of every dictionary is 1 or -1,
- the objective function value increases by 1 at each iteration.

We can prove these results by induction. We get the results for m = 1, 2 from the discussion above. Then we will show that the results are also true for m = 3. For ease of understanding the discussion, we show all the iterations of Dantzig's simplex method for solving the problem (4), when m = 3 and m = 4, in Section 5.

When m = 3, the first dictionary is

  z   = x_1 + x_2 + x_3,
  y_1 = 1 - x_1,
  y_2 = 3 - 2x_1 - x_2,
  y_3 = 7 - 2x_1 - 2x_2 - x_3.   (14)

Note that the nonbasic variable x_3 does not appear except in the objective function and the last constraint of (14), and that the reduced cost of x_3 is 1. So the variable x_3 is not chosen as an entering variable until the reduced costs of the other nonbasic variables become less than 1. While the variable x_3 is nonbasic, the dictionaries except for the last constraint are updated just as in the case of m = 2. Since any reduced cost for m = 2 is 1 or -1, all the reduced costs become less than 1 only when an optimal BFS for m = 2 is obtained. Thus the first 2^2 - 1 = 3 iterations are just like the case of m = 2, and the (x_1, x_2)^T-part of the BFSs moves in the order of v^0, v^1, v^2, v^3. After 3 iterations, we get the 4th dictionary

  z   = 3 - x_1 - y_2 + x_3,
  y_1 = 1 - x_1,
  x_2 = 3 - 2x_1 - y_2,
  y_3 = 1 + 2x_1 + 2y_2 - x_3.   (15)

We can see that this dictionary, except for the last row and the last column, is just the same as the dictionary (10). At this dictionary, x_3 is chosen as an

entering variable, and the 5th dictionary is

  z   = 4 + x_1 + y_2 - y_3,
  y_1 = 1 - x_1,
  x_2 = 3 - 2x_1 - y_2,
  x_3 = 1 + 2x_1 + 2y_2 - y_3.   (16)

We see that the objective function value increases by 1 and each reduced cost is 1 or -1 in this case too. The nonbasic variable y_3 does not appear except in the objective function and the last constraint of (16), and the reduced cost of y_3 is -1. So the variable y_3 is not chosen as an entering variable until the algorithm stops. Hence the iterations are just like the case of m = 2. Note that the dictionary (16), except for the last column, the last row, and the constant in z, is the same as the dictionary (12). So the (x_1, x_2)^T-part of the BFSs generated by Dantzig's simplex method starting from (16) moves in the order of v^3, v^2, v^1, v^0 (see Figure 2, where the vertices generated are shown as u^0, u^1, …, u^7). After 3 iterations, we get the 8th dictionary

  z   = 7 - x_1 - x_2 - y_3,
  y_1 = 1 - x_1,
  y_2 = 3 - 2x_1 - x_2,
  x_3 = 7 - 2x_1 - 2x_2 - y_3.

This dictionary is optimal and the optimal value is 2^3 - 1. It is easy to check that the x-parts of the iterates are

  u^0 = (0, 0, 0)^T,  u^1 = (1, 0, 0)^T,  u^2 = (1, 1, 0)^T,  u^3 = (0, 3, 0)^T,
  u^4 = (0, 3, 1)^T,  u^5 = (1, 1, 3)^T,  u^6 = (1, 0, 5)^T,  u^7 = (0, 0, 7)^T,

whose objective function values are 0, 1, 2, 3, 4, 5, 6, 7 in this order. Hence all the results for the problem (4) in the theorem are true for m = 3. In a similar way, we can also prove that all the results for the problem (11) in the theorem are true for m = 3. We will prove the theorem in the next section. Here we state the next two corollaries, which are obtained from the second and third results for each problem in the theorem.

Figure 2: Vertices generated by Dantzig's simplex method when m = 3

Corollary 3.1 The sequence of BFSs generated by the simplex method with the minimum index rule for the problem (4) is the same as that generated by Dantzig's rule.

Corollary 3.2 Let {(v^i, u^i) ∈ R^{2m} : i = 0, 1, 2, …, 2^m - 1} be the sequence of BFSs generated by Dantzig's simplex method for the problem (4) from the initial BFS x^0 = 0, y^0 = (1, 2^2 - 1, …, 2^m - 1)^T. Then the sequence of BFSs generated by Dantzig's simplex method for the problem (11) from the initial BFS x^0 = (0, 0, …, 0, 2^m - 1)^T, y^0 = (1, 2^2 - 1, …, 2^{m-1} - 1, 0)^T is {(v^{2^m - 1 - i}, u^{2^m - 1 - i}) ∈ R^{2m} : i = 0, 1, 2, …, 2^m - 1}.

4 Proof of Theorem 3.1

In this section, we prove Theorem 3.1. For ease of understanding the proof, we show in Section 5 all the iterations of Dantzig's simplex method for solving the problem (4) when m = 3 and m = 4.
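The induction below hinges on the containment already visible for m = 3: dictionary (15) with its last row and column removed is dictionary (10), and dictionary (16) with its last row and column removed, ignoring the constant in z, is dictionary (12). A small self-check of these two claims, with the dictionaries transcribed from the text as coefficient tables, is sketched here (illustration only, not part of the paper).

    # Self-check (illustration only): dictionaries as
    # {basic variable: {"const": c0, nonbasic variable: coefficient}}.
    D10 = {"z":  {"const": 3, "x1": -1, "y2": -1},          # optimal, m = 2, problem (4)
           "y1": {"const": 1, "x1": -1},
           "x2": {"const": 3, "x1": -2, "y2": -1}}
    D12 = {"w":  {"const": 0, "x1": 1, "y2": 1},            # initial, m = 2, problem (11)
           "y1": {"const": 1, "x1": -1},
           "x2": {"const": 3, "x1": -2, "y2": -1}}
    D15 = {"z":  {"const": 3, "x1": -1, "y2": -1, "x3": 1}, # after 3 pivots, m = 3
           "y1": {"const": 1, "x1": -1},
           "x2": {"const": 3, "x1": -2, "y2": -1},
           "y3": {"const": 1, "x1": 2, "y2": 2, "x3": -1}}
    D16 = {"z":  {"const": 4, "x1": 1, "y2": 1, "y3": -1},  # after 4 pivots, m = 3
           "y1": {"const": 1, "x1": -1},
           "x2": {"const": 3, "x1": -2, "y2": -1},
           "x3": {"const": 1, "x1": 2, "y2": 2, "y3": -1}}

    def drop(dic, row, col, zero_obj=None):
        """Remove one row and one column; optionally reset the objective constant."""
        out = {b: {v: c for v, c in eq.items() if v != col}
               for b, eq in dic.items() if b != row}
        if zero_obj is not None:
            out[zero_obj]["const"] = 0
        return out

    # (15) minus its last row/column equals the optimal m = 2 dictionary (10)
    assert drop(D15, "y3", "x3") == D10
    # (16) minus its last row/column, up to the constant in z, equals (12)
    reduced = drop(D16, "x3", "y3", zero_obj="z")
    assert reduced == {("z" if b == "w" else b): eq for b, eq in D12.items()}
    print("sub-dictionary containment verified")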

Proof. Assume that all the results in Theorem 3.1 are true for m = k. When m = k + 1, the first dictionary of the problem (4) is

  z       = x_1 + ⋯ + x_k + x_{k+1},
  y_1     = 1 - x_1,
   ⋮
  y_k     = (2^k - 1) - 2x_1 - ⋯ - x_k,
  y_{k+1} = (2^{k+1} - 1) - 2x_1 - ⋯ - 2x_k - x_{k+1}.   (17)

Note that the nonbasic variable x_{k+1} does not appear except in the objective function and the last constraint of (17), and that the reduced cost of x_{k+1} is 1. So the variable x_{k+1} is not chosen as an entering variable until the reduced costs of the other nonbasic variables become less than 1. While the variable x_{k+1} is nonbasic, the dictionaries except for the last constraint are updated just as in the case of m = k. Since any reduced cost for m = k is 1 or -1, all the reduced costs become less than 1 only when an optimal BFS for m = k is obtained. Thus the first (2^k - 1) iterations are just like the case of m = k. After (2^k - 1) iterations, the basic variables are y_1, …, y_{k-1}, x_k, y_{k+1}, and we get the 2^k-th dictionary

  z       = (2^k - 1) - x_1 - ⋯ - y_k + x_{k+1},
  y_1     = 1 - x_1,
   ⋮
  x_k     = (2^k - 1) - 2x_1 - ⋯ - y_k,
  y_{k+1} = 1 + 2x_1 + ⋯ + 2y_k - x_{k+1}.   (18)

This dictionary, except for the last row and the last column, is just the same as the dictionary of the optimal BFS for m = k. At this dictionary, x_{k+1} is chosen as the entering variable, and the (2^k + 1)-th dictionary is

  z       = 2^k + x_1 + ⋯ + y_k - y_{k+1},
  y_1     = 1 - x_1,
   ⋮
  x_k     = (2^k - 1) - 2x_1 - ⋯ - y_k,
  x_{k+1} = 1 + 2x_1 + ⋯ + 2y_k - y_{k+1}.   (19)

We see that the objective function value increases by 1, and each reduced cost is 1 or -1 in this case too. The nonbasic variable y_{k+1} does not appear except in the objective function and the last constraint of (19), and the reduced cost of y_{k+1} is -1. So the variable y_{k+1} is not chosen as an entering variable until the algorithm stops. Hence the iterations are just like the case of m = k.

The dictionary (19), except for the last column, the last row, and the constant in z, is the same as the initial dictionary of the problem (11) when m = k. Thus the next (2^k - 1) iterations are just as in the case of the problem (11) when m = k. After these (2^k - 1) iterations, we get the 2^{k+1}-th dictionary

  z       = (2^{k+1} - 1) - x_1 - ⋯ - x_k - y_{k+1},
  y_1     = 1 - x_1,
   ⋮
  y_k     = (2^k - 1) - 2x_1 - ⋯ - x_k,
  x_{k+1} = (2^{k+1} - 1) - 2x_1 - ⋯ - 2x_k - y_{k+1}.

This dictionary is optimal and the optimal value is 2^{k+1} - 1. It is easy to check that all the results for the problem (4) for m = k + 1 in the theorem are true. In a similar way, we can also prove that all the results for the problem (11) in the theorem are true when m = k + 1.

5 Iterations when m = 3 and m = 4

In this section, we show all the iterations of Dantzig's simplex method for solving the problem (4) when m = 3 and m = 4. When m = 3, the initial dictionary is

  z   = x_1 + x_2 + x_3
  y_1 = 1 - x_1
  y_2 = 3 - 2x_1 - x_2
  y_3 = 7 - 2x_1 - 2x_2 - x_3.

The second dictionary is

  z   = 1 - y_1 + x_2 + x_3
  x_1 = 1 - y_1
  y_2 = 1 + 2y_1 - x_2
  y_3 = 5 + 2y_1 - 2x_2 - x_3.

The third dictionary is

  z   = 2 + y_1 - y_2 + x_3
  x_1 = 1 - y_1
  x_2 = 1 + 2y_1 - y_2
  y_3 = 3 - 2y_1 + 2y_2 - x_3.

The 4th dictionary is

  z   = 3 - x_1 - y_2 + x_3
  y_1 = 1 - x_1
  x_2 = 3 - 2x_1 - y_2
  y_3 = 1 + 2x_1 + 2y_2 - x_3.

The 5th dictionary is

  z   = 4 + x_1 + y_2 - y_3
  y_1 = 1 - x_1
  x_2 = 3 - 2x_1 - y_2
  x_3 = 1 + 2x_1 + 2y_2 - y_3.

The 6th dictionary is

  z   = 5 - y_1 + y_2 - y_3
  x_1 = 1 - y_1
  x_2 = 1 + 2y_1 - y_2
  x_3 = 3 - 2y_1 + 2y_2 - y_3.

The 7th dictionary is

  z   = 6 + y_1 - x_2 - y_3
  x_1 = 1 - y_1
  y_2 = 1 + 2y_1 - x_2
  x_3 = 5 + 2y_1 - 2x_2 - y_3.

The 8th dictionary is

  z   = 7 - x_1 - x_2 - y_3
  y_1 = 1 - x_1
  y_2 = 3 - 2x_1 - x_2
  x_3 = 7 - 2x_1 - 2x_2 - y_3.

This dictionary is optimal.

Next we show all the iterations of Dantzig's simplex method for solving the problem (4) when m = 4. The initial dictionary is

  z   = x_1 + x_2 + x_3 + x_4
  y_1 = 1 - x_1
  y_2 = 3 - 2x_1 - x_2
  y_3 = 7 - 2x_1 - 2x_2 - x_3
  y_4 = 15 - 2x_1 - 2x_2 - 2x_3 - x_4.

The second dictionary is

  z   = 1 - y_1 + x_2 + x_3 + x_4
  x_1 = 1 - y_1
  y_2 = 1 + 2y_1 - x_2
  y_3 = 5 + 2y_1 - 2x_2 - x_3
  y_4 = 13 + 2y_1 - 2x_2 - 2x_3 - x_4.

The third dictionary is

  z   = 2 + y_1 - y_2 + x_3 + x_4
  x_1 = 1 - y_1
  x_2 = 1 + 2y_1 - y_2
  y_3 = 3 - 2y_1 + 2y_2 - x_3
  y_4 = 11 - 2y_1 + 2y_2 - 2x_3 - x_4.

The 4th dictionary is

  z   = 3 - x_1 - y_2 + x_3 + x_4
  y_1 = 1 - x_1
  x_2 = 3 - 2x_1 - y_2
  y_3 = 1 + 2x_1 + 2y_2 - x_3
  y_4 = 9 + 2x_1 + 2y_2 - 2x_3 - x_4.

The 5th dictionary is

  z   = 4 + x_1 + y_2 - y_3 + x_4
  y_1 = 1 - x_1
  x_2 = 3 - 2x_1 - y_2
  x_3 = 1 + 2x_1 + 2y_2 - y_3
  y_4 = 7 - 2x_1 - 2y_2 + 2y_3 - x_4.

The 6th dictionary is

  z   = 5 - y_1 + y_2 - y_3 + x_4
  x_1 = 1 - y_1
  x_2 = 1 + 2y_1 - y_2
  x_3 = 3 - 2y_1 + 2y_2 - y_3
  y_4 = 5 + 2y_1 - 2y_2 + 2y_3 - x_4.

The 7th dictionary is

  z   = 6 + y_1 - x_2 - y_3 + x_4
  x_1 = 1 - y_1
  y_2 = 1 + 2y_1 - x_2
  x_3 = 5 + 2y_1 - 2x_2 - y_3
  y_4 = 3 - 2y_1 + 2x_2 + 2y_3 - x_4.

The 8th dictionary is

  z   = 7 - x_1 - x_2 - y_3 + x_4
  y_1 = 1 - x_1
  y_2 = 3 - 2x_1 - x_2
  x_3 = 7 - 2x_1 - 2x_2 - y_3
  y_4 = 1 + 2x_1 + 2x_2 + 2y_3 - x_4.

The 9th dictionary is

  z   = 8 + x_1 + x_2 + y_3 - y_4
  y_1 = 1 - x_1
  y_2 = 3 - 2x_1 - x_2
  x_3 = 7 - 2x_1 - 2x_2 - y_3
  x_4 = 1 + 2x_1 + 2x_2 + 2y_3 - y_4.

The 10th dictionary is

  z   = 9 - y_1 + x_2 + y_3 - y_4
  x_1 = 1 - y_1
  y_2 = 1 + 2y_1 - x_2
  x_3 = 5 + 2y_1 - 2x_2 - y_3
  x_4 = 3 - 2y_1 + 2x_2 + 2y_3 - y_4.

The 11th dictionary is

  z   = 10 + y_1 - y_2 + y_3 - y_4
  x_1 = 1 - y_1
  x_2 = 1 + 2y_1 - y_2
  x_3 = 3 - 2y_1 + 2y_2 - y_3
  x_4 = 5 + 2y_1 - 2y_2 + 2y_3 - y_4.

The 12th dictionary is

  z   = 11 - x_1 - y_2 + y_3 - y_4
  y_1 = 1 - x_1
  x_2 = 3 - 2x_1 - y_2
  x_3 = 1 + 2x_1 + 2y_2 - y_3
  x_4 = 7 - 2x_1 - 2y_2 + 2y_3 - y_4.

The 13th dictionary is

  z   = 12 + x_1 + y_2 - x_3 - y_4
  y_1 = 1 - x_1
  x_2 = 3 - 2x_1 - y_2
  y_3 = 1 + 2x_1 + 2y_2 - x_3
  x_4 = 9 + 2x_1 + 2y_2 - 2x_3 - y_4.

The 14th dictionary is

  z   = 13 - y_1 + y_2 - x_3 - y_4
  x_1 = 1 - y_1
  x_2 = 1 + 2y_1 - y_2
  y_3 = 3 - 2y_1 + 2y_2 - x_3
  x_4 = 11 - 2y_1 + 2y_2 - 2x_3 - y_4.

The 15th dictionary is

  z   = 14 + y_1 - x_2 - x_3 - y_4
  x_1 = 1 - y_1
  y_2 = 1 + 2y_1 - x_2
  y_3 = 5 + 2y_1 - 2x_2 - x_3
  x_4 = 13 + 2y_1 - 2x_2 - 2x_3 - y_4.

The 16th dictionary is

  z   = 15 - x_1 - x_2 - x_3 - y_4
  y_1 = 1 - x_1
  y_2 = 3 - 2x_1 - x_2
  y_3 = 7 - 2x_1 - 2x_2 - x_3
  x_4 = 15 - 2x_1 - 2x_2 - 2x_3 - y_4.

This dictionary is optimal.

Acknowledgment

This research is supported in part by a Grant-in-Aid for Scientific Research (A) of the Japan Society for the Promotion of Science.

References

[1] G. B. Dantzig, Linear Programming and Extensions, Princeton University Press, Princeton, New Jersey, 1963.

[2] T. Kitahara and S. Mizuno, A Bound for the Number of Different Basic Solutions Generated by the Simplex Method, technical paper, available at mizu_lab/simplexmethod/, 2010.

[3] T. Kitahara and S. Mizuno, Klee-Minty's LP and Upper Bounds for Dantzig's Simplex Method, technical paper, available at mizu_lab/simplexmethod/, 2010.

[4] V. Klee and G. J. Minty, How good is the simplex algorithm?, in O. Shisha, editor, Inequalities III, pp. 159-175, Academic Press, New York, NY, 1972.

[5] Y. Ye, The Simplex and Policy Iteration Methods are Strongly Polynomial for the Markov Decision Problem with a Fixed Discount Rate, technical paper, available at http://www.stanford.edu/~yyye/, 2010.
