An upper bound for the number of different solutions generated by the primal simplex method with any selection rule of entering variables


Tomonari Kitahara and Shinji Mizuno

February 2012

Tomonari Kitahara: Graduate School of Decision Science and Technology, Tokyo Institute of Technology, 2-12-1-W9-62, Oo-Okayama, Meguro-ku, Tokyo, 152-8552, Japan. Tel.: +81-3-5734-2896, Fax: +81-3-5734-2947, E-mail: kitahara.t.ab@m.titech.ac.jp.
Shinji Mizuno: Graduate School of Decision Science and Technology, Tokyo Institute of Technology, 2-12-1-W9-58, Oo-Okayama, Meguro-ku, Tokyo, 152-8552, Japan. Tel.: +81-3-5734-2816, Fax: +81-3-5734-2947, E-mail: mizuno.s.ab@m.titech.ac.jp.

Abstract

Kitahara and Mizuno (2011a) obtained an upper bound for the number of different solutions generated by the primal simplex method with Dantzig's (the most negative) pivoting rule. In this paper, we obtain an upper bound for any pivoting rule which chooses an entering variable whose reduced cost is negative at each iteration. The bound is applied to special linear programming problems. We also get a similar bound for the dual simplex method.

Keywords: Linear programming; the number of basic solutions; pivoting rule; the simplex method.

1 Introduction

The simplex method for solving linear programming problems (LPs) was originally developed by Dantzig (1963). In spite of its practical efficiency, a good bound for the number of iterations of the simplex method has not been known for a long time. A main reason for this is a pessimistic result

by Klee and Minty (1972). They showed that the simplex method needs an exponential number of iterations for a carefully designed LP.

Kitahara and Mizuno (2011a) obtained an upper bound for the number of different basic feasible solutions generated by Dantzig's simplex method (the primal simplex method with the most negative pivoting rule) for a standard form linear programming problem. The bound is

    $n \left\lceil m \frac{\gamma_P}{\delta_P} \log\left(m \frac{\gamma_P}{\delta_P}\right) \right\rceil$,

where $m$ is the number of constraints, $n$ is the number of variables, $\delta_P$ and $\gamma_P$ are the minimum and the maximum values of all the positive elements of primal basic feasible solutions, and $\lceil r \rceil$ is the minimum integer bigger than a real number $r$. The number of different basic feasible solutions generated by the simplex method is not identical to the number of iterations in general, but the two coincide if the primal problem is nondegenerate. Kitahara, Matsui, and Mizuno (2011) improved the bound to

    $(n - m) \left\lceil \min\{m, n-m\} \frac{\gamma_P}{\delta_P} \log\left(\min\{m, n-m\} \frac{\gamma_P}{\delta_P}\right) \right\rceil$.

These are restricted results, since they are valid only for the most negative pivoting rule and the best improvement pivoting rule. In this paper, we handle any pivoting rule which chooses an entering variable whose reduced cost is negative at each iteration. We show that the number of different basic feasible solutions generated by the primal simplex method is at most

    $\min\{m, n-m\} \dfrac{\gamma_P \bar{\gamma}_D}{\delta_P \bar{\delta}_D}$,    (1)

where $\bar{\delta}_D$ and $\bar{\gamma}_D$ are the minimum and the maximum of the absolute values of all the negative elements of basic solutions of the dual problem. We also get a similar bound for the dual simplex method. When the bound (1) is applied to a variant of Klee-Minty's LP proposed by Kitahara and Mizuno (2011b), the upper bound (1) becomes $m(2^m - 1)$. The ratio between this upper bound and the lower bound $2^m - 1$ given in Kitahara and Mizuno (2011b) is much smaller than the bound itself.
In the case where the constraint matrix $A$ is totally unimodular and the constraint vector $b$ and the objective vector $c$ are integral, the bound becomes $\min\{m, n-m\} \|b\|_1 \|c\|_1$.

2 The simplex method

In this section, we review the simplex method. We consider the primal LP

    min  $c^T x$,  subject to  $Ax = b$, $x \geq 0$,    (2)

where $A \in R^{m \times n}$, $b \in R^m$, and $c \in R^n$ are given data and $x \in R^n$ is a variable vector. The dual problem of (2) is

    max  $b^T y$,  subject to  $A^T y + s = c$, $s \geq 0$,    (3)

where $y \in R^m$ and $s \in R^n$ are variable vectors. In this paper, we make the following assumptions.

1. $\mathrm{rank}(A) = m$.
2. The primal problem (2) has an optimal solution.
3. An initial basic feasible solution $x^0$ of the primal problem is known.

Let $x^*$ be an optimal basic feasible solution of (2) and let $z^*$ be the optimal value. From the duality theorem, the dual problem (3) also has an optimal solution and its optimal value is $z^*$. Let $(y^*, s^*)$ be an optimal basic feasible solution of the dual problem (3).

We split $A$, $c$, $x$, and $s$ according to an index set $B \subset \{1, 2, \ldots, n\}$ and its complementary set $N = \{1, 2, \ldots, n\} \setminus B$ as

    $A = (A_B, A_N)$, $c = \begin{pmatrix} c_B \\ c_N \end{pmatrix}$, $x = \begin{pmatrix} x_B \\ x_N \end{pmatrix}$, $s = \begin{pmatrix} s_B \\ s_N \end{pmatrix}$.

We call $B$ a basis when $A_B$ is an $m \times m$ nonsingular matrix. Let $\mathcal{B}$ be the set of bases. For any basis $B \in \mathcal{B}$ and nonbasis $N = \{1, 2, \ldots, n\} \setminus B$, the primal problem can be written as

    min  $c_B^T x_B + c_N^T x_N$,  subject to  $A_B x_B + A_N x_N = b$, $x \geq 0$.

Since $A_B$ is nonsingular, we further rewrite the problem as

    min  $c_B^T A_B^{-1} b + (c_N - A_N^T (A_B^{-1})^T c_B)^T x_N$,
    subject to  $x_B = A_B^{-1} b - A_B^{-1} A_N x_N$, $x_B \geq 0$, $x_N \geq 0$.    (4)
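As a concrete illustration of the basis splitting and the basic solution read off from a dictionary, the following minimal sketch (not part of the paper; the function names and LP data are invented for illustration) computes $x_B = A_B^{-1} b$, $x_N = 0$ in exact rational arithmetic:

```python
from fractions import Fraction

def solve(M, rhs):
    """Solve the square system M z = rhs exactly by Gauss-Jordan elimination;
    returns None if M is singular."""
    n = len(M)
    T = [[Fraction(M[i][j]) for j in range(n)] + [Fraction(rhs[i])]
         for i in range(n)]
    for col in range(n):
        piv = next((r for r in range(col, n) if T[r][col] != 0), None)
        if piv is None:
            return None
        T[col], T[piv] = T[piv], T[col]
        p = T[col][col]
        T[col] = [v / p for v in T[col]]
        for r in range(n):
            if r != col and T[r][col] != 0:
                f = T[r][col]
                T[r] = [a - f * b for a, b in zip(T[r], T[col])]
    return [T[i][n] for i in range(n)]

def basic_solution(A, b, B):
    """Primal basic solution for basis B: x_B = A_B^{-1} b and x_N = 0."""
    m, n = len(A), len(A[0])
    AB = [[A[i][j] for j in B] for i in range(m)]
    xB = solve(AB, b)
    x = [Fraction(0)] * n
    for k, j in enumerate(B):
        x[j] = xB[k]
    return x
```

For example, with $A = [[1,0,1,0],[0,1,0,1]]$, $b = (3,2)$, and the basis consisting of the first two columns, the basic solution is $(3, 2, 0, 0)$.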

This form is called a dictionary for the primal problem (2) with $B$. From the dictionary, we get the basic solution

    $x^B = \begin{pmatrix} x_B^B \\ x_N^B \end{pmatrix}$, $x_B^B = A_B^{-1} b$, $x_N^B = 0$.    (5)

If $x_B^B \geq 0$, this is a basic feasible solution. We define the set of primal feasible bases by $\mathcal{B}_P = \{B \in \mathcal{B} \mid x_B^B \geq 0\}$. Let $\delta_P$ and $\gamma_P$ be the minimum and the maximum values of all the positive elements of primal basic feasible solutions, respectively. Then we have

    $\delta_P \leq x_j^B \leq \gamma_P$  if $B \in \mathcal{B}_P$ and $x_j^B \neq 0$.    (6)

Similarly, the dual problem (3) can be written as

    max  $b^T y$,  subject to  $A_B^T y + s_B = c_B$, $A_N^T y + s_N = c_N$, $s_B \geq 0$, $s_N \geq 0$,

and we can proceed further:

    max  $b^T (A_B^T)^{-1} c_B - b^T (A_B^T)^{-1} s_B$,
    subject to  $y = (A_B^T)^{-1} c_B - (A_B^T)^{-1} s_B$,
                $s_N = (c_N - A_N^T (A_B^T)^{-1} c_B) + A_N^T (A_B^T)^{-1} s_B$,
                $s_B \geq 0$, $s_N \geq 0$.    (7)

This is called a dictionary for the dual problem (3) with $B$. From the dictionary, we obtain the basic solution

    $y^B = (A_B^T)^{-1} c_B$, $s^B = \begin{pmatrix} s_B^B \\ s_N^B \end{pmatrix}$, $s_B^B = 0$, $s_N^B = c_N - A_N^T (A_B^T)^{-1} c_B$.    (8)

If $s_N^B \geq 0$, this is a dual basic feasible solution. We define the set of dual feasible bases by $\mathcal{B}_D = \{B \in \mathcal{B} \mid s_N^B \geq 0\}$. Let $\delta_D$ and $\gamma_D$ be the minimum and the maximum values of all the positive elements of $s^B$ over dual basic feasible solutions $(y^B, s^B)$. Then we have

    $\delta_D \leq s_j^B \leq \gamma_D$  if $B \in \mathcal{B}_D$ and $s_j^B \neq 0$.    (9)

In addition, we define

    $\bar{\delta}_D = \min\{|s_j^B| \mid B \in \mathcal{B}_P \text{ and } s_j^B < 0\}$

and

    $\bar{\gamma}_D = \max\{|s_j^B| \mid B \in \mathcal{B}_P \text{ and } s_j^B < 0\}$.

From these definitions we have

    $-\bar{\gamma}_D \leq s_j^B \leq -\bar{\delta}_D$  if $B \in \mathcal{B}_P$ and $s_j^B < 0$.    (10)

We also define

    $\bar{\delta}_P = \min\{|x_j^B| \mid B \in \mathcal{B}_D \text{ and } x_j^B < 0\}$

and

    $\bar{\gamma}_P = \max\{|x_j^B| \mid B \in \mathcal{B}_D \text{ and } x_j^B < 0\}$.

Then we have

    $-\bar{\gamma}_P \leq x_j^B \leq -\bar{\delta}_P$  if $B \in \mathcal{B}_D$ and $x_j^B < 0$.    (11)

We remark that the values of $\delta_P$, $\gamma_P$, $\bar{\delta}_P$, and $\bar{\gamma}_P$ depend only on $A$ and $b$ but not on $c$, while the values $\delta_D$, $\gamma_D$, $\bar{\delta}_D$, and $\bar{\gamma}_D$ depend only on $A$ and $c$ but not on $b$.

The primal dictionary (4) can be expressed by using the primal basic solution (5) and the dual basic solution (8):

    min  $z^B + (s_N^B)^T x_N$,  subject to  $x_B = x_B^B - A_B^{-1} A_N x_N$, $x_B \geq 0$, $x_N \geq 0$,

where $z^B = c_B^T A_B^{-1} b$. Similarly, the dual dictionary (7) can be written as

    max  $z^B - (x_B^B)^T s_B$,
    subject to  $y = y^B - (A_B^T)^{-1} s_B$, $s_N = s_N^B + A_N^T (A_B^T)^{-1} s_B$, $s_B \geq 0$, $s_N \geq 0$.

Since the objective function is independent of $y$, we can ignore $y$ and obtain the following LP:

    max  $z^B - (x_B^B)^T s_B$,  subject to  $s_N = s_N^B + A_N^T (A_B^T)^{-1} s_B$, $s_B \geq 0$, $s_N \geq 0$.    (12)

Suppose that we generate a sequence of primal basic feasible solutions $\{x^k \mid k = 1, 2, \ldots\}$ and a sequence of bases $\{B_k \in \mathcal{B}_P \mid k = 1, 2, \ldots\}$ by the primal simplex method from an initial basic feasible solution $x^0$. Let
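The four constants can be made concrete by brute force on a tiny problem. The following sketch (illustrative only; the example LP and function names are invented) enumerates all bases, keeps the primal feasible ones, and evaluates $\delta_P, \gamma_P$ per (6) and $\bar{\delta}_D, \bar{\gamma}_D$ per the definitions around (10):

```python
from fractions import Fraction
from itertools import combinations

def solve(M, rhs):
    """Exact Gauss-Jordan solve of M z = rhs; returns None if M is singular."""
    n = len(M)
    T = [[Fraction(M[i][j]) for j in range(n)] + [Fraction(rhs[i])]
         for i in range(n)]
    for col in range(n):
        piv = next((r for r in range(col, n) if T[r][col] != 0), None)
        if piv is None:
            return None
        T[col], T[piv] = T[piv], T[col]
        p = T[col][col]
        T[col] = [v / p for v in T[col]]
        for r in range(n):
            if r != col and T[r][col] != 0:
                f = T[r][col]
                T[r] = [a - f * b for a, b in zip(T[r], T[col])]
    return [T[i][n] for i in range(n)]

def constants(A, b, c):
    """Return (delta_P, gamma_P, bar_delta_D, bar_gamma_D) by enumeration:
    positive entries of x^B over primal feasible bases B, and absolute
    values of negative reduced costs s^B over the same bases."""
    m, n = len(A), len(A[0])
    pos, neg = [], []
    for B in combinations(range(n), m):
        AB = [[A[i][j] for j in B] for i in range(m)]
        xB = solve(AB, b)
        if xB is None or any(v < 0 for v in xB):
            continue  # B is not a primal feasible basis
        pos += [v for v in xB if v > 0]
        # dual basic solution (8): solve A_B^T y = c_B, then s = c - A^T y
        y = solve([[A[i][B[k]] for i in range(m)] for k in range(m)],
                  [c[j] for j in B])
        s = [c[j] - sum(A[i][j] * y[i] for i in range(m)) for j in range(n)]
        neg += [-v for v in s if v < 0]
    return min(pos), max(pos), min(neg), max(neg)
```

For the LP min $-x_1 - x_2$ subject to $x_1 + x_3 = 1$, $x_2 + x_4 = 1$, $x \geq 0$, all four constants equal 1.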

$B_k \in \mathcal{B}_P$ be the $k$-th basis and $N_k = \{1, 2, \ldots, n\} \setminus B_k$. Then the dictionary (4) can be expressed as

    min  $z^{B_k} + (s_{N_k}^{B_k})^T x_{N_k}$,
    subject to  $x_{B_k} = x_{B_k}^{B_k} - A_{B_k}^{-1} A_{N_k} x_{N_k}$, $x_{B_k} \geq 0$, $x_{N_k} \geq 0$.

If $s_{N_k}^{B_k} \geq 0$ holds, the current basic feasible solution $x^k$ is optimal. Otherwise we conduct a pivot. In this paper, we consider the primal simplex method which chooses an entering variable whose reduced cost is negative. Thus we choose a nonbasic variable (entering variable) $x_j$ such that $s_j^{B_k} < 0$ and $j \in N_k$. Then the value of the entering variable is increased until a basic variable becomes zero. There are several rules for choosing an entering variable, for example, the minimum coefficient rule (Dantzig's rule), the best improvement rule, and the minimum index rule. Under Dantzig's rule, we choose an index according to

    $j_k \in \arg\min_{j \in N_k} s_j^{B_k}$

and set $x_{j_k}$ as the entering variable.

We briefly explain the dual simplex method. Let $(y^k, s^k)$ and $B_k \in \mathcal{B}_D$ be the $k$-th iterate and the $k$-th basis. Then from (12), the $k$-th dictionary is expressed as

    max  $z^{B_k} - (x_{B_k}^{B_k})^T s_{B_k}$,
    subject to  $s_{N_k} = s_{N_k}^{B_k} + A_{N_k}^T (A_{B_k}^T)^{-1} s_{B_k}$, $s_{B_k} \geq 0$, $s_{N_k} \geq 0$.

Unless $x_{B_k}^{B_k} \geq 0$, we conduct a pivot. We choose a basic variable $s_j$, $j \in B_k$, which satisfies $x_j^{B_k} < 0$, and then we increase the value of $s_j$.

3 A new bound for the primal simplex method

First we show that for any nonoptimal basic feasible solution $x^t$ of (2), we can obtain a lower bound on the optimal value.

Lemma 1 We have

    $z^* \geq c^T x^t - \min\{n - m, m\} \gamma_P \bar{\gamma}_D$,

where $z^*$ is the optimal value of (2).
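The class of rules considered here includes, for instance, the minimum index rule. A minimal self-contained sketch of the primal simplex method under that rule (illustrative only; the data layout and names are invented, and exact rational arithmetic is used to avoid rounding issues):

```python
from fractions import Fraction

def solve(M, rhs):
    """Exact Gauss-Jordan solve of the square system M z = rhs."""
    n = len(M)
    T = [[Fraction(M[i][j]) for j in range(n)] + [Fraction(rhs[i])]
         for i in range(n)]
    for col in range(n):
        piv = next(r for r in range(col, n) if T[r][col] != 0)
        T[col], T[piv] = T[piv], T[col]
        p = T[col][col]
        T[col] = [v / p for v in T[col]]
        for r in range(n):
            if r != col and T[r][col] != 0:
                f = T[r][col]
                T[r] = [a - f * b for a, b in zip(T[r], T[col])]
    return [T[i][n] for i in range(n)]

def primal_simplex(A, b, c, B):
    """Primal simplex from the feasible basis B, entering on the smallest
    index j with negative reduced cost (minimum index rule, one member of
    the class of rules treated in the paper). Returns (x, objective)."""
    m, n = len(A), len(A[0])
    B = list(B)
    while True:
        AB = [[A[i][j] for j in B] for i in range(m)]
        xB = solve(AB, b)
        # dual variables from A_B^T y = c_B, then reduced costs s_j
        y = solve([[A[i][B[k]] for i in range(m)] for k in range(m)],
                  [c[j] for j in B])
        entering = None
        for j in range(n):
            if j not in B:
                s = c[j] - sum(A[i][j] * y[i] for i in range(m))
                if s < 0:
                    entering = j
                    break
        if entering is None:  # s_N >= 0: current basic solution is optimal
            x = [Fraction(0)] * n
            for k, j in enumerate(B):
                x[j] = xB[k]
            return x, sum(c[j] * x[j] for j in range(n))
        d = solve(AB, [A[i][entering] for i in range(m)])  # A_B^{-1} a_j
        ratios = [(xB[k] / d[k], k) for k in range(m) if d[k] > 0]
        if not ratios:
            raise ValueError("LP is unbounded")
        _, leave = min(ratios)  # ratio test; ties broken by smallest position
        B[leave] = entering
```

On min $-x_1 - x_2$ subject to $x_1 + x_3 = 1$, $x_2 + x_4 = 1$, $x \geq 0$, started from the slack basis, this reaches the optimum $x = (1, 1, 0, 0)$ with value $-2$.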

Proof: Let $x^*$ be an optimal basic feasible solution. We have

    $z^* = c^T x^* = z^{B_t} + (s_{N_t}^{B_t})^T x_{N_t}^* \geq c^T x^t - \bar{\gamma}_D e^T x_{N_t}^* \geq c^T x^t - \min\{n - m, m\} \gamma_P \bar{\gamma}_D$.

The first inequality follows from (10). The second inequality follows from the facts that $|N_t| = n - m$, the number of positive elements of $x^*$ is at most $m$, and (6). The desired inequality readily follows.

Next we show that the objective value decreases by at least a constant amount when the primal simplex method updates the iterate.

Theorem 1 Let $x^t$ and $x^{t+1}$ be the $t$-th and $(t+1)$-st iterates generated by the primal simplex method. If $x^t$ is not optimal and $x^{t+1} \neq x^t$, we have

    $c^T x^t - c^T x^{t+1} \geq \bar{\delta}_D \delta_P$.

Proof: Let $x_{j_t}$ be the entering variable at the $t$-th iteration. If $x_{j_t}^{t+1} = 0$, we have $x^{t+1} = x^t$, which contradicts the assumption. Thus $x_{j_t}^{t+1} \neq 0$, which implies $x_{j_t}^{t+1} \geq \delta_P$ from (6) and $-s_{j_t}^{B_t} \geq \bar{\delta}_D$ from (10). Then we have

    $c^T x^t - c^T x^{t+1} = -s_{j_t}^{B_t} x_{j_t}^{t+1} \geq \bar{\delta}_D \delta_P$.

Thus the proof is completed.

From Lemma 1 and Theorem 1, we obtain an upper bound for the number of different basic feasible solutions generated by the primal simplex method.

Corollary 1 Suppose that we generate a sequence of basic feasible solutions by the primal simplex method from an initial iterate $x^0$. Then the number of different basic feasible solutions is at most

    $\left\lceil \min\{m, n - m\} \dfrac{\gamma_P \bar{\gamma}_D}{\delta_P \bar{\delta}_D} \right\rceil + 1$,

where $\lceil r \rceil$ is the minimum integer bigger than a real value $r$.

4 Applications to special LPs

4.1 Klee-Minty's LP

Kitahara and Mizuno (2011b) constructed a variant of Klee-Minty's LP. For a given integer $m \geq 2$, the problem is represented as

    max  $\sum_{i=1}^m x_i$,
    subject to  $x_1 \leq 1$,
                $2 \sum_{i=1}^{k-1} x_i + x_k \leq 2^k - 1$  for $k = 2, 3, \ldots, m$,
                $x = (x_1, x_2, \ldots, x_m)^T \geq 0$.

By introducing a slack vector $y = (y_1, y_2, \ldots, y_m)^T$, the problem is equivalent to

    max  $\sum_{i=1}^m x_i$,
    subject to  $x_1 + y_1 = 1$,
                $2 \sum_{i=1}^{k-1} x_i + x_k + y_k = 2^k - 1$  for $k = 2, 3, \ldots, m$,
                $x \geq 0$, $y \geq 0$.    (13)

Kitahara and Mizuno (2011b) show that the variant (13) has the following properties.

1. The number of basic feasible solutions is $2^m$.
2. The number of different basic feasible solutions generated by Dantzig's primal simplex method is $2^m - 1$.
3. We have $\delta_P = 1$, $\gamma_P = 2^m - 1$, $\bar{\delta}_D = 1$, and $\bar{\gamma}_D = 1$ from Theorem 2.1 in Kitahara and Mizuno (2011b).

By using the bound in Corollary 1 and $n = 2m$, the number of different basic feasible solutions is at most

    $\left\lceil \min\{m, n - m\} \dfrac{\gamma_P \bar{\gamma}_D}{\delta_P \bar{\delta}_D} \right\rceil + 1 = m(2^m - 1) + 1$.    (14)

It is not hard to show that the "+1" in (14) is not actually required. The ratio between this upper bound and the lower bound $2^m - 1$ given in Kitahara and Mizuno (2011b) is about $m$, which is much smaller than the bound itself.
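The properties of the variant (13) are easy to check computationally for small $m$. The sketch below (illustrative only, with $m = 3$; the helper names are invented) builds the constraint data of (13), enumerates all bases exactly, and confirms that there are $2^m = 8$ basic feasible solutions whose positive entries lie between $\delta_P = 1$ and $\gamma_P = 2^m - 1 = 7$:

```python
from fractions import Fraction
from itertools import combinations

def solve(M, rhs):
    """Exact Gauss-Jordan solve of M z = rhs; returns None if M is singular."""
    n = len(M)
    T = [[Fraction(M[i][j]) for j in range(n)] + [Fraction(rhs[i])]
         for i in range(n)]
    for col in range(n):
        piv = next((r for r in range(col, n) if T[r][col] != 0), None)
        if piv is None:
            return None
        T[col], T[piv] = T[piv], T[col]
        p = T[col][col]
        T[col] = [v / p for v in T[col]]
        for r in range(n):
            if r != col and T[r][col] != 0:
                f = T[r][col]
                T[r] = [a - f * b for a, b in zip(T[r], T[col])]
    return [T[i][n] for i in range(n)]

def klee_minty_variant(m):
    """Constraint data of (13): row k reads 2*sum_{i<k} x_i + x_k + y_k = 2^k - 1."""
    A = [[0] * (2 * m) for _ in range(m)]
    b = []
    for k in range(m):
        for i in range(k):
            A[k][i] = 2
        A[k][k] = 1      # coefficient of x_{k+1}
        A[k][m + k] = 1  # coefficient of the slack y_{k+1}
        b.append(2 ** (k + 1) - 1)
    return A, b

m = 3
A, b = klee_minty_variant(m)
bfs = set()  # distinct basic feasible solutions
for B in combinations(range(2 * m), m):
    AB = [[A[i][j] for j in B] for i in range(m)]
    xB = solve(AB, b)
    if xB is not None and all(v >= 0 for v in xB):
        x = [Fraction(0)] * (2 * m)
        for k, j in enumerate(B):
            x[j] = xB[k]
        bfs.add(tuple(x))
positives = [v for x in bfs for v in x if v > 0]
```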

4.2 LP with a totally unimodular matrix

A matrix $A \in R^{m \times n}$ is said to be totally unimodular if the determinant of every square submatrix of $A$ is $\pm 1$ or 0. In this subsection, we assume that the constraint matrix $A$ of (2) is totally unimodular and that the constraint vector $b$ and the objective vector $c$ are integral. Then all the elements of any primal or dual basic solution are integers, which means that $\delta_P \geq 1$ and $\bar{\delta}_D \geq 1$.

Let $B \subset \{1, 2, \ldots, n\}$ be a basis. Then the basic solution associated with $B$ is $x_B^B = A_B^{-1} b$, $x_N^B = 0$. Since $A$ is totally unimodular, all the elements of $A_B^{-1}$ are $\pm 1$ or 0. Thus for any $j \in B$, we have $|x_j^B| \leq \|b\|_1$, which implies $\gamma_P, \bar{\gamma}_P \leq \|b\|_1$. Similarly, we can show that $\gamma_D, \bar{\gamma}_D \leq \|c\|_1$. Hence we get the next theorem.

Theorem 2 Suppose that the constraint matrix $A$ is totally unimodular and that the constraint vector $b$ and the objective vector $c$ are integral. Then the number of different solutions generated by the primal simplex method for solving (2) is at most

    $\min\{m, n - m\} \|b\|_1 \|c\|_1 + 1$.    (15)

It is not hard to show that the "+1" in (15) is not actually required.

Acknowledgment

This research is supported in part by Grant-in-Aid for Young Scientists (B) 23710164 and Grant-in-Aid for Scientific Research (A) 20241038 of the Japan Society for the Promotion of Science.

References

[1] Dantzig, G.B. (1963). Linear Programming and Extensions. Princeton University Press, Princeton, New Jersey.

[2] Kitahara, T., T. Matsui, and S. Mizuno (2011). On the number of solutions generated by Dantzig's simplex method for LP with bounded variables. To appear in Pacific Journal of Optimization.

[3] Kitahara, T. and S. Mizuno (2011a). A bound for the number of different basic solutions generated by the simplex method. To appear in Mathematical Programming.
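A quick computational sanity check of the integrality argument (illustrative; the interval matrix below is a standard example of a totally unimodular matrix, not taken from the paper): every basic solution has integer entries bounded in absolute value by $\|b\|_1$.

```python
from fractions import Fraction
from itertools import combinations

def solve(M, rhs):
    """Exact Gauss-Jordan solve of M z = rhs; returns None if M is singular."""
    n = len(M)
    T = [[Fraction(M[i][j]) for j in range(n)] + [Fraction(rhs[i])]
         for i in range(n)]
    for col in range(n):
        piv = next((r for r in range(col, n) if T[r][col] != 0), None)
        if piv is None:
            return None
        T[col], T[piv] = T[piv], T[col]
        p = T[col][col]
        T[col] = [v / p for v in T[col]]
        for r in range(n):
            if r != col and T[r][col] != 0:
                f = T[r][col]
                T[r] = [a - f * b for a, b in zip(T[r], T[col])]
    return [T[i][n] for i in range(n)]

# A has consecutive ones in every column, hence is totally unimodular.
A = [[1, 1, 0, 0],
     [0, 1, 1, 0],
     [0, 0, 1, 1]]
b = [1, 2, 1]
norm_b = sum(abs(v) for v in b)  # ||b||_1

entries = []
for B in combinations(range(4), 3):
    AB = [[A[i][j] for j in B] for i in range(3)]
    xB = solve(AB, b)
    if xB is not None:
        entries.extend(xB)  # entries of the basic solution for basis B

# every basic-solution entry is an integer with |x_j| <= ||b||_1
integral = all(v.denominator == 1 for v in entries)
bounded = all(abs(v) <= norm_b for v in entries)
```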

[4] Kitahara, T. and S. Mizuno (2011b). Klee-Minty's LP and upper bounds for Dantzig's simplex method. Operations Research Letters 39, 88-91.

[5] Klee, V. and G.J. Minty (1972). How good is the simplex algorithm? In O. Shisha (ed.), Inequalities III, Academic Press, New York.