A Strongly Polynomial Simplex Method for Totally Unimodular LP


Shinji Mizuno

July 19, 2014

Abstract

Kitahara and Mizuno obtained new bounds for the number of distinct solutions generated by the simplex method for linear programming (LP). In this paper, we combine the results of Kitahara and Mizuno with Tardos's strongly polynomial algorithm, and propose an algorithm for solving a standard form LP problem. The algorithm solves a polynomial number of artificial LP problems by the simplex method with Dantzig's rule. We show that the total number of distinct basic solutions generated by the algorithm is polynomially bounded in the number of constraints, the number of variables, and the maximum absolute determinant of the submatrices of the coefficient matrix. If the coefficient matrix is totally unimodular and all the artificial problems are nondegenerate, then the algorithm is strongly polynomial.

Keywords: Linear programming, Simplex method, Strongly polynomial, Totally unimodular.

Author affiliation: Graduate School of Decision Science and Technology, Tokyo Institute of Technology, 2-12-1-W9-58, Oo-Okayama, Meguro-ku, Tokyo, 152-8552, Japan. Tel.: +81-3-5734-2816, Fax: +81-3-5734-2947. E-mail: mizuno.s.ab@m.titech.ac.jp.

1 Introduction

Recently, Kitahara and Mizuno [4, 5] obtained new bounds for the number of distinct solutions generated by the simplex method for linear programming (LP). They extend results of Ye [7] for Markov decision problems to LP problems. In this paper, we combine the results of Kitahara and Mizuno [4, 5]

and Tardos [6], and propose an algorithm for solving the standard form LP problem

    min  c^T x
    s.t. Ax = b,  x ≥ 0,                                    (1)

where m and n are positive integers, A ∈ R^{m×n}, b ∈ R^m, and c ∈ R^n are given data, and x ∈ R^n is a column vector of variables. Assume that each element of A is an integer and that rank A = m. Define

    Δ = max{ |det D| : D is a square submatrix of A }.

The number of arithmetic operations required by the algorithm is polynomially bounded in m, n, and Δ. If Δ is bounded by a polynomial in m and n, then the algorithm is strongly polynomial. In particular, it is a strongly polynomial algorithm when A is totally unimodular, that is, when Δ = 1.

The algorithm is based on the basic algorithm proposed by Tardos [6], except that we use the dual simplex method instead of the ellipsoid method [3] or the interior-point method [2] for solving the artificial LP problems.

Let N = {1, 2, ..., n}. For any subset K of N and its complement K̄ = N \ K, we split the matrix A into (A_K, A_K̄) and partition the vectors c and x accordingly into (c_K, c_K̄) and (x_K, x_K̄). Suppose that rank A_K = m. Each artificial problem appearing in our algorithm has the form

    min  d_K^T x_K
    s.t. A_K x_K = b,  x_K ≥ 0                              (2)

for some vectors d_K and b, where each component of d_K is an integer whose absolute value is bounded by n²Δ. The dual problem of (2) is

    max  b^T y
    s.t. A_K^T y + z_K = d_K,  z_K ≥ 0,                     (3)

where z_K is a vector of slack variables. We find an index set B ⊆ K of an optimal basis and the dual optimal basic feasible solution

    ȳ = (A_B^T)^{-1} d_B,  z̄_B = 0,  z̄_B̄ = d_B̄ − A_B̄^T (A_B^T)^{-1} d_B

by the dual simplex method, where B̄ = K \ B. The corresponding primal optimal basic feasible solution is

    x̄_B = (A_B)^{-1} b,  x̄_B̄ = 0.
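Since the quantity Δ (the maximum absolute determinant of a square submatrix of A) drives every complexity bound below, it may help to see it computed on a tiny instance. The following brute-force sketch (exponential in the matrix size, for intuition only; the matrices are hypothetical examples) enumerates all square submatrices and takes the maximum absolute determinant. For the node-arc incidence matrix of a directed graph, a classic totally unimodular matrix, it returns Δ = 1.

```python
import itertools
import numpy as np

def max_subdet(A):
    """Delta = max |det D| over all square submatrices D of the integer
    matrix A, by brute-force enumeration (exponential; illustration only)."""
    m, n = A.shape
    best = 0
    for k in range(1, min(m, n) + 1):
        for rows in itertools.combinations(range(m), k):
            for cols in itertools.combinations(range(n), k):
                d = abs(round(np.linalg.det(A[np.ix_(rows, cols)])))
                best = max(best, d)
    return best

# Node-arc incidence matrix of a directed triangle: totally unimodular.
A_tu = np.array([[ 1,  1,  0],
                 [-1,  0,  1],
                 [ 0, -1, -1]])
print(max_subdet(A_tu))                         # 1, i.e. Delta = 1

# A matrix that is not totally unimodular:
print(max_subdet(np.array([[1, 1], [1, -1]])))  # 2
```

Real solvers never enumerate submatrices, of course; the point is only that Δ = 1 for incidence-type constraint matrices, which is the case where the algorithm of this paper is strongly polynomial.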

Each component of z_K in any basic solution (y, z_K) of (3) is a rational number p/q whose denominator q is bounded by Δ and whose numerator p is bounded by nΔ‖d_K‖_∞ ≤ n³Δ² in absolute value, by Cramer's rule. Kitahara and Mizuno [4] show that the number of distinct basic feasible solutions generated by the dual simplex method with Dantzig's rule is bounded by

    m² (γ_D / δ_D) log(m γ_D / δ_D),                        (4)

where δ_D and γ_D are, respectively, the minimum and the maximum values of all the positive elements of z over the dual basic feasible solutions. Since γ_D ≤ n³Δ² and δ_D ≥ 1/Δ for the problem (2), the bound (4) becomes

    m² n³ Δ³ log(m n³ Δ³).

If the problem (2) is nondegenerate, the number of iterations is also bounded by this quantity. The number of artificial problems (2) solved by the proposed algorithm is at most (m + 1)n.

In this paper, ‖a‖ and ‖a‖_∞ for a vector a denote the ℓ₂-norm and the ℓ∞-norm of a, and ⌈a⌉ for a scalar a denotes the smallest integer not less than a.

2 An algorithm

In this section, we propose an algorithm for solving the standard form LP problem (1) and its dual problem

    max  b^T y
    s.t. A^T y + z = c,  z ≥ 0.                             (5)

The algorithm is based on the basic algorithm proposed by Tardos [6], except that we use the dual simplex method instead of the ellipsoid method [3] or the interior-point method [2] for solving the artificial LP problems. The algorithm is stated as follows.

Algorithm

Step 0: If each element of c is an integer and ‖c‖_∞ ≤ n²Δ, then solve the LP problem (1) by the dual simplex method and stop. Otherwise set K = N and go to Step 1.

Step 1: Let c̄_K be the projection of c_K onto the subspace {x_K | A_K x_K = 0}, that is,

    c̄_K = (I − A_K^T (A_K A_K^T)^{-1} A_K) c_K.

If c̄_K = 0 then stop. Otherwise go to Step 2.

Step 2: Let d̄_K = (n²Δ / ‖c̄_K‖) c̄_K, let d_i = ⌈d̄_i⌉ for each i ∈ K, and let d_K be the vector of the d_i (i ∈ K). Consider the LP problem

    min  d_K^T x_K
    s.t. A_K x_K = b,  x_K ≥ 0                              (6)

and its dual problem

    max  b^T y
    s.t. A_K^T y + z_K = d_K,  z_K ≥ 0.                     (7)

Check whether the dual feasible region F = {(y, z_K) | A_K^T y + z_K = d_K, z_K ≥ 0} is empty. If it is empty then stop. Otherwise compute an initial basic feasible solution (y⁰, z⁰_K) ∈ F and solve the LP problem (6) by the dual simplex method with Dantzig's rule. If the dual problem (7) is unbounded, then stop. Otherwise compute the dual optimal basic feasible solution (ȳ, z̄_K) and an index set B ⊆ K of the optimal basis. Set

    J = {i ∈ K | d_i − a_i^T ȳ ≥ nΔ},

where a_i is the i-th column of A_K. Remove J from K. Go to Step 1.

If each element of c is integral and ‖c‖_∞ ≤ n²Δ, we directly solve the LP problem (1) by the dual simplex method at Step 0. The number of distinct solutions generated is then polynomially bounded in m, n, and Δ. If the feasible region F is empty at Step 2, then the dual problem (5) is infeasible by the definition of c̄_K, d̄_K, and d_K; hence the primal problem (1) is unbounded or infeasible.

We discuss in the next section how to check whether the dual feasible region F is empty and how to compute a basic feasible solution (y⁰, z⁰_K) ∈ F. In the remainder of this section, we assume that (y⁰, z⁰_K) ∈ F is available and that the dual problem (7) is nondegenerate. Thus the dual simplex method finds the dual optimal basic feasible solution (ȳ, z̄_K) and an index set B ⊆ K of the optimal basis unless the problem (7) is unbounded.

Let x̄_K be the primal optimal basic feasible solution of (6). From the complementarity condition and the definition of J at Step 2, x̄_i = 0 for each i ∈ J. Thus x̄_K, restricted to the new K, is a feasible solution of (6) at the next iteration. If the dual problem (7) is unbounded at Step 2, then the primal problem (6) is infeasible.
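Steps 1 and 2 can be traced numerically. In the sketch below all data are hypothetical illustrations (in particular, ȳ is a hand-picked dual feasible point, not the output of an actual dual simplex run): it computes the projection c̄_K of Step 1, the scaled and rounded objective d_K of Step 2, and the index set J.

```python
import numpy as np

n, Delta = 3, 1                      # illustration sizes; Delta = 1 (TU-like case)
A_K = np.array([[1.0, 0.0, 1.0],
                [0.0, 1.0, 1.0]])    # hypothetical A_K with rank 2
c_K = np.array([3.0, 1.0, 2.0])      # hypothetical cost vector

# Step 1: project c_K onto the null space of A_K.
P = np.eye(n) - A_K.T @ np.linalg.inv(A_K @ A_K.T) @ A_K
c_bar = P @ c_K
assert np.allclose(A_K @ c_bar, 0)   # c_bar satisfies A_K c_bar = 0

# Step 2: scale c_bar to length n^2 * Delta, then round each entry up.
d_bar = (n**2 * Delta / np.linalg.norm(c_bar)) * c_bar
d = np.ceil(d_bar)                   # integer objective of problem (6)
assert np.all(np.abs(d) <= n**2 * Delta)

# With a pretend dual feasible point y_bar of (7), form J (0-indexed here).
y_bar = np.array([-1.0, -4.0])
slack = d - A_K.T @ y_bar            # z_i = d_i - a_i^T y_bar, all >= 0 here
J = [i for i in range(n) if slack[i] >= n * Delta]
print(J)                             # indices removed from K
```

The assertions mirror the two facts used later: c̄_K lies in the null space of A_K, and the rounded entries of d_K are integers bounded by n²Δ in absolute value.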
Since the primal optimal basic feasible solution x̄_K is a feasible solution of (6) at the next iteration, the primal problem (6) can be infeasible only at the first iteration. Thus the problem (1) is infeasible in this case.

Tardos [6] shows that, for each i ∈ J, x_i = 0 at any optimal solution x of the LP problem (1) (see Lemma 1 below) and that J is nonempty

(see Lemma 2 below) at each iteration.

If the algorithm stops at Step 1 because c̄_K = 0, then any feasible solution x_K, which satisfies A_K x_K = b and x_K ≥ 0, together with x_K̄ = 0, is an optimal solution of (1). Since the primal optimal basic feasible solution x̄_K obtained in the previous iteration is such a feasible solution, we can get an optimal basis of (1). If c̄_K = 0 at the first iteration, any feasible solution of (1) is an optimal solution, so we can get an optimal basis by applying the dual simplex method to (1) with c replaced by any vector of integers whose absolute values are bounded by n²Δ.

Now we show that, for each i ∈ J, x_i = 0 at any optimal solution x of the LP problem (1).

Lemma 1 (Lemma 1.1 in Tardos [6]) Let x̄_K and (ȳ, z̄_K) be the optimal solutions of (6) and (7), respectively. Then for any optimal solution x*_K of

    min  d̄_K^T x_K
    s.t. A_K x_K = b,  x_K ≥ 0,                             (8)

we have that d_i − a_i^T ȳ ≥ nΔ implies x*_i = 0 for any i ∈ K, where a_i is the i-th column of A_K.

Proof: Suppose to the contrary that there exists an optimal solution x*_K of (8) such that d_j − a_j^T ȳ ≥ nΔ and x*_j > 0 for some j ∈ K. A positive scalar multiple of the vector x*_K − x̄_K is a solution of the inequality system

    A_K u = 0,
    u_j ≥ 1,
    u_i ≥ 0 for each i with x*_i − x̄_i > 0 and i ≠ j,      (9)
    u_i ≤ 0 for each i with x*_i − x̄_i < 0,
    u_i = 0 for each i with x*_i − x̄_i = 0.

The system (9) also has a basic feasible solution û with ‖û‖_∞ ≤ Δ by Cramer's rule. If d̄_i − a_i^T ȳ > 0 then z̄_i = d_i − a_i^T ȳ > 0, which implies x̄_i = 0 from the complementarity condition. Thus û_i ≥ 0 for each i with d̄_i − a_i^T ȳ > 0. Hence we have

    d̄_K^T û = (d̄_K − A_K^T ȳ)^T û
            = Σ_{i: d̄_i − a_i^T ȳ > 0} (d̄_i − a_i^T ȳ) û_i + Σ_{i: d̄_i − a_i^T ȳ < 0} (d̄_i − a_i^T ȳ) û_i
            ≥ (d̄_j − a_j^T ȳ) û_j + Σ_{i: d̄_i − a_i^T ȳ < 0} (d̄_i − d_i + z̄_i) û_i
            ≥ nΔ − (n − 1)Δ > 0.

On the other hand, let (y*, z*_K) be an optimal solution of the dual problem of (8),

    max  b^T y
    s.t. A_K^T y + z_K = d̄_K,  z_K ≥ 0.

Then we obtain a contradiction as follows:

    d̄_K^T û = (A_K^T y* + z*_K)^T û = (z*_K)^T û = Σ_{i: z*_i > 0} z*_i û_i ≤ 0,

because z*_i > 0 implies x*_i = 0, so û_i ≤ 0 from (9). □

Lemma 2 (Lemma 1.2 in Tardos [6]) The set J defined in Step 2 of the algorithm contains at least one index i.

Proof: Since d̄_K ∈ {x_K | A_K x_K = 0} by Step 1, we have ‖d̄_K‖ ≤ ‖d̄_K − A_K^T y‖ for any y. Since ‖d̄_K‖ = n²Δ, we get

    ‖d_K − A_K^T ȳ‖_∞ ≥ (1/n) ‖d_K − A_K^T ȳ‖ ≥ (1/n) ‖d̄_K‖ ≥ nΔ.

Since every component of d_K − A_K^T ȳ = z̄_K is nonnegative, some index i ∈ K satisfies d_i − a_i^T ȳ ≥ nΔ, that is, J is nonempty. □

3 Feasibility and an initial basic feasible solution

In this section we show how to check whether the feasible region

    F = {(y, z_K) | A_K^T y + z_K = d_K, z_K ≥ 0}

is empty and, simultaneously, how to compute a basic feasible solution of it by the simplex method. Let B ⊆ K be an index set of any basis such that A_B is nonsingular, and let B̄ = K \ B. Then a basic solution of F is computed as

    y = (A_B^T)^{-1} d_B,  z_B = 0,  z_B̄ = d_B̄ − A_B̄^T (A_B^T)^{-1} d_B.   (10)

If z_B̄ ≥ 0, then this is a basic feasible solution of F. Otherwise let I = {i ∈ K | z_i < 0} and define the artificial dual problem

    max  Σ_{i ∈ I} z_i
    s.t. A_K^T y + z_K = d_K,                               (11)
         z_i ≥ 0 (i ∈ K \ I),  z_i ≤ 0 (i ∈ I).

Obviously the solution (10) is a basic feasible solution of this problem, so we can solve the problem (11) by the simplex method. Suppose that the

problem is nondegenerate. Since the objective function value is bounded above by 0, the simplex method finds an optimal solution by generating a sequence of basic feasible solutions along which the objective function value increases. If we reach an optimal solution at which z_i < 0 for each i ∈ I, then F is empty by Lemma 3 below. Otherwise, some variable z_i (i ∈ I) becomes zero at some basic feasible solution (ŷ, ẑ_K). Then we define a new index set Î = {i ∈ K | ẑ_i < 0}. Obviously |Î| < |I|. If Î is empty, the solution (ŷ, ẑ_K) is a basic feasible solution of F. Otherwise it is a basic feasible solution of (11) for the new I = Î. We repeat the above procedure until I = ∅, at which point we obtain a basic feasible solution of F.

Lemma 3 If there is an optimal solution (y*, z*_K) of (11) such that z*_i < 0 for each i ∈ I, then F is empty.

Proof: If there existed (y, z_K) ∈ F, then ϵ(y, z_K) + (1 − ϵ)(y*, z*_K) for a sufficiently small ϵ > 0 would be a feasible solution of (11) whose objective function value is bigger than that at (y*, z*_K). This contradicts the optimality of (y*, z*_K). □

The procedure above is repeated at most m times. Since the algorithm proposed in the previous section performs at most n iterations, we obtain the next theorem from the discussion in the Introduction and Section 2.

Theorem 1 The algorithm proposed in Section 2 solves at most (m + 1)n artificial dual LP problems of the form (7) or (11) by the simplex method. If all the problems are nondegenerate, then the total number of basic solutions generated by the algorithm is bounded by

    (m + 1) m² n⁴ Δ³ log(m n³ Δ³).

The algorithm detects whether the problem (1) has an optimal solution, and when (1) has an optimal solution, the algorithm finds an optimal basis. Note that the strongly polynomial algorithm proposed by Tardos [6] is theoretically efficient, but it is not realistic to use it in practice. Our algorithm proposed in Section 2 could practically be used for solving an LP problem.
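The basic solution (10) and the sign test that decides between accepting it and setting up the artificial problem (11) are easy to mimic. In this sketch all data are hypothetical; it only demonstrates formula (10) and the construction of the initial index set I.

```python
import numpy as np

def dual_basic_solution(A_K, d_K, basis):
    """Formula (10): y = (A_B^T)^{-1} d_B and z = d_K - A_K^T y,
    so that z_B = 0 automatically on the basic indices."""
    A_B = A_K[:, basis]
    y = np.linalg.solve(A_B.T, d_K[basis])
    z = d_K - A_K.T @ y
    return y, z

# Hypothetical data; the basis [0, 1] makes A_B the identity matrix.
A_K = np.array([[1.0, 0.0, 1.0],
                [0.0, 1.0, 1.0]])
d_K = np.array([2.0, 3.0, 4.0])

y, z = dual_basic_solution(A_K, d_K, [0, 1])
print(z)     # z = (0, 0, -1): one negative component
# Since z has a negative component, (y, z) is not feasible for F; the
# artificial problem (11) would be started from it with I as below.
I = [i for i, zi in enumerate(z) if zi < -1e-12]
print(I)     # 0-indexed: the third component of z is negative
```

Each pass of the Section 3 procedure drives at least one of these negative components to zero, which is why it terminates after at most |I| passes.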
When we perform the algorithm, we can use the optimal basis B computed at Step 2 as the initial basis of the procedure above at the next iteration.
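For concreteness, the bound of Theorem 1 can be evaluated on a small totally unimodular instance. The helper below is an illustration only; the paper does not fix the base of the logarithm (a change of base alters the bound by a constant factor), so the natural logarithm is assumed here.

```python
import math

def theorem1_bound(m, n, Delta=1):
    """Evaluate (m + 1) m^2 n^4 Delta^3 log(m n^3 Delta^3), with log = ln."""
    return (m + 1) * m**2 * n**4 * Delta**3 * math.log(m * n**3 * Delta**3)

# A totally unimodular instance (Delta = 1) with m = 3 constraints and
# n = 5 variables: the bound is on the order of 10^5 basic solutions.
print(theorem1_bound(3, 5))
```

The cubic dependence on Δ is visible immediately: doubling Δ multiplies the polynomial factor by 8 (plus a logarithmic term), which is why the bound is only strongly polynomial when Δ itself is polynomial in m and n.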

4 Concluding remarks

Throughout the paper, we assume that all the dual problems solved by our algorithm are nondegenerate. We think that this assumption is strong, but it can be relaxed as follows. If we observe that the basic feasible solution does not move while performing the dual simplex method with Dantzig's rule, we can switch to an anti-cycling rule such as Bland's rule [1]. After some pivots under the anti-cycling rule, the basic feasible solution moves to a new one, and we then switch back to Dantzig's rule. If the total number of pivots performed at such degenerate solutions is polynomially bounded in m, n, and Δ, our algorithm is still strongly polynomial.

Acknowledgment

I wish to express my thanks to T. Kitahara and N. Sukegawa for many discussions on Tardos's algorithm. This research is supported in part by Grant-in-Aid for Scientific Research (A) 26242027 of the Japan Society for the Promotion of Science. Part of this research was done while the author visited the LRI, University of Paris-Sud, as a Digiteo invited researcher.

References

[1] R. G. Bland, New finite pivoting rules for the simplex method, Math. Oper. Res., 2, 103-107 (1977)

[2] N. Karmarkar, A new polynomial-time algorithm for linear programming, Combinatorica, 4, 373-395 (1984)

[3] L. G. Khachiyan, A polynomial algorithm in linear programming, Soviet Math. Dokl., 20, 191-194 (1979)

[4] T. Kitahara and S. Mizuno, On the number of solutions generated by the dual simplex method, Oper. Res. Lett., 40, 172-174 (2012)

[5] T. Kitahara and S. Mizuno, A bound for the number of different basic solutions generated by the simplex method, Math. Program., 137, 579-586 (2013)

[6] É. Tardos, A strongly polynomial algorithm to solve combinatorial linear programs, Oper. Res., 34, 250-256 (1986)

[7] Y. Ye, The simplex and policy-iteration methods are strongly polynomial for the Markov decision problem with a fixed discount rate, Math. Oper. Res., 36, 593-603 (2011)