A primal-simplex based Tardos algorithm

Shinji Mizuno (a), Noriyoshi Sukegawa (a), and Antoine Deza (b)

(a) Graduate School of Decision Science and Technology, Tokyo Institute of Technology, 2-12-1-W9-58, Oo-Okayama, Meguro-ku, Tokyo, 152-8552, Japan.
(b) Advanced Optimization Laboratory, Department of Computing and Software, McMaster University, Hamilton, Ontario, Canada.

September 6, 2014

Abstract

In the mid-eighties Tardos proposed a strongly polynomial algorithm for solving linear programming problems for which the size of the coefficient matrix is polynomially bounded by the dimension. Combining Orlin's primal-based modification and Mizuno's use of the simplex method, we introduce a modification of Tardos' algorithm considering only the primal problem and using the simplex method to solve the auxiliary problems. The proposed algorithm is strongly polynomial if the coefficient matrix is totally unimodular and the auxiliary problems are non-degenerate.

Keywords: Tardos' algorithm, simplex method, strongly polynomial algorithm, total unimodularity

1 Introduction

In the mid-eighties Tardos [1, 2] proposed a strongly polynomial algorithm for solving linear programming problems $\min\{c^\top x \mid Ax = b,\ x \ge 0\}$ for which the sizes of the coefficients of $A$ are polynomially bounded by the dimension. Such instances include minimum cost flow, bipartite matching, multicommodity flow, and vertex packing in chordal graphs. The basic strategy of Tardos' algorithm is to identify the coordinates equal to zero at optimality. The algorithm involves solving several auxiliary dual problems by the ellipsoid or interior-point methods. By successively identifying such vanishing coordinates, the problem is made smaller and an optimal solution is obtained inductively. Orlin [6] proposed a modification of Tardos' algorithm considering only the primal problem; that is, identifying the coordinates strictly positive at optimality. He observed that the right-hand side coefficients of the auxiliary problems might be impractically large.
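For very small instances, problems of the form $\min\{c^\top x \mid Ax = b,\ x \ge 0\}$ can be solved by brute-force enumeration of bases, a useful foil for the discussion: the algorithms studied here aim precisely to avoid this exponential behavior. The sketch below is an illustrative aid, not part of the paper, and the tiny instance is invented.

```python
from itertools import combinations

import numpy as np

def solve_lp_by_basis_enumeration(A, b, c):
    """Solve min c^T x s.t. Ax = b, x >= 0 by enumerating all bases.

    Exponential-time and for illustration only: it visits every basic
    feasible solution, whereas the simplex method visits a path of them.
    """
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    c = np.asarray(c, dtype=float)
    m, n = A.shape
    best_x, best_val = None, np.inf
    for B in combinations(range(n), m):
        AB = A[:, list(B)]
        if abs(np.linalg.det(AB)) < 1e-9:
            continue  # not a basis
        xB = np.linalg.solve(AB, b)
        if np.any(xB < -1e-9):
            continue  # basic solution is not feasible
        x = np.zeros(n)
        x[list(B)] = xB
        val = c @ x
        if val < best_val:
            best_x, best_val = x, val
    return best_x, best_val

# Invented toy instance with a totally unimodular constraint matrix.
A = [[1, 1, 0, 0],
     [0, 1, 1, 1]]
b = [2, 3]
c = [1, 3, 1, 4]
x_opt, val = solve_lp_by_basis_enumeration(A, b, c)
```

Since an LP in this form attains its optimum at a basic feasible solution, scanning all bases is correct, just hopelessly slow beyond toy sizes.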

In 2014, Mizuno [5] modified Tardos' algorithm by using a dual simplex method to solve the auxiliary problems. He observed that this approach is strongly polynomial if $A$ is totally unimodular and the auxiliary problems are non-degenerate; that is, the basic variables are strictly positive for every basic feasible solution. The strong polynomiality is a consequence of the results of Kitahara and Mizuno [3, 4], which extend in part Ye's result [8] for Markov decision problems and bound the number of distinct basic feasible solutions generated by the simplex method.

Combining Orlin's and Mizuno's approaches, we introduce a modification of the algorithm proposed by Mizuno considering only the primal problem. The proposed algorithm is strongly polynomial if $A$ is totally unimodular and the auxiliary problems are non-degenerate. As it involves only the primal and does not suffer from impractically large right-hand side coefficients, the proposed algorithm improves the implementability of the approach. While the proposed algorithm and the complexity analysis focus on the case where $A$ is totally unimodular, the algorithm could be enhanced to handle general matrices. The enhanced algorithm would be strongly polynomial if the absolute value of any subdeterminant of $A$ is polynomially bounded by the dimension.

2 A primal-simplex based Tardos algorithm

2.1 Formulation and main result

Consider the following formulation:

    minimize $c^\top x$ subject to $Ax = b,\ x \ge 0$,    (1)

where $A \in \mathbb{R}^{m \times n}$, $b \in \mathbb{R}^m$, and $c \in \mathbb{R}^n$ are given. The optimal solution of (1), if any, is assumed without loss of generality to be unique. Otherwise $c$ could be perturbed by $(\epsilon, \epsilon^2, \ldots, \epsilon^n)$ for a sufficiently small $\epsilon > 0$. Alternatively, the simplex method can be performed using a lexicographic order if a tie occurs when choosing an entering variable by Dantzig's rule. Let $K^* \subseteq N = \{1, 2, \ldots, n\}$ be the optimal basis of (1). The proposed algorithm inductively builds a subset $K \subseteq K^*$ through solving an auxiliary problem. If $K = K^*$, we have obtained the optimal solution. Otherwise, we obtain a smaller yet equivalent problem by deleting the variables corresponding to $K$. Thus, the optimal solution is obtained inductively. For clarity of the exposition of the algorithm and of the proof of Theorem 1, we assume in the remainder of the paper that $A$ is totally unimodular; that is, all its subdeterminants are equal to $-1$, $0$, or $1$.

Theorem 1. The primal-simplex based Tardos algorithm is strongly polynomial if $A$ is totally unimodular and all the auxiliary problems are non-degenerate; that is, all the basic variables are strictly positive for every basic feasible solution.

Proof. See Section 3.
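On small matrices, total unimodularity can be checked directly from the definition by enumerating all square submatrices. The sketch below is an illustrative aid (not part of the paper); the two example matrices are standard textbook cases, not instances from the text.

```python
from itertools import combinations

import numpy as np

def is_totally_unimodular(A, tol=1e-9):
    """Brute-force check that every square subdeterminant of A is -1, 0, or 1.

    Exponential-time; intended only for small illustrative matrices.
    """
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    for k in range(1, min(m, n) + 1):
        for rows in combinations(range(m), k):
            for cols in combinations(range(n), k):
                d = np.linalg.det(A[np.ix_(rows, cols)])
                if round(d) not in (-1, 0, 1) or abs(d - round(d)) > tol:
                    return False
    return True

# Vertex-edge incidence matrix of a bipartite graph: totally unimodular.
bipartite = [[1, 1, 0, 0],
             [0, 0, 1, 1],
             [1, 0, 1, 0],
             [0, 1, 0, 1]]

# Vertex-edge incidence matrix of a triangle (an odd cycle): not totally
# unimodular, since its determinant is -2.
triangle = [[1, 1, 0],
            [1, 0, 1],
            [0, 1, 1]]
```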

2.2 A primal-simplex based Tardos algorithm

Step 0 (initialization): Let $K := \emptyset$ and its complement $\bar{K} := N$.

Step 1 (reduction): If $K \ne \emptyset$, remove the variables corresponding to $K$ in the following way. Let $G \in \mathbb{R}^{m \times m}$ be a nonsingular submatrix of $A$ such that its first $|K|$ columns form $A_K$, and let $H = G^{-1}$. Let $H_1$ consist of the first $|K|$ rows of $H$, let $H_2$ denote the remainder, and consider the following reduced problem:

    minimize $c'^\top x'$ subject to $A'x' = b',\ x' \ge 0$,    (2)

where $A' = H_2 A_{\bar{K}}$, $b' = H_2 b$, $c' = c_{\bar{K}} - (H_1 A_{\bar{K}})^\top c_K$, and $x' = x_{\bar{K}}$. If $K = \emptyset$, set $A' := A$, $b' := b$, and $c' := c$. Go to Step 2.

Step 2 (scaling and rounding): Let $m' = m - |K|$ and $n' = n - |K|$. For a basis $L \subseteq \bar{K}$ of $A'$ and $\bar{L} = \bar{K} \setminus L$, rewrite (2) as:

    minimize $c'^\top x'$ subject to $x'_L + (A'_L)^{-1} A'_{\bar{L}} x'_{\bar{L}} = (A'_L)^{-1} b',\ x' \ge 0$.    (3)

If $(A'_L)^{-1} b' = 0$, stop. Otherwise, consider the following scaled problem:

    minimize $c'^\top x'$ subject to $x'_L + (A'_L)^{-1} A'_{\bar{L}} x'_{\bar{L}} = (A'_L)^{-1} b'/k,\ x' \ge 0$,    (4)

where $k = \|A'^\top (A'A'^\top)^{-1} b'\|_2 / (\sqrt{m'} + (n')^2)$. Then, consider the following rounded problem:

    minimize $c'^\top x'$ subject to $x'_L + (A'_L)^{-1} A'_{\bar{L}} x'_{\bar{L}} = \lceil (A'_L)^{-1} b'/k \rceil,\ x' \ge 0$.    (5)

If (5) is infeasible, stop. Otherwise, solve (5) using the simplex method with Dantzig's rule. If (5) is unbounded, stop. Otherwise, let $\tilde{x}$ denote the optimal solution and $\tilde{L}$ the optimal basis. If $K \cup \tilde{L}$ is an optimal basis of the original problem (1), stop. Otherwise, go to Step 3.

Step 3 (iteration): Set $K := K \cup J$ and $\bar{K} := \bar{K} \setminus J$, where $J = \{ i \mid \tilde{x}_i \ge n',\ i \in \bar{K} \}$. If $|\bar{K}| = n - m$, stop. Otherwise, go to Step 1.
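To make Step 2 concrete, the sketch below computes the scaling factor $k$ and the rounded right-hand side $\lceil (A'_L)^{-1} b'/k \rceil$ for a small reduced problem. This is an illustrative aid, not code from the paper: the matrix, right-hand side, and basis are invented, and the formula for $k$ follows the reconstruction of Step 2 given above.

```python
import numpy as np

def scale_and_round_rhs(A, b, L):
    """Step-2 quantities for a reduced problem min c'x s.t. Ax = b, x >= 0.

    A must have full row rank and L must index a basis of A. Returns
    (k, scaled_rhs, rounded_rhs), where
      k           = ||A^T (A A^T)^{-1} b||_2 / (sqrt(m) + n**2),
      scaled_rhs  = A_L^{-1} b / k            (right-hand side of (4)),
      rounded_rhs = ceil(scaled_rhs)          (right-hand side of (5)).
    """
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    m, n = A.shape
    x_min = A.T @ np.linalg.inv(A @ A.T) @ b  # minimum l2-norm solution of Ax=b
    k = np.linalg.norm(x_min) / (np.sqrt(m) + n**2)
    scaled = np.linalg.inv(A[:, L]) @ b / k
    return k, scaled, np.ceil(scaled)

# Invented totally unimodular matrix; columns 0 and 1 form the basis L.
A = np.array([[1.0, 0.0, 1.0, -1.0],
              [0.0, 1.0, -1.0, 1.0]])
b = np.array([3.0, 5.0])
k, scaled, rounded = scale_and_round_rhs(A, b, [0, 1])
```

Note that the rounding only ever increases each coordinate by strictly less than one, which is the fact exploited in Lemma 1 and Corollary 1.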

2.3 Annotations of the proposed algorithm

If $K \subseteq K^*$ and the optimal solution of (1) is unique, we can remove the nonnegativity constraints on $x_i$ for $i \in K$. In Step 1, the reduced problem (2) is obtained by expressing $x_K$ as $H_1 b - H_1 A_{\bar{K}} x_{\bar{K}}$ and substituting $H_1 b - H_1 A_{\bar{K}} x_{\bar{K}}$ for $x_K$ in the objective function. Therefore, the optimal solution for (2) yields the optimal solution for (1) via $x_K = H_1 b - H_1 A_{\bar{K}} x_{\bar{K}}$. The constant term in the objective function is removed for simplicity. Note that the matrices $A'$ and $[I, (A'_L)^{-1} A'_{\bar{L}}]$ involved in (2), (3), (4), and (5) are totally unimodular if $A$ is totally unimodular; see Theorem 19.5 in Schrijver [7].

In Step 2, the scaling factor $k$ is strictly positive if $(A'_L)^{-1} b' \ne 0$ and, see Lemma 2, $\lceil (A'_L)^{-1} b'/k \rceil$ is polynomially bounded above in $m$ and $n$, which is a key fact for showing the strong polynomiality. Although the proposed algorithm builds the simplex tableau associated with (3) and the reduced problem (2) from scratch at each iteration, this is essentially for clarity of the exposition and can be avoided. In particular, one can observe that $\tilde{L} \setminus J$ can be used as the basis $L$ for (3) at the next iteration, thus enabling a warm start. By performing Phase one of the two-phase simplex method for the rounded problem (5), we can check the feasibility of (5) and compute an initial basic feasible solution, unless it is infeasible.

In Step 3, $J \ne \emptyset$ by Lemma 1; that is, the size of $\bar{K}$ is strictly decreasing. Thus, the proposed algorithm terminates after at most $m$ iterations. If (1) has an optimal solution, $K \subseteq K^*$ by Corollary 1. The stopping conditions of the proposed algorithm are:

- if $(A'_L)^{-1} b' = 0$, the simplex tableau associated with (3) yields either the optimality of $x' = 0$ or the unboundedness of the reduced problem (2);
- since the rounded problem (5) is a relaxation of the scaled problem (4), the scaled problem (4) and the original problem (1) are both infeasible if (5) is infeasible;
- the scaled problem (4) is unbounded or infeasible if (5) is unbounded; in both cases, the original problem (1) has no optimal solution;
- if $|\bar{K}| = n - m$ in Step 3, the problem (1) is infeasible, as otherwise the algorithm finds an optimal basis in Step 2.

3 Proof of Theorem 1

Lemma 1 states that the set $J = \{i \mid \tilde{x}_i \ge n',\ i \in \bar{K}\}$ used in Step 3 is never empty and thus the proposed algorithm solves the rounded problem (5) at most $m$ times.
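The proof of Lemma 1 relies on the fact that, for any right-hand side $g$, the minimum $\ell_2$-norm point satisfying $A'x = g$ is $A'^\top (A'A'^\top)^{-1} g$. A quick numerical sanity check of this projection fact, with invented illustrative values:

```python
import numpy as np

# A full-row-rank matrix and an arbitrary right-hand side (invented values).
A = np.array([[1.0, 0.0, 1.0, -1.0, 0.0],
              [0.0, 1.0, -1.0, 1.0, 1.0],
              [1.0, 1.0, 0.0, 0.0, -1.0]])
g = np.array([2.0, -1.0, 3.0])

# Claimed minimum l2-norm point satisfying A x = g.
x_star = A.T @ np.linalg.inv(A @ A.T) @ g

# Every other solution is x_star + h with h in the null space of A. Since
# x_star lies in the row space of A, it is orthogonal to any such h, so the
# Pythagorean theorem gives ||x_star + h||_2 >= ||x_star||_2.
P_null = np.eye(5) - A.T @ np.linalg.inv(A @ A.T) @ A  # projector onto null(A)
rng = np.random.default_rng(0)
for _ in range(5):
    h = P_null @ rng.standard_normal(5)
    assert np.linalg.norm(x_star + h) >= np.linalg.norm(x_star) - 1e-12
```

For a full-row-rank matrix this point coincides with the Moore-Penrose pseudoinverse solution, so `np.linalg.pinv(A) @ g` returns the same vector.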

Lemma 1. $J \ne \emptyset$, as any feasible solution $x$ of the rounded problem (5) satisfies $\|x\|_\infty > n'$.

Proof. Let $x$ be any feasible solution of the rounded problem (5). Then $A'x = A'_L \lceil (A'_L)^{-1} b'/k \rceil$. Since, for any $g$, the minimum $\ell_2$-norm point satisfying $A'x = g$ is $A'^\top (A'A'^\top)^{-1} g$, we have

    $\|x\|_2 \ge \| A'^\top (A'A'^\top)^{-1} A'_L \lceil (A'_L)^{-1} b'/k \rceil \|_2$
        $\ge \| A'^\top (A'A'^\top)^{-1} b'/k \|_2 - \| A'^\top (A'A'^\top)^{-1} [A'_L,\ 0]\, \tilde{d} \|_2$
        $= (\sqrt{m'} + (n')^2) - \| A'^\top (A'A'^\top)^{-1} [A'_L,\ 0]\, \tilde{d} \|_2$
        $\ge (n')^2 + \sqrt{m'} - \|d\|_2$,

where $k = \| A'^\top (A'A'^\top)^{-1} b' \|_2 / (\sqrt{m'} + (n')^2)$, $d = \lceil (A'_L)^{-1} b'/k \rceil - (A'_L)^{-1} b'/k$, and $\tilde{d} = (d, 0)$; the last inequality holds as $A'^\top (A'A'^\top)^{-1} A'$ is an orthogonal projection. Since $\|d\|_\infty < 1$, and thus $\|d\|_2 < \sqrt{m'}$, we obtain $\|x\|_\infty \ge \|x\|_2 / n' > ((n')^2 + \sqrt{m'} - \sqrt{m'})/n' = n'$.

Corollary 1 is a direct consequence of Theorem 2 and shows that $K \subseteq K^*$.

Theorem 2 (Theorem 10.5 in Schrijver [7]). Let $A$ be an $m \times n$ matrix, and let $\Delta$ be such that for each nonsingular submatrix $B$ of $A$ all entries of $B^{-1}$ are at most $\Delta$ in absolute value. Let $c$ be a column $n$-vector, and let $\tilde{b}$ and $\hat{b}$ be column $m$-vectors such that $\tilde{P}: \max\{c^\top x \mid Ax \le \tilde{b}\}$ and $\hat{P}: \max\{c^\top x \mid Ax \le \hat{b}\}$ are finite. Then, for each optimal solution $\tilde{x}$ of $\tilde{P}$, there exists an optimal solution $\hat{x}$ of $\hat{P}$ with $\|\tilde{x} - \hat{x}\|_\infty \le n \Delta \|\tilde{b} - \hat{b}\|_\infty$.

Corollary 1. Let $\tilde{x}$ be an optimal solution of the rounded problem (5), and $J = \{i \mid \tilde{x}_i \ge n',\ i \in \bar{K}\}$ as defined in Step 3 of the proposed algorithm. If the scaled problem (4) is feasible, the $i$-th coordinate of the optimal solution of the scaled problem (4) is strictly positive for $i \in J$. Furthermore, the same holds for the reduced problem (2) and the original problem (1) as the scaling factor $k$ is strictly positive.

Proof. Define $\tilde{A} \in \mathbb{R}^{(2m'+n') \times n'}$ and $\tilde{b}, \hat{b} \in \mathbb{R}^{2m'+n'}$ as:

    $\tilde{A} = \begin{pmatrix} E \\ -E \\ -I \end{pmatrix}, \quad \tilde{b} = \begin{pmatrix} \lceil (A'_L)^{-1} b'/k \rceil \\ -\lceil (A'_L)^{-1} b'/k \rceil \\ 0 \end{pmatrix}, \quad \hat{b} = \begin{pmatrix} (A'_L)^{-1} b'/k \\ -(A'_L)^{-1} b'/k \\ 0 \end{pmatrix},$

where $E = [I, (A'_L)^{-1} A'_{\bar{L}}]$. With this notation, the rounded problem (5), respectively the scaled problem (4), can be restated as $\tilde{P}: \max\{-c'^\top x' \mid \tilde{A} x' \le \tilde{b}\}$, respectively $\hat{P}: \max\{-c'^\top x' \mid \tilde{A} x' \le \hat{b}\}$. Since $E$ is totally unimodular, $\tilde{A}$ is totally unimodular, and thus $\Delta = 1$ in Theorem 2. In addition, note that $\|\tilde{b} - \hat{b}\|_\infty < 1$. Recall that the scaled problem (4) and $\hat{P}$ share the same unique optimal solution $\hat{x}$, as the optimal solution of the original problem (1) is assumed to be unique. Therefore, since $\tilde{x}$ is an optimal solution of $\tilde{P}$, we observe that $\|\tilde{x} - \hat{x}\|_\infty < n'$ by Theorem 2 and thus $\hat{x}_i > 0$ for $i \in J$.

Finally, we show the strong polynomiality of the proposed algorithm using the results of Kitahara and Mizuno [3, 4] showing that the number of different basic feasible solutions generated by the primal simplex method with the most negative pivoting rule (Dantzig's rule) or the best improvement pivoting rule is bounded by:

    $n \left\lceil m \frac{\gamma}{\delta} \log\left(m \frac{\gamma}{\delta}\right) \right\rceil$,

where $m$ is the number of constraints, $n$ is the number of variables, and $\delta$ and $\gamma$ are, respectively, the minimum and the maximum values of all the positive elements of the primal basic feasible solutions. Thus, we need to estimate the values $\gamma$ and $\delta$ for the introduced auxiliary problems. Since the coefficient matrices used in the proposed algorithm are totally unimodular and the right-hand side vector of the rounded problem (5) is integer, we have $\delta = 1$. For $\gamma$, we use Lemma 2.

Lemma 2. For the auxiliary problem (5), we have $\gamma \le \gamma' = m(mn(m+n^2)+1)$.

Proof. Note that the right-hand side vector of (5) is $\lceil (A'_L)^{-1} b'/k \rceil$. By the total unimodularity, we observe that

    $\| \lceil (A'_L)^{-1} b'/k \rceil \|_\infty \le \| (A'_L)^{-1} b'/k \|_\infty + 1 \le \sqrt{m'} \|b'\|_2 / k + 1$.

The numerator $\| A'^\top (A'A'^\top)^{-1} b' \|_2$ of $k$ is bounded below by $\|b'\|_2 / n'$, implying $\| \lceil (A'_L)^{-1} b'/k \rceil \|_\infty \le \sqrt{m'} n' (\sqrt{m'} + (n')^2) + 1$. Thus, by Cramer's rule and the total unimodularity of the coefficient matrix of (5), the $\ell_\infty$-norm of a basic solution of (5) is bounded above by $m'(\sqrt{m'} n' (\sqrt{m'} + (n')^2) + 1) \le m(mn(m+n^2)+1) = \gamma'$.

The two-phase simplex algorithm is called at most $m$ times. Thus, the number of auxiliary problems solved by the proposed algorithm is bounded above by $2m$, as each call corresponds to 2 auxiliary problems: one for each phase. Therefore, if all the auxiliary problems are non-degenerate, the total number of basic solutions generated by the algorithm is bounded above by $2m \left[ n \lceil m \gamma' \log(m \gamma') \rceil \right]$; that is, by

    $2mn \left\lceil (m^4 n + m^3 n^3 + m^2) \log(m^4 n + m^3 n^3 + m^2) \right\rceil$,

which completes the proof of Theorem 1. Alternatively, since $m \le n$, this bound can be restated as $O(m^4 n^4 \log n)$. While assuming the non-degeneracy of the auxiliary problems is needed to use the Kitahara-Mizuno bound, the number of degenerate updates of bases at a single basic solution is typically not too large in practice.

Acknowledgment

Research supported in part by Grant-in-Aid for Scientific Research (A) 26242027

of Japan Society for the Promotion of Science (JSPS), Grant-in-Aid for JSPS Fellows, the Natural Sciences and Engineering Research Council of Canada, the Digiteo Chair C&O program, and by the Canada Research Chairs program. Part of this research was done while the authors were at the LRI, Université Paris-Sud, Orsay, France, within the Digiteo invited researchers program.

References

[1] É. Tardos: A strongly polynomial minimum cost circulation algorithm, Combinatorica (1985) 247–255.
[2] É. Tardos: A strongly polynomial algorithm to solve combinatorial linear programs, Operations Research (1986) 250–256.
[3] T. Kitahara and S. Mizuno: On the number of solutions generated by the dual simplex method, Operations Research Letters (2012) 172–174.
[4] T. Kitahara and S. Mizuno: A bound for the number of different basic solutions generated by the simplex method, Mathematical Programming (2013) 579–586.
[5] S. Mizuno: A strongly polynomial simplex method for totally unimodular LP, Technical Report 2014-3, Department of Industrial Engineering and Management, Tokyo Institute of Technology (2014).
[6] J. B. Orlin: A dual version of Tardos's algorithm for linear programming, Operations Research Letters (1986) 221–226.
[7] A. Schrijver: Theory of Linear and Integer Programming, John Wiley & Sons (1986).
[8] Y. Ye: The simplex and policy-iteration methods are strongly polynomial for the Markov decision problem with a fixed discount rate, Mathematics of Operations Research (2011) 593–603.