
On Finding Primal- and Dual-Optimal Bases

Nimrod Megiddo*

(revised June 1990)

Abstract. We show that if there exists a strongly polynomial time algorithm that finds a basis which is optimal for both the primal and the dual problems, given an optimal solution for one of the problems, then there exists a strongly polynomial algorithm for the general linear programming problem. On the other hand, we give a strongly polynomial time algorithm that finds such a basis, given any pair of optimal solutions (not necessarily basic) for the primal and the dual problems. Such an algorithm is needed when one is using an interior point method and is interested in finding a basis which is both primal- and dual-optimal.

Subject classification: Programming, Linear, Algorithms and Theory

* IBM Research, Almaden Research Center, San Jose, CA 95120-6099, and School of Mathematical Sciences, Tel Aviv University, Tel Aviv, Israel

Introduction

The reader is referred, for example, to [2] for information about standard results in linear programming which are used in this work. The simplex method for linear programming has the nice property that if the problem has an optimal solution, then a basis is found which is both primal-optimal and dual-optimal, i.e., both of the basic solutions defined by such a basis in the primal and the dual problems are optimal in their respective problems. For brevity we call such a basis an optimal basis.

An optimal basis is a useful object for post-optimality analysis. The known polynomial-time algorithms for linear programming generate optimal solutions which are not necessarily basic. Given any primal-optimal solution (not necessarily basic), it is easy to find a primal-optimal basis. Analogously, given any dual-optimal solution, it is easy to find a dual-optimal basis. However, neither of the two bases found in this way is guaranteed to be an optimal basis. In fact, the dual solution associated with a primal-optimal basis and the primal solution associated with a dual-optimal basis may both be infeasible in their respective problems. Furthermore, if the problem is put into the combined primal-dual form, a primal-optimal basis of the combined form yields a primal-optimal basis and a dual-optimal one for the original problem, but these two bases may be distinct.

Since no polynomial-time variant of the simplex method is known, this raises the question whether an optimal basis can be found in polynomial time in terms of the input size of a problem with rational data. We answer this question in the affirmative. Actually, we prove a stronger result using the concept of strongly polynomial time complexity. For simplicity, we say that an algorithm for linear programming runs in strongly polynomial time if it performs no more than p(m, n) arithmetic operations and comparisons, for some fixed polynomial p. (When applied to problems with rational data, the strongly polynomial algorithms of this paper involve only numbers of polynomial size.) It is not known whether there exists a strongly polynomial algorithm for the general linear programming problem. Since this question is open and seems difficult, we consider here a related problem concerning basic solutions. We prove the following two complementary theorems, which shed some light on the complexity of finding optimal bases.

Theorem 0.1. If there exists a strongly polynomial time algorithm that finds an optimal basis, given an optimal solution for either the primal or the dual, then there exists a strongly polynomial algorithm for the general linear programming problem.

Theorem 0.2. There exists a strongly polynomial time algorithm that finds an optimal basis, given optimal solutions for both the primal and the dual.

We give the necessary definitions and the proofs in Section 1.

1. The results

Consider the linear programming problem in standard form

(P)    Maximize    c^T x
       subject to  Ax = b, x ≥ 0,

where the rows of A ∈ R^{m×n} are assumed to be linearly independent. The dual problem is

(D)    Minimize    b^T y
       subject to  A^T y ≥ c.
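As a concrete illustration (ours, not the paper's), the following Python sketch solves a small made-up instance of (P) and its dual (D) with scipy's linprog and confirms that the two optimal values coincide, as the duality theorem requires.

import numpy as np
from scipy.optimize import linprog

# A made-up instance of (P): maximize c^T x subject to Ax = b, x >= 0,
# with m = 2 linearly independent rows and n = 4 columns.
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [1.0, 2.0, 0.0, 1.0]])
b = np.array([4.0, 6.0])
c = np.array([3.0, 5.0, 0.0, 0.0])

# linprog minimizes, so negate c to maximize c^T x.
primal = linprog(-c, A_eq=A, b_eq=b, bounds=[(0, None)] * 4)

# (D): minimize b^T y subject to A^T y >= c, with y free
# (rewritten as -A^T y <= -c for linprog).
dual = linprog(b, A_ub=-A.T, b_ub=-c, bounds=[(None, None)] * 2)

print("primal optimum:", -primal.fun)   # 16.0
print("dual optimum:  ", dual.fun)      # 16.0 (strong duality)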

A basis is a nonsingular m × m submatrix B of A. A basis B is called primal-optimal if B^{-1}b is an optimal solution for (P) (variables not corresponding to columns of B are set to zero). Denote, as usual, by c_B the restriction of c to the components corresponding to the columns of B. A basis B is called dual-optimal if (B^T)^{-1}c_B is an optimal solution for (D). We call B optimal if it is both primal- and dual-optimal. It is well known, in view of the duality theorem, that an optimal basis is characterized by the inequalities

B^{-1}b ≥ 0    and    c_B^T B^{-1} A ≥ c^T.
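This characterization translates directly into a numerical test. The following sketch is our illustration, not part of the paper; it assumes plain numpy, floating-point data with a tolerance, and that cols holds the indices of the m candidate basic columns.

import numpy as np

def is_optimal_basis(A, b, c, cols, tol=1e-9):
    """Check B^{-1}b >= 0 and c_B^T B^{-1} A >= c^T for B = A[:, cols]."""
    B = A[:, cols]
    x_B = np.linalg.solve(B, b)           # primal basic solution B^{-1} b
    y = np.linalg.solve(B.T, c[cols])     # dual basic solution (B^T)^{-1} c_B
    primal_ok = np.all(x_B >= -tol)       # primal feasibility: B^{-1} b >= 0
    dual_ok = np.all(A.T @ y >= c - tol)  # dual feasibility: c_B^T B^{-1} A >= c^T
    return primal_ok and dual_ok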

We are interested here in the complexity of finding an optimal basis. It follows from the theory of the simplex method that such a basis exists if the problem has an optimal solution. The proof of Theorem 0.1 uses Theorem 0.2, and hence we first prove the latter.

Proof of Theorem 0.2: Suppose x and y are given optimal solutions for (P) and (D), respectively. Let X denote the submatrix of A consisting of the columns A_j such that x_j > 0. By the complementary slackness condition, y^T X = c_X^T, where c_X denotes the restriction of c to the coordinates corresponding to X. If the columns of X are linearly dependent, we can find a vector z ≠ 0 such that Xz = 0, and hence c_X^T z = y^T Xz = 0. It follows that for some scalar t, the vector x', defined by x'_j = x_j + t z_j for j in X and x'_j = 0 otherwise, is an optimal solution with a smaller set of positive coordinates. Such a vector x' can obviously be found in strongly polynomial time. Successive applications of this principle finally yield an optimal solution x' for which the set X' of columns A_j with x'_j > 0 is linearly independent. A sketch of this support-reduction step follows.
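This is our illustration of one such step, not the paper's code; it assumes numpy, takes the null-space direction from a singular value decomposition, and uses a tolerance in place of exact arithmetic.

import numpy as np

def reduce_support_once(A, x, tol=1e-9):
    """If the columns {A_j : x_j > 0} are linearly dependent, move x along a
    null-space direction z (Xz = 0) until some positive coordinate hits zero.
    The objective is unchanged, since c_X^T z = y^T X z = 0."""
    supp = np.flatnonzero(x > tol)
    X = A[:, supp]
    if np.linalg.matrix_rank(X) == len(supp):
        return x, False                    # support already independent
    z = np.linalg.svd(X)[2][-1]            # right singular vector with Xz ~ 0
    if not np.any(z < -tol):
        z = -z                             # ensure some component is negative
    neg = z < -tol
    t = np.min(x[supp][neg] / -z[neg])     # first coordinate to reach zero
    x_new = x.copy()
    x_new[supp] = np.maximum(x[supp] + t * z, 0.0)
    return x_new, True

Iterating this step until the flag comes back False produces the solution x' used in the proof.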

If X' has m columns, then we have found an optimal basis. Otherwise, we expand the set X' as follows. First, consider the submatrix Y of A consisting of the columns A_j such that y^T A_j = c_j. If Y contains any column which is linearly independent of the columns in X', then add this column to X'. Repeat this step until all the columns of Y are linearly dependent on the columns of the resulting set X'. (This procedure can easily be implemented by Gaussian elimination.) If at this point the matrix X' still consists of fewer than m columns, then we expand further as follows. We now wish to move to a point y' with one additional independent column in the set Y. By assumption, the rows of A are linearly independent, and hence there exists a column A_j which is linearly independent of the columns in X'. Thus, this column is not in Y, and hence y^T A_j > c_j. We now solve the set of equations

z^T X' = 0,    z^T A_j = 1.

A solution exists since the column A_j together with the columns of X' constitutes a linearly independent set. Now, consider vectors of the form y - tz, where t is any scalar. First, since every column of Y is now a linear combination of the columns of X', we have (y - tz)^T Y = c_Y^T. Moreover, consider the number

t_0 = min { (y^T A_k - c_k) / (z^T A_k) : z^T A_k > 0 }.

Notice that t_0 is well-defined since z^T A_j = 1 > 0. The vector y' = y - t_0 z is also a dual-optimal solution, as can be verified from the above. The set Y' of columns A_k such that y'^T A_k = c_k now contains at least one column which is linearly independent of the columns of X'. We now add such columns to X', one at a time, making sure that X' remains linearly independent. We then find another vector z, and this process is repeated until X' contains m linearly independent columns. Note that throughout this process we maintain optimal solutions x' and y' such that x'_k = 0 for every column A_k not in X', and the set Y' contains X'. Thus, at the end, X' is an optimal basis. The whole process takes no more than n "pivot" steps, since columns which leave the set X prior to the generation of the first set X' never come back, and the set X' only increases. Thus, the whole algorithm runs in strongly polynomial time.

We note that if the given optimal solutions are vertices of the respective polyhedra, then the optimal basis found by the above procedure yields the same pair of solutions.
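The two linear-algebra ingredients of this expansion phase, a direction z with z^T X' = 0 and z^T A_j = 1, and the ratio test for t_0, might be sketched as follows. This is our illustration under the proof's assumptions, with numpy assumed; Xp and a_j stand for X' and the chosen column A_j.

import numpy as np

def find_direction(Xp, a_j):
    """Solve z^T X' = 0 and z^T A_j = 1: a consistent underdetermined system,
    since the columns of X' together with A_j are linearly independent."""
    M = np.column_stack([Xp, a_j]).T       # stack the equations row-wise
    rhs = np.zeros(M.shape[0])
    rhs[-1] = 1.0
    z, *_ = np.linalg.lstsq(M, rhs, rcond=None)
    return z

def dual_step(A, c, y, z, tol=1e-9):
    """Ratio test: t_0 = min{(y^T A_k - c_k)/(z^T A_k) : z^T A_k > 0};
    returns the new dual-optimal point y' = y - t_0 z."""
    zA = z @ A                             # z^T A_k for all columns k
    slack = y @ A - c                      # dual slacks y^T A_k - c_k >= 0
    pos = zA > tol
    t0 = np.min(slack[pos] / zA[pos])      # well-defined: z^T A_j = 1
    return y - t0 * z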

For the proof of Theorem 0.1, we first state the problems in question:

Problem 1.1. Given A, b, c, find an optimal basis or conclude that no such basis exists.

Problem 1.2. Given A, b, c such that (P) and (D) are known to have optimal solutions, find an optimal basis.

Problem 1.3. Given A, b, c and an optimal solution x of (P), find an optimal basis.

The proof of Theorem 0.1 follows from the following reductions.

Proposition 1.4. If there exists a strongly polynomial time algorithm for Problem 1.3, then there exists one for Problem 1.2.

Proof: We reduce Problem 1.2 to Problem 1.3. Given A, b, c such that (P) and (D) have optimal solutions, consider the following problem, in which (P) and (D) are combined and y is replaced by z - w:

(PD)    Maximize    c^T x - b^T z + b^T w
        subject to  Ax = b,
                    -A^T z + A^T w + v = -c,
                    x, z, w, v ≥ 0.

By the duality theorem, (PD) has an optimal solution and the optimal value is 0. Consider the following problem with an additional variable λ:

(PD')   Maximize    c^T x - b^T z + b^T w
        subject to  Ax - λb = 0,
                    -A^T z + A^T w + v + λc = 0,
                    x, z, w, v, λ ≥ 0.

The dual of (PD') amounts to the following system of inequalities:

(DP')   c^T u - b^T y ≥ 0,
        A^T y ≥ c,
        -Au ≥ -b,
        Au ≥ b,
        u ≥ 0.

Since (DP') is feasible, it follows that the optimal value of (PD') is also 0. Thus, we have a trivial optimal solution for (PD'): set x, z, w, v and λ to zero. Now, by assumption, an optimal basis for (PD') can be found in strongly polynomial time. Given an optimal basis, we can compute optimal solutions for both (PD') and (DP'). We are interested in the latter. Let (y, u) be such an optimal solution for (DP'). Obviously, u is an optimal solution for (P) and y is an optimal solution for (D). By Theorem 0.2, we can now find an optimal basis for (P) in strongly polynomial time.
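For concreteness, the block structure of (PD') can be assembled as follows. This is our sketch, not part of the paper; numpy is assumed, and the variable order is (x, z, w, v, λ), all nonnegative.

import numpy as np

def build_PD_prime(A, b, c):
    """Assemble the equality constraints and objective of (PD')."""
    m, n = A.shape
    # Row block 1: Ax - lam*b = 0
    top = np.hstack([A, np.zeros((m, m)), np.zeros((m, m)),
                     np.zeros((m, n)), -b.reshape(m, 1)])
    # Row block 2: -A^T z + A^T w + v + lam*c = 0
    bot = np.hstack([np.zeros((n, n)), -A.T, A.T,
                     np.eye(n), c.reshape(n, 1)])
    A_eq = np.vstack([top, bot])
    b_eq = np.zeros(m + n)
    # Objective to maximize: c^T x - b^T z + b^T w
    obj = np.concatenate([c, -b, b, np.zeros(n), [0.0]])
    return A_eq, b_eq, obj

Passing -obj, A_eq, b_eq and nonnegativity bounds to a standard-form LP solver then realizes the combined problem; the all-zero vector is the trivial optimal solution noted above.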

Proposition 1.5. If there exists a strongly polynomial time algorithm for Problem 1.2, then there exists one for Problem 1.1.

Proof: Suppose there exists a strongly polynomial time algorithm for Problem 1.2. Given A, b, c, consider the following problem with an auxiliary variable λ:

(S)     Maximize    c^T x - b^T y - λ
        subject to  Ax + λb = b,
                    A^T y + λc ≥ c,
                    c^T x - b^T y ≤ 0,
                    x, λ ≥ 0.

The problem (S) can obviously be put into the standard form of (P). Now, (S) is feasible (set x and y to zero and λ = 1). Moreover, the value of the objective function on the feasible domain of (S) is bounded by 0. Thus, (S) has an optimal solution, which by assumption can be found in strongly polynomial time. The x part and the y part of such a solution are optimal solutions for (P) and (D), respectively, if and only if λ = 0. If λ = 0, then by Theorem 0.2 an optimal basis for (P) can be found in strongly polynomial time.

2. Conclusion

An algorithm for finding an optimal basis has to work on the problem from two "sides": the primal and the dual. If an algorithm concentrates on the primal side, or simply discards all the information obtained throughout its execution and reports only a primal-optimal solution, then finding a dual-optimal solution given a primal-optimal one may be as hard as solving the problem from the beginning. This observation is important for the implementation of interior point algorithms. If an algorithm does not generate values for the dual variables, then in the worst case it may be hard to find a dual-optimal solution. If the algorithm generates both primal and dual values, then it is relatively easy to find an optimal basis.

Finally, we note that the efficiency of the algorithm given in the proof of Theorem 0.2 is closely related to the amount of degeneracy in the problem: the more degenerate the problem is, the more steps may be needed to construct an optimal basis from a pair of optimal solutions for the primal and the dual problems. We also note that the work of our algorithms can be carried out in tableau form, just like the simplex method.

References

[1] N. Megiddo, "A note on degeneracy in linear programming," Mathematical Programming 35 (1986), 365-367.

[2] A. Schrijver, Theory of Linear and Integer Programming, Wiley, New York, 1986.