Week 8

1 LP is easy: the Ellipsoid Method

In 1979 Khachiyan proved that LP is solvable in polynomial time by a method of shrinking ellipsoids. The running time is polynomial in the number of variables $n$, the number of constraints $m$, and the logarithm of the largest coefficient, $\log U$. Khachiyan developed his method for finding a feasible point in $P = \{x \in \mathbb{R}^n \mid Ax \ge b\}$ or deciding that $P = \emptyset$. Notice that solving an LP optimization problem is a matter of finding a feasible solution to the system consisting of the primal inequalities, the dual inequalities, and the equality of primal and dual objective values (see Section 8.4 in [B&T]).

The idea behind the ellipsoid method is rather simply explained in a two-dimensional picture (on the blackboard). The proof that it works is technical and based on several ingredients. In what follows let $A$ and $b$ be integer and let $U$ be the maximum absolute value of the entries in $A$ and $b$.

a) We can guess a ball that contains all extreme points of $P$, and whose volume is not too large in terms of $n$, $m$ and $\log U$;

b) if $P$ is non-empty then its volume is not too small in terms of $n$, $m$ and $\log U$;

c) in each step either the centre of the ellipsoid is in $P$, or the next ellipsoid contains $P$, and in fact the entire half of the previous ellipsoid containing $P$;

d) in each step the ratio between the volume of the next ellipsoid and the volume of the previous ellipsoid is bounded by a constant smaller than $1$ that depends only on $n$.

1.1 An iteration

We shall first be concerned with showing c) and d). We obtain full insight by considering how this works for an ellipsoid that contains the half of the unit ball $E_0$ consisting of all points with non-negative first coordinate; that is, the unit ball $E_0$ intersected with the halfspace $H_0 = \{x \in \mathbb{R}^n \mid e_1^T x \ge 0\}$. Any general case can in fact be transformed into this special case.

Let us start with a general description of an ellipsoid. For any positive definite symmetric matrix $D$ and centre $z$, an ellipsoid is defined by
$$E(z, D) = \{ x \in \mathbb{R}^n \mid (x - z)^T D^{-1} (x - z) \le 1 \}.$$
Remember that a matrix $D$ is positive definite if $x^T D x > 0$ for all $x \ne 0$. If $D$ is positive definite then so is $D^{-1}$; also, $D$ is non-singular. Any such matrix can be written as the product of two symmetric non-singular matrices: $D = D^{1/2} D^{1/2}$.
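
As a small illustration (my addition, not from the notes), here is a membership test for $E(z, D)$, assuming numpy is available; the function name is mine:

```python
import numpy as np

def in_ellipsoid(x, z, D, tol=1e-9):
    """Test whether x lies in E(z, D) = {x : (x-z)^T D^{-1} (x-z) <= 1}.

    D is assumed symmetric positive definite; instead of inverting D we
    solve D y = x - z, which is numerically more stable.
    """
    d = x - z
    y = np.linalg.solve(D, d)                 # y = D^{-1} (x - z)
    return d @ y <= 1 + tol

# The unit ball is E(0, I): the origin is inside, 2 e_1 is not.
n = 4
z, D = np.zeros(n), np.eye(n)
print(in_ellipsoid(np.zeros(n), z, D))        # True
print(in_ellipsoid(2 * np.eye(n)[0], z, D))   # False
```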

So, within this notation the unit ball is $E_0 = E(0, I)$, and we will verify that the following ellipsoid contains $E_0 \cap H_0$:
$$E_0' = E(\bar z, \bar D) = \{ x \in \mathbb{R}^n \mid (x - \bar z)^T \bar D^{-1} (x - \bar z) \le 1 \},$$
using $e_1$ to denote the first unit vector in $\mathbb{R}^n$, and
$$\bar z = \frac{e_1}{n+1} = \Big(\frac{1}{n+1}, 0, \ldots, 0\Big)^T, \qquad \bar D = \frac{n^2}{n^2-1}\Big(I - \frac{2}{n+1}\, e_1 e_1^T\Big),$$
or, written out,
$$\bar D = \frac{n^2}{n^2-1}\begin{pmatrix} 1 - \frac{2}{n+1} & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{pmatrix}.$$

Let us check that this indeed contains all the extreme points of $E_0 \cap H_0$: i.e., all points on the $(n-1)$-dimensional unit ball of points with $x_1 = 0$, and the point $(1, 0, \ldots, 0)$. A little thought should make it clear that such an ellipsoid then contains the entire half-ball $E_0 \cap H_0$. It is in fact the smallest ellipsoid with this property, which I won't show, because for the rest of the story it is irrelevant. Written out, the ellipsoid becomes
$$E_0' = \Big\{ x \in \mathbb{R}^n \;\Big|\; \Big(\frac{n+1}{n}\Big)^2 \Big(x_1 - \frac{1}{n+1}\Big)^2 + \frac{n^2-1}{n^2} \sum_{i=2}^n x_i^2 \le 1 \Big\}. \tag{1}$$
The points in $E_0'$ with $x_1 = 0$ and $\sum_{i=2}^n x_i^2 = 1$ satisfy
$$\Big(\frac{n+1}{n}\Big)^2 \Big(\frac{1}{n+1}\Big)^2 + \frac{n^2-1}{n^2} = \frac{1}{n^2} + \frac{n^2-1}{n^2} = 1,$$
and the point $(1, 0, \ldots, 0)$ satisfies
$$\Big(\frac{n+1}{n}\Big)^2 \Big(1 - \frac{1}{n+1}\Big)^2 = 1.$$
So both lie exactly on the boundary of $E_0'$. We would now like to know the ratio $\mathrm{Vol}(E_0')/\mathrm{Vol}(E_0)$. We do this by making an appropriate transformation $F$ such that $F(E_0') = E_0$, which allows us to use the following lemma.
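
As a quick numerical sanity check of these two computations (my addition, assuming numpy):

```python
import numpy as np

def lhs_of_1(x, n):
    """Left-hand side of (1) at the point x."""
    return ((n + 1) / n) ** 2 * (x[0] - 1 / (n + 1)) ** 2 \
        + (n ** 2 - 1) / n ** 2 * np.sum(x[1:] ** 2)

n = 5
e1 = np.eye(n)[0]                       # the point (1, 0, ..., 0)
p = np.zeros(n); p[1] = 1.0             # x_1 = 0 and sum_{i >= 2} x_i^2 = 1
print(lhs_of_1(e1, n), lhs_of_1(p, n))  # both are exactly 1: boundary points
```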

Lemma 1.1 (Lemma 8.1 in [B&T]) Given the affine transformation $S(x) = Bx + b$, with $B$ a non-singular matrix, we have $\mathrm{Vol}(S(L)) = |\det(B)|\,\mathrm{Vol}(L)$, where $S(L) = \{y \in \mathbb{R}^n \mid \exists x \in L \text{ s.t. } y = Bx + b\}$.

The following transformation works: $F(x) = \bar D^{-1/2}(x - \bar z)$, because
$$x \in E_0' \iff (x - \bar z)^T \bar D^{-1} (x - \bar z) \le 1 \iff (x - \bar z)^T \bar D^{-1/2} \bar D^{-1/2} (x - \bar z) \le 1 \iff \bar D^{-1/2}(x - \bar z) \in E_0 \iff F(x) \in E_0.$$
Applying Lemma 1.1, $\mathrm{Vol}(E_0) = |\det(\bar D^{-1/2})|\,\mathrm{Vol}(E_0')$. Using the property that, for a symmetric positive definite matrix $B$, $\det(B^{-1/2}) = 1/\sqrt{\det(B)}$, we get
$$\mathrm{Vol}(E_0') = \sqrt{\det(\bar D)}\,\mathrm{Vol}(E_0).$$
$\det(\bar D)$ is easily seen to be
$$\det(\bar D) = \Big(\frac{n^2}{n^2-1}\Big)^n \Big(1 - \frac{2}{n+1}\Big).$$
Hence,
$$\frac{\mathrm{Vol}(E_0')}{\mathrm{Vol}(E_0)} = \Big(\frac{n^2}{n^2-1}\Big)^{n/2} \Big(1 - \frac{2}{n+1}\Big)^{1/2} = \frac{n}{n+1}\Big(\frac{n^2}{n^2-1}\Big)^{(n-1)/2} = \Big(1 - \frac{1}{n+1}\Big)\Big(1 + \frac{1}{n^2-1}\Big)^{(n-1)/2},$$
and, applying $1 + a < e^a$ for $a \ne 0$,
$$\frac{\mathrm{Vol}(E_0')}{\mathrm{Vol}(E_0)} < e^{-1/(n+1)}\, e^{(n-1)/(2(n^2-1))} = e^{-1/(n+1)}\, e^{1/(2(n+1))} = e^{-1/(2(n+1))}.$$
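
The closed form for the ratio is easy to check numerically (my addition, plain Python):

```python
import math

def volume_ratio(n):
    """Vol(E_0') / Vol(E_0) = (n / (n+1)) * (n^2 / (n^2 - 1))^((n-1)/2)."""
    return (n / (n + 1)) * (n ** 2 / (n ** 2 - 1)) ** ((n - 1) / 2)

for n in (2, 5, 10, 100):
    bound = math.exp(-1 / (2 * (n + 1)))
    print(n, volume_ratio(n) < bound)   # True: the ratio beats e^{-1/(2(n+1))}
```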

In general, let us be given an ellipsoid
$$E = E(z, D) = \{ x \in \mathbb{R}^n \mid (x - z)^T D^{-1} (x - z) \le 1 \}$$
and the halfspace
$$H = \{x \in \mathbb{R}^n \mid a^T x \ge a^T z\},$$
which is bounded by the hyperplane $\{x \in \mathbb{R}^n \mid a^T x = a^T z\}$ through the centre $z$ of $E$, hence containing exactly half of $E$. Then the following ellipsoid contains $E \cap H$: $E' = E(\bar z, \bar D)$ with
$$\bar z = z + \frac{1}{n+1} \frac{Da}{\sqrt{a^T D a}}$$
and
$$\bar D = \frac{n^2}{n^2-1}\Big(D - \frac{2}{n+1} \frac{D a a^T D}{a^T D a}\Big).$$
And by transforming this situation to the unit ball we obtain:
$$\frac{\mathrm{Vol}(E')}{\mathrm{Vol}(E)} = \frac{\mathrm{Vol}(E_0')}{\mathrm{Vol}(E_0)} < e^{-1/(2(n+1))}.$$

1.2 The ellipsoid method for feasibility

a) Guess a ball that contains all extreme points of $P$, and whose volume is not too large in terms of $n$, $m$ and $\log U$.

b) In each step check if the centre $z$ of the ellipsoid $E$ is in $P$. If so, we have a feasible point. If not, determine a constraint $a_i^T x \ge b_i$ of $P$ that is violated by $z$. Take the halfspace $H = \{x \in \mathbb{R}^n \mid a_i^T x \ge a_i^T z\}$ and determine the next ellipsoid containing $E \cap H$, and hence containing $P$.

c) Check if the volume has dropped below the critical value. If so, conclude that there does not exist a feasible point: $P = \emptyset$. Otherwise reiterate b).

So we now have a formula to carry out b), and we have a bound on the reduction in volume in each iteration. Suppose that $V$ is the volume of the initial ball and $v$ a lower bound on the volume of a feasible $P$. Then there exists a $t$ such that after at most $t$ iterations we have either found a point in $P$ or the volume of the ellipsoid has been reduced to
$$V e^{-t/(2(n+1))} < v,$$
in which case we correctly conclude that $P$ is empty.
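
Putting the update formulas together with steps a)-c) above, here is a floating-point sketch of the feasibility method (my addition, assuming numpy; the real method needs the controlled precision discussed below):

```python
import numpy as np

def ellipsoid_feasibility(A, b, R, v):
    """Look for x with A x >= b, assuming all extreme points of
    P = {x : A x >= b} lie in the ball of radius R around the origin,
    and Vol(P) >= v whenever P is non-empty.
    """
    m, n = A.shape
    z = np.zeros(n)                        # centre of the current ellipsoid
    D = R ** 2 * np.eye(n)                 # E(z, D) is the initial ball
    vol = (2 * R) ** n                     # coarse upper bound on Vol(E(z, D))
    while vol >= v:
        violated = np.nonzero(A @ z < b)[0]
        if violated.size == 0:
            return z                       # the centre is feasible
        a = A[violated[0]]                 # constraint a^T x >= b_i violated by z
        step = (D @ a) / np.sqrt(a @ D @ a)
        z = z + step / (n + 1)             # formula for the new centre
        D = (n ** 2 / (n ** 2 - 1)) * (D - (2 / (n + 1)) * np.outer(step, step))
        vol *= np.exp(-1 / (2 * (n + 1)))  # guaranteed shrink factor per step
    return None                            # volume below v: P is empty
```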

A correct calculation gives that
$$t = 2(n+1)\log(V/v). \tag{2}$$
Extremely coarse bounds on $V$ and $v$ follow rather straightforwardly from bounds on each of the coordinates of any basic feasible solution, which in their turn follow through Cramer's rule. In case $P$ is full-dimensional,
$$V \le (2n)^n (nU)^{n^2} \quad \text{and} \quad v \ge n^{-n} (nU)^{-n^2(n+1)}.$$
There is indeed a little complication in case $P$ is not full-dimensional, in which case $\mathrm{Vol}(P) = 0$. It then requires some effort to show that we may perturb $P$ a little bit, in such a way that the perturbation is full-dimensional if $P$ is non-empty, and empty if $P$ is empty. Filling $V$ and $v$ into expression (2) yields a number of iterations of $O(n^4 \log(nU))$. In case $P$ is not full-dimensional, the perturbed polytope has different bounds on $V$ and $v$, resulting in $O(n^6 \log(nU))$ iterations.

It requires technicalities, which the book does not give, to show that the precision of taking square roots and of multiplying numbers can be cut down sufficiently to conclude that the running time per iteration is also polynomially bounded. I leave studying how to adapt the method to solve optimisation instead of feasibility problems (Section 8.4) to yourself.

It is extremely important for what follows to notice that the number of iterations is independent of $m$, the number of constraints! It depends only on $n$, the number of variables, and $\log U$, the size of the largest coefficient.

1.3 The ellipsoid method and the separation problem

In each iteration of the ellipsoid method a restricted form of the separation problem is solved.

Separation Problem: Given a polyhedron $P \subset \mathbb{R}^n$ and $x \in \mathbb{R}^n$:

a) either decide that $x \in P$, or

b) find a hyperplane separating $x$ from $P$.

In the ellipsoid method the separating hyperplane is found among those parallel to the constraints of $P$. Obviously, this problem can be solved in $O(mn)$ time, making the ellipsoid method run in time polynomial in $n$, $m$ and $\log U$. But the number of iterations is independent of $m$ (the number of restrictions).
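
This observation suggests rewriting the sketch from Section 1.2 so that it consults an arbitrary separation oracle rather than an explicit list of constraints; a hypothetical interface of my own devising:

```python
import numpy as np

def ellipsoid_with_oracle(separation_oracle, n, R, v):
    """Feasibility via a separation oracle.

    separation_oracle(z) returns None if z is in P, and otherwise a vector
    a with P contained in {x : a^T x >= a^T z}.  The number of iterations
    depends on n, R and v only, never on how many constraints the oracle
    implicitly represents.
    """
    z, D = np.zeros(n), R ** 2 * np.eye(n)
    vol = (2 * R) ** n
    while vol >= v:
        a = separation_oracle(z)
        if a is None:
            return z                       # the oracle certifies z in P
        step = (D @ a) / np.sqrt(a @ D @ a)
        z = z + step / (n + 1)
        D = (n ** 2 / (n ** 2 - 1)) * (D - (2 / (n + 1)) * np.outer(step, step))
        vol *= np.exp(-1 / (2 * (n + 1)))
    return None
```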

Thus, if for a family of polyhedra we can solve the separation problem in time polynomial in $n$ and $\log U$ only, then we can solve the linear optimization problems over this family in time polynomial in $n$ and $\log U$. This is particularly interesting in case the number of restrictions is exponential in the number of variables, and the restrictions do not have to be written out explicitly to define the problem.

Take for example the min-cut problem: finding a minimum $s,t$-cut in an undirected graph $G = (N, E)$. The $s,t$-cuts are characterized by
$$\sum_{e \in K} x_e \ge 1, \quad \forall K \in \mathcal{K},$$
$$0 \le x_e \le 1, \quad \forall e \in E,$$
where $\mathcal{K}$ is the set of all $s,t$-paths in $G$. In fact you could see this formulation as the LP-relaxation of min-cut, but it can be shown that all extreme points of this polytope are $\{0,1\}$-vectors. The restrictions say that there should be at least one edge on each $s,t$-path, which is exactly the definition of a cut. We do not wish to specify all these paths since, indeed, there are exponentially many in the number of edges. Given a $\{0,1\}$-vector $x$, it is however a matter of deleting from the graph all edges $e$ with $x_e = 1$ and using a simple graph search algorithm to decide whether there exists an $s,t$-path in the remaining graph. If there is not, then $x$ represents a cut; otherwise one finds a path $K$ with $\sum_{e \in K} x_e < 1$ and therefore a violated constraint. In fact this was Exercise 8.8. Thus, since separation over the cut-polytope can be done in time polynomial in the number of edges only, we conclude immediately that the min-cut problem can be solved in polynomial time using the ellipsoid method (though we know it can be done faster).
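
A minimal sketch of this graph-search separation for $\{0,1\}$-vectors (my addition; plain Python, breadth-first search). To plug it into `ellipsoid_with_oracle` above one would return the indicator vector of the violated path constraint instead of the list of edge indices:

```python
from collections import deque

def separate_cut(n_nodes, edges, x, s, t):
    """Separation over the s,t-cut polytope at a {0,1}-vector x.

    edges is a list of node pairs (u, v); x[i] is the value on edges[i].
    Returns None if x is a cut, and otherwise the edge indices of an
    s,t-path surviving in {e : x_e = 0}, i.e. a violated constraint.
    """
    adj = [[] for _ in range(n_nodes)]
    for i, (u, v) in enumerate(edges):
        if x[i] == 0:                     # delete the edges with x_e = 1
            adj[u].append((v, i))
            adj[v].append((u, i))
    parent = {s: None}                    # BFS tree: node -> (predecessor, edge)
    queue = deque([s])
    while queue:
        u = queue.popleft()
        if u == t:                        # an s,t-path survives: x is no cut
            path, node = [], t
            while parent[node] is not None:
                node, i = parent[node]
                path.append(i)
            return path[::-1]             # sum of x over this path is 0 < 1
        for w, i in adj[u]:
            if w not in parent:
                parent[w] = (u, i)
                queue.append(w)
    return None                           # no s,t-path left: x is a cut
```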

Complexity equivalence of separation and optimization is extremely useful for the construction of polynomial-time approximation algorithms based on rounding optimal solutions of LP-relaxations of integer LP problems. The bounds on the integrality gap are usually smaller in ILP-formulations with an exponential number of constraints.

Let us briefly go through the example given in the book for the famous travelling salesman problem (TSP). In TSP we are given a graph $G = (V, E)$ with edge costs $c_e$, $e \in E$, and we are to determine a circuit in the graph of minimum total cost containing all vertices of $V$. In the integer LP formulation the decision variables are $x_e$, with $x_e = 1$ if $e$ is selected in the tour and $0$ otherwise. The following LP has as feasible region the subtour elimination polytope, an LP-relaxation of the TSP. We use $\delta(S)$ to denote the set of edges with exactly one of their two endpoints in $S$; in particular, $\delta(i)$ is the set of edges incident to node $i$.
$$\min \sum_{e \in E} c_e x_e$$
$$\text{s.t.} \quad \sum_{e \in \delta(i)} x_e = 2, \quad \forall i \in V,$$
$$\sum_{e \in \delta(S)} x_e \ge 2, \quad \forall\, \emptyset \ne S \subsetneq V,$$
$$0 \le x_e \le 1, \quad \forall e \in E.$$
The second set of constraints are the so-called subtour elimination constraints. Without these, even the ILP problem can be solved in polynomial time, giving so-called 2-factors: collections of vertex-disjoint cycles that together cover all vertices. If such a collection consists of a single cycle then that is the optimal TSP tour. Usually it does not, and subtour elimination constraints need to be added. Clearly there are exponentially many of them, so let us see how we can solve the separation problem over the subtour elimination constraints.

If we interpret $x_e$ as the capacity of the edge $e$, then $\sum_{e \in \delta(S)} x_e$ can be interpreted as the capacity of the cut $\delta(S)$. Thus, the subtour elimination constraint w.r.t. $S$ is violated if $\sum_{e \in \delta(S)} x_e < 2$, i.e., if $\delta(S)$ is a cut of capacity less than 2. Hence, checking whether some subtour elimination constraint is violated is a matter of finding the minimum cut in $G$ with capacities $x_e$ on the edges, which can be done in polynomial time.

The optimal solution of this LP-relaxation is usually not integer, but it gives good lower bounds. The optimal value of the LP-relaxation is conjectured to be within a factor $3/4$ of the integer optimal value.

Material of Week 8 from [B&T]: Chapter 8.

Exercises of Week 8: 8.9.