Integer Programming ISE 418. Lecture 12. Dr. Ted Ralphs


Reading for This Lecture

- Nemhauser and Wolsey, Section II.2.1
- Wolsey, Chapter 9

Generating Stronger Valid Inequalities

- We have now seen some generic methods of generating valid inequalities.
- In general, these methods are not capable of generating strong inequalities (facets).
- To generate such inequalities, we must use our knowledge of the problem structure.

The Strength of a Valid Inequality

- Roughly speaking, for an inequality to be strong, the face it defines should have as high a dimension as possible.
- The facet-defining inequalities are those of maximal dimension, i.e., dimension one less than the dimension of the polyhedron.
- The facet-defining inequalities dominate all others and are the only ones necessary in a complete description of a polyhedron.
- To know which inequalities are facets, we use the following result, based on methods for determining the dimension of polyhedra.

Proposition 1. If (π, π_0) defines a face of dimension k − 1 of conv(S), then there are k affinely independent points x^1, ..., x^k ∈ S such that πx^i = π_0 for i = 1, ..., k.

Facet Proofs

- How do we prove an inequality is facet-defining?
- Straightforward approach: use the definition.
  - First, we need to find the dimension d of the polyhedron.
  - Then, we need to exhibit a set of d affinely independent points in S satisfying the given inequality at equality.
- Example: Set S = {(x, y) ∈ R^m_+ × B : Σ_{i=1}^m x_i ≤ my}.
- We want to show that the valid inequality x_i ≤ y is facet-defining for conv(S).
  - First, we show that dim(conv(S)) = m + 1 (how?).
  - For a chosen i, we exhibit m + 1 affinely independent points in S that satisfy x_i = y (how?).
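
The "exhibit affinely independent points" step can be made concrete with a small sketch (the specific point set, the value m = 3, and the helper names below are our own, not from the lecture). For i = 1, the points (0, 0), (e_1, 1), and (e_1 + e_j, 1) for j ≠ 1 all lie in S, satisfy x_1 = y at equality, and can be checked for affine independence with exact rational Gaussian elimination:

```python
from fractions import Fraction

def matrix_rank(rows):
    """Rank of a rational matrix via exact Gauss-Jordan elimination."""
    rows = [[Fraction(v) for v in r] for r in rows]
    rank = 0
    for c in range(len(rows[0]) if rows else 0):
        piv = next((r for r in range(rank, len(rows)) if rows[r][c] != 0), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        for r in range(len(rows)):
            if r != rank and rows[r][c] != 0:
                f = rows[r][c] / rows[rank][c]
                rows[r] = [a - f * b for a, b in zip(rows[r], rows[rank])]
        rank += 1
    return rank

def affinely_independent(points):
    """x^1..x^k are affinely independent iff the differences x^j - x^1 (j >= 2)
    are linearly independent."""
    base = points[0]
    diffs = [[a - b for a, b in zip(p, base)] for p in points[1:]]
    return matrix_rank(diffs) == len(diffs)

# Example from the slide with m = 3, i = 1 (index 0): points (x, y) on x_1 = y.
m, i = 3, 0
points = [(0, 0, 0, 0), (1, 0, 0, 1)]        # x = 0 with y = 0; x = e_1 with y = 1
for j in range(m):
    if j != i:
        x = [0] * m
        x[i], x[j] = 1, 1
        points.append(tuple(x) + (1,))       # x = e_1 + e_j with y = 1
assert all(sum(p[:m]) <= m * p[m] for p in points)   # all points lie in S
assert all(p[i] == p[m] for p in points)             # all satisfy x_i = y
print(affinely_independent(points))          # m + 1 = 4 affinely independent points
```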

Facet Proofs: Another Method

In the case where P is full-dimensional, another method for proving that (π, π_0) defines a facet is the following.

1. Select t ≥ dim(P) points x^1, ..., x^t ∈ S satisfying πx^k = π_0.
2. Solve the linear system μx^k = μ_0 for k = 1, ..., t, where μ ∈ R^n and μ_0 ∈ R.
3. If the only solutions are (μ, μ_0) = λ(π, π_0) for λ ∈ R, then (π, π_0) is facet-defining.

Why does this method work?
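
This method can be sketched on a tiny instance of our own choosing: the node packings of the triangle K_3, whose convex hull is the full-dimensional simplex conv{000, 100, 010, 001} in R^3. The clique inequality x_1 + x_2 + x_3 ≤ 1 is tight at t = 3 = dim(P) points, and the solution space of the system μx^k = μ_0 is one-dimensional, spanned by (π, π_0) = (1, 1, 1, 1):

```python
from fractions import Fraction

def rank(rows):
    """Exact rank via Gauss-Jordan elimination over the rationals."""
    rows = [[Fraction(v) for v in r] for r in rows]
    rk = 0
    for c in range(len(rows[0])):
        piv = next((r for r in range(rk, len(rows)) if rows[r][c] != 0), None)
        if piv is None:
            continue
        rows[rk], rows[piv] = rows[piv], rows[rk]
        for r in range(len(rows)):
            if r != rk and rows[r][c] != 0:
                f = rows[r][c] / rows[rk][c]
                rows[r] = [a - f * b for a, b in zip(rows[r], rows[rk])]
        rk += 1
    return rk

# Step 1: points of S on the face x_1 + x_2 + x_3 = 1 (t = 3 = dim(P)).
face_points = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]

# Step 2: the system mu . x^k = mu_0 in unknowns (mu_1, mu_2, mu_3, mu_0)
# has coefficient rows [x^k | -1].
system = [list(x) + [-1] for x in face_points]

# Step 3: the solution space has dimension (n + 1) - rank = 1, so every
# solution is a scalar multiple of (1, 1, 1, 1), proving the facet.
nullity = (3 + 1) - rank(system)
print(nullity)  # 1
```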

Example: Valid Inequalities for Node Packing

- Recall the node packing problem. The set of node packings of a graph G = (V, E) is given by S = {x ∈ B^n : x_i + x_j ≤ 1 for all {i, j} ∈ E}.
- We are interested in the polytope P = conv(S).
- This polytope is easily shown to be full-dimensional (how?).
- What are some valid inequalities?
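
For small graphs, S can simply be enumerated by brute force over B^n; the graph below (the 5-cycle) is our own illustrative choice:

```python
from itertools import product

# Enumerate the set S of node packings of the 5-cycle C_5.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
n = 5

def is_packing(x, edges):
    """x in B^n is a node packing iff x_i + x_j <= 1 for every edge {i, j}."""
    return all(x[i] + x[j] <= 1 for i, j in edges)

S = [x for x in product((0, 1), repeat=n) if is_packing(x, edges)]
print(len(S))  # 11 packings: the empty set, 5 singletons, and 5 non-adjacent pairs
```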

The Clique Inequalities

- When C is a clique in G, the clique constraint Σ_{j∈C} x_j ≤ 1 is valid for conv(S).
- In fact, when C is maximal, this constraint is facet-defining for conv(S).
- How do we prove this?
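
A minimal sketch of how a clique can be grown to a maximal one, and of the validity of the resulting clique inequality (the 4-node graph here is our own example, not the lecture's):

```python
from itertools import product

n = 4
edges = {(0, 1), (0, 2), (1, 2), (2, 3)}   # a triangle with a pendant edge
adj = {v: {u for e in edges for u in e if v in e and u != v} for v in range(n)}

def extend_to_maximal_clique(C):
    """Greedily add any vertex adjacent to every current member of clique C."""
    C = set(C)
    for v in range(n):
        if v not in C and C <= adj[v]:
            C.add(v)
    return C

C = extend_to_maximal_clique({0, 1})
print(sorted(C))  # [0, 1, 2]

# The clique inequality sum_{j in C} x_j <= 1 holds for every node packing.
packings = [x for x in product((0, 1), repeat=n)
            if all(x[i] + x[j] <= 1 for i, j in edges)]
assert all(sum(x[j] for j in C) <= 1 for x in packings)
```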

Separation and Optimization

- We have just seen a class of valid inequalities that we know explicitly to be facet-defining.
- Yet, we know that this problem is a difficult one to solve.
- Question: Can we efficiently generate such inequalities?
- Answer: Yes and no.
  - It is easy to generate some maximal cliques in a graph.
  - It may be difficult to generate one that corresponds to an inequality violated by a given (fractional) solution to the LP relaxation.
- In general, there are no efficient exact separation algorithms for the convex hull of feasible solutions to a difficult MILP.
- Why is it difficult to generate facets of conv(S) in general?
- We will develop a formal framework for assessing the difficulty of solving well-defined classes of optimization problems later in the course.
- However, it is easy to show informally that generating facet-defining inequalities for a polyhedron is roughly as difficult as optimizing over it.
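
A common practical compromise, sketched below with our own tiny instance and helper names (this is an illustration, not the lecture's algorithm), is a greedy separation heuristic: scan vertices in order of decreasing fractional value, keep those adjacent to everything kept so far, and report the clique if its inequality is violated:

```python
from fractions import Fraction

n = 4
edges = {(0, 1), (0, 2), (1, 2), (2, 3)}   # triangle plus a pendant edge
adj = {v: {u for e in edges for u in e if v in e and u != v} for v in range(n)}

def greedy_clique_separation(x_star):
    """Greedily build a clique weighted by x*, return (clique, violation)
    if its clique inequality is violated, else (None, violation)."""
    C = []
    for v in sorted(range(n), key=lambda v: -x_star[v]):
        if all(u in adj[v] for u in C):
            C.append(v)
    violation = sum(x_star[v] for v in C) - 1
    return (sorted(C), violation) if violation > 0 else (None, violation)

# The edge constraints alone admit the fractional point x* = (1/2, 1/2, 1/2, 0),
# which satisfies every edge inequality but is cut off by the clique {0, 1, 2}.
x_star = [Fraction(1, 2), Fraction(1, 2), Fraction(1, 2), Fraction(0)]
C, viol = greedy_clique_separation(x_star)
print(C, viol)  # [0, 1, 2] 1/2
```

As the slide warns, such a heuristic can fail to find a violated clique even when one exists; it only trades exactness for speed.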

The Separation Problem as an Optimization Problem

- Separation Problem: Given a polyhedron P ⊆ R^n and x* ∈ R^n, determine whether x* ∈ P and, if not, determine (π, π_0), a valid inequality for P such that πx* > π_0.
- Closer examination of the separation problem for a polyhedron reveals that it is in fact an optimization problem.
- Consider a polyhedron P ⊆ R^n and x* ∈ R^n. The separation problem can be formulated as

  max{πx* − π_0 : πx ≤ π_0 ∀x ∈ P, (π, π_0) ∈ R^{n+1}}    (SEP)

  along with some appropriate normalization.
- When P is a polytope, we can reformulate this problem as the LP max{πx* − π_0 : πx ≤ π_0 ∀x ∈ E}, where E is the set of extreme points of P.
- When P is not bounded, the reformulation must also account for the extreme rays of P.

Normalization and the 1-Polar

- Assuming w.l.o.g. that 0 is in the interior of P, the set of all inequalities valid for P is given by

  P° = {π ∈ R^n : πx ≤ 1 ∀x ∈ P}

  and is called its 1-polar.
- Then we can normalize (SEP) by taking π_0 = 1.
- If P ⊆ R^n is a polyhedron containing the origin, then
  1. P° is a polyhedron;
  2. P°° = P;
  3. x ∈ P if and only if πx ≤ 1 ∀π ∈ P°;
  4. If E and R are the sets of extreme points and extreme rays of P, respectively, then P° = {π ∈ R^n : πx ≤ 1 ∀x ∈ E, πr ≤ 0 ∀r ∈ R}.
- A converse of the last result also holds: if the polar is described by a finite set of points and rays, then these constitute generators for the polyhedron. However, these sets need not be minimal.
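
A small sketch of properties 3 and 4 on an instance of our own choosing: for P = [−1, 1]^2, which contains 0 in its interior, the 1-polar is the diamond |π_1| + |π_2| ≤ 1, and with the normalization π_0 = 1, (SEP) reduces to maximizing πx* over the polar's extreme points:

```python
from fractions import Fraction

# P = [-1, 1]^2; its extreme points:
E = [(1, 1), (1, -1), (-1, 1), (-1, -1)]
# By property 4, the 1-polar is {pi : pi.x <= 1 for all x in E}, here the
# diamond |pi_1| + |pi_2| <= 1, whose extreme points are:
polar_extreme = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def dot(a, b):
    return sum(Fraction(u) * Fraction(v) for u, v in zip(a, b))

def in_P(x):
    """Property 3: x is in P iff pi.x <= 1 for every extreme point pi of the 1-polar."""
    return all(dot(p, x) <= 1 for p in polar_extreme)

def separate(x_star):
    """Normalized (SEP): maximize pi.x* over the 1-polar; a maximizer with
    value > 1 gives the violated inequality pi.x <= 1."""
    pi = max(polar_extreme, key=lambda p: dot(p, x_star))
    return pi if dot(pi, x_star) > 1 else None

print(in_P((Fraction(1, 2), Fraction(1, 2))))  # True
print(separate((2, 0)))   # (1, 0): the cut x_1 <= 1 separates (2, 0) from P
print(separate((0, 0)))   # None: the origin lies in P
```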

Interpreting the Polar

- The polar is the set of all valid inequalities, but without some normalization, it contains all scalar multiples of each inequality.
- The 1-polar of a polyhedron is the set of all valid inequalities, as long as 0 is in the interior: the 1-polar has a built-in normalization.
- There is a one-to-one correspondence between the facets of the polyhedron and the extreme points of the 1-polar when the polyhedron is full-dimensional and the origin is in its interior.
- Hence, the separation problem can be seen as an optimization problem over the polar.

Solving the Separation Problem

- The separation problem (SEP) for P has, in principle, a large number of inequalities (one for each extreme point). Can we solve it efficiently?
- In principle, it can be solved by dynamically generating the inequalities.
- This is a bit circular, since this itself requires solving the separation problem for the set {π ∈ R^n : πx ≤ 1 ∀x ∈ E} of members of the 1-polar.
- It is easy to see, however, that the separation problem for the 1-polar, for a given point π̂, can be formulated as max{π̂x : x ∈ P}, which is an optimization problem over P!

The Membership Problem

- Membership Problem: Given a polyhedron P ⊆ R^n and x* ∈ R^n, determine whether x* ∈ P.
- The membership problem is a decision problem and is closely related to the separation problem.
- In fact, if we take the dual of (SEP), we get

  min{0λ : Eλ = x*, 1λ = 1, λ ∈ R^E_+}    (MEM)

- This LP tries to express x* as a convex combination of the extreme points of P.
- This problem is an LP with a column for each extreme point.
- If this LP is infeasible, the certificate of infeasibility is a separating hyperplane.
- We can picture this algorithm in the primal space to understand what it's doing.
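
For a tiny instance (the square from earlier; the instance and helper names are our own), the membership question behind (MEM) can be decided naively: by Carathéodory's theorem, x* ∈ conv(E) iff some subset of at most n + 1 points of E carries nonnegative weights summing to one, so exact rational elimination over all such subsets suffices:

```python
from fractions import Fraction
from itertools import combinations

def solve_exact(A, b):
    """Solve A lam = b by exact Gauss-Jordan elimination; return one solution
    (free variables set to 0) or None. The candidate is verified before return."""
    m, k = len(A), len(A[0])
    M = [[Fraction(v) for v in row] + [Fraction(bv)] for row, bv in zip(A, b)]
    pivots, row = [], 0
    for col in range(k):
        piv = next((r for r in range(row, m) if M[r][col] != 0), None)
        if piv is None:
            continue
        M[row], M[piv] = M[piv], M[row]
        M[row] = [v / M[row][col] for v in M[row]]
        for r in range(m):
            if r != row and M[r][col] != 0:
                f = M[r][col]
                M[r] = [a - f * p for a, p in zip(M[r], M[row])]
        pivots.append(col)
        row += 1
    lam = [Fraction(0)] * k
    for r, col in enumerate(pivots):
        lam[col] = M[r][k]
    ok = all(sum(Fraction(c) * l for c, l in zip(arow, lam)) == bv
             for arow, bv in zip(A, b))
    return lam if ok else None

def member(E, x_star):
    """Decide x* in conv(E): by Caratheodory it suffices to look for
    nonnegative weights on subsets of at most n + 1 points of E."""
    n = len(x_star)
    for k in range(1, n + 2):
        for sub in combinations(E, k):
            A = [[p[i] for p in sub] for i in range(n)] + [[1] * k]
            lam = solve_exact(A, list(x_star) + [1])
            if lam is not None and all(l >= 0 for l in lam):
                return True
    return False

E = [(1, 1), (1, -1), (-1, 1), (-1, -1)]   # extreme points of [-1, 1]^2
print(member(E, (0, 0)))                   # True: a convex combination exists
print(member(E, (2, 0)))                   # False: no certificate exists
```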

Example: Separation Algorithm with Optimization Oracle

Figure 1: Polyhedron and point to be separated
Figure 2: Iteration 1
Figure 3: Iteration 2
Figure 4: Iteration 3
Figure 5: Iteration 4
Figure 6: Iteration 5

(The figures, not reproduced here, show the oracle-based separation algorithm iterating in the primal space.)

Formal Equivalence of Separation and Optimization

- Separation Problem: Given a polyhedron P ⊆ R^n and x* ∈ R^n, determine whether x* ∈ P and, if not, determine (π, π_0), a valid inequality for P such that πx* > π_0.
- Optimization Problem: Given a polyhedron P ⊆ R^n and a cost vector c ∈ R^n, determine x* such that cx* = max{cx : x ∈ P}.

Theorem 1. For a family of rational polyhedra P(n, T) whose input length is polynomial in n and log T, there is a polynomial-time reduction of the linear programming problem over the family to the separation problem over the family. Conversely, there is a polynomial-time reduction of the separation problem to the linear programming problem.

- The parameter n represents the dimension of the space.
- The parameter T represents the largest numerator or denominator of any coordinate of an extreme point of P (the vertex complexity).
- The ellipsoid algorithm provides the reduction of linear programming to separation. Polarity provides the other direction.

Proof: The Ellipsoid Algorithm

- The ellipsoid algorithm is an algorithm for solving linear programs. Its implementation requires a subroutine for solving the separation problem over the feasible region (see next slide).
- We will not go through the details of the ellipsoid algorithm. However, its existence is very important to our study of integer programming.
- Each step of the ellipsoid algorithm, except that of finding a violated inequality, is polynomial in n (the dimension of the space), log T (where T is the largest numerator or denominator of any coordinate of an extreme point of P), and log ∥c∥, where c ∈ R^n is the given cost vector.
- The entire algorithm is polynomial if and only if the separation problem is polynomial.

Generating a Class of Inequalities

- As we have just shown, producing general facets of conv(S) is as hard as optimizing over S.
- Thus, the approach often taken is to solve the separation problem for a relaxation. This relaxation is usually obtained in one of two ways:
  - It can be obtained in the usual way, by relaxing some constraints to obtain a more tractable problem.
  - The structure of the inequalities may be somehow restricted.
- The second approach is exemplified by the clique inequalities given earlier.
- In either case, the class of inequalities we want to generate typically defines a polyhedron C.
- C is what we earlier called the closure.
- The separation problem for the class is the separation problem over the closure.

Defining the Closure Implicitly

- Our previous formulation of the separation problem for a polyhedron assumed we had either an explicit description of the closure or an algorithm for optimizing over it.
- We may also want to describe our class by explicitly describing its set of coefficient vectors as a set in its own right.
- This is precisely what we did with the cut-generating linear program.
- In general, the set of all inequalities valid for a given polyhedron P can be written simply as

  P* = {(π, π_0) ∈ R^{n+1} : πx ≤ π_0 ∀x ∈ P}

- P* is called the polar of P.
- When P is bounded, we only need to consider the extreme points of P in the above description, as in the generic cutting plane algorithm.

More Valid Inequalities for Node Packing

- The clique constraints are not enough to completely describe the convex hull for all instances. What other inequalities can we find?
- An odd hole is a set of nodes that lie on a chordless cycle of odd length in the graph G.
- If H ⊆ V is an odd hole, then the inequality

  Σ_{j∈H} x_j ≤ (|H| − 1)/2

  is valid for conv(S).
- This new inequality is easily shown to be facet-defining for the subgraph induced by H. But it is not facet-defining in general.
- Can we strengthen it?
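
The odd-hole inequality can be checked by brute force on the smallest odd hole (this 5-hole instance is our own sketch): every node packing of C_5 uses at most (|H| − 1)/2 = 2 nodes of the hole, and some packing attains 2, so the inequality is valid and tight:

```python
from itertools import product

H = range(5)
edges = [(i, (i + 1) % 5) for i in range(5)]   # the chordless odd cycle C_5

packings = [x for x in product((0, 1), repeat=5)
            if all(x[i] + x[j] <= 1 for i, j in edges)]
max_on_hole = max(sum(x[j] for j in H) for x in packings)
print(max_on_hole)  # 2, so sum_{j in H} x_j <= 2 is valid (and tight)
```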

Strengthening Valid Inequalities

- The problem seems to be that we are not taking into account the interaction with other nodes in the graph.
- Let's try to generate a valid inequality of the form

  αx_i + Σ_{j∈H} x_j ≤ (|H| − 1)/2,

  where i ∉ H.
- We want to make α as big as possible. How big can it be?

The Lifting Principle

- Suppose we have an inequality Σ_{i=2}^n π_i x_i ≤ π_0 that is facet-defining for P^0 = {x ∈ P : x_1 = 0}, where P = conv(S) and S ⊆ B^n.
- We want to generate π_1 so that Σ_{i=1}^n π_i x_i ≤ π_0 will define a facet of P.
- This means making the new inequality as strong as possible. Hence, we set π_1 := π_0 − ξ, where

  ξ = max{Σ_{i=2}^n π_i x_i : x ∈ P, x_1 = 1}.

- If there are no feasible solutions with x_1 = 1, then we can simply fix x_1 to zero.
- For BIPs, this guarantees that the new inequality will be valid for P and will define a face of dimension one higher than the original inequality.
- Note that the new inequality will be valid as long as π_1 ≤ π_0 − ξ.
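
The lifting computation answers the α question from the previous slide; the worked instance below (the 5-wheel, with a hub node adjacent to every node of the odd hole) is our own choice. Since any packing with the hub selected uses no hole node, ξ = 0 and α = π_0 − ξ = 2, giving the wheel inequality 2x_hub + Σ_{j∈H} x_j ≤ 2:

```python
from itertools import product

# The 5-wheel: hub (node 5) adjacent to every node of the odd hole H = {0..4}.
H = list(range(5))
hub = 5
edges = [(i, (i + 1) % 5) for i in range(5)] + [(i, hub) for i in range(5)]

packings = [x for x in product((0, 1), repeat=6)
            if all(x[i] + x[j] <= 1 for i, j in edges)]

pi_0 = 2  # the odd-hole inequality sum_{j in H} x_j <= 2, valid when x_hub = 0

# xi = max{ sum_{j in H} pi_j x_j : x in S, x_hub = 1 }
xi = max(sum(x[j] for j in H) for x in packings if x[hub] == 1)
alpha = pi_0 - xi
print(alpha)  # 2: the lifted inequality is 2 x_hub + sum_{j in H} x_j <= 2

# Sanity check: the lifted inequality is valid for every packing of the wheel.
assert all(alpha * x[hub] + sum(x[j] for j in H) <= pi_0 for x in packings)
```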

Projections and Restrictions

- We will define a restriction of P to be any polyhedron Q strictly contained in P. P is also called a relaxation of Q.
- If P = {x ∈ R^n : Ax ≤ b} and Q is a restriction of P, then Q = {x ∈ P : Dx ≤ d}.
- It's important to understand the difference between a projection and a restriction.
  - Q_1 = {(x, 0) : (x, y) ∈ P} is a projection of P.
  - Q_2 = {(x, y) ∈ P : y = 0} is a restriction of P.
- Q_1 and Q_2 may or may not be the same polyhedron.

Lifting Inequalities Valid for a Restriction

- For our discussion here, we will only consider integer polytopes of the form P_I = conv(S), where S = {x ∈ B^n : Ax ≤ b}.
- We will only consider restrictions of the form

  {x ∈ P_I : x_j = 0 for j ∈ N_0, x_j = 1 for j ∈ N_1},

  where N_0, N_1 ⊆ {1, ..., n}.
- The lifting principle allows us to do two things:
  - Transform inequalities that are valid for a restriction into inequalities that are valid for the original problem.
  - Transform inequalities that are strong for a restriction into inequalities that are strong for the original problem.
- In our example, the inequality was already valid for the original polyhedron and we wanted to strengthen it.
- It is not always the case that inequalities valid for a restriction are valid for the original polyhedron.

Determining Lifting Coefficients

- Suppose we have an inequality valid for a restriction defined by sets N_0, N_1 ⊆ {1, ..., n}.
- In the case where N_0 = {1} and N_1 = ∅, we already have a procedure: we choose π_1 such that π_1 ≤ π_0 − ξ, where ξ = max{Σ_{i=2}^n π_i x_i : x ∈ P, x_1 = 1}.
  - If there are no feasible solutions with x_1 = 1, then we can simply fix x_1 to zero.
- How about the case where N_0 = ∅ and N_1 = {1}?
- We choose π_1 such that π_1 ≥ ξ − π_0, where ξ = max{Σ_{i=2}^n π_i x_i : x ∈ P, x_1 = 0}.
- In this case, we must also add π_1 to the right-hand side to obtain the inequality

  π_1 x_1 + Σ_{i=2}^n π_i x_i ≤ π_0 + π_1.

- If there is no feasible solution with x_1 = 0, then we can fix x_1 to one.
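
A tiny worked sketch of the second case (N_0 = ∅, N_1 = {1}); the single-edge node-packing instance is our own. The inequality x_2 ≤ 0 is valid when x_1 = 1 but not for the original polytope, and lifting repairs it:

```python
from itertools import product

# S = node packings of the single edge {1, 2} (tuple index 0 is variable x_1).
S = [x for x in product((0, 1), repeat=2) if x[0] + x[1] <= 1]

# The inequality x_2 <= 0 (pi_0 = 0) is valid for the restriction x_1 = 1,
# but NOT for the original polytope: the packing (0, 1) violates it.
pi_0 = 0
assert any(x[1] > pi_0 for x in S)

# xi = max{ x_2 : x in S, x_1 = 0 }, and we need pi_1 >= xi - pi_0.
xi = max(x[1] for x in S if x[0] == 0)
pi_1 = xi - pi_0

# The lifted inequality pi_1 x_1 + x_2 <= pi_0 + pi_1 is valid for all of S.
assert all(pi_1 * x[0] + x[1] <= pi_0 + pi_1 for x in S)
print(pi_1, pi_0 + pi_1)  # 1 1: the lifted inequality is x_1 + x_2 <= 1
```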

Determining Multiple Lifting Coefficients (Sequentially)

- The same procedure can be used in cases where multiple variables are restricted.
- We simply determine one lifting coefficient at a time, as before.
- Note that the order matters: the earlier a variable is lifted in the sequence, the larger its coefficient will be.

Approximating Lifting Coefficients

- It is not always necessary, or even possible, to determine the best possible lifting coefficient.
- In general, the problem of determining the best possible lifting coefficient is an optimization problem over a restricted polytope (usually NP-hard).
- In practice, lifting coefficients are often determined using heuristic algorithms that guarantee validity, but not strength.
- Note that generating approximate lifting coefficients destroys the property that the face defined by the inequality increases in dimension as it is lifted.

Determining Multiple Lifting Coefficients (Simultaneously)

- We can also determine multiple lifting coefficients simultaneously.
- Suppose the inequality Σ_{j∈N\(N_0∪N_1)} π_j x_j ≤ π_0 is valid for the restriction {x ∈ P : x_j = 0 for j ∈ N_0, x_j = 1 for j ∈ N_1}.
- We want to determine lifting coefficients π_i for i ∈ N_0 ∪ N_1 ⊆ {1, ..., n}.
- Choose M such that M ≤ π_0 − ξ, where

  ξ = max{Σ_{j∈N\(N_0∪N_1)} π_j x_j : x ∈ P}.

- Then the inequality

  M Σ_{j∈N_0} x_j − M Σ_{j∈N_1} x_j + Σ_{j∈N\(N_0∪N_1)} π_j x_j ≤ π_0 − M|N_1|

  is valid for P.