STRONG VALID INEQUALITIES FOR MIXED-INTEGER NONLINEAR PROGRAMS VIA DISJUNCTIVE PROGRAMMING AND LIFTING

STRONG VALID INEQUALITIES FOR MIXED-INTEGER NONLINEAR PROGRAMS VIA DISJUNCTIVE PROGRAMMING AND LIFTING

By

KWANGHUN CHUNG

A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY

UNIVERSITY OF FLORIDA 2010

© 2010 Kwanghun Chung

To my father, Youngkwan Chung, and my mother, Haeja Hwangbo

ACKNOWLEDGEMENTS

It is my great pleasure to thank all the people who helped me successfully complete this thesis. First, I would like to deeply thank my advisor, Dr. Jean-Philippe P. Richard, for advising me with enthusiasm and patience during my Ph.D. study. He always inspired and encouraged me to pursue my research whenever I was frustrated with difficulties. While I worked with him, I learned a lot about Operations Research from his knowledge and about academia from his experience. He is a role model whom I wish to follow if I work in academia. I would like to thank my co-advisor, Dr. Mohit Tawarmalani, for his guidance and for the discussions that made it possible for me to write this thesis. His critical and rigorous way of thinking motivated me to overcome various obstacles. I am also thankful to Dr. Panos Pardalos, Dr. J. Cole Smith, and Dr. William Hager for serving on my committee and giving me helpful comments to improve the quality of this thesis.

My life as a doctoral student for the last few years has been happy and pleasant because of many of my friends at Purdue University and the University of Florida. In particular, I appreciate Seokcheon, Byungcheol, Keumseok, Daiki, Kyungdoh, Sangbok, and all members of the Purdue Korean Industrial Engineers for their help and support, which relieved the pains of research. I also appreciate Chanmin and Youngwoong, as well as my fellow office mates, for many kind favors during my stay in Florida. Finally, I would like to show my gratitude to all of my family, Youngkwan, Haeja, Hyunjoo, and Jaehun, whose sincere love and constant support are the source of my life.

TABLE OF CONTENTS

ACKNOWLEDGEMENTS
LIST OF TABLES
LIST OF FIGURES
ABSTRACT

CHAPTER

1 INTRODUCTION
    Mixed-Integer Nonlinear Program (MINLP)
        Models and Applications
        Solution Methodologies to Global Optimization
    Preliminaries
        Well-Solved Optimization Problems
        Relaxations and Convexifications
    Branch-and-Cut in MINLP
        Bounding Scheme
        Branching Scheme
        Cutting Scheme
        Domain Reduction
    Outline of the Dissertation

2 CONVEX RELAXATIONS IN MILP AND MINLP
    Convexification Methods in MINLP
        Convex Envelopes and Convex Extensions
        Reformulation and Relaxation
    Cutting Plane Techniques for Mixed-Integer Linear Program (MILP)
        Disjunctive Programming
        Lifting
            Sequential lifting
            Sequence-independent lifting

3 MOTIVATION AND RESEARCH STATEMENTS
    Motivation
    Problem Statements
        Strong Valid Inequalities for Orthogonal Disjunctions and Bilinear Covering Sets
        Lifted Inequalities for 0-1 Mixed-Integer Bilinear Covering Sets with Bounded Variables

4 STRONG VALID INEQUALITIES FOR ORTHOGONAL DISJUNCTIONS AND BILINEAR COVERING SETS
    Introduction
    Convexification of Orthogonal Disjunctive Sets
    Convex Extension Property
    Concluding Remarks

5 LIFTED INEQUALITIES FOR 0-1 MIXED-INTEGER BILINEAR COVERING SETS
    Introduction
    Basic Polyhedral Results
    Lifted Inequalities
        Sequence-Independent Lifting for Bilinear Covering Sets
        Lifted Inequalities by Sequence-Independent Lifting
            Lifted bilinear cover inequalities
            Lifted reverse bilinear cover inequalities
        Inequalities through Approximate Lifting
    New Facet-Defining Inequalities for a Single-Node Flow Model
    Concluding Remarks

6 A COMPUTATIONAL STUDY OF LIFTED INEQUALITIES FOR 0-1 BILINEAR COVERING SETS
    Introduction
    Generalization to Bilinear Constraints with Linear Terms
        Generalized Lifted Bilinear Cover Inequalities
        Generalized Lifted Reverse Bilinear Cover Inequalities
    Preliminary Computational Study
        Computational Environments
        Testing Instances
        Separation Procedures
        Numerical Results
    Concluding Remarks

7 CONCLUSIONS AND FUTURE RESEARCH
    Summary of Contributions
    Future Research

APPENDIX

A LINEAR DESCRIPTION OF THE CONVEX HULL OF A BILINEAR SET

B LINEAR DESCRIPTION OF THE CONVEX HULL OF A FLOW SET

REFERENCES

BIOGRAPHICAL SKETCH

LIST OF TABLES

6-1 Parameters of the random instances for three test sets
6-2 Characteristics of the three test sets
6-3 Objective values to the test instances
6-4 Performance of lifted cuts on small size instances
6-5 Performance of lifted cuts on medium size instances
6-6 Performance of lifted cuts on large size instances

LIST OF FIGURES

1-1 Branch-and-Cut framework
2-1 Cutting plane algorithm
Geometric illustration of $S$, $\mathrm{conv}(S)$, $S^1$ and $S^2$
Illustration of Theorem 4.1 with (a) $J_1, J_2 \neq \emptyset$, (b) $J_2 = \emptyset$, (c) $J_1 = J_2 = \emptyset$
Facet-defining inequalities for $\mathrm{conv}(B^I)$
Lifting function $P_C(w)$ of (5-44)
Deriving lifting coefficients for Example
Deriving lifting coefficients for Example
A valid subadditive approximation $\Psi(w)$ of $\Phi(w)$ for Example
Lifting function $L_C(w)$ of (6-19)
Deriving lifting coefficients for Example

Abstract of Dissertation Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy

STRONG VALID INEQUALITIES FOR MIXED-INTEGER NONLINEAR PROGRAMS VIA DISJUNCTIVE PROGRAMMING AND LIFTING

By Kwanghun Chung

August 2010

Chair: Jean-Philippe P. Richard
Major: Industrial and Systems Engineering

Mixed-Integer Nonlinear Programs (MINLP) are optimization problems that have found applications in virtually all sectors of the economy. Although these models can be used to design and improve a large array of practical systems, they are typically difficult to solve to global optimality. In this thesis, we introduce new tools for the solution of such problems. In particular, we develop new procedures to construct convex relaxations of certain MINLP problems. These relaxations are stronger than those currently known for these problems and therefore provide improvements in the solution of MINLPs through branch-and-bound techniques. There are three main components to our contributions.

First, we derive a closed-form characterization of the convex hull of a generic nonlinear set, when the convex hull of this set is completely determined by orthogonal restrictions of the original set. Although the tools used in our derivation include disjunctive programming and convex extensions, our characterization does not introduce additional variables. We develop and apply a toolbox of results to check the technical assumptions under which this convexification tool can be employed. We demonstrate its applicability in integer programming by providing an alternate derivation of the split cut for mixed-integer polyhedral sets and by finding the convex hull of various mixed/pure-integer bilinear sets. We then develop a key result that extends the utility of the convexification tool to relaxing nonconvex inequalities, which are not naturally disjunctive, by providing sufficient conditions for establishing the convex extension property over the non-negative orthant.

We illustrate the utility of this result by deriving the convex hull of a continuous bilinear covering set over the non-negative orthant.

Second, we study the 0-1 mixed-integer bilinear covering set. We show that the convex hull of this set is polyhedral and we provide characterizations for its trivial facets. We also obtain a complete convex hull description when it contains only two pairs of variables. We then derive three families of facet-defining inequalities via sequence-independent lifting techniques. Two of these families have an exponential number of members. Next, we relate the polyhedral structure of the 0-1 mixed-integer bilinear covering set to that of certain single-node flow sets. As a result, we obtain new facet-defining inequalities for flow sets that generalize well-known lifted flow cover inequalities from the integer programming literature.

Third, we evaluate the strength of the lifted inequalities we derive for 0-1 mixed-integer bilinear covering sets inside of a branch-and-cut framework. To this end, we first generalize our theoretical results to bilinear covering sets that have additional linear terms. We then present separation techniques for lifted inequalities and report computational results obtained when using these procedures on several families of randomly generated problems.

CHAPTER 1
INTRODUCTION

In this chapter, we give a brief overview of Mixed-Integer Nonlinear Programming models and their applications. We then describe general methodologies to solve them. After discussing basic concepts in mathematical programming, we describe in more detail the branch-and-bound approach to MINLP. We conclude this chapter by describing the overall structure of this thesis.

1.1 Mixed-Integer Nonlinear Program (MINLP)

1.1.1 Models and Applications

A Mixed-Integer Nonlinear Program (MINLP) is an optimization problem of the form:

$$(P)\quad \begin{array}{ll} \min & f(x) \\ \text{s.t.} & g_i(x) \leq 0, \quad i \in M, \\ & x_j \in \mathbb{Z}_+, \quad j \in I \subseteq N := \{1, \ldots, n\}, \\ & x_j \in \mathbb{R}_+, \quad j \in N \setminus I, \end{array}$$

where
1. $f : \mathbb{R}^n \to \mathbb{R}$,
2. $g_i : \mathbb{R}^n \to \mathbb{R}$, $i \in M$.

Throughout the thesis, we restrict our attention to problems $(P)$ where the functions $f$ and $g_i$ are continuous and factorable.

Definition 1.1 (Factorable Function [89]). A function is factorable if it is defined by a finite recursive composition of binary sums, binary products, and a given collection of univariate intrinsic functions.

For example, the function $f(x) = x_1 e^{x_2} + \cos(x_1 + x_2)\, x_3$ is factorable since it can be expressed as
$$f(x) = S\Big( P\big(x_1, h_1(x_2)\big),\; P\big(h_2(S(x_1, x_2)),\, x_3\big) \Big),$$
where $S(x, y) = x + y$ represents a binary sum, $P(x, y) = x\, y$ represents a binary product, and $h_1(x) = e^x$ and $h_2(x) = \cos(x)$ are intrinsic univariate functions. The class of factorable functions contains most functions encountered in practical applications; see McCormick [86].

We refer to $x \in \mathbb{R}^n$ as the decision variables of $(P)$, to $f(x)$ as the objective function of $(P)$, and to $g_i(x) \leq 0$ for $i \in M$ as the constraints of $(P)$. If there are no constraints (i.e., $M = \emptyset$), we say that problem $(P)$ is unconstrained. We define
$$S := \left\{ x \in \mathbb{Z}_+^{|I|} \times \mathbb{R}_+^{n-|I|} \;\middle|\; g_i(x) \leq 0 \;\; \forall i \in M \right\}$$
to be the feasible region of $(P)$. A vector $x \in S$ is said to be a feasible solution of $(P)$. Further, problem $(P)$ is said to be feasible if $S \neq \emptyset$, and infeasible if $S = \emptyset$. The goal of problem $(P)$ is to find a vector $x^* \in S$, called a (globally) optimal solution of $(P)$, whose objective value $f(x^*)$ is minimal over the set $S$, i.e., $f(x^*) \leq f(x)$ for all $x \in S$. We refer to $f(x^*)$ as the optimal value of $(P)$. A vector $\bar{x} \in S$ is said to be locally optimal if there exists an $\epsilon > 0$ such that $f(\bar{x}) \leq f(x)$ for all $x \in S \cap \{ x \in \mathbb{R}^n \mid \|x - \bar{x}\| \leq \epsilon \}$. In this thesis, we will use the terms optimal and globally optimal interchangeably.

When $f$ is linear (i.e., $f(x) = c^T x$) and all of the functions $g_i$ are affine (i.e., $g_i(x) = (a^i)^T x + b_i$), $(P)$ is said to be a Mixed-Integer Linear Program (MILP). When $I = \emptyset$, $(P)$ is referred to as a Linear Program (LP). When $I = N$, $(P)$ is said to be a Pure Integer Program (IP). Finally, when all variables $x_j$ for $j \in I$ are restricted to be binary, $(P)$ is commonly known as a 0-1 Mixed-Integer Linear Program or Binary Mixed-Integer Linear Program (BMILP).
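As an illustration of Definition 1.1 (a hypothetical sketch, not part of the dissertation), the following Python snippet evaluates the factorable function above using exactly the binary sums, binary products, and intrinsic univariate functions just named:

```python
import math

def S(a, b):  # binary sum
    return a + b

def P(a, b):  # binary product
    return a * b

h1 = math.exp  # intrinsic univariate function e^x
h2 = math.cos  # intrinsic univariate function cos(x)

def f(x1, x2, x3):
    # f(x) = S( P(x1, h1(x2)), P(h2(S(x1, x2)), x3) )
    #      = x1 * e^{x2} + cos(x1 + x2) * x3
    return S(P(x1, h1(x2)), P(h2(S(x1, x2)), x3))

print(f(1.0, 0.0, 2.0))  # 1*e^0 + cos(1 + 0)*2 = 1 + 2*cos(1)
```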

While LP problems can be solved in polynomial time, solving general MILPs is NP-hard; see Cook [35]. Note however that when the number of variables is fixed, Lenstra [76] describes a polynomial-time algorithm for IP.

In MINLP models, continuous variables are typically used to represent physical quantities while binary variables are used to describe managerial decisions. The functions $f(x)$ and $g_i(x)$ are used to capture the (possibly nonlinear) physical relations between these variables. As a result, MINLP problems arise in a wide variety of practical applications and are used to model decision problems in business and engineering. Successful applications of MINLP can be found in a number of fields such as telecommunication networks [25], supply chain design and management [135], portfolio optimization [39], chemical processes [27, 53, 70], protein folding [93], molecular biology [77], quantum chemistry [78], and unit commitment problems [142].

1.1.2 Solution Methodologies to Global Optimization

Global optimization of MINLPs is typically difficult when (1) there are integrality restrictions on a subset of the variables (i.e., $I \neq \emptyset$) and (2) there are nonconvex functions (see the definition in Section 1.2.1) either in the objective or in the constraints. General solution methodologies to obtain globally optimal solutions for MINLPs can be classified as either deterministic or stochastic; see Neumaier [94] for a survey of existing solution methods. Deterministic algorithms include branch-and-bound [51, 85, 103], outer-approximation [48, 65, 68], cutting planes [124, 126], and decomposition [125, 129]. Stochastic approaches include random search [140], genetic algorithms [134], and clustering algorithms [71]. For detailed presentations of these approaches, we refer the interested reader to the books of Horst and Pardalos [67] and Horst and Tuy [69]. In this thesis, we will focus on branch-and-bound approaches for MINLP.

1.2 Preliminaries

In this section, we briefly review fundamental results in mathematical programming that are used throughout this thesis.

1.2.1 Well-Solved Optimization Problems

Since MINLP is known to be NP-hard, it is unlikely that we will ever be able to design an algorithm that solves all instances of $(P)$ to global optimality in polynomial time. However, there are families of problems $(P)$ that can be solved efficiently. We introduce two such families next. To this end, we first introduce the notions of convex set and convex function.

Definition 1.2 (Convex Combination). Let $x^1, \ldots, x^p$ be vectors in $\mathbb{R}^n$. We refer to any point $x$ obtained as $\sum_{j=1}^p \lambda_j x^j$, where $\lambda_j \in \mathbb{R}_+$ for $j = 1, \ldots, p$ and $\sum_{j=1}^p \lambda_j = 1$, as a convex combination of $x^1, \ldots, x^p$.

Definition 1.3 (Convex Set). A set $S \subseteq \mathbb{R}^n$ is said to be convex if, for all $x^1, x^2 \in S$, all convex combinations of $x^1$ and $x^2$ belong to $S$, i.e., $\lambda x^1 + (1 - \lambda) x^2 \in S$ for all $\lambda \in [0, 1]$.

Definition 1.4 (Convex Function). Let $S$ be a nonempty convex subset of $\mathbb{R}^n$. A function $f : S \to \mathbb{R}$ is said to be convex if, for all $x^1, x^2 \in S$,
$$f\big( \lambda x^1 + (1 - \lambda) x^2 \big) \leq \lambda f(x^1) + (1 - \lambda) f(x^2), \quad \forall \lambda \in [0, 1].$$

Convex sets and convex functions can be related in different ways; see Section 3.1 of Bazaraa et al. [24] for a textbook discussion. We present one such relation next.

Definition 1.5 (Level Set). Given a function $f : \mathbb{R}^n \to \mathbb{R}$ and a scalar $\alpha \in \mathbb{R}$, we refer to the set
$$S_\alpha = \{ x \in \mathbb{R}^n \mid f(x) \leq \alpha \}$$
as the $\alpha$-level set of $f$.

Proposition 1.1. Let $f : \mathbb{R}^n \to \mathbb{R}$ be a convex function. Then, the $\alpha$-level set of $f$ is a convex set for each value of $\alpha \in \mathbb{R}$.
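A quick numerical companion to Definition 1.4 (an illustrative sketch, not from the thesis): sampling random pairs of points can refute convexity with a single violated inequality, although passing every sample proves nothing.

```python
import numpy as np

def convex_on_samples(f, dim, trials=1000, seed=0):
    # Test f(lam*x1 + (1-lam)*x2) <= lam*f(x1) + (1-lam)*f(x2)
    # on random pairs (Definition 1.4).  One violation refutes convexity;
    # passing all trials is only evidence, not a proof.
    rng = np.random.default_rng(seed)
    for _ in range(trials):
        x1, x2 = rng.normal(size=dim), rng.normal(size=dim)
        lam = rng.uniform()
        lhs = f(lam * x1 + (1 - lam) * x2)
        if lhs > lam * f(x1) + (1 - lam) * f(x2) + 1e-9:
            return False
    return True

print(convex_on_samples(lambda x: float(x @ x), dim=3))   # True:  ||x||^2
print(convex_on_samples(lambda x: -float(x @ x), dim=3))  # False: concave
```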

We now focus on a subfamily of problems $(P)$ of the form
$$(CP)\quad \min \; f(x) \quad \text{s.t.} \;\; g_i(x) \leq 0, \; i \in M, \quad x_j \in \mathbb{R}_+, \; j \in N,$$
where the functions $f(x)$ and $g_i(x)$ for $i \in M$ are convex. Proposition 1.1 implies that the feasible region of $(CP)$ is a convex set, since the intersection of convex sets is convex. We refer to such problems as convex programs. While $(CP)$ cannot typically be solved with an analytical formula, it has many good properties that make finding a globally optimal solution easier than for other problems $(P)$. In particular, it can be shown that every locally optimal solution of $(CP)$ is also globally optimal. There are various methods to solve $(CP)$ to global optimality; see Boyd and Vandenberghe [29] and Nesterov and Nemirovskii [92]. We briefly comment on two of these methods.

The ellipsoid method was formally developed by Yudin and Nemirovski [136], although similar ideas had been introduced earlier by Shor [112]. The ellipsoid method generates a sequence of ellipsoids of decreasing volume that contain an optimal solution of the problem. At each iteration, the algorithm splits the current ellipsoid in half and uses problem information to determine which half contains an optimal solution. A new ellipsoid (of smaller volume) is then built around the selected half-ellipsoid and the process is iterated.

Interior point algorithms form another family of solution approaches for convex programs. The idea originates from the work of Fiacco and McCormick [52] in the 1960s. Among others, the authors include barrier functions in the objective to take into account the feasible region of the problem $(CP)$. Although progress on these techniques remained limited through the 1980s, the discovery of a polynomial-time algorithm for linear programs by Karmarkar [73] led to a revival of interest in barrier methods. In particular, Nesterov and Nemirovskii [92] later showed that polynomial-time convergence can be achieved for any convex program that can be equipped with an easily computable self-concordant barrier function.

Simple self-concordant barriers are known for many convex programs; see Nesterov and Nemirovskii [92]. As a result, convex programs are typically thought to be simple optimization problems to solve.

A particular type of convex program that is very simple is the linear programming problem. This problem is a variant of $(P)$ of the form:
$$(LP)\quad \min \; c^T x \quad \text{s.t.} \;\; Ax \geq b, \;\; x \in \mathbb{R}^n_+.$$
Before we discuss algorithms to solve LPs, we introduce some basic concepts of polyhedral theory that we will use later in this thesis.

Definition 1.6 (Polyhedron and Polytope). A polyhedron $Q \subseteq \mathbb{R}^n$ is a set of points in $\mathbb{R}^n$ that can be described as the intersection of a finite number of half-spaces, i.e.,
$$Q = \{ x \in \mathbb{R}^n \mid Ax \geq b \}, \qquad (1\text{-}1)$$
where $A \in \mathbb{R}^{m \times n}$ and $b \in \mathbb{R}^m$. A polyhedron is said to be bounded if there exists $M \in \mathbb{R}_+$ such that $\sup \{ \|x\| \mid x \in Q \} < M$. We typically refer to a bounded polyhedron as a polytope.

It is clear that the feasible region of an LP is a polyhedron. When studying MILPs, we will typically consider rational polyhedra, i.e., polyhedra that can be defined with $A \in \mathbb{Q}^{m \times n}$ and $b \in \mathbb{Q}^m$. When studying a polyhedron, some feasible solutions are of particular interest.

Definition 1.7 (Extreme Point). A point $x$ in a polyhedron $Q$ is said to be an extreme point of $Q$ if whenever $x = \frac{1}{2} x^1 + \frac{1}{2} x^2$ for some $x^1, x^2 \in Q$, then $x = x^1 = x^2$.

Definition 1.8 (Extreme Ray). Given the nonempty polyhedron $Q$ defined in (1-1), we define the recession cone of $Q$ as $Q^0 = \{ r \in \mathbb{R}^n \mid Ar \geq 0 \}$. A non-zero vector $r$ in $Q^0$ is said to be a ray of $Q$. Further, a ray $r$ is said to be an extreme ray of $Q$ if whenever $r = \frac{1}{2} r^1 + \frac{1}{2} r^2$ for some $r^1, r^2 \in Q^0$, then $r = r^1 = r^2$.

Polyhedra can be represented using extreme points and extreme rays, as presented in Theorem 1.1.

Theorem 1.1 (Minkowski's Theorem [88]). If $Q$ is a nonempty polyhedron as defined in (1-1) and $\mathrm{rank}(A) = n$, then
$$Q = \left\{ x \in \mathbb{R}^n \;\middle|\; x = \sum_{k \in K} \lambda_k x^k + \sum_{j \in J} \mu_j r^j, \;\; \sum_{k \in K} \lambda_k = 1, \;\; \lambda_k \geq 0 \; \forall k \in K, \;\; \mu_j \geq 0 \; \forall j \in J \right\},$$
where $\{x^k\}_{k \in K}$ is the set of extreme points of $Q$ and $\{r^j\}_{j \in J}$ is the set of extreme rays of $Q$.

Using Theorem 1.1, we can easily verify the following result.

Theorem 1.2. If $(LP)$ has an optimal solution, then at least one of the extreme points of its feasible region must be an optimal solution.

Using the fact that an optimal solution to $(LP)$ can be found among the extreme points of its feasible region, Dantzig [44] developed in 1947 the first algorithm to solve general LPs: the simplex algorithm. We mention that Kantorovich had proposed earlier, in 1939, a method to solve a restricted form of LPs; see [72] for a translation.

The simplex algorithm relies on the observation that every extreme point of LPs in the standard form
$$\min \left\{ c^T x \;\middle|\; Ax = b, \; x \in \mathbb{R}^n_+ \right\} \qquad (1\text{-}2)$$
can be computed as
$$x_B = A_B^{-1} b, \qquad x_{\bar{N}} = 0, \qquad (1\text{-}3)$$
where $B \subseteq N$, $\bar{N} = N \setminus B$, $A_B$ is an invertible submatrix of $A$ formed by the columns of $A$ corresponding to $B$, and $A_B^{-1} b \geq 0$. In (1-3), the variables $x_j$ for $j \in B$ are called basic while the variables $x_j$ for $j \in \bar{N}$ are called nonbasic. The simplex algorithm searches for an optimal solution of $(LP)$ by creating a sequence of bases $B_1, B_2, \ldots, B_k$ such that (i) $|B_i \cap B_{i+1}| = |B_i| - 1 = |B_{i+1}| - 1$ and (ii) the basic solutions corresponding to the $B_i$'s are feasible and have nonincreasing objective values. The operation of moving from one basis to the next is called pivoting. Using an appropriate pivoting strategy such as Bland's rule [28], the simplex algorithm obtains an optimal solution to $(LP)$ in a finite number of iterations. Over the years, many different pivoting rules have been developed, but none of them has been shown to provide a polynomial-time algorithm for LPs. Nonetheless, the simplex algorithm is typically very efficient at solving practical LP problems.

In 1979, Khachiyan [74] proposed the first polynomial-time algorithm for LPs. This algorithm is a specialized variant of the ellipsoid algorithm. Although the practical performance of this algorithm is poor, it is remarkable in that its running time does not depend directly on the number of constraints the LP has. This feature has important consequences in the study of integer programs that we will comment on in Section 2.2. Karmarkar [73] introduced the first algorithm for the solution of LPs that has good performance in both theory and practice. Improvements and variants of this algorithm were subsequently discovered; see Wright [133]. Nowadays, commercial software such as CPLEX [40] uses a combination of the simplex algorithm and interior point methods to solve LPs and can solve large instances of practical problems very quickly; see Mittelmann [90]. As a result, LP solvers can be used as the workhorse for the solution of other, more difficult problems.
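As a concrete illustration (a sketch assuming SciPy's linprog, which is not a tool referenced in the thesis), the small LP below is solved off the shelf, and the solver returns an extreme-point optimum, as Theorem 1.2 guarantees one must exist:

```python
from scipy.optimize import linprog

# min c^T x  s.t.  Ax >= b, x >= 0, written as -Ax <= -b for linprog.
c = [1.0, 2.0]
A = [[1.0, 1.0],   # x1 +   x2 >= 2
     [1.0, 3.0]]   # x1 + 3*x2 >= 3
b = [2.0, 3.0]

res = linprog(c,
              A_ub=[[-a for a in row] for row in A],
              b_ub=[-bi for bi in b],
              bounds=[(0, None)] * 2,
              method="highs")
print(res.x, res.fun)  # extreme point (1.5, 0.5), optimal value 2.5
```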

1.2.2 Relaxations and Convexifications

One of the ways to prove that $z$ is the optimal value of $(P)$ is to show that $z$ is both a lower and an upper bound on the optimal value $z^*$. Upper bounds (also called primal bounds) can be obtained from any feasible solution $x^F \in S$ since $z^* \leq f(x^F)$. To obtain tight upper bounds, we need to find good feasible solutions, which can be difficult depending on the original problem $(P)$. Heuristic approaches are typically used for this purpose. Finding lower bounds (also called dual bounds) requires other techniques. A common approach is to use relaxations. We give a formal definition of relaxation next.

Definition 1.9 (Relaxation). Given an optimization problem
$$(P)\quad z = \min \{ f(x) \mid x \in S \},$$
the related optimization problem
$$(RP)\quad z_R = \min \{ \bar{f}(x) \mid x \in R \}$$
is said to be a relaxation of $(P)$ if
1. $S \subseteq R$,
2. $\bar{f}(x) \leq f(x)$ for all $x \in S$.

Definition 1.9 states that relaxations can be obtained in two ways: (i) by enlarging the feasible region $S$ and/or (ii) by underestimating the objective function $f(x)$ over $S$. Lower bounds can be obtained by solving relaxations, as the following result states.

Proposition 1.2. If $(RP)$ is a relaxation of $(P)$, then $z_R \leq z$.

Although optimal solutions of relaxations are not always optimal for the original problem, they sometimes are. The following result handles this case.

Proposition 1.3. If $x^*$ is an optimal solution of $(RP)$, $x^* \in S$, and $\bar{f}(x^*) = f(x^*)$, then $x^*$ is an optimal solution of $(P)$.

The derivation of a relaxation is particularly useful if the problem associated with the relaxation is substantially easier to solve than the original problem and the relaxation value $z_R$ is close to $z$.

Given the fact that convex programs are typically easy to solve, it makes sense to study how to construct convex relaxations of optimization problems. To obtain the tightest possible relaxation bound, it is best to replace $S$ by the smallest convex set that contains $S$, which is called its convex hull. An alternate definition is as follows.

Definition 1.10 (Convex Hull). Let $S \subseteq \mathbb{R}^n$. We refer to the set of all convex combinations of points in $S$, which we denote by $\mathrm{conv}(S)$, as the convex hull of $S$.

When underestimating the objective function $f(x)$, it is also clear that, to obtain the tightest relaxation possible, we should replace $f(x)$ with its tightest convex lower approximation, which is commonly known as its convex envelope; see Falk [49], Rockafellar [102], Horst [66], and Horst and Tuy [69].

Definition 1.11 (Convex Envelope). Let $S \subseteq \mathbb{R}^n$ be convex and compact, and let $f : S \to \mathbb{R}$ be lower semi-continuous on $S$. A function $\mathrm{convenv}(f) : S \to \mathbb{R}$ is called the convex envelope of $f$ on $S$ if it satisfies
1. $\mathrm{convenv}(f)$ is convex on $S$,
2. $\mathrm{convenv}(f)(x) \leq f(x)$ for all $x \in S$,
3. there is no function $g : S \to \mathbb{R}$ satisfying (1), (2), and $\mathrm{convenv}(f)(\bar{x}) < g(\bar{x})$ for some $\bar{x} \in S$.

Note that it is easily seen from Condition 3 that the convex envelope is uniquely determined, if it exists.

In theory, a very strong relaxation of problem $(P)$ can be obtained by replacing the feasible region $S$ with $\mathrm{conv}(S)$ and the objective function $f(x)$ with $\mathrm{convenv}(f)$. However, such a construction is typically not practical, as deriving convex hulls of nonconvex sets and convex envelopes of nonconvex functions is often difficult. Therefore, simpler and weaker convex relaxations are typically derived. We call these relaxations convexifications.
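The following hypothetical example makes Proposition 1.2 concrete: dropping the integrality restrictions of a two-variable integer program enlarges $S$ and can only decrease the optimal value. This is exactly the LP relaxation formalized after the next definition; SciPy's linprog is an assumption of the sketch.

```python
from itertools import product
from scipy.optimize import linprog

# (P):  z  = min -5*x1 - 3*x2  s.t.  3*x1 + 2*x2 <= 4,  x in Z_+^2
# (RP): zR = same objective over x in R_+^2  (S is enlarged)
c = [-5.0, -3.0]
A_ub, b_ub = [[3.0, 2.0]], [4.0]

zR = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2,
             method="highs").fun
z = min(c[0] * x1 + c[1] * x2                 # brute-force integer optimum
        for x1, x2 in product(range(3), repeat=2)
        if 3 * x1 + 2 * x2 <= 4)
print(zR, z)  # zR = -20/3 <= z = -6, as Proposition 1.2 requires
```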

Definition 1.12 (Convexification). Given a nonconvex problem
$$(NCP)\quad z = \min \; f(x) \quad \text{s.t.} \;\; x \in S,$$
a problem
$$(CRP)\quad z_R = \min \; \bar{f}(x) \quad \text{s.t.} \;\; x \in R$$
is said to be a convexification of $(NCP)$ if
1. $\bar{f}(x)$ is a convex underestimator of $f(x)$,
2. $S \subseteq R$ and $R$ is convex.

In Definition 1.12, we require a convexification to have both a convex objective function and a convex feasible region. It therefore can be solved by a variety of algorithms. Since LPs can be solved extremely efficiently, it is often helpful to require in Definition 1.12 that $\bar{f}(x)$ be linear and that $R$ be polyhedral. If so, we refer to the resulting convexification as a linearization. Since every convex set can be represented as an intersection of (possibly infinitely many) half-spaces, a linearization can always be constructed from a convexification. Linearizations have typically been preferred to general convexifications in commercial solvers (see Adjiman et al. [3], LINDO Systems Inc. [80], Sahinidis and Tawarmalani [105], and Belotti et al. [26]) because they tend to be faster and the resulting algorithms are more stable.

An example of a convexification for MILPs of the form
$$(MILP)\quad \min \; c^T x \quad \text{s.t.} \;\; Ax = b, \;\; x \in \mathbb{Z}_+^{|I|} \times \mathbb{R}_+^{n-|I|}$$
is the linear program
$$(RMILP)\quad \min \; c^T x \quad \text{s.t.} \;\; Ax = b, \;\; x \in \mathbb{R}^n_+$$
obtained by dropping the integrality restrictions on the variables $x_j$ for $j \in I$. This relaxation is called the LP relaxation of $(MILP)$. We will describe general methods to generate convexifications for MILPs and MINLPs in Chapter 2. For detailed discussions of convexification techniques, we refer the interested reader to the book by Tawarmalani and Sahinidis [121].

1.3 Branch-and-Cut in MINLP

For a nonconvex MINLP problem, where $f$ and/or the $g_i$ are nonconvex, finding globally optimal solutions is a challenging problem that has attracted much attention. Branch-and-bound is one of the methods described in Section 1.1.2 that have been proposed for solving this problem. Branch-and-bound methods are implicit enumeration techniques based on the divide-and-conquer strategy and the concept of convexification. A globally optimal solution of the convexification is first obtained. If it satisfies the conditions of Proposition 1.3, it is optimal for the problem. Otherwise, the relaxed solution only yields a lower bound on $z$. When this happens, the feasible region is divided into non-overlapping subsets for which stronger convex relaxations can be built. An optimal solution to the initial problem can then be obtained by selecting the best among the globally optimal solutions of the subproblems. Since the subproblems are likely to be nonconvex themselves, their globally optimal solutions are obtained by applying the procedure recursively. As a result, a tree of subproblems is created, called the branch-and-bound tree. There are three cases in which the branch-and-bound search at the current node is stopped, an operation known as fathoming of a node:
1. the relaxation is infeasible,
2. the objective value of the current relaxation is larger than the value of a known feasible solution,
3. the solution of the relaxation is globally optimal for the subproblem; see Proposition 1.3.
The branch-and-bound process terminates when all nodes are fathomed (i.e., when the lower bound $z_L$ is equal to the upper bound $z_U$).

In MILP, this process is finite (i.e., $z_L = z_U$ occurs in a finite number of steps) and convergent (i.e., $z_U - z_L \to 0$) when variables are bounded. In MINLP, for a given tolerance $\epsilon > 0$, the search process typically terminates when $z_U - z_L \leq \epsilon$. Provided that the convexification used in the tree is finitely consistent, i.e., any unfathomed partition can be further refined at every iteration, the branch-and-bound process terminates after finitely many steps; see Horst and Tuy [69].

Land and Doig [75] introduced in 1960 the first branch-and-bound algorithm for pure integer linear programs. Dakin [43] and Driebeek [47] extended it to mixed-integer linear programming problems. Since then, branch-and-bound has become a general solution method in MILP that has been successfully implemented in commercial software such as CPLEX [40]. In MILP problems, branch-and-bound proceeds by recursively solving LP relaxations of the problem (see Section 1.2.1). Since LP relaxations can be weak, new linear inequalities derived from the problem structure are typically added to cut off fractional solutions. These additional valid inequalities are called cuts or cutting planes. The use of cuts is known to be one of the most important ingredients in the efficient solution of MILPs with branch-and-bound. The addition of cuts inside the branch-and-bound framework yields a family of methods called branch-and-cut; see Martin [84].

Falk and Soland [51] introduced nonlinear branch-and-bound for continuous global optimization. For factorable nonconvex problems, McCormick [85] proposed a convexification scheme under the assumption that tight convex and concave envelopes are known for the underlying univariate functions. Ryoo and Sahinidis [103] introduced a branch-and-reduce algorithm that uses domain reduction techniques during the search. Androulakis et al. [5] developed αBB, a branch-and-bound method for twice-differentiable functions. Tawarmalani and Sahinidis [122] introduced the idea of building and solving polyhedral relaxations in branch-and-bound for global optimization, and Tawarmalani and Sahinidis [123] implemented this idea.

Currently, nonlinear branch-and-bound methodologies have been implemented in various global optimization software packages; see Adjiman et al. [3], Sahinidis and Tawarmalani [105], LINDO Systems Inc. [80], and Belotti et al. [26].

Branch-and-cut is not a specific algorithm but a general framework, since it relies on four main components that can be adapted. These four components are: bounding, which obtains lower and upper bounds on the optimal value of relaxations; branching, which divides a problem into smaller subproblems; cutting, which adds valid inequalities to formulations; and domain reduction, also known as bound tightening, which reduces the search region. A key component in the success of a branch-and-cut algorithm is the quality of the bounds obtained from the relaxation. To obtain better bounds, it is necessary to develop tighter convexifications. This is the ultimate goal of this thesis, as we will discuss in Chapters 2 and 3. Next, we describe in more detail the branch-and-cut framework to illustrate the setting in which our results are applied; see Figure 1-1. We discuss each of its components in the following sections.

1.3.1 Bounding Scheme

At every branch-and-bound node, both lower and upper bounds on the optimal value are computed and/or updated. Upper bounds are obtained from feasible solutions that are found using upper bounding procedures or heuristic algorithms. Lower bounds are computed through the solution of a convexification of the problem.

1.3.2 Branching Scheme

For MILPs, dividing feasible regions into subproblems is simple. Assuming an LP relaxation has been solved and the optimal solution $x^*$ is fractional, we can choose any integer variable $x_i$ whose optimal value $x_i^*$ is fractional and then create two subproblems: one obtained by adding the constraint $x_i \leq \lfloor x_i^* \rfloor$ and the other obtained by adding the constraint $x_i \geq \lfloor x_i^* \rfloor + 1$. This scheme, called dichotomy branching, ensures that the current LP solution does not survive in any of the subsequent convexifications of the subproblems and therefore ensures that the branch-and-bound search progresses. We mention that other branching schemes such as GUB branching or constraint branching can be used.

Input: Problem (P) and set of integer variables I
Output: An optimal solution x* with optimal value z*

Initialization: L ← {P}, x* ← ∅, z* ← +∞;
while L ≠ ∅ do
    Check termination criteria;
    Update list L: if z^{P_i} ≥ z* for some P_i ∈ L, then L ← L \ {P_i};
    Node Selection: select P_i ∈ L and let L ← L \ {P_i};
    Domain Reduction: tighten the bounds on the variables of P_i;
    Construct a convex relaxation RP_i of P_i;
    while Cut Generation needs to be performed do
        Obtain z^{RP_i} and x^{RP_i} by solving RP_i;
        Pruning: by Infeasibility; by Bounds (if z^{RP_i} ≥ z*); by Global Feasibility;
        Cut Generation: if there is a violated cut, then add it to the formulation;
    end
    Primal Heuristics;
    Branching: choose a variable x_j; choose a branching point x_j^b;
        create subproblems P_{i-} and P_{i+}; L ← L ∪ {P_{i-}, P_{i+}};
end

Figure 1-1. Branch-and-Cut framework
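A minimal executable specialization of Figure 1-1 for MILPs (an illustrative sketch, not the framework implemented in this thesis): LP relaxations provide the bounding step and the dichotomy branching of Section 1.3.2 provides the branching step, while cutting, domain reduction, and primal heuristics are omitted. SciPy's linprog is an assumption.

```python
import math
from scipy.optimize import linprog

def branch_and_bound(c, A_ub, b_ub, bounds, int_idx, tol=1e-6):
    """min c^T x s.t. A_ub x <= b_ub, given variable bounds,
    with x_j integer for j in int_idx (pure branch-and-bound)."""
    best_x, best_z = None, math.inf
    stack = [bounds]                      # one variable-bound list per node
    while stack:
        nb = stack.pop()
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=nb, method="highs")
        if not res.success or res.fun >= best_z:   # fathom: infeasible/bound
            continue
        frac = [j for j in int_idx
                if abs(res.x[j] - round(res.x[j])) > tol]
        if not frac:                               # fathom: integral optimum
            best_x, best_z = res.x, res.fun
            continue
        j, v = frac[0], res.x[frac[0]]             # dichotomy branching
        lo, hi = list(nb), list(nb)
        lo[j] = (nb[j][0], math.floor(v))          # x_j <= floor(v)
        hi[j] = (math.floor(v) + 1, nb[j][1])      # x_j >= floor(v) + 1
        stack += [lo, hi]
    return best_x, best_z

x, z = branch_and_bound([-5.0, -3.0], [[3.0, 2.0]], [4.0],
                        [(0, 10), (0, 10)], int_idx=[0, 1])
print(x, z)  # optimal solution (0, 2) with value -6
```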

Observe that even when only applying dichotomy branching, algorithmic decisions must be made about the selection of both the branching variable (i.e., which fractional variable will be branched on) and the branching point; see Achterberg et al. [1] and Linderoth and Savelsbergh [79]. In MILP, while the latter is straightforward, the former is not, and different strategies might result in dramatically different trees. Similar approaches can be used in MINLP. In the selection of branching variables, integer variables typically take priority over continuous variables. Hence, if there are integer variables with fractional values, then one of these variables is selected first for branching. To select among several integer variables, standard MILP techniques are used. Note that it could happen that $x_i^*$ has integer values for all $i \in I$, but $x^*$ is not feasible for the other relaxed constraints. Hence, a measure of infeasibility for solutions is introduced in MINLP. To select a branching variable among continuous variables, Tawarmalani and Sahinidis [122] propose to use violation transfer, and Belotti et al. [26] extend the reliability branching used in MILP. After the selection of the branching variable, the branching point can be chosen using several rules such as the bisection rule, the ω rule, or other variants [103, 109, 116]. For bilinear programs, an alternative selection rule for the branching point is provided in [116].

1.3.3 Cutting Scheme

Since the initial relaxation created at the root node is typically weak, it is important to improve it by adding strong inequalities. In MILP, this can be done through the addition of cutting planes that separate a fractional solution from the feasible region. We will discuss strong valid inequalities for MILPs and will describe two well-known tools to generate them in Section 2.2. Similarly, the performance of the branch-and-bound search in MINLP can be improved if relaxations are tightened using strong inequalities. While cuts must be linear inequalities in MILP, convex constraints can also be used in MINLP as long as they are valid and improve bounds; see Tawarmalani and Sahinidis [123].

1.3.4 Domain Reduction

Domain reduction for a variable $x$ is the process of reducing the interval $[x^l, x^u]$ over which $x$ is considered while guaranteeing that an optimal solution is not cut off. As the search space is reduced through this procedure, the relaxations obtained typically become stronger. One such procedure is optimality-based range reduction, which uses the current linearization to improve the bounds on variables; see Shectman and Sahinidis [109] and Zamora and Grossmann [137]. It is typically used for the auxiliary variables introduced in the reformulation phase and is applied only at the root node or up to a limited depth. On the other hand, feasibility-based range reduction, similar to interval propagation in Constraint Programming, is performed at all nodes of the tree; see Shectman and Sahinidis [109]. Domain reduction has also been widely used in MILP; see Savelsbergh [106]. Belotti et al. [26] developed aggressive bound tightening, which is similar to probing techniques in MILP [106, 120]. Reduced-cost bound tightening, introduced for solving MILP problems [91], has also been extended to MINLP by Ryoo and Sahinidis [103].

1.4 Outline of the Dissertation

In this thesis, we introduce new tools to improve the convexifications used in MINLP. In particular, we study nonlinear sets that appear as relaxations of MINLP problems. The overall structure of the thesis is as follows.

In Chapter 2, we give an overview of techniques that are used in integer programming and global optimization to produce convexifications of nonconvex sets. We focus on factorable relaxation techniques since they are most related to our work. We also describe how to generate strong cutting planes for general MILP problems using disjunctive programming and lifting techniques in Sections 2.2.1 and 2.2.2.

In Chapter 3, we motivate the problems that are addressed in this thesis. Then, we provide formal problem statements for the following chapters.

In Chapter 4, we propose a convexification tool that constructs the convex hulls of orthogonal disjunctive sets using convex extensions and disjunctive programming; see Chapter 2 for an introduction to these techniques. We discuss the technical assumptions under which this convexification tool can be used. In particular, we provide sufficient conditions for establishing the convex extension property. The convexification tool is then applied to obtain explicit convex hulls of various bilinear covering sets over the nonnegative orthant. It is, in general, widely applicable to problems where variables do not have upper bounds.

In Chapter 5, we study 0-1 mixed-integer bilinear covering sets to investigate how bounds on the variables affect the derivation of cuts. We derive large families of facet-defining inequalities via sequence-independent lifting techniques; see Chapter 2 for an introduction to lifting techniques. We show that these sets have polyhedral structures that are similar to those of certain single-node flow sets. In particular, we prove that the facet-defining inequalities we develop generalize well-known lifted flow cover inequalities from the integer programming literature.

In Chapter 6, we present a computational study that evaluates the strength of the lifted inequalities derived in Chapter 5. We first generalize the lifted inequalities of Chapter 5 to a more general form of bilinear covering sets that includes linear terms on the variables. This extension is necessary to account for the linear terms introduced during the branch-and-bound process. We discuss implementation details and experimental results.

In Chapter 7, we summarize the main results of this thesis and conclude with directions for future research.

CHAPTER 2
CONVEX RELAXATIONS IN MILP AND MINLP

In this chapter, we describe methods to generate convex relaxations of MILPs and MINLPs, focusing on the techniques that are most related to our work. In Section 2.1, we describe how to build convex relaxations of nonconvex MINLP problems. Then, in Section 2.2, we give an overview of how disjunctive programming and lifting techniques can be used to generate improved formulations of MILPs. The tools described in Sections 2.1 and 2.2 will be used in Chapters 4, 5, and 6.

2.1 Convexification Methods in MINLP

Constructing strong convex relaxations of nonconvex problems is a central problem in developing branch-and-cut frameworks for nonconvex MINLPs. In this section, we describe general convexification methods that are used in commercial global optimization solvers. Note that, given a nonconvex problem of the form
$$\min \; f(x) \quad \text{s.t.} \;\; g_i(x) \leq 0, \; i \in M,$$
a simple convex relaxation can be obtained by relaxing each inequality into a convex constraint and replacing $f$ with a convex underestimator. In particular, if $\underline{g}_i(x)$ is a convex underestimator of $g_i(x)$ and $\underline{f}(x)$ is a convex underestimator of $f(x)$, the relaxation
$$\min \; \underline{f}(x) \quad \text{s.t.} \;\; \underline{g}_i(x) \leq 0, \; i \in M,$$
is a convex optimization problem. Among convex underestimators, convex envelopes are strongest. Therefore, the ability to construct convex envelopes of nonlinear functions is an essential ingredient in the derivation of strong convexifications of MINLPs.

2.1.1 Convex Envelopes and Convex Extensions

In the global optimization literature, convex envelopes have been developed for special classes of functions over special polytopes. For detailed discussions, we refer the interested reader to the books of Horst and Tuy [69] and Tawarmalani and Sahinidis [121]. First, we describe how convex envelopes of sums of functions can be obtained from the sum of the convex envelopes of the individual functions.

Theorem 2.1 (Al-Khayyal and Falk [4]). Let $Q = \prod_{j=1}^r Q_j$ be the cartesian product of $r$ compact $n_j$-dimensional rectangles $Q_j$ for $j = 1, \ldots, r$ satisfying $\sum_{j=1}^r n_j = n$. Assume that $f : Q \to \mathbb{R}$ is of the form $f(x) = \sum_{j=1}^r f_j(x^j)$, where $f_j : Q_j \to \mathbb{R}$ is lower semi-continuous on $Q_j$ for $j = 1, \ldots, r$. Then, the convex envelope of $f$ on $Q$ is obtained as the sum of the convex envelopes of the $f_j$ on $Q_j$, i.e.,
$$\mathrm{convenv}(f) = \sum_{j=1}^r \mathrm{convenv}(f_j)(x^j).$$

Next, we present two fundamental results developed by Falk and Hoffman [50] and Horst [66].

Theorem 2.2. Let $Q$ be a polytope with vertices $v^1, \ldots, v^k$. Let $f : Q \to \mathbb{R}$ be a concave function on $Q$. Then, the convex envelope of $f$ can be computed as
$$\mathrm{convenv}(f)(x) = \min_\lambda \left\{ \sum_{j=1}^k \lambda_j f(v^j) \;\middle|\; \sum_{j=1}^k \lambda_j v^j = x, \;\; \sum_{j=1}^k \lambda_j = 1, \;\; \lambda_j \geq 0, \; j = 1, \ldots, k \right\}.$$

The following result immediately follows.

Theorem 2.3. Let $Q$ be an $n$-simplex generated by the vertices $v^0, v^1, \ldots, v^n$, and let $f : Q \to \mathbb{R}$ be a concave function on $Q$. Then, the convex envelope of $f$ is the affine function $\varphi(x) = a^T x + b$, where $a \in \mathbb{R}^n$ and $b \in \mathbb{R}$ are uniquely determined by the system of linear equations
$$f(v^i) = a^T v^i + b, \quad i = 0, 1, \ldots, n.$$

It follows from Theorem 2.3 that it is especially easy to construct the convex envelope of a univariate concave function $f : \mathbb{R} \to \mathbb{R}$ over an interval $[l, u]$. This is because the graph of the convex envelope is simply the line segment connecting the points $(l, f(l))$ and $(u, f(u))$.

Among the set of all multivariate functions, multilinear functions are of particular importance, as we will see in Section 2.1.2. Convex envelopes of multilinear functions were studied by Crama [41] and Rikun [101]. We next give a formal definition of a multilinear function.

Definition 2.1 (Multilinear). A function $f(x^1, \ldots, x^k)$ is said to be multilinear if, for each $i = 1, \ldots, k$, $f(\bar{x}^1, \ldots, x^i, \ldots, \bar{x}^k)$ is a linear function of the vector $x^i$ when the components of the other $k-1$ vectors are fixed to $x^j = \bar{x}^j$ for $j \neq i$.

Rikun [101] studied multilinear functions $f(x)$ of $x = (x^1, \ldots, x^k)$ defined on the cartesian product of polytopes, where $x \in Q = \prod_{j=1}^k Q_j$ and $x^j \in Q_j \subseteq \mathbb{R}^{n_j}$ for $j = 1, \ldots, k$.

Definition 2.2 (Associated Affine Function). Let $f(x)$ be a multilinear function defined on $\prod_{j=1}^k \mathbb{R}^{n_j}$. For the function $f(x)$ and any given point $\xi = (\xi^1, \ldots, \xi^k)$, where $\xi^j \in \mathbb{R}^{n_j}$ for $j = 1, \ldots, k$, the associated affine function $f_\xi(x)$ is defined as
$$f_\xi(x) = \sum_{j=1}^k f(\xi^1, \ldots, \xi^{j-1}, x^j, \xi^{j+1}, \ldots, \xi^k) - (k-1) f(\xi). \qquad (2\text{-}1)$$

Rikun [101] showed that the convex envelope of a multilinear function over the cartesian product of polytopes is polyhedral.

Theorem 2.4 (Rikun [101]). Let $f : Q \to \mathbb{R}$ be a multilinear function defined on the cartesian product of polytopes $Q = \prod_{j=1}^k Q_j$, where $x^j \in Q_j \subseteq \mathbb{R}^{n_j}$ for $j = 1, \ldots, k$. Let $\xi = (\xi^1, \ldots, \xi^k)$ be a vertex of $Q$, i.e., $\xi^j \in \mathrm{vert}(Q_j)$, and let the associated affine function (2-1) satisfy
$$f_\xi(x) \leq f(x), \quad \forall x \in \mathrm{vert}(Q).$$
Then, the affine function $f_\xi(x)$ is an element of the convex envelope of $f(x)$.

To facilitate the construction of convex envelopes of nonconvex functions, Tawarmalani and Sahinidis [120] introduced the notion of convex extensions. This notion generalizes a similar concept introduced by Crama [41].

Definition 2.3 (Convex Extensions). Let $S$ be a convex set and $X \subseteq S$. A convex extension of a function $\varphi : X \to \mathbb{R}$ over $S$ is defined as a convex function $\psi : S \to \mathbb{R}$ such that $\varphi(x) = \psi(x)$ for all $x \in X$.

Note that convex extensions are neither always constructible nor unique. The following result describes conditions under which a convex extension can be constructed.

Theorem 2.5 (Tawarmalani and Sahinidis [120]). A convex extension of a function $\varphi : X \to \mathbb{R}$ over a convex set $S \supseteq X$ can be constructed if and only if
$$\varphi(x) \leq \min \left\{ \sum_{j=1}^n \lambda_j \varphi(x^j) \;\middle|\; \sum_{j=1}^n \lambda_j x^j = x, \;\; \sum_{j=1}^n \lambda_j = 1, \;\; x^j \in X, \;\; \lambda_j \in [0, 1], \; j = 1, \ldots, n \right\}$$
for all $x \in X$.

Note that for complicated functions, finding convex envelopes might be difficult. Next, we describe a general scheme that produces convex relaxations of factorable functions.

2.1.2 Reformulation and Relaxation

Convexifications are often obtained in two steps: reformulation and relaxation. The first step converts the original problem into an equivalent formulation that is easier to study; the second step constructs a convex relaxation by relaxing the nonconvex terms in the reformulated problem.

First, we describe a general reformulation scheme for functions that are factorable; see Definition 1.1. In fact, factorable functions can be reformulated by introducing auxiliary variables using the recursive algorithms presented in Tawarmalani and Sahinidis [121]. To illustrate the idea, consider a factorable function $f(x)$ given as the following sum of products of univariate functions:
$$f(x) = \sum_{j=1}^2 \prod_{k=1}^2 h_{jk}(x).$$
In this case, we can reformulate $f(x)$ by introducing auxiliary variables $y_j$ to represent each term of the summation and auxiliary variables $y_{jk}$ to represent the factors of the products, respectively, i.e., $f(x) = y$ with
$$y = \sum_{j=1}^2 y_j, \qquad (2\text{-}2)$$
$$y_j = \prod_{k=1}^2 y_{jk}, \quad j = 1, 2, \qquad (2\text{-}3)$$
$$y_{jk} = h_{jk}(x), \quad j = 1, 2, \; k = 1, 2. \qquad (2\text{-}4)$$
Note that this reformulation lifts the original problem into a higher-dimensional space by introducing auxiliary variables. After the reformulation phase, we observe that relaxation schemes are only needed for sums and products of two variables, appearing in (2-2) and (2-3) respectively, as well as for the univariate functions appearing in (2-4). For all of these terms, convex relaxations can be constructed using factorable programming techniques rooted in the work of McCormick [85].

Definition 2.4 (McCormick Relaxations [89]). The relaxations of a factorable function that are formed via recursive application of rules for the relaxation of univariate composition, binary multiplication, and binary addition from convex and concave relaxations of the univariate intrinsic functions, without the introduction of auxiliary variables, are said to be McCormick relaxations.
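A hypothetical sketch of the reformulation (2-2)-(2-4): given illustrative univariate factors $h_{jk}$ (the choices below are assumptions, not from the thesis), it introduces the auxiliary quantities $y_{jk}$, $y_j$, and $y$ so that only univariate terms, products of two variables, and sums remain.

```python
import math

# Factorable f(x) = sum_{j=1,2} prod_{k=1,2} h_jk(x), reformulated with
# auxiliary variables as in (2-2)-(2-4).  The h_jk are illustrative only.
h = {(1, 1): math.exp, (1, 2): math.sin,
     (2, 1): math.cos, (2, 2): lambda x: x * x}

def reformulate(x):
    y_jk = {jk: fn(x) for jk, fn in h.items()}          # (2-4) univariate
    y_j = {j: y_jk[j, 1] * y_jk[j, 2] for j in (1, 2)}  # (2-3) products
    y = y_j[1] + y_j[2]                                 # (2-2) binary sum
    return y, y_j, y_jk    # each intermediate variable is easy to relax

y, y_j, y_jk = reformulate(0.5)
print(y)  # equals exp(0.5)*sin(0.5) + cos(0.5)*0.5**2
```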

Since the sum of convex functions is convex, convex relaxations for the sum of two functions can be easily constructed as follows.

Theorem 2.6 (Relaxation of Sums [89]). Let $S \subseteq \mathbb{R}^n$ be a nonempty convex set, and let $g, g_1, g_2 : S \to \mathbb{R}$ be such that $g(x) = g_1(x) + g_2(x)$. Let $g_1^u, g_1^o : S \to \mathbb{R}$ be a convex underestimator and a concave overestimator of $g_1$ on $S$, respectively. Similarly, let $g_2^u, g_2^o : S \to \mathbb{R}$ be a convex underestimator and a concave overestimator of $g_2$ on $S$, respectively. Then $g^u, g^o : S \to \mathbb{R}$, defined as
$$g^u(x) = g_1^u(x) + g_2^u(x), \qquad g^o(x) = g_1^o(x) + g_2^o(x),$$
are a convex and a concave relaxation of $g(x)$ on $S$, respectively.

However, relaxing products of two functions is not as straightforward, as shown in the following result, which follows from the convex and concave envelopes of a bilinear function developed by McCormick [85].

Theorem 2.7 (Relaxation of Products [89]). Let $S \subseteq \mathbb{R}^n$ be a nonempty convex set, and let $g, g_1, g_2 : S \to \mathbb{R}$ be such that $g(x) = g_1(x)\, g_2(x)$. Let $g_1^u, g_1^o : S \to \mathbb{R}$ be a convex underestimator and a concave overestimator of $g_1$ on $S$, respectively. Similarly, let $g_2^u, g_2^o : S \to \mathbb{R}$ be a convex underestimator and a concave overestimator of $g_2$ on $S$, respectively. Furthermore, let $g_1^L, g_1^U, g_2^L, g_2^U \in \mathbb{R}$ be such that $g_1^L \leq g_1(x) \leq g_1^U$ and $g_2^L \leq g_2(x) \leq g_2^U$ for all $x \in S$. Consider the following intermediate functions $\alpha_1, \alpha_2, \beta_1, \beta_2, \gamma_1, \gamma_2, \delta_1, \delta_2 : S \to \mathbb{R}$:
$$\alpha_1(x) = \min\{ g_2^L g_1^u(x),\; g_2^L g_1^o(x) \}, \qquad \beta_1(x) = \min\{ g_2^U g_1^u(x),\; g_2^U g_1^o(x) \},$$
$$\gamma_1(x) = \max\{ g_2^L g_1^u(x),\; g_2^L g_1^o(x) \}, \qquad \delta_1(x) = \max\{ g_2^U g_1^u(x),\; g_2^U g_1^o(x) \},$$
$$\alpha_2(x) = \min\{ g_1^L g_2^u(x),\; g_1^L g_2^o(x) \}, \qquad \beta_2(x) = \min\{ g_1^U g_2^u(x),\; g_1^U g_2^o(x) \},$$
$$\gamma_2(x) = \max\{ g_1^U g_2^u(x),\; g_1^U g_2^o(x) \}, \qquad \delta_2(x) = \max\{ g_1^L g_2^u(x),\; g_1^L g_2^o(x) \}.$$

36 Then, α 1, α 2, β 1, and β 2 are convex on S, while γ 1, γ 2, δ 1, and δ 2 are concave on S. Moreover, g u, g o : S R, defined as g u (x) = max { α 1 (x) + α 2 (x) g L 1 g L 2, β 1 (x) + β 2 (x) g U 1 g U 2 }, g o (x) = min { γ 1 (x) + γ 2 (x) g U 1 g L 2, δ 1 (x) + δ 2 (x) g L 1 g U 2 }, are convex and concave relaxations of g on S, respectively. Al-Khayyal and Falk [4] prove that McCormick relaxation constructs the convex and concave envelopes of bilinear terms, presented as follows. Theorem 2.8 (Al-Khayyal and Falk [4]). Consider a bilinear term y i y j over the hypercube H 2 := [y l i, y u i ] [y l j, y u j ]. Then, { } convenv(y i y j ) = max yiy l j + yjy l i yiy l j, l yi u y j + yj u y i yi u yj u and { } concenv(y i y j ) = min yi u y j + yjy l i yi u yj, l yiy l j + yj u y i yiy l j u. McCormick [86] showed that a tight relaxation of a composition of functions h(g(x)) can be built using convex and concave envelopes as the underestimators and overestimators of h(y g ). Relaxation methods for multilinear functions over a hypercube have been proposed by Rikun [101] and Ryoo and Sahinidis [104]. Different relaxation schemes for the fractional functions are developed by Tawarmalani and Sahinidis [119] and Tawarmalani et al. [114, 115]. For detailed specification of recursive reformulation algorithms, we refer the interested reader to the book of Tawarmalani and Sahinidis [121]. Assuming that all variables are bounded, a univariate convex function f(x j ) where x j [x l j, x u j ], is overestimated by the line connecting the points ( x l j, f(x l j) ) and ( x u j, f(x u j ) ) while f(x j ) is underestimated by the function itself. Hence, a convex outer-approximator of any convex function can be constructed by combining these estimators. If a univariate function f(x j ) is convex and differentiable over x j [x l j, x u j ], then for any x [x l j, x u j ], a valid linear inequality can be obtained using the gradient. For a given 36

37 gradient f x j ( x) of f(x j ) at x, the gradient inequality y f( x) + f x j ( x)(x j x), (2 5) is valid for all x j [x l j, x u j ]. Therefore, we can build linear relaxations using outerapproximations of differentiable univariate functions such as exp(x), log(x), sin(x), and cos(x). 2.2 Cutting Plane Techniques for Mixed-Integer Linear Program (MILP) For MILPs, we mentioned in Section that LP relaxations are often used as convexifications. In this section, we discuss techniques to improve LP relaxations of MILPs. We consider mixed-integer linear programs of the form (MILP ) min s.t. c T x x S where I {1,..., n} and S := { } x Z I + R n I + Ax b. We first present a basic result about the convex hull of S. Theorem 2.9 (Meyer [87]). The convex hull of S, where A Q m n and b Q m, is a polyhedron whose extreme points lie in S. This result together with Theorem 1.2 implies that every MILP problem can be reformulated as a linear program, provided that A and b are rational. This is particularly interesting since LPs can be solved efficiently as we mentioned in Section While the linear program { } min c T x x conv(s) always has an optimal solution that is optimal for (MILP ), it is typically difficult to obtain a full linear description of conv(s). Nevertheless, we are interested in finding partial descriptions of conv(s). Studying the polyhedron conv(s) requires a good 37

Studying the polyhedron $\mathrm{conv}(S)$ requires a good understanding of which inequalities $(a^i)^T x \leq b_i$ are most important in the description of $\mathrm{conv}(S)$. This motivates the introduction of the following definitions.

Definition 2.5 (Valid Inequality). Let $X \subseteq \mathbb{R}^n$. The inequality $\alpha^T x \leq \delta$ is said to be valid for $X$ if it is satisfied by all points of $X$, i.e., $\alpha^T x \leq \delta$ for all $x \in X$.

Definition 2.6 (Face). If $\alpha^T x \leq \delta$ is a valid inequality for a polyhedron $Q$, then $F = Q \cap \{ x \in \mathbb{R}^n \mid \alpha^T x = \delta \}$ is said to be a face of $Q$. We also say that $\alpha^T x \leq \delta$ represents or defines the face $F$.

In order for an inequality to be helpful in the description of a polyhedron, the face it defines should be large. To measure the dimension of a polyhedron, we introduce the following definitions.

Definition 2.7 (Affine Independence). Vectors $x^1, \ldots, x^k$ in $\mathbb{R}^n$ are said to be affinely independent if the unique solution to the system $\sum_{j=1}^k \lambda_j x^j = 0$, $\sum_{j=1}^k \lambda_j = 0$ is $\lambda_j = 0$ for all $j = 1, \ldots, k$.

Definition 2.8 (Dimension). A polyhedron $Q$ has dimension $d$, which we denote by $\dim(Q) = d$, if the maximum number of affinely independent points in $Q$ is $d + 1$.

Definition 2.9 (Facet). A face $F$ of a polyhedron $Q$ is said to be a facet of $Q$ if $\dim(F) = \dim(Q) - 1$. A valid inequality $\alpha^T x \leq \delta$ that induces a facet of $Q$ is called a facet-defining inequality for $Q$, or facet for short.

We mention that among all inequalities in the description of a full-dimensional polyhedron, only those that define facets are necessary. We refer the interested reader to Nemhauser and Wolsey [91] for a detailed exposition.

Proposition 2.1. Let $Q$ be a full-dimensional polyhedron defined by $(a^i)^T x \leq b_i$ for $i \in M$. Let $M_F$ be the subset of $M$ containing the indices of the facet-defining inequalities for $Q$. Then, $Q = \{ x \in \mathbb{R}^n \mid (a^i)^T x \leq b_i, \; i \in M_F \}$.
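Affine independence (Definition 2.7) is equivalent to linear independence of the vectors $(x^j, 1)$, so it can be verified with a rank computation. The following sketch (a hypothetical helper using NumPy, not from the thesis) does exactly that.

```python
import numpy as np

def affinely_independent(points):
    """True iff the given points in R^n are affinely independent
    (Definition 2.7): append a 1 to each point and test whether the
    resulting vectors are linearly independent via the matrix rank."""
    M = np.hstack([np.asarray(points, dtype=float),
                   np.ones((len(points), 1))])
    return np.linalg.matrix_rank(M) == len(points)

# dim(Q) = d requires d+1 affinely independent points (Definition 2.8):
print(affinely_independent([[0, 0], [1, 0], [0, 1]]))  # True
print(affinely_independent([[0, 0], [1, 1], [2, 2]]))  # False (collinear)
```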

Therefore, when studying $\mathrm{conv}(S)$, it is sufficient to consider inequalities that are facet-defining. We will describe in Sections 2.2.1 and 2.2.2 techniques to construct valid and facet-defining inequalities for MILPs. We note that, in practice, the question is not only how to generate inequalities but also how to use them. In fact, the linear description of $\mathrm{conv}(S)$ can have exponentially many inequalities. It is therefore typically impractical to solve the corresponding linear programs directly. In order to overcome this difficulty, cutting plane methods are typically used.

The first cutting plane algorithm for solving MILPs was described in 1958 by Gomory [57] for the case where $I = N$. This algorithm generalized the more dedicated polyhedral approach devised by Dantzig et al. [45] for the Traveling Salesman Problem. In cutting plane algorithms, we solve a sequence of linear programs that differ from each other by the addition of one or more valid inequalities. More precisely, we first solve the LP relaxation of $(MILP)$ to global optimality. The corresponding optimal solution $x^0$ is typically fractional since the LP relaxation does not impose integrality on the variables. We obtain a tightened formulation by adding inequalities to the LP relaxation.

Definition 2.10 (Cutting Plane). An inequality $\alpha^T x \leq \delta$, where $\alpha \in \mathbb{R}^n$ and $\delta \in \mathbb{R}$, is said to be a cutting plane for $(MILP)$, or cut for short, if it is valid for $S$ and there exists a solution $x^0$ of the LP relaxation such that $\alpha^T x^0 > \delta$.

It is clear that, for a cut $\alpha^T x \leq \delta$ to improve the current LP relaxation of an MILP, it must cut $x^0$ off, i.e., $\alpha^T x^0 > \delta$. Given a fractional solution $x^0$ of the LP relaxation, the problem of finding such a violated cut is known as the separation problem. It is typically difficult to solve separation problems exactly, since separation was shown to be as hard as optimization; see Grötschel et al. [59]. Note that the proof relies on the ellipsoid algorithm described earlier in Section 1.2.1. As a result, heuristics are often used for separation. If a cut is found, it is added and the process is iterated. Otherwise, the process is terminated. The basic structure of cutting plane algorithms is described in Figure 2-1.

For detailed textbook descriptions, we refer the interested reader to Nemhauser and Wolsey [91] and Schrijver [108].

Input: Problem (MILP)
Output: An optimal solution x*

Initialization: i ← 0, Q^0 ← LP relaxation of (MILP);
Obtain x^0 by solving the LP min{c^T x | x ∈ Q^0};
while x^i is fractional do
    Separation;
    if there exists a cutting plane α_i^T x ≤ δ_i separating x^i from S then
        Q^{i+1} ← Q^i ∩ {x | α_i^T x ≤ δ_i};
        i ← i + 1;
    else
        terminate
    end
    Obtain x^i by solving the linear program min{c^T x | x ∈ Q^i};
end
x* ← x^i;

Figure 2-1. Cutting plane algorithm

Although the algorithm of Figure 2-1 can terminate without finding an integer optimal solution for $(MILP)$, the formulation $Q^i$ obtained after the addition of cuts provides a strengthened formulation for which branch-and-bound is likely to be more efficient. In practice, when designing a cutting plane algorithm, there are many tradeoffs to consider between the running time of a separation procedure and the quality of the cutting planes it produces.

2.2.1 Disjunctive Programming

In this section, we give an overview of disjunctive programming techniques and of how they can be used to generate strong cuts for MILPs. Disjunctive programming can be succinctly described as the study of optimization problems defined over unions of sets, typically polyhedra. Even when the sets are convex, their union typically is not. One of the main focuses of disjunctive programming is to study the convex hull of such unions.

The foundations of disjunctive programming were laid by Balas in a technical report in 1974. This report was published 24 years later in Balas [17]. Disjunctive programming is directly applicable to MILPs since enumerating all values the integer variables can take transforms these MILPs into disjunctive programs. As a result, disjunctive programming techniques have been used to derive strong relaxations and cutting planes for various problems; see Balas [15, 16]. In particular, Balas et al. [20] implemented disjunctive programming techniques for mixed 0-1 programs in a branch-and-cut framework. They specialize generic disjunctive programming techniques to show how to generate lift-and-project cuts through the solution of a cut generation linear program (CGLP), and develop strengthened disjunctive cuts. Stubbs and Mehrotra [113] generalized the disjunctive programming techniques of Balas et al. [20] to 0-1 mixed convex programming problems within a branch-and-cut framework. Ceria and Soares [31] also provided algebraic representations and solution procedures for disjunctive convex programming. Next, we describe some important results in disjunctive programming. We limit our presentation to unions of polyhedra. We first review the basic concept of projection, which will be used to relate convex hulls of sets in the space of their original variables to their higher-dimensional representations obtained by disjunctive programming. We refer to Balas [18] and Cornuéjols [37] for more detailed discussions.

Definition 2.11 (Projection). Given a polyhedron Q ⊆ R^n × R^r, the projection of Q onto the subspace of R^n defined by the x variables is defined as

proj_x(Q) = { x ∈ R^n : (x, y) ∈ Q for some y ∈ R^r }.

The projection of a polyhedron Q can be obtained using Fourier-Motzkin elimination; see Fourier [54]. This method recursively eliminates the variables y_i one at a time, as presented in the following proposition.

Proposition 2.2 (Fourier-Motzkin Elimination). Given a polyhedron

Q = { (x, y) ∈ R^n × R : Σ_{j=1}^n a_ij x_j + b_i y ≥ d_i, i = 1, ..., m },

the projection of Q onto x satisfies

proj_x(Q) = { x ∈ R^n : Σ_{j=1}^n a_ij x_j ≥ d_i for i ∈ M^0, and (d_k − Σ_{j=1}^n a_kj x_j)/b_k ≥ (d_l − Σ_{j=1}^n a_lj x_j)/b_l for k ∈ M^−, l ∈ M^+ },

where M = {1, ..., m}, M^+ = {i ∈ M : b_i > 0}, M^− = {i ∈ M : b_i < 0}, and M^0 = {i ∈ M : b_i = 0}.

The projection can also be obtained using the concept of projection cone, as described below.

Proposition 2.3 (Cornuéjols [37]). Let Q = { (x, y) ∈ R^n × R^r : Ax + By ≥ d }, where the system has m rows. Then

proj_x(Q) = { x ∈ R^n : (u^T A) x ≥ u^T d for all u ∈ E },

where E is the set of extreme rays of the projection cone C := { u ∈ R^m : u^T B = 0, u ≥ 0 }.
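As a concrete illustration (not from the dissertation), the following sketch performs one elimination step of Proposition 2.2 on a system Σ_j a_ij x_j + b_i y ≥ d_i, pairing every upper bound on y (rows in M^−) with every lower bound (rows in M^+).

def fourier_motzkin_step(A, b, d, eps=1e-12):
    """Eliminate y from {(x, y) : sum_j A[i][j]*x[j] + b[i]*y >= d[i], all i},
    returning (A_new, d_new) with proj_x(Q) = {x : A_new x >= d_new}."""
    M_plus = [i for i, bi in enumerate(b) if bi > eps]     # lower bounds on y
    M_minus = [i for i, bi in enumerate(b) if bi < -eps]   # upper bounds on y
    M_zero = [i for i, bi in enumerate(b) if abs(bi) <= eps]

    A_new = [list(A[i]) for i in M_zero]                   # rows without y
    d_new = [d[i] for i in M_zero]
    # Each pair k in M^-, l in M^+ yields the linearized comparison
    # (d_k - a_k x)/b_k >= (d_l - a_l x)/b_l of Proposition 2.2.
    for k in M_minus:
        for l in M_plus:
            A_new.append([A[k][j] / -b[k] + A[l][j] / b[l]
                          for j in range(len(A[k]))])
            d_new.append(d[k] / -b[k] + d[l] / b[l])
    return A_new, d_new

For instance, eliminating y from {y ≥ x, y ≤ 1}, written as {−x + y ≥ 0, −y ≥ −1}, returns the single inequality −x ≥ −1, i.e., x ≤ 1, as expected.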

Definition 2.12 (Disjunctive sets). Given polyhedra Q_i = { x ∈ R^n : A^i x ≥ b^i } for i ∈ M, we define the disjunctive set ∪_{i∈M} Q_i as

Q = { x ∈ R^n : ∨_{i∈M} (A^i x ≥ b^i) }. (2-6)

Expression (2-6) is known as the disjunctive normal form of the disjunctive program. Using operations described in Balas [17], the disjunctive set Q can also be expressed as

Q = { x ∈ R^n : Ax ≥ b, ∨_{h∈M_j} (d^h x ≥ d_0^h), j = 1, ..., t }, (2-7)

which is called the conjunctive normal form.

Balas [17] describes how to obtain the convex hull of a disjunctive set. We present this result in the following theorem.

Theorem 2.10 (Balas [17]). Given polyhedra Q_i = { x ∈ R^n : A^i x ≥ b^i }, i ∈ M, define

Q' := { (x, (y^i, y_0^i)_{i∈M}) : x − Σ_{i∈M} y^i = 0, A^i y^i − b^i y_0^i ≥ 0 and y_0^i ≥ 0 for i ∈ M, Σ_{i∈M} y_0^i = 1 },

where (y^i, y_0^i) ∈ R^{n+1} for i ∈ M. Then Q_M := cl conv(∪_{i∈M} Q_i) = proj_x(Q'). Further,
1. if x̄ is an extreme point of Q_M, then (x̄, (ȳ^i, ȳ_0^i)_{i∈M}) is an extreme point of Q', where ȳ^k = x̄ and ȳ_0^k = 1 for some k ∈ M, and (ȳ^i, ȳ_0^i) = (0, 0) for all i ∈ M \ {k};
2. if (x̄, (ȳ^i, ȳ_0^i)_{i∈M}) is an extreme point of Q', then ȳ^k = x̄ and ȳ_0^k = 1 for some k ∈ M, and x̄ is an extreme point of Q_k.

Theorem 2.10 gives a description of the convex hull of ∪_{i∈M} Q_i in a higher-dimensional space. In order to obtain the convex hull Q_M in the original space of variables of the Q_i's, we must project Q' onto the x space. Theorem 2.11 describes how this projection is obtained. This result follows from Proposition 2.3.

Theorem 2.11 (Balas [17]). proj_x(Q') = { x ∈ R^n : αx ≥ β for all (α, β) ∈ W^0 }, where

W^0 = { (α, β) ∈ R^{n+1} : α = u^i A^i and β ≤ u^i b^i for some u^i ≥ 0, i ∈ M }.

The higher-dimensional representation also allows the derivation of facets of Q_M, as described in the following theorem.

Theorem 2.12 (Balas [17]). Assume that Q_M is full-dimensional. The inequality αx ≥ β defines a facet of Q_M if and only if (α, β) is an extreme ray of the cone W^0.
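As a small numerical sanity check (not from the dissertation; SciPy is assumed available), the next sketch builds the extended formulation Q' of Theorem 2.10 for the union of the intervals Q_1 = [0, 1] and Q_2 = [2, 3] on the real line and optimizes x over it in both directions; the returned values 0 and 3 confirm that proj_x(Q') is the convex hull [0, 3].

from scipy.optimize import linprog

# Q_1 = {x : x >= 0, -x >= -1} and Q_2 = {x : x >= 2, -x >= -3}.
# Variables of Q': (x, y1, y2, y10, y20); the constraints
# A^i y^i - b^i y^i_0 >= 0 are written in <= form for linprog.
A_eq = [[1, -1, -1, 0, 0],    # x - y1 - y2 = 0
        [0,  0,  0, 1, 1]]    # y10 + y20 = 1
b_eq = [0, 1]
A_ub = [[0, -1,  0,  0,  0],  # -y1 <= 0            (y1 >= 0*y10)
        [0,  1,  0, -1,  0],  #  y1 - y10 <= 0      (-y1 >= -1*y10)
        [0,  0, -1,  0,  2],  # -y2 + 2*y20 <= 0    (y2 >= 2*y20)
        [0,  0,  1,  0, -3]]  #  y2 - 3*y20 <= 0    (-y2 >= -3*y20)
b_ub = [0, 0, 0, 0]
bounds = [(None, None)] * 3 + [(0, 1), (0, 1)]

for c in ([1, 0, 0, 0, 0], [-1, 0, 0, 0, 0]):     # min x, then max x
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    print(res.x[0])                               # prints 0.0, then 3.0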

Given a point x̄ ∉ Q_M, it is often necessary to derive a disjunctive cut αx ≥ β valid for Q_M that cuts off x̄. This problem is equivalent to choosing coefficients (α, β, u) in W^0 that minimize αx̄ − β. This gives rise to Problem (2-8), commonly known as the cut generating LP. Note that in (2-8) we added the normalization constraint Σ_{i∈M} e^T u^i = 1 to make the problem bounded:

min  αx̄ − β
s.t. α = u^i A^i,     i ∈ M,
     β ≤ u^i b^i,     i ∈ M,     (2-8)
     u^i ≥ 0,         i ∈ M,
     Σ_{i∈M} e^T u^i = 1.

A disjunctive set is called facial if every inequality in (2-7) defines a face of the polyhedron defined by the constraints Ax ≥ b. An interesting feature of facial disjunctive programs is that they can be sequentially convexified, as described next.

Theorem 2.13 (Balas [17]). Let

D := { x ∈ R^n : Ax ≥ b, ∨_{h∈M_j} (d^h x ≥ d_0^h), j = 1, ..., t },

where |M_j| ≥ 1 for j = 1, ..., t and D is facial. Define Q^0 := { x ∈ R^n : Ax ≥ b } and, for j = 1, ..., t,

Q^j := conv( Q^{j−1} ∩ { x : ∨_{h∈M_j} (d^h x ≥ d_0^h) } ). (2-9)

Then Q^t = cl conv(D).

Theorem 2.13 shows that, in some cases, it is sufficient to consider the disjunctions sequentially rather than simultaneously to obtain the convex hulls.

To illustrate that disjunctive programming techniques can be helpful in creating good convexifications in integer programming, we describe their application to 0-1 integer programming. A thorough description, including relations to Lovász and Schrijver [82] and Sherali and Adams [110], is given in Balas et al. [20]. This variant of disjunctive programming is commonly referred to as lift-and-project; see Balas et al. [20]. For each variable x_j, j = 1, ..., n, the current formulation is lifted into a higher-dimensional space where it is tightened. Then, this strengthened formulation is projected back onto the original space, thus defining an improved formulation for S. After the last variable is considered, the convex hull is obtained. More precisely, consider the problem

(BMILP)  min  c^T x
         s.t. Ax ≥ b,
              x_j ∈ {0, 1},  j = 1, ..., n,
              x_j ∈ R_+,     j = n+1, ..., n+r,

where the integer variables can only take the values 0 or 1. We define Q̄ := { x ∈ R^{n+r}_+ : Āx ≥ b̄ } and denote the set of feasible solutions of (BMILP) by S := { x ∈ {0,1}^n × R^r_+ : Āx ≥ b̄ }. We assume that Āx ≥ b̄ is obtained by adding the constraints −x_j ≥ −1, i.e., x_j ≤ 1, for j = 1, ..., n to Ax ≥ b, and that Āx ≥ b̄ does not include the constraints x_j ≥ 0 for j = 1, ..., n. Clearly, the set S can be reformulated as

S := { x ∈ R^{n+r}_+ : Āx ≥ b̄, (x_j ≤ 0) ∨ (x_j ≥ 1), j = 1, ..., n },

which shows its relation to disjunctive programming. Since this problem is facial, its convex hull can be obtained using Theorem 2.13. In particular, in this case, the j-th step (2-9) can be obtained as

Q^j = proj_x { (x, x^0, x^1, y_0, y_1) ∈ R^{3(n+r)}_+ × R^2_+ :
        Ā^{j−1} x^0 ≥ b̄^{j−1} y_0,  x^0_j ≤ 0,
        Ā^{j−1} x^1 ≥ b̄^{j−1} y_1,  x^1_j ≥ y_1,
        x^0 + x^1 = x,  y_0 + y_1 = 1 },

where Q^{j−1} = { x : Ā^{j−1} x ≥ b̄^{j−1} }. Denote the j-th unit vector by e_j. Using the projection cone approach described in Proposition 2.3, we obtain that Q^j is defined by the inequalities αx ≥ β where (α, β, u, u_0, v, v_0) is a feasible solution to

α − u Ā^{j−1} + u_0 e_j ≥ 0,
α − v Ā^{j−1} − v_0 e_j ≥ 0,
β − u b̄^{j−1} ≤ 0,                    (2-10)
β − v b̄^{j−1} − v_0 ≤ 0,
u, u_0, v, v_0 ≥ 0,

which is an expression of Theorem 2.11. The inequality αx ≥ β is called a lift-and-project inequality. Note that lift-and-project inequalities are a special type of split inequality [36], derived from the split disjunction x_j ≤ 0 or x_j ≥ 1. For details, see Balas et al. [20, 21].
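To make (2-10) concrete, the following sketch (not part of the dissertation; SciPy and NumPy are assumed available) builds and solves the cut generating LP for a tiny instance with one binary variable x1 and one continuous variable x2, namely Q̄ = { x ∈ R^2_+ : 2x1 − x2 ≥ 0, −2x1 − x2 ≥ −2, −x1 ≥ −1 }, the fractional vertex x̄ = (0.5, 1), and the split disjunction x1 ≤ 0 ∨ x1 ≥ 1. A normalization bounding the sum of the multipliers, as in (2-8), keeps the problem bounded.

import numpy as np
from scipy.optimize import linprog

A = np.array([[2.0, -1.0],     #  2*x1 - x2 >= 0   (x2 <= 2*x1)
              [-2.0, -1.0],    # -2*x1 - x2 >= -2  (x2 <= 2 - 2*x1)
              [-1.0, 0.0]])    # -x1 >= -1         (x1 <= 1)
b = np.array([0.0, -2.0, -1.0])
xbar = np.array([0.5, 1.0])    # fractional vertex of the LP relaxation
m, n = A.shape
j = 0                          # disjunction on x1: x1 <= 0 or x1 >= 1

# Variable vector z = (alpha, beta, u, u0, v, v0) of length N.
ia = list(range(n)); ib = n
iu = list(range(n + 1, n + 1 + m)); iu0 = n + 1 + m
iv = list(range(n + 2 + m, n + 2 + 2 * m)); iv0 = n + 2 + 2 * m
N = n + 3 + 2 * m

rows, rhs = [], []
for k in range(n):
    r = np.zeros(N)            # alpha_k - (u A)_k + u0*[k = j] >= 0, as <=
    r[ia[k]] = -1.0; r[iu] = A[:, k]; r[iu0] = -float(k == j)
    rows.append(r); rhs.append(0.0)
    r = np.zeros(N)            # alpha_k - (v A)_k - v0*[k = j] >= 0, as <=
    r[ia[k]] = -1.0; r[iv] = A[:, k]; r[iv0] = float(k == j)
    rows.append(r); rhs.append(0.0)
r = np.zeros(N); r[ib] = 1.0; r[iu] = -b                  # beta - u b <= 0
rows.append(r); rhs.append(0.0)
r = np.zeros(N); r[ib] = 1.0; r[iv] = -b; r[iv0] = -1.0   # beta - v b - v0 <= 0
rows.append(r); rhs.append(0.0)

norm = np.zeros(N)             # normalization: sum(u) + u0 + sum(v) + v0 = 1
norm[iu] = 1.0; norm[iv] = 1.0; norm[iu0] = 1.0; norm[iv0] = 1.0

c = np.zeros(N); c[ia] = xbar; c[ib] = -1.0               # min alpha.xbar - beta
bounds = [(None, None)] * (n + 1) + [(0, None)] * (2 * m + 2)
res = linprog(c, A_ub=np.vstack(rows), b_ub=rhs,
              A_eq=norm.reshape(1, -1), b_eq=[1.0], bounds=bounds)
alpha, beta = res.x[ia], res.x[ib]
print(res.fun, alpha, beta)    # res.fun < 0: the cut alpha.x >= beta cuts off xbar

Since x̄ lies outside the convex hull of the two disjunctive pieces (here, the segment between (0, 0) and (1, 0)), the optimal value is negative and the returned (α, β) is a lift-and-project cut separating x̄; for example, a positive multiple of −x2 ≥ 0, i.e., x2 ≤ 0, is such a cut.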

Lifting

In this section, we describe a technique known as lifting and review how it has been used to generate strong valid inequalities for MILPs. Deriving facet-defining inequalities for the convex hull of feasible solutions of an MILP with many variables is typically difficult. However, when a subset of the variables is fixed to some values, such as their lower or upper bounds, it might be easier to derive a strong valid inequality. We refer to a nontrivial inequality valid for a restricted set as a seed inequality. Lifting is the process of constructing progressively, from a seed inequality valid for a lower-dimensional set, an inequality valid for a higher-dimensional set. Gomory [58] first introduced the concept of lifting in the context of the group problem. The technique was refined by Padberg [95] and Wolsey [130]; see also Balas [14], Hammer et al. [63], Padberg [96], Wolsey [131], Zemel [138], and Balas and Zemel [23]. Lifting is generally performed sequentially. Crowder et al. [42] and Gu et al. [60] successfully used sequential lifting in a branch-and-cut framework for solving 0-1 integer programs with cover inequalities. For 0-1 integer programs, Wolsey [132] proved that, if the lifting function is superadditive, the lifting coefficients are independent of the lifting order; see the discussion of sequence-independent lifting below. Gu et al. [62] applied sequence-independent lifting to mixed-integer programs. Marchand and Wolsey [83] also used superadditive lifting for 0-1 knapsack problems with a single continuous variable, and Richard et al. [98] developed a general lifting theory for continuous variables. Recently, lifting has also been used to obtain inequalities for special-purpose global optimization problems; see de Farias et al. [46], Vandenbussche and Nemhauser [128], and Atamtürk and Narayanan [10]. A general lifting theory for nonlinear programming is described in Richard and Tawarmalani [100]. However, the application of lifting techniques to MINLPs remains limited.

Sequential lifting

Although lifting can be used for general MILPs and for nonlinear programs, we describe it only for the 0-1 knapsack polytope

K = { x ∈ {0,1}^n : Σ_{j∈N} a_j x_j ≤ d },

where N = {1, ..., n}, since the ideas extend to more general settings. Let N′ ⊆ N and v ∈ {0,1}^n. To represent the restricted set where some of the variables x_j are fixed to 0 or

1, we define K(N′, v) = { x ∈ K : x_j = v_j for all j ∈ N′ }. By selecting N′ to be a larger and larger subset of N, we can change conv(K(N′, v)) into a polyhedron whose dimension is as small as we want. We note that one might think of fixing the variables to some values strictly between their lower and upper bounds. In this case, however, Atamtürk [8] and Richard et al. [98] show that it is typically not possible to perform lifting. Often, it is easy to find a facet-defining inequality for low-dimensional polyhedra. Assume therefore that

Σ_{j∈N\N′} α_j x_j ≤ δ (2-11)

is a valid inequality for K(N′, v). Assume without loss of generality that N′ = {1, ..., p} where p ≤ n. Taking (2-11) as the seed inequality, we convert (2-11) into an inequality globally valid for conv(K) by lifting the variables x_j that were fixed to v_j for j ∈ N′. We can perform lifting one variable x_j at a time in some predefined order, such as j = 1, ..., p. This approach is known as sequential lifting and is the most commonly used form of lifting. We mention, however, that it can sometimes be beneficial to lift several variables x_j, j ∈ N′, at the same time; see Zemel [138] and Gu et al. [60]. This variant of lifting is called simultaneous lifting. Assume that the variables x_1, ..., x_{i−1} have already been lifted and that

Σ_{j=1}^{i−1} α_j (x_j − v_j) + Σ_{j∈N\N′} α_j x_j ≤ δ (2-12)

is valid for K(N′ \ {1, ..., i−1}, v). Lifting the variable x_i, i ∈ N′, in inequality (2-12) amounts to deriving a coefficient α_i for which the lifted inequality

α_i (x_i − v_i) + Σ_{j=1}^{i−1} α_j (x_j − v_j) + Σ_{j∈N\N′} α_j x_j ≤ δ (2-13)

is valid for K(N′ \ {1, ..., i}, v). To find α_i, we define the lifting function

Φ_i(a) = δ − max  Σ_{j=1}^{i−1} α_j (x_j − v_j) + Σ_{j∈N\N′} α_j x_j
         s.t.  Σ_{j=1}^{i−1} a_j (x_j − v_j) + Σ_{j∈N\N′} a_j x_j ≤ d − a,     (2-14)
               x_j ∈ {0,1},  j ∈ {1, ..., i−1} ∪ (N \ N′),

associated with inequality (2-12).

Theorem 2.14 (Wolsey [130]). Assume that the optimization problem defining Φ_i(a_i) is feasible. Inequality (2-13) is valid for K(N′ \ {1, ..., i}, v) if

α_i ≤ Φ_i(a_i)  if v_i = 0,     α_i ≥ −Φ_i(−a_i)  if v_i = 1.

Moreover, if
1. (2-12) defines a face of conv(K(N′ \ {1, ..., i−1}, v)) of dimension k, and
2. α_i = Φ_i(a_i) when v_i = 0, or α_i = −Φ_i(−a_i) when v_i = 1,
then (2-13) defines a face of conv(K(N′ \ {1, ..., i}, v)) of dimension at least k + 1.

Theorem 2.14 describes how to sequentially lift binary variables inside of 0-1 knapsack constraints. Lifting for general integer variables was used in Ceria et al. [30]. Lifting for continuous variables was first used by Marchand and Wolsey [83], where the authors lift a single continuous variable without upper bound inside a 0-1 mixed-integer knapsack set. Richard et al. [98] proposed a general theory for the lifting of multiple continuous variables with bounds. We observe in Theorem 2.14 that a different lifting function Φ_i(a) must be computed to determine the lifting coefficient of each lifted variable. In general, computing the lifting function (2-14), even at a single point, can be computationally time-consuming. Some of these difficulties disappear when the lifting function is well-structured.
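To make (2-14) concrete, the following brute-force sketch (not from the dissertation) computes the lifting function of the minimal cover inequality x1 + x2 + x3 ≤ 2 for the knapsack { x ∈ {0,1}^3 : 5x1 + 6x2 + 7x3 ≤ 13 }, where every remaining variable is fixed at its lower bound, so that v_j = 0 and Φ_i takes the simple form below.

from itertools import product

def lifting_function(a, d, delta, alpha, w):
    """Phi(w) = delta - max{ sum_j alpha[j]*x[j] : sum_j a[j]*x[j] <= d - w },
    i.e., the lifting function (2-14) when all fixed variables have v_j = 0."""
    best = None
    for x in product((0, 1), repeat=len(a)):
        if sum(aj * xj for aj, xj in zip(a, x)) <= d - w:
            val = sum(cj * xj for cj, xj in zip(alpha, x))
            best = val if best is None else max(best, val)
    return None if best is None else delta - best

# Seed inequality x1 + x2 + x3 <= 2 from the minimal cover {1, 2, 3}:
a, alpha, d, delta = (5, 6, 7), (1, 1, 1), 13, 2
print([lifting_function(a, d, delta, alpha, w) for w in range(14)])
# -> [0, 0, 0, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2]; by Theorem 2.14, a variable
#    with knapsack weight w = 9 fixed at 0 can receive a coefficient up to 2.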

Sequence-independent lifting

To improve the computational efficiency of sequential lifting, Wolsey [132] introduced the concept of sequence-independent lifting. This method reduces the computational burden associated with lifting by identifying conditions under which the lifting function does not change during the various stages of lifting.

Definition 2.13 (Superadditive). Let Φ : W → R with W ⊆ R. The function Φ is superadditive over W if Φ(w_1) + Φ(w_2) ≤ Φ(w_1 + w_2) for all w_1, w_2, w_1 + w_2 ∈ W.

For 0-1 integer programs, Wolsey [132] proved that, if a lifting function is superadditive, then the lifting coefficients are independent of the lifting order. Gu et al. [62] generalized the concept of sequence-independent lifting to 0-1 mixed-integer programs. Atamtürk [8] generalized these results to general mixed-integer programs.

Theorem 2.15 (Gu et al. [62]). If the lifting function Φ_i(w) is superadditive over R, then Φ_i(w) = Φ_{i+1}(w).

A superadditive lifting function is useful for deriving lifted inequalities efficiently. Unfortunately, lifting functions are not always superadditive. For these situations, Gu et al. [62] proposed to use superadditive approximations of the lifting function. Further, they identify validity, dominance, and maximality as common properties of good superadditive approximations. Sequence-independent lifting has been used to derive strong valid inequalities for various problems; see Marchand and Wolsey [83], Gu et al. [61], Atamtürk and Rajan [11], and Atamtürk [7]. To lift multiple bounded continuous variables, Richard et al. [99] introduced the concept of superlinear lifting, which is a natural counterpart to superadditive lifting for integer variables. We refer the interested reader to Richard et al. [99].
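A quick numerical check (again not from the dissertation) shows why approximations are needed: the cover lifting function tabulated in the previous sketch fails Definition 2.13.

def is_superadditive(phi, points):
    """Check phi(w1) + phi(w2) <= phi(w1 + w2) for all pairs staying in points."""
    return all(phi[w1] + phi[w2] <= phi[w1 + w2]
               for w1 in points for w2 in points if w1 + w2 in points)

phi = dict(enumerate([0, 0, 0, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2]))
print(is_superadditive(phi, set(phi)))   # False: phi(3) + phi(3) = 2 > phi(6) = 1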

CHAPTER 3
MOTIVATION AND RESEARCH STATEMENTS

3.1 Motivation

When comparing state-of-the-art solvers, it can be readily observed that solving MINLPs to global optimality requires more computational time than solving MILPs. This is because traditional convexification methods do not always construct strong convex relaxations. As discussed in Chapter 2, currently prevalent convexification techniques derive convex relaxations of nonconvex MINLP problems by relaxing inequalities of the form g(x) ≥ r with ḡ(x) ≥ r, where ḡ(x) is a concave overestimator of the function g(x). Tawarmalani and Sahinidis [121] discuss how tight overestimators for various kinds of functions can be constructed to produce such relaxations. However, the derived relaxation can be weak because these methods do not use right-hand-side information during the construction of the convex relaxations. As an illustrative example, consider the simple set S defined as

S = { (x, y, z) ∈ R^3_+ : xy + z ≥ r },

where r > 0. It can easily be seen that S is not a convex set since both (√r, √r, 0) and (0, 0, r) belong to S while their convex combination with a weight of 1/2 on each point does not. The feasible region of S for r = 2 is represented in Figure 3-1 (a), where it can be observed to be nonconvex. First, we consider the set S where there are no upper bounds on the variables x, y, and z. In this case, we can verify that the concave envelope of g(x, y, z) = xy + z is infinite whenever both x and y take nonzero values. As a result, the convex relaxation of S obtained by replacing g(x, y, z) ≥ r with concenv(g(x, y, z)) ≥ r is given by

R(S) = { (x, y, z) ∈ R^3_+ : x > 0, y > 0 } ∪ { (x, y, z) ∈ R^3_+ : z ≥ r, xy = 0 }.

This set is not closed and is therefore unlikely to be used as a relaxation. Its closure can be observed to be R^3_+. Therefore, the above relaxation scheme corresponds, in essence, to dropping the original constraint in the relaxed problem. We observe in Figure 3-1 (b) that this is clearly not the best convex relaxation. In fact, we will establish in Chapter 4 that the convex hull of S can be expressed as

conv(S) = { (x, y, z) ∈ R^3_+ : √(xy/r) + z/r ≥ 1 }.

Note that, in the above expression, the right-hand side r plays a different role in each term. It therefore cannot be naturally obtained as ḡ(x, y, z) ≥ r.

Figure 3-1. Geometric illustration of S, conv(S), S_1 and S_2: (a) S; (b) conv(S); (c) S_1 and S_2.

Next, consider the set S_B in which the variables have upper bounds, and assume r = 2 for simplicity, i.e.,

S_B = { (x, y, z) ∈ R^3_+ : xy + z ≥ 2, x ≤ 4, y ≤ 4, z ≤ 3 }.
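The preceding claims about S are easy to verify numerically; the following sketch (not part of the dissertation) checks, for r = 2, that the midpoint used in the nonconvexity argument violates xy + z ≥ r, while the convex hull inequality is tight at both generating points and at their midpoint.

from math import sqrt, isclose

r = 2.0
p1, p2 = (sqrt(r), sqrt(r), 0.0), (0.0, 0.0, r)
mid = tuple((a + b) / 2 for a, b in zip(p1, p2))

g = lambda x, y, z: x * y + z                 # original constraint: g >= r
h = lambda x, y, z: sqrt(x * y / r) + z / r   # hull inequality: h >= 1

print(g(*mid) < r)                                        # True: S is not convex
print(all(isclose(h(*p), 1.0) for p in (p1, p2, mid)))    # True: tight points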
