Modeling with Integer Programming


Laura Galli
December 18, 2014

We can use 0-1 (binary) variables for a variety of purposes, such as:
- modeling yes/no decisions
- enforcing disjunctions
- enforcing logical conditions
- modeling fixed costs
- modeling piecewise linear functions

1 Modeling Yes/No Decisions

We can use binary variables to model yes/no decisions.

Example: 0-1 Knapsack
We are given a set of items j = 1,...,n with associated values p_j and weights w_j. We wish to select a subset of maximum value such that the total weight does not exceed a constant b. We associate a 0-1 variable x_j with each item, indicating whether it is selected or not:

\max \sum_{j=1}^{n} p_j x_j
\text{s.t. } \sum_{j=1}^{n} w_j x_j \le b,
x \in \{0,1\}^n.

Variants are: Subset-Sum, KP-Min, Equality-KP, Multiple-Choice-KP, Bounded-KP, Two-Constrained-KP, Multiple-KP.
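As an illustration, here is a minimal sketch of the 0-1 knapsack model above using the open-source PuLP modeling library (this assumes `pulp` and its bundled CBC solver are installed; the item data are made up for illustration).

```python
from pulp import LpProblem, LpVariable, LpMaximize, LpBinary, lpSum, value

# Made-up instance: values p_j, weights w_j, capacity b
p = [10, 13, 4, 8, 7]
w = [5, 8, 2, 6, 4]
b = 12
n = len(p)

prob = LpProblem("knapsack_0_1", LpMaximize)
x = [LpVariable(f"x_{j}", cat=LpBinary) for j in range(n)]

prob += lpSum(p[j] * x[j] for j in range(n))          # objective: total value
prob += lpSum(w[j] * x[j] for j in range(n)) <= b     # knapsack constraint

prob.solve()
print("selected items:", [j for j in range(n) if x[j].value() > 0.5])
print("total value:", value(prob.objective))
```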

Example: (Linear) Assignment
We are given n people and m jobs. Each job must be done by exactly one person, and each person can do at most one job. The cost of person j = 1,...,n doing job i = 1,...,m is c_ij. We wish to find a minimum cost assignment. We associate a 0-1 variable x_ij with each possible assignment (job i to person j), indicating whether it is selected or not:

\min \sum_{i=1}^{m} \sum_{j=1}^{n} c_{ij} x_{ij}
\text{s.t. } \sum_{j=1}^{n} x_{ij} = 1 \quad (i = 1,\dots,m),
\sum_{i=1}^{m} x_{ij} \le 1 \quad (j = 1,\dots,n),
x \in \{0,1\}^{mn}.

Note that we can always assume m = n, w.l.o.g.:
- If m > n the problem is infeasible (pigeonhole principle).
- If n > m we can set m = n by adding n - m dummy jobs with costs c_ij = 0 for i = m+1,...,n and j = 1,...,n.

So the model becomes:

\min \sum_{i=1}^{n} \sum_{j=1}^{n} c_{ij} x_{ij}
\text{s.t. } \sum_{j=1}^{n} x_{ij} = 1 \quad (i = 1,\dots,n),
\sum_{i=1}^{n} x_{ij} = 1 \quad (j = 1,\dots,n),
x \in \{0,1\}^{n^2}.

Thus, assignment problems deal with the question of how to assign n items to n other items. There are different ways in mathematics to describe an assignment. For example, we can view an assignment as a bijective mapping ψ between two finite sets U and V of n elements. This way we get a representation of an assignment by a permutation ψ, which we can write simply as ψ = (ψ(1), ψ(2), ..., ψ(n)).
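A sketch of the balanced (m = n) assignment model in the same PuLP style, with a made-up cost matrix. Since the constraint matrix is TU, the LP relaxation alone would already return an integral solution, but we declare the variables binary to match the model above.

```python
from pulp import LpProblem, LpVariable, LpMinimize, LpBinary, lpSum

c = [[4, 2, 8],      # made-up costs c[i][j]: job i done by person j
     [4, 3, 7],
     [3, 1, 6]]
n = len(c)

prob = LpProblem("linear_assignment", LpMinimize)
x = {(i, j): LpVariable(f"x_{i}_{j}", cat=LpBinary)
     for i in range(n) for j in range(n)}

prob += lpSum(c[i][j] * x[i, j] for i in range(n) for j in range(n))
for i in range(n):                                   # each job to exactly one person
    prob += lpSum(x[i, j] for j in range(n)) == 1
for j in range(n):                                   # each person does exactly one job
    prob += lpSum(x[i, j] for i in range(n)) == 1

prob.solve()
print({i: next(j for j in range(n) if x[i, j].value() > 0.5) for i in range(n)})
```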

Every permutation ψ corresponds in a unique way to an n × n permutation matrix X_ψ with x_ij = 1 if j = ψ(i), and x_ij = 0 otherwise. So there is a one-to-one correspondence between assignments and permutations of {1,...,n}. Note that this is an easy problem: indeed, the constraint matrix is TU.

Example: Generalized Assignment
An NP-hard version is the so-called Generalized Assignment Problem, where each person j = 1,...,n has a capacity b_j (e.g., max working hours), and each job i = 1,...,m assigned to person j uses up w_ij of the person's capacity b_j. Each job must be assigned to exactly one person, and each person cannot exceed his/her own capacity:

\min \sum_{i=1}^{m} \sum_{j=1}^{n} c_{ij} x_{ij}
\text{s.t. } \sum_{j=1}^{n} x_{ij} = 1 \quad (i = 1,\dots,m),
\sum_{i=1}^{m} w_{ij} x_{ij} \le b_j \quad (j = 1,\dots,n),
x \in \{0,1\}^{mn}.

2 Selecting from a Set, Fixed Costs, and Dependent Decisions

We can use binary variables to model conditions such as "at most one", "exactly one", "at least one" when selecting elements from a set M. Let M = {1,...,m} be a finite set and let {M_j}, j ∈ N = {1,...,n}, be a collection of subsets of M.
- We say F ⊆ N covers M if ∪_{j∈F} M_j = M (i.e., at least one).
- We say F ⊆ N is a packing of M if M_j ∩ M_k = ∅ for all j, k ∈ F, j ≠ k (i.e., at most one).
- If F is both a packing and a cover, we say F is a partition of M (i.e., exactly one).

These problems can easily be represented as IPs. We define an m × n incidence matrix A of the family {M_j}:
- each row of A represents an item of the set M;
- each column of A represents a subset M_j of the items.

We associate a 0-1 variable with each subset, indicating whether the subset is selected or not. This is a 0-1 IP model for the set covering problem:

\min c^T x
\text{s.t. } Ax \ge \mathbf{1},
x \in \{0,1\}^n,

where c_j, j = 1,...,n, is the cost of the j-th subset. Similarly, the model for set partitioning is

\min c^T x
\text{s.t. } Ax = \mathbf{1},
x \in \{0,1\}^n,

and the model for set packing is

\max c^T x
\text{s.t. } Ax \le \mathbf{1},
x \in \{0,1\}^n.

We can use binary variables to model fixed costs: h(x) = f + px if 0 < x ≤ C, and h(x) = 0 if x = 0. To do this we define an additional binary variable y, with y = 1 if x > 0 and y = 0 otherwise. We replace h(x) by fy + px and add the constraints x ≤ Cy, y ∈ {0,1}. Note that this is not a completely satisfactory formulation: although the costs are correct when x > 0, it is possible to have the solution x = 0, y = 1. So we did not model the set X = {(0,0)} ∪ {(x,1) : 0 < x ≤ C}, but the set X ∪ {(0,1)}. However, as the typical objective is minimization, this will not arise in an optimal solution.

We can use binary variables to enforce the condition that a certain action X can only be taken if some other action Y is also taken, i.e., dependent decisions. We introduce two binary variables x and y for actions X and Y, respectively, and impose the constraint x ≤ y.
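A small sketch of the fixed-cost construction h(x) = f + px for x > 0: the binary y switches the fixed charge on, and the constraint x ≤ Cy links the two variables (PuLP again, with made-up numbers; the demand constraint is only there so that x > 0 in the optimum).

```python
from pulp import LpProblem, LpVariable, LpMinimize, LpBinary, value

f, p, C = 100, 3, 50          # made-up fixed cost, unit cost, upper bound on x
prob = LpProblem("fixed_cost_sketch", LpMinimize)
x = LpVariable("x", lowBound=0, upBound=C)
y = LpVariable("y", cat=LpBinary)

prob += f * y + p * x         # h(x) replaced by f*y + p*x
prob += x <= C * y            # x can be positive only if y = 1
prob += x >= 20               # made-up requirement forcing some production

prob.solve()
print("x =", x.value(), " y =", y.value(), " cost =", value(prob.objective))
```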

Example: Uncapacitated Facility Location
We are given n potential facility locations and m customers that must be serviced from those locations. There is a fixed cost d_j of opening facility j, and a cost c_ij associated with serving customer i from facility j. We use two sets of binary variables:
- y_j takes value 1 if facility j is opened, 0 otherwise;
- x_ij takes value 1 if customer i is served by facility j, 0 otherwise.

\min \sum_{j=1}^{n} d_j y_j + \sum_{i=1}^{m} \sum_{j=1}^{n} c_{ij} x_{ij}
\text{s.t. } \sum_{j=1}^{n} x_{ij} = 1 \quad (i = 1,\dots,m),
x_{ij} \le y_j \quad (i = 1,\dots,m;\ j = 1,\dots,n),   (1)
x \in \{0,1\}^{mn},\ y \in \{0,1\}^n.

Alternatively, we can model the dependency constraints (1) as follows:

\sum_{i=1}^{m} x_{ij} \le m\, y_j \quad (j = 1,\dots,n).

Example: Capacitated Facility Location
In the capacitated version we are given a demand r_i for each customer i and a capacity b_j for each facility j. The following is a valid IP formulation:

\min \sum_{j=1}^{n} d_j y_j + \sum_{i=1}^{m} \sum_{j=1}^{n} c_{ij} x_{ij}
\text{s.t. } \sum_{j=1}^{n} x_{ij} = 1 \quad (i = 1,\dots,m),
\sum_{i=1}^{m} r_i x_{ij} \le b_j y_j \quad (j = 1,\dots,n),   (2)
x \in \{0,1\}^{mn},\ y \in \{0,1\}^n.
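A compact PuLP sketch of the uncapacitated facility location model with the disaggregated linking constraints (1); the costs are made up.

```python
from pulp import LpProblem, LpVariable, LpMinimize, LpBinary, lpSum

d = [30, 25, 40]                       # made-up opening costs d_j
c = [[4, 7, 3],                        # made-up service costs c[i][j]
     [6, 2, 5],
     [5, 6, 4],
     [8, 3, 6]]
m, n = len(c), len(d)

prob = LpProblem("uncapacitated_facility_location", LpMinimize)
y = [LpVariable(f"y_{j}", cat=LpBinary) for j in range(n)]
x = {(i, j): LpVariable(f"x_{i}_{j}", cat=LpBinary)
     for i in range(m) for j in range(n)}

prob += lpSum(d[j] * y[j] for j in range(n)) + \
        lpSum(c[i][j] * x[i, j] for i in range(m) for j in range(n))
for i in range(m):
    prob += lpSum(x[i, j] for j in range(n)) == 1      # every customer served
for i in range(m):
    for j in range(n):
        prob += x[i, j] <= y[j]                        # linking constraints (1)

prob.solve()
print("open facilities:", [j for j in range(n) if y[j].value() > 0.5])
```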

Example: Bin Packing
We are given n bins with the same capacity b and m items with weights r_i, i = 1,...,m. We want to pack all items into the bins and minimize the number of bins used:

\min \sum_{j=1}^{n} y_j
\text{s.t. } \sum_{j=1}^{n} x_{ij} = 1 \quad (i = 1,\dots,m),
\sum_{i=1}^{m} r_i x_{ij} \le b\, y_j \quad (j = 1,\dots,n),   (3)
x \in \{0,1\}^{mn},\ y \in \{0,1\}^n.

Example: Cutting Stock
There are n rolls of fixed size b awaiting to be cut, aka raws. There are s manufacturers who want different numbers d_i of rolls of various widths w_i, i = 1,...,s, aka finals. How should the raws be cut so that the least number of raw rolls is cut? The following model by Kantorovich is a valid formulation for the cutting stock problem:

\min \sum_{j=1}^{n} y_j
\text{s.t. } \sum_{j=1}^{n} x_{ij} \ge d_i \quad (i = 1,\dots,s),
\sum_{i=1}^{s} w_i x_{ij} \le b\, y_j \quad (j = 1,\dots,n),   (4)
x \in \mathbb{Z}_+^{sn},\ y \in \{0,1\}^n.

An alternative, stronger formulation is due to Gilmore & Gomory. The idea is to enumerate all possible raw cutting patterns. A pattern j ∈ J is described by the vector (a_1j, a_2j, ..., a_sj), where a_ij represents the number of final rolls of width w_i obtained from cutting a raw roll according to pattern j. In this model we have an integer variable x_j for each pattern j ∈ J, indicating how many times pattern j is used, i.e., how many raw rolls are cut according to pattern j:

\min \sum_{j \in J} x_j
\text{s.t. } \sum_{j \in J} a_{ij} x_j \ge d_i \quad (i = 1,\dots,s),
x \in \mathbb{Z}_+^{|J|}.

Clearly, the Gilmore-Gomory model must be solved using column generation techniques.

Note that Bin Packing and Cutting Stock are in fact the same problem! In bin packing we are given an item vector (r_i), i = 1,...,m, and we want to minimize the number of bins of capacity b used to pack all items. In cutting stock we are given a size vector (w_i), i = 1,...,s, and a multiplicity vector (d_i), i = 1,...,s, and we want to minimize the number of bins of capacity b used to pack all items. So in principle the problems are equivalent. Yet cutting stock uses a more compact input representation, thus:

- a poly-time algorithm for BPP is not necessarily poly-time for CSP;
- a poly-size formulation for BPP is not necessarily poly-size for CSP;
...because the size of the input of CSP can be exponentially small compared to the BPP input size!

Example: Fixed Charge Network Design
This problem often appears in design problems that involve material flows in networks, such as water supply, heating, roads, telecommunications, etc. We are given a directed graph G = (V,A). Each node i ∈ V has a demand/supply/transit value b_i. An arc (i,j) ∈ A means that direct shipping from i to j is allowed (possibly with maximum capacity u_ij if the network is capacitated). There is a variable cost c_ij associated with each unit of flow along arc (i,j), and a fixed cost d_ij associated with opening arc (i,j) (think of this as the cost to build the link). We associate two variables with each arc:
- x_ij is a binary variable taking value 1 if arc (i,j) is open, 0 otherwise;
- f_ij is a nonnegative continuous variable representing the flow on arc (i,j).

\min \sum_{(i,j) \in A} (d_{ij} x_{ij} + c_{ij} f_{ij})
\text{s.t. } \sum_{j:(i,j) \in A} f_{ij} - \sum_{j:(j,i) \in A} f_{ji} = b_i \quad (i \in V),
f_{ij} \le M x_{ij} \quad ((i,j) \in A),   (5)
x \in \{0,1\}^{|A|},\ f \in \mathbb{R}_+^{|A|}.

If the network is capacitated we also need f_ij ≤ u_ij for all (i,j) ∈ A. Note that if the network is capacitated, we can model the dependency constraints (5) as well as the capacity constraints in one shot:

f_{ij} \le u_{ij} x_{ij} \quad ((i,j) \in A).
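A sketch of the capacitated fixed-charge network design model on a tiny made-up digraph, using the combined constraint f_ij ≤ u_ij x_ij mentioned above (PuLP; the node balances b_i must sum to zero).

```python
from pulp import LpProblem, LpVariable, LpMinimize, LpBinary, lpSum, value

# Made-up instance: arc -> (fixed cost d, unit cost c, capacity u)
arcs = {(0, 1): (10, 1, 8), (0, 2): (12, 2, 8), (1, 2): (5, 1, 5),
        (1, 3): (8, 3, 8), (2, 3): (7, 1, 8)}
b = {0: 6, 1: 0, 2: 0, 3: -6}     # supply at node 0, demand at node 3
V = sorted(b)

prob = LpProblem("fixed_charge_network_design", LpMinimize)
x = {a: LpVariable(f"x_{a[0]}_{a[1]}", cat=LpBinary) for a in arcs}
f = {a: LpVariable(f"f_{a[0]}_{a[1]}", lowBound=0) for a in arcs}

prob += lpSum(arcs[a][0] * x[a] + arcs[a][1] * f[a] for a in arcs)
for i in V:                                   # flow balance at every node
    prob += lpSum(f[a] for a in arcs if a[0] == i) - \
            lpSum(f[a] for a in arcs if a[1] == i) == b[i]
for a in arcs:                                # capacity and opening combined
    prob += f[a] <= arcs[a][2] * x[a]

prob.solve()
print("opened arcs:", [a for a in arcs if x[a].value() > 0.5],
      "cost:", value(prob.objective))
```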

3 Modeling a Restricted Set of Values

We may want a variable x to take values only in the set {a_1,...,a_m}. We introduce m binary variables y_j, j = 1,...,m, and the constraints:

x = \sum_{j=1}^{m} a_j y_j,
\sum_{j=1}^{m} y_j = 1,
y \in \{0,1\}^m.

The set of variables y_j, j = 1,...,m, is called a special ordered set (SOS) of variables.

4 Piecewise Linear Cost Functions

We can use binary variables to model arbitrary piecewise linear cost functions. The function is specified by ordered pairs (a_i, f(a_i)), and we wish to evaluate it at a point x. We can use a binary variable y_i which indicates whether a_i ≤ x ≤ a_{i+1}. To evaluate the function, we take linear combinations \sum_{i=1}^{k} \lambda_i f(a_i) of the given function values. This works iff the only two nonzero λ's are the ones corresponding to the endpoints of the interval in which x lies... otherwise the description is not unique!

Note that piecewise linear functions that are convex (concave) can be minimized (maximized) by linear programming, because the slopes of the segments are increasing (decreasing). If f(x) is convex piecewise linear we can represent it as f(x) = \max_{i=1,\dots,k-1} \{a_i^T x + b_i\}:

\min \max_{i=1,\dots,k-1} \{a_i^T x + b_i\} \quad \Longleftrightarrow \quad \min t \ :\ t \ge a_i^T x + b_i \quad (i = 1,\dots,k-1).

If f(x) is concave piecewise linear we can represent it as f(x) = \min_{i=1,\dots,k-1} \{a_i^T x + b_i\}:

\max \min_{i=1,\dots,k-1} \{a_i^T x + b_i\} \quad \Longleftrightarrow \quad \max t \ :\ t \le a_i^T x + b_i \quad (i = 1,\dots,k-1).

Clearly, this does not work for mini-min and maxi-max problems!
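A short sketch of the epigraph trick for a convex piecewise linear f(x) = max_i {a_i x + b_i} in one variable: minimizing f reduces to the LP min t s.t. t ≥ a_i x + b_i (PuLP; the segment data are made up).

```python
from pulp import LpProblem, LpVariable, LpMinimize, value

# Made-up convex piecewise linear f(x) = max_i (a_i * x + b_i)
segments = [(-2, 0), (-0.5, 1.5), (1, -3)]   # pairs (a_i, b_i)

prob = LpProblem("convex_pwl_min", LpMinimize)
x = LpVariable("x", lowBound=0, upBound=10)
t = LpVariable("t")                           # epigraph variable, t >= f(x)

prob += t                                     # minimize the upper estimate of f(x)
for a, b in segments:
    prob += t >= a * x + b                    # one constraint per affine piece

prob.solve()
print("x* =", x.value(), " f(x*) =", value(prob.objective))
```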

General piecewise linear functions are neither convex nor concave, so binary variables are needed to select the correct segment for a given value of x:

\min \sum_{i=1}^{k} \lambda_i f(a_i)
\text{s.t. } \sum_{i=1}^{k} \lambda_i = 1,
\lambda_1 \le y_1,
\lambda_i \le y_{i-1} + y_i \quad (i = 2,\dots,k-1),
\lambda_k \le y_{k-1},
\sum_{i=1}^{k-1} y_i = 1,
y \in \{0,1\}^{k-1},\ \lambda \in \mathbb{R}_+^k.

Why make such a fuss about piecewise linear functions? The point is that an arbitrary continuous function of one variable can be approximated by a piecewise linear function, with the quality of the approximation being controlled by the size of the linear segments. So, if we are given a separable function f(x_1,...,x_p) = \sum_{j=1}^{p} f_j(x_j), we know a trick!

5 Modeling Disjunctive Constraints

We are given two constraints a^T x ≥ b and c^T x ≥ d with nonnegative coefficients. Instead of insisting that both constraints be satisfied, we want at least one of the two constraints to be satisfied. To model this, we define a binary variable y and impose the following:

a^T x \ge b\, y,
c^T x \ge (1 - y)\, d,
y \in \{0,1\}.

More generally, we can impose that at least k out of m constraints be satisfied with the following:

(a^i)^T x \ge b_i y_i \quad (i = 1,\dots,m),
\sum_{i=1}^{m} y_i \ge k,
y \in \{0,1\}^m.
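A tiny sketch of the either-or device for two ≥-constraints with nonnegative data, exactly as above: a^T x ≥ by and c^T x ≥ (1−y)d, so that at least one of the two original constraints must hold (PuLP, made-up data).

```python
from pulp import LpProblem, LpVariable, LpMinimize, LpBinary, lpSum, value

a, b_rhs = [3, 1], 12      # made-up constraint a^T x >= 12
c, d_rhs = [1, 4], 10      # made-up constraint c^T x >= 10
cost = [2, 3]

prob = LpProblem("either_or", LpMinimize)
x = [LpVariable(f"x_{j}", lowBound=0) for j in range(2)]
y = LpVariable("y", cat=LpBinary)

prob += lpSum(cost[j] * x[j] for j in range(2))
prob += lpSum(a[j] * x[j] for j in range(2)) >= b_rhs * y          # enforced if y = 1
prob += lpSum(c[j] * x[j] for j in range(2)) >= d_rhs * (1 - y)    # enforced if y = 0

prob.solve()
print("y =", y.value(), " x =", [v.value() for v in x],
      " cost =", value(prob.objective))
```

Because x ≥ 0 and the coefficients are nonnegative, the constraint whose indicator is switched off is trivially satisfied, which is exactly why the construction needs that sign assumption.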

Even more generally, suppose P_i = {x ∈ R^p_+ : A^i x ≤ b^i, x ≤ d}, i = 1,...,m. Then there exists a vector w such that A^i x ≤ b^i + w for every x ≤ d and every i = 1,...,m. So x is contained in at least k of the P_i iff the following set is nonempty:

A^i x \le b^i + w(1 - y_i) \quad (i = 1,\dots,m),
\sum_{i=1}^{m} y_i \ge k,
x \le d,
y \in \{0,1\}^m,\ x \in \mathbb{R}^p_+.

This follows because y_i = 1 yields the constraint A^i x ≤ b^i, while y_i = 0 yields the redundant constraint A^i x ≤ b^i + w.

When k = 1, this is an alternative formulation:

A^i x^i \le b^i y_i \quad (i = 1,\dots,m),
x^i \le d\, y_i \quad (i = 1,\dots,m),
\sum_{i=1}^{m} y_i = 1,
\sum_{i=1}^{m} x^i = x,
y \in \{0,1\}^m,\ x \in \mathbb{R}^p_+,\ x^i \in \mathbb{R}^p_+ \quad (i = 1,\dots,m).

First, given that x ∈ ∪_{i=1}^m P_i, suppose w.l.o.g. that x ∈ P_1. Then a solution to the above model is y_1 = 1, y_i = 0 otherwise; x^1 = x, and x^i = 0 otherwise. On the other hand, suppose the above model has a solution and, w.l.o.g., suppose y_1 = 1, y_i = 0 otherwise. Then we obtain x = x^1 and x^i = 0 otherwise. Thus x ∈ P_1 and x ∈ ∪_{i=1}^m P_i.

Example: A scheduling problem
Disjunctive constraints arise naturally in scheduling problems where several jobs have to be processed on a machine and the order in which they are processed is not specified. Thus we obtain disjunctive constraints of the type "either job k precedes job j on machine i or vice versa". Suppose there are n jobs and m machines and each job must be processed on each machine. For each job the machine order is fixed, that is, job j must first be processed on machine j(1), then on machine j(2), and so on. A machine can only process one job at a time, and once a job is started on any machine it must be processed to completion. The objective is to minimize the sum of the completion times of all the jobs. The data that specify an instance of the problem are:

1. m, n, and p_ij for i = 1,...,m and j = 1,...,n, where p_ij is the processing time of job j on machine i;
2. the machine order j(1), j(2),...,j(m) for each job j.

To model this problem we use two types of variables:
- t_ij is a nonnegative continuous variable representing the start time of job j on machine i;
- x_ijk is a binary variable taking value 1 if job j precedes job k on machine i (0 otherwise), where j < k.

\min \sum_{j=1}^{n} t_{j(m),j}
\text{s.t. } t_{j(r+1),j} \ge t_{j(r),j} + p_{j(r),j} \quad (r = 1,\dots,m-1;\ j = 1,\dots,n),
t_{ij} \le t_{ik} - p_{ij} + M(1 - x_{ijk}) \quad (i = 1,\dots,m;\ 1 \le j < k \le n),
t_{ik} \le t_{ij} - p_{ik} + M x_{ijk} \quad (i = 1,\dots,m;\ 1 \le j < k \le n),
x \in \{0,1\}^{m\binom{n}{2}},\ t \in \mathbb{R}^{mn}_+.

6 Indicator Variables

We have already seen this trick here and there, but let's look at the details. An indicator variable is a binary variable that indicates whether or not a constraint is satisfied. Consider a constraint of the form \sum_{j \in N} a_j x_j \le b (or \ge b). Let M and m denote respectively an upper bound and a lower bound on the value of \sum_{j \in N} a_j x_j - b, and let ε be the tolerance. Then:

(\delta = 1 \Rightarrow \sum_{j \in N} a_j x_j \le b): \quad \sum_{j \in N} a_j x_j \le b + M(1 - \delta)
(\sum_{j \in N} a_j x_j \le b \Rightarrow \delta = 1): \quad \sum_{j \in N} a_j x_j \ge b + \epsilon + \delta(m - \epsilon)
(\delta = 1 \Rightarrow \sum_{j \in N} a_j x_j \ge b): \quad \sum_{j \in N} a_j x_j \ge b + m(1 - \delta)
(\sum_{j \in N} a_j x_j \ge b \Rightarrow \delta = 1): \quad \sum_{j \in N} a_j x_j \le b - \epsilon + \delta(M + \epsilon)

Exercise: Production Planning
An engineering plant can produce five types of products p_1, p_2,...,p_5 by using two production processes: grinding and drilling. Each product requires the following number of hours for each process, and contributes the following amount (in hundreds of euros) to the net total profit. Maximize the total profit.

Table 1: Production Planning Data (hours of grinding and drilling per unit, and unit profit, for p_1,...,p_5).

Each unit of each product takes 20 man-hours for final assembly. The factory has 3 grinding machines and 2 drilling machines. The factory works a 6-day week with two shifts of 8 hours per day. Eight workers are employed in assembly, each working one shift per day. If we manufacture p_1 or p_2 (or both), then at least one of p_3, p_4, p_5 must also be manufactured. We can model this as a MILP:

\max \sum_{i=1}^{5} (\text{profit of } p_i)\, x_i
\text{s.t. } \sum_{i=1}^{5} (\text{grinding hours of } p_i)\, x_i \le 288 \quad (= 3 \cdot 6 \cdot 2 \cdot 8),
\sum_{i=1}^{5} (\text{drilling hours of } p_i)\, x_i \le 192 \quad (= 2 \cdot 6 \cdot 2 \cdot 8),
20 \sum_{i=1}^{5} x_i \le 384 \quad (= 8 \cdot 6 \cdot 8),
x_i \le M z_i \quad (i = 1,\dots,5),
z_1 + z_2 - 2\delta \le 0,
z_3 + z_4 + z_5 - \delta \ge 0,
x \ge 0,\ z \in \{0,1\}^5,\ \delta \in \{0,1\}.
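A sketch of the production-planning MILP in PuLP. The per-unit profit, grinding, and drilling figures below are placeholder numbers standing in for Table 1; the logical part (x_i ≤ M z_i, z_1 + z_2 ≤ 2δ, z_3 + z_4 + z_5 ≥ δ) is exactly the indicator construction above.

```python
from pulp import LpProblem, LpVariable, LpMaximize, LpBinary, lpSum, value

# Placeholder data standing in for Table 1 (unit profit, grinding h, drilling h)
profit = [55, 60, 35, 40, 20]
grind  = [12, 20,  0, 25, 15]
drill  = [10,  8, 16,  0,  0]
M = 1000                                  # big-M, larger than any feasible x_i

prob = LpProblem("production_planning", LpMaximize)
x = [LpVariable(f"x_{i}", lowBound=0) for i in range(5)]
z = [LpVariable(f"z_{i}", cat=LpBinary) for i in range(5)]
delta = LpVariable("delta", cat=LpBinary)

prob += lpSum(profit[i] * x[i] for i in range(5))
prob += lpSum(grind[i] * x[i] for i in range(5)) <= 288   # 3 grinders * 6 d * 2 shifts * 8 h
prob += lpSum(drill[i] * x[i] for i in range(5)) <= 192   # 2 drills   * 6 d * 2 shifts * 8 h
prob += lpSum(20 * x[i] for i in range(5)) <= 384         # 8 assembly workers * 6 d * 8 h
for i in range(5):
    prob += x[i] <= M * z[i]                              # z_i indicates x_i > 0
prob += z[0] + z[1] <= 2 * delta                          # p1 or p2 made  =>  delta = 1
prob += z[2] + z[3] + z[4] >= delta                       # then at least one of p3, p4, p5

prob.solve()
print([v.value() for v in x], value(prob.objective))
```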

Good IP Models

In IP, formulating a good model is of crucial importance to solving the model. A model consists of variables, an objective function, and constraints. When addressing a model, the first question to ask is: what are the variables? Once we have defined the variables we can give an implicit representation of the problem: max{c^T x : x ∈ S ⊆ Z^n_+}, where S is the set of feasible points in Z^n_+. We say that max{c^T x : Ax ≤ b, x ∈ Z^n_+} is a valid formulation if S = {x ∈ Z^n_+ : Ax ≤ b}. The point is: there are many choices for (A, b)! Which one to choose?

Example. S = {(0,0,0,0), (1,0,0,0), (0,1,0,0), (0,0,1,0), (0,0,0,1), (0,1,1,0), (0,1,0,1), (0,0,1,1)} ⊆ {0,1}^4.

1. S = {x ∈ B^4 : 93x_1 + 49x_2 + 37x_3 + 29x_4 ≤ 111}
2. S = {x ∈ B^4 : 2x_1 + x_2 + x_3 + x_4 ≤ 2}
3. S = {x ∈ B^4 : 2x_1 + x_2 + x_3 + x_4 ≤ 2, x_1 + x_2 ≤ 1, x_1 + x_3 ≤ 1, x_1 + x_4 ≤ 1}

Which one is the best?

Most IP algorithms require an upper bound (UB, assuming a max problem) on the value of the objective function, and the efficiency of the algorithm is very dependent on the sharpness of the bound. An UB is obtained, for example, by relaxing the integrality constraints (the so-called continuous relaxation), so we solve z_LP = max{c^T x : Ax ≤ b, x ∈ R^n_+}, since S ⊆ P = {x : Ax ≤ b, x ∈ R^n_+}. Now, given two valid formulations defined by (A^i, b^i), i = 1,2, let P_i = {x : A^i x ≤ b^i, x ∈ R^n_+} and z^i_LP = max{c^T x : x ∈ P_i}. Clearly, P_1 ⊆ P_2 implies z^1_LP ≤ z^2_LP. Also, it is instinctive to believe that computation time increases as the number of constraints increases. Yet trying to find a formulation with a small number of constraints is often a bad strategy! In fact, one of the main algorithmic approaches involves a systematic addition of constraints, known as cutting planes or valid inequalities.

To understand how to generate valid inequalities for integer programs, it is first necessary to understand valid inequalities for polyhedra (or linear programs). So the first question is: when is the inequality π^T x ≤ π_0 valid for P = {x ∈ R^n : Ax ≤ b, x ≥ 0}?

π^T x ≤ π_0 is valid for P = {x ∈ R^n : Ax ≤ b, x ≥ 0} iff π^T x ≤ π_0 is satisfied by all points in P. More mathy... π^T x ≤ π_0 is valid for P = {x ∈ R^n : Ax ≤ b, x ≥ 0} iff there exists u ≥ 0 such that u^T A ≥ π^T and u^T b ≤ π_0.

Proof: The linear program Π = max{π^T x : x ∈ P} is feasible because P ≠ ∅ and bounded because Π ≤ π_0. Therefore the dual min{u^T b : u ≥ 0, u^T A ≥ π^T} is also feasible and bounded. A basic optimal solution u* proves the claim.

Given two valid inequalities π^T x ≤ π_0 and µ^T x ≤ µ_0, we say π^T x ≤ π_0 dominates µ^T x ≤ µ_0 iff there exists α > 0 such that π ≥ αµ and π_0 ≤ αµ_0 (if (π^T, π_0) = (αµ^T, αµ_0) they are equivalent). If π^T x ≤ π_0 dominates µ^T x ≤ µ_0, then {x ∈ R^n_+ : π^T x ≤ π_0} ⊆ {x ∈ R^n_+ : µ^T x ≤ µ_0}. A valid inequality µ^T x ≤ µ_0 is redundant if there exist k ≥ 1 valid inequalities π^{iT} x ≤ π^i_0, i = 1,...,k, and weights α_i > 0, i = 1,...,k, such that (\sum_{i=1}^{k} α_i π^i)^T x ≤ \sum_{i=1}^{k} α_i π^i_0 dominates µ^T x ≤ µ_0.

Geometrically, we can see that there must be an infinite number of formulations, so how can we choose between them? The geometry again helps us find the answer.

7 Graph Problems

7.1 A(n) (In)famous Problem: TSP

TSP stands for Traveling Salesman Problem. Here we consider the asymmetric version (ATSP), which is defined on a directed graph. (The so-called symmetric or standard TSP is defined on an undirected graph.) We are given a directed graph G = (V,A), where V = {1,...,n} represents the set of cities that a salesman must visit. The arcs A represent ordered pairs of cities with direct travel possibility, and c_ij is the travel cost for arc (i,j). The salesman wishes to visit each of the n cities exactly once and then return to his starting point. Find the order in which he should make his tour so as to do it with minimum cost.

To model this problem we can use a binary variable x_ij for each arc, taking value 1 if the salesman goes directly from town i to town j, and x_ij = 0 otherwise:

\min \sum_{(i,j) \in A} c_{ij} x_{ij}   (6)
\text{s.t. } \sum_{(i,j) \in \delta^+(i)} x_{ij} = 1 \quad (i = 1,\dots,n),   (7)
\sum_{(i,j) \in \delta^-(j)} x_{ij} = 1 \quad (j = 1,\dots,n),   (8)
\sum_{(i,j) \in A(S)} x_{ij} \le |S| - 1 \quad (S \subset V,\ 2 \le |S| \le n-2),   (9)
x \in \{0,1\}^{|A|}.   (10)

Constraints (9) are called subtour elimination constraints. An alternative to (9) are the so-called cut-set constraints:

\sum_{(i,j) \in \delta^+(S)} x_{ij} \ge 1 \quad (S \subset V,\ 2 \le |S| \le n-2).   (11)

Cut-set constraints can also be written as follows, for an arbitrary r ∈ V:

\sum_{(i,j) \in \delta^+(S)} x_{ij} \ge 1 \quad (S \subset V,\ r \in S).

The two formulations (6)-(10) and (6)-(8), (11), (10) are equivalent (i.e., they provide the same lower bound) and are due to Dantzig, Fulkerson & Johnson, hence we call them DFJ. A key feature of DFJ is that constraints (9) (as well as (11)) are exponential in number, which makes cutting-plane methods necessary. On the other hand, a wide variety of compact formulations exist, i.e., formulations with a polynomial number of both variables and constraints. All of them start by setting V = {1,...,n} and viewing node 1 as a depot, which the salesman must leave at the start of the tour and return to at the end of the tour. All of them can be used for the asymmetric TSP as well as for the symmetric TSP.

We start by giving the (compact) formulation due to Miller, Tucker & Zemlin (MTZ). This formulation is based on the following observation: if the salesman travels from i to j, then the position of node j is one more than that of node i. So, for i = 2,...,n, let u_i be a continuous variable representing the position of node i in the tour. (The depot can be thought of as being at positions 0 and n.) The MTZ formulation looks as follows:

\min \sum_{(i,j) \in A} c_{ij} x_{ij}
\text{s.t. } \sum_{(i,j) \in \delta^+(i)} x_{ij} = 1 \quad (i = 1,\dots,n),
\sum_{(i,j) \in \delta^-(j)} x_{ij} = 1 \quad (j = 1,\dots,n),
u_i - u_j + (n-1) x_{ij} \le n - 2 \quad (2 \le i, j \le n;\ i \ne j),   (12)
1 \le u_i \le n - 1 \quad (2 \le i \le n),   (13)
x \in \{0,1\}^{|A|}.

The MTZ formulation is compact, having only O(n^2) variables and O(n^2) constraints. Unfortunately, Padberg & Sung showed that its LP relaxation yields an extremely weak lower bound, much weaker than that of the DFJ formulation.
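A sketch of the compact MTZ formulation in PuLP on a small made-up cost matrix (complete digraph, node 0 playing the role of the depot "1" above). Because the MTZ bound is weak this is only practical for small n, but it needs no cutting planes.

```python
from pulp import LpProblem, LpVariable, LpMinimize, LpBinary, lpSum, value

c = [[0, 3, 4, 2, 7],     # made-up asymmetric cost matrix
     [3, 0, 4, 6, 3],
     [4, 4, 0, 5, 8],
     [2, 6, 5, 0, 6],
     [7, 3, 8, 6, 0]]
n = len(c)
A = [(i, j) for i in range(n) for j in range(n) if i != j]

prob = LpProblem("atsp_mtz", LpMinimize)
x = {(i, j): LpVariable(f"x_{i}_{j}", cat=LpBinary) for (i, j) in A}
u = {i: LpVariable(f"u_{i}", lowBound=1, upBound=n - 1) for i in range(1, n)}

prob += lpSum(c[i][j] * x[i, j] for (i, j) in A)
for i in range(n):
    prob += lpSum(x[i, j] for j in range(n) if j != i) == 1   # leave each city once
    prob += lpSum(x[j, i] for j in range(n) if j != i) == 1   # enter each city once
for i in range(1, n):                                         # MTZ constraints (12)
    for j in range(1, n):
        if i != j:
            prob += u[i] - u[j] + (n - 1) * x[i, j] <= n - 2

prob.solve()
tour = [(i, j) for (i, j) in A if x[i, j].value() > 0.5]
print("tour arcs:", tour, " cost:", value(prob.objective))
```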

Another compact formulation is due to Gavish & Groves and is called the single-commodity flow (SCF) formulation. The idea is that the salesman carries n-1 units of a (single) commodity when he leaves the depot (i.e., node 1), and delivers one unit of this commodity to each other node:

\min \sum_{(i,j) \in A} c_{ij} x_{ij}
\text{s.t. } \sum_{(i,j) \in \delta^+(i)} x_{ij} = 1 \quad (i = 1,\dots,n),
\sum_{(i,j) \in \delta^-(j)} x_{ij} = 1 \quad (j = 1,\dots,n),
\sum_{j=1}^{n} g_{ji} - \sum_{j=2}^{n} g_{ij} = 1 \quad (2 \le i \le n),   (14)
0 \le g_{ij} \le (n-1) x_{ij} \quad (1 \le i, j \le n;\ i \ne j),   (15)
x \in \{0,1\}^{|A|},

where the continuous variables g_ij represent the amount of the commodity (if any) passing directly from node i to node j. Constraints (14) ensure that one unit of the commodity is delivered to each non-depot node. The bounds (15) ensure that the commodity can only flow along arcs that are in the tour. The SCF formulation has O(n^2) variables and O(n) constraints. The associated LB is intermediate between the DFJ and MTZ bounds.

Finally, we give a multicommodity flow (MCF) formulation by Claus. Here the salesman carries n-1 commodities, one unit for each customer:

\min \sum_{(i,j) \in A} c_{ij} x_{ij}
\text{s.t. } \sum_{(i,j) \in \delta^+(i)} x_{ij} = 1 \quad (i = 1,\dots,n),
\sum_{(i,j) \in \delta^-(j)} x_{ij} = 1 \quad (j = 1,\dots,n),
\sum_{j=1}^{n} f^k_{ji} - \sum_{j=2}^{n} f^k_{ij} = 0 \quad (2 \le k \le n;\ 2 \le i \le n;\ i \ne k),   (16)
0 \le f^k_{ij} \le x_{ij} \quad (1 \le i, j \le n,\ i \ne j;\ 2 \le k \le n),   (17)
\sum_{i=2}^{n} f^k_{1i} = 1 \quad (2 \le k \le n),   (18)
\sum_{i=1}^{n} f^k_{ik} = 1 \quad (2 \le k \le n),   (19)
x \in \{0,1\}^{|A|},

where the continuous variables f^k_ij represent the amount of the k-th commodity (if any) passing directly from node i to node j. Constraints (17) state that a commodity cannot flow along an arc unless that arc belongs to the tour. Constraints (18), (19) impose that each commodity leaves the depot and arrives at its destination. Constraints (16) ensure that, when a commodity arrives at a node that is not its final destination, it also leaves the node. The MCF formulation has O(n^3) variables and O(n^3) constraints. The associated LB is equal to the DFJ bound.

7.2 Stable Set

In graph theory, a stable set or independent set is a (sub)set of vertices of a graph G = (V,E), no two of which are adjacent. That is, it is a (sub)set I ⊆ V of vertices such that for every two vertices in I there is no edge in G connecting the two.

Equivalently, each edge of the graph has at most one endpoint in I. The size of a stable set is the number of vertices it contains. A maximal stable set is either a stable set such that adding any other vertex to the set forces the set to contain an edge, or the set of all vertices of the empty graph. In the maximum stable set problem each vertex i ∈ V is given a weight p_i, and we wish to find a stable set of maximum weight; clearly, if all vertices are given the same weight this corresponds to a stable set of maximum cardinality. The problem is also called vertex packing or node packing. The following is a valid formulation for the maximum stable set problem. Let x_i be a binary variable associated with vertex i ∈ V, taking value 1 if the vertex belongs to the stable set, 0 otherwise:

\max \sum_{i \in V} p_i x_i
\text{s.t. } x_i + x_j \le 1 \quad ((i,j) \in E),   (20)
x \in \{0,1\}^{|V|}.

The maximum set packing problem with incidence matrix A ∈ {0,1}^{m×n} and cost vector c is equivalent to the stable set problem on the intersection graph G(A) = (V,E) with vertices V = {1,...,n}, weights p_j = c_j for j ∈ V, and edges E = {(i,j) : a_hi = a_hj = 1 for some h ∈ {1,...,m}}.

Also, the maximum stable set problem is equivalent to the maximum clique problem on the complement graph. A clique in an undirected graph G = (V,E) is a subset K ⊆ V of the vertex set such that for every two vertices in K there exists an edge connecting the two; this is equivalent to saying that the subgraph induced by K is complete. Every clique corresponds to a stable set (of the same size) in the complement graph Ḡ = (V, Ē). Finally, the maximum stable set problem is equivalent to the minimum vertex cover problem on the same graph. A vertex cover (aka node cover) of a graph is a (sub)set of vertices C ⊆ V such that each edge of the graph is incident to at least one vertex of C. A (sub)set of vertices C is a vertex cover if and only if its complement I = V \ C is an independent set. Note that the above formulation can be strengthened using clique inequalities.
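A sketch of the edge formulation (20) for the maximum weight stable set in PuLP, on a small made-up graph.

```python
from pulp import LpProblem, LpVariable, LpMaximize, LpBinary, lpSum, value

V = range(5)
E = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 4), (3, 4)]   # made-up graph
p = [3, 2, 4, 1, 5]                                    # made-up vertex weights

prob = LpProblem("max_weight_stable_set", LpMaximize)
x = [LpVariable(f"x_{i}", cat=LpBinary) for i in V]

prob += lpSum(p[i] * x[i] for i in V)
for i, j in E:
    prob += x[i] + x[j] <= 1        # edge constraints (20)

prob.solve()
print("stable set:", [i for i in V if x[i].value() > 0.5],
      " weight:", value(prob.objective))
```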

7.3 Vertex Colouring

We are given an undirected graph G = (V,E), and we want to assign a colour to each vertex in V such that:
- any two adjacent vertices get different colours;
- the number of colours used is minimized.

Let n = |V|. We use binary variables y_j, j = 1,...,n, taking value 1 if colour j is used, 0 otherwise; and binary variables x_ij taking value 1 if vertex i ∈ V gets colour j:

\min \sum_{j=1}^{n} y_j
\text{s.t. } x_{ij} + x_{hj} \le y_j \quad ((i,h) \in E;\ j = 1,\dots,n),   (21)
\sum_{j=1}^{n} x_{ij} = 1 \quad (i \in V),   (22)
y \in \{0,1\}^n,\ x \in \{0,1\}^{n^2}.

7.4 Network Flow Problems

The following are easy problems with TU constraint matrices.

Minimum Cost Network Flow
Given a digraph G = (V,A) with arc capacities h_ij for all (i,j) ∈ A, demands b_i at each node i ∈ V, and unit flow costs c_ij for all (i,j) ∈ A, we want to find a feasible flow that satisfies all the demands at minimum cost:

\min \sum_{(i,j) \in A} c_{ij} x_{ij}
\text{s.t. } \sum_{(i,j) \in \delta^+(i)} x_{ij} - \sum_{(j,i) \in \delta^-(i)} x_{ji} = b_i \quad (i \in V),
0 \le x_{ij} \le h_{ij} \quad ((i,j) \in A).

A generalization of this is the so-called Min-Cost Multi-commodity Network Flow problem. Again, we are given a network represented by a digraph G = (V,A), with arc capacities h_ij for all (i,j) ∈ A and unit flow costs c_ij for all (i,j) ∈ A. In addition, we are given K commodities, each represented by a triple (s^k, t^k, b^k), i.e., source-sink-demand. We want to find a feasible flow for all K commodities that satisfies all the demands at minimum cost:

\min \sum_{k=1}^{K} \sum_{(i,j) \in A} c_{ij} x^k_{ij}
\text{s.t. } \sum_{(i,j) \in \delta^+(i)} x^k_{ij} - \sum_{(j,i) \in \delta^-(i)} x^k_{ji} = b^k_i \quad (i \in V;\ k = 1,\dots,K),
\sum_{k=1}^{K} x^k_{ij} \le h_{ij} \quad ((i,j) \in A),
x^k_{ij} \ge 0 \quad ((i,j) \in A;\ k = 1,\dots,K).

Note that:
- The LP is big: Km variables, Kn + m constraints, Km non-negativity constraints. The size of the constraint matrix is Km(Kn + m) = K^2 nm + Km^2. So this is a computationally challenging problem!

- The constraint matrix is not TU, so the optimal solution to a multi-commodity flow LP might be fractional.
- Finding a feasible integer solution is NP-hard.

Shortest Path
Given a digraph G = (V,A), two distinguished nodes s, t ∈ V, and nonnegative arc costs c_ij for all (i,j) ∈ A, find a minimum cost s-t path:

\min \sum_{(i,j) \in A} c_{ij} x_{ij}
\text{s.t. } \sum_{(i,j) \in \delta^+(i)} x_{ij} - \sum_{(j,i) \in \delta^-(i)} x_{ji} = 1 \quad (i = s),
\sum_{(i,j) \in \delta^+(i)} x_{ij} - \sum_{(j,i) \in \delta^-(i)} x_{ji} = 0 \quad (i \in V \setminus \{s,t\}),
\sum_{(i,j) \in \delta^+(i)} x_{ij} - \sum_{(j,i) \in \delta^-(i)} x_{ji} = -1 \quad (i = t),
x_{ij} \ge 0 \quad ((i,j) \in A).

An NP-hard version is the so-called Constrained Shortest Path Problem:

\min \sum_{(i,j) \in A} c_{ij} x_{ij}
\text{s.t. } \sum_{(i,j) \in \delta^+(i)} x_{ij} - \sum_{(j,i) \in \delta^-(i)} x_{ji} = 1 \quad (i = s),
\sum_{(i,j) \in \delta^+(i)} x_{ij} - \sum_{(j,i) \in \delta^-(i)} x_{ji} = 0 \quad (i \in V \setminus \{s,t\}),
\sum_{(i,j) \in \delta^+(i)} x_{ij} - \sum_{(j,i) \in \delta^-(i)} x_{ji} = -1 \quad (i = t),
\sum_{(i,j) \in A} t_{ij} x_{ij} \le T,
x_{ij} \in \{0,1\} \quad ((i,j) \in A).

Maximum Flow
Given a digraph G = (V,A), two distinguished nodes s, t ∈ V, and nonnegative arc capacities h_ij for all (i,j) ∈ A, find a maximum flow from s to t. Adding a backward arc from t to s, the maximum s-t flow problem can be formulated as:

\max x_{ts}
\text{s.t. } \sum_{(i,j) \in \delta^+(i)} x_{ij} - \sum_{(j,i) \in \delta^-(i)} x_{ji} = 0 \quad (i \in V),
0 \le x_{ij} \le h_{ij} \quad ((i,j) \in A).

Taking the dual we get:

\min \sum_{(i,j) \in A} h_{ij} w_{ij}
\text{s.t. } u_i - u_j + w_{ij} \ge 0 \quad ((i,j) \in A),
u_t - u_s \ge 1,
w \ge 0,

which corresponds to the minimum s-t cut problem:

\min \Big\{ \sum_{(i,j) \in \delta^+(X)} h_{ij} \ :\ s \in X \subseteq V \setminus \{t\} \Big\}.

Transportation
Given a digraph G = (V,A), the set of nodes is partitioned into two subsets S = {1,...,m} and D = {1,...,n}, respectively the set of source nodes and the set of demand nodes. Each source i ∈ S has a quantity s_i to send, and each destination j ∈ D has a quantity d_j to receive. The cost of shipping one unit of material from source i to destination j is c_ij. We look for a shipping plan from sources to destinations with minimum cost:

\min \sum_{i=1}^{m} \sum_{j=1}^{n} c_{ij} x_{ij}
\text{s.t. } \sum_{j=1}^{n} x_{ij} = s_i \quad (i = 1,\dots,m),
\sum_{i=1}^{m} x_{ij} = d_j \quad (j = 1,\dots,n),
x_{ij} \ge 0 \quad ((i,j) \in A).

Note that the problem is feasible iff \sum_{i \in S} s_i = \sum_{j \in D} d_j. Also note that if m = n and s_i = d_j = 1 for all i, j, the problem corresponds to the Assignment problem, which is indeed another easy problem.

7.5 Minimum Spanning Tree

We are given an undirected graph G = (V,E). A subset of edges T ⊆ E is a spanning tree iff any two of the following conditions hold:
1. (V,T) is connected;
2. (V,T) is acyclic;
3. T has n - 1 edges.

We use conditions 2 and 3 and look for a minimum cost spanning tree in G:

\min \sum_{e \in E} w_e x_e
\text{s.t. } \sum_{e \in E} x_e = n - 1,
\sum_{e \in E(S)} x_e \le |S| - 1 \quad (S \subset V),   (23)
x \in \{0,1\}^{|E|}.

Note that:
- MST is an easy problem, but the constraint matrix is not TU and the number of constraints is exponential.
- This formulation corresponds to the integral hull of MST incidence vectors.

Another valid formulation can be obtained by substituting the subtour elimination constraints (23) with the following connectivity constraints:

\sum_{e \in \delta(S)} x_e \ge 1 \quad (\emptyset \ne S \subset V).   (24)

For a digraph G = (V,A) the problem is known as the shortest spanning arborescence with root r (SSA-r), and has the following formulation:

\min \sum_{(i,j) \in A} w_{ij} x_{ij}
\text{s.t. } \sum_{(i,j) \in \delta^-(j)} x_{ij} = 1 \quad (j \in V,\ j \ne r),   (25)
\sum_{(i,r) \in \delta^-(r)} x_{ir} = 0,
\sum_{(i,j) \in \delta^+(S)} x_{ij} \ge 1 \quad (S \subset V,\ r \in S),   (26)
x \in \{0,1\}^{|A|}.

Constraints (25) can be substituted by the following cardinality constraint:

\sum_{(i,j) \in A} x_{ij} = n - 1.   (27)

SSA-r is an easy problem and can be solved in O(n^2) time.
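Since MST is easy despite its exponential-size IP formulation, in practice one uses a combinatorial algorithm rather than solving (23) directly. Here is a sketch of the classical greedy (Kruskal-style) approach with a small union-find structure, on made-up edge weights.

```python
def kruskal_mst(n, edges):
    """edges: list of (weight, u, v); returns the list of tree edges (u, v, weight)."""
    parent = list(range(n))

    def find(v):                       # union-find root with path halving
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    tree = []
    for w, u, v in sorted(edges):      # scan edges by nondecreasing weight
        ru, rv = find(u), find(v)
        if ru != rv:                   # adding (u, v) creates no cycle
            parent[ru] = rv
            tree.append((u, v, w))
    return tree

# Made-up instance on 5 vertices
edges = [(4, 0, 1), (2, 0, 2), (5, 1, 2), (7, 1, 3), (3, 2, 3), (6, 3, 4), (8, 2, 4)]
print(kruskal_mst(5, edges))
```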

7.6 Maximum Matching

We are given an undirected graph G = (V,E). A subset of edges M ⊆ E is a matching in G iff all edges in M are pairwise non-incident. We look for a maximum weight matching in G:

\max \sum_{e \in E} w_e x_e
\text{s.t. } \sum_{e \in \delta(i)} x_e \le 1 \quad (i \in V),   (28)
x \in \{0,1\}^{|E|}.

Note that matching is an easy problem, but the formulation above with the degree constraints (28), although valid, does not describe the integral hull of matching incidence vectors. In fact, we need to add the following odd-set inequalities:

\sum_{e \in E(S)} x_e \le \frac{|S| - 1}{2} \quad (S \subseteq V,\ |S| \ge 3 \text{ odd}).   (29)

The formulation with (28) and (29) describes the integral hull of matching incidence vectors. So matching is another easy problem where the constraint matrix is not TU and the number of constraints is exponential.

Note that:
- Maximum Matching and Minimum Vertex Cover form a weak dual pair.
- König's Theorem: strong duality holds if the graph is bipartite; indeed, in that case the constraint matrix is TU.
- The (linear) assignment problem corresponds to a minimum cost perfect matching problem on a bipartite graph.

8 Hard Feasibility Problems

Example: Satisfiability Problem (SAT)
This is an NP-complete feasibility problem. We are given:
- a finite set N = {1,...,n} (aka the literals);

- m pairs of subsets of N, C_i = (C_i^+, C_i^-) (aka the clauses).

An instance is feasible iff the set

\Big\{ x \in B^n \ :\ \sum_{j \in C_i^+} x_j + \sum_{j \in C_i^-} (1 - x_j) \ge 1,\ i = 1,\dots,m \Big\}

is nonempty.

Example: Partition Problem
This is another NP-complete feasibility problem. We are given:
- a finite set N = {1,...,n} of items with weights (a_1, a_2,...,a_n);
- an integer b.

An instance is feasible iff the set

\Big\{ x \in B^n \ :\ \sum_{j=1}^{n} a_j x_j = b \Big\}

is nonempty. The problem is still NP-complete if b = \frac{1}{2} \sum_{j=1}^{n} a_j.

9 The Combinatorial Explosion: Bang!

Assignment, 0-1 Knapsack, Set Covering, and TSP are all combinatorial problems, in the sense that the optimal solution is some subset of a finite set. Thus, in principle, they could be solved by enumeration:
- Assignment: all possible permutations of {1,...,n}, i.e., n!;
- 0-1 KP and Set Covering: all possible subsets of {1,...,n}, i.e., 2^n;
- TSP: all possible Hamiltonian tours, i.e., (n-1)!.

Using complete enumeration we can only hope to solve such problems for very small values of n. So we need something more clever...
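A quick illustration of why complete enumeration collapses: counting the candidate solutions a brute-force scan would have to examine for the three problem classes above as n grows. Already at n = 25 the counts dwarf what any machine can enumerate.

```python
from math import factorial

def candidates(n):
    """Number of solutions a complete enumeration would have to scan."""
    return {
        "assignment (n!)": factorial(n),
        "knapsack / set covering (2^n)": 2 ** n,
        "TSP ((n-1)!)": factorial(n - 1),
    }

for n in (10, 15, 20, 25):
    print(f"n = {n}")
    for name, cnt in candidates(n).items():
        print(f"  {name:32s} {cnt:.3e}")
```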

Chapter 3: Discrete Optimization Integer Programming

Chapter 3: Discrete Optimization Integer Programming Chapter 3: Discrete Optimization Integer Programming Edoardo Amaldi DEIB Politecnico di Milano edoardo.amaldi@polimi.it Sito web: http://home.deib.polimi.it/amaldi/ott-13-14.shtml A.A. 2013-14 Edoardo

More information

Chapter 3: Discrete Optimization Integer Programming

Chapter 3: Discrete Optimization Integer Programming Chapter 3: Discrete Optimization Integer Programming Edoardo Amaldi DEIB Politecnico di Milano edoardo.amaldi@polimi.it Website: http://home.deib.polimi.it/amaldi/opt-16-17.shtml Academic year 2016-17

More information

Computational Integer Programming. Lecture 2: Modeling and Formulation. Dr. Ted Ralphs

Computational Integer Programming. Lecture 2: Modeling and Formulation. Dr. Ted Ralphs Computational Integer Programming Lecture 2: Modeling and Formulation Dr. Ted Ralphs Computational MILP Lecture 2 1 Reading for This Lecture N&W Sections I.1.1-I.1.6 Wolsey Chapter 1 CCZ Chapter 2 Computational

More information

Introduction to Mathematical Programming IE406. Lecture 21. Dr. Ted Ralphs

Introduction to Mathematical Programming IE406. Lecture 21. Dr. Ted Ralphs Introduction to Mathematical Programming IE406 Lecture 21 Dr. Ted Ralphs IE406 Lecture 21 1 Reading for This Lecture Bertsimas Sections 10.2, 10.3, 11.1, 11.2 IE406 Lecture 21 2 Branch and Bound Branch

More information

Combinatorial optimization problems

Combinatorial optimization problems Combinatorial optimization problems Heuristic Algorithms Giovanni Righini University of Milan Department of Computer Science (Crema) Optimization In general an optimization problem can be formulated as:

More information

3.7 Cutting plane methods

3.7 Cutting plane methods 3.7 Cutting plane methods Generic ILP problem min{ c t x : x X = {x Z n + : Ax b} } with m n matrix A and n 1 vector b of rationals. According to Meyer s theorem: There exists an ideal formulation: conv(x

More information

Travelling Salesman Problem

Travelling Salesman Problem Travelling Salesman Problem Fabio Furini November 10th, 2014 Travelling Salesman Problem 1 Outline 1 Traveling Salesman Problem Separation Travelling Salesman Problem 2 (Asymmetric) Traveling Salesman

More information

Introduction to Bin Packing Problems

Introduction to Bin Packing Problems Introduction to Bin Packing Problems Fabio Furini March 13, 2015 Outline Origins and applications Applications: Definition: Bin Packing Problem (BPP) Solution techniques for the BPP Heuristic Algorithms

More information

Introduction to Integer Programming

Introduction to Integer Programming Lecture 3/3/2006 p. /27 Introduction to Integer Programming Leo Liberti LIX, École Polytechnique liberti@lix.polytechnique.fr Lecture 3/3/2006 p. 2/27 Contents IP formulations and examples Total unimodularity

More information

Outline. Outline. Outline DMP204 SCHEDULING, TIMETABLING AND ROUTING. 1. Scheduling CPM/PERT Resource Constrained Project Scheduling Model

Outline. Outline. Outline DMP204 SCHEDULING, TIMETABLING AND ROUTING. 1. Scheduling CPM/PERT Resource Constrained Project Scheduling Model Outline DMP204 SCHEDULING, TIMETABLING AND ROUTING Lecture 3 and Mixed Integer Programg Marco Chiarandini 1. Resource Constrained Project Model 2. Mathematical Programg 2 Outline Outline 1. Resource Constrained

More information

New Integer Programming Formulations of the Generalized Travelling Salesman Problem

New Integer Programming Formulations of the Generalized Travelling Salesman Problem American Journal of Applied Sciences 4 (11): 932-937, 2007 ISSN 1546-9239 2007 Science Publications New Integer Programming Formulations of the Generalized Travelling Salesman Problem Petrica C. Pop Department

More information

Integer programming: an introduction. Alessandro Astolfi

Integer programming: an introduction. Alessandro Astolfi Integer programming: an introduction Alessandro Astolfi Outline Introduction Examples Methods for solving ILP Optimization on graphs LP problems with integer solutions Summary Introduction Integer programming

More information

MVE165/MMG630, Applied Optimization Lecture 6 Integer linear programming: models and applications; complexity. Ann-Brith Strömberg

MVE165/MMG630, Applied Optimization Lecture 6 Integer linear programming: models and applications; complexity. Ann-Brith Strömberg MVE165/MMG630, Integer linear programming: models and applications; complexity Ann-Brith Strömberg 2011 04 01 Modelling with integer variables (Ch. 13.1) Variables Linear programming (LP) uses continuous

More information

4/12/2011. Chapter 8. NP and Computational Intractability. Directed Hamiltonian Cycle. Traveling Salesman Problem. Directed Hamiltonian Cycle

4/12/2011. Chapter 8. NP and Computational Intractability. Directed Hamiltonian Cycle. Traveling Salesman Problem. Directed Hamiltonian Cycle Directed Hamiltonian Cycle Chapter 8 NP and Computational Intractability Claim. G has a Hamiltonian cycle iff G' does. Pf. Suppose G has a directed Hamiltonian cycle Γ. Then G' has an undirected Hamiltonian

More information

8.5 Sequencing Problems

8.5 Sequencing Problems 8.5 Sequencing Problems Basic genres. Packing problems: SET-PACKING, INDEPENDENT SET. Covering problems: SET-COVER, VERTEX-COVER. Constraint satisfaction problems: SAT, 3-SAT. Sequencing problems: HAMILTONIAN-CYCLE,

More information

Technische Universität München, Zentrum Mathematik Lehrstuhl für Angewandte Geometrie und Diskrete Mathematik. Combinatorial Optimization (MA 4502)

Technische Universität München, Zentrum Mathematik Lehrstuhl für Angewandte Geometrie und Diskrete Mathematik. Combinatorial Optimization (MA 4502) Technische Universität München, Zentrum Mathematik Lehrstuhl für Angewandte Geometrie und Diskrete Mathematik Combinatorial Optimization (MA 4502) Dr. Michael Ritter Problem Sheet 1 Homework Problems Exercise

More information

Section Notes 8. Integer Programming II. Applied Math 121. Week of April 5, expand your knowledge of big M s and logical constraints.

Section Notes 8. Integer Programming II. Applied Math 121. Week of April 5, expand your knowledge of big M s and logical constraints. Section Notes 8 Integer Programming II Applied Math 121 Week of April 5, 2010 Goals for the week understand IP relaxations be able to determine the relative strength of formulations understand the branch

More information

3.10 Lagrangian relaxation

3.10 Lagrangian relaxation 3.10 Lagrangian relaxation Consider a generic ILP problem min {c t x : Ax b, Dx d, x Z n } with integer coefficients. Suppose Dx d are the complicating constraints. Often the linear relaxation and the

More information

Asymmetric Traveling Salesman Problem (ATSP): Models

Asymmetric Traveling Salesman Problem (ATSP): Models Asymmetric Traveling Salesman Problem (ATSP): Models Given a DIRECTED GRAPH G = (V,A) with V = {,, n} verte set A = {(i, j) : i V, j V} arc set (complete digraph) c ij = cost associated with arc (i, j)

More information

IE418 Integer Programming

IE418 Integer Programming IE418: Integer Programming Department of Industrial and Systems Engineering Lehigh University 23rd February 2005 The Ingredients Some Easy Problems The Hard Problems Computational Complexity The ingredients

More information

Maximum Flow Problem (Ford and Fulkerson, 1956)

Maximum Flow Problem (Ford and Fulkerson, 1956) Maximum Flow Problem (Ford and Fulkerson, 196) In this problem we find the maximum flow possible in a directed connected network with arc capacities. There is unlimited quantity available in the given

More information

CS675: Convex and Combinatorial Optimization Fall 2014 Combinatorial Problems as Linear Programs. Instructor: Shaddin Dughmi

CS675: Convex and Combinatorial Optimization Fall 2014 Combinatorial Problems as Linear Programs. Instructor: Shaddin Dughmi CS675: Convex and Combinatorial Optimization Fall 2014 Combinatorial Problems as Linear Programs Instructor: Shaddin Dughmi Outline 1 Introduction 2 Shortest Path 3 Algorithms for Single-Source Shortest

More information

15.081J/6.251J Introduction to Mathematical Programming. Lecture 24: Discrete Optimization

15.081J/6.251J Introduction to Mathematical Programming. Lecture 24: Discrete Optimization 15.081J/6.251J Introduction to Mathematical Programming Lecture 24: Discrete Optimization 1 Outline Modeling with integer variables Slide 1 What is a good formulation? Theme: The Power of Formulations

More information

Week Cuts, Branch & Bound, and Lagrangean Relaxation

Week Cuts, Branch & Bound, and Lagrangean Relaxation Week 11 1 Integer Linear Programming This week we will discuss solution methods for solving integer linear programming problems. I will skip the part on complexity theory, Section 11.8, although this is

More information

Preliminaries and Complexity Theory

Preliminaries and Complexity Theory Preliminaries and Complexity Theory Oleksandr Romanko CAS 746 - Advanced Topics in Combinatorial Optimization McMaster University, January 16, 2006 Introduction Book structure: 2 Part I Linear Algebra

More information

Discrete (and Continuous) Optimization WI4 131

Discrete (and Continuous) Optimization WI4 131 Discrete (and Continuous) Optimization WI4 131 Kees Roos Technische Universiteit Delft Faculteit Electrotechniek, Wiskunde en Informatica Afdeling Informatie, Systemen en Algoritmiek e-mail: C.Roos@ewi.tudelft.nl

More information

Resource Constrained Project Scheduling Linear and Integer Programming (1)

Resource Constrained Project Scheduling Linear and Integer Programming (1) DM204, 2010 SCHEDULING, TIMETABLING AND ROUTING Lecture 3 Resource Constrained Project Linear and Integer Programming (1) Marco Chiarandini Department of Mathematics & Computer Science University of Southern

More information

Modelling linear and linear integer optimization problems An introduction

Modelling linear and linear integer optimization problems An introduction Modelling linear and linear integer optimization problems An introduction Karen Aardal October 5, 2015 In optimization, developing and analyzing models are key activities. Designing a model is a skill

More information

Discrete Optimization in a Nutshell

Discrete Optimization in a Nutshell Discrete Optimization in a Nutshell Integer Optimization Christoph Helmberg : [,] Contents Integer Optimization 2: 2 [2,2] Contents Integer Optimization. Bipartite Matching.2 Integral Polyhedra ( and directed

More information

The traveling salesman problem

The traveling salesman problem Chapter 58 The traveling salesman problem The traveling salesman problem (TSP) asks for a shortest Hamiltonian circuit in a graph. It belongs to the most seductive problems in combinatorial optimization,

More information

Lecture 8: Column Generation

Lecture 8: Column Generation Lecture 8: Column Generation (3 units) Outline Cutting stock problem Classical IP formulation Set covering formulation Column generation A dual perspective Vehicle routing problem 1 / 33 Cutting stock

More information

Introduction to Integer Linear Programming

Introduction to Integer Linear Programming Lecture 7/12/2006 p. 1/30 Introduction to Integer Linear Programming Leo Liberti, Ruslan Sadykov LIX, École Polytechnique liberti@lix.polytechnique.fr sadykov@lix.polytechnique.fr Lecture 7/12/2006 p.

More information

3. Linear Programming and Polyhedral Combinatorics

3. Linear Programming and Polyhedral Combinatorics Massachusetts Institute of Technology 18.453: Combinatorial Optimization Michel X. Goemans April 5, 2017 3. Linear Programming and Polyhedral Combinatorics Summary of what was seen in the introductory

More information

CS675: Convex and Combinatorial Optimization Fall 2016 Combinatorial Problems as Linear and Convex Programs. Instructor: Shaddin Dughmi

CS675: Convex and Combinatorial Optimization Fall 2016 Combinatorial Problems as Linear and Convex Programs. Instructor: Shaddin Dughmi CS675: Convex and Combinatorial Optimization Fall 2016 Combinatorial Problems as Linear and Convex Programs Instructor: Shaddin Dughmi Outline 1 Introduction 2 Shortest Path 3 Algorithms for Single-Source

More information

CS 301: Complexity of Algorithms (Term I 2008) Alex Tiskin Harald Räcke. Hamiltonian Cycle. 8.5 Sequencing Problems. Directed Hamiltonian Cycle

CS 301: Complexity of Algorithms (Term I 2008) Alex Tiskin Harald Räcke. Hamiltonian Cycle. 8.5 Sequencing Problems. Directed Hamiltonian Cycle 8.5 Sequencing Problems Basic genres. Packing problems: SET-PACKING, INDEPENDENT SET. Covering problems: SET-COVER, VERTEX-COVER. Constraint satisfaction problems: SAT, 3-SAT. Sequencing problems: HAMILTONIAN-CYCLE,

More information

P P P NP-Hard: L is NP-hard if for all L NP, L L. Thus, if we could solve L in polynomial. Cook's Theorem and Reductions

P P P NP-Hard: L is NP-hard if for all L NP, L L. Thus, if we could solve L in polynomial. Cook's Theorem and Reductions Summary of the previous lecture Recall that we mentioned the following topics: P: is the set of decision problems (or languages) that are solvable in polynomial time. NP: is the set of decision problems

More information

Discrete Optimization 23

Discrete Optimization 23 Discrete Optimization 23 2 Total Unimodularity (TU) and Its Applications In this section we will discuss the total unimodularity theory and its applications to flows in networks. 2.1 Total Unimodularity:

More information

Introduction to optimization and operations research

Introduction to optimization and operations research Introduction to optimization and operations research David Pisinger, Fall 2002 1 Smoked ham (Chvatal 1.6, adapted from Greene et al. (1957)) A meat packing plant produces 480 hams, 400 pork bellies, and

More information

NP and Computational Intractability

NP and Computational Intractability NP and Computational Intractability 1 Polynomial-Time Reduction Desiderata'. Suppose we could solve X in polynomial-time. What else could we solve in polynomial time? don't confuse with reduces from Reduction.

More information

3.7 Strong valid inequalities for structured ILP problems

3.7 Strong valid inequalities for structured ILP problems 3.7 Strong valid inequalities for structured ILP problems By studying the problem structure, we can derive strong valid inequalities yielding better approximations of conv(x ) and hence tighter bounds.

More information

Easy Problems vs. Hard Problems. CSE 421 Introduction to Algorithms Winter Is P a good definition of efficient? The class P

Easy Problems vs. Hard Problems. CSE 421 Introduction to Algorithms Winter Is P a good definition of efficient? The class P Easy Problems vs. Hard Problems CSE 421 Introduction to Algorithms Winter 2000 NP-Completeness (Chapter 11) Easy - problems whose worst case running time is bounded by some polynomial in the size of the

More information

3.3 Easy ILP problems and totally unimodular matrices

3.3 Easy ILP problems and totally unimodular matrices 3.3 Easy ILP problems and totally unimodular matrices Consider a generic ILP problem expressed in standard form where A Z m n with n m, and b Z m. min{c t x : Ax = b, x Z n +} (1) P(b) = {x R n : Ax =

More information

What is an integer program? Modelling with Integer Variables. Mixed Integer Program. Let us start with a linear program: max cx s.t.

What is an integer program? Modelling with Integer Variables. Mixed Integer Program. Let us start with a linear program: max cx s.t. Modelling with Integer Variables jesla@mandtudk Department of Management Engineering Technical University of Denmark What is an integer program? Let us start with a linear program: st Ax b x 0 where A

More information

3. Linear Programming and Polyhedral Combinatorics

3. Linear Programming and Polyhedral Combinatorics Massachusetts Institute of Technology 18.433: Combinatorial Optimization Michel X. Goemans February 28th, 2013 3. Linear Programming and Polyhedral Combinatorics Summary of what was seen in the introductory

More information

Integer Programming ISE 418. Lecture 8. Dr. Ted Ralphs

Integer Programming ISE 418. Lecture 8. Dr. Ted Ralphs Integer Programming ISE 418 Lecture 8 Dr. Ted Ralphs ISE 418 Lecture 8 1 Reading for This Lecture Wolsey Chapter 2 Nemhauser and Wolsey Sections II.3.1, II.3.6, II.4.1, II.4.2, II.5.4 Duality for Mixed-Integer

More information

Representations of All Solutions of Boolean Programming Problems

Representations of All Solutions of Boolean Programming Problems Representations of All Solutions of Boolean Programming Problems Utz-Uwe Haus and Carla Michini Institute for Operations Research Department of Mathematics ETH Zurich Rämistr. 101, 8092 Zürich, Switzerland

More information

Separating Simple Domino Parity Inequalities

Separating Simple Domino Parity Inequalities Separating Simple Domino Parity Inequalities Lisa Fleischer Adam Letchford Andrea Lodi DRAFT: IPCO submission Abstract In IPCO 2002, Letchford and Lodi describe an algorithm for separating simple comb

More information

Linear and Integer Programming - ideas

Linear and Integer Programming - ideas Linear and Integer Programming - ideas Paweł Zieliński Institute of Mathematics and Computer Science, Wrocław University of Technology, Poland http://www.im.pwr.wroc.pl/ pziel/ Toulouse, France 2012 Literature

More information

Algorithms and Theory of Computation. Lecture 22: NP-Completeness (2)

Algorithms and Theory of Computation. Lecture 22: NP-Completeness (2) Algorithms and Theory of Computation Lecture 22: NP-Completeness (2) Xiaohui Bei MAS 714 November 8, 2018 Nanyang Technological University MAS 714 November 8, 2018 1 / 20 Set Cover Set Cover Input: a set

More information

The Maximum Flow Problem with Disjunctive Constraints

The Maximum Flow Problem with Disjunctive Constraints The Maximum Flow Problem with Disjunctive Constraints Ulrich Pferschy Joachim Schauer Abstract We study the maximum flow problem subject to binary disjunctive constraints in a directed graph: A negative

More information

Relaxations and Bounds. 6.1 Optimality and Relaxations. Suppose that we are given an IP. z = max c T x : x X,

Relaxations and Bounds. 6.1 Optimality and Relaxations. Suppose that we are given an IP. z = max c T x : x X, 6 Relaxations and Bounds 6.1 Optimality and Relaxations Suppose that we are given an IP z = max c T x : x X, where X = x : Ax b,x Z n and a vector x X which is a candidate for optimality. Is there a way

More information

Polynomial-Time Reductions

Polynomial-Time Reductions Reductions 1 Polynomial-Time Reductions Classify Problems According to Computational Requirements Q. Which problems will we be able to solve in practice? A working definition. [von Neumann 1953, Godel

More information

Totally unimodular matrices. Introduction to integer programming III: Network Flow, Interval Scheduling, and Vehicle Routing Problems

Totally unimodular matrices. Introduction to integer programming III: Network Flow, Interval Scheduling, and Vehicle Routing Problems Totally unimodular matrices Introduction to integer programming III: Network Flow, Interval Scheduling, and Vehicle Routing Problems Martin Branda Charles University in Prague Faculty of Mathematics and

More information

NP-Completeness. NP-Completeness 1

NP-Completeness. NP-Completeness 1 NP-Completeness Reference: Computers and Intractability: A Guide to the Theory of NP-Completeness by Garey and Johnson, W.H. Freeman and Company, 1979. NP-Completeness 1 General Problems, Input Size and

More information

MODELING (Integer Programming Examples)

MODELING (Integer Programming Examples) MODELING (Integer Programming Eamples) IE 400 Principles of Engineering Management Integer Programming: Set 5 Integer Programming: So far, we have considered problems under the following assumptions:

More information

NP-Completeness. Andreas Klappenecker. [based on slides by Prof. Welch]

NP-Completeness. Andreas Klappenecker. [based on slides by Prof. Welch] NP-Completeness Andreas Klappenecker [based on slides by Prof. Welch] 1 Prelude: Informal Discussion (Incidentally, we will never get very formal in this course) 2 Polynomial Time Algorithms Most of the

More information

Design and Analysis of Algorithms

Design and Analysis of Algorithms Design and Analysis of Algorithms CSE 5311 Lecture 25 NP Completeness Junzhou Huang, Ph.D. Department of Computer Science and Engineering CSE5311 Design and Analysis of Algorithms 1 NP-Completeness Some

More information

Scheduling and Optimization Course (MPRI)

Scheduling and Optimization Course (MPRI) MPRI Scheduling and optimization: lecture p. /6 Scheduling and Optimization Course (MPRI) Leo Liberti LIX, École Polytechnique, France MPRI Scheduling and optimization: lecture p. /6 Teachers Christoph

More information

Advanced Linear Programming: The Exercises

Advanced Linear Programming: The Exercises Advanced Linear Programming: The Exercises The answers are sometimes not written out completely. 1.5 a) min c T x + d T y Ax + By b y = x (1) First reformulation, using z smallest number satisfying x z

More information

Introduction 1.1 PROBLEM FORMULATION

Introduction 1.1 PROBLEM FORMULATION Introduction. PROBLEM FORMULATION This book deals with a single type of network optimization problem with linear cost, known as the transshipment or minimum cost flow problem. In this section, we formulate

More information

Vehicle Routing and MIP

Vehicle Routing and MIP CORE, Université Catholique de Louvain 5th Porto Meeting on Mathematics for Industry, 11th April 2014 Contents: The Capacitated Vehicle Routing Problem Subproblems: Trees and the TSP CVRP Cutting Planes

More information

Integer Linear Programming Modeling

Integer Linear Programming Modeling DM554/DM545 Linear and Lecture 9 Integer Linear Programming Marco Chiarandini Department of Mathematics & Computer Science University of Southern Denmark Outline 1. 2. Assignment Problem Knapsack Problem

More information

Bounds on the Traveling Salesman Problem

Bounds on the Traveling Salesman Problem Bounds on the Traveling Salesman Problem Sean Zachary Roberson Texas A&M University MATH 613, Graph Theory A common routing problem is as follows: given a collection of stops (for example, towns, stations,

More information

8.5 Sequencing Problems. Chapter 8. NP and Computational Intractability. Hamiltonian Cycle. Hamiltonian Cycle

8.5 Sequencing Problems. Chapter 8. NP and Computational Intractability. Hamiltonian Cycle. Hamiltonian Cycle Chapter 8 NP and Computational Intractability 8.5 Sequencing Problems Basic genres. Packing problems: SET-PACKING, INDEPENDENT SET. Covering problems: SET-COVER, VERTEX-COVER. Constraint satisfaction problems:

More information

Show that the following problems are NP-complete

Show that the following problems are NP-complete Show that the following problems are NP-complete April 7, 2018 Below is a list of 30 exercises in which you are asked to prove that some problem is NP-complete. The goal is to better understand the theory

More information

arxiv: v1 [cs.ds] 26 Feb 2016

arxiv: v1 [cs.ds] 26 Feb 2016 On the computational complexity of minimum-concave-cost flow in a two-dimensional grid Shabbir Ahmed, Qie He, Shi Li, George L. Nemhauser arxiv:1602.08515v1 [cs.ds] 26 Feb 2016 Abstract We study the minimum-concave-cost

More information

NP Completeness and Approximation Algorithms

NP Completeness and Approximation Algorithms Chapter 10 NP Completeness and Approximation Algorithms Let C() be a class of problems defined by some property. We are interested in characterizing the hardest problems in the class, so that if we can

More information

Algorithms Design & Analysis. Approximation Algorithm

Algorithms Design & Analysis. Approximation Algorithm Algorithms Design & Analysis Approximation Algorithm Recap External memory model Merge sort Distribution sort 2 Today s Topics Hard problem Approximation algorithms Metric traveling salesman problem A

More information

8. INTRACTABILITY I. Lecture slides by Kevin Wayne Copyright 2005 Pearson-Addison Wesley. Last updated on 2/6/18 2:16 AM

8. INTRACTABILITY I. Lecture slides by Kevin Wayne Copyright 2005 Pearson-Addison Wesley. Last updated on 2/6/18 2:16 AM 8. INTRACTABILITY I poly-time reductions packing and covering problems constraint satisfaction problems sequencing problems partitioning problems graph coloring numerical problems Lecture slides by Kevin

More information

Chapter 8. NP and Computational Intractability

Chapter 8. NP and Computational Intractability Chapter 8 NP and Computational Intractability Slides by Kevin Wayne. Copyright 2005 Pearson-Addison Wesley. All rights reserved. Acknowledgement: This lecture slide is revised and authorized from Prof.

More information

NP-completeness. Chapter 34. Sergey Bereg

NP-completeness. Chapter 34. Sergey Bereg NP-completeness Chapter 34 Sergey Bereg Oct 2017 Examples Some problems admit polynomial time algorithms, i.e. O(n k ) running time where n is the input size. We will study a class of NP-complete problems

More information

Section Notes 9. IP: Cutting Planes. Applied Math 121. Week of April 12, 2010

Section Notes 9. IP: Cutting Planes. Applied Math 121. Week of April 12, 2010 Section Notes 9 IP: Cutting Planes Applied Math 121 Week of April 12, 2010 Goals for the week understand what a strong formulations is. be familiar with the cutting planes algorithm and the types of cuts

More information

Discrete Optimization 2010 Lecture 8 Lagrangian Relaxation / P, N P and co-n P

Discrete Optimization 2010 Lecture 8 Lagrangian Relaxation / P, N P and co-n P Discrete Optimization 2010 Lecture 8 Lagrangian Relaxation / P, N P and co-n P Marc Uetz University of Twente m.uetz@utwente.nl Lecture 8: sheet 1 / 32 Marc Uetz Discrete Optimization Outline 1 Lagrangian

More information

Introduction to integer programming III:

Introduction to integer programming III: Introduction to integer programming III: Network Flow, Interval Scheduling, and Vehicle Routing Problems Martin Branda Charles University in Prague Faculty of Mathematics and Physics Department of Probability

More information

21. Set cover and TSP

21. Set cover and TSP CS/ECE/ISyE 524 Introduction to Optimization Spring 2017 18 21. Set cover and TSP ˆ Set covering ˆ Cutting problems and column generation ˆ Traveling salesman problem Laurent Lessard (www.laurentlessard.com)

More information

Part III: Traveling salesman problems

Part III: Traveling salesman problems Transportation Logistics Part III: Traveling salesman problems c R.F. Hartl, S.N. Parragh 1/282 Motivation Motivation Why do we study the TSP? c R.F. Hartl, S.N. Parragh 2/282 Motivation Motivation Why

More information

Optimisation and Operations Research

Optimisation and Operations Research Optimisation and Operations Research Lecture 11: Integer Programming Matthew Roughan http://www.maths.adelaide.edu.au/matthew.roughan/ Lecture_notes/OORII/ School of Mathematical

More information

Week 4. (1) 0 f ij u ij.

Week 4. (1) 0 f ij u ij. Week 4 1 Network Flow Chapter 7 of the book is about optimisation problems on networks. Section 7.1 gives a quick introduction to the definitions of graph theory. In fact I hope these are already known

More information

NP Completeness and Approximation Algorithms

NP Completeness and Approximation Algorithms Winter School on Optimization Techniques December 15-20, 2016 Organized by ACMU, ISI and IEEE CEDA NP Completeness and Approximation Algorithms Susmita Sur-Kolay Advanced Computing and Microelectronic

More information

Computational Complexity. IE 496 Lecture 6. Dr. Ted Ralphs

Computational Complexity. IE 496 Lecture 6. Dr. Ted Ralphs Computational Complexity IE 496 Lecture 6 Dr. Ted Ralphs IE496 Lecture 6 1 Reading for This Lecture N&W Sections I.5.1 and I.5.2 Wolsey Chapter 6 Kozen Lectures 21-25 IE496 Lecture 6 2 Introduction to

More information

CS 583: Algorithms. NP Completeness Ch 34. Intractability

CS 583: Algorithms. NP Completeness Ch 34. Intractability CS 583: Algorithms NP Completeness Ch 34 Intractability Some problems are intractable: as they grow large, we are unable to solve them in reasonable time What constitutes reasonable time? Standard working

More information

Compact Formulations of the Steiner Traveling Salesman Problem and Related Problems

Compact Formulations of the Steiner Traveling Salesman Problem and Related Problems Compact Formulations of the Steiner Traveling Salesman Problem and Related Problems Adam N. Letchford Saeideh D. Nasiri Dirk Oliver Theis March 2012 Abstract The Steiner Traveling Salesman Problem (STSP)

More information

Topics in Complexity Theory

Topics in Complexity Theory Topics in Complexity Theory Announcements Final exam this Friday from 12:15PM-3:15PM Please let us know immediately after lecture if you want to take the final at an alternate time and haven't yet told

More information

Discrete Optimization 2010 Lecture 10 P, N P, and N PCompleteness

Discrete Optimization 2010 Lecture 10 P, N P, and N PCompleteness Discrete Optimization 2010 Lecture 10 P, N P, and N PCompleteness Marc Uetz University of Twente m.uetz@utwente.nl Lecture 9: sheet 1 / 31 Marc Uetz Discrete Optimization Outline 1 N P and co-n P 2 N P-completeness

More information

Lecture 18: More NP-Complete Problems

Lecture 18: More NP-Complete Problems 6.045 Lecture 18: More NP-Complete Problems 1 The Clique Problem a d f c b e g Given a graph G and positive k, does G contain a complete subgraph on k nodes? CLIQUE = { (G,k) G is an undirected graph with

More information

- Well-characterized problems, min-max relations, approximate certificates. - LP problems in the standard form, primal and dual linear programs

- Well-characterized problems, min-max relations, approximate certificates. - LP problems in the standard form, primal and dual linear programs LP-Duality ( Approximation Algorithms by V. Vazirani, Chapter 12) - Well-characterized problems, min-max relations, approximate certificates - LP problems in the standard form, primal and dual linear programs

More information

Chapter 3: Proving NP-completeness Results

Chapter 3: Proving NP-completeness Results Chapter 3: Proving NP-completeness Results Six Basic NP-Complete Problems Some Techniques for Proving NP-Completeness Some Suggested Exercises 1.1 Six Basic NP-Complete Problems 3-SATISFIABILITY (3SAT)

More information

More on NP and Reductions

More on NP and Reductions Indian Institute of Information Technology Design and Manufacturing, Kancheepuram Chennai 600 127, India An Autonomous Institute under MHRD, Govt of India http://www.iiitdm.ac.in COM 501 Advanced Data

More information

Integer Programming, Part 1

Integer Programming, Part 1 Integer Programming, Part 1 Rudi Pendavingh Technische Universiteit Eindhoven May 18, 2016 Rudi Pendavingh (TU/e) Integer Programming, Part 1 May 18, 2016 1 / 37 Linear Inequalities and Polyhedra Farkas

More information

Exercises NP-completeness

Exercises NP-completeness Exercises NP-completeness Exercise 1 Knapsack problem Consider the Knapsack problem. We have n items, each with weight a j (j = 1,..., n) and value c j (j = 1,..., n) and an integer B. All a j and c j

More information

Decomposition and Reformulation in Integer Programming

Decomposition and Reformulation in Integer Programming and Reformulation in Integer Programming Laurence A. WOLSEY 7/1/2008 / Aussois and Reformulation in Integer Programming Outline 1 Resource 2 and Reformulation in Integer Programming Outline Resource 1

More information

Chapter 11. Approximation Algorithms. Slides by Kevin Wayne Pearson-Addison Wesley. All rights reserved.

Chapter 11. Approximation Algorithms. Slides by Kevin Wayne Pearson-Addison Wesley. All rights reserved. Chapter 11 Approximation Algorithms Slides by Kevin Wayne. Copyright @ 2005 Pearson-Addison Wesley. All rights reserved. 1 P and NP P: The family of problems that can be solved quickly in polynomial time.

More information

3.10 Column generation method

3.10 Column generation method 3.10 Column generation method Many relevant decision-making problems can be formulated as ILP problems with a very large (exponential) number of variables. Examples: cutting stock, crew scheduling, vehicle

More information

Optimization Exercise Set n. 4 :

Optimization Exercise Set n. 4 : Optimization Exercise Set n. 4 : Prepared by S. Coniglio and E. Amaldi translated by O. Jabali 2018/2019 1 4.1 Airport location In air transportation, usually there is not a direct connection between every

More information

ECS122A Handout on NP-Completeness March 12, 2018

ECS122A Handout on NP-Completeness March 12, 2018 ECS122A Handout on NP-Completeness March 12, 2018 Contents: I. Introduction II. P and NP III. NP-complete IV. How to prove a problem is NP-complete V. How to solve a NP-complete problem: approximate algorithms

More information

Summer School on Introduction to Algorithms and Optimization Techniques July 4-12, 2017 Organized by ACMU, ISI and IEEE CEDA.

Summer School on Introduction to Algorithms and Optimization Techniques July 4-12, 2017 Organized by ACMU, ISI and IEEE CEDA. Summer School on Introduction to Algorithms and Optimization Techniques July 4-12, 2017 Organized by ACMU, ISI and IEEE CEDA NP Completeness Susmita Sur-Kolay Advanced Computing and Microelectronics Unit

More information

7.5 Bipartite Matching

7.5 Bipartite Matching 7. Bipartite Matching Matching Matching. Input: undirected graph G = (V, E). M E is a matching if each node appears in at most edge in M. Max matching: find a max cardinality matching. Bipartite Matching

More information

Chapter 8. NP and Computational Intractability. Slides by Kevin Wayne. Copyright 2005 Pearson-Addison Wesley. All rights reserved.

Chapter 8. NP and Computational Intractability. Slides by Kevin Wayne. Copyright 2005 Pearson-Addison Wesley. All rights reserved. Chapter 8 NP and Computational Intractability Slides by Kevin Wayne. Copyright 2005 Pearson-Addison Wesley. All rights reserved. 1 8.5 Sequencing Problems Basic genres.! Packing problems: SET-PACKING,

More information

Data Structures in Java

Data Structures in Java Data Structures in Java Lecture 21: Introduction to NP-Completeness 12/9/2015 Daniel Bauer Algorithms and Problem Solving Purpose of algorithms: find solutions to problems. Data Structures provide ways

More information