
Discrete (and Continuous) Optimization WI4 131

Kees Roos
Technische Universiteit Delft
Faculteit Elektrotechniek, Wiskunde en Informatica
Afdeling Informatie, Systemen en Algoritmiek
e-mail: C.Roos@ewi.tudelft.nl
URL: http://www.isa.ewi.tudelft.nl/~roos

November-December, A.D. 2004

Course Schedule

1. Formulations (18 pages)
2. Optimality, Relaxation, and Bounds (10 pages)
3. Well-solved Problems (13 pages)
4. Matching and Assignments (10 pages)
5. Dynamic Programming (11 pages)
6. Complexity and Problem Reduction (8 pages)
7. Branch and Bound (17 pages)
8. Cutting Plane Algorithms (21 pages)
9. Strong Valid Inequalities (22 pages)
10. Lagrangian Duality (14 pages)
11. Column Generation Algorithms (16 pages)
12. Heuristic Algorithms (15 pages)
13. From Theory to Solutions (20 pages)

Chapter 2: Optimality, Relaxation, and Bounds

Optimality and Relaxation

(IP)  z = max {c(x) : x ∈ X ⊆ Zⁿ}

The basic idea underlying methods for solving (IP) is to find a lower bound z̲ ≤ z and an upper bound z̄ ≥ z such that z̲ = z̄ = z. Practically, this means that any algorithm will find a decreasing sequence z̄_1 > z̄_2 > ... > z̄_s ≥ z of upper bounds, and an increasing sequence z̲_1 < z̲_2 < ... < z̲_t ≤ z of lower bounds. The stopping criterion in general takes the form z̄_s − z̲_t ≤ ε, where ε is some suitably chosen small nonnegative number.
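As a minimal illustration of this scheme, the control loop below treats the bound-improving steps as hypothetical stand-ins (improve_upper for a relaxation-based dual bound, improve_lower for a heuristic primal bound); it is a sketch of the stopping criterion only, not a complete algorithm.

```python
# Minimal sketch of the bounding loop; improve_upper and improve_lower are
# hypothetical callbacks that return a new upper / lower bound on z.
def bounding_loop(improve_upper, improve_lower, z_up, z_low, eps=1e-6):
    while z_up - z_low > eps:               # stopping criterion: gap <= eps
        z_up = min(z_up, improve_upper())   # tighter dual (upper) bound
        z_low = max(z_low, improve_lower()) # better primal (lower) bound
    return z_low, z_up                      # z is certified to lie in between
```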

How to obtain Bounds?

Every feasible solution x ∈ X provides a lower bound z̲ = c(x) ≤ z. This is essentially the only way to obtain lower bounds. For some IPs it is easy to find a feasible solution (e.g. Assignment, TSP, Knapsack), but for other IPs finding a feasible solution may be very difficult. The most important approach for finding upper bounds is by relaxation: the given IP is replaced by a simpler problem whose optimal value is at least as large as z. There are two obvious ways to get a relaxation: (i) enlarge the feasible set; (ii) replace the objective function by a function that has the same or a larger value everywhere.

Definition 1 The problem (RP) z_R = max {f(x) : x ∈ T ⊆ Zⁿ} is a relaxation of (IP) if X ⊆ T and f(x) ≥ c(x) for all x ∈ X.

Proposition 1 If (RP) is a relaxation of (IP), then z_R ≥ z.

Proof: If x* is optimal for (IP), then x* ∈ X ⊆ T and z = c(x*) ≤ f(x*). As x* ∈ T, f(x*) is a lower bound for z_R, so it follows that z ≤ f(x*) ≤ z_R.

Linear Relaxations

Definition 2 For the IP max {cᵀx : x ∈ X = P ∩ Zⁿ} with formulation P, a linear relaxation is the LO problem z_LP = max {cᵀx : x ∈ P}.

Recall that P = {x ∈ Rⁿ₊ : Ax ≤ b}. As P ∩ Zⁿ ⊆ P and the objective is unchanged, this is clearly a relaxation. Not surprisingly, better formulations give tighter (upper) bounds.

Proposition 2 If P_1 and P_2 are two formulations for the feasible set X in an IP, and P_1 ⊆ P_2, then the respective upper bounds z_LP^i (i = 1, 2) satisfy z_LP^1 ≤ z_LP^2.

Sometimes the relaxation (RP) immediately solves the IP.

Proposition 3 If the relaxation (RP) is infeasible, then so is (IP). On the other hand, if (RP) has an optimal solution x* with x* ∈ X and c(x*) = f(x*), then x* is an optimal solution of (IP).

Proof: If (RP) is infeasible then T = ∅. Since X ⊆ T, also X = ∅. For the second part of the proposition: as x* ∈ X, we have z ≥ c(x*) = f(x*) = z_R; combined with z ≤ z_R we get c(x*) = z.

Example

z = max {4x − y : 7x − 2y ≤ 14, y ≤ 3, 2x − 2y ≤ 3, x, y ≥ 0, x, y integer}

[Figure: the integer feasible points and the feasible region of the LP relaxation, with the objective direction c.]

The figure makes clear that x = 2, y = 1 is the optimal solution, with objective value 7. The optimal solution of the LP relaxation is x = 20/7, y = 3, with objective value 59/7 ≈ 8.43. This is an upper bound. Since every integer feasible solution has an integer objective value, this bound may be rounded down to ⌊59/7⌋ = 8 as the linear relaxation bound.
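As a quick check, the LP relaxation of this example can be solved with scipy (a sketch; linprog minimizes, so the objective is negated):

```python
# Sketch: solving the LP relaxation of the example above with scipy.
from scipy.optimize import linprog

c = [-4, 1]                        # maximize 4x - y  <=>  minimize -4x + y
A = [[7, -2], [0, 1], [2, -2]]     # 7x - 2y <= 14, y <= 3, 2x - 2y <= 3
b = [14, 3, 3]

res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2)
print(res.x, -res.fun)             # approx. [2.857, 3.0] and 59/7 = 8.428...
```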

Combinatorial Relaxations

Whenever the relaxation is a combinatorial optimization problem, we speak of a combinatorial relaxation. Below follow some examples.

TSP: The TSP, on a digraph D = (V, A), amounts to finding a (salesman, or Hamiltonian) tour T with minimal length in terms of given arc weights c_ij, (i, j) ∈ A. We have seen that a tour is an assignment without subtours. So we have

z_TSP = min {Σ_{(i,j)∈T} c_ij : T is a tour} ≥ min {Σ_{(i,j)∈T} c_ij : T is an assignment}.

Symmetric TSP: The STSP, on a graph G = (V, E), amounts to finding a tour T with minimal length in terms of given edge weights c_e, e ∈ E.

Definition 3 A 1-tree is a subgraph consisting of two edges adjacent to node 1, plus the edges of a tree on the remaining nodes {2, ..., n}.

Observe that a tour consists of two edges adjacent to node 1, plus the edges of a path through the remaining nodes. Since a path is a special case of a tree, we have

z_STSP = min {Σ_{e∈T} c_e : T is a tour} ≥ min {Σ_{e∈T} c_e : T is a 1-tree}.
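The 1-tree bound is cheap to compute. The sketch below does so with networkx, assuming a weighted nx.Graph whose nodes are labelled 1, ..., n; it can be applied, for instance, to the STSP instance that appears later in this chapter.

```python
# Sketch of the 1-tree lower bound for the symmetric TSP, using networkx.
import networkx as nx

def one_tree_bound(G):
    """Weight of a minimum 1-tree: a minimum spanning tree on the nodes
    other than node 1, plus the two cheapest edges incident to node 1."""
    H = G.subgraph(v for v in G if v != 1)
    mst = nx.minimum_spanning_tree(H, weight="weight")
    two_cheapest = sorted(d["weight"] for _, _, d in G.edges(1, data=True))[:2]
    return mst.size(weight="weight") + sum(two_cheapest)
```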

Combinatorial Relaxations (cont.)

Quadratic 0-1 Problem: This is the (in general hard!) problem of maximizing a quadratic function over the unit cube:

z = max {Σ_{1≤i<j≤n} q_ij x_i x_j + Σ_{1≤i≤n} p_i x_i : x ∈ {0,1}ⁿ}.

Replacing all terms q_ij x_i x_j with q_ij < 0 by 0, the objective function does not decrease. So we have the relaxation

z_R = max {Σ_{1≤i<j≤n} max{q_ij, 0} x_i x_j + Σ_{1≤i≤n} p_i x_i : x ∈ {0,1}ⁿ}.

This problem is also a quadratic 0-1 problem, but now the quadratic terms have nonnegative coefficients. Such a problem can be solved by solving a series of (easy!) maximum flow problems (see Chapter 9).

Knapsack Problem: The set underlying this problem is X = {x ∈ Zⁿ₊ : Σ_{j=1}ⁿ a_j x_j ≤ b}. This set can be extended to X_R = {x ∈ Zⁿ₊ : Σ_{j=1}ⁿ ⌊a_j⌋ x_j ≤ ⌊b⌋}, where ⌊a⌋ denotes the largest integer less than or equal to a.
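Since X ⊆ X_R, optimizing over X_R gives an upper bound. The brute-force sketch below illustrates this on a tiny made-up instance (the data and the cap of 3 on each x_j, needed to make the enumeration finite, are assumptions for illustration only):

```python
# Sketch: the rounded knapsack set X_R contains X, so its optimum bounds z.
from itertools import product
from math import floor

def knapsack_opt(c, a, b, ub=3):
    # best value of c.x over integer x with 0 <= x_j <= ub and a.x <= b
    return max(sum(ci * xi for ci, xi in zip(c, x))
               for x in product(range(ub + 1), repeat=len(c))
               if sum(ai * xi for ai, xi in zip(a, x)) <= b)

c, a, b = [5, 4], [2.5, 3.7], 7.2
z  = knapsack_opt(c, a, b)                               # optimum over X
zR = knapsack_opt(c, [floor(ai) for ai in a], floor(b))  # optimum over X_R
print(z, zR)                                             # 10 15, so zR >= z
```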

Lagrangian Relaxations

Consider the IP

(IP)  z = max {cᵀx : Ax ≤ b, x ∈ X ⊆ Zⁿ}.

If this problem is too difficult to solve directly, one possible way to proceed is to drop the constraint Ax ≤ b. This enlarges the feasible region and so yields a relaxation of (IP). An important extension of this idea is not just to drop complicating constraints, but to add them to the objective function with Lagrange multipliers.

Proposition 4 Suppose that (IP) has an optimal solution, and let u ∈ Rᵐ. Define z(u) = max {cᵀx + uᵀ(b − Ax) : x ∈ X}. Then z(u) ≥ z for all u ≥ 0.

Proof: Let x* be optimal for (IP). Then cᵀx* = z, Ax* ≤ b and x* ∈ X. Since u ≥ 0, it follows that z(u) ≥ cᵀx* + uᵀ(b − Ax*) ≥ cᵀx* = z.

The main challenge is, of course, to find Lagrange multipliers that minimize z(u). The best Lagrange multipliers are found by solving (if we can!)

min_{u≥0} z(u) = min_{u≥0} max_{x∈X} {cᵀx + uᵀ(b − Ax)}.
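One standard way to attack this min-max problem is a subgradient method (Lagrangian duality is treated in detail later in the course). The sketch below assumes X = {0,1}ⁿ, so the inner maximization decomposes per coordinate, and uses the simple step sizes 1/k; every z(u) evaluated along the way is a valid upper bound on z, and the best one is kept.

```python
# Sketch of a subgradient method for min_{u>=0} z(u), assuming X = {0,1}^n.
import numpy as np

def lagrangian_dual(c, A, b, iters=200):
    m, n = A.shape
    u = np.zeros(m)
    best = np.inf
    for k in range(1, iters + 1):
        x = (c - A.T @ u > 0).astype(float)       # inner max over {0,1}^n
        zu = c @ x + u @ (b - A @ x)              # z(u), an upper bound on z
        best = min(best, zu)
        u = np.maximum(0.0, u - (b - A @ x) / k)  # projected subgradient step
    return best
```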

LO Duality (recapitulation)

Consider the LO problem (P) z = max {cᵀx : Ax ≤ b, x ≥ 0}. Its dual problem is (D) w = min {bᵀy : Aᵀy ≥ c, y ≥ 0}.

Proposition 5 (Weak duality) If x is feasible for (P) and y is feasible for (D), then cᵀx ≤ bᵀy.

Proof: cᵀx ≤ (Aᵀy)ᵀx = yᵀAx ≤ yᵀb = bᵀy.

Proposition 6 (Strong duality) Let x be feasible for (P) and y feasible for (D). Then x and y are optimal if and only if cᵀx = bᵀy.

Scheme for dualizing:

Primal problem (P): min cᵀx        Dual problem (D): max bᵀy
equality constraint                free variable
inequality constraint (≥)          variable ≥ 0
inequality constraint (≤)          variable ≤ 0
free variable                      equality constraint
variable ≥ 0                       inequality constraint (≤)
variable ≤ 0                       inequality constraint (≥)
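Strong duality is easy to check numerically. The sketch below builds a small primal-dual pair (made-up data) and solves both sides with scipy; the max problems are negated because linprog minimizes.

```python
# Sketch: numerically verifying strong duality on a small LO pair.
import numpy as np
from scipy.optimize import linprog

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([8.0, 9.0])
c = np.array([3.0, 4.0])

# (P) max c.x  s.t. Ax <= b, x >= 0
primal = linprog(-c, A_ub=A, b_ub=b)
# (D) min b.y  s.t. A^T y >= c, y >= 0   (rewritten as -A^T y <= -c)
dual = linprog(b, A_ub=-A.T, b_ub=-c)
print(-primal.fun, dual.fun)   # both 17.0: optimal values coincide
```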

Duality

In the case of Linear Optimization, every feasible solution of the dual problem gives a bound for the optimal value of the primal problem. It is natural to ask whether it is possible to find a dual problem for an IP.

Definition 4 The two problems (IP) z = max {c(x) : x ∈ X} and (D) w = min {ω(u) : u ∈ U} form a (weak) dual pair if c(x) ≤ ω(u) for all x ∈ X and all u ∈ U. When moreover w = z, they form a strong dual pair.

N.B. Any feasible solution of a dual problem yields an upper bound for (IP). By contrast, for a relaxation only its optimal solution yields an upper bound.

Proposition 7 The IP z = max {cᵀx : Ax ≤ b, x ∈ Zⁿ₊} and the IP w = min {uᵀb : Aᵀu ≥ c, u ∈ Zᵐ₊} form a dual pair.

Proposition 8 Suppose that (IP) and (D) form a dual pair. If (D) is unbounded, then (IP) is infeasible. On the other hand, if x* ∈ X and u* ∈ U satisfy c(x*) = ω(u*), then x* is an optimal solution of (IP) and u* is an optimal solution of (D).

A Dual for the Matching Problem

Given a graph G = (V, E), a matching M ⊆ E is a set of (vertex-)disjoint edges. A covering by nodes is a set R ⊆ V of nodes such that every edge has at least one end point in R.

[Figure: a graph in which the red edges form a matching and the green nodes form a covering by nodes.]

Proposition 9 The problem of finding a maximum cardinality matching, max_{M⊆E} {|M| : M is a matching}, and the problem of finding a minimum cardinality covering by nodes, min_{R⊆V} {|R| : R is a covering by nodes}, form a weak dual pair.

Proof: If M is a matching, then the end nodes of its edges are distinct, so their number is 2k, where k = |M|. Any covering by nodes R must contain at least one of the end nodes of each edge in M. Hence |R| ≥ k. Therefore |R| ≥ |M|.

Unfortunately, this duality is not strong! Since the matching shown in the figure has maximum cardinality and the covering by nodes has minimum cardinality, as can easily be verified, that graph proves this.
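The gap can be checked by brute force on a small instance. The sketch below uses a 5-cycle (an assumption, since the slide's graph appears only in the figure); there the maximum matching has size 2 while the minimum covering by nodes has size 3, so the duality is indeed weak.

```python
# Sketch: brute-force check of the weak matching / node-covering duality.
from itertools import combinations

def max_matching_size(nodes, edges):
    def disjoint(M):                      # no node used twice
        ends = [v for e in M for v in e]
        return len(ends) == len(set(ends))
    return max(len(M) for r in range(len(edges) + 1)
               for M in combinations(edges, r) if disjoint(M))

def min_cover_size(nodes, edges):
    def covers(R):                        # every edge touched by R
        return all(u in R or v in R for u, v in edges)
    return min(len(R) for r in range(len(nodes) + 1)
               for R in combinations(nodes, r) if covers(set(R)))

nodes, edges = [1, 2, 3, 4, 5], [(1, 2), (2, 3), (3, 4), (4, 5), (5, 1)]
print(max_matching_size(nodes, edges), min_cover_size(nodes, edges))  # 2 3
```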

A Dual for the Matching Problem (cont.)

The former result can also be obtained from LO duality.

Definition 5 The node-edge matrix of a graph G = (V, E) is an n = |V| by m = |E| matrix A with A_{i,e} = 1 when node i is incident to edge e, and A_{i,e} = 0 otherwise.

With the help of the node-edge matrix A of G, the matching problem can be formulated as the following IP: z = max {1ᵀx : Ax ≤ 1, x ∈ Zᵐ₊}, and the covering by nodes problem as w = min {1ᵀy : Aᵀy ≥ 1, y ∈ Zⁿ₊}, where 1 denotes the all-ones vector. Using LO duality we may write

z = max {1ᵀx : Ax ≤ 1, x ∈ Zᵐ₊}
  ≤ max {1ᵀx : Ax ≤ 1, x ≥ 0}
  = min {1ᵀy : Aᵀy ≥ 1, y ≥ 0}
  ≤ min {1ᵀy : Aᵀy ≥ 1, y ∈ Zⁿ₊} = w.

Primal Bounds: Greedy Search

The idea of a greedy heuristic is to construct a solution from scratch (the empty set), choosing at each step the item bringing the best immediate reward. We give some examples.

0-1 Knapsack problem:

z = max 12x_1 + 8x_2 + 17x_3 + 11x_4 + 6x_5 + 2x_6 + 2x_7
s.t. 4x_1 + 3x_2 + 7x_3 + 5x_4 + 3x_5 + 2x_6 + 3x_7 ≤ 9, x ∈ {0,1}⁷

Greedy Solution: Order the variables so that their profit per unit of resource is nonincreasing. This is already done here, so variables with a low index are more attractive than variables with a higher index. We proceed as shown in the table.

var   c_i/a_i    value   resource used   resource remaining
x_1   12/4 = 3     1           4                 5
x_2   8/3          1           3                 2
x_3   17/7         0           0                 2
x_4   11/5         0           0                 2
x_5   6/3 = 2      0           0                 2
x_6   2/2 = 1      1           2                 0
x_7   2/3          0           0                 0

The resulting solution is x_G = (1,1,0,0,0,1,0) with objective value z_G = 22. All we know is that 22 is a lower bound for the optimal value. Observe that x = (1,0,0,1,0,0,0) is feasible with the higher value 23.
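A sketch of this greedy heuristic, reproducing the table's outcome:

```python
# Sketch of the greedy heuristic for the 0-1 knapsack instance above:
# scan the variables in order of nonincreasing profit per unit of resource.
def greedy_knapsack(c, a, b):
    order = sorted(range(len(c)), key=lambda j: c[j] / a[j], reverse=True)
    x, remaining = [0] * len(c), b
    for j in order:
        if a[j] <= remaining:      # take item j if it still fits
            x[j], remaining = 1, remaining - a[j]
    return x, sum(cj * xj for cj, xj in zip(c, x))

c = [12, 8, 17, 11, 6, 2, 2]
a = [4, 3, 7, 5, 3, 2, 3]
print(greedy_knapsack(c, a, 9))    # ([1, 1, 0, 0, 0, 1, 0], 22)
```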

Primal Bounds: Greedy Search (cont.)

Symmetric TSP: Consider an instance on 6 nodes with the following distances c_ij (upper triangle):

       2    3    4    5    6
1      9    2    8   12   11
2           7   19   10   32
3               29   18    6
4                    24    3
5                         19

Greedy Solution: Order the edges according to nondecreasing cost, and seek to use them in this order. So we proceed as follows to construct the heuristic tour.

step   edge    length   decision
1      (1,3)      2     accept
2      (4,6)      3     accept
3      (3,6)      6     accept
4      (2,3)      7     reject: conflict in node 3
5      (1,4)      8     reject: creates subtour
6      (1,2)      9     accept
7      (2,5)     10     accept
8      (4,5)     24     forced to accept

[Figure: the heuristic tour 1-3-6-4-5-2-1 and the better tour 1-4-6-5-2-3-1.]

The length of the constructed tour is z_G = 54. The tour 1-4-6-5-2-3-1 is shorter: it has length 49.
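A sketch of this greedy edge heuristic: accept each edge, in order of nondecreasing length, unless it would give some node degree 3 or close a subtour on fewer than n nodes.

```python
# Sketch of the greedy edge heuristic for the symmetric TSP instance above.
def greedy_tsp(n, dist):   # dist: dict {(i, j): length} with i < j
    degree = {v: 0 for v in range(1, n + 1)}
    comp = {v: v for v in range(1, n + 1)}   # component labels for subtour test
    tour, total = [], 0
    for (i, j), d in sorted(dist.items(), key=lambda kv: kv[1]):
        closes_early = comp[i] == comp[j] and len(tour) < n - 1
        if degree[i] == 2 or degree[j] == 2 or closes_early:
            continue                          # rejected edge
        tour.append((i, j)); total += d
        degree[i] += 1; degree[j] += 1
        old, new = comp[i], comp[j]           # merge the two components
        comp = {v: new if comp[v] == old else comp[v] for v in comp}
    return tour, total

dist = {(1, 2): 9, (1, 3): 2, (1, 4): 8, (1, 5): 12, (1, 6): 11,
        (2, 3): 7, (2, 4): 19, (2, 5): 10, (2, 6): 32,
        (3, 4): 29, (3, 5): 18, (3, 6): 6,
        (4, 5): 24, (4, 6): 3, (5, 6): 19}
print(greedy_tsp(6, dist))   # tour of length 54, as on the slide
```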

Primal Bounds: Local Search

Local search methods assume that a feasible solution, called the incumbent, is known. The idea of a local search heuristic is to define a neighborhood of solutions close to the incumbent, and to find the best solution in this neighborhood. If it is better than the incumbent, it replaces the incumbent and the procedure is repeated. Otherwise, the incumbent is locally optimal with respect to the neighborhood, and the heuristic terminates. Below we give two examples.

Primal Bounds: Local Search (cont.)

Uncapacitated Facility Location: Consider an instance with m = 6 clients and n = 4 depots, service costs c_ij and fixed depot costs f_j as shown below:

           depot 1   depot 2   depot 3   depot 4
client 1      6         2         3         4
client 2      1         9         4        11
client 3     15         2         6         3
client 4      9        11         4         8
client 5      7        23         2         9
client 6      4         3         1         5

(f_j) = (21, 16, 11, 24)

Let N = {1,2,3,4} denote the set of depots, and S the set of open depots. Let the incumbent be the solution with depots 1 and 2 open, so S = {1,2}. Each client is served by the open depot that is cheapest for that client. So the cost of the incumbent is (2 + 1 + 2 + 9 + 7 + 3) + (21 + 16) = 61.

A possible neighborhood Q(S) of S is the set of all solutions obtained from S by adding or removing a single depot:

Q(S) = {T ⊆ N : T = S ∪ {j} for some j ∉ S, or T = S \ {i} for some i ∈ S}.

In the current example Q(S) = {{1}, {2}, {1,2,3}, {1,2,4}}. A simple computation shows that the costs of these four solutions are 63, 66, 60 and 84, respectively. So S = {1,2,3} is the next incumbent. The new neighborhood becomes Q(S) = {{1,2}, {1,3}, {2,3}, {1,2,3,4}}, with minimal cost 42 for S = {2,3}, which becomes the new incumbent. The new neighborhood becomes Q(S) = {{2}, {3}, {1,2,3}, {2,3,4}}, with minimal cost 31 for S = {3}, the new incumbent. The new neighborhood becomes Q(S) = {{1,3}, {2,3}, {3,4}, ∅}; the empty set is infeasible, and the remaining costs are all > 31. So S = {3} is a locally optimal solution.
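A sketch of this add/remove local search on the instance above (depots are 0-based indices here, and the empty set is excluded since every client must be served):

```python
# Sketch of add/remove local search for the facility location instance above.
c = [[6, 2, 3, 4], [1, 9, 4, 11], [15, 2, 6, 3],
     [9, 11, 4, 8], [7, 23, 2, 9], [4, 3, 1, 5]]
f = [21, 16, 11, 24]

def cost(S):   # cheapest open depot per client, plus fixed costs
    return sum(min(row[j] for j in S) for row in c) + sum(f[j] for j in S)

S = {0, 1}                     # incumbent: depots 1 and 2 open
while True:
    nbhd = [S | {j} for j in range(4) if j not in S] + \
           [S - {i} for i in S if len(S) > 1]   # empty set excluded
    best = min(nbhd, key=cost)
    if cost(best) >= cost(S):
        break                  # S is locally optimal
    S = best
print(sorted(S), cost(S))      # [2] 31: depot 3 alone, cost 31
```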

Primal Bounds: Local Search (cont.)

Graph Equipartition Problem: Given a graph G = (V, E) with n = |V|, the problem is to find a subset S ⊆ V with |S| = n/2 for which the number c(S) of edges in the cut set δ(S, V \ S) is minimized, where δ(S, V \ S) = {(i,j) ∈ E : i ∈ S, j ∉ S}.

In this example all feasible sets have the same size, n/2. A natural neighborhood of a feasible set S ⊆ V therefore consists of all subsets of nodes obtained by replacing one element in S by one element not in S:

Q(S) = {T ⊆ V : |T \ S| = |S \ T| = 1}.

[Figure: a graph on the six nodes 1, ..., 6.]

Example: In the graph shown in the figure we start with S = {1,2,3}, for which c(S) = 6. Then

Q(S) = {{1,2,4}, {1,2,5}, {1,2,6}, {1,3,4}, {1,3,5}, {1,3,6}, {2,3,4}, {2,3,5}, {2,3,6}},

with c(T) = 6, 5, 4, 4, 5, 6, 5, 2, 5, respectively. The new incumbent is S = {2,3,5} with c(S) = 2. Q(S) does not contain a better solution, as may easily be verified, so S = {2,3,5} is locally optimal.
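A sketch of this swap local search. The slide's actual graph appears only in the figure, so the 6-node edge list below is a made-up stand-in for illustration:

```python
# Sketch of swap local search for graph equipartition on a made-up graph.
from itertools import product

edges = [(1, 2), (1, 3), (2, 3), (2, 6), (3, 4), (4, 5), (5, 6), (4, 6)]

def cut(S):   # number of edges with exactly one endpoint in S
    return sum(1 for u, v in edges if (u in S) != (v in S))

S = {1, 4, 5}                  # an arbitrary starting equipartition
while True:
    swaps = [(S - {i}) | {j} for i, j in product(S, {1, 2, 3, 4, 5, 6} - S)]
    best = min(swaps, key=cut)
    if cut(best) >= cut(S):
        break                  # S is locally optimal
    S = best
print(sorted(S), cut(S))       # [4, 5, 6] 2 for this made-up graph
```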