Combinatorial optimization problems
1 Combinatorial optimization problems Heuristic Algorithms Giovanni Righini University of Milan Department of Computer Science (Crema)
2 Optimization In general an optimization problem can be formulated as: minimize z(x) subject to x ∈ X, where x is a vector of variables; z(x) is the objective function; X is the feasible region, i.e. the set of solutions satisfying the constraints. A solution is an assignment of values to the variables.
3 Combinatorial optimization In general a combinatorial optimization problem can be formulated as: minimize z(x) subject to x ∈ X ⊆ {0, 1}^E. The feasible region X is defined as a subset of the set of all possible subsets of a given ground set E (!). Let's say it again with an example...
4 Combinatorial optimization: example Ground set E: the set of edges of a given graph G. All possible subsets of E are the 2^|E| subsets of edges. Only a subset X of them are, for instance, spanning trees. The condition x ∈ X describes the feasible region of any problem involving the search for an optimal spanning tree. Although the ground set E is rather small, the number of its subsets is exponential in its cardinality (2^|E|) and hence the number of solutions can be very large. Even restricting the search to feasible solutions, the cardinality of X can be combinatorial: it grows as a combinatorial number when |E| grows.
5 The combinatorial problem's structure The variables, the constraints and the objective function of a combinatorial optimization problem define its combinatorial structure. This is a semi-informal way to indicate the main characteristics of the problem that affect the effectiveness of different solution procedures. Analyzing the combinatorial structure of a given problem gives useful indications about the most suitable algorithm to solve it. Moreover, it can uncover similarities between seemingly different problems.
6 Problems on weighted sets: the knapsack problem (KP) From a ground set of items, select a subset such that the value of the selected items is maximum and their weight does not exceed a given capacity. We are given: a ground set N of items; a weight function a : N → ℕ; a capacity b ∈ ℕ; a value function c : N → ℕ. We can associate a binary variable x_j with each element of the ground set: the solution space is {0, 1}^n, where n = |N|. In this way every solution corresponds to a subset and to its binary characteristic vector x. The feasible region X contains the subsets with total weight not larger than b: X = {x ∈ {0, 1}^n : Σ_{j∈N} a_j x_j ≤ b}. The objective is the maximization of the total value: max z(x) = Σ_{j∈N} c_j x_j.
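As a sketch, the characteristic-vector formulation above can be expressed directly in code; the weights, values and capacity below are illustrative, not data from the course example:

```python
# Knapsack in characteristic-vector form (illustrative data).
a = [3, 5, 2, 4, 1]   # weights a_j (assumed)
c = [4, 6, 3, 5, 2]   # values c_j (assumed)
b = 8                 # capacity

def feasible(x):
    """x belongs to X iff the total weight does not exceed b."""
    return sum(aj * xj for aj, xj in zip(a, x)) <= b

def z(x):
    """Additive objective: total value of the selected items."""
    return sum(cj * xj for cj, xj in zip(c, x))

x = [1, 0, 1, 0, 1]   # characteristic vector of the subset {0, 2, 4}
print(feasible(x), z(x))  # weight 3+2+1 = 6 <= 8 -> True, value 4+3+2 = 9
```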
7 Example [Figure: a knapsack instance with items A–F, their value and weight tables, and capacity b = 8. Two feasible solutions are shown: x′ = (0, 0, 1, 1, 1, 0, 0) ∈ X with z(x′) = 13, and x″ = (1, 0, 1, 1, 0, 0, 0) ∈ X with z(x″) = 16.]
8 Problems on subsets with a metric: the Max Diversity Problem (MDP) Given a ground set of items, we want to select a subset of given cardinality, maximizing a measure of the pairwise distances between the selected items. We are given: a ground set N; a distance function d : N × N → ℕ; a positive integer k ∈ {1, ..., |N|}. A possible choice of the variables is analogous to the previous case: x is the binary characteristic vector of the selected subset. The feasible region X contains all the subsets of cardinality k: X = {x ∈ {0, 1}^n : Σ_{j∈N} x_j = k}. The objective is to maximize the sum of the pairwise distances: max z(x) = Σ_{i,j∈N} d_ij x_i x_j.
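A minimal sketch of the quadratic objective, summing each unordered pair once (the distances below are assumed, symmetric values, not the example data):

```python
# Max Diversity objective over unordered pairs (illustrative distances).
import itertools

d = {(0, 1): 4, (0, 2): 2, (0, 3): 5,
     (1, 2): 3, (1, 3): 1, (2, 3): 6}   # d_ij for i < j (assumed)

def z(selected, d):
    """Sum of pairwise distances among the selected items,
    counting each unordered pair once."""
    return sum(d[i, j] for i, j in itertools.combinations(sorted(selected), 2))

print(z({0, 2, 3}, d))  # d(0,2) + d(0,3) + d(2,3) = 2 + 5 + 6 = 13
```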
9 Example [Figure: an MDP instance on items A–G with pairwise distances. Two feasible solutions are shown: x′ = (0, 0, 1, 1, 1, 0, 0) ∈ X with z(x′) = 24, and x″ = (1, 0, 1, 0, 0, 0, 1) ∈ X with z(x″) = 46.]
10 Additive and non-additive objective functions In general the objective function associates rational or integer values with (feasible) subsets of the ground set: z : X → ℕ. Computing its value can be more or less difficult. The KP has an additive (linear) objective function: its value is the sum of the values of another value function c whose domain is the ground set: c : N → ℕ. The MDP has a non-additive (quadratic) objective function. Both of them are easy to compute, but the additive objective function of the KP is easier to update when an item is inserted into or deleted from the solution. It is enough to: add c_j for each inserted item j ∈ N; subtract c_j for each deleted item j ∈ N. For the non-additive objective function of the MDP this is not true.
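The difference in update cost can be sketched as follows: inserting item j changes the KP objective by c_j alone, while the MDP objective changes by the sum of distances from j to every item already selected (all data below is illustrative):

```python
# Incremental objective updates when inserting item j into solution S.
c = [4, 6, 3]                              # KP values (assumed)
d = [[0, 2, 5], [2, 0, 3], [5, 3, 0]]      # MDP distance matrix (assumed)

def kp_delta(S, j):
    return c[j]                            # O(1): independent of the solution S

def mdp_delta(S, j):
    return sum(d[i][j] for i in S)         # O(|S|): depends on the rest of S

S = {0, 1}
print(kp_delta(S, 2), mdp_delta(S, 2))     # 3 and 5 + 3 = 8
```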
11 Set partitioning problems: the Bin Packing Problem (BPP) A set of weighted items must be partitioned into the minimum number of subsets, so that the total weight of each subset is within a given capacity. We are given: a set N of items; a set M of bins; a weight function a : N → ℕ; a capacity b of the bins. The ground set of the problem contains all the item-bin pairs: E = N × M. A solution is represented by nm binary variables with two indices i ∈ N and j ∈ M. The feasible region contains the partitions of the items complying with the capacity constraints: X = {x ∈ {0, 1}^{nm} : Σ_{j∈M} x_ij = 1 ∀i ∈ N, Σ_{i∈N} a_i x_ij ≤ b ∀j ∈ M}.
12 Set partitioning problems: the Bin Packing Problem (BPP) The objective is to minimize the number of bins used: min z(x) = |{j ∈ M : Σ_{i∈N} x_ij > 0}|. To get rid of the cardinality function, we need additional binary variables: the characteristic vector y of the bin subset. We obtain: X = {(x, y) ∈ {0, 1}^{nm+m} : Σ_{j∈M} x_ij = 1 ∀i ∈ N, Σ_{i∈N} a_i x_ij ≤ b y_j ∀j ∈ M}; min z(y) = Σ_{j∈M} y_j.
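The role of the y variables can be sketched as follows (a hypothetical helper, not from the slides: `assignment[i]` is the bin of item i):

```python
# Counting used bins: y_j = 1 iff at least one item is assigned to bin j.
def bins_used(assignment, m):
    y = [0] * m            # characteristic vector of the bin subset
    for j in assignment:
        y[j] = 1
    return sum(y)

print(bins_used([0, 0, 1, 2, 1], 5))  # items in bins {0, 1, 2} -> 3 of 5 bins
```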
13 Example S′ = {(A, 1), (B, 1), (C, 2), (D, 2), (E, 2), (F, 3), (G, 4), (H, 5), (I, 5)} ∈ X, y′ = (1, 1, 1, 1, 1), z(y′) = 5. S″ = {(A, 1), (B, 1), (C, 2), (D, 2), (E, 2), (F, 3), (G, 4), (H, 1), (I, 4)} ∈ X, y″ = (1, 1, 1, 1, 0), z(y″) = 4.
14 Set partitioning problems: the Parallel Machine Scheduling Problem (PMSP) A set of indivisible jobs of given duration must be assigned to a set of machines, minimizing the overall completion time. We are given: a set N of jobs; a set M of machines; a processing time function p : N → ℕ. The ground set contains all job-machine pairs. We can use the same variable choice as for the BPP. The feasible region X contains the partitions of N into subsets: X = {x ∈ {0, 1}^{nm} : Σ_{j∈M} x_ij = 1 ∀i ∈ N}. The objective function is the minimization of the maximum working time among all machines: min z(x) = max_{j∈M} Σ_{i∈N} p_i x_ij.
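The min-max objective can be sketched as follows (processing times below are assumed, not the example data):

```python
# PMSP objective: the makespan is the largest machine load.
def makespan(assignment, p, m):
    """assignment[i] = machine of job i; p[i] = processing time of job i."""
    load = [0] * m
    for i, j in enumerate(assignment):
        load[j] += p[i]
    return max(load)

p = [3, 2, 2, 4, 1]                     # processing times (assumed)
print(makespan([0, 1, 1, 2, 0], p, 3))  # loads (4, 4, 4) -> makespan 4
```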
15 Example [Figure: a PMSP instance with jobs N = {L1, L2, L3, L4, L5, L6}, machines M = {M1, M2, M3} and a table of processing times p. Two feasible assignments are shown as Gantt charts: S′ = {(L1, M1), (L2, M2), (L3, M2), (L4, M2), (L5, M1), (L6, M3)} ∈ X and S″ = {(L1, M1), (L2, M1), (L3, M2), (L4, M2), (L5, M2), (L6, M3)} ∈ X, each with its value z(x).]
16 Sensitive and insensitive objective functions The objective functions of the BPP and the PMSP are neither additive nor easy to compute. Small changes in the solution x may have different impacts on the objective function value: a variation equal to the duration of the changed job (e.g. L5 on M1); no variation (e.g. L5 on M3); an intermediate variation (e.g. L2 on M2). This is because the effect of the change depends both on the modified elements and on the non-modified elements. In both problems the objective function is "flat": many different feasible solutions have the same value.
17 Problems on matrices: the Set Covering Problem (SCP) Given a binary matrix and a vector of costs associated with the columns, select a minimum cost subset of columns covering all the rows. We are given: a binary matrix a ∈ B^{m×n} with a set R of m rows and a set C of n columns; a cost function c : C → ℕ. A column j ∈ C covers a row i ∈ R if and only if a_ij = 1. The ground set is the set of columns C. The feasible region contains the subsets of columns that cover all the rows: X = {x ∈ {0, 1}^n : Σ_{j∈C} a_ij x_j ≥ 1 ∀i ∈ R}. The objective is to minimize the total cost of the selected columns: min z(x) = Σ_{j∈C} c_j x_j.
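The covering constraints can be checked row by row, as in this sketch (the matrix and costs below are illustrative):

```python
# SCP feasibility and cost (illustrative matrix and costs).
def covers(A, x):
    """Feasible iff every row is covered by at least one selected column."""
    return all(any(A[i][j] and x[j] for j in range(len(x)))
               for i in range(len(A)))

A = [[1, 0, 1],
     [0, 1, 1],
     [1, 1, 0]]
c = [5, 4, 3]   # column costs (assumed)
x = [1, 0, 1]
print(covers(A, x), sum(cj * xj for cj, xj in zip(c, x)))  # True, cost 5 + 3 = 8
```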
18 Example [Figure: an SCP instance with cost vector c and binary matrix a. Two solutions are shown: x′ = (1, 0, 1, 0, 1, 0) ∈ X with z(x′) = 19, and x″ = (1, 0, 0, 0, 1, 1) ∈ X with z(x″) = 15.]
19 The feasibility test In a heuristic algorithm the following sub-problem may often occur: given a solution x, is it feasible or not? x ∈ X? This is a decision problem. The feasibility test may require: an instant check on a single number (e.g. the total weight in the KP, the cardinality of the subset in the MDP); a quick scan of some attributes of the solution (e.g. exactly one machine for each job in the PMSP); the computation and check of many different values (e.g. the weight in each bin in the BPP). The time required may differ according to whether the feasibility test is done: on a generic solution x; or on a solution x′ obtained by a slight modification of a feasible solution x.
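The last point can be sketched on the KP (illustrative data; the `KnapsackState` class is a hypothetical helper): caching the current total weight turns the feasibility test on a modified solution into a single comparison.

```python
# Feasibility test after a slight modification (KP sketch).
a, b = [3, 5, 2, 3], 8   # weights and capacity (assumed)

class KnapsackState:
    def __init__(self):
        self.items, self.weight = set(), 0
    def can_insert(self, j):
        return self.weight + a[j] <= b   # O(1) thanks to the cached weight
    def insert(self, j):
        self.items.add(j)
        self.weight += a[j]

s = KnapsackState()
s.insert(0); s.insert(2)                 # current weight 3 + 2 = 5
print(s.can_insert(1), s.can_insert(3))  # 5+5 > 8 -> False, 5+3 <= 8 -> True
```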
20 Problems on matrices: the Set Packing Problem Given a binary matrix and a weight vector associated with the columns, select a maximum weight subset of columns with no conflicts. We are given: a binary matrix a ∈ B^{m×n} with a set R of m rows and a set C of n columns; a weight function w : C → ℕ. Two columns j, j′ ∈ C are in conflict if and only if there is at least a row i ∈ R such that a_ij = a_ij′ = 1. The ground set is the set of columns C. The feasible region contains the subsets of columns with no conflicts: X = {x ∈ {0, 1}^n : Σ_{j∈C} a_ij x_j ≤ 1 ∀i ∈ R}. The objective is to maximize the total weight of the selected columns: max z(x) = Σ_{j∈C} w_j x_j.
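A sketch of the conflict check over an illustrative matrix: no row may be touched by more than one selected column.

```python
# Set packing feasibility (illustrative matrix).
def conflict_free(A, selected):
    return all(sum(A[i][j] for j in selected) <= 1 for i in range(len(A)))

A = [[1, 1, 0],
     [0, 1, 1],
     [1, 0, 0]]
print(conflict_free(A, [0, 2]), conflict_free(A, [0, 1]))  # True, False
```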
21 Example [Figure: a Set Packing instance with weight vector w and binary matrix a. Two solutions are shown: x′ = (0, 1, 0, 1, 0, 0) ∈ X with z(x′) = 20, and x″ = (1, 0, 0, 0, 1, 1) ∈ X with z(x″) = 15.]
22 Problems on matrices: the Set Partitioning Problem (SPP) Given a binary matrix and a cost vector associated with the columns, select a minimum cost subset of columns covering all the rows with no conflicts. We are given: a binary matrix a ∈ B^{m×n} with a set R of m rows and a set C of n columns; a cost function c : C → ℕ. The ground set is the set of columns C. The feasible region contains the subsets of columns covering all the rows with no conflicts: X = {x ∈ {0, 1}^n : Σ_{j∈C} a_ij x_j = 1 ∀i ∈ R}. The objective is to minimize the total cost of the selected columns: min z(x) = Σ_{j∈C} c_j x_j.
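The partitioning constraints combine covering and packing: every row must be covered exactly once. A sketch over an illustrative matrix:

```python
# SPP feasibility (illustrative matrix): each row covered exactly once.
def partitions(A, selected):
    return all(sum(A[i][j] for j in selected) == 1 for i in range(len(A)))

A = [[1, 0, 0],
     [1, 1, 0],
     [0, 1, 1]]
print(partitions(A, [0, 2]), partitions(A, [0, 1]))  # True, False
```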
23 Example [Figure: an SPP instance with cost vector c and binary matrix a. Two solutions are shown: x′ = (0, 1, 0, 1, 0, 1) ∈ X with z(x′) = 20, and x″ = (1, 0, 0, 0, 1, 1) ∉ X with z(x″) = 15.]
24 The search for feasible solutions In a heuristic algorithm the following sub-problem may occur: find a feasible solution x ∈ X. This is a search problem. For some problems the solution is trivial: in the KP the empty subset is feasible; in the MDP, any subset of k elements is feasible; in the SCP, the whole column set is feasible (or no feasible solution exists); in other problems it is enough to respect easy consistency constraints. But in some cases the search for a feasible solution is difficult: in the BPP it is not obvious how many bins are needed (an upper bound is the number of items); in the SPP no polynomial-time algorithm is known to provide a feasible solution.
25 Graph optimization problems: the Vertex Cover Problem (VCP) Given a graph G = (V, E), select a minimum cardinality vertex subset such that every edge is incident to it. We are given a graph G = (V, E), where: V is the set of vertices, of cardinality n; E is the set of edges. The ground set is V (we have a binary variable for each vertex). The feasible region contains all vertex subsets covering the edges: X = {x ∈ {0, 1}^n : x_i + x_j ≥ 1 ∀[i, j] ∈ E}. The objective is to minimize the number of selected vertices: min z(x) = Σ_{i∈V} x_i.
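A sketch of the covering condition on an illustrative graph: every edge needs at least one selected endpoint.

```python
# Vertex cover feasibility (illustrative graph: a 4-cycle).
E = [(0, 1), (1, 2), (2, 3), (3, 0)]

def is_cover(E, S):
    return all(i in S or j in S for i, j in E)

print(is_cover(E, {0, 2}), is_cover(E, {0, 1}))  # True, False
```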
26 Example [Figure: a graph on vertices A–H. Two solutions are shown: x′ = (0, 1, 0, 1, 1, 1, 1, 0) ∈ X with z(x′) = 5, and x″ = (1, 0, 1, 0, 0, 0, 0, 1) ∉ X with z(x″) = 3.]
27 Graph optimization problems: the Max Clique Problem (MCP) Given a graph with weights associated with the vertices, select a maximum weight vertex subset such that all its elements are adjacent to each other. We are given: a graph G = (V, E) with n vertices; a weight function w : V → ℕ. The ground set is the vertex set V. The feasible region contains the cliques (complete vertex subsets, i.e. vertex subsets containing all possible edges): X = {x ∈ {0, 1}^n : x_i + x_j ≤ 1 ∀[i, j] ∉ E}. The objective is to maximize the weight of the selected subset: max z(x) = Σ_{i∈V} w_i x_i.
28 Example [Figure: a graph on vertices A–H with uniform weights w_i = 1 for each i ∈ V. Two solutions are shown: x′ = (0, 1, 1, 0, 0, 1, 1) ∈ X with z(x′) = 4, and x″ = (1, 0, 0, 1, 1, 0, 0) ∈ X with z(x″) = 3.]
29 Graph optimization problems: the Max Independent Set Problem (MISP) Given a graph with weights associated with the vertices, select a maximum weight vertex subset such that all its elements are not adjacent to each other. We are given: a graph G = (V, E) with n vertices; a weight function w : V → ℕ. The ground set is the vertex set V. The feasible region contains the independent sets (edge-free vertex subsets, i.e. vertex subsets containing no edges): X = {x ∈ {0, 1}^n : x_i + x_j ≤ 1 ∀[i, j] ∈ E}. The objective is to maximize the weight of the selected subset: max z(x) = Σ_{i∈V} w_i x_i.
30 Example [Figure: a graph on vertices A–H. Two solutions are shown: x′ = (0, 1, 1, 0, 0, 1, 1) ∈ X with z(x′) = 4, and x″ = (1, 0, 0, 1, 1, 0, 0) ∈ X with z(x″) = 3.]
31 Relationships between problems (1) Every instance of the MCP is equivalent to an instance of the MISP defined on the complementary graph. [Figure: a graph on vertices A–H and its complement.]
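The equivalence can be sketched directly (the graph below is illustrative): S is a clique in G if and only if S is an independent set in the complement of G.

```python
# MCP <-> MISP equivalence via the complement graph (illustrative graph).
import itertools

def complement(n, E):
    E = {frozenset(e) for e in E}
    return [p for p in itertools.combinations(range(n), 2)
            if frozenset(p) not in E]

def is_independent(E, S):
    return all(not (i in S and j in S) for i, j in E)

def is_clique(n, E, S):
    return is_independent(complement(n, E), S)

E = [(0, 1), (0, 2), (1, 2), (2, 3)]
print(is_clique(4, E, {0, 1, 2}), is_independent(E, {0, 3}))  # True, True
```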
32 Relationships between problems (2) The VCP and the SCP are also linked: every VCP instance can be translated into an SCP instance: each edge of the graph in the VCP corresponds to a row of the matrix in the SCP; each vertex of the graph in the VCP corresponds to a column of the matrix in the SCP; a_ev = 1 if and only if edge e is incident to vertex v (so exactly two entries are equal to 1 in each row). The optimal solution of the SCP corresponds to the optimal solution of the VCP.
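The translation can be sketched in a few lines (the triangle graph below is illustrative):

```python
# VCP -> SCP translation: one row per edge, one column per vertex,
# a_ev = 1 iff edge e is incident to vertex v (illustrative graph).
E = [(0, 1), (1, 2), (2, 0)]   # a triangle (assumed)
n = 3
A = [[1 if v in e else 0 for v in range(n)] for e in E]
print(A)  # [[1, 1, 0], [0, 1, 1], [1, 0, 1]] -- exactly two 1s per row
```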
33 [Figure: a graph on vertices A–H and the corresponding SCP matrix, with one row per edge: (A, D), (A, E), (B, C), (B, F), (B, G), (C, F), (C, G), (D, E), (D, H), (F, G), (F, H).] The converse is not possible in general.
34 Relationships between problems (3) The BPP and the PMSP are also equivalent, but the correspondence is more complex: jobs in the PMSP correspond to items in the BPP; machines in the PMSP correspond to bins in the BPP; but in the BPP the capacity is given and the number of bins is minimized, while in the PMSP the number of machines is given and the completion time is minimized. To find the minimum number of bins of the BPP: 1. set a tentative value k; 2. define the corresponding instance of the PMSP (with k machines); 3. find the minimum completion time t: if t is larger than the bin capacity of the BPP, increase k and repeat; if t is not larger than the bin capacity of the BPP, decrease k and repeat.
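The steps above can be sketched as follows. This is a brute-force illustration only viable for tiny instances, and it assumes every item fits within the capacity; the binary search is valid because the minimum makespan is non-increasing in the number of machines.

```python
# BPP solved via repeated PMSP (brute-force sketch, tiny instances only).
import itertools

def min_makespan(p, k):
    """Exhaustive PMSP: minimum makespan of jobs p on k machines."""
    best = float("inf")
    for assign in itertools.product(range(k), repeat=len(p)):
        load = [0] * k
        for i, j in enumerate(assign):
            load[j] += p[i]
        best = min(best, max(load))
    return best

def min_bins(p, capacity):
    """Binary search on the tentative number of machines k."""
    lo, hi = 1, len(p)
    while lo < hi:
        k = (lo + hi) // 2
        if min_makespan(p, k) <= capacity:
            hi = k
        else:
            lo = k + 1
    return lo

print(min_bins([4, 3, 3, 2], 6))  # e.g. bins {4, 2} and {3, 3} -> 2
```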
35 [Figure: two assignments of the jobs L1–L6, one using four machines/bins and one using three.] The inverse procedure is also possible. The two problems are equivalent, but the transformation implies that one of them be solved repeatedly.
36 Graph optimization problems: the (Asymmetric) Traveling Salesman Problem (ATSP) Given a digraph with costs on the arcs, find a minimum cost Hamiltonian circuit. We are given: a digraph G = (N, A); a cost function c : A → ℕ. The ground set is the arc set A (we use a binary variable for each arc). The feasible region contains the Hamiltonian circuits. How can we describe such a feasible region? How can we modify a feasible solution into another feasible solution? Is it always possible to find a Hamiltonian circuit? The objective is to minimize the total cost of the selected arcs: min z(x) = Σ_{(i,j)∈A} c_ij x_ij.
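One answer to the first question can be sketched operationally (illustrative data): a set of arcs is a Hamiltonian circuit iff every node has exactly one outgoing arc and following the successors from any node visits all n nodes before returning to the start.

```python
# ATSP feasibility and cost (illustrative arcs and costs).
def is_hamiltonian(arcs, n):
    succ = dict(arcs)   # successor of each node
    if len(succ) != n or set(succ) != set(range(n)):
        return False
    node, seen = 0, set()
    for _ in range(n):  # follow the successors for n steps
        seen.add(node)
        node = succ[node]
    return node == 0 and len(seen) == n

arcs = [(0, 2), (2, 1), (1, 3), (3, 0)]
c = {(0, 2): 3, (2, 1): 1, (1, 3): 4, (3, 0): 2}  # arc costs (assumed)
print(is_hamiltonian(arcs, 4), sum(c[a] for a in arcs))  # True, 10
```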
37 Example [Figure: a digraph on nodes 1–8. Two arc subsets are shown: C′ = {(1, 4), (4, 5), (5, 8), (8, 7), (7, 6), (6, 2), (2, 3), (3, 1)} ∈ X with its cost z(x′), and C″ = {(4, 5), (5, 8), (8, 7), (7, 4), (1, 2), (2, 3), (3, 6), (6, 1)} ∉ X with z(x″) = 106.]
38 Graph optimization problems: the (Asymmetric) Vehicle Routing Problem (AVRP) Given a digraph with costs on the arcs, a depot node, a demand for each node and a capacity, find a minimum cost subset of circuits such that they cover all the nodes, they all include the depot and the total demand in each of them does not exceed the capacity. We are given: a digraph G = (N, A); a depot node d ∈ N; a cost function c : A → ℕ; a demand function w : N → ℕ; a capacity W ∈ ℕ.
39 The ground set E can be: the arc set A; or the set of possible pairs (node, circuit). The feasible region could contain: all subsets of arcs satisfying the constraints (the feasibility test would require visiting a graph); or all partitions of the non-depot nodes into subsets of limited weight (demand) that can be visited along a circuit also including the depot (a difficult sub-problem: the ATSP!). The objective is to minimize the total cost of the selected arcs: min z(x) = Σ_{(i,j)∈A} c_ij x_ij. This is the expression of z with A as a ground set.
40 Example [Figure: an AVRP instance with depot d = 1.] The solutions could be represented as: subsets of arcs, S = {(d, 2), (2, 3), (3, 6), (6, d), (d, 4), (4, 5), (5, 8), (8, 7), (7, d)} ∈ X; or partitions of nodes into circuits, S = {(2, 1), (3, 1), (6, 1), (4, 2), (5, 2), (7, 2), (8, 2)} ∈ X. In both cases z(x) = 133.
41 Combining alternative formulations The AVRP (like many other combinatorial optimization problems) exhibits an important feature: different definitions of the ground set are preferable for different purposes. When the ground set is the arc set: it is easy to evaluate the objective function; it is difficult to test feasibility. When the ground set is made of the (node, circuit) pairs: it is easy to test feasibility; it is difficult to evaluate the objective. The same holds when we update/modify a solution. Which formulation should we adopt? Either the one that makes the most frequent operations more efficient, or both of them, with the additional task of keeping them consistent.
NP-Complete Problems and Approximation Algorithms Efficiency of Algorithms Algorithms that have time efficiency of O(n k ), that is polynomial of the input size, are considered to be tractable or easy
More informationTRANSPORTATION & NETWORK PROBLEMS
TRANSPORTATION & NETWORK PROBLEMS Transportation Problems Problem: moving output from multiple sources to multiple destinations. The objective is to minimise costs (maximise profits). Network Representation
More informationTheoretical Computer Science
Theoretical Computer Science 411 (010) 417 44 Contents lists available at ScienceDirect Theoretical Computer Science journal homepage: wwwelseviercom/locate/tcs Resource allocation with time intervals
More informationCS/COE
CS/COE 1501 www.cs.pitt.edu/~nlf4/cs1501/ P vs NP But first, something completely different... Some computational problems are unsolvable No algorithm can be written that will always produce the correct
More informationApproximation Algorithms
Approximation Algorithms What do you do when a problem is NP-complete? or, when the polynomial time solution is impractically slow? assume input is random, do expected performance. Eg, Hamiltonian path
More informationCHAPTER 3 FUNDAMENTALS OF COMPUTATIONAL COMPLEXITY. E. Amaldi Foundations of Operations Research Politecnico di Milano 1
CHAPTER 3 FUNDAMENTALS OF COMPUTATIONAL COMPLEXITY E. Amaldi Foundations of Operations Research Politecnico di Milano 1 Goal: Evaluate the computational requirements (this course s focus: time) to solve
More informationComputational Integer Programming. Lecture 2: Modeling and Formulation. Dr. Ted Ralphs
Computational Integer Programming Lecture 2: Modeling and Formulation Dr. Ted Ralphs Computational MILP Lecture 2 1 Reading for This Lecture N&W Sections I.1.1-I.1.6 Wolsey Chapter 1 CCZ Chapter 2 Computational
More informationDetermine the size of an instance of the minimum spanning tree problem.
3.1 Algorithm complexity Consider two alternative algorithms A and B for solving a given problem. Suppose A is O(n 2 ) and B is O(2 n ), where n is the size of the instance. Let n A 0 be the size of the
More informationScheduling and Optimization Course (MPRI)
MPRI Scheduling and optimization: lecture p. /6 Scheduling and Optimization Course (MPRI) Leo Liberti LIX, École Polytechnique, France MPRI Scheduling and optimization: lecture p. /6 Teachers Christoph
More informationFINAL EXAM PRACTICE PROBLEMS CMSC 451 (Spring 2016)
FINAL EXAM PRACTICE PROBLEMS CMSC 451 (Spring 2016) The final exam will be on Thursday, May 12, from 8:00 10:00 am, at our regular class location (CSI 2117). It will be closed-book and closed-notes, except
More informationApproximation Algorithms for Re-optimization
Approximation Algorithms for Re-optimization DRAFT PLEASE DO NOT CITE Dean Alderucci Table of Contents 1.Introduction... 2 2.Overview of the Current State of Re-Optimization Research... 3 2.1.General Results
More informationAlgorithms. NP -Complete Problems. Dong Kyue Kim Hanyang University
Algorithms NP -Complete Problems Dong Kyue Kim Hanyang University dqkim@hanyang.ac.kr The Class P Definition 13.2 Polynomially bounded An algorithm is said to be polynomially bounded if its worst-case
More informationChapter 3. Complexity of algorithms
Chapter 3 Complexity of algorithms In this chapter, we see how problems may be classified according to their level of difficulty. Most problems that we consider in these notes are of general character,
More informationModelling linear and linear integer optimization problems An introduction
Modelling linear and linear integer optimization problems An introduction Karen Aardal October 5, 2015 In optimization, developing and analyzing models are key activities. Designing a model is a skill
More informationComputers and Intractability. The Bandersnatch problem. The Bandersnatch problem. The Bandersnatch problem. A Guide to the Theory of NP-Completeness
Computers and Intractability A Guide to the Theory of NP-Completeness The Bible of complexity theory Background: Find a good method for determining whether or not any given set of specifications for a
More informationExact and Heuristic Algorithms for the Symmetric and Asymmetric Vehicle Routing Problem with Backhauls
Exact and Heuristic Algorithms for the Symmetric and Asymmetric Vehicle Routing Problem with Backhauls Paolo Toth, Daniele Vigo ECCO IX - Dublin 1996 Exact and Heuristic Algorithms for VRPB 1 Vehicle Routing
More informationConnectedness of Efficient Solutions in Multiple. Objective Combinatorial Optimization
Connectedness of Efficient Solutions in Multiple Objective Combinatorial Optimization Jochen Gorski Kathrin Klamroth Stefan Ruzika Communicated by H. Benson Abstract Connectedness of efficient solutions
More informationGreedy Algorithms My T. UF
Introduction to Algorithms Greedy Algorithms @ UF Overview A greedy algorithm always makes the choice that looks best at the moment Make a locally optimal choice in hope of getting a globally optimal solution
More informationComputers and Intractability
Computers and Intractability A Guide to the Theory of NP-Completeness The Bible of complexity theory M. R. Garey and D. S. Johnson W. H. Freeman and Company, 1979 The Bandersnatch problem Background: Find
More informationShow that the following problems are NP-complete
Show that the following problems are NP-complete April 7, 2018 Below is a list of 30 exercises in which you are asked to prove that some problem is NP-complete. The goal is to better understand the theory
More informationACO Comprehensive Exam October 14 and 15, 2013
1. Computability, Complexity and Algorithms (a) Let G be the complete graph on n vertices, and let c : V (G) V (G) [0, ) be a symmetric cost function. Consider the following closest point heuristic for
More informationPolynomial-time reductions. We have seen several reductions:
Polynomial-time reductions We have seen several reductions: Polynomial-time reductions Informal explanation of reductions: We have two problems, X and Y. Suppose we have a black-box solving problem X in
More informationIntroduction to optimization and operations research
Introduction to optimization and operations research David Pisinger, Fall 2002 1 Smoked ham (Chvatal 1.6, adapted from Greene et al. (1957)) A meat packing plant produces 480 hams, 400 pork bellies, and
More informationChapter 8 Dynamic Programming
Chapter 8 Dynamic Programming Copyright 2007 Pearson Addison-Wesley. All rights reserved. Dynamic Programming Dynamic Programming is a general algorithm design technique for solving problems defined by
More information3.3 Easy ILP problems and totally unimodular matrices
3.3 Easy ILP problems and totally unimodular matrices Consider a generic ILP problem expressed in standard form where A Z m n with n m, and b Z m. min{c t x : Ax = b, x Z n +} (1) P(b) = {x R n : Ax =
More information3.10 Column generation method
3.10 Column generation method Many relevant decision-making problems can be formulated as ILP problems with a very large (exponential) number of variables. Examples: cutting stock, crew scheduling, vehicle
More informationChapter 3: Proving NP-completeness Results
Chapter 3: Proving NP-completeness Results Six Basic NP-Complete Problems Some Techniques for Proving NP-Completeness Some Suggested Exercises 1.1 Six Basic NP-Complete Problems 3-SATISFIABILITY (3SAT)
More informationCO759: Algorithmic Game Theory Spring 2015
CO759: Algorithmic Game Theory Spring 2015 Instructor: Chaitanya Swamy Assignment 1 Due: By Jun 25, 2015 You may use anything proved in class directly. I will maintain a FAQ about the assignment on the
More informationLecture 4: NP and computational intractability
Chapter 4 Lecture 4: NP and computational intractability Listen to: Find the longest path, Daniel Barret What do we do today: polynomial time reduction NP, co-np and NP complete problems some examples
More information15.083J/6.859J Integer Optimization. Lecture 2: Efficient Algorithms and Computational Complexity
15.083J/6.859J Integer Optimization Lecture 2: Efficient Algorithms and Computational Complexity 1 Outline Efficient algorithms Slide 1 Complexity The classes P and N P The classes N P-complete and N P-hard
More informationCS 320, Fall Dr. Geri Georg, Instructor 320 NP 1
NP CS 320, Fall 2017 Dr. Geri Georg, Instructor georg@colostate.edu 320 NP 1 NP Complete A class of problems where: No polynomial time algorithm has been discovered No proof that one doesn t exist 320
More information3.10 Column generation method
3.10 Column generation method Many relevant decision-making (discrete optimization) problems can be formulated as ILP problems with a very large (exponential) number of variables. Examples: cutting stock,
More informationIE418 Integer Programming
IE418: Integer Programming Department of Industrial and Systems Engineering Lehigh University 23rd February 2005 The Ingredients Some Easy Problems The Hard Problems Computational Complexity The ingredients
More informationNonnegative Matrices I
Nonnegative Matrices I Daisuke Oyama Topics in Economic Theory September 26, 2017 References J. L. Stuart, Digraphs and Matrices, in Handbook of Linear Algebra, Chapter 29, 2006. R. A. Brualdi and H. J.
More informationTheory of Computation CS3102 Spring 2014 A tale of computers, math, problem solving, life, love and tragic death
Theory of Computation CS3102 Spring 2014 A tale of computers, math, problem solving, life, love and tragic death Nathan Brunelle Department of Computer Science University of Virginia www.cs.virginia.edu/~njb2b/theory
More informationNP-COMPLETE PROBLEMS. 1. Characterizing NP. Proof
T-79.5103 / Autumn 2006 NP-complete problems 1 NP-COMPLETE PROBLEMS Characterizing NP Variants of satisfiability Graph-theoretic problems Coloring problems Sets and numbers Pseudopolynomial algorithms
More informationCS6999 Probabilistic Methods in Integer Programming Randomized Rounding Andrew D. Smith April 2003
CS6999 Probabilistic Methods in Integer Programming Randomized Rounding April 2003 Overview 2 Background Randomized Rounding Handling Feasibility Derandomization Advanced Techniques Integer Programming
More information4. Duality Duality 4.1 Duality of LPs and the duality theorem. min c T x x R n, c R n. s.t. ai Tx = b i i M a i R n
2 4. Duality of LPs and the duality theorem... 22 4.2 Complementary slackness... 23 4.3 The shortest path problem and its dual... 24 4.4 Farkas' Lemma... 25 4.5 Dual information in the tableau... 26 4.6
More information