Lecture 4: An FPTAS for Knapsack, and K-Center
Comp 260: Advanced Algorithms Tufts University, Spring 2016
Prof. Lenore Cowen Scribe: Eric Bailey

Lecture 4: An FPTAS for Knapsack, and K-Center

1 Introduction

Definition (The Knapsack problem, restated). Given n objects {a_1, ..., a_n}, with sizes {s_1, ..., s_n} and profits {p_1, ..., p_n}, and a knapsack with capacity B, where s_i, p_i, B ∈ ℕ and s_i ≤ B for all i, find a subset of objects whose total size is bounded by B and whose total profit is maximized.

2 Hardness of Approximation

Theorem. If P ≠ NP, then no polynomial-time algorithm can guarantee a solution of value at least p* − k for the Knapsack problem, for any fixed constant k, where p* denotes the optimal profit.

Proof. Assume there exists a polynomial-time algorithm A that, on every instance of Knapsack, returns a solution of value at least p* − k for some fixed constant k > 0. We show that A can be used to construct a solution of value exactly p* in polynomial time.

Suppose we are given an instance I = {⟨a_i, s_i, p_i⟩} of Knapsack of size n and capacity B. Let I′ = {⟨a_i′, s_i′, p_i′⟩}, where a_i′ = a_i, s_i′ = s_i, p_i′ = (k+1)·p_i, and B′ = B.

Definition. A solution is feasible if it can fit in the knapsack.

Remark. A feasible solution for I′, i.e. a set of objects that fit in the knapsack, is exactly a feasible solution for I, since the sizes and capacity are unchanged.
Run algorithm A on I′, which yields a solution of value A(I′) ≥ p*_{I′} − k. Let M be the value of that same solution set on I. Then

(k+1)·M ≥ p*_{I′} − k = (k+1)·p*_I − k.

Dividing by k+1:

p*_I − M ≤ k/(k+1) < 1.

But since all profits are integral, p*_I − M = 0. Therefore M is the value of an optimal solution to the Knapsack problem, contradicting P ≠ NP. □

Definition. Let π be an optimization problem with objective function f_π and optimal solution S*. A is an approximation scheme for π if, on input (I, ε), where I is an instance of π and ε > 0 is an error parameter, it outputs a solution S such that:

f_π(I, S) ≤ (1 + ε)·f_π(I, S*) if π is a minimization problem;
f_π(I, S) ≥ (1 − ε)·f_π(I, S*) if π is a maximization problem.

Definition. A is said to be a PTAS (Polynomial-Time Approximation Scheme) if, for each fixed ε > 0, its running time is polynomial in the size of the instance I.

Definition. A is said to be an FPTAS (Fully Polynomial-Time Approximation Scheme) if the running time of A is bounded by a polynomial in both the size of I and 1/ε.

Claim. For ε < 1, there exists an algorithm for the Knapsack problem giving a solution of value at least (1 − ε)·p* in O(n³/ε) time.
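The algorithm behind this claim is developed over the next two sections: a pseudo-polynomial dynamic program, plus a rounding of the profits. As a quick preview of the rounding step, here is a small self-contained sketch; the instance, numbers, and function name are my own, purely for illustration.

```python
import math

# FPTAS-style profit rounding (previewing Section 4): drop the low-order
# part of each profit by rounding down to a multiple of k = eps * p_max / n.
def round_profits(profits, eps):
    n = len(profits)
    p_max = max(profits)
    k = eps * p_max / n
    return [math.floor(p / k) for p in profits], k

profits = [60, 100, 120]            # hypothetical instance
rounded, k = round_profits(profits, eps=0.5)

# Each item's profit shrinks by less than k when scaled back up, so any
# solution loses at most n*k = eps * p_max <= eps * p* in total profit.
for p, q in zip(profits, rounded):
    assert 0 <= p - q * k < k
```

(For real inputs one would use exact rational arithmetic for k rather than floats; the float here is only for the small demo.)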
3 An NP-hard dynamic programming algorithm for Knapsack

Let p_max = max_{i≤n} p_i be the profit of the most profitable object, and let p* denote the value of the optimal solution — the most profit we can take home in the knapsack. Then it follows that

p* ≤ n·p_max.

This is obvious, since p* is at most the sum of all n profits p_i, and this sum is in turn at most n·p_max.

For each i ∈ {1, ..., n} and p ∈ {1, ..., n·p_max}, let S_{i,p} denote a subset of {a_1, ..., a_i} whose total profit is exactly p and whose total size is minimized. Let A(i, p) denote the size of S_{i,p}, where A(i, p) = ∞ if no such set S_{i,p} exists. Thus p* can be expressed as:

p* = max{ p : A(n, p) ≤ B }.

We can use a dynamic programming algorithm that runs in O(n²·p_max) time to compute all the A(i, p) values, and then select the largest profit p whose minimum size A(n, p) is at most B, thus solving the Knapsack problem exactly.

Wait! I thought this was an NP-hard problem. Didn't we just contradict ourselves by stating a polynomial running time? Actually, no. The running time would be polynomial if p_max were polynomial with respect to n, but we are not guaranteed this. (If we were, then yes, this algorithm would run in polynomial time.) Instead, this algorithm is called pseudo-polynomial: p_max is written in binary in the input, contributing only O(log p_max) bits, so the value of p_max can be as large as 2^Θ(n) with respect to the size of the input.

3.1 Dynamic Programming Algorithm for Knapsack

Goal: compute A(i, p) for all i ∈ {1, ..., n} and p ∈ {1, ..., n·p_max} in time O(n²·p_max) using dynamic programming.

First, compute A(1, p) for each p ∈ {1, ..., n·p_max}. That's simply:

A(1, p) = s_1 if p = p_1;
A(1, p) = ∞ otherwise.

To demonstrate, picture an example knapsack with five objects A through E, each with a specified size and profit, and p_max = 3. We can store the results of the dynamic program in a table whose rows are A(1, p), A(2, p), ..., and whose number of columns is n·p_max (in this case, 5·3 = 15).

To calculate A(2, p) and the later rows, we use the following recurrence:

A(i+1, p) = min( A(i, p), s_{i+1} + A(i, p − p_{i+1}) ) if p_{i+1} ≤ p;
A(i+1, p) = A(i, p) otherwise.

Given a position in the table, the recurrence gives a choice between the value in the column directly above (calculated without taking the (i+1)-st element into consideration) and the value gotten by using the (i+1)-st element plus whatever is in the table using the first i elements to generate profit p − p_{i+1}. For example, to calculate A(3, 5) in the example table, we notice that p_3 ≤ 5, so we have a choice between the value A(2, 5), which is 9, and s_3 + A(2, 2), which equals 11. Clearly 9 is the minimum of the two and gets assigned as the value of A(3, 5).

Once the table is completely filled, we scan the profit columns from right to left looking for the first occurrence of a size ≤ B. That gives us our p*.

Also, note that we must store backpointers recording where each entry came from, in order to recover the actual set of items responsible for the values in the table. Since an entry can only be filled from the entry directly above it or from an entry above and to its left, backpointers may only point up or to the top left. A pointer to the top left indicates that we chose item i+1, and a pointer straight up indicates that we did not.

The problem with this method is that there could be very many columns given a large enough p_max. So the next question is: how do we turn this into an approximation algorithm that runs in polynomial time regardless of p_max?
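The table-filling and backpointer reconstruction described above can be sketched as follows. This is a minimal Python illustration; the function name and the five-object instance are my own (the numbers in the lecture's original table did not carry over), not from the lecture.

```python
INF = float("inf")

def knapsack_dp(sizes, profits, B):
    """Pseudo-polynomial DP: A[i][p] = minimum size of a subset of the
    first i items with total profit exactly p. Runs in O(n^2 * p_max)."""
    n = len(sizes)
    P = n * max(profits)                      # profits range over 0..n*p_max
    A = [[INF] * (P + 1) for _ in range(n + 1)]
    A[0][0] = 0
    for i in range(n):                        # fill row i+1 from row i
        for p in range(P + 1):
            A[i + 1][p] = A[i][p]             # skip item i+1 ("pointer up")
            if profits[i] <= p and A[i][p - profits[i]] + sizes[i] < A[i + 1][p]:
                A[i + 1][p] = A[i][p - profits[i]] + sizes[i]   # take item i+1
    # scan profits right-to-left for the first entry that fits in the knapsack
    best = max(p for p in range(P + 1) if A[n][p] <= B)
    # Backpointers implicitly: an entry equal to the one above it can always
    # be explained without item i+1, so equal entries count as "pointer up".
    chosen, p = [], best
    for i in range(n - 1, -1, -1):
        if A[i + 1][p] != A[i][p]:            # pointer goes up-and-left
            chosen.append(i)
            p -= profits[i]
    return best, sorted(chosen)

# hypothetical five-object instance (not the one from the lecture's table)
sizes   = [2, 3, 4, 5, 1]
profits = [3, 4, 5, 8, 1]
print(knapsack_dp(sizes, profits, B=8))       # → (12, [1, 3])
```

On this instance the scan finds optimal profit 12 (items of sizes 3 and 5), achieved within the capacity B = 8.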
4 An FPTAS for Knapsack

In this section we construct an FPTAS for Knapsack; we'll refer to it as KNAPSACK-FPTAS. To make this algorithm run in polynomial time, we will simply ignore a certain number of the least significant digits of the profits, so we will get a pretty good approximation (by only looking at the most important digits), but still not a perfect one, as we are losing information.

Steps:

1. Given ε > 0, let k = ε·p_max / n.
2. For each a_i, define p_i′ = ⌊p_i / k⌋.
3. Let I′ = (a_i′, s_i′, p_i′), where a_i′ = a_i, s_i′ = s_i, and p_i′ is as shown above.

The dynamic programming algorithm for Knapsack is then applied to the new instance I′, producing a set S′; KNAPSACK-FPTAS outputs the more profitable of S′ and S_max, where S_max consists of the smallest single object of profit p_max (feasible, since every object fits in the knapsack on its own).

Lemma. Let A denote the set output by KNAPSACK-FPTAS. Then

Profit(A) ≥ (1 − ε)·p*.

Proof. Let O denote a set with profit p* on the instance I of Knapsack. We now reason about I and the associated rounded-profit instance I′ defined above. Note that any feasible solution (meaning the items fit in the knapsack) for I corresponds to a feasible solution for I′ and vice versa, since the objects and sizes haven't changed; only the profits have changed. For a set N of objects, we denote by Profit(N) its profit under the original instance, and by Profit′(N) its profit using the new rounded profits.

For every object a, k·p_a′ can be smaller than p_a (because of the floor function) but not by more than k; this follows from the definition of p_a′. Restated:

p_a − k·p_a′ ≤ k.

Summing over the at most n objects of O:

Profit(O) − k·Profit′(O) ≤ n·k.  (1)

Now, S′ is optimal for the rounded instance, which implies

Profit′(S′) ≥ Profit′(Y), for any Y that fits in the knapsack.  (2)

Therefore:

Profit′(S′) ≥ Profit′(O).  (3)

Multiplying both sides by k, we get:

k·Profit′(S′) ≥ k·Profit′(O).  (4)

So:

Profit(S′) ≥ k·Profit′(S′)    (by definition of Profit′ and the floor)
           ≥ k·Profit′(O)     (by (4))
           ≥ Profit(O) − n·k  (by (1))
           = p* − n·k         (by definition of O)
           = p* − ε·p_max     (by definition of k).

We also know that

Profit(A) ≥ p_max  (5)

and also that

Profit(A) ≥ Profit(S′)  (6)

since A is the more profitable of these two sets. Therefore:

Profit(A) ≥ Profit(S′) ≥ p* − ε·p_max ≥ p* − ε·Profit(A)  (by (5)).

By simple algebra — (1+ε)·Profit(A) ≥ p*, and 1/(1+ε) ≥ 1 − ε — we get:

Profit(A) ≥ (1 − ε)·p*,

which completes our proof of the lemma.

4.1 Proof of Polynomial Time Execution

Theorem. KNAPSACK-FPTAS is an FPTAS.

Proof. By the lemma, the solution is within a factor of 1 − ε of p*. By the definition of k, the running time is:
O(n² · (p_max / k)) = O(n² · (n/ε)) = O(n³/ε).

QED.

Note: the smaller the ε — that is, the closer you want to get to p* — the more the running time inflates.

Definition. A problem Π is strongly NP-hard if every problem in NP can be polynomially reduced to Π in such a way that all numbers in the reduced instance can be written in unary.

Note: if a problem has an FPTAS, it can't be strongly NP-hard.

5 k-Center Problem

Imagine we have a complete, undirected graph where each node is a city and the edge weights are the shortest distances between these cities. We have funds to build exactly k emergency centers. The k-center problem with triangle inequality is to place our k emergency centers such that no one has to go too far to get to their closest center.

k-Center Problem.

Input: a complete undirected graph G = (V, E) whose edge weights are shortest-path distances between each pair of nodes; let D_ij denote the distance between nodes i and j. (Remember, if we start with an incomplete graph, we can make it complete by adding each missing edge (i, j) with weight D_ij equal to the length of the shortest existing path between i and j.)

Output: a subset of nodes S ⊆ V with |S| = k, such that the longest distance of a node to its closest node in S is minimized. Specifically, we want to minimize cost(S) = max_{j∈V} min_{i∈S} D_ij.

Our Approximation Algorithm:
We assume that G satisfies the triangle inequality (i.e. D_ij + D_jk ≥ D_ik for all i, j, k ∈ V). So first, we reorder the edges e_1, e_2, ..., e_m by cost, such that cost(e_1) ≤ cost(e_2) ≤ ... ≤ cost(e_m). Then we add the lightest edge e_1 and look at that graph, then add the edges e_1, e_2 and look at that graph, then add edges e_1, e_2, e_3, and so on. Let G_i = (V, E_i), where E_i = {e_1, e_2, ..., e_i}. Note that our original graph G is now G_m.

Definition. A dominating set of G is a subset S ⊆ V such that every node in V − S is adjacent to a vertex in S. (That is, for each node: either you're in the dominating set, or you have a neighbor who is.)

Claim. The optimal solution to a k-center problem is a dominating set in G. (Note that this is a trick question, because G is complete: any nonempty set of vertices in a complete graph is a dominating set!)

Claim. The optimal solution to a k-center problem in G is a dominating set in G_i for some index i ≤ m. This claim is trivially true (take i = m).

Look at the graphs G_1, G_2, G_3, ..., G_{m−2}, G_{m−1}, G_m. As we move backward from G_m, at which point do we no longer have a dominating set?

Let C* be the cost of an optimal solution to k-center in G, and let e_c be the LAST edge of cost C* in the sorted order. Remember, this edge need not be unique! Multiple edges could have the same cost, so e_c is the last edge such that cost(e_j) > cost(e_c) for all j > c. If we consider G_c, we have a graph that includes exactly the edges of cost up to C*. For example, if we want to get everyone to an emergency center in 20 minutes or less, we ignore all edges that take more than 20 minutes, and we are now considering the graph "G_20" of the remaining edges.

Claim. There is a dominating set of size k or less in G_c, and if we can find it, we have our solution to k-center.

Claim. There is no dominating set of size k or less in G_{c−1}. (For ease of argument, let's assume all edge costs are distinct.) This claim is true by contradiction:
Suppose otherwise; that is, suppose there is a dominating set of size k or less in G_{c−1}. This feasible dominating set is a solution to k-center of cost < C*, which is a contradiction, since we assumed C* is optimal.

According to the two claims above, the k-center problem with triangle inequality is equivalent to finding the smallest index i such that G_i has a dominating set of size k. Since finding a minimum dominating set is NP-hard, just like k-center, we instead approximate k-center by lower-bounding the size of the dominating set in G_i.

Definition. The square of a graph G = (V, E), denoted G² = (V, E²), has an edge between i and j if and only if there is a path of length 1 or 2 between i and j.

Notes: We can compute the square of a graph from its adjacency matrix (a path of length 2 between i and j shows up as a nonzero (i, j) entry of the squared matrix). The cube of G, denoted G³, adds an edge between i and j if there exists a path of length 1, 2, or 3 between i and j. This can be extended to create G⁴, G⁵, .... Squaring makes no difference for G itself (which is complete), but we are also going to be looking at the graphs G_c for different values of c, which are certainly not complete.

Definition. An independent set in a graph G = (V, E) is a set S ⊆ V such that for every i ∈ S, if (i, j) ∈ E, then j ∉ S.

Definition. A maximal independent set (MIS) in a graph G = (V, E) is an independent set S such that for every v ∈ V, either v ∈ S, or there exists u ∈ S such that (u, v) ∈ E.

Finding a maximum independent set of a graph is NP-hard, but finding a maximal independent set can be done in polynomial time using a simple greedy algorithm.

Lemma 1. Let H = (V, E), and let I be an independent set in H². Then |I| ≤ dom(H), where dom(H) denotes the size of a minimum-cardinality dominating set in H. (Note: dom(H) is NP-hard to compute, but the size of any independent set in H² gives a lower bound on it.)
Proof. Let D be a minimum-cardinality dominating set in H. For each vertex d ∈ D, its closed neighborhood forms a clique in H². So H² contains |D| cliques that together span all the vertices, which implies that any independent set in H² can pick at most one vertex per clique. So |I| ≤ |D|. (Intuition: if we take a vertex together with its neighbors and square the graph, we have a clique — therefore we can take at most one vertex from each of these cliques/neighborhoods.)

6 2-Approximation Algorithm for k-Center

Algorithm A:

1. Construct G_1², G_2², ..., G_m².
2. Compute a maximal independent set (MIS) L_i in each graph G_i².
3. Find the smallest index j such that |L_j| ≤ k.
4. Return L_j.

Lemma 2. For j as defined in the algorithm above, cost(e_j) ≤ C*. (Note that e_j is the most expensive edge in G_j.)

Proof. For every i < j, we have |L_i| > k, and dom(G_i) ≥ |L_i| by Lemma 1. (That is, as we go through G_1², G_2², G_3², ..., the independent sets stay larger than k until index j, so dom(G_i) > k for all i < j. Hence the first index at which a size-k dominating set — in particular, the optimal k-center solution — can appear is at least j, and therefore C* ≥ cost(e_j).)

Theorem. Algorithm A returns a solution of cost at most 2·OPT.

Proof. Observe that a maximal independent set in H² is also a dominating set in H² (any maximal independent set is a dominating set, but not vice versa). Thus L_j is a dominating set of G_j² — call it D — and every vertex is joined to a vertex of D by a path of length at most 2 in the original graph G_j.

Since cost(e_j) ≤ C* by Lemma 2, every edge e ∈ G_j has cost(e) ≤ C*, so each such path of length at most 2 uses edges of cost at most C*. Thus, by the triangle inequality, every vertex is within distance 2·C* of its closest vertex in D.

We have shown a 2-approximation to the k-center problem, but can we do any better? The answer is NO, as we show in the following section.

7 Hardness of Approximation

Theorem. Approximating the k-center problem with triangle inequality within a factor of 2 − ε is NP-hard for any ε > 0.

Proof, by reduction from Dominating Set. Given a graph G = (V, E) and an integer k, we construct an instance of k-center satisfying the triangle inequality such that if G has a dominating set of size k, then the optimal cost of the k-center instance is 1; otherwise the optimal cost is 2. We put the following weights on the edges of the complete graph on V (note that they satisfy the triangle inequality):

w(e) = 1 if e ∈ E;
w(e) = 2 if e ∉ E.

Any (2 − ε)-approximation algorithm must therefore output a solution of cost 1 when G has a dominating set of size k (since all costs are 1 or 2, and (2 − ε)·1 < 2), and a solution of cost 2 otherwise. So we could use the approximation algorithm to decide whether there is a dominating set of size k in G — an NP-hard problem. □
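To make Algorithm A of Section 6 concrete, here is a compact sketch in Python (my own illustration, not from the lecture): a brute-force graph square, the greedy maximal-independent-set routine mentioned in Section 5, and the scan over the graphs G_i². The 4-city distance matrix is a hypothetical example.

```python
from itertools import combinations

def square(n, edges):
    """Edge set of H^2: pairs joined by a path of length 1 or 2 in H."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v); adj[v].add(u)
    sq = set(edges)
    for u, v in combinations(range(n), 2):
        if adj[u] & adj[v]:                  # common neighbor => length-2 path
            sq.add((u, v))
    return sq

def greedy_mis(n, edges):
    """Greedy maximal independent set: take a vertex, discard its neighbors."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v); adj[v].add(u)
    alive, mis = set(range(n)), []
    while alive:
        v = min(alive)
        mis.append(v)
        alive -= adj[v] | {v}
    return mis

def k_center_2approx(n, D, k):
    """Algorithm A: for increasing i, return the first MIS of G_i^2 of size <= k."""
    edges = sorted((D[u][v], u, v) for u in range(n) for v in range(u + 1, n))
    for i in range(1, len(edges) + 1):
        E_i = [(u, v) for _, u, v in edges[:i]]
        L = greedy_mis(n, square(n, E_i))
        if len(L) <= k:
            return L, edges[i - 1][0]        # centers, and cost(e_j) <= OPT
    return list(range(n)), 0

# hypothetical 4-city metric (symmetric, satisfies the triangle inequality)
D = [[0, 1, 2, 3],
     [1, 0, 1, 2],
     [2, 1, 0, 1],
     [3, 2, 1, 0]]
centers, lower = k_center_2approx(4, D, k=2)
```

The second return value is cost(e_j), the lower bound on OPT from Lemma 2; the returned centers are guaranteed to be within distance 2·cost(e_j) of every city. (This sketch rebuilds each G_i² from scratch, so it is far from the most efficient implementation.)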
1 Introduction Figure 1: Famous cartoon by Garey and Johnson, 1979 We have seen the definition of a constant factor approximation algorithm. The following is something even better. 2 Approximation Schemes
More informationEfficient approximation algorithms for the Subset-Sums Equality problem
Efficient approximation algorithms for the Subset-Sums Equality problem Cristina Bazgan 1 Miklos Santha 2 Zsolt Tuza 3 1 Université Paris-Sud, LRI, bât.490, F 91405 Orsay, France, bazgan@lri.fr 2 CNRS,
More informationCMPSCI611: Three Divide-and-Conquer Examples Lecture 2
CMPSCI611: Three Divide-and-Conquer Examples Lecture 2 Last lecture we presented and analyzed Mergesort, a simple divide-and-conquer algorithm. We then stated and proved the Master Theorem, which gives
More informationClassical Complexity and Fixed-Parameter Tractability of Simultaneous Consecutive Ones Submatrix & Editing Problems
Classical Complexity and Fixed-Parameter Tractability of Simultaneous Consecutive Ones Submatrix & Editing Problems Rani M. R, Mohith Jagalmohanan, R. Subashini Binary matrices having simultaneous consecutive
More informationACO Comprehensive Exam March 17 and 18, Computability, Complexity and Algorithms
1. Computability, Complexity and Algorithms (a) Let G(V, E) be an undirected unweighted graph. Let C V be a vertex cover of G. Argue that V \ C is an independent set of G. (b) Minimum cardinality vertex
More informationLecture 4. 1 FPTAS - Fully Polynomial Time Approximation Scheme
Theory of Computer Science to Msc Students, Spring 2007 Lecturer: Dorit Aharonov Lecture 4 Scribe: Ram Bouobza & Yair Yarom Revised: Shahar Dobzinsi, March 2007 1 FPTAS - Fully Polynomial Time Approximation
More informationSpring 2017 CO 250 Course Notes TABLE OF CONTENTS. richardwu.ca. CO 250 Course Notes. Introduction to Optimization
Spring 2017 CO 250 Course Notes TABLE OF CONTENTS richardwu.ca CO 250 Course Notes Introduction to Optimization Kanstantsin Pashkovich Spring 2017 University of Waterloo Last Revision: March 4, 2018 Table
More informationTheoretical Computer Science
Theoretical Computer Science 411 (010) 417 44 Contents lists available at ScienceDirect Theoretical Computer Science journal homepage: wwwelseviercom/locate/tcs Resource allocation with time intervals
More informationCombinatorial optimization problems
Combinatorial optimization problems Heuristic Algorithms Giovanni Righini University of Milan Department of Computer Science (Crema) Optimization In general an optimization problem can be formulated as:
More informationTheory of Computation Chapter 9
0-0 Theory of Computation Chapter 9 Guan-Shieng Huang May 12, 2003 NP-completeness Problems NP: the class of languages decided by nondeterministic Turing machine in polynomial time NP-completeness: Cook
More informationLecture 2: Network Flows 1
Comp 260: Advanced Algorithms Tufts University, Spring 2011 Lecture by: Prof. Cowen Scribe: Saeed Majidi Lecture 2: Network Flows 1 A wide variety of problems, including the matching problems discussed
More informationACO Comprehensive Exam 19 March Graph Theory
1. Graph Theory Let G be a connected simple graph that is not a cycle and is not complete. Prove that there exist distinct non-adjacent vertices u, v V (G) such that the graph obtained from G by deleting
More informationExercises NP-completeness
Exercises NP-completeness Exercise 1 Knapsack problem Consider the Knapsack problem. We have n items, each with weight a j (j = 1,..., n) and value c j (j = 1,..., n) and an integer B. All a j and c j
More informationEfficient Approximation for Restricted Biclique Cover Problems
algorithms Article Efficient Approximation for Restricted Biclique Cover Problems Alessandro Epasto 1, *, and Eli Upfal 2 ID 1 Google Research, New York, NY 10011, USA 2 Department of Computer Science,
More informationPCPs and Inapproximability Gap-producing and Gap-Preserving Reductions. My T. Thai
PCPs and Inapproximability Gap-producing and Gap-Preserving Reductions My T. Thai 1 1 Hardness of Approximation Consider a maximization problem Π such as MAX-E3SAT. To show that it is NP-hard to approximation
More informationarxiv: v1 [cs.cg] 29 Jun 2012
Single-Source Dilation-Bounded Minimum Spanning Trees Otfried Cheong Changryeol Lee May 2, 2014 arxiv:1206.6943v1 [cs.cg] 29 Jun 2012 Abstract Given a set S of points in the plane, a geometric network
More informationOutline. 1 NP-Completeness Theory. 2 Limitation of Computation. 3 Examples. 4 Decision Problems. 5 Verification Algorithm
Outline 1 NP-Completeness Theory 2 Limitation of Computation 3 Examples 4 Decision Problems 5 Verification Algorithm 6 Non-Deterministic Algorithm 7 NP-Complete Problems c Hu Ding (Michigan State University)
More informationComplexity Theory of Polynomial-Time Problems
Complexity Theory of Polynomial-Time Problems Lecture 5: Subcubic Equivalences Karl Bringmann Reminder: Relations = Reductions transfer hardness of one problem to another one by reductions problem P instance
More informationChapter 8. NP and Computational Intractability
Chapter 8 NP and Computational Intractability Slides by Kevin Wayne. Copyright 2005 Pearson-Addison Wesley. All rights reserved. Acknowledgement: This lecture slide is revised and authorized from Prof.
More informationComputer Science 385 Analysis of Algorithms Siena College Spring Topic Notes: Limitations of Algorithms
Computer Science 385 Analysis of Algorithms Siena College Spring 2011 Topic Notes: Limitations of Algorithms We conclude with a discussion of the limitations of the power of algorithms. That is, what kinds
More informationApproximation Algorithms and Hardness of Approximation. IPM, Jan Mohammad R. Salavatipour Department of Computing Science University of Alberta
Approximation Algorithms and Hardness of Approximation IPM, Jan 2006 Mohammad R. Salavatipour Department of Computing Science University of Alberta 1 Introduction For NP-hard optimization problems, we
More informationThe Knapsack Problem. n items with weight w i N and profit p i N. Choose a subset x of items
Sanders/van Stee: Approximations- und Online-Algorithmen 1 The Knapsack Problem 10 15 W n items with weight w i N and profit p i N Choose a subset x of items Capacity constraint i x w i W wlog assume i
More informationIntroduction to Semidefinite Programming I: Basic properties a
Introduction to Semidefinite Programming I: Basic properties and variations on the Goemans-Williamson approximation algorithm for max-cut MFO seminar on Semidefinite Programming May 30, 2010 Semidefinite
More informationCPSC 320 (Intermediate Algorithm Design and Analysis). Summer Instructor: Dr. Lior Malka Final Examination, July 24th, 2009
CPSC 320 (Intermediate Algorithm Design and Analysis). Summer 2009. Instructor: Dr. Lior Malka Final Examination, July 24th, 2009 Student ID: INSTRUCTIONS: There are 6 questions printed on pages 1 7. Exam
More informationTopics in Theoretical Computer Science April 08, Lecture 8
Topics in Theoretical Computer Science April 08, 204 Lecture 8 Lecturer: Ola Svensson Scribes: David Leydier and Samuel Grütter Introduction In this lecture we will introduce Linear Programming. It was
More informationLecture 6: Greedy Algorithms I
COMPSCI 330: Design and Analysis of Algorithms September 14 Lecturer: Rong Ge Lecture 6: Greedy Algorithms I Scribe: Fred Zhang 1 Overview In this lecture, we introduce a new algorithm design technique
More informationIE 5531: Engineering Optimization I
IE 5531: Engineering Optimization I Lecture 7: Duality and applications Prof. John Gunnar Carlsson September 29, 2010 Prof. John Gunnar Carlsson IE 5531: Engineering Optimization I September 29, 2010 1
More information2. A vertex in G is central if its greatest distance from any other vertex is as small as possible. This distance is the radius of G.
CME 305: Discrete Mathematics and Algorithms Instructor: Reza Zadeh (rezab@stanford.edu) HW#1 Due at the beginning of class Thursday 01/21/16 1. Prove that at least one of G and G is connected. Here, G
More informationLecture 18: P & NP. Revised, May 1, CLRS, pp
Lecture 18: P & NP Revised, May 1, 2003 CLRS, pp.966-982 The course so far: techniques for designing efficient algorithms, e.g., divide-and-conquer, dynamic-programming, greedy-algorithms. What happens
More informationMore NP-Complete Problems
CS 473: Algorithms, Spring 2018 More NP-Complete Problems Lecture 23 April 17, 2018 Most slides are courtesy Prof. Chekuri Ruta (UIUC) CS473 1 Spring 2018 1 / 57 Recap NP: languages/problems that have
More informationCS 350 Algorithms and Complexity
CS 350 Algorithms and Complexity Winter 2019 Lecture 15: Limitations of Algorithmic Power Introduction to complexity theory Andrew P. Black Department of Computer Science Portland State University Lower
More informationBranching. Teppo Niinimäki. Helsinki October 14, 2011 Seminar: Exact Exponential Algorithms UNIVERSITY OF HELSINKI Department of Computer Science
Branching Teppo Niinimäki Helsinki October 14, 2011 Seminar: Exact Exponential Algorithms UNIVERSITY OF HELSINKI Department of Computer Science 1 For a large number of important computational problems
More informationCS 350 Algorithms and Complexity
1 CS 350 Algorithms and Complexity Fall 2015 Lecture 15: Limitations of Algorithmic Power Introduction to complexity theory Andrew P. Black Department of Computer Science Portland State University Lower
More information8. INTRACTABILITY I. Lecture slides by Kevin Wayne Copyright 2005 Pearson-Addison Wesley. Last updated on 2/6/18 2:16 AM
8. INTRACTABILITY I poly-time reductions packing and covering problems constraint satisfaction problems sequencing problems partitioning problems graph coloring numerical problems Lecture slides by Kevin
More informationNP-Completeness. Andreas Klappenecker. [based on slides by Prof. Welch]
NP-Completeness Andreas Klappenecker [based on slides by Prof. Welch] 1 Prelude: Informal Discussion (Incidentally, we will never get very formal in this course) 2 Polynomial Time Algorithms Most of the
More informationLecture 12 : Graph Laplacians and Cheeger s Inequality
CPS290: Algorithmic Foundations of Data Science March 7, 2017 Lecture 12 : Graph Laplacians and Cheeger s Inequality Lecturer: Kamesh Munagala Scribe: Kamesh Munagala Graph Laplacian Maybe the most beautiful
More information