CS 224: Advanced Algorithms, Fall 2013
Prof. Jelani Nelson
Lecture 11: October 7, 2013
Scribe: David Ding

1 Overview

In the last lecture we talked about set cover: given sets $S_1, \ldots, S_m \subseteq \{1, \ldots, n\}$, where set $S$ has cost $c_S$, the goal is to cover all of $[n]$ with minimal cost. One algorithm is the greedy algorithm: while there exists an uncovered element, pick the set $S$ minimizing

$$\frac{c_S}{\#\text{ new elements covered by } S}.$$

In this lecture we
- finish studying approximation algorithms via dual fitting,
- learn about LP integrality gaps, and
- define polynomial-time approximation schemes (PTAS, FPTAS, FPRAS).

2 Approximation algorithms via dual fitting

2.1 Greedy algorithm for set cover

Theorem 1. The greedy algorithm is an $O(\log n)$ approximation.

The unweighted case is proven in [1] and [2]. The weighted case is proven in [3].

Proof. We consider the following LP relaxation for set cover.

Primal: minimize $\sum_S c_S x_S$ subject to $\sum_{S \ni i} x_S \ge 1$ for all $i \in [n]$, and $x_S \ge 0$ for all $S$.

Dual: maximize $\sum_{e=1}^n y_e$ subject to $\sum_{e \in S} y_e \le c_S$ for all $S$, and $y_e \ge 0$ for all $e$.

We will build the primal solution, and show how we can maintain a dual solution alongside it. When we take set $S$ into our cover:

1. Set $x_S = 1$ in the primal.

2. For each newly covered element $e$, set $y_e = c_S / (\#\text{ elements newly covered by } S)$.

We see that $\mathrm{cost}_P(x) = \mathrm{cost}_D(y)$. It is clear that this procedure makes $x$ feasible.

Claim 2. $y / H_n$ is feasible, where $H_n = \sum_{i=1}^n 1/i$.

Proof. We need to show that $\sum_{e \in S} y_e \le c_S H_n$ for every $S$. Let us order the elements of $S$ by when they are first covered: $e_1, e_2, \ldots, e_k$ (where $|S| = k$). Note that $e_i$ is not necessarily covered by $S$ itself. Right before $e_i$ was covered, we could have chosen $S$ at price per element $c_S / (k - i + 1)$, so $y_{e_i} \le c_S / (k - i + 1)$. Hence,

$$\sum_{e \in S} y_e \le c_S \sum_{i=1}^k \frac{1}{k - i + 1} = c_S H_k \le c_S H_n.$$

Since $y / H_n$ is dual feasible, the greedy cost satisfies $\mathrm{cost}_P(x) = \mathrm{cost}_D(y) \le H_n \cdot \mathrm{OPT}(LP) \le H_n \cdot \mathrm{OPT}$. Hence, the greedy algorithm gives an $H_n = O(\log n)$ approximation.
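To make the dual-fitting bookkeeping concrete, here is a small Python sketch (my own illustration, not part of the original notes; the instance and names are made up). It runs greedy set cover while recording the prices $y_e$, then checks that the greedy cost equals the dual value and that $y / H_n$ is dual feasible, as in Claim 2.

```python
from fractions import Fraction

def greedy_set_cover(n, sets, costs):
    """Greedy set cover over universe {0, ..., n-1}; `sets` is a list of
    frozensets with integer costs `costs`. Also records the dual prices
    y_e from the dual-fitting proof. Returns (chosen set indices, y)."""
    uncovered = set(range(n))
    y = [Fraction(0)] * n
    chosen = []
    while uncovered:
        # Pick the set minimizing cost per newly covered element.
        best = min((i for i in range(len(sets)) if sets[i] & uncovered),
                   key=lambda i: Fraction(costs[i], len(sets[i] & uncovered)))
        newly = sets[best] & uncovered
        price = Fraction(costs[best], len(newly))
        for e in newly:              # charge c_S evenly to the new elements
            y[e] = price
        uncovered -= newly
        chosen.append(best)
    return chosen, y

n = 4
sets = [frozenset({0, 1, 2}), frozenset({2, 3}), frozenset({0, 3})]
costs = [3, 2, 2]
chosen, y = greedy_set_cover(n, sets, costs)
H_n = sum(Fraction(1, i) for i in range(1, n + 1))
assert sum(costs[i] for i in chosen) == sum(y)   # greedy cost = dual value
assert all(sum(y[e] for e in S) <= c * H_n       # y / H_n is dual feasible
           for S, c in zip(sets, costs))
```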

2.2 Vertex cover

We now consider the vertex cover problem. The input is an undirected graph $G = (V, E)$, where $|V| = n$ and $|E| = m$. We want to pick a minimal $S \subseteq V$ such that each edge $e \in E$ is incident upon at least one vertex of $S$. Note that vertex cover is a special case of set cover, where each vertex corresponds to a set, and each edge to an element of the universe.

There is a greedy algorithm for vertex cover. Namely, while there is an uncovered edge $e = (u, v)$, we add both $u$ and $v$ to the vertex cover.

Claim 3. The greedy algorithm gives a 2-approximation.

Proof. There is a simple direct proof, as detailed in the CS 124 notes. We will present another proof using the primal-dual approach. We consider the following LP relaxation:

Primal: minimize $\sum_{v=1}^n x_v$ subject to $x_u + x_v \ge 1$ for each edge $e = (u, v)$, and $x \ge 0$.

Dual: maximize $\sum_{e \in E} y_e$ subject to $\sum_{e \ni v} y_e \le 1$ for all $v \in V$, and $y \ge 0$.

We again use dual fitting. When greedy covers an edge $e = (u, v) \in E$, set $x_u = x_v = 1$, and set $y_e = 1$. These give feasible solutions (the edges greedy picks form a matching, so no dual constraint is violated). Moreover, the primal cost is at most twice the dual cost, so this is a 2-approximation.
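A minimal sketch of this primal-dual bookkeeping (again my own illustrative code): taking both endpoints of each still-uncovered edge builds a matching, and setting $y_e = 1$ on exactly those edges gives the dual solution from the proof.

```python
def vertex_cover_2approx(edges):
    """Greedy 2-approximation for vertex cover. The edges we pick form a
    matching, i.e. a feasible dual solution with y_e = 1 on each of them."""
    cover, matched = set(), []
    for (u, v) in edges:
        if u not in cover and v not in cover:   # edge still uncovered
            cover.update((u, v))                # x_u = x_v = 1
            matched.append((u, v))              # y_e = 1
    return cover, matched

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]
cover, matched = vertex_cover_2approx(edges)
assert all(u in cover or v in cover for (u, v) in edges)  # primal feasible
assert len(cover) == 2 * len(matched)   # primal cost = twice the dual value
```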

3 LP Integrality Gaps

The question we want to answer is: what is the limit of a given LP relaxation? In our proofs, we achieve cost at most $\alpha \cdot \mathrm{OPT}(LP)$. But if $\mathrm{OPT}(IP) \ge \beta \cdot \mathrm{OPT}(LP)$ for some particular instance, then we cannot hope to achieve $\alpha < \beta$ via this LP relaxation. We call $\beta$ the integrality gap.

Example 4. Consider vertex cover. We had the constraint that $x_u \ge 0$ for all $u$. This should have been $x_u \in \{0, 1\}$, but we relaxed it to $x_u \ge 0$. This can cause some problems. For instance, consider the complete graph on $n$ vertices, $K_n$. Then we can assign $x_u = 1/2$ for every $u$, and in fact this is an optimal solution. Hence, $\mathrm{OPT}(LP) = n/2$. However, with integer programming, $\mathrm{OPT}(IP) = n - 1$, since we must take all but one vertex. This gives an integrality gap of (asymptotically) $2$.

Example 5. Consider set cover. We define $n = 2^q - 1$ for some $q > 0$. The elements of the universe are in correspondence with the elements of $\mathbb{F}_2^q \setminus \{0\}$. Sets are in correspondence with elements of $\mathbb{F}_2^q$: for $\alpha \in \mathbb{F}_2^q$, let $S_\alpha = \{e : \langle \alpha, e \rangle = 1\}$.

First consider the fractional solution. Each element is contained in exactly half of the $m = 2^q$ sets. Hence, we can take $x_{S_\alpha} = 2/m$ for every $\alpha$ (so that each element $e$ is covered to total weight $(2/m) \cdot (\#\text{ sets containing } e) = 1$), and $\mathrm{OPT}(LP) \le 2$.

For the integer solution, note that a collection $S_{\alpha_1}, \ldots, S_{\alpha_k}$ is a cover iff $\bigcap_{i=1}^k \{e : \langle \alpha_i, e \rangle = 0\} = \{0\}$, a dimension-zero vector space. But for $k = q - 1$ sets, $\bigcap_{i=1}^{q-1} \{e : \langle \alpha_i, e \rangle = 0\}$ has codimension at most $q - 1$, so it has dimension at least $1$ and contains an uncovered nonzero element. Hence $\mathrm{OPT}(IP) \ge q$. We conclude that the gap is at least $q/2 = \Omega(\log n)$.

Remark 6. An integrality gap shows that the relaxation we considered cannot achieve a certain approximation factor. A stronger statement would be to show that unless P = NP, we cannot achieve a better approximation factor. Such theorems typically use the PCP theorem.

4 Polynomial time approximation schemes

We consider minimization problems.

Definition 7. A problem admits a PTAS (polynomial time approximation scheme) if for each fixed $\epsilon \in (0, 1)$ and every problem size $n$, we can achieve a $(1 + \epsilon)$-approximation in time $O(n^{f(1/\epsilon)})$.

Definition 8. A problem admits an FPTAS (fully polynomial time approximation scheme) if for each fixed $\epsilon \in (0, 1)$ and every problem size $n$, we can achieve a $(1 + \epsilon)$-approximation in time $O(\mathrm{poly}(n/\epsilon))$.

Definition 9. A problem admits an FPRAS (fully polynomial time randomized approximation scheme) if for each fixed $\epsilon \in (0, 1)$ and every problem size $n$, we can achieve a $(1 + \epsilon)$-approximation in time $O(\mathrm{poly}(n/\epsilon))$ with probability at least $2/3$.

Remark 10. By running an FPRAS multiple times and taking the median of the outputs, we can reduce the error probability to be arbitrarily small.

We will provide a PTAS/FPTAS for knapsack and an FPRAS for counting solutions to DNF formulas [4].

4.1 Knapsack Problem

We have a knapsack with capacity $W$ and are given items with weights $w_1, \ldots, w_n$ and values $v_1, \ldots, v_n$. We want to pack the knapsack to maximize the total value $\sum_{i=1}^n v_i x_i$ subject to $\sum_{i=1}^n w_i x_i \le W$ with $x_i \in \{0, 1\}$. This problem is NP-hard.

There are dynamic programming solutions for the knapsack problem. For example, there is an $O(nW)$ algorithm: let $f(i, b)$ denote the maximum value achievable using only items from $\{1, \ldots, i\}$ and capacity $b$. We set

$$f(i, b) = \begin{cases} 0 & \text{if } i = 0, \\ \max\big(f(i - 1, b),\; v_i + f(i - 1, b - w_i) \text{ if } b - w_i \ge 0\big) & \text{otherwise.} \end{cases}$$

This does not violate the NP-hardness result, since $W$ takes only $\log W$ bits to specify, so the algorithm is exponential in the size of the input.

Another dynamic programming approach gives time $O(nV)$, where $V = \sum_{i=1}^n v_i$. Namely, we define $f(i, p)$ to be the minimum weight needed to achieve value exactly $p$ using only items from $\{1, \ldots, i\}$; the answer is then the largest $p$ for which $f(n, p) \le W$.
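A direct implementation of the $O(nW)$ recurrence (an illustrative sketch; the one-dimensional table is a standard space optimization of $f(i, b)$, not something the notes spell out):

```python
def knapsack_dp_weight(values, weights, W):
    """O(nW) dynamic program. f[b] holds the best value achievable with
    capacity b using the items processed so far; iterating b downward
    ensures each item is used at most once (0/1 knapsack)."""
    f = [0] * (W + 1)
    for v, w in zip(values, weights):
        for b in range(W, w - 1, -1):
            f[b] = max(f[b], v + f[b - w])
    return f[W]

# Capacity 5: the optimum packs the items of weight 2 and 3, value 7.
assert knapsack_dp_weight([3, 4, 5], [2, 3, 4], 5) == 7
```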

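And a sketch of the $O(nV)$ value-indexed variant (again my own code): compute the minimum weight needed to achieve each exact value, then return the largest value that fits. Rescaling the values first is the standard way this DP yields an FPTAS, though that step is not shown in this excerpt.

```python
def knapsack_dp_value(values, weights, W):
    """O(nV) dynamic program. g[p] holds the minimum weight needed to
    achieve value exactly p; the answer is the largest affordable p."""
    V = sum(values)
    INF = float("inf")
    g = [0] + [INF] * V
    for v, w in zip(values, weights):
        for p in range(V, v - 1, -1):   # downward: each item used once
            g[p] = min(g[p], g[p - v] + w)
    return max(p for p in range(V + 1) if g[p] <= W)

assert knapsack_dp_value([3, 4, 5], [2, 3, 4], 5) == 7
```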
Suppose we relax the last constraint to $0 \le x_i \le 1$. Then we would sort the items in decreasing order of $v_i / w_i$ and fill the knapsack in this order (put as much of item $i$ as you can before starting to pack item $i + 1$). Run integrally, this greedy algorithm can be bad: for example, take $v_1 = 1 + \epsilon$, $w_1 = 1$, $v_2 = W$, $w_2 = W$. Greedy packs item 1 first and then has no room for item 2, achieving value $1 + \epsilon$ versus $\mathrm{OPT} = W$, so this is only a $W$-approximation.

But let us now consider a modified greedy algorithm that avoids this bad case. Namely, we take either the output of the greedy algorithm or the most valuable single item, whichever has greater value.

Claim 11. The above algorithm is a 2-approximation.

Proof. We use the following lemma.

Lemma 12. Suppose the greedy algorithm considers the items in the order $v_1/w_1 \ge v_2/w_2 \ge \cdots$, and suppose there is no room for the $k$-th item. Then $\sum_{i=1}^k v_i \ge \mathrm{OPT}$.

Proof. The sum $\sum_{i=1}^k v_i$ is at least as large as the fractional knapsack's optimal value, which is in turn at least the optimum of the integral knapsack problem.

Now, $\sum_{i=1}^k v_i = (v_1 + \cdots + v_{k-1}) + v_k \ge \mathrm{OPT}$ by the lemma. Hence, one of $v_1 + \cdots + v_{k-1}$ (at most the greedy output) or $v_k$ (at most the most valuable single item) is at least $\mathrm{OPT}/2$, as desired.

Observation 13. If no item has value $> \epsilon \cdot \mathrm{OPT}$, then the greedy algorithm achieves at least $(1 - \epsilon)\mathrm{OPT}$.

Observation 14. At most $1/\epsilon$ items in the optimal solution have value $> \epsilon \cdot \mathrm{OPT}$.

Now we present a PTAS for the knapsack problem. We first guess the set $S$ consisting of the elements of the optimal solution with value at least $\epsilon \cdot \mathrm{OPT}$; by Observation 14, $|S| \le 1/\epsilon$. We then remove all remaining items whose value is bigger than that of anything in $S$. Finally, we run the greedy algorithm on what is left, and our knapsack will contain $S$ together with whatever the greedy algorithm chooses. We can do this for all $O(n^{1/\epsilon})$ possible subsets of size at most $1/\epsilon$, so the running time is $O(n^{1/\epsilon} \cdot \mathrm{poly}(n))$. Sketches of the modified greedy algorithm and of the full scheme appear below.
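Here is an illustrative sketch of the modified greedy algorithm from Claim 11, tried on the bad instance above (the code and instance are my own, not the notes'):

```python
def greedy_by_ratio(items, W):
    """Integral greedy: consider items (value, weight) in decreasing
    value/weight order and pack each one that still fits."""
    total = 0
    for v, w in sorted(items, key=lambda it: it[0] / it[1], reverse=True):
        if w <= W:
            W -= w
            total += v
    return total

def modified_greedy(items, W):
    """2-approximation (Claim 11): the better of plain greedy and the
    single most valuable item that fits on its own."""
    best_single = max((v for v, w in items if w <= W), default=0)
    return max(greedy_by_ratio(items, W), best_single)

W, eps = 100, 0.01
items = [(1 + eps, 1), (W, W)]       # the bad instance for plain greedy
assert greedy_by_ratio(items, W) == 1 + eps   # plain greedy: ~W times off
assert modified_greedy(items, W) == W         # modified greedy: optimal
```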

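Finally, a sketch of the full PTAS just described (my own illustrative code; it interprets the removal rule as dropping every leftover item more valuable than the cheapest guessed item, and reuses `greedy_by_ratio` from the previous sketch):

```python
from itertools import combinations
from math import ceil

def knapsack_ptas(items, W, eps):
    """Guess the high-value part S of the optimal solution (|S| <= 1/eps),
    drop remaining items more valuable than min(S), and fill the leftover
    capacity greedily. Time O(n^{1/eps} * poly(n))."""
    n = len(items)
    best = 0
    for size in range(ceil(1 / eps) + 1):
        for S in combinations(range(n), size):
            wS = sum(items[i][1] for i in S)
            if wS > W:
                continue                      # guessed set does not fit
            vS = sum(items[i][0] for i in S)
            vmin = min((items[i][0] for i in S), default=float("inf"))
            rest = [items[i] for i in range(n)
                    if i not in S and items[i][0] <= vmin]
            best = max(best, vS + greedy_by_ratio(rest, W - wS))
    return best

items = [(60, 10), (100, 20), (120, 30)]
assert knapsack_ptas(items, 50, 0.5) == 220   # optimum: the last two items
```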
Claim 15. The profit achieved by this scheme is at least $(1 - \epsilon)\mathrm{OPT}$.

Proof. Suppose we guess $S$ correctly. Then $\mathrm{OPT} = v(S) + \mathrm{OPT}'$, where $v(S)$ is the value of packing $S$ and $\mathrm{OPT}'$ is the best packing of the remaining items, each of value at most $\epsilon \cdot \mathrm{OPT}$. Our scheme achieves $v(S) + (\text{greedy on the rest})$. Now let us analyze how the greedy algorithm performs. We know that $v_k \le \epsilon \cdot \mathrm{OPT}$ (recall that $v_k$ is the value of the $k$-th and last item that the fractional greedy algorithm tries to include). Also, the greedy output plus $v_k$ is at least $\mathrm{OPT}'$ by Lemma 12. Hence, greedy achieves a profit of at least $\mathrm{OPT}' - \epsilon \cdot \mathrm{OPT}$, so our scheme achieves at least $v(S) + \mathrm{OPT}' - \epsilon \cdot \mathrm{OPT} = (1 - \epsilon)\mathrm{OPT}$.

References

[1] David Johnson. Approximation algorithms for combinatorial problems. J. Comput. Syst. Sci., 9(3):256–278, 1974.

[2] László Lovász. On the ratio of optimal integral and fractional covers. Discrete Mathematics, 13(4):383–390, 1975.

[3] Vašek Chvátal. A greedy heuristic for the set-covering problem. Math. Oper. Res., 4(3):233–235, 1979.

[4] Richard Karp, Michael Luby, and Neal Madras. Monte-Carlo approximation algorithms for enumeration problems. J. Algorithms, 10(3):429–448, 1989.