AM 221: Advanced Optimization, Spring 2016
Prof. Yaron Singer
Lecture 17, March 30th

1 Overview

In the previous lecture, we saw examples of combinatorial problems: the Maximal Matching problem and the Minimum Spanning Tree problem. For both problems, we saw that a natural greedy algorithm was able to find an optimal solution. Today, we will study the Knapsack problem. We will see that a simple greedy algorithm is able to compute a good approximation to the optimal solution. We will then improve on this algorithm using a new tool called dynamic programming.

2 Greediness and the Knapsack Problem

Let us first define the Knapsack problem.

Input: $n$ items, where each item $i \in [n]$ has a cost $c_i \in \mathbb{Z}_+$ and a value $v_i \in \mathbb{Z}_+$, together with a budget $B \in \mathbb{Z}_+$.

Goal: find a subset $S$ of items whose cost $c(S) = \sum_{i \in S} c_i$ is at most $B$ and whose value $v(S) = \sum_{i \in S} v_i$ is maximal.

Example. Let us consider the Knapsack instance whose items are given by the following table:

        a_1   a_2   a_3   a_4
  v_i    2     1     7     2
  c_i    1     3     2     2

and with budget $B = 4$. Consider the following three solutions:

- $\{a_1, a_2\}$: feasible, with $c(\{a_1, a_2\}) = 4$ and $v(\{a_1, a_2\}) = 3$
- $\{a_1, a_3\}$: feasible, with $c(\{a_1, a_3\}) = 3$ and $v(\{a_1, a_3\}) = 9$
- $\{a_1, a_2, a_3\}$: not feasible, with $c(\{a_1, a_2, a_3\}) = 6$ and $v(\{a_1, a_2, a_3\}) = 10$

The last solution is not feasible because it exceeds the budget. Among the first two solutions, the second is better because its value is larger. In fact, it has maximal value among all feasible solutions.
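To make the definitions concrete, here is a minimal Python sketch (not part of the original notes; the item names and the `evaluate` helper are ours) that recomputes the cost, value, and feasibility of the three candidate solutions above:

```python
# Items a_1..a_4 from the example, stored as (cost, value) pairs.
items = {"a1": (1, 2), "a2": (3, 1), "a3": (2, 7), "a4": (2, 2)}
B = 4  # budget

def evaluate(subset):
    """Return (cost, value, feasible) for a set of item names."""
    cost = sum(items[name][0] for name in subset)
    value = sum(items[name][1] for name in subset)
    return cost, value, cost <= B

for S in [{"a1", "a2"}, {"a1", "a3"}, {"a1", "a2", "a3"}]:
    print(sorted(S), evaluate(S))
# ['a1', 'a2'] (4, 3, True)
# ['a1', 'a3'] (3, 9, True)
# ['a1', 'a2', 'a3'] (6, 10, False)
```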

Mathematical Program for Knapsack. The Knapsack problem can be written as the following integer program:

$$\max_x \; \sum_{i=1}^n v_i x_i \quad \text{s.t.} \quad \sum_{i=1}^n c_i x_i \le B, \quad x_i \in \{0, 1\}$$

As we will see in section, Knapsack is an NP-complete optimization problem, and it would be too optimistic to hope to solve it in polynomial time. Instead, we turn to the problem of designing an approximation algorithm for it.

Definition 1. For a maximization problem, an algorithm is called an $\alpha$-approximation if, for any input, the algorithm returns a feasible solution $S$ such that:

$$f(S) \ge \frac{f(O)}{\alpha}$$

where $O$ is an optimal solution and $f$ evaluates the quality of a solution.

Fractional View of Knapsack. In problem set 4, you analyzed the Linear Programming relaxation of the Knapsack problem and proved that an optimal solution to the relaxation can be obtained as follows:

- Sort the items in decreasing order of density, where the density of item $i$ is defined as the ratio $v_i / c_i$ (value per cost).
- Add items to the solution one by one in this order as long as the sum of the costs does not exceed the budget. For the first item $i$ that would violate the budget, only add a fraction $x_i$ of it such that the budget constraint is tight. This fraction is defined by:

$$x_i = \frac{B - \sum_{j < i} c_j}{c_i}$$

Approximation algorithm for Knapsack. The algorithm from problem set 4 for the fractional relaxation of Knapsack suggests the following greedy algorithm for the original Knapsack problem.

Algorithm 1 Greedy algorithm for Knapsack
1: $S \leftarrow \emptyset$
2: while $c(S \cup \{\arg\max_{i \notin S} v_i / c_i\}) \le B$ do
3:   $S \leftarrow S \cup \{\arg\max_{i \notin S} v_i / c_i\}$
4: end while
5: return $S$
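In Python, Algorithm 1 can be rendered as the short sketch below (our own code, assuming items are given as parallel `costs` and `values` lists). Since densities never change, repeatedly taking the argmax over the remaining items is equivalent to sorting by density once:

```python
def greedy_knapsack(costs, values, B):
    """Algorithm 1: add items in decreasing density (v_i / c_i) order
    while the budget allows; stop at the first item that no longer fits."""
    order = sorted(range(len(costs)),
                   key=lambda i: values[i] / costs[i], reverse=True)
    S, total_cost = [], 0
    for i in order:
        if total_cost + costs[i] > B:
            break  # the argmax item exceeds the budget: the while loop ends
        S.append(i)
        total_cost += costs[i]
    return S
```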

This algorithm by itself is not enough to provide a constant approximation to the optimal solution. Indeed, consider the Knapsack instance with two items:

- item 1 has cost 1 and value 2,
- item 2 has cost $M$ and value $M$ (for some $M > 2$),

and with budget $B = M$.

The greedy algorithm will select item 1 (its value per cost is better than that of item 2) and will then return, since item 2 cannot be added without exceeding the budget. However, it is clear that the optimal solution is to select item 2. The ratio between the value of the solution returned by the greedy algorithm and the value of the optimal solution is $2/M$, which becomes arbitrarily small as $M$ grows to infinity.

However, a simple modification of Algorithm 1 gives a constant (1/2) approximation algorithm, as stated in the following theorem.

Theorem 2. Let $S^* = \arg\max\{v(S), v(a)\}$, where $S$ is the solution returned by Greedy and $a$ is the element with highest density (value per cost) that is not in $S$. Then:

$$v(S^*) \ge \frac{OPT}{2}$$

where $OPT$ denotes the value of an optimal solution.

Proof. Let $OPT_{LP}$ be the value of an optimal solution to the fractional Knapsack problem discussed above. Then $OPT \le OPT_{LP}$, since the fractional Knapsack problem is a relaxation of Knapsack. It is also clear from the construction of the fractional optimal solution that $OPT_{LP} \le v(S) + v(a)$, since only a fraction $x_a \le 1$ of the last element $a$ is added. Putting everything together, we obtain:

$$OPT \le OPT_{LP} \le v(S) + v(a) \le 2 \max\{v(S), v(a)\} = 2\, v(S^*).$$
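The modification of Theorem 2 adds only one comparison after the greedy pass. A Python sketch (ours, and assuming as in the analysis that every single item fits in the budget, i.e. $c_i \le B$ for all $i$):

```python
def modified_greedy(costs, values, B):
    """Theorem 2's 1/2-approximation: return the better of the greedy
    solution S and the highest-density item a that greedy left out."""
    order = sorted(range(len(costs)),
                   key=lambda i: values[i] / costs[i], reverse=True)
    S, total_cost, a = [], 0, None
    for i in order:
        if total_cost + costs[i] > B:
            a = i  # highest-density item that did not fit
            break
        S.append(i)
        total_cost += costs[i]
    # Assumes c_a <= B, so that {a} by itself is feasible.
    if a is not None and values[a] > sum(values[i] for i in S):
        return [a]
    return S
```

On the two-item instance above, `modified_greedy` returns item 2 and achieves value $M$ instead of 2.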

3 Dynamic Programming Algorithm for Knapsack

Let us define $S_{i,p}$ to be the subset of $\{a_1, \ldots, a_i\}$ with minimal cost whose value is exactly $p$, with cost

$$c(S_{i,p}) = \begin{cases} \infty & \text{if } S_{i,p} \text{ does not exist} \\ \sum_{a_j \in S_{i,p}} c_j & \text{otherwise} \end{cases}$$

and let us consider the following recurrence:

$$S_{1,p} = \begin{cases} \{a_1\} & \text{if } p = v_1 \\ \bot & \text{otherwise} \end{cases}$$

$$S_{i+1,p} = \begin{cases} \arg\min\big[c(S_{i,p}),\; c(\{a_{i+1}\} \cup S_{i,\,p-v_{i+1}})\big] & \text{if } v_{i+1} \le p \text{ and } S_{i,\,p-v_{i+1}} \ne \bot \\ S_{i,p} & \text{otherwise} \end{cases}$$

It is clear that this recurrence computes $S_{i,p}$ for all $i$ and $p$. Furthermore, we only need to consider values of $p$ in the range $\{0, \ldots, nV\}$, where $V = \max_{1 \le i \le n} v_i$. The optimal solution to Knapsack is then given by $\arg\max_{p : c(S_{n,p}) \le B} v(S_{n,p})$. This motivates the following dynamic programming algorithm for Knapsack.

Algorithm 2 Dynamic programming algorithm for Knapsack
1: $S_{1,p} \leftarrow \{a_1\}$ if $p = v_1$, $\bot$ otherwise
2: for $i \in \{1, \ldots, n\}$ do
3:   for $p \in \{1, \ldots, nV\}$ do
4:     $S_{i+1,p} \leftarrow \arg\min\big[c(S_{i,p}),\; c(\{a_{i+1}\} \cup S_{i,\,p-v_{i+1}})\big]$ if $v_{i+1} \le p$ and $S_{i,\,p-v_{i+1}} \ne \bot$, and $S_{i+1,p} \leftarrow S_{i,p}$ otherwise
5:   end for
6: end for
7: return $\arg\max_{p : c(S_{n,p}) \le B} v(S_{n,p})$

It follows from the discussion above that Algorithm 2 computes an optimal solution to the Knapsack problem. The running time of this algorithm is $O(n^2 V)$. Even though it appears to be a polynomial-time algorithm, it is in fact only pseudo-polynomial: indeed, as you will see in section, running time complexity is measured as a function of the size (number of bits) of the input. The value $V$ can be written using $\log V$ bits, so a polynomial-time algorithm should have complexity $O(\log^c V)$ in this parameter for some $c \ge 1$. The complexity $O(n^2 V)$ is in fact exponential in the size of the input.
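The table $S_{i,p}$ can be implemented directly. The sketch below (our own Python, not code from the lecture) keeps a one-dimensional array `cost[p]` holding the minimal cost of a subset with value exactly $p$, updated item by item, which realizes the same recurrence in $O(n^2 V)$ time:

```python
import math

def dp_knapsack(costs, values, B):
    """Section 3 DP: cost[p] = minimal cost of a subset with value
    exactly p; best[p] stores one such subset. Runs in O(n^2 V) time."""
    n, V = len(values), max(values)
    cost = [math.inf] * (n * V + 1)
    cost[0] = 0
    best = {0: []}  # best[p] = item indices of a min-cost subset of value p
    for i in range(n):
        # Decreasing p ensures item i is used at most once (0/1 knapsack).
        for p in range(n * V, values[i] - 1, -1):
            cand = cost[p - values[i]] + costs[i]
            if cand < cost[p]:
                cost[p] = cand
                best[p] = best[p - values[i]] + [i]
    # Line 7 of Algorithm 2: largest value whose minimal cost fits in B.
    p_star = max(p for p in range(n * V + 1) if cost[p] <= B)
    return best[p_star], p_star
```

On the example of Section 2, `dp_knapsack([1, 3, 2, 2], [2, 1, 7, 2], 4)` returns the subset $\{a_1, a_3\}$ (indices `[0, 2]`) with value 9, matching the optimal solution found by inspection.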

4 Polynomial-time Approximation Scheme for Knapsack

Even though the dynamic programming algorithm from Section 3 is not a polynomial-time algorithm, it is possible to transform it into a polynomial-time approximation algorithm with an arbitrarily good approximation ratio.

Algorithm 3 Polynomial-time approximation scheme for Knapsack
Require: parameter $\varepsilon > 0$
1: $k \leftarrow \varepsilon V / n$
2: for $i \in \{1, \ldots, n\}$ do
3:   $v_i' \leftarrow \lfloor v_i / k \rfloor$
4: end for
5: return the solution given by Algorithm 2 on input $(\{(c_1, v_1'), (c_2, v_2'), \ldots, (c_n, v_n')\}, B)$

Theorem 3. Algorithm 3 runs in time $O(n^3 / \varepsilon)$ and returns a solution $S$ such that:

$$v(S) \ge (1 - \varepsilon)\, OPT$$

Proof. We will first prove the running time and then the approximation ratio.

Running time. As discussed above, the running time of the dynamic programming algorithm is $O(n^2 \alpha)$, where $\alpha$ is the largest value in the input; in our case:

$$O(n^2 \alpha) = O\left(n^2 \, \frac{V}{k}\right) = O\left(n^2 \, \frac{n}{\varepsilon}\right) = O\left(\frac{n^3}{\varepsilon}\right)$$

Approximation ratio. Let $O$ be an optimal solution. Since $S$ is optimal for the rounded values $v'$ and $v_i \ge k \lfloor v_i / k \rfloor \ge v_i - k$ for every item $i$:

$$v(S) \ge k\, v'(S) \ge k\, v'(O) \ge v(O) - k|O| \ge v(O) - kn = OPT - \varepsilon V \ge (1 - \varepsilon)\, OPT$$

where the last inequality uses $OPT \ge V$, which holds as long as every single item fits in the budget.

Remark. An algorithm like Algorithm 3 is called an approximation scheme: the algorithm is parametrized by $\varepsilon$ and, for any $\varepsilon$, returns a solution which is a $(1 - \varepsilon)$-approximation to the optimal solution. It is then interesting to look at how the complexity depends on $\varepsilon$: it should grow to infinity as $\varepsilon$ converges to zero, since we don't know a polynomial-time algorithm for Knapsack (more about that in section). When the complexity is polynomial in $n$ and $1/\varepsilon$, as in Theorem 3, the scheme is called a fully polynomial-time approximation scheme (FPTAS).
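Algorithm 3 is a thin wrapper around the DP; here is a sketch on top of the `dp_knapsack` function from the previous sketch (the function name and signature are ours):

```python
def fptas_knapsack(costs, values, B, eps):
    """Algorithm 3: round values down by k = eps * V / n, then solve
    the rounded instance exactly with the DP of Algorithm 2."""
    n, V = len(values), max(values)
    k = eps * V / n
    scaled = [int(v // k) for v in values]  # v'_i = floor(v_i / k)
    S, _ = dp_knapsack(costs, scaled, B)
    # By Theorem 3, v(S) >= (1 - eps) * OPT, in time O(n^3 / eps).
    return S
```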