The Knapsack Problem. 28. April 2010.


The Knapsack Problem

[Figure: a knapsack of capacity W and items of weight 20, 10, 15, 20]

n items with weight w_i ∈ N and profit p_i ∈ N
Choose a subset x of items
Capacity constraint: ∑_{i∈x} w_i ≤ W
wlog assume ∑_i w_i > W and ∀i : w_i < W
Maximize the profit ∑_{i∈x} p_i

Optimization problem
Set of instances I
Function F that gives, for each w ∈ I, the set of feasible solutions F(w)
Goal function g that gives, for each s ∈ F(w), the value g(s)
Optimization goal: given input w, maximize or minimize the value g(s) over all s ∈ F(w)

Decision problem: given w ∈ I and k ∈ N, decide whether
OPT(w) ≤ k (minimization) or OPT(w) ≥ k (maximization),
where OPT(w) is the optimal goal-function value over all s ∈ F(w).

Quality of approximation algorithms
Recall: an approximation algorithm A producing a solution of value A(w) on input w has approximation ratio r iff
A(w) ≥ OPT(w)/r for all w ∈ I (for maximization problems), or
OPT(w) ≥ A(w)/r for all w ∈ I (for minimization problems).
How good an approximation algorithm is depends on both the value of r and its running time.

Negative result
We cannot compute, in polynomial time, a solution whose difference to the optimal profit is bounded (unless P = NP). Interestingly, the problem remains NP-hard even if all items have the same profit-to-weight ratio!

Reminder?: Linear Programming
Definition: a linear program with n variables and m constraints is specified by the following minimization problem.
Cost function f(x) = c·x, where c is called the cost vector
m constraints of the form a_i·x ⋈_i b_i, where ⋈_i ∈ {≤, ≥, =} and a_i ∈ R^n
The feasible region is L = { x ∈ R^n : x ≥ 0 ∧ ∀ 1 ≤ i ≤ m : a_i·x ⋈_i b_i }.
Let a_ij denote the j-th component of the vector a_i.

Complexity
Theorem: a linear program can be solved in polynomial time.
The worst-case bounds are rather high, however,
and the algorithm used in practice (the simplex algorithm) can take exponential time in the worst case.
Reuse of existing solvers is therefore not only possible but almost necessary.

Integer Linear Programming
ILP: Integer Linear Program, a linear program with the additional constraint that all x_i ∈ Z
Linear relaxation: remove the integrality constraints from an ILP

Example: The Knapsack Problem
maximize p·x
subject to w·x ≤ W, x_i ∈ {0, 1} for 1 ≤ i ≤ n.
x_i = 1 iff item i is put into the knapsack.
0/1 variables are typical for ILPs.

Linear relaxation for the knapsack problem
maximize p·x
subject to w·x ≤ W, 0 ≤ x_i ≤ 1 for 1 ≤ i ≤ n.
We allow items to be picked fractionally:
x_1 = 1/3 means that 1/3 of item 1 is put into the knapsack.
This makes the problem much easier. How would you solve it?

How to Cope with ILPs
− Solving ILPs is NP-hard
+ Powerful modeling language
+ There are generic methods that sometimes work well
+ Many ways to get approximate solutions
+ The solution of the linear relaxation helps; for example, sometimes we can simply round.

Linear-Time Algorithm for the Linear Relaxation of Knapsack
Classify the elements by profit density p_i/w_i into B, {k}, S such that
∀ i ∈ B, j ∈ S : p_i/w_i ≥ p_k/w_k ≥ p_j/w_j, and
∑_{i∈B} w_i ≤ W, but w_k + ∑_{i∈B} w_i > W.
Set
x_i = 1 if i ∈ B
x_i = (W − ∑_{i∈B} w_i) / w_k if i = k
x_i = 0 if i ∈ S
[Figure: knapsack of capacity W filled with the items of B, the break item k partially on top, S left out]
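The classification above can be sketched in a few lines of Python. For simplicity this sketch sorts by profit density (O(n log n)); the linear time claimed on the slide needs weighted median selection instead. The function name is illustrative:

```python
def fractional_knapsack(p, w, W):
    """Optimal solution of the linear relaxation: returns (B, k, x_k)."""
    # Sort items by profit density p_i / w_i, densest first.
    order = sorted(range(len(p)), key=lambda i: p[i] / w[i], reverse=True)
    B, used = [], 0
    for k in order:
        if used + w[k] > W:
            # k is the break item: it only fits fractionally.
            return B, k, (W - used) / w[k]
        B.append(k)
        used += w[k]
    return B, None, 0.0  # everything fits (excluded by the wlog assumption)
```

On the running example p = (10, 20, 15, 20), w = (1, 3, 2, 4), W = 5, the densities are 10, 20/3, 7.5, 5, so B = {items 1, 3}, the break item is item 2, and x_k = 2/3.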

Lemma: x is an optimal solution of the linear relaxation.
Proof. Let x* denote an optimal solution.
w·x* = W (otherwise increase some x*_i).
∀ i ∈ B : x*_i = 1 (otherwise increase x*_i and decrease some x*_j for j ∈ {k} ∪ S).
∀ j ∈ S : x*_j = 0 (otherwise decrease x*_j and increase x*_k).
This only leaves x*_k = (W − ∑_{i∈B} w_i) / w_k.

Lemma: opt ≤ ∑_i x_i p_i ≤ 2·opt.
Proof. The left inequality holds because the relaxation can only improve on the 0/1 optimum. For the right one, we have ∑_{i∈B} p_i ≤ opt. Furthermore, since w_k < W, also p_k ≤ opt. We get
∑_i x_i p_i ≤ ∑_{i∈B} p_i + p_k ≤ opt + opt = 2·opt.

Two-approximation of Knapsack
Recall the relaxation's solution: x_i = 1 for i ∈ B, x_k = (W − ∑_{i∈B} w_i)/w_k, x_i = 0 for i ∈ S.
Exercise: prove that either B or {k} is a 2-approximation of the (non-relaxed) knapsack problem.
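The exercise's claim can be tried out directly. A minimal sketch (illustrative names), reusing the greedy classification into B and the break item k, and returning the better of the two candidate solutions:

```python
def greedy_2_approx(p, w, W):
    """Return the better of B and {k}: a 2-approximation of 0/1 knapsack."""
    order = sorted(range(len(p)), key=lambda i: p[i] / w[i], reverse=True)
    B, used, k = [], 0, None
    for i in order:
        if used + w[i] > W:
            k = i  # break item
            break
        B.append(i)
        used += w[i]
    if k is None or sum(p[i] for i in B) >= p[k]:
        return B
    return [k]
```

On the example instance p = (10, 20, 15, 20), w = (1, 3, 2, 4), W = 5 this returns B = {items 1, 3} with profit 25, which is indeed at least half the optimum 35.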

Dynamic Programming: Building it Piece by Piece
Principle of optimality:
An optimal solution can be viewed as composed of optimal solutions to subproblems.
Solutions with the same objective value are interchangeable.
Example: shortest paths
Any subpath of a shortest path is a shortest path.
Shortest subpaths are interchangeable.
[Figure: a path s - u - v - t]

Dynamic Programming by Capacity for the Knapsack Problem
Define P(i, C) = the optimal profit obtainable from items 1,...,i using capacity C.
Lemma: ∀ 1 ≤ i ≤ n : P(i, C) = max(P(i−1, C), P(i−1, C−w_i) + p_i)
Of course, the second term only applies for C large enough: we must have C ≥ w_i.

Lemma: ∀ 1 ≤ i ≤ n : P(i, C) = max(P(i−1, C), P(i−1, C−w_i) + p_i)
Proof. To prove: P(i, C) ≤ max(P(i−1, C), P(i−1, C−w_i) + p_i).
Assume the contrary: there is an x that is optimal for the subproblem such that
P(i−1, C) < p·x and P(i−1, C−w_i) + p_i < p·x.
Case x_i = 0: x is also feasible for P(i−1, C). Hence P(i−1, C) ≥ p·x. Contradiction.
Case x_i = 1: setting x_i = 0, we get a feasible solution x′ for P(i−1, C−w_i) with profit p·x′ = p·x − p_i. Adding p_i again contradicts P(i−1, C−w_i) + p_i < p·x.

Computing P(i, C) bottom-up:

Procedure knapsack(p, w, n, W)
  array P[0...W] = [0,...,0]
  bitarray decision[1...n, 0...W] = [(0,...,0),...,(0,...,0)]
  for i := 1 to n do
    // invariant: ∀ C ∈ {1,...,W} : P[C] = P(i−1, C)
    for C := W downto w_i do
      if P[C − w_i] + p_i > P[C] then
        P[C] := P[C − w_i] + p_i
        decision[i, C] := 1

Recovering a Solution

C := W
array x[1...n]
for i := n downto 1 do
  x[i] := decision[i, C]
  if x[i] = 1 then C := C − w_i
endfor
return x

Analysis:
Time: O(nW), i.e., pseudo-polynomial
Space: W + O(n) words plus nW bits
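The two pseudocode fragments above translate almost line by line into Python; a sketch with illustrative names, using 0-indexed items:

```python
def knapsack_by_capacity(p, w, W):
    """0/1 knapsack DP by capacity: returns (optimal profit, chosen items)."""
    n = len(p)
    P = [0] * (W + 1)                       # P[C] = best profit with capacity C
    decision = [[0] * (W + 1) for _ in range(n)]
    for i in range(n):
        # invariant before this pass: P[C] = P(i, C) over the first i items
        for C in range(W, w[i] - 1, -1):    # downward so each item is used once
            if P[C - w[i]] + p[i] > P[C]:
                P[C] = P[C - w[i]] + p[i]
                decision[i][C] = 1
    # recover a solution by walking the decisions backwards
    x, C = [], W
    for i in range(n - 1, -1, -1):
        if decision[i][C]:
            x.append(i)
            C -= w[i]
    return P[W], x[::-1]
```

On the example instance of the next slide, p = (10, 20, 15, 20), w = (1, 3, 2, 4), W = 5, this yields profit 35 with items 2 and 3 (0-indexed: 1 and 2).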

Example: A Knapsack Instance
maximize (10, 20, 15, 20)·x subject to (1, 3, 2, 4)·x ≤ 5
Entries in the table are P(i, C), (decision[i, C]):

i \ C |    0      1      2      3      4      5
  0   |    0      0      0      0      0      0
  1   | 0,(0) 10,(1) 10,(1) 10,(1) 10,(1) 10,(1)
  2   | 0,(0) 10,(0) 10,(0) 20,(1) 30,(1) 30,(1)
  3   | 0,(0) 10,(0) 15,(1) 25,(1) 30,(0) 35,(1)
  4   | 0,(0) 10,(0) 15,(0) 25,(0) 30,(0) 35,(0)

Each time we check whether P[C − w_i] + p_i > P[C].

Dynamic Programming by Profit for the Knapsack Problem
Define C(i, P) = the smallest capacity needed to obtain profit P from items 1,...,i.
Lemma: ∀ 1 ≤ i ≤ n : C(i, P) = min(C(i−1, P), C(i−1, P−p_i) + w_i)

Dynamic Programming by Profit
Let P̂ := p·x, where x is the optimal solution of the linear relaxation; thus P̂ is the value (profit) of that solution and an upper bound on the optimal 0/1 profit.
Time: O(nP̂), i.e., pseudo-polynomial
Space: P̂ + O(n) words plus nP̂ bits
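The profit-indexed recurrence can be sketched the same way. For simplicity this sketch uses ∑_i p_i instead of P̂ as the profit bound (a weaker but always valid upper bound); the name is illustrative:

```python
def knapsack_by_profit(p, w, W):
    """C[P] = smallest capacity reaching profit exactly P; time O(n * sum(p))."""
    total = sum(p)                    # crude upper bound on achievable profit
    INF = float("inf")
    C = [0] + [INF] * total           # profit 0 needs no capacity
    for pi, wi in zip(p, w):
        for P in range(total, pi - 1, -1):   # downward: each item used once
            if C[P - pi] + wi < C[P]:
                C[P] = C[P - pi] + wi
    # best profit whose cheapest realization fits into the knapsack
    return max(P for P in range(total + 1) if C[P] <= W)
```

On the example instance p = (10, 20, 15, 20), w = (1, 3, 2, 4), W = 5 it again returns 35.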

A Faster Algorithm
Dynamic programming only gives pseudo-polynomial running time.
A polynomial-time algorithm is impossible (unless P = NP), because the problem is NP-hard.
However, it would be possible if the numbers in the input were small (i.e., polynomial in n).
To get a good approximation in polynomial time, we therefore ignore the least significant bits of the input.

Fully Polynomial-Time Approximation Scheme
Algorithm A is a (fully) polynomial-time approximation scheme for an optimization problem Π if:
Input: instance I, error parameter ε
Output quality: f(x) ≤ (1+ε)·opt for minimization, f(x) ≥ (1−ε)·opt for maximization
Time: polynomial in |I| (and, for an FPTAS, also in 1/ε)

Example Bounds

PTAS:  n + 2^(1/ε),  n^(1/ε),  n^(42/ε³),  n + 2^(2^(1000/ε)), ...
FPTAS: n² + 1/ε,  n·log(1/ε),  n + 1/ε⁴,  n/ε, ...

Problem classes
We can classify problems according to the approximation ratios they allow:
APX: a constant approximation ratio is achievable in time polynomial in n (metric TSP, Vertex Cover)
PTAS: 1+ε is achievable in time polynomial in n for any fixed ε > 0 (Euclidean TSP)
FPTAS: 1+ε is achievable in time polynomial in n and 1/ε for any ε > 0 (Knapsack)

FPTAS → optimal solution
By choosing ε small enough, you can guarantee that the solution found is in fact optimal. The running time then depends on the size of the optimal solution, however, and is thus again not strictly polynomial (for all inputs).

FPTAS for Knapsack
Recall that p_i ∈ N for all i!

P := max_i p_i                          // maximum profit
K := εP/n                               // scaling factor
p′_i := ⌊p_i / K⌋                       // scale profits
x′ := dynamicProgrammingByProfit(p′, w, W)
output x′

FPTAS for Knapsack
[Figure: items of profit 20, 11, 16, 21 and a knapsack of capacity W]
Example: ε = 1/3, n = 4, P = 20, hence K = 5/3:
p  = (11, 20, 16, 21)
p′ = (6, 12, 9, 12)   (or, after dividing out the common factor 3, p′ = (2, 4, 3, 4))
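Putting the scaling step together with the profit-indexed DP gives a compact (1−ε)-approximation. This is a sketch with illustrative names; it uses ∑_i p′_i as the profit bound and, as a simplification not on the slide, silently drops items whose scaled profit rounds to 0:

```python
def knapsack_fptas(p, w, W, eps):
    """(1 - eps)-approximate 0/1 knapsack via profit scaling."""
    n = len(p)
    K = eps * max(p) / n                   # scaling factor K = eps * P / n
    ps = [int(pi / K) for pi in p]         # scaled profits floor(p_i / K)
    total = sum(ps)
    INF = float("inf")
    cap = [0] + [INF] * total              # cap[P] = min capacity for scaled profit P
    take = [set() for _ in range(total + 1)]
    for i in range(n):
        if ps[i] == 0:
            continue                       # simplification: drop zero-profit items
        for P in range(total, ps[i] - 1, -1):
            if cap[P - ps[i]] + w[i] < cap[P]:
                cap[P] = cap[P - ps[i]] + w[i]
                take[P] = take[P - ps[i]] | {i}
    best = max(P for P in range(total + 1) if cap[P] <= W)
    return sorted(take[best]), sum(p[i] for i in take[best])
```

On the running instance p = (10, 20, 15, 20), w = (1, 3, 2, 4), W = 5 with ε = 1/2 (so K = 2.5 and the scaled profits happen to preserve the optimum exactly), it returns items {2, 3} with profit 35.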

Lemma: p·x′ ≥ (1−ε)·opt.
Proof. Consider an optimal solution x*.
p·x* − K·p′·x* = ∑_{i∈x*} (p_i − K⌊p_i/K⌋) ≤ ∑_{i∈x*} (p_i − K(p_i/K − 1)) = |x*|·K ≤ nK,
i.e., K·p′·x* ≥ p·x* − nK. Furthermore, for any x,
K·p′·x = ∑_{i∈x} ⌊p_i/K⌋·K ≤ ∑_{i∈x} p_i = p·x.
Using that x′ is an optimal solution of the modified (scaled) problem, we get
p·x′ ≥ K·p′·x′ ≥ K·p′·x* ≥ p·x* − nK = opt − εP ≥ (1−ε)·opt.

Lemma: the running time is O(n³/ε).
Proof. The running time of the dynamic programming dominates. Recall that it is O(n·P̂′), where P̂′ = p′·x is the profit bound for the scaled instance. We have
n·P̂′ ≤ n·(n·max_i p′_i) ≤ n²·(P/K) = n²·(Pn/(εP)) = n³/ε.

A Faster FPTAS for Knapsack
Simplifying assumptions:
1/ε ∈ N: otherwise set ε := 1/⌈1/ε⌉.
An upper bound P̂ is known: use the linear relaxation to get a quick 2-approximation.
min_i p_i ≥ εP̂: treat small profits separately; for those items, greedy works well. (This costs a factor O(log(1/ε)) in time.)

A Faster FPTAS for Knapsack
M := 1/ε²;  K := P̂ε² = P̂/M
p′_i := ⌊p_i/K⌋   // now p′_i ∈ {1/ε, ..., M}
The value of an optimal solution was at most P̂ and is now at most M.
Define buckets C_j := { i ∈ 1..n : p′_i = j }.
Keep only the ⌊M/j⌋ lightest (smallest) items from each C_j.
Do dynamic programming on the remaining items.
Lemma: p·x ≥ (1−ε)·opt.
Proof. Similar to before; note that |x| ≤ 1/ε for any solution x.
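The bucket-pruning step can be sketched as follows (illustrative names; it assumes, as on the slide, that every p_i ≥ εP̂, so every bucket index j is at least 1/ε ≥ 1):

```python
from collections import defaultdict

def prune_items(p, w, eps, P_hat):
    """Keep at most floor(M/j) lightest items of each scaled-profit class j."""
    M = round(1 / eps ** 2)                # number of distinct scaled profits
    K = P_hat / M                          # scaling factor K = P_hat / M
    buckets = defaultdict(list)
    for i, (pi, wi) in enumerate(zip(p, w)):
        buckets[int(pi // K)].append((wi, i))
    keep = []
    for j, items in buckets.items():
        items.sort()                       # lightest first
        # no solution can use more than M/j items of scaled profit j
        keep.extend(i for _, i in items[: M // j])
    return sorted(keep)
```

The remaining item set has size O(M ln M), independent of n, which is what makes the O(n + Poly(1/ε)) bound on the next slide possible.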

Lemma: running time O(n + Poly(1/ε)).
Proof.
Preprocessing time: O(n)
Distinct profit values: M = 1/ε²
Remaining items: ∑_{j=1/ε}^{M} M/j ≤ M·ln M = O(log(1/ε)/ε²)
Time for the dynamic programming: O(∑_{j=1/ε}^{M} (M/j)·M) = O(log(1/ε)/ε⁴)

The Best Known FPTAS [Kellerer, Pferschy 04]
O(min{ n·log(1/ε) + log²(1/ε)/ε³, ... })
Fewer buckets C_j (nonuniform)
Sophisticated dynamic programming

Optimal Algorithms for the Knapsack Problem
The best ones work in near-linear time for almost all inputs, both in a probabilistic and in a practical sense.
[Beier, Vöcking, An Experimental Study of Random Knapsack Problems, European Symposium on Algorithms, 2004]
[Kellerer, Pferschy, Pisinger, Knapsack Problems, Springer, 2004]
Main additional tricks: reduce to a core of items with good profit density; Horowitz-Sahni decomposition for the dynamic programming.