MATH 409 LECTURES 19-21: THE KNAPSACK PROBLEM

REKHA THOMAS

Date: May 19, 2010.

We now leave the world of discrete optimization problems that can be solved in polynomial time and look at the easiest case of an integer program, called the knapsack problem.

The Knapsack Problem.
Given: nonnegative integers $c_1, c_2, \ldots, c_n$, $w_1, w_2, \ldots, w_n$, and $W$.
Find: a subset $S \subseteq \{1, \ldots, n\}$ such that $\sum_{j \in S} w_j \le W$ and $\sum_{j \in S} c_j$ is maximum.

The physical interpretation is that we have a knapsack that can carry a total weight of $W$ and can be filled with $n$ different items, where the $j$-th item has weight $w_j$ and value $c_j$. We would like to find a collection of items to put into the knapsack so that the weight capacity of the knapsack is not exceeded and the total value of its contents is maximized.

The knapsack problem can be written as a 0/1 integer program as follows:
$$\text{maximize } \sum_{j=1}^n c_j x_j \quad \text{s.t. } \sum_{j=1}^n w_j x_j \le W, \quad x_j \in \{0,1\}, \; j = 1, \ldots, n.$$

Relaxing the 0/1 constraint on the variables, we get the linear programming relaxation of the knapsack problem, usually called the fractional knapsack problem:
$$\text{maximize } \sum_{j=1}^n c_j x_j \quad \text{s.t. } \sum_{j=1}^n w_j x_j \le W, \quad 0 \le x_j \le 1, \; j = 1, \ldots, n.$$

This linear program has a very easy solution that we show below.

Proposition 1 (Dantzig 1957). Order the variables so that
$$\frac{c_1}{w_1} \ge \frac{c_2}{w_2} \ge \cdots \ge \frac{c_n}{w_n}$$
and let $k := \min\{ j \in \{1, \ldots, n\} : \sum_{i=1}^{j} w_i > W \}$. Then the optimal solution of the fractional knapsack problem is
$$x_1 = 1, \; x_2 = 1, \; \ldots, \; x_{k-1} = 1, \qquad x_k = \frac{W - \sum_{j=1}^{k-1} w_j}{w_k}, \qquad x_j = 0 \text{ for } j > k.$$

Note that $c_j / w_j$ is the value per unit weight of the $j$-th item. So we first order the items in decreasing order of value per unit weight, and the optimal solution of the fractional knapsack problem is obtained by simply putting items into the knapsack in this order, from the most valuable per unit down, until the next item no longer fits, at which point we cut that last item to fit exactly.
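As a concrete illustration of Dantzig's rule, here is a minimal Python sketch (the function name and interface are my own, not from the lecture; it assumes all $w_j > 0$):

```python
def fractional_knapsack(c, w, W):
    """Dantzig's greedy rule for the fractional knapsack problem.

    c[j], w[j]: value and weight of item j (all weights positive);
    W: capacity. Returns the optimal value and the vector x.
    """
    n = len(c)
    # Order items by decreasing value per unit weight c_j / w_j.
    order = sorted(range(n), key=lambda j: c[j] / w[j], reverse=True)
    x = [0.0] * n
    remaining = W
    for j in order:
        if w[j] <= remaining:       # item fits entirely: x_j = 1
            x[j] = 1.0
            remaining -= w[j]
        else:                       # cut the k-th item to fit, then stop
            x[j] = remaining / w[j]
            break
    value = sum(c[j] * x[j] for j in range(n))
    return value, x
```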

Exercise 2. Argue that Dantzig's proposed solution is indeed an optimal solution to the fractional knapsack problem. (First check that the proposed solution is feasible, and then argue that no other feasible solution has a higher value of $\sum_{j=1}^n c_j x_j$.)

We now turn to the original knapsack problem. First note that we can assume $w_j \le W$ for all $j = 1, \ldots, n$, since otherwise the $j$-th item does not fit in the knapsack and would never be included in any solution, forcing $x_j = 0$ always.

As a start, we show that it is very easy to come up with a feasible solution of the knapsack problem whose objective function value is at least half the optimal value of the knapsack problem. This is an example of an approximation algorithm, where one is typically interested in finding a solution to a hard problem that comes within a guaranteed factor of the optimal value and can be found quickly (in polynomial time).

Proposition 3. Suppose $w_j \le W$ for all $j = 1, \ldots, n$,
$$\frac{c_1}{w_1} \ge \frac{c_2}{w_2} \ge \cdots \ge \frac{c_n}{w_n},$$
and $k := \min\{ j \in \{1, \ldots, n\} : \sum_{i=1}^{j} w_i > W \}$. Then the better of the two solutions
$$s_1: \; (x_1 = 1, \; x_2 = 1, \; \ldots, \; x_{k-1} = 1, \; x_j = 0 \text{ for } j \ge k)$$
$$s_2: \; (x_k = 1, \; x_j = 0 \text{ for } j \ne k)$$
is a feasible solution of the knapsack problem whose objective function value is at least half the optimal value of the knapsack problem.

Proof. From the optimal solution to the fractional knapsack problem, note that $C := \sum_{j=1}^k c_j$ is an upper bound on the optimal value of the fractional knapsack problem (the fractional optimum is $\sum_{j=1}^{k-1} c_j + x_k c_k$ with $x_k < 1$) and hence also on the optimal value of the knapsack problem. Now note that $C$ is the sum of the objective function values of $s_1$ and $s_2$. Therefore, at least one of the two summands is at least $C/2$. This means that the better solution (with respect to $\sum_{j=1}^n c_j x_j$) has objective function value at least half the optimal value of the knapsack problem. □
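A short Python sketch of this better-of-two heuristic (names are mine; as in the proposition, it assumes each item fits on its own, i.e. $w_j \le W$, and all weights are positive):

```python
def knapsack_2approx(c, w, W):
    """Better-of-two heuristic from Proposition 3.

    Assumes w[j] <= W for all j. Returns a feasible item set
    whose total value is at least half the optimal value.
    """
    n = len(c)
    order = sorted(range(n), key=lambda j: c[j] / w[j], reverse=True)
    prefix, total = [], 0
    k = None
    for j in order:
        if total + w[j] > W:
            k = j                   # first item that overflows: item k
            break
        prefix.append(j)
        total += w[j]
    if k is None:                   # everything fits: prefix is optimal
        return prefix
    s1_value = sum(c[j] for j in prefix)  # s1: items 1, ..., k-1
    s2_value = c[k]                       # s2: item k alone
    return prefix if s1_value >= s2_value else [k]
```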

Exercise 4. Let $G = (V, E)$ be a graph and suppose we are interested in the problem of finding a cut in $G$ of largest size. This means that we want a partition of $V$ into two sets $S$ and $V \setminus S$ such that the number of edges between $S$ and $V \setminus S$ is as large as possible. This is a very difficult problem in discrete optimization, called the max cut problem in $G$. However, it is not hard to come up with a cut in the graph that is guaranteed to have at least half as many edges as the max cut, and this exercise walks you through the approximation procedure. Suppose you have two colors, red and blue, to color the vertices of $G$. The coloring is done as follows. Start at any vertex and color it one of the two colors; call this vertex $v_1$. To color the $i$-th vertex $v_i$, use the following rule: let $b_i$ be the number of already-colored neighbors of $v_i$ that are blue and $r_i$ the number that are red. If $b_i \ge r_i$, color $v_i$ red; otherwise, color $v_i$ blue. Argue that at the end, at least half the edges in the graph have one red endpoint and one blue endpoint. This produces a cut in $G$ (induced by the partition of the vertices into red and blue vertices) of size at least half the total number of edges in the graph, and hence at least half the size of a max cut.
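For concreteness, here is a small Python sketch of this greedy coloring (the interface is hypothetical; the graph is given as an adjacency list mapping each vertex to its neighbors):

```python
def greedy_cut(adj):
    """Greedy 2-approximation for max cut from Exercise 4.

    adj: dict mapping each vertex to an iterable of its neighbors.
    Returns the set of red vertices; (red, V \\ red) is the cut.
    """
    color = {}                      # vertex -> "red" or "blue"
    for v in adj:                   # any fixed vertex order works
        blue = sum(1 for u in adj[v] if color.get(u) == "blue")
        red = sum(1 for u in adj[v] if color.get(u) == "red")
        # Join the side that cuts more of the already-colored neighbors.
        color[v] = "red" if blue >= red else "blue"
    return {v for v, side in color.items() if side == "red"}
```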

We now write down an algorithm to compute the optimal solution of the knapsack problem.

Dynamic Programming Knapsack Algorithm.
Let $C$ be an upper bound on the optimal value of the knapsack problem; for instance, take $C := \sum_{j=1}^n c_j$.

(1) Set $x(0,0) := 0$ and $x(0,k) := \infty$ for all $k = 1, \ldots, C$.
(2) For $j = 1, \ldots, n$ do:
    For $k = 0, \ldots, C$ do: set $s(j,k) := 0$ and $x(j,k) := x(j-1, k)$.
    For $k = c_j, \ldots, C$ do:
        If $x(j-1, k - c_j) + w_j \le \min\{W, x(j,k)\}$, then set $x(j,k) := x(j-1, k - c_j) + w_j$ and $s(j,k) := 1$.
(3) Let $k := \max\{ i \in \{0, \ldots, C\} : x(n,i) \text{ is finite} \}$. Set $S := \emptyset$.
    For $j = n, \ldots, 1$ do: if $s(j,k) = 1$, then set $S := S \cup \{j\}$ and $k := k - c_j$.

The optimal solution of the knapsack problem is given by choosing the items in $S$.

Exercise 5. Run the above algorithm on the following problem:
$$\max \; 3x_1 + 2x_2 + x_3 + x_4 + x_5 \quad \text{s.t. } x_1 + 2x_2 + 3x_3 + 4x_4 + 5x_5 \le 13, \quad x_1, \ldots, x_5 \in \{0,1\}.$$
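To complement the pseudocode, here is a Python sketch of the same dynamic program (the function name is mine); the table $x(j,k)$ carries the interpretation given in the proof of Theorem 6 below. Calling knapsack_dp([3, 2, 1, 1, 1], [1, 2, 3, 4, 5], 13) lets you check your answer to Exercise 5.

```python
import math

def knapsack_dp(c, w, W):
    """Dynamic programming knapsack algorithm from the lecture.

    c, w: lists of item values and weights; W: capacity.
    Returns an optimal item set S (0-indexed). Runs in O(nC) time,
    where C = sum(c) is the upper bound on the optimal value.
    """
    n = len(c)
    C = sum(c)
    # x[j][k] = min weight of a subset of the first j items with value k.
    x = [[math.inf] * (C + 1) for _ in range(n + 1)]
    s = [[0] * (C + 1) for _ in range(n + 1)]
    x[0][0] = 0
    for j in range(1, n + 1):
        for k in range(C + 1):
            x[j][k] = x[j - 1][k]   # default: do not take item j
        for k in range(c[j - 1], C + 1):
            cand = x[j - 1][k - c[j - 1]] + w[j - 1]
            if cand <= min(W, x[j][k]):
                x[j][k] = cand      # taking item j is at least as light
                s[j][k] = 1
    # Best achievable value: largest k with a finite (feasible) weight.
    k = max(i for i in range(C + 1) if x[n][i] < math.inf)
    S = set()
    for j in range(n, 0, -1):       # backtrack through the choices
        if s[j][k] == 1:
            S.add(j - 1)
            k -= c[j - 1]
    return S
```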

Theorem 6. The dynamic programming algorithm finds an optimal solution to the knapsack problem in $O(nC)$ time.

Proof. The bound of $O(nC)$ steps is immediate from the for loops. To prove correctness, we interpret the variable $x(j,k)$ as the minimum total weight of a subset $S \subseteq \{1, \ldots, j\}$ with $\sum_{i \in S} c_i = k$ (with $x(j,k) = \infty$ if no such subset of weight at most $W$ exists). Check that this is true in the example you did. The algorithm computes these values by the recursion
$$x(j,k) = \begin{cases} x(j-1, k - c_j) + w_j & \text{if } c_j \le k \text{ and } x(j-1, k - c_j) + w_j \le \min\{W, x(j-1, k)\}, \\ x(j-1, k) & \text{otherwise,} \end{cases}$$
for $j = 1, \ldots, n$ and $k = 0, \ldots, C$. The variable $s(j,k)$ records which of the two cases occurs. In effect, the algorithm enumerates all subsets $S \subseteq \{1, \ldots, n\}$ that are feasible and not dominated by others, where a set $S'$ is dominated by a set $S$ if $\sum_{j \in S'} c_j = \sum_{j \in S} c_j$ and $\sum_{j \in S} w_j \le \sum_{j \in S'} w_j$. In step (3), the best feasible $S$ is chosen. □

Note that the size of the input to the knapsack problem is $O(n \log C + n \log W)$, so $O(nC)$ is not polynomial in the size of the input.

Definition 7. Let $P$ be a decision problem or an optimization problem such that each instance $x$ consists of a list of integers. Denote by $\mathrm{largest}(x)$ the largest of these integers. An algorithm for $P$ is called pseudopolynomial if its running time is bounded by a polynomial in $\mathrm{size}(x)$ and $\mathrm{largest}(x)$.

The dynamic programming algorithm above is a pseudopolynomial time algorithm: its running time $O(nC)$ is a quadratic polynomial in $n$ and $C$, where $n$ is part of the input size and $C = \sum_{j=1}^n c_j \le n \cdot \mathrm{largest}(x)$, so $nC \le n^2 \cdot \mathrm{largest}(x)$ is bounded by a polynomial in $\mathrm{size}(x)$ and $\mathrm{largest}(x)$. A second example of a pseudopolynomial time algorithm is the algorithm that tests an integer $p$ for primality by dividing $p$ by all the integers $2, \ldots, \lfloor \sqrt{p} \rfloor$.
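To make this second example concrete, here is a sketch (my own, not from the notes) of trial-division primality testing; the loop performs about $\sqrt{p}$ divisions, which is polynomial in the value $p = \mathrm{largest}(x)$ but exponential in the input size $\mathrm{size}(x) = O(\log p)$:

```python
def is_prime(p):
    """Trial division: a pseudopolynomial primality test.

    Performs O(sqrt(p)) divisions: polynomial in largest(x) = p,
    but exponential in the bit length size(x) = O(log p).
    """
    if p < 2:
        return False
    d = 2
    while d * d <= p:               # try all divisors 2, ..., floor(sqrt(p))
        if p % d == 0:
            return False
        d += 1
    return True
```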