Knapsack Cover Subject to a Matroid Constraint


Venkatesan T. Chakaravarthy (IBM Research, New Delhi, India, vechakra@in.ibm.com), Anamitra Roy Choudhury (IBM Research, New Delhi, India, anamchou@in.ibm.com), Sivaramakrishnan R. Natarajan (University of Washington, Seattle, USA, srk2siva@gmail.com), and Sambuddha Roy (IBM Research, New Delhi, India, sambuddha@in.ibm.com)

Abstract

We consider the Knapsack Cover problem subject to a matroid constraint. In this problem, we are given a universe U of n items, where item i has two attributes: a cost c(i) and a size s(i). We are also given a demand D and a matroid M = (U, I) on the set U. A feasible solution S to the problem is one such that (i) the cumulative size of the items chosen is at least D, and (ii) the set S is independent in the matroid M (i.e., S ∈ I). The objective is to minimize the total cost of the items selected, ∑_{i∈S} c(i). Our main result is a polynomial-time approximation scheme (PTAS) for this problem.

The problem described above falls in the realm of mixed packing-covering problems. We also consider packing extensions of certain other covering problems and prove that in such cases it is not possible to derive any constant factor approximations.

ACM Subject Classification: F.2.2 Nonnumerical Algorithms and Problems
Keywords and phrases: Approximation Algorithms, LP rounding, Matroid Constraints, Knapsack problems
33rd International Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2013).

1 Introduction

In this paper, we consider the problem of computing the minimum cost Knapsack cover subject to matroid constraints. The Knapsack Cover problem (shortened as KC) is a minimization problem that takes as input a universe U of n elements. Each element i is equipped with a cost c(i) and a size s(i). There is also a demand D; a feasible solution S is a collection of items whose cumulative size is at least D. The objective is to find a feasible solution of minimum total cost. Thus, the KC problem is the covering version of the (more usual) knapsack packing problem.

The main problem considered in this paper is the KC problem subject to so-called matroid constraints. In this scenario, in addition to the input for the KC problem described above, we are given a matroid M = (U, I), where I ⊆ 2^U is the family of independent sets (we give the formal definition of a matroid in Section 5). A set S is feasible iff (i) the cumulative size of S is at least D, and (ii) S is independent, i.e., S ∈ I. We will denote the Knapsack Cover problem subject to a Matroid constraint as the KCM problem.

The KC problem naturally arises in various applications where we have a certain demand to fulfill and a certain number of options, varying in profit (i.e., size) and cost. For instance, consider a workplace where we need a certain number of developers for a certain project, and the project manager wants to outsource this to various teams/companies, where each team

can provide a certain number of developers at a specific cost. This is precisely the knapsack cover problem discussed above. Given this context, consider the following variants. Any single company can either provide 5 developers at a cost of 100 units or 6 developers at a cost of 115 units, etc. In the framework of knapsack cover problems, we may model these different options as distinct items in the knapsack, but with a partition matroid constraint on these distinct items. In this setting, a feasible solution may pick at most one of the options from a single company; this is precisely what we want. Another (admittedly less realistic) scenario is where different companies have mutual incompatibilities; thus two companies may not be employed for the same project at the same time. Again, this corresponds to matroid constraints in addition to the demand-fulfillment constraints. Thus these are all instances of the KCM problem.

The main result of this paper is the following:

Theorem 1. There is a PTAS for the KCM problem; specifically, given any ε > 0, there is an algorithm that runs in time n^{O(1/ε)} and outputs a (1 + ε)-factor approximate solution.

Given that the KCM problem is a natural generalization of the KC problem, it is instructive to compare solution approaches for the KC problem. For the knapsack cover problem, an FPTAS is known via dynamic programming with parameter scaling. The natural LP for the KC problem has an unbounded integrality gap. Despite this, authors [4, 3] have shown how to achieve constant integrality gap by augmenting the natural LP with so-called flow-cover inequalities. Carr et al. [4] first used such LP relaxations and LP rounding to provide a 2-factor approximation for the KC problem (among other capacitated covering problems). Carnes and Shmoys [3] gave an elegant primal-dual algorithm for the same LP to also derive a 2-factor approximation.

Note that while the vanilla version of the knapsack covering problem is a covering problem, in the KCM problem we have both covering and packing constraints. Typically, such mixed packing-covering problems are harder to analyse than pure packing or pure covering problems. This is partially because, for such problems, even checking the feasibility of the constraint set can be an NP-hard problem. However, for the KCM problem, the feasibility problem is solvable in polynomial time, as we show in Section 7. To the best of our knowledge, the KCM problem has not been considered earlier in the literature.

2 Our Contribution & Techniques

We prove the following results:
- Given the knapsack cover problem with a single matroid constraint, we show a PTAS.
- Given knapsack cover with multiple matroid constraints, the feasibility problem is NP-hard. Given this, we show a bicriteria approximation guarantee: we exhibit an algorithm that outputs a nearly-feasible solution of cost at most that of the optimal solution OPT. The formal statement is given as Theorem 5 in Section 8.

Given the mixed packing/covering nature of the KCM problem, it is difficult to apply primal-dual schemas, since the dual objective function in this context has both positive and negative coefficients. Previous literature has indeed considered primal-dual schemas with dual objective functions having coefficients of either sign (for instance, see [9]). However, in such cases the primal-dual algorithms give only approximately feasible solutions.
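Before turning to combinatorial approaches, the following is a minimal sketch (not from the paper) of the pseudo-polynomial dynamic program for the plain KC problem mentioned in the introduction; the FPTAS referred to there is obtained from such a dynamic program together with parameter scaling. All names and the example instance are illustrative.

```python
def kc_min_cost(costs, sizes, D):
    """Pseudo-polynomial DP for the plain Knapsack Cover (KC) problem:
    pick a subset of items of total size >= D minimizing total cost.
    dp[d] = minimum cost needed to cover d units of demand (capped at D).
    Runs in O(n * D) time; parameter scaling turns this into an FPTAS."""
    INF = float("inf")
    dp = [INF] * (D + 1)
    dp[0] = 0
    for c, s in zip(costs, sizes):
        # iterate the demand downward so that each item is used at most once
        for d in range(D, -1, -1):
            if dp[d] == INF:
                continue
            nd = min(D, d + s)          # coverage beyond the demand D is irrelevant
            dp[nd] = min(dp[nd], dp[d] + c)
    return dp[D]

# Example: demand 4, items (cost, size) = (1,1), (2,2), (2,2), (3,3)
print(kc_min_cost([1, 2, 2, 3], [1, 2, 2, 3], 4))  # -> 4, e.g. items 1 and 4
```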
One other possibility to consider is combinatorial algorithms. For instance, there is a simple greedy algorithm for the knapsack cover (KC) problem, based on the cost-effectiveness c(i)/s(i) of an element i. The algorithm proceeds as follows: it guesses the costliest element (say, of

cost C) in the optimal cover OPT, and removes all the elements of cost larger than C. The algorithm considers the rest of the elements in increasing order of their cost-effectiveness, i.e., their c(i)/s(i) values, and continues selecting elements while the covering requirement is not met. This gives a 2-factor approximation to the KC problem. However, there are certain issues in adapting this greedy approach to the KCM problem, as we illustrate below.

For the KCM problem, if we pick elements according to non-decreasing c(i)/s(i) values, it may happen that we cannot pick any more elements without violating the matroid constraints, and yet the cumulative size of the elements picked falls short of the requisite demand D. For instance, this happens in the following scenario. We are given as input U = {1, 2, 3, 4}, with D = 4 and a cardinality matroid constraint over the whole universe with k = 2. The items, written as (cost)/(size) and ordered by their c(·)/s(·) values, are {(1 − ε)/1, 2/2, 2/2, (3 + ε)/3}. In this instance, OPT would pick either {1, 4} or {2, 3}. The greedy solution would yield {1, 2}, of total cost (3 − ε) but satisfying only 3 units of the demand.

There exists another ordering natural for the problem. Picking elements in decreasing order of their s(i)'s, while remaining feasible for the matroid constraints, is the quickest way to satisfy the knapsack covering constraint; but this may cause the cost to blow up. For the above instance, this gives the (non-optimal) solution {3, 4} of cost (5 + ε).

One natural idea, then, is to proceed along an amalgam of the two orderings. Thus, one could start out with the knapsack greedy ordering, and then, when some matroid constraint becomes tight, start swapping or exchanging items while ensuring an improvement in ∑_i s(i) x_i towards the target demand of D. We are not able to make such ideas work as of now.

On the other hand, in order to attempt LP rounding for the KCM problem, we have to surmount the obstacle of a high integrality gap. For the KC problem, [3, 4] include the exponentially many knapsack flow-cover inequalities to overcome the unbounded gap. In the KCM setting, since we already have exponentially many matroid constraints, it would be preferable not to add another collection of exponentially many constraints. As mentioned above, the natural LP for the problem has an unbounded integrality gap that it inherits from the LP for the KC problem. Nevertheless, we adopt an LP-based approach, where we use properties of the basic feasible solutions of the LP. In the case of cardinality matroids, we also show a cycle-cancelling approach to derive the desired result.

3 Other Results

For the special case of the KCM problem where the matroid is a partition matroid, we are able to show an FPTAS; this uses the dynamic programming approach along with parameter scaling. For space considerations, we defer the proof to the full version.

We consider certain other covering problems and augment such problems with matroid constraints. We demonstrate that the above positive cases are the exception rather than the rule. As a sample, we consider the problem of interval covering with a partition matroid constraint. In this problem, we are given a collection of intervals I = {I_1, I_2, ..., I_n} over a time range T = {1, 2, ..., T}. We are also given a partition P_1, P_2, ..., P_m of the intervals in I.
A feasible solution S is a collection of intervals such that (1) every timeslot in T is covered by some interval in S, and (2) at most one interval is chosen from each P_j (for 1 ≤ j ≤ m). The objective is to choose the minimum number of intervals. We prove that testing feasibility in this case is NP-hard (and so, no constant factor approximation algorithm is possible for this problem). This is shown via a reduction from the Vertex Cover problem; details are provided in Section 9. On the other hand, the problem considered with a cardinality matroid (where a feasible solution can pick at most a certain number of intervals

from I) has a dynamic programming solution that solves it in polynomial time.

It is more likely than not that augmenting a covering problem with a matroid constraint renders the feasibility problem NP-hard. As another instance, consider Vertex Cover. If we were to add a cardinality constraint, then the feasibility problem is an instance of unweighted Vertex Cover, and is NP-hard.

4 Related Work

In this paper, we consider the problem of knapsack covering subject to matroid constraints. This naturally leads us to the question of considering more general objectives; thus we might want to minimize an arbitrary submodular function subject to one knapsack covering constraint and a matroid constraint. However, note that in this case strong lower bounds exist, even if we ignore the matroid constraint. Svitkina and Fleischer [14] prove the following: given a submodular function f(S) over a universe of size n and a cardinality constraint |S| ≥ k, it is hard to approximate the minimization problem to a factor better than Ω(√(n/log n)). They call this the SML (Submodular Minimization with cardinality Lower bounds) problem. Their lower bound is even stronger: it holds even for the special case of monotone submodular functions, and they derive lower bounds for bicriteria algorithms as well.

In the recent past, there has been a surge of work in the area of submodular maximization under various constraints. There have been several papers considering the problem of maximizing a monotone submodular function subject to matroid constraints, culminating in the breakthrough result of Vondrák [15] (also see [2]). Vondrák [15] shows the optimal (1 − 1/e)-factor approximation for the problem via a continuous greedy process. Also see the paper by Filmus and Ward [7], who give an elegant non-oblivious local search technique achieving the same approximation factor. In the space of non-monotone submodular functions, a recent result of Buchbinder et al. [1] (also see [6]) gives the optimal 1/2-factor approximation for the problem of unconstrained submodular maximization. The interested reader is referred to a presentation by Vondrák [16] (see slide 45) for an overview of the results in the area of submodular maximization subject to various (knapsack, matroid) constraints and their combinations.

Another line of work considers the problem of minimizing a submodular function subject to matroid constraints. Clearly, it does not make sense to minimize a monotone submodular function subject to such packing constraints. Thus, research has focused on the problem of minimizing a symmetric submodular function subject to matroid constraints. Originating from work by Dughmi [5], Goemans and Soto [8] prove a strong result that shows that symmetric submodular functions can be minimized subject to such constraints in polynomial time! In fact, their result extends to a wider class of constraints called hereditary constraints.

Given the matroid constraints in the KCM problem, it is natural to ask about the relation between our problem and the existing literature. Here, we reiterate that the problems considered in the literature are mostly pure packing problems: for instance, submodular maximization subject to matroid constraints, or symmetric submodular minimization subject to matroid constraints. Our problem does not belong to the above frameworks because of the mixed packing-covering nature of our constraints.
4.1 Organization

We present the relevant definitions in Section 5. We prove our result for the case of a cardinality matroid in Section 6 (see Theorem 2). We build on the case of cardinality

matroids to give the proof of Theorem 1 in Section 7. We state and prove the bicriteria approximation for knapsack cover subject to multiple matroid constraints in Section 8. We conclude with discussions and open problems in Section 9.

5 Preliminaries

Sets: In this paper, we will use the following notation: given disjoint sets A and B, we will use A + B as shorthand for A ∪ B. Conversely, whenever we write A + B it will hold implicitly that the sets A and B are disjoint. We will use the letter U for the universe; the universe will typically contain n elements. Given a set A, let χ(A) denote its characteristic vector: this is a vector such that (χ(A))_i = 1 if i ∈ A and 0 otherwise. Also, given an element-wise function f with domain U, we will extend it to subsets in the natural way: f(S) = ∑_{i∈S} f(i) for S ⊆ U.

Monotone: A set function f is called monotone if f(S) ≤ f(T) whenever S ⊆ T.

Submodular: A set function f : 2^U → R_+ over a universe U is called submodular if the following holds for any two sets A, B ⊆ U: f(A) + f(B) ≥ f(A ∪ B) + f(A ∩ B).

Matroid: A matroid is a pair M = (U, I), where I ⊆ 2^U, such that
1. (Hereditary Property) B ∈ I and A ⊆ B imply A ∈ I.
2. (Extension Property) For A, B ∈ I with |A| < |B|, there exists x ∈ B \ A such that A + x ∈ I.

Matroids are generalizations of vector spaces in linear algebra and are ubiquitous in combinatorial optimization because of their connection with greedy algorithms. Typically the sets in I are called independent sets, this being an abstraction of linear independence in linear algebra. The maximal independent sets in a matroid are called the bases (again preserving the terminology from linear algebra). An important fact for matroids is that all bases have equal cardinality; this is an outcome of the Extension Property of matroids. Any matroid is equipped with a rank function r : 2^U → R_+. The rank of a subset S is defined to be the size of the largest independent set contained in S. By the Extension Property, this is well-defined. The rank function of any matroid is well known to be a monotone submodular function. See the excellent text by Schrijver [13] for details.

Two commonly encountered matroids are: (i) Cardinality Matroid: given a universe U and r ∈ N, the cardinality matroid is the matroid M = (U, I) where a set A is independent (i.e., belongs to I) iff |A| ≤ r. (ii) Partition Matroid: given a universe U, a partition of U into U_1, ..., U_t, and non-negative integers r_1, ..., r_t, the partition matroid is M = (U, I) where a set A belongs to I iff |A ∩ U_i| ≤ r_i for all i = 1, 2, ..., t.

Knapsack Cover with Matroid Constraints (KCM): We are given n items with sizes s(1), s(2), ..., s(n) and costs c(1), c(2), ..., c(n). We are also given a cumulative demand D and a matroid M. A feasible solution is a subset F such that the cumulative size of F is at least D and the set F is independent in the matroid M. The objective is to produce a feasible solution of minimum cumulative cost.

Knapsack Cover with Cardinality Matroid (KCCard): In this variant of KCM, we are given a number k and a specific subset A ⊆ {1, 2, ..., n}. A feasible solution is a subset F such that the cumulative size of F is at least D and no more than k elements are chosen from the subset A.
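To make the two example matroids above concrete, here is a minimal Python sketch (not part of the paper) of independence oracles for the cardinality and partition matroids, together with the greedy computation of the rank of a set; all function names are illustrative.

```python
from typing import Callable, Dict, Hashable, Set

def cardinality_matroid(r: int) -> Callable[[Set[Hashable]], bool]:
    """Independence oracle of the cardinality matroid: A is independent iff |A| <= r."""
    return lambda A: len(A) <= r

def partition_matroid(part_of: Dict[Hashable, int],
                      bounds: Dict[int, int]) -> Callable[[Set[Hashable]], bool]:
    """Independence oracle of the partition matroid: |A ∩ U_i| <= r_i for every part i."""
    def independent(A: Set[Hashable]) -> bool:
        counts: Dict[int, int] = {}
        for e in A:
            i = part_of[e]
            counts[i] = counts.get(i, 0) + 1
            if counts[i] > bounds[i]:
                return False
        return True
    return independent

def rank(S: Set[Hashable], independent: Callable[[Set[Hashable]], bool]) -> int:
    """Size of a largest independent subset of S, built greedily.
    Correct for any matroid oracle because of the Extension Property."""
    I: Set[Hashable] = set()
    for e in S:
        if independent(I | {e}):
            I.add(e)
    return len(I)

# Example: universe {1,...,6}, parts U_1 = {1,2,3}, U_2 = {4,5,6}, bounds r_1 = 1, r_2 = 2.
oracle = partition_matroid({1: 1, 2: 1, 3: 1, 4: 2, 5: 2, 6: 2}, {1: 1, 2: 2})
print(oracle({1, 4, 5}))            # True
print(oracle({1, 2, 4}))            # False: two elements from U_1
print(rank({1, 2, 3, 4}, oracle))   # 2 (one element of U_1 plus the element 4)
```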

6 KCM with Cardinality Matroids

In this section, we consider the Knapsack Cover problem subject to a cardinality matroid constraint. Recall that in this scenario, we are given a subset A of the universe U and we may pick at most k elements from A in a feasible solution. We use x(A) as shorthand for ∑_{i∈A} x_i. The LP is as follows:

LP 1:  minimize    ∑_i c_i x_i
       subject to  ∑_i s_i x_i ≥ D
                   x(A) ≤ k
                   0 ≤ x_i ≤ 1   for all i

We will call the constraints x_i ≥ 0 and x_i ≤ 1 trivial; the constraints ∑_i s_i x_i ≥ D and x(A) ≤ k will be called the non-trivial constraints. In order to produce feasible solutions to the above LP, it is necessary to check that the feasibility problem is solvable in polynomial time. This is easy to do for the specific case of a cardinality matroid. In fact, we will prove the result in Lemma 3 for LPs corresponding to arbitrary matroid constraints.

Theorem 2. There is a PTAS for the KCCard problem.

Proof. Let us consider a BFS solution to the above LP. It is easy to see that, since there are two non-trivial constraints on the x_i's, in a BFS solution at most 2 variables will be set fractionally and the other variables will be set integrally. Also note that the only way a BFS solution may have two fractional variables is if both the constraints ∑_i s_i x_i ≥ D and x(A) ≤ k are tight. Renaming variables, let the fractional variables be x_1 and x_2. Since the other variables are integral, the following equalities hold: s_1 x_1 + s_2 x_2 = D' and x_1 + x_2 = k', where D' and k' denote the residual demand and residual cardinality bound after accounting for the integrally set variables (in particular, k' is an integer). Clearly, because of the constraints 0 ≤ x_i ≤ 1, we have 0 ≤ k' ≤ 2. If k' = 0, then x_1 = x_2 = 0, contrary to their being fractional. Likewise, if k' = 2, then x_1 = x_2 = 1, again a contradiction. Thus the only case is k' = 1. Thus D' is a convex combination of s_1 and s_2. Without loss of generality, let s_1 ≥ D' ≥ s_2. Since the constraint x(A) ≤ k is a packing constraint, we will not be able to pick both of x_1 and x_2. We will simply pick x_1 (this keeps the constraint ∑_i s_i x_i ≥ D feasible) and set x_2 to 0. Thus, in this process we have raised at most 1 fractional variable.

We now use the idea (in the manner of [12]) of pruning. Let c_max be the cost of the costliest item in OPT. Although we do not know this item, we can guess it by running through the n possibilities. Using this guess, we can remove from the LP all items i with c_i > c_max. Thus, in the above process, when we raise x_1 from its current fractional value to 1, we increase the cost of our solution by at most c_max. Thereby the total cost of the solution generated is at most OPT + c_max ≤ 2·OPT.

In order to achieve a (1 + ε)-factor approximation for any ε > 0, we may guess all the items in OPT of cost at least ε·OPT. Note that there can be at most 1/ε such items. Let S denote this set of items, of total cost C(S). Remove the items of S from the input instance to derive a modified instance; the modified instance has its parameters D and k appropriately reduced. Note that if S is indeed the set of items in OPT of cost at least ε·OPT, then the optimal cost of the modified instance is at most OPT − C(S). Since we remove all the items of S from the LP, for every item i still remaining in the LP it holds that c_i ≤ ε·OPT. As before, the total cost of our solution is at most (OPT − C(S)) + ε·OPT + C(S) = (1 + ε)·OPT. Thus the algorithm considers all possible sets S of size at most 1/ε, and for each choice of S, solves the LP for the modified instance. The total run-time of the algorithm is n^{O(1/ε)}.

Alternative Proof

The proof given above considers a BFS of the LP and proves certain properties of the BFS solution; in this sense, it is akin to iterative rounding (see [11]). However, one can give another proof of the fact that any optimal solution to LP 1 can be modified into an optimal solution that contains at most two fractional variables. This proof goes via cycle cancelling. In the normal usage of cycle cancelling, the weight (i.e., the LP values) is shifted between two variables. However, in the current scenario we will need to shift the weight between three variables. This is the primary reason why we end up with two fractional variables. In fact, the (more combinatorially styled) cycle-cancelling technique applied to LP 1 shows that any feasible solution can be modified in this manner.

Suppose we are given a solution x to LP 1, and wlog rename the variables so that the fractional variables are x_1, x_2, ..., x_m. Given these fractional variables, we will attempt to change only the first 3 variables, by amounts δ_1, δ_2 and δ_3 (and leave the other variables, fractional or integral, unchanged). Thus the values of the variables x_1, x_2, x_3 will become (x_1 + δ_1), (x_2 + δ_2), (x_3 + δ_3). It is required that this operation does not violate the knapsack cover constraint or the cardinality constraint. We also want that, in this process, the objective function does not degrade. These conditions translate to the following inequalities for δ_1, δ_2, δ_3:

    δ_1 + δ_2 + δ_3 = 0
    s_1 δ_1 + s_2 δ_2 + s_3 δ_3 ≥ 0
    c_1 δ_1 + c_2 δ_2 + c_3 δ_3 ≤ 0

Since the system is homogeneous, the (0, 0, 0) solution is always feasible. However, in order to perform cycle cancelling, we want a non-zero δ vector. We may eliminate the variable δ_3 from the above system to get:

    (s_1 − s_3) δ_1 + (s_2 − s_3) δ_2 ≥ 0
    (c_1 − c_3) δ_1 + (c_2 − c_3) δ_2 ≤ 0

The pertinent question is whether this system of inequalities always has a non-zero solution in (δ_1, δ_2). The two inequalities correspond to two halfplanes in R^2 whose boundary lines pass through the origin; in particular, the point (0, 0) lies on both of them. The intersection of two such halfplanes is an unbounded region, and hence contains a non-zero vector. For instance, if s_1 = s_3 and c_1 = c_3, we would need to set δ_2 = 0, but we may set δ_1 = 1 (and consequently δ_3 is set to −1).

While this method works for the cardinality matroid (where we have a single constraint corresponding to the matroid constraint), we do not know how to make it work for an arbitrary matroid.
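The following is a minimal, illustrative sketch (not from the paper) of the rounding step of Theorem 2 for a single guess of the pruning threshold: it solves LP 1 with an off-the-shelf LP solver and then raises the larger-size fractional variable, as described above. It assumes SciPy is available; in the full algorithm this routine would be run once per guess of the expensive item set, keeping the cheapest resulting solution.

```python
import numpy as np
from scipy.optimize import linprog

def round_kccard(costs, sizes, D, A, k, tol=1e-9):
    """One LP-rounding pass for KCCard (knapsack cover with a cardinality
    bound k on the subset A): solve LP 1 at a vertex and round its at most
    two fractional variables by raising the one of larger size (Theorem 2).
    Returns the chosen item indices, or None if the LP is infeasible."""
    n = len(costs)
    # Both constraints in "<=" form:  -sum_i s_i x_i <= -D  and  sum_{i in A} x_i <= k.
    A_ub = np.vstack([-np.asarray(sizes, dtype=float),
                      np.array([1.0 if i in A else 0.0 for i in range(n)])])
    b_ub = np.array([-float(D), float(k)])
    res = linprog(c=np.asarray(costs, dtype=float), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0.0, 1.0)] * n, method="highs-ds")  # simplex: basic solution
    if not res.success:
        return None
    x = res.x
    frac = [i for i in range(n) if tol < x[i] < 1.0 - tol]   # at most 2 in a BFS
    if len(frac) == 2:
        i, j = frac
        hi, lo = (i, j) if sizes[i] >= sizes[j] else (j, i)
        x[hi], x[lo] = 1.0, 0.0      # raise the larger-size variable, drop the other
    elif len(frac) == 1:
        x[frac[0]] = 1.0             # costs at most one extra (pruned) item
    return [i for i in range(n) if x[i] > 1.0 - tol]

# Toy instance: demand 4, at most k = 2 items may come from A = {0, 1, 2, 3}.
# Prints a feasible set, e.g. [0, 3] or [1, 3], depending on the LP vertex found.
print(round_kccard([1, 2, 2, 3], [1, 2, 2, 3], D=4, A={0, 1, 2, 3}, k=2))
```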

7 KCM with Arbitrary Matroids

In this section, we extend the result in Theorem 2 to arbitrary matroids. Let us recall the problem: the universe U consists of n items, each item i with a cost c(i) and a size s(i). There is a minimum coverage demand D. We are also given a matroid M = (U, I). A feasible solution S to the KCM problem is one that satisfies the knapsack covering constraint and such that S ∈ I. Let r(·) denote the rank function of the matroid M. Then the LP for the KCM problem stands as follows:

LP 2:  minimize    ∑_i c(i) x_i
       subject to  ∑_i s(i) x_i ≥ D
                   x(S) ≤ r(S)   for all S ⊆ U
                   0 ≤ x_i ≤ 1   for all i

    Guess c_max, the costliest element in OPT.
    for each guess c of c_max do
        A ← {i : c(i) > c}
        Augment LP 2 with the constraints x_i = 0 for all i ∈ A
        Solve LP 2; let the (at most two) fractional variables be x_1 and x_2
        if s_1 ≥ s_2 then raise x_1 to 1 and set x_2 ← 0, else set x_1 ← 0 and raise x_2 to 1
        Let this solution be denoted by S_c
    end for
    Output the subset S_c with the minimum cost

    Figure 1: Main Algorithm.

Firstly, we show that we can solve the feasibility problem in polynomial time.

Lemma 3. The feasibility problem for LP 2 is solvable in polynomial time.

Proof. Note that the feasibility problem may be converted into the following problem (LP 2 is feasible iff the optimum value of LP 3 is at least D):

LP 3:  maximize    ∑_i s(i) x_i
       subject to  x(S) ≤ r(S)   for all S ⊆ U
                   0 ≤ x_i ≤ 1   for all i

But this is precisely the maximum weight independent set question in a matroid and is well known to be solvable in polynomial time [13].

We are now ready to prove Theorem 1.

Proof (of Theorem 1). Let x denote a BFS solution to LP 2, and let there be l fractional variables in the solution x. We will call any constraint that is not of the form x_i ≥ 0 or x_i ≤ 1 "non-trivial". We will consider the non-trivial constraints that are satisfied with equality by the solution x. Given that there are l fractional (and hence n − l integral) variables, we have that precisely l non-trivial constraints are tight; moreover, this collection of tight constraints is linearly independent. We will also assume the following normal form for the tight constraints provided by a BFS: if a variable x_i is integral (either 0 or 1) in the solution x, we consider the corresponding equation x_i = 0 or x_i = 1 as being tight. Thus, by virtue of the linear independence of the tight constraints in a BFS, any non-trivial tight constraint has to contain at least one fractional variable.

There are two cases: either the constraint ∑_i s(i) x_i ≥ D is tight or it is not. Let us consider the situation where this constraint is not tight. Then the l tight constraints are all of the form x(S) ≤ r(S) for some subset S. It is a well-known property (for instance, see [11]) that the sets corresponding to these tight constraints may be assumed to form a chain. We record this as a claim and, for completeness, prove it here. The proof uses the fact that the rank function of a matroid is submodular.

Claim 4. The linearly independent tight constraints x(S) = r(S) can be assumed to form a chain. Thus, if there are l linearly independent tight constraints, then the corresponding sets may be relabeled S_1, S_2, ..., S_l such that S_i ⊆ S_{i+1} for all 1 ≤ i ≤ (l − 1).

Proof. Consider any two tight constraints corresponding to sets S_1 and S_2; thus x(S_1) = r(S_1) and x(S_2) = r(S_2). Consider the following chain of inequalities:

    r(S_1) + r(S_2) = x(S_1) + x(S_2)                      (tightness)
                    = x(S_1 ∪ S_2) + x(S_1 ∩ S_2)
                    ≤ r(S_1 ∪ S_2) + r(S_1 ∩ S_2)
                    ≤ r(S_1) + r(S_2)                      (submodularity)

Thus equality holds throughout the chain, and so x(S_1 ∪ S_2) = r(S_1 ∪ S_2) and x(S_1 ∩ S_2) = r(S_1 ∩ S_2). This means that S_1 ∪ S_2 and S_1 ∩ S_2 are also tight sets. Note that χ(S_1) + χ(S_2) = χ(S_1 ∪ S_2) + χ(S_1 ∩ S_2). So, if S_1 and S_2 are such that S_1 \ S_2 ≠ ∅ and S_2 \ S_1 ≠ ∅, then we can replace one of the sets S_1 or S_2 in our system of linearly independent equations by S_1 ∪ S_2 or S_1 ∩ S_2. Repeating this process ensures that a maximal collection of linearly independent tight constraints forms a chain.

The sets S_i may not be equal, since that would violate the linear independence of the corresponding constraints. Also, an earlier observation implies that S_{i+1} \ S_i has to contain at least one fractional variable (and, bottoming out, this holds true for S_1 \ ∅ = S_1 too). However, since x(S_{i+1}) − x(S_i) = r(S_{i+1}) − r(S_i) is an integer, there have to be at least 2 fractional variables in S_{i+1} \ S_i. But since the sets S_{i+1} \ S_i are disjoint (for 1 ≤ i ≤ (l − 1)), we thereby collect at least 2l fractional variables. Since we started the argument with l fractional variables, this implies that l ≥ 2l, which is impossible since l ≥ 1.

Thus, the constraint ∑_i s(i) x_i ≥ D has to be tight. But we can again decompose the tight constraints of the form x(S) ≤ r(S) into a chain of (l − 1) constraints. An argument similar to the one above shows that there are at least 2(l − 1) fractional variables. Thus l ≥ 2(l − 1), and we get that l ≤ 2.

Since ∑_i s(i) x_i ≥ D is tight, there is precisely one tight constraint of the form x(S) ≤ r(S). Now we are back to the cardinality case and can mimic the proof of Theorem 2; we thereby obtain that LP 2 has at most 2 fractional variables. Let the fractional variables be x_1 and x_2 (after renaming variables). The tight constraint ∑_i s(i) x_i ≥ D simplifies to s_1 x_1 + s_2 x_2 = D', and we also have x_1 + x_2 = k' for some integer k' (where, as before, D' and k' denote the residuals after accounting for the integrally set variables). Since 0 < x_i < 1 for i = 1, 2, the only possibility is k' = 1. This implies that D' is a convex combination of s_1 and s_2, and thus one of these quantities is at least as large as D'; wlog, let s_1 ≥ D'. Thus, we can raise the variable x_1 to 1 and reduce the variable x_2 to 0.

This change in the variables x_1 and x_2 may potentially make the solution infeasible for the LP. To this end, let us consider the constraints in which the variables x_1 or x_2 appear. First, note that the constraint ∑_i s(i) x_i ≥ D remains feasible, because of the choice of the fractional variable to raise. Now consider any matroid constraint x(S) ≤ r(S). If S contains both of the items 1 and 2, then feasibility for this constraint is maintained (since x_1 + x_2 is not changed). Suppose that S contains only item 1. Then the constraint x(S) ≤ r(S) could not have been tight in the BFS solution x: this is because r(S) is an integer, whereas x(S) contains just a single fractional variable and so cannot be an integer.
Thus, raising x_1 to 1 does not violate feasibility for this constraint: since x(S) is of the form (integer) + x_1 and is strictly less than the integer r(S), its integer part is at most r(S) − 1, and so setting x_1 = 1 still gives x(S) ≤ r(S). If S contains only item 2, the argument is simpler: the value of x_2 is lowered, and so feasibility is maintained for the packing constraint x(S) ≤ r(S).

Finally, given an ε > 0, we prune the input by guessing the elements of high cost (i.e., elements of cost greater than ε·OPT) and modifying the input instance as in Theorem 2; this gives us the (1 + ε)-factor guarantee.

8 Multiple Matroids

In this section, we consider the knapsack cover problem subject to multiple matroid constraints; call this the KCMM problem. In this problem, in addition to the knapsack cover constraint ∑_i s(i) x_i ≥ D, we have t ∈ N matroid constraints. Given the matroids M_j for 1 ≤ j ≤ t, let r_j denote the rank function of matroid M_j. The IP for this problem is as follows:

IP 4:  minimize    ∑_i c(i) x_i
       subject to  ∑_i s(i) x_i ≥ D
                   x(S) ≤ r_j(S)   for all S ⊆ U and all 1 ≤ j ≤ t
                   x_i ∈ {0, 1}    for all i

The feasibility problem for this IP is the Matroid Intersection problem. For t ≥ 3 this is NP-hard: for instance, the Hamiltonian Circuit problem is a special case of the intersection of 3 matroids. This leads to the hardness of approximation of the KCMM problem: for t ≥ 3, it is impossible to achieve any factor approximation for the KCMM problem. We show that this obstacle can be overcome by considering bicriteria approximation algorithms. The main result of this section is that the KCMM problem allows good bicriteria approximations.

Theorem 5. There is a bicriteria approximation algorithm for the KCMM problem that outputs a solution S satisfying the following properties: (i) c(S) ≤ c(OPT), (ii) S satisfies all the matroid constraints, and (iii) s(S) ≥ Θ(D/t).

Proof. Consider the auxiliary IP parametrized by a quantity β, obtained by interchanging the roles of the objective and the knapsack cover constraint in IP 4:

IP 5:  maximize    ∑_i s(i) x_i
       subject to  ∑_i c(i) x_i ≤ β
                   x(S) ≤ r_j(S)   for all S ⊆ U and all 1 ≤ j ≤ t
                   x_i ∈ {0, 1}    for all i

Thus we have converted the mixed packing-covering problem IP 4 into a purely packing problem IP 5. The objective in the problem IP 5 is a submodular function (in fact a linear function), and the problem is to maximize this function subject to 1 knapsack (packing) constraint and t matroid constraints. There are efficient algorithms [10] that give O(t)-factor approximations for this problem; denote this procedure by P. We run the procedure P for various values of β in decreasing order; for each value of β, P gives an approximate solution to IP 5. We stop at the smallest value of β for which the objective value of the solution returned by P is at least Θ(D/t). Let this solution be S; we return S as the solution to IP 4, and the corresponding value of β is the objective value.

If there is a feasible solution F to IP 4 of objective value β*, then F is a feasible solution to IP 5 (with β = β*) of objective value at least D: this is because F is a feasible solution to IP 4, and so ∑_{i∈F} s(i) ≥ D. Hence procedure P on IP 5 with β = β* produces a solution F' with value ∑_{i∈F'} s(i) ≥ Θ(D/t). This proves the result.
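A minimal sketch (not from the paper) of the outer search used in the proof above. It assumes a black-box procedure packing_oracle(beta) implementing an O(t)-approximate maximizer for IP 5 (such as the algorithm of [10]); the name, the geometric budget grid, and the parameter eps are all illustrative, and the grid loses an extra (1 + eps) factor in the cost guarantee.

```python
def bicriteria_kcmm(costs, sizes, D, gap, packing_oracle, eps=0.01):
    """Bicriteria search for KCMM in the spirit of Theorem 5.
    packing_oracle(beta) is an assumed black box: it returns a set of item
    indices of total cost <= beta that respects all matroid constraints and
    whose total size is within the factor 'gap' (= O(t)) of the best possible.
    Candidate budgets beta are scanned on a geometric grid in decreasing
    order; we keep the smallest budget whose coverage is still >= D / gap."""
    best = None
    beta = float(sum(costs))                 # the largest budget ever needed
    lo = min(costs)
    while beta >= lo:
        sol = packing_oracle(beta)
        if sum(sizes[i] for i in sol) >= D / gap:
            best = (sorted(sol), beta)       # still certified; try a smaller budget
        else:
            break                            # coverage dropped below D/gap: stop
        beta /= (1.0 + eps)
    return best
```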

9 Discussion & Open Problems

In this paper we considered a mixed packing-covering problem called the KCM problem and showed that it admits a PTAS. We note that the same result applies if we replace the matroid constraints by so-called polymatroid constraints (where the constraints are x(S) ≤ f(S) for an arbitrary submodular function f instead of the rank function).

However, we do not expect (even) constant factor approximations for most covering problems with matroid constraints. This is because the feasibility problem for such a covering problem with a matroid constraint may be NP-hard. As an instance, let us consider the interval cover problem mentioned in Section 3. We will reduce the Vertex Cover problem to an interval cover instance with a partition matroid. Let the input instance be G = (V, E) and a number k; the decision problem is to determine whether G has a vertex cover of size at most k. The interval cover instance is constructed as follows. For every vertex v ∈ V, there is a long interval I_v; for different vertices v and v' the corresponding intervals are disjoint. Given an edge e = (u, v), there are two short intervals corresponding to the edge, I_{e,u} and I_{e,v}. The span of I_{e,u}, for an edge e incident on the vertex u, is contained within the interval I_u. The various intervals I_{e,u} for different e's incident on u are all disjoint (and contained in I_u); we may take the span of I_u to be exactly the union of these short intervals. The partition matroid constraints are as follows: out of all the intervals I_u (u ∈ V), we are allowed to pick at most k; and out of the intervals I_{e,u} and I_{e,v} (for e = (u, v)), we are allowed to pick at most 1.

Suppose there is a vertex cover S ⊆ V of size at most k in G. Then our feasible solution for the interval cover problem is constructed as follows: pick the interval I_u for every u ∈ S. Since S is a vertex cover, every edge e has one of its endpoints in S. Thus at least one of the short intervals I_{e,u} or I_{e,v} is not required, since the overlapping long interval I_u (or I_v) is picked. Thus the other short interval corresponding to e may be picked, while preserving the partition matroid constraint for I_{e,u} and I_{e,v}. In the other direction, consider a feasible solution F for the interval cover instance. For any edge e, at most one of the short intervals I_{e,u}, I_{e,v} may be in the feasible solution. Thus, for the short interval that is absent, say I_{e,u}, the long interval I_u has to be present in the solution. This means that the set of u's such that the interval I_u belongs to the solution F forms a vertex cover. Since feasibility stipulates that at most k of the long intervals be selected, we are able to extract a vertex cover of size at most k.
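A minimal sketch (not from the paper) of this construction, under the assumption just stated that each long interval spans exactly the union of its short intervals; the function name, interval naming scheme, and the example graph are illustrative.

```python
def vc_to_interval_cover(vertices, edges, k):
    """Build the interval-cover-with-partition-matroid instance from a
    Vertex Cover instance (G = (vertices, edges), budget k), as in Section 9.
    Each vertex v gets a long interval I_v whose span is exactly the union of
    one unit-length short interval I_{e,v} per incident edge e.
    Returns (intervals, parts): 'intervals' maps a name to an inclusive
    (start, end) span, and 'parts' lists the partition classes with bounds."""
    intervals = {}                            # name -> (start, end)
    t = 0
    for v in vertices:
        start = t
        for e in edges:
            if v in e:
                intervals[("short", e, v)] = (t, t)   # unit slot inside I_v
                t += 1
        if t > start:                         # skip isolated vertices
            intervals[("long", v)] = (start, t - 1)
    parts = []
    # At most k long intervals may be picked ...
    parts.append(([("long", v) for v in vertices if ("long", v) in intervals], k))
    # ... and at most one of the two short intervals per edge.
    for e in edges:
        u, v = e
        parts.append(([("short", e, u), ("short", e, v)], 1))
    return intervals, parts

# Tiny example: a path a-b-c, which has a vertex cover of size 1, namely {b}.
ivs, parts = vc_to_interval_cover(["a", "b", "c"], [("a", "b"), ("b", "c")], k=1)
for name, span in sorted(ivs.items(), key=lambda kv: kv[1]):
    print(name, span)
```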
Despite feasibility being the principal obstacle for covering problems with matroid constraints, some open problems do remain. The principal open question is the following. In Section 8, we considered the case of t matroid constraints along with a knapsack cover constraint. However, note that for t = 2 the feasibility problem is solvable in polynomial time (this is the Matroid Intersection problem for two matroids). So a constant factor approximation algorithm is not ruled out for t = 2.

Our solution to the KCM problem involves solving multiple LPs. The knapsack cover (KC) problem has efficient greedy algorithms. Therefore, a natural question is whether the KCM problem has efficient greedy algorithms.

Our results do not rule out an FPTAS for the KCM problem. In fact, we are able to show FPTASes for the case of the KCM problem with a partition matroid. Is there an FPTAS for the KCM problem for arbitrary matroids?

Acknowledgements. We gratefully acknowledge discussions with Amit Kumar, Rajiv Gandhi, Guy Kortsarz and Yogish Sabharwal. We also gratefully acknowledge modifications suggested by the anonymous reviewers. The submission version contained only a 2-factor approximation for the KCM problem; a reviewer noted that a slight modification

of the same technique also yields a PTAS (that is now included as Theorem 1). The same reviewer also observed that the results essentially carry over to polymatroid constraints.

References

1. Niv Buchbinder, Moran Feldman, Joseph Naor, and Roy Schwartz. A tight linear time (1/2)-approximation for unconstrained submodular maximization. In FOCS.
2. Gruia Călinescu, Chandra Chekuri, Martin Pál, and Jan Vondrák. Maximizing a monotone submodular function subject to a matroid constraint. SIAM J. Comput., 40(6).
3. Tim Carnes and David B. Shmoys. Primal-dual schema for capacitated covering problems. In IPCO.
4. Robert D. Carr, Lisa Fleischer, Vitus J. Leung, and Cynthia A. Phillips. Strengthening integrality gaps for capacitated network design and covering problems. In SODA.
5. Shaddin Dughmi. Submodular functions: Extensions, distributions, and algorithms. A survey. CoRR.
6. Uriel Feige, Vahab S. Mirrokni, and Jan Vondrák. Maximizing non-monotone submodular functions. SIAM J. Comput., 40(4).
7. Yuval Filmus and Justin Ward. A tight combinatorial algorithm for submodular maximization subject to a matroid constraint. In FOCS.
8. Michel X. Goemans and José A. Soto. Symmetric submodular function minimization under hereditary family constraints. CoRR.
9. Fabrizio Grandoni, Jochen Könemann, Alessandro Panconesi, and Mauro Sozio. A primal-dual bicriteria distributed algorithm for capacitated vertex cover. SIAM J. Comput., 38(3).
10. Anupam Gupta, Viswanath Nagarajan, and R. Ravi. Thresholded covering algorithms for robust and max-min optimization. In ICALP (1).
11. Lap-Chi Lau, R. Ravi, and Mohit Singh. Iterative Methods in Combinatorial Optimization. Cambridge University Press, New York, NY, USA, 1st edition.
12. Jan Karel Lenstra, David B. Shmoys, and Éva Tardos. Approximation algorithms for scheduling unrelated parallel machines. In FOCS.
13. A. Schrijver. Combinatorial Optimization: Polyhedra and Efficiency. Springer.
14. Zoya Svitkina and Lisa Fleischer. Submodular approximation: Sampling-based algorithms and lower bounds. SIAM J. Comput., 40(6).
15. Jan Vondrák. Optimal approximation for the submodular welfare problem in the value oracle model. In STOC, pages 67-74.
16. Jan Vondrák. Submodular functions and their applications. Plenary talk at SODA.


On Fixed Cost k-flow Problems On Fixed Cost k-flow Problems MohammadTaghi Hajiaghayi 1, Rohit Khandekar 2, Guy Kortsarz 3, and Zeev Nutov 4 1 University of Maryland, College Park, MD. hajiagha@cs.umd.edu. 2 Knight Capital Group, Jersey

More information

Duality of LPs and Applications

Duality of LPs and Applications Lecture 6 Duality of LPs and Applications Last lecture we introduced duality of linear programs. We saw how to form duals, and proved both the weak and strong duality theorems. In this lecture we will

More information

An approximation algorithm for the partial covering 0 1 integer program

An approximation algorithm for the partial covering 0 1 integer program An approximation algorithm for the partial covering 0 1 integer program Yotaro Takazawa Shinji Mizuno Tomonari Kitahara submitted January, 2016 revised July, 2017 Abstract The partial covering 0 1 integer

More information

The Steiner k-cut Problem

The Steiner k-cut Problem The Steiner k-cut Problem Chandra Chekuri Sudipto Guha Joseph (Seffi) Naor September 23, 2005 Abstract We consider the Steiner k-cut problem which generalizes both the k-cut problem and the multiway cut

More information

New Approaches to Multi-Objective Optimization

New Approaches to Multi-Objective Optimization New Approaches to Multi-Objective Optimization Fabrizio Grandoni R. Ravi Mohit Singh Rico Zenklusen July 17, 2013 Abstract A natural way to deal with multiple, partially conflicting objectives is turning

More information

Submodular Optimization in the MapReduce Model

Submodular Optimization in the MapReduce Model Submodular Optimization in the MapReduce Model Paul Liu and Jan Vondrak Stanford University, USA {paulliu, jvondrak}@stanford.edu Abstract Submodular optimization has received significant attention in

More information

Query and Computational Complexity of Combinatorial Auctions

Query and Computational Complexity of Combinatorial Auctions Query and Computational Complexity of Combinatorial Auctions Jan Vondrák IBM Almaden Research Center San Jose, CA Algorithmic Frontiers, EPFL, June 2012 Jan Vondrák (IBM Almaden) Combinatorial auctions

More information

The 2-valued case of makespan minimization with assignment constraints

The 2-valued case of makespan minimization with assignment constraints The 2-valued case of maespan minimization with assignment constraints Stavros G. Kolliopoulos Yannis Moysoglou Abstract We consider the following special case of minimizing maespan. A set of jobs J and

More information

3. Linear Programming and Polyhedral Combinatorics

3. Linear Programming and Polyhedral Combinatorics Massachusetts Institute of Technology 18.453: Combinatorial Optimization Michel X. Goemans April 5, 2017 3. Linear Programming and Polyhedral Combinatorics Summary of what was seen in the introductory

More information

Communication Complexity of Combinatorial Auctions with Submodular Valuations

Communication Complexity of Combinatorial Auctions with Submodular Valuations Communication Complexity of Combinatorial Auctions with Submodular Valuations Shahar Dobzinski Weizmann Institute of Science Rehovot, Israel dobzin@gmail.com Jan Vondrák IBM Almaden Research Center San

More information

Lecture notes on the ellipsoid algorithm

Lecture notes on the ellipsoid algorithm Massachusetts Institute of Technology Handout 1 18.433: Combinatorial Optimization May 14th, 007 Michel X. Goemans Lecture notes on the ellipsoid algorithm The simplex algorithm was the first algorithm

More information

Scheduling on Unrelated Parallel Machines. Approximation Algorithms, V. V. Vazirani Book Chapter 17

Scheduling on Unrelated Parallel Machines. Approximation Algorithms, V. V. Vazirani Book Chapter 17 Scheduling on Unrelated Parallel Machines Approximation Algorithms, V. V. Vazirani Book Chapter 17 Nicolas Karakatsanis, 2008 Description of the problem Problem 17.1 (Scheduling on unrelated parallel machines)

More information

EE595A Submodular functions, their optimization and applications Spring 2011

EE595A Submodular functions, their optimization and applications Spring 2011 EE595A Submodular functions, their optimization and applications Spring 2011 Prof. Jeff Bilmes University of Washington, Seattle Department of Electrical Engineering Winter Quarter, 2011 http://ee.washington.edu/class/235/2011wtr/index.html

More information

Scheduling Parallel Jobs with Linear Speedup

Scheduling Parallel Jobs with Linear Speedup Scheduling Parallel Jobs with Linear Speedup Alexander Grigoriev and Marc Uetz Maastricht University, Quantitative Economics, P.O.Box 616, 6200 MD Maastricht, The Netherlands. Email: {a.grigoriev, m.uetz}@ke.unimaas.nl

More information

Approximation Algorithms for 2-Stage Stochastic Scheduling Problems

Approximation Algorithms for 2-Stage Stochastic Scheduling Problems Approximation Algorithms for 2-Stage Stochastic Scheduling Problems David B. Shmoys 1 and Mauro Sozio 2 1 School of ORIE and Dept. of Computer Science, Cornell University, Ithaca, NY 14853 shmoys@cs.cornell.edu

More information

Budgeted Allocations in the Full-Information Setting

Budgeted Allocations in the Full-Information Setting Budgeted Allocations in the Full-Information Setting Aravind Srinivasan 1 Dept. of Computer Science and Institute for Advanced Computer Studies, University of Maryland, College Park, MD 20742. Abstract.

More information

Thresholded Covering Algorithms for Robust and Max-min Optimization

Thresholded Covering Algorithms for Robust and Max-min Optimization Thresholded Covering Algorithms for Robust and Max-min Optimization Anupam Gupta 1,, Viswanath Nagarajan 2, and R. Ravi 3, 1 Computer Science Department, Carnegie Mellon University, Pittsburgh, PA 15213,

More information

Minimization of Symmetric Submodular Functions under Hereditary Constraints

Minimization of Symmetric Submodular Functions under Hereditary Constraints Minimization of Symmetric Submodular Functions under Hereditary Constraints J.A. Soto (joint work with M. Goemans) DIM, Univ. de Chile April 4th, 2012 1 of 21 Outline Background Minimal Minimizers and

More information

Maximization of Submodular Set Functions

Maximization of Submodular Set Functions Northeastern University Department of Electrical and Computer Engineering Maximization of Submodular Set Functions Biomedical Signal Processing, Imaging, Reasoning, and Learning BSPIRAL) Group Author:

More information

Machine Minimization for Scheduling Jobs with Interval Constraints

Machine Minimization for Scheduling Jobs with Interval Constraints Machine Minimization for Scheduling Jobs with Interval Constraints Julia Chuzhoy Sudipto Guha Sanjeev Khanna Joseph (Seffi) Naor Abstract The problem of scheduling jobs with interval constraints is a well-studied

More information

arxiv: v1 [math.oc] 1 Jun 2015

arxiv: v1 [math.oc] 1 Jun 2015 NEW PERFORMANCE GUARANEES FOR HE GREEDY MAXIMIZAION OF SUBMODULAR SE FUNCIONS JUSSI LAIILA AND AE MOILANEN arxiv:1506.00423v1 [math.oc] 1 Jun 2015 Abstract. We present new tight performance guarantees

More information

On Allocating Goods to Maximize Fairness

On Allocating Goods to Maximize Fairness 2009 50th Annual IEEE Symposium on Foundations of Computer Science On Allocating Goods to Maximize Fairness Deeparnab Chakrabarty Dept. of C&O University of Waterloo Waterloo, ON N2L 3G1 Email: deepc@math.uwaterloo.ca

More information

On the Complexity of the Minimum Independent Set Partition Problem

On the Complexity of the Minimum Independent Set Partition Problem On the Complexity of the Minimum Independent Set Partition Problem T-H. Hubert Chan 1, Charalampos Papamanthou 2, and Zhichao Zhao 1 1 Department of Computer Science the University of Hong Kong {hubert,zczhao}@cs.hku.hk

More information

1 Perfect Matching and Matching Polytopes

1 Perfect Matching and Matching Polytopes CS 598CSC: Combinatorial Optimization Lecture date: /16/009 Instructor: Chandra Chekuri Scribe: Vivek Srikumar 1 Perfect Matching and Matching Polytopes Let G = (V, E be a graph. For a set E E, let χ E

More information

Submodularity and curvature: the optimal algorithm

Submodularity and curvature: the optimal algorithm RIMS Kôkyûroku Bessatsu Bx (200x), 000 000 Submodularity and curvature: the optimal algorithm By Jan Vondrák Abstract Let (X, I) be a matroid and let f : 2 X R + be a monotone submodular function. The

More information

Stochastic Matching in Hypergraphs

Stochastic Matching in Hypergraphs Stochastic Matching in Hypergraphs Amit Chavan, Srijan Kumar and Pan Xu May 13, 2014 ROADMAP INTRODUCTION Matching Stochastic Matching BACKGROUND Stochastic Knapsack Adaptive and Non-adaptive policies

More information

A Faster Strongly Polynomial Time Algorithm for Submodular Function Minimization

A Faster Strongly Polynomial Time Algorithm for Submodular Function Minimization A Faster Strongly Polynomial Time Algorithm for Submodular Function Minimization James B. Orlin Sloan School of Management, MIT Cambridge, MA 02139 jorlin@mit.edu Abstract. We consider the problem of minimizing

More information

arxiv: v1 [cs.ds] 22 Nov 2018

arxiv: v1 [cs.ds] 22 Nov 2018 Approximate Multi-Matroid Intersection via Iterative Refinement arxiv:1811.09027v1 [cs.ds] 22 Nov 2018 André Linhares,1, Neil Olver,2, Chaitanya Swamy,1, and Rico Zenklusen,3 1 Dept. of Combinatorics and

More information

Dual Consistent Systems of Linear Inequalities and Cardinality Constrained Polytopes. Satoru FUJISHIGE and Jens MASSBERG.

Dual Consistent Systems of Linear Inequalities and Cardinality Constrained Polytopes. Satoru FUJISHIGE and Jens MASSBERG. RIMS-1734 Dual Consistent Systems of Linear Inequalities and Cardinality Constrained Polytopes By Satoru FUJISHIGE and Jens MASSBERG December 2011 RESEARCH INSTITUTE FOR MATHEMATICAL SCIENCES KYOTO UNIVERSITY,

More information

Approximation Algorithms for Re-optimization

Approximation Algorithms for Re-optimization Approximation Algorithms for Re-optimization DRAFT PLEASE DO NOT CITE Dean Alderucci Table of Contents 1.Introduction... 2 2.Overview of the Current State of Re-Optimization Research... 3 2.1.General Results

More information

Submodular function optimization: A brief tutorial (Part I)

Submodular function optimization: A brief tutorial (Part I) Submodular function optimization: A brief tutorial (Part I) Donglei Du (ddu@unb.ca) Faculty of Business Administration, University of New Brunswick, NB Canada Fredericton E3B 9Y2 Workshop on Combinatorial

More information

Stochastic Submodular Cover with Limited Adaptivity

Stochastic Submodular Cover with Limited Adaptivity Stochastic Submodular Cover with Limited Adaptivity Arpit Agarwal Sepehr Assadi Sanjeev Khanna Abstract In the submodular cover problem, we are given a non-negative monotone submodular function f over

More information

Machine scheduling with resource dependent processing times

Machine scheduling with resource dependent processing times Mathematical Programming manuscript No. (will be inserted by the editor) Alexander Grigoriev Maxim Sviridenko Marc Uetz Machine scheduling with resource dependent processing times Received: date / Revised

More information

Approximation Algorithms for Asymmetric TSP by Decomposing Directed Regular Multigraphs

Approximation Algorithms for Asymmetric TSP by Decomposing Directed Regular Multigraphs Approximation Algorithms for Asymmetric TSP by Decomposing Directed Regular Multigraphs Haim Kaplan Tel-Aviv University, Israel haimk@post.tau.ac.il Nira Shafrir Tel-Aviv University, Israel shafrirn@post.tau.ac.il

More information

An algorithm to increase the node-connectivity of a digraph by one

An algorithm to increase the node-connectivity of a digraph by one Discrete Optimization 5 (2008) 677 684 Contents lists available at ScienceDirect Discrete Optimization journal homepage: www.elsevier.com/locate/disopt An algorithm to increase the node-connectivity of

More information

New approaches to multi-objective optimization

New approaches to multi-objective optimization Math. Program., Ser. A (2014) 146:525 554 DOI 10.1007/s10107-013-0703-7 FULL LENGTH PAPER New approaches to multi-objective optimization Fabrizio Grandoni R. Ravi Mohit Singh Rico Zenklusen Received: 30

More information

More Approximation Algorithms

More Approximation Algorithms CS 473: Algorithms, Spring 2018 More Approximation Algorithms Lecture 25 April 26, 2018 Most slides are courtesy Prof. Chekuri Ruta (UIUC) CS473 1 Spring 2018 1 / 28 Formal definition of approximation

More information

Lectures 6, 7 and part of 8

Lectures 6, 7 and part of 8 Lectures 6, 7 and part of 8 Uriel Feige April 26, May 3, May 10, 2015 1 Linear programming duality 1.1 The diet problem revisited Recall the diet problem from Lecture 1. There are n foods, m nutrients,

More information

Chapter 11. Approximation Algorithms. Slides by Kevin Wayne Pearson-Addison Wesley. All rights reserved.

Chapter 11. Approximation Algorithms. Slides by Kevin Wayne Pearson-Addison Wesley. All rights reserved. Chapter 11 Approximation Algorithms Slides by Kevin Wayne. Copyright @ 2005 Pearson-Addison Wesley. All rights reserved. 1 Approximation Algorithms Q. Suppose I need to solve an NP-hard problem. What should

More information

9. Submodular function optimization

9. Submodular function optimization Submodular function maximization 9-9. Submodular function optimization Submodular function maximization Greedy algorithm for monotone case Influence maximization Greedy algorithm for non-monotone case

More information

CS6999 Probabilistic Methods in Integer Programming Randomized Rounding Andrew D. Smith April 2003

CS6999 Probabilistic Methods in Integer Programming Randomized Rounding Andrew D. Smith April 2003 CS6999 Probabilistic Methods in Integer Programming Randomized Rounding April 2003 Overview 2 Background Randomized Rounding Handling Feasibility Derandomization Advanced Techniques Integer Programming

More information

Approximating Minimum-Power Degree and Connectivity Problems

Approximating Minimum-Power Degree and Connectivity Problems Approximating Minimum-Power Degree and Connectivity Problems Guy Kortsarz Vahab S. Mirrokni Zeev Nutov Elena Tsanko Abstract Power optimization is a central issue in wireless network design. Given a graph

More information

This means that we can assume each list ) is

This means that we can assume each list ) is This means that we can assume each list ) is of the form ),, ( )with < and Since the sizes of the items are integers, there are at most +1pairs in each list Furthermore, if we let = be the maximum possible

More information