Constrained Maximization of Non-Monotone Submodular Functions
Anupam Gupta    Aaron Roth

September 15, 2009

Abstract

The problem of constrained submodular maximization has long been studied, with near-optimal results known under a variety of constraints when the submodular function is monotone. The case of non-monotone submodular maximization is not as well understood: the first approximation algorithms even for unconstrained maximization were given by Feige et al. [FMV07]. More recently, Lee et al. [LMNS09] showed (among other results) how to approximately maximize non-monotone submodular functions when the constraints are given by the intersection of p matroid constraints; their algorithm is based on a local-search procedure that considers p-swaps, and hence the running time may be n^Ω(p), implying their algorithm is polynomial-time only for constantly many matroids. In this paper, we give an algorithm that works for p-independence systems (which generalize constraints given by the intersection of p matroids), where the running time is polynomial in p. In fact, our algorithm essentially reduces the non-monotone maximization problem to multiple runs of the greedy algorithm widely used in the monotone case. The proofs work by noting that the monotonicity property was used in previous proofs for the greedy algorithm in a fairly mild way, and those occurrences can be removed at the expense of multiple runs of the greedy algorithm. Our approximation factors, however, are worse than the best results known for p matroid constraints. Our approach of solving the non-monotone problem using an existing algorithm for the monotone version of the problem also works for submodular maximization subject to a knapsack constraint: we show that Sviridenko's algorithm and proof can be modified to give another constant-factor approximation for this problem. (Lee et al. gave a (5 + ε)-approximation that works for any constant number of knapsack constraints.)
1 Introduction

We present algorithms for maximizing (not necessarily monotone) non-negative submodular functions satisfying f(∅) = 0 under a variety of constraints considered earlier in the literature. Instead of working with local-search algorithms as Lee et al. [LMNS09, LSV09] did, we consider greedy algorithms previously used for monotone submodular maximization and show how these algorithms can be adapted very simply to non-monotone maximization as well. Using this idea, we show the following results:

- We give an O(p)-approximation for maximizing submodular functions subject to a p-independence system. This extends the result of Lee et al. [LMNS09, LSV09], which applied to constraints given by the intersection of p matroids, where p was a constant. Our greedy-based algorithm has a running time that is polynomial in p, and hence gives polynomial-time algorithms even for non-constant values of p.

- We give a constant-factor approximation for maximizing submodular functions subject to a knapsack constraint. This greedy-based algorithm gives an alternate approach to this problem; recall that Lee et al. [LMNS09] gave LP-rounding-based algorithms that achieve a (5 + ε)-approximation for constraints given by the intersection of p knapsack constraints, where p is a constant.

We would like to emphasize that our approximation factors are currently worse than the best known: maximizing non-monotone submodular functions subject to p ≥ 2 matroid constraints currently has a (p²/(p−1) + ε)-approximation due to a paper of Lee, Sviridenko and Vondrák [LSV09], and for p = 1
the best result is a 3.23-approximation by Vondrák [Von09]. While we have not tried hard to get the best possible constants, it seems that matching (or improving on) those results will need more than choosing the parameters carefully. We leave such substantial improvements as an open problem.

1.1 Our Main Ideas

At a high level, the simple yet crucial observation is this: many of the previous algorithms and proofs for constrained monotone submodular maximization can be adapted to show that the set S produced by them satisfies f(S) ≥ β·f(S ∪ C*), for some 0 < β ≤ 1, with C* being an optimal solution. In the monotone case, the right-hand side is at least f(C*) = OPT and we are done. In the non-monotone case, we cannot do this. However, we observe that if f(S ∩ C*) is a reasonable fraction of OPT, then (approximately) finding the most valuable set within S would give us a large value; and since we work with constraints that are downwards closed, finding such a set is just unconstrained maximization of f(·) restricted to S, for which Feige et al. [FMV07] give good algorithms! On the other hand, if f(S ∩ C*) ≤ ε·OPT and f(S) is also too small, then a remarkable lemma due to [LMNS09] shows that deleting the elements in S and running the procedure again to find another set S′ ⊆ X \ S with f(S′) ≥ β·f(S′ ∪ (C* \ S)) would guarantee a good solution. Details for the specific problems appear in the following sections; we first consider the simplest case, a cardinality constraint, in Section 3 to illustrate the general idea, and then give more general results in Sections 4 and 5.

2 Preliminaries

A function f : 2^Ω → R₊ is submodular if for all S, T ⊆ Ω,

    f(S) + f(T) ≥ f(S ∪ T) + f(S ∩ T).

An alternative (but equivalent) characterization of submodular functions states that f is submodular if it represents a set function with decreasing marginal utility. That is, for all S ⊆ T ⊆ Ω and all e ∈ Ω \ T,

    f(S ∪ {e}) − f(S) ≥ f(T ∪ {e}) − f(T).

Given f and S ⊆ Ω, define f_S : 2^Ω → R as f_S(A) := f(S ∪ A) − f(S).

Proposition 2.1.
If f is submodular with f(∅) = 0, then for any S, the function f_S is submodular with f_S(∅) = 0, and f is also subadditive; i.e., for disjoint sets A, B, we have f(A) + f(B) ≥ f(A ∪ B).

Proof. For the first fact, recall that f_S(A) = f(S ∪ A) − f(S). For submodularity note that

    f_S(A) + f_S(B) = f(S ∪ A) + f(S ∪ B) − 2f(S)
                    ≥ f(S ∪ A ∪ B) + f(S ∪ (A ∩ B)) − 2f(S)
                    = f_S(A ∪ B) + f_S(A ∩ B);

moreover f_S(∅) = 0 by definition. (Note that f_S is not necessarily non-negative even if f was.) The second fact follows from the definition of submodularity and the fact that for disjoint A, B we have f(A ∩ B) = f(∅) = 0 by assumption.

Given a set S and an element e, we use S + e to denote S ∪ {e}.

2.1 Unconstrained Submodular Optimization

We use FMV_α(S) to denote an α-approximation algorithm given by Feige, Mirrokni, and Vondrák [FMV07] for unconstrained submodular maximization in the non-monotone setting: it returns a set T ⊆ S such that f(T) ≥ (1/α) · max_{T′⊆S} f(T′). In fact, Feige et al. present many such algorithms; the best approximation ratio among these is α = 2.5, via a local-search algorithm, and the easiest is a 4-approximation that just returns a uniformly random subset of S.
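As a concrete illustration of the easiest of these algorithms, here is a sketch (our own code and toy instance, not from [FMV07]) of the random-subset approach, applied to a graph cut function, a standard example of a non-monotone submodular function:

```python
import random

def random_subset_fmv(ground_set, f, trials=200, seed=0):
    """Sketch of the simplest FMV-style algorithm: a uniformly random
    subset is, in expectation, a 4-approximation for unconstrained
    non-monotone submodular maximization. We repeat and keep the best."""
    rng = random.Random(seed)
    best, best_val = frozenset(), f(frozenset())
    for _ in range(trials):
        T = frozenset(e for e in ground_set if rng.random() < 0.5)
        if f(T) > best_val:
            best, best_val = T, f(T)
    return best, best_val

# Toy instance: the cut function of a 4-cycle is non-monotone submodular.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
def cut(S):
    return sum(1 for u, v in edges if (u in S) != (v in S))

S, val = random_subset_fmv(range(4), cut)
```

Repeating the draw and keeping the best set only helps; the 4-approximation guarantee of [FMV07] already holds in expectation for a single uniformly random subset.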
3 Submodular Maximization subject to a Cardinality Constraint

We first give an offline algorithm for submodular maximization subject to a cardinality constraint; this is inspired by, but slightly different from, the local-search algorithm of Lee et al. [LMNS09], and will illustrate our main ideas. The proof of the first lemma is a slight adaptation of the one commonly used for monotone submodular function maximization subject to a cardinality constraint.

Lemma 3.1. For any set C with |C| ≤ k, the greedy algorithm returns a set S that satisfies f(S) ≥ ½ f(S ∪ C).

Proof. Suppose not. Then f_S(C) = f(S ∪ C) − f(S) > f(S), and hence there is at least one element e ∈ C \ S that has f_S({e}) > f(S)/|C \ S| ≥ f(S)/k. Since we ran the greedy algorithm, at each step this element e would have been a contender to be added, and by submodularity, e's marginal value would have been only higher then. Hence the elements actually added in each of the k steps would have had marginal value more than e's marginal value at that time, which is more than f(S)/k. This implies that f(S) > k · f(S)/k = f(S), a contradiction.

Lemma 3.2 (Special Case of Claim 5 in [LMNS09]). Given sets C, S_1 ⊆ U, let C′ = C \ S_1, and let S_2 ⊆ U \ S_1. Then

    f(S_1 ∪ C) + f(S_1 ∩ C) + f(S_2 ∪ C′) ≥ f(C).   (1)

Proof. By submodularity, it follows that

    f(S_1 ∪ C) + f(S_2 ∪ C′) ≥ f(S_1 ∪ S_2 ∪ C) + f(C′);

again using submodularity, we get

    f(C′) + f(S_1 ∩ C) ≥ f(C) + f(∅).

Putting these together and using the non-negativity of f(·), the lemma follows.

3.1 The Algorithm

We now give our algorithm for submodular maximization: it has the same structure as that of Lee et al., but uses the greedy analysis above instead of a local-search algorithm.

Algorithm 1 Submod-Max-Cardinality
1: let X_1 ← X
2: for i = 1 to 2 do
3:   let S_i ← Greedy(X_i)
4:   let S_i′ ← FMV_α(S_i)
5:   let X_{i+1} ← X_i \ S_i
6: end for
7: return the best of S_1, S_1′, S_2.

Theorem 3.3. The algorithm Submod-Max-Cardinality is a (4 + α)-approximation.

Proof. Let C* be the optimal solution with f(C*) = OPT.
We know that f(S_1) ≥ ½ f(S_1 ∪ C*). Also, if f(S_1 ∩ C*) is at least ε·OPT, then the α-approximate algorithm FMV_α gives us a value of at least (ε/α)·OPT. Else,

    f(S_1) ≥ ½ f(S_1 ∪ C*) ≥ ½ ( f(S_1 ∪ C*) + f(S_1 ∩ C*) ) − ε·OPT/2.   (2)
Similarly, we get that

    f(S_2) ≥ ½ f(S_2 ∪ (C* \ S_1)).   (3)

Now adding (2) and (3), we get

    2 max(f(S_1), f(S_2)) ≥ f(S_1) + f(S_2)
        ≥ ½ ( f(S_1 ∪ C*) + f(S_1 ∩ C*) + f(S_2 ∪ (C* \ S_1)) ) − ε·OPT/2   (4)
        ≥ ½ f(C*) − ε·OPT/2   (5)
        ≥ ½ OPT (1 − ε),   (6)

where we used Lemma 3.2 to get from (4) to (5). Hence max{f(S_1), f(S_2)} ≥ ((1 − ε)/4) OPT. The approximation factor now is max{α/ε, 4/(1 − ε)}. Setting ε = α/(α + 4), we get a (4 + α)-approximation, as claimed.

Using the known value of α = 2.5 from Feige et al. [FMV07], we get a 6.5-approximation for submodular maximization under cardinality constraints. Note this is weaker than the 3.23-approximation of Vondrák [Von09], or even the 4-approximation we could get from Lee et al. [LMNS09] for this special case; but we show the basic idea works in several other contexts, allowing us to use (and slightly modify) existing results for monotone submodular maximization to immediately get results for non-monotone maximization that are weaker than the best known results by constant factors.

4 Submodular Maximization for Independence Systems

4.1 Independence Systems

Let Ω be a universe of elements and consider a collection I ⊆ 2^Ω of subsets of Ω. (Ω, I) is called an independence system if (a) ∅ ∈ I, and (b) if X ∈ I and Y ⊆ X, then Y ∈ I as well. The subsets in I are called independent; for any set S of elements, an inclusion-wise maximal independent subset T of S is called a basis of S. For brevity, we say that T is a basis if it is a basis of Ω.

Definition 4.1. Given an independence system (Ω, I) and a subset S ⊆ Ω, the rank r(S) is defined as the cardinality of the largest basis of S, and the lower rank ρ(S) is the cardinality of the smallest basis of S. The independence system is called a p-independence system (or a p-system) if

    max_{S⊆Ω} r(S)/ρ(S) ≤ p.

See, e.g., [CCPV09] for a discussion of independence systems and their relationship to other families of constraints; for our purposes it is useful to know that the intersection of p matroids forms a p-independence system.
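As a small illustration (our own example and naming, not from the paper), the intersection of two partition matroids over the edge set of a bipartite graph is the classic 2-independence system whose independent sets are exactly the matchings:

```python
def partition_matroid(blocks, capacity=1):
    """Independence oracle for a partition matroid: a set is independent
    iff it contains at most `capacity` elements from each block."""
    def independent(S):
        counts = {}
        for e in S:
            counts[blocks[e]] = counts.get(blocks[e], 0) + 1
            if counts[blocks[e]] > capacity:
                return False
        return True
    return independent

# Elements are edges (u, v); one matroid limits how often each left
# endpoint u is used, the other each right endpoint v.
edges = [(0, 'a'), (0, 'b'), (1, 'a')]
left = partition_matroid({e: e[0] for e in edges})
right = partition_matroid({e: e[1] for e in edges})

def is_matching(S):
    # Intersection of the two matroids: a 2-independence system.
    return left(S) and right(S)
```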
It was shown by [Jen76, KH78, HKJ80] that the greedy algorithm is a p-approximation for maximizing a modular function subject to a p-system; Fisher et al. [NWF78] showed that greedy is also a (p + 1)-approximation for maximizing monotone submodular functions over p matroids.

4.2 The Greedy Algorithm

Suppose we are given an independence system (Ω, I), a subset X ⊆ Ω, and a non-negative submodular function f that is potentially non-monotone, but has f(∅) = 0. We want to find (or at least approximate) max_{S⊆X : S∈I} f(S). The greedy algorithm for this problem is what you would expect: start with the set S = ∅, and at each step pick an element e ∈ X \ S that maximizes f_S({e}) while ensuring that S ∪ {e} is also independent. If no such element exists, the algorithm terminates; else we set S ← S ∪ {e} and repeat.
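A minimal sketch of this greedy procedure (our own code; `independent` stands for any membership oracle for I restricted to X, and ties are broken arbitrarily):

```python
def greedy(X, independent, f):
    """Greedy as described above: repeatedly add the feasible element of
    largest marginal gain (even if the gain is negative), stopping only
    when no element can be added without violating independence."""
    S = set()
    candidates = set(X)
    while True:
        fS = f(S)
        best_e, best_gain = None, None
        for e in candidates:
            if independent(S | {e}):
                gain = f(S | {e}) - fS
                if best_gain is None or gain > best_gain:
                    best_e, best_gain = e, gain
        if best_e is None:
            return S
        S.add(best_e)
        candidates.discard(best_e)

# Example: greedy under the 1-system "at most two elements" with a
# modular objective picks the two largest elements.
S = greedy({1, 2, 3}, lambda T: len(T) <= 2, lambda T: sum(T))
```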
(Ideally, we would also check whether f_S({e}) ≤ 0, and terminate the first time this happens; we don't do that, and instead add elements even when the marginal gain is negative, until we cannot add any more elements without violating independence.)

The following analysis is essentially the same as in [CCPV09, Appendix A], but we give it in the form we need, whilst avoiding any use of monotonicity. Let e_1, e_2, ..., e_k be the elements added to S by greedy, let S_i be the first i elements in this order, and let δ_i = f_{S_{i−1}}({e_i}) = f(S_i) − f(S_{i−1}), which may be positive or negative. Since f(∅) = 0, we have f(S = S_k) = Σ_i δ_i. And since f is submodular, δ_i ≥ δ_{i+1} for all i. Let O be an optimal solution; i.e., OPT = f(O).

Lemma 4.2 (following [CCPV09]). f(S_k) ≥ (1/(p+1)) f(O ∪ S_k).

Proof. We show the existence of a partition of O into O_1, O_2, ..., O_k with the following two properties: for all i ∈ [k], p_1 + p_2 + ... + p_i ≤ i·p, where p_i := |O_i|; and for all i ∈ [k], p_i δ_i ≥ f_{S_k}(O_i). Assuming such a partition, we can complete the proof thus:

    p Σ_i δ_i ≥ Σ_i p_i δ_i ≥ Σ_i f_{S_k}(O_i) ≥ f_{S_k}(O) = f(S_k ∪ O) − f(S_k),   (7)

where the first inequality follows from [CCPV09, Claim A.1] (using the first property above, and that the δ's are non-increasing), the second from the second property of the partition of O, the third from the subadditivity of f_{S_k}(·) (which is implied by the submodularity of f and applications of both facts in Proposition 2.1), and the fourth from the definition of f_{S_k}(·). Using the fact that Σ_i δ_i = f(S_k), and rearranging, we get the lemma.

Now to prove the existence of such a partition of O. Define A_0, A_1, ..., A_k as follows: A_i = {e ∈ O \ S_i : S_i + e ∈ I}. Note that since O ∈ I, it follows that A_0 = O; since the independence system is closed under subsets, we have A_i ⊆ A_{i−1}; and since the greedy algorithm stops only when there are no more elements to add, we get A_k = ∅. Defining O_i = A_{i−1} \ A_i ensures we have a partition O_1, O_2, ..., O_k of O. Fix a value i.
We claim that S_i is a basis (a maximal independent set) of S_i ∪ (O_1 ∪ O_2 ∪ ... ∪ O_i) = S_i ∪ (O \ A_i). Clearly S_i ∈ I by construction; moreover, any e ∈ (O \ A_i) \ S_i was not added to A_i precisely because S_i + e ∉ I. Moreover, (O_1 ∪ O_2 ∪ ... ∪ O_i) ⊆ O is clearly independent by subset-closure. Since I is a p-independence system, |O_1 ∪ O_2 ∪ ... ∪ O_i| ≤ p·|S_i|, and thus Σ_{j≤i} p_j ≤ i·p, proving the first property.

For the second property, note that O_i = A_{i−1} \ A_i ⊆ A_{i−1}; hence each e ∈ O_i does not belong to S_{i−1} but could have been added to S_{i−1} whilst maintaining independence, and so was considered by the greedy algorithm. Since greedy chose the element e_i maximizing the gain, δ_i ≥ f_{S_{i−1}}({e}) for each e ∈ O_i. Summing over all e ∈ O_i, we get

    p_i δ_i ≥ Σ_{e∈O_i} f_{S_{i−1}}({e}) ≥ f_{S_{i−1}}(O_i),

where the last inequality is by the subadditivity of f_{S_{i−1}}. Again by submodularity, f_{S_{i−1}}(O_i) ≥ f_{S_k}(O_i), which proves the second fact about the partition {O_j}_{j=1}^k of O.

Clearly, the greedy algorithm works no worse if we stop it when the best gain is negative, but the above proof does not use that fact.

4.3 Submodular Maximization over p-independence Systems

Consider the following algorithm for maximizing a non-monotone submodular function f with f(∅) = 0 over a p-independence system.

Theorem 4.3. The above algorithm is a ((1 + α)(p + 1)²/p)-approximation for maximizing a non-monotone submodular function over a p-independence system, where α is the approximation guarantee for unconstrained (non-monotone) submodular maximization.
Algorithm 2 NonMonotoneMax(X, I, f)
1: X_1 ← X
2: for i = 1 to p + 1 do
3:   S_i ← Greedy(X_i, I, f)
4:   S_i′ ← FMV_α(S_i)
5:   X_{i+1} ← X_i \ S_i
6: end for
7: return the set S among {S_i}_{i=1}^{p+1} ∪ {S_i′}_{i=1}^{p+1} that maximizes f(·).

Proof. Let C be an optimal solution with OPT = f(C), and let C_i = C ∩ X_i for all i ∈ [p + 1]; hence C_1 = C. Note that C_i is a feasible solution to the greedy optimization in Step 3. Hence, by Lemma 4.2, we know that f(S_i) ≥ (1/(p+1)) f(C_i ∪ S_i). Now, if for some i it holds that f(S_i ∩ C_i) ≥ ε·OPT (for ε > 0 to be chosen later), then the guarantee of FMV_α ensures that f(S_i′) ≥ (ε·OPT)/α, and we get an (α/ε)-approximation. Else, it holds for all i ∈ [p + 1] that

    f(S_i) ≥ (1/(p+1)) ( f(C_i ∪ S_i) + f(C_i ∩ S_i) ) − ε·OPT.   (8)

Now we can add all these inequalities and use [LMNS09, Claim 5] to infer that

    f(S) ≥ (p/(p+1)²) f(C) − ε·OPT = OPT ( p/(p+1)² − ε ).   (9)

Thus the approximation factor is max{α/ε, (p/(p+1)² − ε)⁻¹}. Setting ε = (α/(1+α)) · p/(p+1)², we get the claimed approximation ratio.

Note that even using α = 1, we would not be able to match the ratios in Lee et al. [LMNS09, LSV09]. However, the proof here is somewhat simpler and does not rely on the careful analysis of the local-search procedure for matroids; instead it works seamlessly for all p-independence systems, not just intersections of matroids, and moreover the running time is only linear in the number of matroids, instead of being exponential as in the local search. Another aside: instead of running the algorithm p + 1 times, if we just run it twice, we can reduce the running time further; the proof then uses Lemma 3.2 instead of the full power of [LMNS09, Claim 5], and hence the constants are slightly worse.

5 Submodular Maximization over Knapsacks

The paper of Sviridenko [Svi04] gives a greedy algorithm with partial enumeration that achieves an e/(e−1)-approximation for monotone submodular maximization with respect to a knapsack constraint.
In particular, each element e ∈ X has a size c_e, and we are given a bound B: the goal is to maximize f(S) over subsets S ⊆ X such that Σ_{e∈S} c_e ≤ B. His algorithm is the following: for each possible choice of at most three elements from X, start with those three elements and iteratively include the element that maximizes the gain in function value per unit size, such that the resulting set still fits in the knapsack. (If none of the remaining elements gives a positive gain, or fits in the knapsack, stop.) Finally, from among these O(|X|³) solutions, choose the best one. Sviridenko shows that in the monotone submodular case, this is a (1 − 1/e)-approximation algorithm. It is not difficult to modify Sviridenko's algorithm and proof to show the following result for non-monotone submodular functions. (The details are in Appendix A.)

Theorem 5.1. There is a polynomial-time algorithm that, given the above input, outputs a (polynomial-sized) collection of sets such that for any valid solution C, the collection contains a set S satisfying f(S) ≥ (1 − 1/e) f(S ∪ C).
Now using an argument very similar to that in Theorem 3.3 gives us the following result for non-monotone submodular maximization with respect to a knapsack constraint.

Theorem 5.2. There is an (α + 2e/(e−1))-approximation for the problem of maximizing a submodular function with respect to a knapsack constraint, where α is the approximation guarantee for unconstrained (non-monotone) submodular maximization.

6 Closing Remarks

Some obvious questions suggest themselves. (a) Can we give approximation guarantees for these problems that match those given by previous results [LMNS09, LSV09, Von09], even for the case of a single matroid constraint? (b) Can we adapt the algorithm of Kulik et al. [KST09] for maximizing a monotone submodular function with respect to a constant number of knapsacks to output a set S such that f(S) ≥ β·f(S ∪ C*) for some constant β ∈ (0, 1)? (Ideally, β = 1 − 1/e − ε for each ε > 0, where the running time grows as a function of ε.)

Acknowledgments. We thank Viswanath Nagarajan, Maxim Sviridenko, and Jan Vondrák for helpful comments.

References

[CCPV09] Gruia Calinescu, Chandra Chekuri, Martin Pál, and Jan Vondrák. Maximizing a monotone submodular function subject to a matroid constraint. To appear.

[FMV07] U. Feige, V. Mirrokni, and J. Vondrák. Maximizing non-monotone submodular functions. In Proceedings of the 48th Annual IEEE Symposium on Foundations of Computer Science (FOCS), 2007.

[HKJ80] D. Hausmann, B. Korte, and T. A. Jenkyns. Worst case analysis of greedy type algorithms for independence systems. Math. Programming Stud., (12), 1980. Combinatorial optimization.

[Jen76] Thomas A. Jenkyns. The efficacy of the greedy algorithm. In Proceedings of the Seventh Southeastern Conference on Combinatorics, Graph Theory, and Computing (Louisiana State Univ., Baton Rouge, La., 1976), Congressus Numerantium, No. XVII. Utilitas Math., Winnipeg, Man., 1976.

[KH78] Bernhard Korte and Dirk Hausmann. An analysis of the greedy heuristic for independence systems. Ann. Discrete Math., 2:65–74, 1978. Algorithmic aspects of combinatorics (Conf., Vancouver Island, B.C., 1976).

[KST09] Ariel Kulik, Hadas Shachnai, and Tami Tamir. Maximizing submodular set functions subject to multiple linear constraints. In SODA '09: Proceedings of the Annual ACM-SIAM Symposium on Discrete Algorithms, 2009.

[LMNS09] J. Lee, V. Mirrokni, V. Nagarajan, and M. Sviridenko. Maximizing non-monotone submodular functions under matroid and knapsack constraints. In Proceedings of the 41st Annual ACM Symposium on Theory of Computing (STOC), 2009.

[LSV09] Jon Lee, Maxim Sviridenko, and Jan Vondrák. Submodular maximization over multiple matroids via generalized exchange properties. In Proceedings of the International Workshop on Approximation Algorithms for Combinatorial Optimization Problems (APPROX), 2009.

[NWF78] G. L. Nemhauser, L. A. Wolsey, and M. L. Fisher. An analysis of approximations for maximizing submodular set functions—I. Math. Programming, 14(3), 1978.

[Svi04] Maxim Sviridenko. A note on maximizing a submodular set function subject to a knapsack constraint. Oper. Res. Lett., 32(1):41–43, 2004.

[Von09] Jan Vondrák. Symmetry and approximability of submodular maximization problems. In Proceedings of the IEEE Symposium on Foundations of Computer Science (FOCS), 2009.
A Adapting Sviridenko's Knapsack Proof

The proof is virtually identical to that in [Svi04], except that we avoid any use of monotonicity; we give the complete proof here only for completeness, using notation similar to his paper for consistency. Let f be a non-negative submodular function with f(∅) = 0. Let I = [n]: we are given n items with weights c_i ∈ Z₊ and a bound B ≥ 0; let F = {S ⊆ I : c(S) ≤ B}, where c(S) = Σ_{i∈S} c_i. Our goal is to solve max_{S∈F} f(S). To that end, we want to prove the following result:

Theorem A.1. There is a polynomial-time algorithm that outputs a collection of sets such that for any C ∈ F, the collection contains a set S satisfying f(S) ≥ (1 − 1/e) f(S ∪ C).

A.1 The Algorithm

The algorithm is the following: it constructs a polynomial number of solutions and chooses the best among them (in case of ties, it outputs the lexicographically smallest). First, the family contains all solutions of cardinality 1, 2, 3: clearly, if |C| ≤ 3 then we will output C itself, which satisfies the condition of the theorem. Now for each solution U ⊆ I of cardinality 3, we greedily extend it as follows. Set S_0 = U, I_0 = I. At step t, we have a partial solution S_{t−1}. Compute

    θ_t = max_{i ∈ I_{t−1} \ S_{t−1}} ( f(S_{t−1} + i) − f(S_{t−1}) ) / c_i.   (10)

Let the maximum be achieved at index i_t. If θ_t ≤ 0, terminate the algorithm. Else check if c(S_{t−1} + i_t) ≤ B: if so, set S_t = S_{t−1} + i_t and I_t = I_{t−1}; else set S_t = S_{t−1} and I_t = I_{t−1} − i_t. Stop if I_t \ S_t = ∅.

The family of sets we output consists of all sets of cardinality at most three, together with, for each greedy extension of a set of cardinality three, all the sets S_t created during the run of the algorithm. Since each set can have at most n elements, we get O(n⁴) sets output by the algorithm.
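The greedy-extension step above can be sketched as follows (our own code, a sketch under the stated rules rather than a faithful reimplementation of [Svi04]; `items` maps each item to its weight c_i, and all partial solutions S_t are returned, since the output family contains each of them):

```python
def greedy_extend(U, items, f, B):
    """Greedy extension from a starting set U: repeatedly take the item
    of maximum marginal gain per unit size, as in eq. (10); stop when the
    best density is non-positive; an item that does not fit is discarded
    but the run continues.  Returns every partial solution S_t."""
    S = set(U)
    remaining = {i for i in items if i not in S}
    partials = [frozenset(S)]
    while remaining:
        fS = f(S)
        best_i, best_density = None, None
        for i in remaining:
            density = (f(S | {i}) - fS) / items[i]
            if best_density is None or density > best_density:
                best_i, best_density = i, density
        if best_density <= 0:   # theta_{t} <= 0: terminate
            break
        remaining.discard(best_i)
        if sum(items[j] for j in S) + items[best_i] <= B:
            S.add(best_i)
            partials.append(frozenset(S))
    return partials
```

In the algorithm proper, this routine is invoked once for every starting set U of cardinality three; the sketch keeps U generic.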
A.2 The Analysis

Let us assume that |C| = t > 3, and order C as j_1, j_2, ..., j_t such that

    j_k = argmax_{j ∈ C \ {j_1,...,j_{k−1}}} f_{{j_1,...,j_{k−1}}}({j}),   (11)

i.e., index the elements in the order they would be considered by a greedy algorithm that picks items of maximum marginal value (and does not consider their weights c_i). Let Y = {j_1, j_2, j_3}. Submodularity and the ordering of C give us the following:

Lemma A.2. For any j_k ∈ C with k ≥ 4 and any Z ⊆ I \ {j_1, j_2, j_3, j_k}, it holds that:

    f_{Y∪Z}({j_k}) ≤ f({j_k}) ≤ f({j_1}),
    f_{Y∪Z}({j_k}) ≤ f({j_1, j_k}) − f({j_1}) ≤ f({j_1, j_2}) − f({j_1}),
    f_{Y∪Z}({j_k}) ≤ f({j_1, j_2, j_k}) − f({j_1, j_2}) ≤ f({j_1, j_2, j_3}) − f({j_1, j_2}).

Summing the above three inequalities, we get that for j_k ∉ Y ∪ Z,

    3 f_{Y∪Z}({j_k}) ≤ f(Y).   (12)

For the rest of the discussion, consider the iteration of the algorithm which starts with S_0 = Y. For S such that S_0 = Y ⊆ S ⊆ I, recall that f_Y(S) = f(Y ∪ S) − f(Y) = f(S) − f(Y). Proposition 2.1 shows that f_Y(·) is a submodular function with f_Y(∅) = 0. The following lemma is the analog of [Svi04, eq. 2]:
Lemma A.3. For any submodular function g and all S, T ⊆ I, it holds that

    g(T ∪ S) ≤ g(S) + Σ_{i ∈ T\S} ( g(S + i) − g(S) ).   (13)

Proof. g(T ∪ S) = g(S) + (g(T ∪ S) − g(S)) = g(S) + g_S(T \ S) ≤ g(S) + Σ_{i∈T\S} g_S({i}) = g(S) + Σ_{i∈T\S} (g(S + i) − g(S)), where we used the subadditivity of the submodular function g_S.

Let τ + 1 be the first step of the greedy algorithm at which either (a) the algorithm stops because θ_{τ+1} ≤ 0, or (b) we consider some element i_{τ+1} ∈ C and it is dropped by the greedy algorithm, i.e., we set S_{τ+1} = S_τ and I_{τ+1} = I_τ − i_{τ+1}. Note that before this point, either we considered elements from C and picked them, or the element considered was not in C. In fact, let us assume that there are no elements that are neither in C nor picked by our algorithm: we can drop such elements and perform the same algorithm and analysis again, and it will not change anything. Hence we can assume we have not dropped any elements before this point, and S_t = S_0 ∪ {i_1, i_2, ..., i_t} for all t ∈ {0, 1, ..., τ}.

Now we apply Lemma A.3 to the submodular function f_Y(·) with sets S = S_t and T = C to get

    f_Y(C ∪ S_t) ≤ f_Y(S_t) + Σ_{i ∈ C\S_t} ( f_Y(S_t + i) − f_Y(S_t) ) = f_Y(S_t) + Σ_{i ∈ C\S_t} ( f(S_t + i) − f(S_t) ).   (14)

Suppose case (a) happened and we stopped because θ_{τ+1} ≤ 0. Then every term in the summation in (14) must be non-positive, and hence f_Y(C ∪ S_τ) ≤ f_Y(S_τ), or equivalently, f(C ∪ S_τ) ≤ f(S_τ). In this case, we do not even lose the (1 − 1/e) factor.

Case (b) is when the greedy algorithm drops the element i_{τ+1} ∈ C. Since i_{τ+1} was dropped, it must be the case that c(S_τ) ≤ B but c(S_τ + i_{τ+1}) = B′ > B. In this case the right-hand expression in (14) may have positive terms; since each term (f(S_t + i) − f(S_t)) is at most c_i · θ_{t+1} and c(C) ≤ B, we get for each t ≤ τ that

    f_Y(C ∪ S_t) ≤ f_Y(S_t) + B·θ_{t+1}.   (15)

We now prove a technical lemma; the proof consists of some algebraic violence, and the casual reader can skip to the remainder of the proof of the main theorem.

Lemma A.4. f_Y(S_τ + i_{τ+1}) ≥ (1 − 1/e) f_Y(S_τ ∪ C).

Proof.
For t ≤ τ, define B_t = Σ_{t′≤t} c_{i_{t′}} = c(S_t \ S_0), and let B_{τ+1} = B′ = B_τ + c_{i_{τ+1}}. Note that B_τ ≤ B < B′. For all integers t ≤ τ + 1 and j ∈ {B_{t−1} + 1, ..., B_t}, define ρ_j = θ_t. By the greedy rule, the ρ's are non-increasing. Hence

    f_Y(S_τ + i_{τ+1}) = Σ_{t ≤ τ+1} c_{i_t} θ_t = Σ_{j=1}^{B′} ρ_j.   (16)

In fact, for all t ≤ τ, it holds that

    f_Y(S_t) = Σ_{t′ ≤ t} c_{i_{t′}} θ_{t′} = Σ_{j=1}^{B_t} ρ_j.   (17)

Also,

    min_{s ∈ {1,...,B′}} { Σ_{j=1}^{s−1} ρ_j + B·ρ_s } = min_{t ∈ {1,...,τ}} { Σ_{j=1}^{B_t} ρ_j + B·ρ_{B_t+1} } = min_{t ∈ {1,...,τ}} { f_Y(S_t) + B·θ_{t+1} }.

Hence, from the above calculations,

    f_Y(S_τ + i_{τ+1}) / f_Y(S_τ ∪ C) ≥ ( Σ_{j=1}^{B′} ρ_j ) / min_{s ∈ {1,...,B′}} { Σ_{j=1}^{s−1} ρ_j + B·ρ_s } ≥ (1 − e^{−B′/B}) ≥ (1 − 1/e),

where the first inequality uses (16) together with (15), which shows the denominator is at least f_Y(C ∪ S_τ), and the remaining inequalities follow from [Svi04, eq. 2].
Now for the final calculation:

    f(S_τ) = f(Y) + f_Y(S_τ)
           = f(Y) + f_Y(S_τ + i_{τ+1}) − ( f_Y(S_τ + i_{τ+1}) − f_Y(S_τ) )
           = f(Y) + f_Y(S_τ + i_{τ+1}) − ( f(S_τ + i_{τ+1}) − f(S_τ) )
           ≥ f(Y) + (1 − 1/e) f_Y(S_τ ∪ C) − f(Y)/3       (using Lemma A.4 and (12))
           ≥ (1 − 1/e) f(S_τ ∪ C).                        (by the definition of f_Y(·) and e < 3)

Hence this set S_τ will be in the family of sets output, and will satisfy the claim of the theorem.
More informationStochastic Submodular Cover with Limited Adaptivity
Stochastic Submodular Cover with Limited Adaptivity Arpit Agarwal Sepehr Assadi Sanjeev Khanna Abstract In the submodular cover problem, we are given a non-negative monotone submodular function f over
More informationSubmodular and Linear Maximization with Knapsack Constraints. Ariel Kulik
Submodular and Linear Maximization with Knapsack Constraints Ariel Kulik Submodular and Linear Maximization with Knapsack Constraints Research Thesis Submitted in partial fulfillment of the requirements
More informationA necessary and sufficient condition for the existence of a spanning tree with specified vertices having large degrees
A necessary and sufficient condition for the existence of a spanning tree with specified vertices having large degrees Yoshimi Egawa Department of Mathematical Information Science, Tokyo University of
More informationMaximizing a Submodular Function with Viability Constraints
Maximizing a Submodular Function with Viability Constraints Wolfgang Dvořák, Monika Henzinger, and David P. Williamson 2 Universität Wien, Fakultät für Informatik, Währingerstraße 29, A-090 Vienna, Austria
More informationThe Power of Local Search: Maximum Coverage over a Matroid
The Power of Local Search: Maximum Coverage over a Matroid Yuval Filmus 1,2 and Justin Ward 1 1 Department of Computer Science, University of Toronto {yuvalf,jward}@cs.toronto.edu 2 Supported by NSERC
More informationThe Power of Local Search: Maximum Coverage over a Matroid
The Power of Local Search: Maximum Coverage over a Matroid Yuval Filmus,2 and Justin Ward Department of Computer Science, University of Toronto {yuvalf,jward}@cs.toronto.edu 2 Supported by NSERC Abstract
More informationCommunication Complexity of Combinatorial Auctions with Submodular Valuations
Communication Complexity of Combinatorial Auctions with Submodular Valuations Shahar Dobzinski Weizmann Institute of Science Rehovot, Israel dobzin@gmail.com Jan Vondrák IBM Almaden Research Center San
More informationMore Approximation Algorithms
CS 473: Algorithms, Spring 2018 More Approximation Algorithms Lecture 25 April 26, 2018 Most slides are courtesy Prof. Chekuri Ruta (UIUC) CS473 1 Spring 2018 1 / 28 Formal definition of approximation
More informationThe Power of Randomization: Distributed Submodular Maximization on Massive Datasets
The Power of Randomization: Distributed Submodular Maximization on Massive Datasets Rafael Barbosa Department of Computer Science and DIMAP, University of Warwick Alina Ene Department of Computer Science
More information1 Matroid intersection
CS 369P: Polyhedral techniques in combinatorial optimization Instructor: Jan Vondrák Lecture date: October 21st, 2010 Scribe: Bernd Bandemer 1 Matroid intersection Given two matroids M 1 = (E, I 1 ) and
More informationKnapsack Cover Subject to a Matroid Constraint
Knapsack Cover Subject to a Matroid Constraint Venkatesan T. Chakaravarthy 1, Anamitra Roy Choudhury 2, Sivaramakrishnan R. Natarajan 3, and Sambuddha Roy 4 1 IBM Research, New Delhi, India, vechakra@in.ibm.com
More informationApproximation Algorithms for Maximum. Coverage and Max Cut with Given Sizes of. Parts? A. A. Ageev and M. I. Sviridenko
Approximation Algorithms for Maximum Coverage and Max Cut with Given Sizes of Parts? A. A. Ageev and M. I. Sviridenko Sobolev Institute of Mathematics pr. Koptyuga 4, 630090, Novosibirsk, Russia fageev,svirg@math.nsc.ru
More informationLecture 11 October 7, 2013
CS 4: Advanced Algorithms Fall 03 Prof. Jelani Nelson Lecture October 7, 03 Scribe: David Ding Overview In the last lecture we talked about set cover: Sets S,..., S m {,..., n}. S has cost c S. Goal: Cover
More informationDependent Randomized Rounding for Matroid Polytopes and Applications
Dependent Randomized Rounding for Matroid Polytopes and Applications Chandra Chekuri Jan Vondrák Rico Zenklusen November 4, 2009 Abstract Motivated by several applications, we consider the problem of randomly
More informationA Note on the Budgeted Maximization of Submodular Functions
A Note on the udgeted Maximization of Submodular Functions Andreas Krause June 2005 CMU-CALD-05-103 Carlos Guestrin School of Computer Science Carnegie Mellon University Pittsburgh, PA 15213 Abstract Many
More informationSubmodular Functions Properties Algorithms Machine Learning
Submodular Functions Properties Algorithms Machine Learning Rémi Gilleron Inria Lille - Nord Europe & LIFL & Univ Lille Jan. 12 revised Aug. 14 Rémi Gilleron (Mostrare) Submodular Functions Jan. 12 revised
More informationThis means that we can assume each list ) is
This means that we can assume each list ) is of the form ),, ( )with < and Since the sizes of the items are integers, there are at most +1pairs in each list Furthermore, if we let = be the maximum possible
More informationDisjoint Bases in a Polymatroid
Disjoint Bases in a Polymatroid Gruia Călinescu Chandra Chekuri Jan Vondrák May 26, 2008 Abstract Let f : 2 N Z + be a polymatroid (an integer-valued non-decreasing submodular set function with f( ) =
More informationSubmodular function optimization: A brief tutorial (Part I)
Submodular function optimization: A brief tutorial (Part I) Donglei Du (ddu@unb.ca) Faculty of Business Administration, University of New Brunswick, NB Canada Fredericton E3B 9Y2 Workshop on Combinatorial
More informationPartitions and Covers
University of California, Los Angeles CS 289A Communication Complexity Instructor: Alexander Sherstov Scribe: Dong Wang Date: January 2, 2012 LECTURE 4 Partitions and Covers In previous lectures, we saw
More informationarxiv: v1 [cs.ds] 15 Jul 2018
Online Submodular Maximization: Beating 1/2 Made Simple Niv Buchbinder Moran Feldman Mohit Garg July 17, 2018 arxiv:1807.05529v1 [cs.ds] 15 Jul 2018 Abstract The Submodular Welfare Maximization problem
More informationApproximating maximum satisfiable subsystems of linear equations of bounded width
Approximating maximum satisfiable subsystems of linear equations of bounded width Zeev Nutov The Open University of Israel Daniel Reichman The Open University of Israel Abstract We consider the problem
More informationApproximation Algorithms for Asymmetric TSP by Decomposing Directed Regular Multigraphs
Approximation Algorithms for Asymmetric TSP by Decomposing Directed Regular Multigraphs Haim Kaplan Tel-Aviv University, Israel haimk@post.tau.ac.il Nira Shafrir Tel-Aviv University, Israel shafrirn@post.tau.ac.il
More informationMulti-level Facility Location as the Maximization of a Submodular Set Function
Multi-level Facility Location as the Maximization of a Submodular Set Function Camilo Ortiz-Astorquiza a, Ivan Contreras a, Gilbert Laporte b a Concordia University and Interuniversity Research Centre
More informationOptimal Approximation for the Submodular Welfare Problem in the Value Oracle Model
Optimal Approximation for the Submodular Welfare Problem in the Value Oracle Model Jan Vondrák Department of Mathematics Princeton University Princeton, NJ 08544, USA jvondrak@gmail.com ABSTRACT In the
More informationCS 374: Algorithms & Models of Computation, Spring 2017 Greedy Algorithms Lecture 19 April 4, 2017 Chandra Chekuri (UIUC) CS374 1 Spring / 1
CS 374: Algorithms & Models of Computation, Spring 2017 Greedy Algorithms Lecture 19 April 4, 2017 Chandra Chekuri (UIUC) CS374 1 Spring 2017 1 / 1 Part I Greedy Algorithms: Tools and Techniques Chandra
More informationSanta Claus Schedules Jobs on Unrelated Machines
Santa Claus Schedules Jobs on Unrelated Machines Ola Svensson (osven@kth.se) Royal Institute of Technology - KTH Stockholm, Sweden March 22, 2011 arxiv:1011.1168v2 [cs.ds] 21 Mar 2011 Abstract One of the
More informationFast algorithms for even/odd minimum cuts and generalizations
Fast algorithms for even/odd minimum cuts and generalizations András A. Benczúr 1 and Ottilia Fülöp 2 {benczur,otti}@cs.elte.hu 1 Computer and Automation Institute, Hungarian Academy of Sciences, and Department
More informationApproximation Algorithms for Online Weighted Rank Function Maximization under Matroid Constraints
Approximation Algorithms for Online Weighted Rank Function Maximization under Matroid Constraints Niv Buchbinder 1, Joseph (Seffi) Naor 2, R. Ravi 3, and Mohit Singh 4 1 Computer Science Dept., Open University
More informationc 2014 Society for Industrial and Applied Mathematics
SIAM J. COMPUT. Vol. 43, No. 2, pp. 514 542 c 2014 Society for Industrial and Applied Mathematics MONOTONE SUBMODULAR MAXIMIZATION OVER A MATROID VIA NON-OBLIVIOUS LOCAL SEARCH YUVAL FILMUS AND JUSTIN
More informationCS599: Convex and Combinatorial Optimization Fall 2013 Lecture 24: Introduction to Submodular Functions. Instructor: Shaddin Dughmi
CS599: Convex and Combinatorial Optimization Fall 2013 Lecture 24: Introduction to Submodular Functions Instructor: Shaddin Dughmi Announcements Introduction We saw how matroids form a class of feasible
More informationA Multi-Exchange Local Search Algorithm for the Capacitated Facility Location Problem
A Multi-Exchange Local Search Algorithm for the Capacitated Facility Location Problem Jiawei Zhang Bo Chen Yinyu Ye October 19, 2003 Abstract We present a multi-exchange local search algorithm for approximating
More informationOn Maximizing Welfare when Utility Functions are Subadditive
On Maximizing Welfare when Utility Functions are Subadditive Uriel Feige October 8, 2007 Abstract We consider the problem of maximizing welfare when allocating m items to n players with subadditive utility
More information3.7 Cutting plane methods
3.7 Cutting plane methods Generic ILP problem min{ c t x : x X = {x Z n + : Ax b} } with m n matrix A and n 1 vector b of rationals. According to Meyer s theorem: There exists an ideal formulation: conv(x
More informationMatroid Representation of Clique Complexes
Matroid Representation of Clique Complexes Kenji Kashiwabara 1, Yoshio Okamoto 2, and Takeaki Uno 3 1 Department of Systems Science, Graduate School of Arts and Sciences, The University of Tokyo, 3 8 1,
More informationLecture 10 February 4, 2013
UBC CPSC 536N: Sparse Approximations Winter 2013 Prof Nick Harvey Lecture 10 February 4, 2013 Scribe: Alexandre Fréchette This lecture is about spanning trees and their polyhedral representation Throughout
More informationStreaming Algorithms for Maximizing Monotone Submodular Functions under a Knapsack Constraint
Streaming Algorithms for Maximizing Monotone Submodular Functions under a Knapsack Constraint Chien-Chung Huang 1, Naonori Kakimura 2, and Yuichi Yoshida 3 1 CNRS, École Normale Supérieure, Paris, France
More informationNETS 412: Algorithmic Game Theory March 28 and 30, Lecture Approximation in Mechanism Design. X(v) = arg max v i (a)
NETS 412: Algorithmic Game Theory March 28 and 30, 2017 Lecture 16+17 Lecturer: Aaron Roth Scribe: Aaron Roth Approximation in Mechanism Design In the last lecture, we asked how far we can go beyond the
More informationReachability-based matroid-restricted packing of arborescences
Egerváry Research Group on Combinatorial Optimization Technical reports TR-2016-19. Published by the Egerváry Research Group, Pázmány P. sétány 1/C, H 1117, Budapest, Hungary. Web site: www.cs.elte.hu/egres.
More informationCombinatorial Auctions Do Need Modest Interaction
Combinatorial Auctions Do Need Modest Interaction Sepehr Assadi University of Pennsylvania Motivation A fundamental question: How to determine efficient allocation of resources between individuals? Motivation
More informationThe Steiner k-cut Problem
The Steiner k-cut Problem Chandra Chekuri Sudipto Guha Joseph (Seffi) Naor September 23, 2005 Abstract We consider the Steiner k-cut problem which generalizes both the k-cut problem and the multiway cut
More informationDiscrete Newton s Algorithm for Parametric Submodular Function Minimization
Discrete Newton s Algorithm for Parametric Submodular Function Minimization Michel X. Goemans, Swati Gupta, and Patrick Jaillet Massachusetts Institute of Technology, USA, goemans@math.mit.edu, swatig@mit.edu,
More informationAn Improved Approximation Algorithm for Requirement Cut
An Improved Approximation Algorithm for Requirement Cut Anupam Gupta Viswanath Nagarajan R. Ravi Abstract This note presents improved approximation guarantees for the requirement cut problem: given an
More informationLearning symmetric non-monotone submodular functions
Learning symmetric non-monotone submodular functions Maria-Florina Balcan Georgia Institute of Technology ninamf@cc.gatech.edu Nicholas J. A. Harvey University of British Columbia nickhar@cs.ubc.ca Satoru
More informationMaximization Problems with Submodular Objective Functions
Maximization Problems with Submodular Objective Functions Moran Feldman Maximization Problems with Submodular Objective Functions Research Thesis In Partial Fulfillment of the Requirements for the Degree
More informationIsomorphisms between pattern classes
Journal of Combinatorics olume 0, Number 0, 1 8, 0000 Isomorphisms between pattern classes M. H. Albert, M. D. Atkinson and Anders Claesson Isomorphisms φ : A B between pattern classes are considered.
More informationAlgorithms and Adaptivity Gaps for Stochastic Probing
Algorithms and Adaptivity Gaps for Stochastic Probing Anupam Gupta Viswanath Nagarajan Sahil Singla Abstract A stochastic probing problem consists of a set of elements whose values are independent random
More informationA Min-Max Theorem for k-submodular Functions and Extreme Points of the Associated Polyhedra. Satoru FUJISHIGE and Shin-ichi TANIGAWA.
RIMS-1787 A Min-Max Theorem for k-submodular Functions and Extreme Points of the Associated Polyhedra By Satoru FUJISHIGE and Shin-ichi TANIGAWA September 2013 RESEARCH INSTITUTE FOR MATHEMATICAL SCIENCES
More informationMinimization of Symmetric Submodular Functions under Hereditary Constraints
Minimization of Symmetric Submodular Functions under Hereditary Constraints J.A. Soto (joint work with M. Goemans) DIM, Univ. de Chile April 4th, 2012 1 of 21 Outline Background Minimal Minimizers and
More informationStructured Robust Submodular Maximization: Offline and Online Algorithms
Structured Robust Submodular Maximization: Offline and Online Algorithms Nima Anari Nika Haghtalab Joseph (Seffi) Naor Sebastian Pokutta Mohit Singh Alfredo Torrico Abstract Constrained submodular function
More informationApproximation algorithms for cycle packing problems
Approximation algorithms for cycle packing problems Michael Krivelevich Zeev Nutov Raphael Yuster Abstract The cycle packing number ν c (G) of a graph G is the maximum number of pairwise edgedisjoint cycles
More informationApproximating Submodular Functions. Nick Harvey University of British Columbia
Approximating Submodular Functions Nick Harvey University of British Columbia Approximating Submodular Functions Part 1 Nick Harvey University of British Columbia Department of Computer Science July 11th,
More informationApplications of Submodular Functions in Speech and NLP
Applications of Submodular Functions in Speech and NLP Jeff Bilmes Department of Electrical Engineering University of Washington, Seattle http://ssli.ee.washington.edu/~bilmes June 27, 2011 J. Bilmes Applications
More informationEECS 495: Combinatorial Optimization Lecture Manolis, Nima Mechanism Design with Rounding
EECS 495: Combinatorial Optimization Lecture Manolis, Nima Mechanism Design with Rounding Motivation Make a social choice that (approximately) maximizes the social welfare subject to the economic constraints
More informationAll-or-Nothing Generalized Assignment with Application to Scheduling Advertising Campaigns
All-or-Nothing Generalized Assignment with Application to Scheduling Advertising Campaigns Ron Adany 1, Moran Feldman 2, Elad Haramaty 2, Rohit Khandekar 3, Baruch Schieber 4, Roy Schwartz 2, Hadas Shachnai
More informationLearning with Submodular Functions: A Convex Optimization Perspective
Foundations and Trends R in Machine Learning Vol. 6, No. 2-3 (2013) 145 373 c 2013 F. Bach DOI: 10.1561/2200000039 Learning with Submodular Functions: A Convex Optimization Perspective Francis Bach INRIA
More informationCell-Probe Lower Bounds for Prefix Sums and Matching Brackets
Cell-Probe Lower Bounds for Prefix Sums and Matching Brackets Emanuele Viola July 6, 2009 Abstract We prove that to store strings x {0, 1} n so that each prefix sum a.k.a. rank query Sumi := k i x k can
More informationA Faster Strongly Polynomial Time Algorithm for Submodular Function Minimization
A Faster Strongly Polynomial Time Algorithm for Submodular Function Minimization James B. Orlin Sloan School of Management, MIT Cambridge, MA 02139 jorlin@mit.edu Abstract. We consider the problem of minimizing
More informationFacets for Node-Capacitated Multicut Polytopes from Path-Block Cycles with Two Common Nodes
Facets for Node-Capacitated Multicut Polytopes from Path-Block Cycles with Two Common Nodes Michael M. Sørensen July 2016 Abstract Path-block-cycle inequalities are valid, and sometimes facet-defining,
More informationarxiv: v1 [cs.ds] 22 Nov 2018
Approximate Multi-Matroid Intersection via Iterative Refinement arxiv:1811.09027v1 [cs.ds] 22 Nov 2018 André Linhares,1, Neil Olver,2, Chaitanya Swamy,1, and Rico Zenklusen,3 1 Dept. of Combinatorics and
More informationMatroid Optimisation Problems with Nested Non-linear Monomials in the Objective Function
atroid Optimisation Problems with Nested Non-linear onomials in the Objective Function Anja Fischer Frank Fischer S. Thomas ccormick 14th arch 2016 Abstract Recently, Buchheim and Klein [4] suggested to
More informationLecture 15 (Oct 6): LP Duality
CMPUT 675: Approximation Algorithms Fall 2014 Lecturer: Zachary Friggstad Lecture 15 (Oct 6): LP Duality Scribe: Zachary Friggstad 15.1 Introduction by Example Given a linear program and a feasible solution
More informationSubmodular Maximization by Simulated Annealing
Submodular Maximization by Simulated Annealing Shayan Oveis Gharan Jan Vondrák Abstract We consider the problem of maximizing a nonnegative (possibly non-monotone) submodular set function with or without
More informationProof methods and greedy algorithms
Proof methods and greedy algorithms Magnus Lie Hetland Lecture notes, May 5th 2008 1 Introduction This lecture in some ways covers two separate topics: (1) how to prove algorithms correct, in general,
More informationApproximation Algorithms for the Incremental Knapsack Problem via Disjunctive Programming
Approximation Algorithms for the Incremental Knapsack Problem via Disjunctive Programming Daniel Bienstock, Jay Sethuraman, Chun Ye Department of Industrial Engineering and Operations Research Columbia
More informationOn the correlation of parity and small-depth circuits
Electronic Colloquium on Computational Complexity, Report No. 137 (2012) On the correlation of parity and small-depth circuits Johan Håstad KTH - Royal Institute of Technology November 1, 2012 Abstract
More informationStochastic Submodular Cover with Limited Adaptivity
Stochastic Submodular Cover with Limited Adaptivity Arpit Agarwal Sepehr Assadi Sanjeev Khanna University of Pennsylvania {aarpit,sassadi,sanjeev}@cis.upenn.edu arxiv:1810.13351v1 [cs.ds] 31 Oct 2018 Abstract
More informationA General Model for Matroids and the Greedy Algorithm
RIMS Preprint No. 1590 A General Model for Matroids and the Greedy Algorithm Ulrich Faigle Mathematisches Institut/ZAIK Universität zu Köln D-50931 Köln, Germany Satoru Fujishige Research Institute for
More informationMath 541 Fall 2008 Connectivity Transition from Math 453/503 to Math 541 Ross E. Staffeldt-August 2008
Math 541 Fall 2008 Connectivity Transition from Math 453/503 to Math 541 Ross E. Staffeldt-August 2008 Closed sets We have been operating at a fundamental level at which a topological space is a set together
More informationMaximizing Bisubmodular and k-submodular Functions
Maximizing Bisubmodular and k-submodular Functions Justin Ward Department of Computer Science, University of Warwick, UK J.D.Ward@dcs.warwick.ac.uk Stanislav Živný Department of Computer Science, University
More informationAn algorithm to increase the node-connectivity of a digraph by one
Discrete Optimization 5 (2008) 677 684 Contents lists available at ScienceDirect Discrete Optimization journal homepage: www.elsevier.com/locate/disopt An algorithm to increase the node-connectivity of
More informationDuality of LPs and Applications
Lecture 6 Duality of LPs and Applications Last lecture we introduced duality of linear programs. We saw how to form duals, and proved both the weak and strong duality theorems. In this lecture we will
More informationThe decomposability of simple orthogonal arrays on 3 symbols having t + 1 rows and strength t
The decomposability of simple orthogonal arrays on 3 symbols having t + 1 rows and strength t Wiebke S. Diestelkamp Department of Mathematics University of Dayton Dayton, OH 45469-2316 USA wiebke@udayton.edu
More informationMATHEMATICAL ENGINEERING TECHNICAL REPORTS. On the Pipage Rounding Algorithm for Submodular Function Maximization A View from Discrete Convex Analysis
MATHEMATICAL ENGINEERING TECHNICAL REPORTS On the Pipage Rounding Algorithm for Submodular Function Maximization A View from Discrete Convex Analysis Akiyoshi SHIOURA (Communicated by Kazuo MUROTA) METR
More information1 The Knapsack Problem
Comp 260: Advanced Algorithms Prof. Lenore Cowen Tufts University, Spring 2018 Scribe: Tom Magerlein 1 Lecture 4: The Knapsack Problem 1 The Knapsack Problem Suppose we are trying to burgle someone s house.
More informationCS264: Beyond Worst-Case Analysis Lecture #15: Smoothed Complexity and Pseudopolynomial-Time Algorithms
CS264: Beyond Worst-Case Analysis Lecture #15: Smoothed Complexity and Pseudopolynomial-Time Algorithms Tim Roughgarden November 5, 2014 1 Preamble Previous lectures on smoothed analysis sought a better
More information